\section{Introduction}\label{mot}
In this paper, we propose an extension of Principal Component Analysis (PCA) and provide its rigorous justification. In comparison with known techniques, the proposed extension of PCA improves the associated accuracy and diminishes the numerical load. The innovation of the proposed methodology, its differences from known results and its advantages are specified in Sections \ref{wn9w}, \ref{dif77}, \ref{x78an} and \ref{x78kl}.
PCA is a technique for finding so-called principal components (PCs) of the data of interest represented by a large random vector, i.e. components of a smaller vector which preserve the principal features of the data. PCA also provides a reconstruction of the PCs to the original data vector with the least possible error among all linear transforms. According to Jolliffe \cite{Jolliffe2002}, ``Principal component analysis is probably the oldest and best known of the techniques of multivariate analysis''.
This is a topic of intensive research with an enormous number of related references. For instance, a Google search for `principal component analysis' returns about $8,160,000$ results. The references most related to this paper (from our point of view) are \cite{Brillinger2001,681430,905856,Tomasz1293,DuongNguyen2014288,tor5277,Scharf1991113}. PCA is used in a number of application areas (in addition to the previous references see, for example, \cite{diam96,du2006,Saghri2010,gao111}). Therefore, related techniques with better performance are of vital importance.
In PCA as presented in \cite{Jolliffe2002,Brillinger2001}, under the strong restriction that the covariance matrix is invertible, the procedures for finding the PCs and their reconstruction are determined by a matrix of rank less than or equal to $k$, where $k$ is the number of required PCs.
For a fixed number of PCs, the PCA accuracy of their reconstruction to the original data cannot be improved. In other words, PCA has only {\em one degree of freedom} to control the accuracy, namely the number of PCs. Moreover, PCA in the form obtained, in particular, in \cite{Jolliffe2002,Brillinger2001}, is not applicable if the associated covariance matrix is singular. Therefore, in \cite{tor843}, generalizations of PCA, called the generalized Brillinger transforms (GBT1 and GBT2), have been developed. First, the GBT1 and GBT2 are applicable to the case of singular covariance matrices. Second, the GBT2 allows us to improve the errors associated with PCA and the GBT1. A further generalization, developed in \cite{924074}, is the generic Karhunen-Lo\`{e}ve transform (GKLT).
The GKLT requires additional knowledge of the covariance matrices $E_{x y^2}$ and $E_{y^2 y^2}$, where $\x$ and $\y$ are stochastic vectors and $\y^2$ is the Hadamard square of $\y$ (more details are provided in Section \ref{gbt1} below). Such knowledge may be difficult to obtain. Another difficulty associated with the GBT2 and GKLT is their numerical load, which is larger than that of PCA. Further, it follows from \cite{924074} that the GKLT accuracy is better than that of PCA in \cite{Jolliffe2002,Brillinger2001,681430} subject to a condition which is quite difficult to verify (see Corollary 3 in \cite{924074}). Moreover, for the GBT2 in \cite{tor843}, such an analysis has not been provided.
We are motivated to develop an extension of PCA which overcomes the above drawbacks.
We provide both a detailed theoretical analysis of the proposed extension of PCA and numerical examples that illustrate the theoretical results.
\section{Review of PCA and its known generalizations GBT1, GBT2, GKLT}\label{gbt1}
First, we introduce some notation which is used below.
Let $\x=[\x_1,\ldots, \x_m]^T\in L^2(\Omega,\mathbb{R}^m)$ and $\y=[\y_1,\ldots, \y_n]^T\in L^2(\Omega,\mathbb{R}^n)$ be random vectors\footnote{Here, $\Omega=\{\omega\}$ is the set of outcomes, $\Sigma$ is a $\sigma$-field of measurable subsets of $\Omega$, $\mu:\Sigma\rightarrow[0,1]$ is an associated probability measure on $\Sigma$ with $\mu(\Omega)=1$, and $(\Omega,\Sigma,\mu)$ is the probability space.}. Vectors $\x$ and $\y$ are interpreted as reference data and observable data, respectively, i.e. $\y$ is a noisy version of $\x$. Dimensions $m$ and $n$ are assumed to be large.
Suppose we wish to denoise $\y$ and reduce it to a `shorter' vector $\uu\in L^2(\Omega,\mathbb{R}^k)$ where $k \leq \min \{m, n\}$, and then reconstruct $\uu$ to vector $\widetilde{\x}$ such that $\widetilde{\x}$ is as close to $\x$ as possible. Entries of vector $\uu$ are called the principal components (abbreviated above as PCs).
Let us write
$
\displaystyle \|{\bf x}\|^2_{\Omega} =\int_\Omega \|{\bf x}(\omega)\|_2^2 d\mu(\omega) < \infty,
$
where $\|{\bf x}(\omega)\|_2$ is the Euclidean norm of ${\bf x}(\omega)\in\mathbb{R}^m$.
Throughout the paper, we assume that means $E[\x]$ and $E[\y]$ are known. Therefore, without loss of generality, we will assume henceforth that $\x$ and $\y$ have zero means. Then the covariance matrix formed from $\x$ and $\y$ is given by $E_{xy}=E[\x\y^T]=\{e_{ij}\}_{i,j=1}^{m,n}\in\rt^{m\times n}$ where $\displaystyle e_{ij} = \int_\Omega \x_i(\omega) \y_j(\omega) d\mu(\omega)$.
Further, the singular value decomposition (SVD) of a matrix $M\in \rt^{m\times n}$ is given by $M=U_M\Sigma_M V_M^T$, where $U_M=[u_1 \;u_2\;\ldots u_m]\in \rt^{m\times m}$ and $V_M=[v_1 \;v_2\;\ldots v_n]\in \rt^{n\times n}$ are orthogonal matrices, and $\Sigma_M=\diag(\sigma_1(M),$ $\ldots,$ $\sigma_{\min(m,n)}(M))\in\rt^{m\times n}$
is a generalized diagonal matrix with the singular values $\sigma_1(M)\ge \sigma_2(M)\ge\ldots\ge 0$ on the main diagonal.
Further, $M^{\dag}$ denotes the Moore-Penrose pseudo-inverse matrix for matrix $M$.
The generalizations of PCA mentioned in Section \ref{mot}, GBT1 and GBT2 \cite{tor843}, are represented as follows. Let us first consider the GBT2 since the GBT1 is a particular case of the GBT2. The GBT2 is given by
\begin{eqnarray}\label{bnm11}
B_2(\y)=R_1 [P_1\y + P_2\vv],
\end{eqnarray}
where $\vv\in L^2(\Omega,\mathbb{R}^n)$ is an `auxiliary' random vector used to further optimize the transform, and matrices $R_1\in\rt^{m\times k}$, $P_1\in\rt^{k\times n}$ and $P_2\in\rt^{k\times n}$ solve the problem\footnote{Note that in (\ref{bnm11}) and (\ref{byfx}), strictly speaking, $R_1$, $P_1$ and $P_2$ should be replaced with operators $\rrr_1: L^2(\Omega,\mathbb{R}^k) \rightarrow L^2(\Omega,\mathbb{R}^m)$, $\p_1: L^2(\Omega,\mathbb{R}^n) \rightarrow L^2(\Omega,\mathbb{R}^k)$ and $\p_2: L^2(\Omega,\mathbb{R}^n) \rightarrow L^2(\Omega,\mathbb{R}^k)$, respectively. This is because each matrix, say, $R_1\in\rt^{m\times k}$, defines a bounded linear transformation $\rrr_1: L^2(\Omega,\mathbb{R}^k) \rightarrow L^2(\Omega,\mathbb{R}^m)$. Nevertheless, it is customary to write $R_1$ rather than $\rrr_1$, since $[\rrr_1(\uu)](\omega) = R_1[\uu(\omega)]$, for each $\omega\in \Omega$. We keep this type of notation throughout the paper.}
\begin{equation}\label{byfx}
\min_{R_1,\hspace*{1mm} [P_1 P_2]} \displaystyle \|\x - R_1 [P_1\y + P_2\vv]\|_\Omega^2
\end{equation}
so that, for $\qq=[\y^T \vv^T]^T\in L^2(\Omega,\mathbb{R}^{2n})$ and $G_q = E_{x q}E_{qq}^{\dag}E_{q x}$,
\begin{eqnarray}\label{ac11}
R_1= U_{G_q,k},\hspace*{4mm}
[P_1, P_2] = \hspace*{-1mm}U_{G_q,k}^T E_{x q}E_{qq}^{\dag},
\end{eqnarray}
where $U_{G_q,k}$ is formed by the first $k$ columns of $U_{G_q}$, and $P_1$ and $P_2$ are the corresponding blocks of matrix $U_{G_q,k}^T E_{x q}E_{qq}^{\dag}$.
The principal components are then given by $\uu = [P_1, P_2][\y^T \vv^T]^T$.
The GBT1 follows from the GBT2 if $R_1 P_2\vv=\mathbf 0$, where $\mathbf 0$ is the zero vector. That is, the GBT2 has one matrix more (i.e., one degree of freedom more) than the GBT1. This allows the GBT2 to improve on the performance of the GBT1 (see \cite{tor843} for more detail).
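As an illustration, the GBT2 in (\ref{bnm11})-(\ref{ac11}) can be sketched numerically. The following is a minimal NumPy sketch of our own (all covariance matrices are estimated from samples; the setup and variable names are purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 6, 5, 3, 4000
X = rng.standard_normal((m, p))                 # samples of x (zero mean)
Y = rng.standard_normal((n, m)) @ X + 0.3 * rng.standard_normal((n, p))
V = rng.standard_normal((n, p))                 # auxiliary vector v
Q = np.vstack([Y, V])                           # q = [y^T v^T]^T
cov = lambda A, B: (A @ B.T) / p                # sample covariance

Exq, Eqq = cov(X, Q), cov(Q, Q)
Gq = Exq @ np.linalg.pinv(Eqq) @ Exq.T          # G_q = E_xq E_qq^+ E_qx

_, vecs = np.linalg.eigh(Gq)                    # G_q is symmetric PSD
R1 = vecs[:, ::-1][:, :k]                       # R_1 = U_{G_q,k}
P = R1.T @ Exq @ np.linalg.pinv(Eqq)            # [P_1, P_2]
P1, P2 = P[:, :n], P[:, n:]

u = P1 @ Y + P2 @ V                             # k principal components
x_hat = R1 @ u                                  # reconstruction B_2(y)
\end{verbatim}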
The GKLT \cite{924074} is given by
\begin{eqnarray}\label{qmiw8}
\kk (\y)=K_1\y + K_2\y^2,
\end{eqnarray}
where matrices $K_1\in\rt^{m\times n}$ and $K_2\in\rt^{m\times n}$ solve the problem
\begin{equation}\label{by29}
\min_{ \substack{[K_1 K_2]\\ \rank [K_1 K_2]\leq k}} \displaystyle \|\x - [K_1 \y + K_2\y^2]\|_\Omega^2,
\end{equation}
and $\y^2$ is given by the Hadamard square so that $\y^2(\omega) = [\y_1^2(\omega),\ldots,\y_n^2(\omega)]^T$, for all $\omega\in\Omega$.
PCA is a particular case of the GKLT if $K_2\y^2={\mathbf 0}$ and matrix $E_{yy}$ is non-singular.
The PCA, the Brillinger transform (BT), GBT1, GKLT and GBT2 follow from the solution of essentially the same optimization problem. The differences are that, first, the associated solutions are obtained under different assumptions and, second, the solutions result in transforms that have different computational schemes. More details can be found in \cite{tor843}.
Further, each of PCA and the GBT1 has $m\times n$ parameters to optimize, which are the entries of matrices $K_1$ and $R_1P_1$, respectively. Similar to PCA, the GBT1 has one degree of freedom to improve the associated accuracy: the number $k$ of PCs, i.e., the dimension of vector $\uu$. Thus, for fixed $k$, the GBT1 accuracy cannot be improved.
The GKLT \cite{924074} and GBT2 \cite{tor843} each have two degrees of freedom: $k$, and one matrix more than in PCA and the GBT1. That is, the GKLT and GBT2 have twice as many parameters to optimize compared to PCA and the GBT1. It is shown in \cite{tor843,924074} that this feature allows us to improve the accuracy associated with the GKLT and GBT2.
\section{Contribution and novelty}\label{wn9w}
We propose and justify an extension of PCA which\\
\hspace*{5mm} $\bullet$ always exists, i.e. is applicable to the case of singular data (this is because it is constructed in terms of pseudo-inverse matrices; see Section \ref{det}),\\
\hspace*{5mm} $\bullet$ has better associated accuracy than that of the GBT1, GBT2 and GKLT (Sections \ref{298an}, \ref{1n8an}, \ref{nbm198}), \\
\hspace*{5mm} $\bullet$ has more degrees of freedom to improve the associated accuracy than the PCA, GBT1, GBT2 and GKLT (Sections \ref{xvb8an} and \ref{speccases}),\\
\hspace*{5mm} $\bullet$ has a lower computational load than that of the GBT2 and GKLT; in fact, for large $m, n$, it is about 37\% of that of the GBT2 and 22\% of that of the GKLT (Section \ref{cm9vn}),\\
\hspace*{5mm} $\bullet$ does not require the usage of matrices $E_{x y^2}$ and $E_{y^2 y^2}$ (as required in \cite{924074}), which are difficult to determine.
Further, we show, in particular, that \\
\hspace*{5mm} $\bullet$ the condition for the GKLT \cite{924074} mentioned in Section \ref{gbt1} (under which the accuracy improvement is achieved) can be omitted (Section \ref{298an}).
In more detail, in the proposed PCA extension, the additional degrees of freedom are provided by the auxiliary random vectors $\w$ and $\h$, which are introduced below in Sections \ref{w0an} and \ref{xvb8an}, respectively. Vectors $\w$ and $\h$ are called the $\w$-injection and $\h$-injection. An improvement in the accuracy of the proposed transform follows from the increase in the number of parameters to optimize, which are represented by matrices $T_0$, $T_1$, a specific vector transformation $\f$ (Sections \ref{w0an} and \ref{det1}), and the $\w$-injection and $\h$-injection (Sections \ref{stat}, \ref{x78an} and \ref{speccases}).
\section{Structure of the proposed PCA extension}\label{w0an}
The above advantages are achieved due to the special structure of the proposed transform as follows.
Let $\w\in L^2(\Omega,\mathbb{R}^\ell)$ be a random vector and $\f: L^2(\Omega,\mathbb{R}^n)\times L^2(\Omega,\mathbb{R}^\ell) \rightarrow L^2(\Omega,\mathbb{R}^\ell)$ be a transformation of $\y$ and $\w$ into a random vector $\s\in L^2(\Omega,\mathbb{R}^\ell)$, i.e.
$$
\s = \f(\y,\w).
$$
Reasons for using vector $\w$ and transformation $\f$ are detailed in Sections \ref{stat}, \ref{x78an} and \ref{speccases} below.
We propose to determine the PCs and their reconstruction $\widetilde{\x}$ by the transform $\ttt $ given by
\begin{eqnarray}\label{sjk92}
\widetilde{\x}=\ttt (\y, \w) = T_0\y + T_1 \f(\y, \w),
\end{eqnarray}
where $T_0$ and $T_1$ are represented by $m\times n$ and $m\times \ell$ matrices, respectively, and
\begin{eqnarray}\label{xnm93}
\rank [T_0 \hspace*{1mm} T_1] \leq k,
\end{eqnarray}
where $k=\min\{m, n\}$. Here, three terms, $T_0$, $T_1$ and $\f$, are to be determined.
Therefore, transform $\ttt$ will be called the three-term $k$-rank transform.
A special version of the three-term $k$-rank transform is considered in Section \ref{xvb8an} below.
\section{Statement of the problems}\label{stat}
Below, we assume that $\x$, $\y$ and $\w$ are nonzero vectors.
{\em Problem 1}: Find matrices $T_0$ and $T_1$ that solve
\begin{eqnarray}\label{589mb}
\min_{[T_0 \hspace*{1mm} T_1]}\|\x - [ T_0\y + T_1 \f(\y, \w)]\|^2_\Omega
\end{eqnarray}
subject to constraint (\ref{xnm93}), and determine $\f$ that provides, for $\zz = [\y^T \s^T]^T$,
\begin{eqnarray}\label{akl7}
E_{zz} =\left[ \begin{array}{cc}
E_{yy} & \oo\\
\oo & E_{ss}
\end{array} \right],
\end{eqnarray}
where $\oo$ denotes the zero matrix.
The importance of the condition in (\ref{akl7}) is twofold. First, it facilitates the computation associated with the determination of $T_0$ and $T_1$. Second, the condition in (\ref{akl7}) is used in the solution of Problem 2 stated below.
The transform obtained from the solution of Problem 1 (see Section \ref{det1} that follows) is called the {\em optimal} three-term $k$-rank transform or the {\sf\em three-term PCA.}
{\em Problem 2}: Show that the error associated with the three-term PCA is less than that of the PCA, GBT1 (see Section \ref{298an}), GBT2 and GKLT (see Section \ref{xvb8an}). Further, show that the computational load associated with the three-term PCA is less than that of the GBT2 (see Section \ref{58b29}).
\section{Differences from known techniques}\label{dif77}
The proposed three-term PCA differs from the PCA in \cite{Jolliffe2002,Brillinger2001} in several respects. Unlike the PCA in \cite{Jolliffe2002,Brillinger2001}, the three-term PCA has the additional terms $T_1$, $\f$ and the $\w$-injection, which lead to an improvement in the associated accuracy of determining the PCs and the consecutive reconstruction of the PCs to the original vector. As distinct from the PCA in \cite{Jolliffe2002,Brillinger2001}, the three-term PCA is always applicable to singular data since it is determined in terms of pseudo-inverse matrices.
Differences of the three-term PCA from the GBT2 are threefold. First, the three-term PCA contains the transformation $\f$ aimed at facilitating computation. Second, in the three-term PCA, the procedure for determining the principal components and their reconstruction to an estimate of $\x$ is different from that in the GBT2. Indeed, the three-term PCA can be written as
\begin{eqnarray}\label{sj382}
\ttt (\y, \w) = R_1 P_1\y + R_2 P_2\s,
\end{eqnarray}
where $R_1, R_2$, $P_1$ and $P_2$ are obtained in the form different from those in the GBT2 (see Theorem \ref{389nm} below).
Third, in Section \ref{xvb8an} that follows, we show that a special transformation of vector $\s$ to a vector $\widetilde{\s}$ of greater dimensionality allows us to achieve better associated accuracy of the estimation of $\x$.
The differences from the GKLT in (\ref{qmiw8}) are similar and even stronger, since the GKLT contains vector $\y^2$ (rather than $\vv$ as in the GBT2), which cannot be changed.
The above differences imply the improvement in the performance of the three-term PCA. This issue is detailed in Sections \ref{x78an} and \ref{x78kl} that follow.
\section{Solution of Problem 1}\label{det}
\subsection{Preliminaries}\label{prel}
First, we recall some known results that will be used in the solution of Problems 1 and 2.
\begin{proposition} {\em \cite[Theorem 1.21, p. 44]{zhang2005schur}}\label{proposition5}
Let $M$ be a positive semi-definite matrix given in the block form
$
M=\left[
\begin{array}{cc}
A & B \\
B^T & C \\
\end{array}
\right],
$
where blocks $A$ and $C$ are square. Let $S=C-B^TA^\dagger B$ and
$$
N =\left[
\begin{array}{cc}
A^\dagger +A^\dagger BS^\dagger B^TA^\dagger & -A^\dagger BS^\dagger \\
-S^\dagger B^TA^\dagger & S^\dagger \\
\end{array}
\right].
$$
Then $N=M^\dagger$ if and only if $\mbox{rank}(M)=\mbox{rank}(A)+\mbox{rank}(C)$.
\end{proposition}
\begin{proposition}{\em \cite[p. 217]{Zhang2011}}\label{proposition6}
If $M$ is positive definite, then the condition $\mbox{rank} (M)=\mbox{rank} (A)+\mbox{rank} (C)$ of Proposition \ref{proposition5} is always true and $N=M^{-1}$.
\end{proposition}
\begin{proposition}{\em \cite[Lemma 4.5.11]{harville2008matrix}}\label{proposition10}
For any matrices $A$ and $B$,
\begin{eqnarray}\label{zmnb91}
\mbox{\rm rank}\left[
\begin{array}{cc}
A & \oo \\
\oo & B \\
\end{array}
\right]=\mbox{\rm rank}(A)+\mbox{\rm rank}(B).
\end{eqnarray}
\end{proposition}
\begin{proposition} {{\em (Weyl's inequality)}} {\em \cite[Corollary 4.3.15]{9780511810817}}\label{proposition8}
Let $A$ and $B$ be $m\times m$ symmetric matrices and let singular values $\sigma_i(A)$, $\sigma_i(B)$ and $\sigma_i(A+B)$, for $i=1,\ldots, m$, be arranged in decreasing order. Then, for $i=1,\ldots, m$,
\begin{eqnarray}\label{7hb91}
\sigma_i(A)+\sigma_{m}(B)\leq \sigma_i(A+B)\leq \sigma_i(A)+\sigma_{1}(B).
\end{eqnarray}
\end{proposition}
\subsection{Determination of three-term PCA}\label{det1}
Let $$\displaystyle P_{M,L}=\hspace*{-4mm}\sum_{i=1}^{\rank (M)}u_i u_i^T\in \rt^{m\times m}, \hspace*{3mm} \displaystyle P_{M,R}=\hspace*{-4mm} \sum_{j=1}^{\rank (M)}v_j v_j^T\in \rt^{n\times n}$$
be the orthogonal projections on the range of matrices $M$ and $M^T$ respectively, and let
\begin{eqnarray}\label{mkumk1}
[M]_k= \sum_{i=1}^{k}\sigma_i(M)u_i v_i^T\in \rt^{m\times n}
\end{eqnarray}
for $k=1,\ldots,\rank (M)$, be the truncated SVD of $M$.
For $k>\rank (M)$, we define $[M]_k=M\;(=[M]_{\rank (M)})$. For $1\le k<\rank (M)$, the matrix $[M]_k$ is uniquely defined
if and only if $\sigma_k(M)>\sigma_{k+1}(M)$.
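For reference, a minimal NumPy sketch (ours, purely illustrative) of computing $[M]_k$:
\begin{verbatim}
import numpy as np

def truncated_svd(M, k):
    # [M]_k = sum_{i=1}^k sigma_i(M) u_i v_i^T (best rank-k approximation)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(k, int(np.linalg.matrix_rank(M)))   # [M]_k = M if k >= rank(M)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
\end{verbatim}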
Further, $M^{1/2}$ denotes a matrix square root for matrix $M$.
For the covariance matrix $E_{x x}$, we denote $E_{x x}^{1/2 \dag}:=(E_{ x x}^{1/2})^ {\dag}$. Matrix $E_{ x x}^{1/2 \dag}$ is unique since $E_{x x}$ is positive semidefinite.
The Frobenius matrix norm is denoted by $\|\cdot\|$.
Let us denote $G_{xy}=E_{xy}E_{yy}^\dag$ and $G_z = E_{x z}E_{zz}^{\dag}E_{z x}$. Similar to $U_{G_q,k}$ in (\ref{ac11}), $U_{G_z,k}$ denotes the matrix formed by the first $k$ columns of $U_{G_z}$. Recall that, as before in (\ref{akl7}), $\zz=[\y^T \s^T]^T$.
\begin{theorem}\label{389nm}
Let transformation $\f$ in (\ref{sjk92}) be determined by
\begin{eqnarray}\label{smi91}
\f(\y, \w) = \s = \w - G_{wy} \y.
\end{eqnarray}
Then (\ref{akl7}) is true, and $T_0$ and $T_1$ that solve the problem in (\ref{589mb}), (\ref{xnm93}) are such that
\begin{eqnarray}\label{z,.23}
[T_0\hspace*{1.5mm} T_1] = U_{G_z,k}U_{G_z,k}^T [G_{xy}\hspace*{1.5mm} G_{xs}](I + N)
\end{eqnarray}
where
$N= M(I - P_{E_{zz}^{1/2},L})$ and matrix $M$ is arbitrary\footnote{In other words, the solution is not unique.}. The unique minimum norm solution of problem (\ref{589mb}), (\ref{xnm93}) is given by
\begin{eqnarray}\label{294b}
T_0 = U_{G_z,k}U_{G_z,k}^T G_{xy}\qa T_1 = U_{G_z,k}U_{G_z,k}^T G_{xs},
\end{eqnarray}
where
\begin{eqnarray}\label{2mox5}
G_z = G_y + G_s.
\end{eqnarray}
\end{theorem}
\begin{proof}
For vector $\s$ defined by (\ref{smi91}),
$$
E_{ys} = E[\y (\w - E_{wy}E_{yy}^\dag \y)^T]=E_{yw} - E_{yy}E_{yy}^\dag E_{yw}=\oo
$$
because by Corollary 1 in {\em\cite{924074}}, $E_{yw} = E_{yy}E_{yy}^\dag E_{yw}$. Then (\ref{akl7}) follows. Further, since $\|\x\|^2_\Omega = \mbox{\em tr} \hspace*{.5mm}E[\x \x^T]$ (see {\em\cite[pp. 166-167]{torbook2007}}) then, for $T=[T_0\hspace*{.5mm} T_1]$,
\begin{eqnarray}\label{wnmui}
\|\x - [ T_0\y + T_1 \f(\y, \w)]\|^2_\Omega &=& \|\x - T\zz\|^2_\Omega\nonumber\\
&=& \mbox{\em tr}\hspace*{.5mm} E\{(\x - T\zz) (\x - T\zz)^T\}\nonumber\\
&=& \|E_{xx}^{1/2}\|^{2} - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2} \nonumber\\
& &\hspace*{17mm} + \|E_{xz}{E_{zz}^{1/2}}^\dag - T{E_{zz}^{1/2}}\|^{2}.
\end{eqnarray}
Therefore, the problem in (\ref{589mb})--(\ref{xnm93}) is reduced to
\begin{eqnarray}\label{}
\min_{T: \rank T \leq k} \|E_{xz}{E_{zz}^{1/2}}^\dag - T{E_{zz}^{1/2}}\|^{2}.
\end{eqnarray}
Its solution is given in \cite{tor5277} by
\begin{eqnarray}\label{xm02}
T=[T_0 \hspace*{1mm} T_1] = [E_{xz}{E_{zz}^\dag}^{1/2}]_k{E_{zz}^\dag}^{1/2}(I + N).
\end{eqnarray}
Let us write $U_Q\Sigma_Q V_Q^T = Q$ for the SVD of $Q=E_{xz}{E_{zz}^\dag}^{1/2}$.
Then by \cite{tor843},
\begin{eqnarray}\label{amn82}
[E_{xz}{E_{zz}^\dag}^{1/2}]_k = U_{Q, k} U_{Q, k}^T E_{xz}{E_{zz}^\dag}^{1/2}.
\end{eqnarray}
Since ${G_z}=Q Q^T$ then $U_{G_z}=U_Q$ and $U_{{G_z}, k} = U_{Q, k}$. Therefore, (\ref{xm02}) and (\ref{amn82}) imply
\begin{eqnarray}\label{9cn02}
T=[T_0 \hspace*{1mm} T_1] = U_{{G_z}, k}U_{{G_z}, k}^T G_{xz}(I + N).
\end{eqnarray}
Here, on the basis of (\ref{akl7}),
\begin{eqnarray}\label{am198n}
G_{xz} = [E_{xy}\hspace*{1.5mm} E_{xs}]\left[ \begin{array}{cc}
E_{yy}^\dag & \oo\\
\oo & E_{ss}^\dag
\end{array} \right] = [G_{xy}\hspace*{1.5mm} G_{xs}]
\end{eqnarray}
and
\begin{eqnarray}\label{348vn}
&&\hspace*{-30mm} {G_z}=[E_{xy}\hspace*{1.5mm} E_{xs}]\left[ \begin{array}{cc}
E_{yy}^\dag & \oo\\
\oo & E_{ss}^\dag
\end{array} \right] \left[ \begin{array}{c}E_{yx}\\ E_{sx}\end{array} \right] \nonumber\\
&&\hspace*{-20mm} = E_{xy}E_{yy}^\dag E_{yx} + E_{xs}E_{ss}^\dag E_{sx}= G_{y} + G_{s}.
\end{eqnarray}
Then (\ref{z,.23}), (\ref{294b}) and (\ref{2mox5}) follow.
$\hfill\blacksquare$ \end{proof}
Thus, the three-term PCA is represented by (\ref{sjk92}), (\ref{smi91}), (\ref{z,.23}) and (\ref{294b}).
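For concreteness, the construction of Theorem \ref{389nm} can be sketched numerically as follows (a minimal NumPy sketch of our own; all covariance matrices are estimated from samples and the setup is purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, ell, k, p = 8, 8, 6, 4, 2000
X = rng.standard_normal((m, p))                 # reference data x
Y = rng.standard_normal((n, m)) @ X + 0.5 * rng.standard_normal((n, p))
W = rng.standard_normal((ell, p))               # w-injection
cov = lambda A, B: (A @ B.T) / p                # sample covariance

# s = f(y, w) = w - G_wy y, which enforces E_ys = 0
S = W - cov(W, Y) @ np.linalg.pinv(cov(Y, Y)) @ Y
Z = np.vstack([Y, S])                           # z = [y^T s^T]^T

Exz, Ezz = cov(X, Z), cov(Z, Z)
Gz = Exz @ np.linalg.pinv(Ezz) @ Exz.T          # G_z = G_y + G_s

_, vecs = np.linalg.eigh(Gz)                    # G_z is symmetric PSD
Uk = vecs[:, ::-1][:, :k]                       # U_{G_z,k}

T = Uk @ Uk.T @ Exz @ np.linalg.pinv(Ezz)       # minimum norm solution
T0, T1 = T[:, :n], T[:, n:]

print(np.abs(cov(Y, S)).max())                  # ~ 0: E_zz is block diagonal
X_hat = T0 @ Y + T1 @ S                         # reconstruction from k PCs
\end{verbatim}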
\section{Analysis of the error associated with three-term PCA}
Let us denote the error associated with the three-term PCA by
\begin{eqnarray}\label{s89mb}
\varepsilon_{m,n,\ell} (T_0, T_1) = \min_{\substack{[T_0 \hspace*{1mm} T_1]: \\ \rank [T_0 \hspace*{1mm} T_1] \leq k}}\|\x - [ T_0\y + T_1 \f(\y, \w)]\|^2_\Omega.
\end{eqnarray}
The following theorem provides an a priori representation of $\varepsilon_{m,n,\ell} (T_0, T_1)$.
\begin{theorem}\label{377nm}
Let $\f$, $T_0$ and $T_1$ be determined by Theorem \ref{389nm}. Then
\begin{eqnarray}\label{215smb}
\varepsilon_{m,n,\ell} (T_0, T_1) = \|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^{k} \sigma_i ({G_z}).
\end{eqnarray}
\end{theorem}
\begin{proof}
It follows from (\ref{wnmui}) and (\ref{xm02}) that
\begin{eqnarray}\label{q28mb}
&&\hspace*{-10mm}\varepsilon_{m,n,\ell} (T_0, T_1) \nonumber\\
&=& \|E_{xx}^{1/2}\|^{2} - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2} + \|E_{xz}{E_{zz}^{1/2}}^\dag - [E_{xz}{E_{zz}^\dag}^{1/2}]_k{E_{zz}^\dag}^{1/2}(I + N){E_{zz}^{1/2}}\|^{2}\nonumber\\
&=& \|E_{xx}^{1/2}\|^{2} - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2} + \|E_{xz}{E_{zz}^{1/2}}^\dag - [E_{xz}{E_{zz}^\dag}^{1/2}]_k
\|^{2}.
\end{eqnarray}
The latter is true because by Lemma 42 in \cite[p. 311]{torbook2007},
$$
[E_{xz}{E_{zz}^\dag}^{1/2}]_k = [E_{xz}{E_{zz}^\dag}^{1/2}]_k{E_{zz}^\dag}^{1/2}{E_{zz}^{1/2}}.
$$
Then
\begin{eqnarray}\label{qr56b}
\varepsilon_{m,n,\ell} (T_0, T_1)& = & \|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^{m} \sigma_i({G_z}) + \sum_{i=k+1}^{m} \sigma_i({G_z})\nonumber\\
& = & \|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^{k} \sigma_i({G_z}).
\end{eqnarray}
Thus, (\ref{215smb}) is true.\hfill$\blacksquare$
\end{proof}
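The identity (\ref{215smb}) can be checked numerically; a minimal self-contained sketch (ours, purely illustrative) compares the directly computed error with the closed form:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, n, ell, k, p = 6, 6, 4, 3, 5000
X = rng.standard_normal((m, p))
Y = rng.standard_normal((n, m)) @ X + rng.standard_normal((n, p))
W = rng.standard_normal((ell, p))
cov = lambda A, B: (A @ B.T) / p

S = W - cov(W, Y) @ np.linalg.pinv(cov(Y, Y)) @ Y
Z = np.vstack([Y, S])
Exx, Exz, Ezz = cov(X, X), cov(X, Z), cov(Z, Z)
Gz = Exz @ np.linalg.pinv(Ezz) @ Exz.T

vals, vecs = np.linalg.eigh(Gz)
Uk = vecs[:, ::-1][:, :k]
T = Uk @ Uk.T @ Exz @ np.linalg.pinv(Ezz)

# direct error: tr E[(x - Tz)(x - Tz)^T]
err_direct = np.trace(Exx - 2 * T @ Exz.T + T @ Ezz @ T.T)
# closed form: ||E_xx^{1/2}||^2 - sum of the k largest sigma_i(G_z)
err_formula = np.trace(Exx) - np.sort(vals)[::-1][:k].sum()
print(err_direct, err_formula)                  # the two values agree
\end{verbatim}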
\section{Advantages of three-term PCA}\label{x78an}
Here and in Section \ref{x78kl} below, we justify in detail the advantages of the three-term PCA that have been highlighted in Section \ref{wn9w}.
\subsection{Solution of Problem 2. Improvement in the associated error compared to PCA and GBT1}\label{298an}
We wish to show that the error associated with the three-term PCA, ${\varepsilon_{m,n,\ell} (T_0, T_1)}$, is less than that of PCA and the GBT1 \cite{tor843}. A similar statement has been provided in Corollary 3 in \cite{924074} under the condition which is difficult to verify. In Theorems \ref{xnm91n} and \ref{mak92n} below, we show that the condition can be omitted. Let us denote the error associated with the GBT1 by
\begin{eqnarray}\label{3290b}
\varepsilon_{m,n} (B_0) = \min_{\substack{B_0\in{\mathbb R}^{m\times n}: \\ \rank (B_0) \leq k}}\|\x - B_0\y\|^2_\Omega.
\end{eqnarray}
Matrix $B_0=R_1P_1$ that solves the RHS in (\ref{3290b}) follows from (\ref{ac11}) if $R_1 P_2\vv=\mathbf 0$ (see Section \ref{gbt1}).
\begin{theorem}\label{xnm91n}
For any non-zero random vectors $\x, \y$ and $\w$,
\begin{eqnarray}\label{al9b}
\varepsilon_{m,n,\ell} (T_0, T_1) \leq \varepsilon_{m,n} (B_0).
\end{eqnarray}
If $G_s=E_{xs}E_{ss}^\dag E_{sx}$ is positive definite then
\begin{eqnarray}\label{al9xc}
\varepsilon_{m,n,\ell} (T_0, T_1) < \varepsilon_{m,n} (B_0).
\end{eqnarray}
\end{theorem}
\begin{proof}
It is known {\em \cite{tor843}} that
\begin{eqnarray}\label{xm819}
\varepsilon_{m,n} (B_0) = \|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^{k} \sigma_i ({G_y}).
\end{eqnarray}
Consider ${G_z} = G_y + ({G_z} - {G_y}).$ Clearly, ${G_z} - {G_y}$ is a symmetric matrix. Then on the basis of (\ref{7hb91}) in the above Proposition \ref{proposition8},
\begin{eqnarray}\label{20mna}
\sigma_i(G_y)+\sigma_{m}({G_z} - {G_y})\leq \sigma_i(G_z),
\end{eqnarray}
where
$$
{G_z} - {G_y} = G_s=MM^T
$$
and $M = E_{xs}{E_{ss}^\dag}^{1/2}$. Thus, by Theorem 7.3 in \cite{Zhang2011}, ${G_z} - {G_y}$
is a positive semi-definite matrix and then all its eigenvalues are nonnegative \cite[p. 167]{golub2013matrix}, i.e., $\sigma_i({G_z} - {G_y})\geq 0.$
Therefore, (\ref{20mna}) implies
$
\sigma_i ({G_y})\leq \sigma_i ({G_z}),
$
for all $i=1,\ldots,m$, and then
\begin{eqnarray}\label{akl81}
\sum_{i=1}^k\sigma_i ({G_y})\leq \sum_{i=1}^k\sigma_i ({G_z}).
\end{eqnarray}
As a result, (\ref{al9b}) follows from (\ref{215smb}), (\ref{xm819}) and (\ref{akl81}). In particular, if $G_s$ is positive definite then $\sigma_i({G_z} - {G_y}) > 0$, for $i=1,\ldots,m$, and therefore, (\ref{al9xc}) is true.\hfill$\blacksquare$
\end{proof}
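The key singular-value inequality $\sigma_i(G_y)\leq \sigma_i(G_z)$ used in the proof is easy to observe numerically; a minimal sketch (ours, purely illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
m, n, ell, p = 6, 6, 4, 5000
X = rng.standard_normal((m, p))
Y = rng.standard_normal((n, m)) @ X + rng.standard_normal((n, p))
W = rng.standard_normal((ell, p))
cov = lambda A, B: (A @ B.T) / p

S = W - cov(W, Y) @ np.linalg.pinv(cov(Y, Y)) @ Y
Gy = cov(X, Y) @ np.linalg.pinv(cov(Y, Y)) @ cov(X, Y).T
Gs = cov(X, S) @ np.linalg.pinv(cov(S, S)) @ cov(X, S).T
sy = np.linalg.eigvalsh(Gy)[::-1]       # sigma_i(G_y), descending
sz = np.linalg.eigvalsh(Gy + Gs)[::-1]  # sigma_i(G_z), G_z = G_y + G_s
print(np.all(sy <= sz + 1e-12))         # True: sigma_i(G_y) <= sigma_i(G_z)
\end{verbatim}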
In the following Theorem \ref{mak92n}, we refine the result obtained in the above Theorem \ref{xnm91n}.
\begin{theorem}\label{mak92n}
Let, as before, $k= \min \{m, n\}.$ There exists $\gamma\in [\sigma_m(G_s), \sigma_{1}(G_s)]$ such that
\begin{eqnarray}\label{al917}
\varepsilon_{m,n,\ell} (T_0, T_1) = \varepsilon_{m,n} (B_0) - k\gamma,
\end{eqnarray}
i.e., the error associated with the three-term PCA is less than that of the GBT1 by $k\gamma$.
\end{theorem}
\begin{proof}
Weyl's inequality in (\ref{7hb91}) and the equality in (\ref{348vn}) imply, for $i= 1,\ldots, m$,
\begin{eqnarray*}\label{7dmnf1}
\sigma_i(G_y)+\sigma_{m}(G_s)\leq \sigma_i(G_z)\leq \sigma_i(G_y)+\sigma_{1}(G_s)
\end{eqnarray*}
which, in turn, implies
\begin{eqnarray*}
\sigma_{m}(G_s)\leq \sigma_i(G_z) - \sigma_i(G_y)\leq \sigma_{1}(G_s)
\end{eqnarray*}
and
\begin{eqnarray*}
\sum_{i=1}^k\sigma_{m}(G_s)\leq \sum_{i=1}^k[\sigma_i(G_z) - \sigma_i(G_y)]\leq \sum_{i=1}^k\sigma_{1}(G_s).
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
k\sigma_{m}(G_s)\leq \sum_{i=1}^k\sigma_i(G_z) - \sum_{i=1}^k\sigma_i(G_y)\leq k\sigma_{1}(G_s)
\end{eqnarray*}
and
\begin{eqnarray*}
k\sigma_{m}(G_s)\leq \left(\|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^k\sigma_i(G_y)\right) - \left(\|E_{xx}^{1/2}\|^{2} - \sum_{i=1}^k\sigma_i(G_z)\right)\leq k\sigma_{1}(G_s).
\end{eqnarray*}
Thus
\begin{eqnarray*}
k\sigma_{m}(G_s)\leq \varepsilon_{m,n} (B_0) - \varepsilon_{m,n,\ell} (T_0, T_1) \leq k\sigma_{1}(G_s)
\end{eqnarray*}
and
\begin{eqnarray*}
\frac{\varepsilon_{m,n} (B_0) - \varepsilon_{m,n,\ell} (T_0, T_1)}{k} \in [\sigma_{m}(G_s), \sigma_{1}(G_s)],
\end{eqnarray*}
and then (\ref{al917}) follows.
$\hfill\blacksquare$ \end{proof}
\begin{remark}
If $G_s$ is a full rank matrix then $\sigma_{m}(G_s)\neq 0$ and therefore, $\gamma\neq 0$, i.e. in this case, (\ref{al917}) implies that $\varepsilon_{m,n,\ell} (T_0, T_1)$ is always less than $\varepsilon_{m,n} (B_0)$.
If $\rank (G_s) = r_s$ where $r_s <m$ then $\sigma_{m}(G_s) = 0$ and $\gamma\in [0, \sigma_{1}(G_s)]$, i.e. in this case, $\gamma$ might be equal to $0$.
\end{remark}
\begin{remark}
Recall that PCA is a particular case of the GBT1 (see Section \ref{gbt1}). Therefore, in Theorems \ref{xnm91n} and \ref{mak92n}, $\varepsilon_{m,n} (B_0)$ can be treated as the error associated with PCA, under the restriction that matrix $E_{yy}$ is non-singular.
\end{remark}
\subsection{ Decrease in the error associated with the three-term PCA with
the increase in the injection dimension}\label{xvb8an}
In Theorem \ref{w791n} that follows we show that the error $\varepsilon_{m,n,\ell} (T_0, T_1)$ associated with the three-term PCA (represented by (\ref{sjk92}), (\ref{smi91})--(\ref{294b})) can be decreased if vector $\s$ is extended to a new vector $\widetilde\s$ of a dimension which is larger than that of vector $\s$. The vector $\widetilde\s$ is constructed as $\widetilde\s = [\s^T \gf^T]^T$ where $\gf = \h - G_{h z}\zz\in L^2(\Omega,\mathbb{R}^\eta)$, $G_{h z}=E_{h z}E_{z z}^\dag$ and $\h\in L^2(\Omega,\mathbb{R}^\eta)$ is arbitrary. As we mentioned before, similar to $\w$-injection, vector $\h$ is called the $\h$-injection. As before, $\s$ is defined by (\ref{smi91}) and $\zz=[\y^T \s^T]^T$.
Thus, $\widetilde\s \in L^2(\Omega,\mathbb{R}^{(\ell+\eta)})$ while $\s \in L^2(\Omega,\mathbb{R}^\ell)$, i.e. the dimension of $\widetilde\s$ is larger than that of $\s$ by $\eta$ entries.
In terms of $\widetilde\s$, the three-term PCA is represented as
\begin{eqnarray}\label{sjk49}
\sss (\y, \w,\h) = S_0\y + S_1\widetilde\s,
\end{eqnarray}
where similar to $T_0$ and $T_1$ in (\ref{294b}), and for $\widetilde{\zz}=[\y^T \hspace*{1mm} \widetilde{\s}^T]^T$, matrices $S_0$ and $S_1$ are given by
\begin{eqnarray}\label{wn202}
S_0 = U_{G_{\widetilde{z}},k}U_{G_{\widetilde{z}},k}^T G_{xy}\qa S_1 = U_{G_{\widetilde{z}},k}U_{G_{\widetilde{z}},k}^T G_{x {\widetilde{s}}}.
\end{eqnarray}
Here, $G_{\widetilde{z}} = G_y + G_{\widetilde{s}}$ and
$G_{\widetilde{s}} = E_{x {\widetilde{s}}}E_{{\widetilde{s}}{\widetilde{s}}}^{\dag}E_{{\widetilde{s}} x}$.
The associated error is denoted by
\begin{eqnarray}\label{cn920}
\varepsilon_{m,n,\ell+\eta} (S_0, S_1) = \min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}}\|\x - [ S_0\y + S_1\widetilde\s]\|^2_\Omega.
\end{eqnarray}
\begin{theorem}\label{w791n}
For any non-zero random vectors $\x, \y,\w$ and $\h$,
\begin{eqnarray}\label{cnm201}
\varepsilon_{m,n,\ell+\eta} (S_0, S_1) \leq \varepsilon_{m,n,\ell} (T_0, T_1).
\end{eqnarray}
If $G_s=E_{xs}E_{ss}^\dag E_{sx}$ is positive definite then
\begin{eqnarray}\label{q87201}
\varepsilon_{m,n,\ell+\eta} (S_0, S_1) < \varepsilon_{m,n,\ell} (T_0, T_1).
\end{eqnarray}
\end{theorem}
\begin{proof}
Let us represent $S_1$ in terms of two blocks, $S_{11}$ and $S_{12}$, i.e., $S_1 = [S_{11}\hspace*{1mm} S_{12}],$ and also write ${\widehat S} = [S_{0}\hspace*{1mm} S_{11}]$. Then
\begin{eqnarray}\label{190cn3}
& &\hspace*{-30mm} \min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}}\left\|\x - [ S_0\y + S_1\widetilde\s]\right\|^2_\Omega\nonumber\\
& = & \min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}}\left\|\x - \left[ S_0\y + S_1\left[
\begin{array}{c}
\s \\
\gf \\
\end{array}
\right]\right]\right\|^2_\Omega \nonumber\\
& = & \min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}}\left\|\x - [ S_0\y + S_{11}\s + S_{12}\gf ]\right\|^2_\Omega \nonumber \\
& = &\min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}} \left\|\x - [{\widehat S}\zz + S_{12}\gf ]\right\|^2_\Omega.
\end{eqnarray}
Here,
$[S_0 \hspace*{1mm} S_1] = [S_0 \hspace*{1mm} S_{11}\hspace*{1mm} S_{12}] = [{\widehat S}\hspace*{1mm} S_{12}]$. Therefore,
\begin{eqnarray}\label{3333m}
& &\hspace*{-37mm} \min_{\substack{[S_0 \hspace*{1mm} S_1]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [S_0 \hspace*{1mm} S_1] \leq k}} \left\|\x - [{\widehat S}\zz + S_{12}\gf ]\right\|^2_\Omega\nonumber\\
& = & \min_{\substack{[{\widehat S} \hspace*{1mm} S_{12}]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [{\widehat S} \hspace*{1mm} S_{12}] \leq k}}\|\x - [{\widehat S}\zz + S_{12}\gf ]\|^2_\Omega.
\end{eqnarray}
In (\ref{s89mb}), let us write $\varepsilon_{m,n,\ell} (T_0, T_1)$ in terms of $\s$,
\begin{eqnarray}\label{qg59mb}
\varepsilon_{m,n,\ell} (T_0, T_1) = \min_{\substack{[T_0 \hspace*{1mm} T_1]\in{\mathbb R}^{m\times (n+\ell)}: \\ \rank [T_0 \hspace*{1mm} T_1] \leq k}}\|\x - [ T_0\y + T_1 \s]\|^2_\Omega.
\end{eqnarray}
Then by (\ref{al9b}) in Theorem \ref{xnm91n},
\begin{eqnarray}\label{po02m}
& & \hspace*{-35mm}\min_{\substack{[{\widehat S} \hspace*{1mm} S_{12}]\in{\mathbb R}^{m\times (n+\ell+\eta)}: \\ \rank [{\widehat S} \hspace*{1mm} S_{12}] \leq k}}\|\x - [{\widehat S}\zz + S_{12}\gf ]\|^2_\Omega \nonumber\\
&\leq &\min_{\substack{T\in{\mathbb R}^{m\times (n+\ell)}: \\ \rank (T) \leq k}}\|\x - T\zz\|^2_\Omega\nonumber\\
& = &\min_{\substack{[T_0 \hspace*{1mm} T_1]\in{\mathbb R}^{m\times (n+\ell)}: \\ \rank [T_0 \hspace*{1mm} T_1] \leq k}}\left\|\x - [T_0 \hspace*{1mm} T_1]\left[
\begin{array}{c}
\y \\
\s \\
\end{array}
\right]\right\|^2_\Omega\nonumber\\
& = &\min_{\substack{[T_0 \hspace*{1mm} T_1]\in{\mathbb R}^{m\times (n+\ell)}: \\ \rank [T_0 \hspace*{1mm} T_1]\leq k}} \|\x - [T_0\y +T_1\s]\|^2_\Omega,
\end{eqnarray}
where $T = [T_0 \hspace*{1mm} T_1]$, $T_0\in{\mathbb R}^{m\times n}$ and $T_1\in{\mathbb R}^{m\times \ell}$.
Then (\ref{cnm201}) follows from (\ref{3333m}) and (\ref{po02m}), and (\ref{al9xc}) implies (\ref{q87201}).
$\hfill\blacksquare$ \end{proof}
\begin{remark}\label{hjk29}
An intuitive explanation of the statement of Theorem \ref{w791n} is that the increase in the dimension of vector $\widetilde\s$ implies the increase in the dimension of the matrix to optimize, so that in (\ref{sjk49}) $[S_0\hspace*{1mm} S_1]\in\rt^{m\times (n+\ell+\eta)}$ while in (\ref{sjk92}), (\ref{smi91})--(\ref{294b}), $[T_0\hspace*{1mm} T_1]\in\rt^{m\times (n+\ell)}$. Therefore, the optimal matrix $[S_0\hspace*{1mm} S_1]$ has $m\times \eta$ entries more than $[T_0\hspace*{1mm} T_1]$ to further minimize the associated error. As a result, the three-term PCA in form (\ref{sjk49}), for the same number of principal components $k$, provides a more accurate reconstruction of $\x$ than the three-term PCA in form (\ref{sjk92}).
\end{remark}
\subsection{Decrease in the error associated with the three-term PCA compared with that of the GBT2 and GKLT}\label{1n8an}
In Theorem \ref{wd87n} below, we show that the three-term PCA in (\ref{sjk49})-(\ref{wn202}) provides better associated accuracy than the GBT2 in (\ref{bnm11})-(\ref{ac11}).
Let us denote the error associated with the GBT2 by
\begin{eqnarray}\label{ii9m}
\varepsilon_{m,n} (B_0, B_1) = \min_{\substack{[B_0 \hspace*{1mm} B_1]\in{\mathbb R}^{m\times (2n)}: \\ \rank [B_0 \hspace*{1mm} B_1]\leq k}} \|\x - [B_0\y +B_1\vv]\|^2_\Omega,
\end{eqnarray}
where $B_0=R_1 P_1$ and $B_1=R_1 P_2$ are determined by (\ref{ac11}).
In fact, the result below is a version of Theorem \ref{w791n} as follows.
\begin{theorem}\label{wd87n}
Let $\s = \vv$ where $\ell = n$, and random vector $\vv$ is the same as in (\ref{bnm11})-(\ref{ac11}).
Then for any non-zero random vectors $\x, \y$ and $\h$,
\begin{eqnarray}\label{cnm101}
\varepsilon_{m,n,n+\eta} (S_0, S_1) \leq \varepsilon_{m,n} (B_0, B_1).
\end{eqnarray}
If $G_s=E_{xs}E_{ss}^\dag E_{sx}$ is positive definite then
\begin{eqnarray}\label{n29h01}
\varepsilon_{m,n,n+\eta} (S_0, S_1) < \varepsilon_{m,n} (B_0, B_1).
\end{eqnarray}
\end{theorem}
\begin{proof}
We observe that Theorem \ref{w791n} is true for any form of $\s$. In particular, it is true for $\s = \vv$ where $\ell = n$ and random vector $\vv$ is the same as in (\ref{bnm11})-(\ref{ac11}). Then (\ref{cnm101}) and (\ref{n29h01}) follow from (\ref{cnm201}) and (\ref{q87201}), respectively. $\hfill\blacksquare$
\end{proof}
\begin{remark}\label{op28m}
Theorem \ref{wd87n} is also valid for $\s = \y^2$ with $\ell =n$. Thus, in this case, the three-term PCA in (\ref{sjk49})-(\ref{wn202}) provides better associated accuracy than the GKLT in (\ref{qmiw8}).
\end{remark}
\section{Special cases of three-term PCA}\label{speccases}
In both forms of the three-term PCA represented by (\ref{sjk92}) and (\ref{sjk49}), vectors $\s$ and $\widetilde \s$ are constructed from auxiliary random vectors called the $\w$-injection and $\h$-injection, respectively, which are assumed to be arbitrary. At the same time, a natural question is whether there are approaches for choosing $\w$ and $\h$ which may (or may not) improve the three-term PCA performance. Here, such approaches are considered.
\subsection{Case 1. Choice of $\w$ on the basis of Weierstrass theorem}\label{}
For the three-term PCA represented by (\ref{sjk92}), a seemingly reasonable choice of vector $\w$ is $\w = \y^2$, where $\y^2$ is defined by the Hadamard product, $\y^2 = \y\circ \y$, i.e. by $\y^2(\omega) = [\y^2_1(\omega), \ldots, \y^2_n(\omega)]^T$, for all $\omega\in \Omega$. This is because if $\ttt(\y, \w)$ in (\ref{sjk92}) is written as $\ttt(\y, \y^2) = T_0\y + T_1 \f(\y,\y^2)$, then $\ttt(\y, \y^2)$ can be interpreted as a polynomial of the second degree. If $T_0$ and $T_1$ are defined by Theorem \ref{389nm}, then $\ttt(\y, \y^2)$ can be considered as an approximation to an `idealistic' transform $\p$ such that $\x=\p(\y)$. On the basis of the Stone-Weierstrass theorem \cite{Kreyszig1978,Timofte2005}, $\ttt(\y, \y^2)$ should seemingly provide better approximation accuracy of $\p(\y)$ than that of the first degree polynomial $\ttt(\y) = T_0\y$. Nevertheless, the reduced-rank constraint implies a deterioration of the approximation of $\p(\y)$ by $\ttt(\y, \y^2) = T_0\y + T_1\y^2$; such a constraint is not a condition of the Stone-Weierstrass theorem. To the best of our knowledge, an extension of the Stone-Weierstrass theorem to a best {\em rank constrained} estimation of $\x$ has not yet been justified.
Further, if for example, $\y = \x +\bxi$ where $\bxi$ is a random noise, then $\y^2 = \x^2 +2\x\circ\bxi + \bxi^2$, i.e., $\y^2$ becomes even more corrupted than $\y$. Another inconvenience is that a knowledge or evaluation of matrices $E_{x y^2}$ and $E_{y^2 y^2}$ is difficult. For the above reasons, the choice of $\w$ in the form $\w = \y^2$ is not preferable.
\subsection{Case 2. Choice of $\w$ as an optimal estimate of $\x$}\label{}
Another seemingly reasonable choice of the $\w$-injection in (\ref{sjk92}) is $\w=A\y$ where $A=E_{xy}E_{yy}^\dag$, i.e. $\w=E_{xy}E_{yy}^\dag \y$ is the optimal minimal-norm linear estimate of $\x$ \cite{torbook2007}. Nevertheless, in this case, $\s = A(\y - E_{yy}E_{yy}^\dag \y)$ and then
$$
E_{xs} = (E_{xy} -E_{xy}E_{yy}^\dag E_{yy})A^T = \oo
$$
since $E_{xy} =E_{xy}E_{yy}^\dag E_{yy}$ \cite[p. 168]{torbook2007}. The latter implies $E_{xs} E_{ss}^\dag = \oo$. As a result, by (\ref{294b}), $T_1=\oo$ and then in (\ref{sjk92}), $ \ttt(\y,\w) = T_0\y$. In other words, this choice of $\w$ is unreasonable since then the three-term PCA in (\ref{sjk92}), (\ref{294b}) is reduced to the GBT1 in \cite{tor843}.
\subsection{ Case 3. Choice of $\h$: `worse is better'}\label{vnb01m}
It has been shown in Theorem \ref{w791n} that the error associated with the three-term PCA represented by (\ref{sjk49}) decreases if vector $\s\in L^2(\Omega,\mathbb{R}^{\ell})$ is replaced with a new vector $\widetilde{\s}\in L^2(\Omega,\mathbb{R}^{(\ell+\eta)})$ of a larger dimension. Recall, in (\ref{sjk49}), $\widetilde{\s}$ is formed from an arbitrary $\h\in L^2(\Omega,\mathbb{R}^{\eta})$. In particular, for $\eta = 0$, (\ref{cnm201}) implies $\varepsilon_{m,n,\ell+\eta} (S_0, S_1) = \varepsilon_{m,n,\ell} (T_0, T_1)$. Thus, the increase in $\eta$ implies the decrease in the error associated with the three-term PCA represented by (\ref{sjk49}).
Thus, a reasonable (and quite surprising) choice of $\h$ is as follows: $\h$ is random and $\eta$ is large. This is similar to the concept `worse is better' \cite{Gabriel}, i.e., a random $\h$ with large dimension $\eta$ (`worse') is a preferable option (`better') in terms of practicality and usability over, for example, the choice of $\w$ considered, for instance, in Case 1. In Case 1, the dimension of $\w=\y^2$ is $n$ and cannot be changed while dimension $\ell$ can vary and, in particular, can be increased. Another advantage over Case 1 is that there is no need to evaluate matrices $E_{x y^2}$ and $E_{y^2 y^2}$ as required in Case 1.
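A minimal sketch (ours, purely illustrative) of the construction used here: an arbitrary random $\h$-injection of dimension $\eta$ extends $\s$ to $\widetilde{\s}=[\s^T\; \gf^T]^T$ with $\gf = \h - G_{hz}\zz$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, ell, eta, p = 8, 8, 100, 5000                # eta may be made large
Y = rng.standard_normal((n, p))                 # observed data samples
W = rng.standard_normal((ell, p))               # w-injection
H = rng.standard_normal((eta, p))               # random h-injection
cov = lambda A, B: (A @ B.T) / p

S = W - cov(W, Y) @ np.linalg.pinv(cov(Y, Y)) @ Y
Z = np.vstack([Y, S])                           # z = [y^T s^T]^T
F = H - cov(H, Z) @ np.linalg.pinv(cov(Z, Z)) @ Z
S_tilde = np.vstack([S, F])                     # dimension ell + eta
\end{verbatim}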
In Example \ref{m2b9} that follows, we numerically illustrate Theorem \ref{w791n} and Case 3.
\begin{example}\label{m2b9}
Let ${\bf{x}}=[\x_1,\ldots, \x_m]^T\in L^2(\Omega,\mathbb{R}^{m})$ represent a temperature distribution in $m$ locations in Australia. Entries of $\x$ and corresponding locations are represented in Table \ref{table1}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}
\hline
Entry & Location \\
\hline
$\x_1$ & Canberra \\
\hline
$\x_2$ & Tuggeranong \\
\hline
$\x_3$ & Sydney \\
\hline
$\x_4$ & Penrith \\
\hline
$\x_5$ & Wollongong \\
\hline
$\x_6$ & Melbourne \\
\hline
$\x_7$ & Ballarat \\
\hline
$\x_8$ & Albury-Wodonga \\
\hline
$\x_{9}$ & Bendigo \\
\hline
$\x_{10}$ & Brisbane \\
\hline
$\x_{11}$ & Cairns \\
\hline
$\x_{12}$ & Townsville \\
\hline
$\x_{13}$ & Gold Coast \\
\hline
$\x_{14}$ & Adelaide \\
\hline
$\x_{15}$ & Mount Gambier \\
\hline
$\x_{16}$ & Renmark \\
\hline
$\x_{17}$ & Port Lincoln \\
\hline
\end{tabular}
\begin{tabular}{|c|c|}
\hline
Entry & Location \\
\hline
$\x_{18}$ & Perth \\
\hline
$\x_{19}$ & Kalgoorlie-Boulder \\
\hline
$\x_{20}$ & Broome \\
\hline
$\x_{21}$ & Hobart \\
\hline
$\x_{22}$ & Launceston \\
\hline
$\x_{23}$ & Devonport \\
\hline
$\x_{24}$ & Darwin \\
\hline
$\x_{25}$ & Alice Springs \\
\hline
$\x_{26}$ & Tennant Creek \\
\hline
$\x_{27}$ & Casey \\
\hline
$\x_{28}$ & Davis \\
\hline
$\x_{29}$ & Mawson \\
\hline
$\x_{30}$ & Macquarie Island \\
\hline
$\x_{31}$ & Christmas Island \\
\hline
$\x_{32}$ & Cocos Island \\
\hline
$\x_{33}$ & Norfolk Island \\
\hline
$\x_{34}$ & Howe Island \\
\hline
\end{tabular}
\caption{Distribution of entries of signal $\x$ over locations in Australia.}
\label{table1}
\end{table}
Suppose $\omega\in\Omega$ is associated with time $t_\omega\in[0, 24]$ of the temperature measurement. Then $\x_j(\omega)$, for $j=1,\ldots,m$, is the temperature in the $j$th location at time $t_\omega$. The values of the minimum and maximum daily temperature, and the temperature at 9am and 3pm, in $m=34$ specific locations of Australia, for each day in 2016, are provided\footnote{There were 366 days in 2016. There are four values of temperature per day. Thus, we have 1464 temperature values for 2016. From a statistical point of view, the temperature measurements should be taken at more times during the day, but such data are not available to us.} by the Bureau of Meteorology of the Australian Government \cite{australia_temp}.
In particular, the distribution of the maximum daily temperature in 2016 in all 34 locations in Australia is diagrammatically represented in Fig. \ref{fig3}.
Let $t_{min}$ and $t_{max}$ denote the times when the minimum and maximum temperature occur. We denote by $\omega_{t_{min}}$, $\omega_{t_{max}}$, $\omega_{9{am}}$ and $\omega_{3{pm}}$ the outcomes associated with times $t_{min}$, $t_{max}$, 9am and 3pm, respectively. Further, for $j=1,\ldots,m$, we denote by $\x_{j, (date)}(\omega)$ the temperature in the $j$-th location at time $t_\omega$ on the date labeled as `date'.
For example, on $07/07/2016$, the corresponding temperature values in Ballarat are
$\x_{7, (07/07/2016)}(\omega_{t_{min}}) = 7.4,$ $\x_{7, (07/07/2016)}(\omega_{9 {am}}) = 8.1$,
$\x_{7, (07/07/2016)}(\omega_{3{pm}}) = 9.2,$ $\x_{7, (07/07/2016)}(\omega_{t_{max}}) = 9.5.$
Let ${\bf y}=A{\bf x}+\bxi$ where $A\in\mathbb{R}^{m\times m}$ is an arbitrary matrix with uniformly distributed random entries and $\bxi\in L^2(\Omega,\mathbb{R}^{m})$ is white noise, i.e. $E_{\xi \xi} = \sigma^2 I$.
Further, let $\w\in L^2(\Omega,\mathbb{R}^{\ell})$ and $\h\in L^2(\Omega,\mathbb{R}^{\eta})$ be Gaussian random vectors used in the three-term PCA given by (\ref{sjk49}), (\ref{wn202}).
It is assumed that noise $\bxi$ is uncorrelated with ${\bf x}$, $\w$ and ${\bf h}$.
Covariance matrices are represented in terms of samples. For example, $E_{xw} = \frac{1}{p}X W^T$ and $E_{hh} = \frac{1}{p} H H^T$ where $X\in\mathbb{R}^{m\times p}$, $W\in\mathbb{R}^{\ell\times p}$ and $H\in\mathbb{R}^{\eta\times p}$ are samples of ${\bf x}$, ${\bf w}$ and $\h$, and $p$ is the number of samples. Other covariance matrices are represented similarly. In this example, we consider four specific samples of $\x$ as follows. Matrices $X_{9am}\in\mathbb{R}^{m\times 366}$ and $X_{3pm}\in\mathbb{R}^{m\times 366}$ represent the temperature taken each day in 2016 at 9am and 3pm, respectively, in all $m$ locations. Entries of matrices $X_{max}\in\mathbb{R}^{m\times 366}$ and $X_{min}\in\mathbb{R}^{m\times 366}$ are values of the maximum and minimum temperature, respectively, for each day in 2016 in all $m$ locations. The database was taken from the Bureau of Meteorology website of the Australian Government \cite{australia_temp}. Matrices $W$ and $H$ were created by the MATLAB command {\tt rand(m,p)}.
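The setup can be sketched as follows (a minimal NumPy sketch of our own; the matrix $X$ below is a random stand-in for the real temperature samples, which we do not reproduce here):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m, ell, eta, p, sigma = 34, 34, 50, 366, 1.0
X = 15 + 10 * rng.random((m, p))                # stand-in for X_{9am} etc.
X = X - X.mean(axis=1, keepdims=True)           # the paper assumes zero means
A = rng.random((m, m))                          # uniform random mixing matrix
Y = A @ X + sigma * rng.standard_normal((m, p)) # noisy observations
W = rng.random((ell, p))                        # w-injection, cf. rand(m,p)
H = rng.random((eta, p))                        # h-injection

cov = lambda P, Q: (P @ Q.T) / p                # e.g. E_xw = X W^T / p
Exw, Ehh = cov(X, W), cov(H, H)
\end{verbatim}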
\begin{figure}[h]
\centering
\includegraphics[scale=0.65]{temperature_Australia_max1.eps}\\
\caption{Values of maximal temperature in 34 locations of Australia in 2016}
\label{fig3}
\end{figure}
We consider eight different cases of simulations. In each case, matrix $X$ is chosen either as one of matrices $X_{9am}$, $X_{3pm}$, $X_{min}$, $X_{max}$, or their combinations as follows:
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\cline{2-6}
\multicolumn{1}{c|}{} & {\small Case 1} & {\small Case 2} & {\small Case 3} & {\small Case 4} & {\small Case 5} \\
\hline
$X$ & $X_{min}$ & $X_{9am}$ & $X_{3pm}$ & $X_{max}$ & $[X_{9am}, X_{3pm}]$ \\
\hline
$p$ & $366$ & $366$ & $366$ & $366$ & $732$ \\
\hline
\end{tabular}
\vspace{0.7cm}
\begin{tabular}{|c|c|c|c|}
\cline{2-4}
\multicolumn{1}{c|}{}& {\small Case 6} & {\small Case 7} & {\small Case 8} \\
\hline
$X$ & $[X_{min}, X_{max}]$ & $[X_{min}, X_{9am}, X_{3pm}]$ & $[X_{min}, X_{9am}, X_{3pm}, X_{max}]$ \\
\hline
$p$ & $732$ & $1098$ & $1464$ \\
\hline
\end{tabular}
\end{table}
For each case, the diagrams of the error associated with the GBT1, GBT2 and the three-term PCA in form (\ref{sjk49})-(\ref{wn202}), for $m=\ell = 34$, $k=17$ and $\sigma=1$, versus the dimension $\eta=0, 1, \ldots, 500$ of vector ${\bf h}$ are shown in Fig. \ref{fig4}.
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.44]{fig1_pca.eps} & \includegraphics[scale=0.44]{fig2_pca.eps}\\
{\small (a) Case 1: $X=X_{min}$} & {\small (b) Case 2: $X=X_{9am}$} \\
\includegraphics[scale=0.44]{fig3_pca.eps} & \includegraphics[scale=0.44]{fig4_pca.eps}\\
{\small (c) Case 3: $X=X_{3pm}$} & {\small (d) Case 4: $X=X_{max}$} \\
\includegraphics[scale=0.44]{fig5_pca.eps} & \includegraphics[scale=0.44]{fig6_pca.eps}\\
{\small (e) Case 5: $X=[X_{9am}\;X_{3pm}]$} & {\small (f) Case 6: $X=[X_{min}\;X_{max}]$} \\
\includegraphics[scale=0.44]{fig7_pca.eps} & \includegraphics[scale=0.44]{fig8_pca.eps}\\
{\small (g) Case 7: $X=[X_{min}\;X_{9am}\;X_{3pm}]$} & {\small (h) Case 8: $X=[X_{min}\;X_{9am}\;X_{3pm}\;X_{max}]$}\\
\end{tabular}
\caption{Errors associated with the GBT1, GBT2 and the three-term PCA versus dimension $\eta$ of $\h$.}
\label{fig4}
\end{figure}
It follows from the diagrams for all cases represented in Fig. \ref{fig4} that the error associated with the three-term PCA decreases as $\eta$ increases. This is a numerical illustration of Theorem \ref{w791n} and Case 3 considered in Section \ref{vnb01m}. In particular, in Figs. \ref{fig4} (a)-(d), for $\eta= 335,\ldots, 500$, the error associated with the three-term PCA coincides with that of the GBT1 applied to the case when $\y = \x$, i.e., with that of PCA applied to the data {\sf\em without any noise.}
Further, Fig. \ref{fig5} illustrates the following observation. For $\sigma\in [0, 2]$ and $\eta\in [0, 300]$, the behavior of the error associated with the three-term PCA is similar: the increase in $\eta$ implies the decrease in the error. Interestingly, for $\eta\in [300, 500]$, the error remains constantly small regardless of the value of $\sigma$. Similar to the above, for $\eta\in [300, 500]$, the error associated with the three-term PCA coincides with that of PCA applied to the data {\sf\em without any noise.}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.6]{surface_example_new.eps}\\
\caption{Diagrams of errors associated with the three-term PCA versus dimension $\eta$ of $\h$ and values of $\sigma$ of $E_{\xi\xi}$.}
\label{fig5}
\end{figure}
\end{example}
\subsection{ Case 4. Pure filtering}\label{}
One more special case of the three-term PCA is as follows.
Consider transform $\ttt_1$ defined by
\begin{eqnarray}\label{sj92}
\ttt_1(\y,\w) = A_0\y + A_1 \f(\y, \w),
\end{eqnarray}
where $A_0\in\rt^{m\times n}$ and $A_1\in\rt^{m\times \ell}$ are full rank matrices. Optimal $A_0$ and $A_1$ are determined from the solution of the following problem:
Given $E_{xy}$, $E_{yy}$, $E_{yw}$ and $E_{ww}$, find full rank $A_0$ and $A_1$ that solve
\begin{eqnarray}\label{zbn91}
\min_{{A_0, A_1}} \|\x - [A_0\y + A_1 \f(\y, \w)]\|^2_\Omega.
\end{eqnarray}
In other words, $\ttt_1$ is a pure filter, with no principal component determination.
As before, we write $\s=\f(\y, \w)=\w- E_{wy} E_{yy}^\dag \y$. We call $\ttt_1$ the three-term filter (TTF).
\begin{theorem}\label{1nm0}
The minimal norm solution to problem (\ref{zbn91}) is given by
\begin{eqnarray}\label{68bn1}
A_0=E_{x y} E_{yy}^\dag \qa A_1=E_{x s} E_{ss}^\dag.
\end{eqnarray}
The associated error is represented by
\begin{eqnarray}\label{xmabn1}
&&\hspace*{-10mm}\min_{A_0, A_1} \|\x - [A_0 \y + A_1 \s]\|^2_\Omega\nonumber\\
&& \hspace*{-3mm}= \|E_{xx}^{1/2}\|^2 - \|E_{xy}{E_{yy}^{1/2}}^\dag\|^{2} - \|E_{xs}{E_{ss}^{1/2}}^\dag\|^{2}.
\end{eqnarray}
\end{theorem}
\begin{proof}
Optimal full rank $A_0$ and $A_1$ given by (\ref{68bn1}) follow from (\ref{xm02}) where ${E_{zz}^\dag}^{1/2}$ is represented by ${E_{zz}^\dag}^{1/2} = \left[ \begin{array}{cc}
{E_{yy}^\dag}^{1/2} & \oo\\
\oo & {E_{ss}^\dag}^{1/2}
\end{array} \right]$ and $[E_{xz}{E_{zz}^\dag}^{1/2}]_{k}$ should be replaced with $E_{xz}{E_{zz}^\dag}^{1/2}$.
Further, for $A=[A_0\hspace*{1mm} A_1]$,
\begin{eqnarray}\label{}
\|\x - [A_0 \y + A_1 \s]\|^2_\Omega = \|E_{xx}^{1/2}\|^2 - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2} + \|E_{xz}{E_{zz}^{1/2}}^\dag - A{E_{zz}^{1/2}}\|^{2}.
\end{eqnarray}
For $A_0$ and $A_1$ given by (\ref{68bn1}), $A=[E_{x y} E_{yy}^\dag \hspace*{1mm}E_{x s} E_{ss}^\dag]=E_{xz}E_{zz}^\dag$. Therefore,
\begin{eqnarray*}
\hspace*{-17mm}\min_{A_0, A_1} \|\x - [A_0 \y + A_1 \s]\|^2_\Omega \hspace*{-5mm}& & =\|E_{xx}^{1/2}\|^2 - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2} \\
& &\hspace*{10mm}+ \|E_{xz}{E_{zz}^{1/2}}^\dag - E_{xz}E_{zz}^\dag {E_{zz}^{1/2}}\|^{2}\\
&& \hspace*{20mm} =\|E_{xx}^{1/2}\|^2 - \|E_{xz}{E_{zz}^{1/2}}^\dag\|^{2}
\end{eqnarray*}
because $E_{zz}^\dag {E_{zz}^{1/2}} = {E_{zz}^{1/2}}^\dag$ {\em \cite[p. 313]{torbook2007}}. Then (\ref{xmabn1}) follows.\hfill$\blacksquare$
\end{proof}
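A minimal numerical sketch (ours, purely illustrative) of the optimal TTF in (\ref{68bn1}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
m, n, ell, p = 6, 6, 4, 5000
X = rng.standard_normal((m, p))
Y = rng.standard_normal((n, m)) @ X + rng.standard_normal((n, p))
W = rng.standard_normal((ell, p))
cov = lambda A, B: (A @ B.T) / p

S = W - cov(W, Y) @ np.linalg.pinv(cov(Y, Y)) @ Y
A0 = cov(X, Y) @ np.linalg.pinv(cov(Y, Y))      # A_0 = E_xy E_yy^+
A1 = cov(X, S) @ np.linalg.pinv(cov(S, S))      # A_1 = E_xs E_ss^+
X_hat = A0 @ Y + A1 @ S                         # pure filtering (no rank cut)
err = ((X - X_hat) ** 2).sum(axis=0).mean()     # sample estimate of the error
\end{verbatim}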
The accuracy of the optimal TTF $\ttt_1$ is better than that of the optimal linear filter $\widetilde{F}=A_0= E_{x y} E_{yy}^\dag$ \cite{torbook2007}. More specifically, the following is true.
\begin{theorem}\label{nm012}
The error associated with the optimal TTF $\ttt_1$ is less than that of the optimal linear filter $\widetilde{F}=A_0$ by $\|E_{xs}{E_{ss}^{1/2}}^\dag\|^{2}$, i.e.,
\begin{eqnarray}\label{wom9}
\min_{A_0, A_1} \|\x - [A_0 \y + A_1 \f(\y, \w)]\|^2_\Omega = \min_{A_0} \|\x - A_0 \y \|^2_\Omega - \|E_{xs}{E_{ss}^{1/2}}^\dag\|^{2}.
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof follows directly from (\ref{xmabn1}). \hfill$\blacksquare$
\end{proof}
\section{Advantages of three-term PCA (continued)}\label{x78kl}
\subsection{Increase in accuracy compared to that of GBT2 \cite{tor843} and GKLT}\label{58b29}
The three-term PCA represented by (\ref{sjk92}) and (\ref{sjk49}) has more parameters to optimize than the GBT2 in (\ref{bnm11}) and the GKLT in (\ref{qmiw8}), i.e., it has more degrees of freedom to control the performance. Indeed, in the three-term PCA, the dimensions of the $\w$-injection and $\h$-injection are $\ell$ and $\eta$, and they can be varied, while in the GBT2, the dimension of the auxiliary vector $\vv$ is $n$ and is fixed. In the GKLT, the dimension of vector $\y^2$ is also fixed.
Thus, in the three-term PCA, $\ell$ and $\eta$ represent the additional degrees of freedom.
By Theorem \ref{w791n}, the increase in $\eta$ implies the improvement in the accuracy of the three-term PCA. Thus, unlike the GBT2, the performance of the three-term PCA is improved because of the increase in the dimension of $\h$-injection.
\subsection{Improvement in the associated numerical load}\label{cm9vn}
In a number of applied problems, dimensions $m,n$ of associated covariance matrices are large. For instance, in the DNA array analysis \cite{Alter11828,ziyang18}, $m={\it O}(10^4)$. In this case, the associated numerical load needed to compute the covariance matrices increases significantly. Therefore, a method which requires a lower associated numerical load is, of course, preferable.
Here, we wish to illustrate the computational advantage of the three-term PCA represented by (\ref{sjk92}), (\ref{smi91}) and (\ref{294b}) compared to that of the GBT2 in (\ref{bnm11}) and the GKLT \cite{924074}. In both methods, a pseudo-inverse matrix is evaluated by the SVD.
The computational load of the three-term PCA (abbreviated as $C_{PCA3}$) consists of the matrix products in (\ref{294b}), and computation of SVDs for $E_{yy}^\dag\in\rt^{n\times n}$, $E_{ss}^\dag\in\rt^{\ell\times \ell}$ and $G_z\in\rt^{m\times m}$. Recall that $E_{zz}^\dag = \left[ \begin{array}{cc}
E_{yy}^\dag & \oo\\
\oo & E_{ss}^\dag
\end{array} \right]$.
The GBT2 computational load ($C_{GBT2}$) consists of computation of the matrix products in (\ref{ac11}) and the SVD for $E_{qq}^\dag\in\rt^{2n\times 2n}$. The computational load of the GKLT ($C_{GKLT}$) contains computation of the matrix products given in \cite{924074}, and computation of the SVDs for $2n\times 2n$ and $m\times 2n$ matrices.
Importantly, the dimensions of matrices in the three-term PCA are less than those in the GBT2 and GKLT. For example, the dimension of $E_{yy}$ is half that of $E_{qq}$. This circumstance implies a decrease in the computational load of the three-term PCA compared to that of the GBT2 and GKLT. Indeed, the product of $m\times n$ and $n\times p$ matrices requires approximately $2mnp$ flops, for large $n$. The Golub-Reinsch SVD method (as given in \cite{golub1996}, p. 254) requires $4mn^2 + 8n^3$ flops to compute the pseudo-inverse of an $m\times n$ matrix\footnote{The Golub-Reinsch SVD method appears to be more effective than other related methods considered in \cite{golub1996}.}. As a result, for $m=n=\ell$,
\begin{eqnarray}\label{29cn3}
C_{PCA3} = 52m^3 + 2m^2(k+1)
\end{eqnarray}
while
\begin{eqnarray}\label{88cn3}
C_{GBT2} = 140m^3 + 2m^2(k+2) \mbox{ and } C_{GKLT} = 240m^3 + 4m^2(k+1)+mk .
\end{eqnarray}
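Indeed, the $m^2$- and lower-order terms in (\ref{29cn3}) and (\ref{88cn3}) are negligible for large $m$ (with $k$ fixed), so that
$$
\frac{C_{PCA3}}{C_{GBT2}} \rightarrow \frac{52}{140}\approx 0.37 \qa \frac{C_{PCA3}}{C_{GKLT}} \rightarrow \frac{52}{240}\approx 0.22, \quad \mbox{as } m\rightarrow \infty.
$$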
That is, for large $m$ and $n$, $C_{PCA3}$ is about $37\%$ of $C_{GBT2}$ and $22\%$ of $C_{GKLT}$.
Thus, the three-term PCA may provide a better associated accuracy than that of the GBT2 and GKLT (see Section \ref{1n8an}, Example \ref{m2b9} and Section \ref{58b29}) under the computational load which, for large $m, n$, is about one third of $C_{GBT2}$ and a quarter of $C_{GKLT}$.
This observation is illustrated by the following numerical example.
\begin{example}\label{nm498}
Let ${\bf y}=A{\bf x}+\bxi$ where ${\bf x}\in L^2(\Omega,\mathbb{R}^{m})$ is a uniformly distributed random vector, $\bxi\in L^2(\Omega,\mathbb{R}^{m})$ is a Gaussian random vector with unit variance and $A\in\mathbb{R}^{m\times m}$ is a matrix with normally distributed random entries. We choose ${\bf w}\in L^2(\Omega,\mathbb{R}^{\ell})$ as a uniformly distributed random vector.
Covariance matrices $E_{yy}$, $E_{ww}$ and $E_{\xi\xi}$ are represented by $E_{yy}=\frac{1}{p} YY^T,\quad E_{ww}=\frac{1}{p}WW^T, \quad E_{\xi\xi}=\sigma^2 I,$ where $Y\in\mathbb{R}^{m\times p}$ and $W\in\mathbb{R}^{m\times p}$ are the corresponding sample matrices, $p$ is the number of samples, and $\sigma=1$. Suppose only samples of $\y$ and $\w$ are available, and for simplicity let us assume that matrix $A$ is invertible. Then, in particular,
$
E_{xx} = A^{-1}(E_{yy} - E_{\xi\xi})A^{-T}\qa E_{xy}=E_{xx} A^T.
$
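A hedged numerical sketch of this construction (the sampling scheme and the sizes below are our own illustration):
\begin{verbatim}
import numpy as np

m, p, sigma = 100, 300, 1.0              # p samples, noise variance 1
A = np.random.randn(m, m)                # assumed invertible
X = np.random.rand(m, p)                 # samples of x (uniform)
Y = A @ X + sigma*np.random.randn(m, p)  # samples of y = Ax + xi

Eyy = Y @ Y.T / p                        # sample covariance of y
Exx = np.linalg.solve(A, Eyy - sigma**2*np.eye(m)) @ np.linalg.inv(A).T
Exy = Exx @ A.T                          # E_xy = E_xx A^T
\end{verbatim}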
\begin{figure}[h!]
\centering
\includegraphics[scale=0.60]{TIME_PAPER_3.eps}\\
\caption{Example \ref{nm498}: Time versus matrix dimension $m$ used to execute the three-term PCA (blue line), GBT2 (red line) and GKLT (green line). }
\label{fig2}
\end{figure}
In Fig. \ref{fig2}, for a randomly chosen $A$, with $m=\ell$ and $p=3m$, we show typical execution times (in seconds) of the three-term PCA in form (\ref{sjk92}) and (\ref{294b}), the GBT2 (\ref{bnm11})-(\ref{byfx}) and the GKLT \cite{924074} versus dimension $m$. The diagrams in Fig. \ref{fig2} confirm the observation made before this example, i.e.,
for large $m$, $C_{PCA3}$ is significantly less than $C_{GBT2}$ and $C_{GKLT}$ (see (\ref{29cn3}) and (\ref{88cn3}), respectively).
\end{example}
\section{Final discussion}\label{nbm198}
We have developed the extension of PCA called the three-term PCA. The three-term PCA is presented in two forms considered in Sections \ref{w0an}, \ref{det1}, and \ref{xvb8an}. The associated advantages have been detailed in Sections \ref{x78an} and \ref{x78kl}.
We would like to highlight the following observation. The proposed three-term PCA is applied to observed data represented by random vector $\y$. Here, $\y$ is a noisy version of original data $\x$. It is shown that in this case, the error associated with the three-term PCA, $\varepsilon_{m,n,\ell+\eta} (S_0, S_1) $, is less than that of the known generalizations of PCA, i.e. the GBT1, GKLT and GBT2 (see Section \ref{1n8an}). At the same time, in the ideal case of the observed data {\em without any noise}, i.e. when $\y=\x$, the three-term PCA coincides with the GBT1 (with $\y=\x$ in the GBT1 as well) which is a generalization of PCA for the case of singular data. Its associated error is minimal among all transforms of the same rank and cannot be improved.
Let us denote this error by $\varepsilon_{(y=x)} $.
Example \ref{m2b9} above has revealed an important and quite unanticipated feature of the proposed technique. As the dimension $\eta$ of the $\h$-injection increases, the error $\varepsilon_{m,n,\ell+\eta} (S_0, S_1)$ decreases towards $\varepsilon_{(y=x)}$. This suggests the conjecture that the same holds in general, i.e.
$
\lim_{\eta \rightarrow \infty} \varepsilon_{m,n,\ell+\eta} (S_0, S_1) = \varepsilon_{(y=x)}.
$
We intend to develop a justification of this observation.
\bibliographystyle{plain}
\section*{Results}
\subsection*{Theory of 3D space-time wave packets}
A useful conceptual tool for understanding the characteristics of ST wave packets and the requirements for their synthesis is to visualize their spectral support domain on the surface of the light-cone. The light-cone is the geometric representation of the free-space dispersion relationship $k_{x}^{2}+k_{y}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$, where $\omega$ is the temporal frequency, $c$ is the speed of light in vacuum, $(k_{x},k_{y},k_{z})$ are the components of the wave vector in the Cartesian coordinate system $(x,y,z)$, $x$ and $y$ are the transverse coordinates, and $z$ is the axial coordinate. Although this relationship corresponds to the surface of a four-dimensional hypercone, a useful representation follows from initially restricting our attention to azimuthally symmetric fields in which $k_{x}$ and $k_{y}$ are combined into a radial wave number $k_{r}\!=\!\sqrt{k_{x}^{2}+k_{y}^{2}}$, so that the light-cone can then be visualized in $(k_{r},k_{z},\tfrac{\omega}{c})$-space (Fig.~\ref{Fig:Concept}). The spectral support domain for 3D ST wave packets is restricted to the conic section at the intersection of the light-cone with a spectral plane that is parallel to the $k_{r}$-axis and makes an angle $\theta$ (the spectral tilt angle) with the $k_{z}$-axis, which is given by the equation $\Omega\!=\!(k_{z}-k_{\mathrm{o}})c\tan{\theta}$; here $\Omega\!=\!\omega-\omega_{\mathrm{o}}$, $\omega_{\mathrm{o}}$ is a carrier frequency, and $k_{\mathrm{o}}\!=\!\omega_{\mathrm{o}}/c$. It can be readily shown that such a construction in the narrowband paraxial regime results in a propagation-invariant 3D ST wave packet $E(r,z;t)\!=\!e^{i(k_{\mathrm{o}}z-\omega_{\mathrm{o}}t)}\psi(r,z;t)$, where the slowly varying envelope $\psi(r,z;t)$ travels rigidly at a group velocity $\widetilde{v}\!=\!c\tan{\theta}$, $\psi(r,z;t)\!=\!\psi(r,0;t-z/\widetilde{v})$, where $\psi(r,0;t)\!=\!\int\!dk_{r}\,\,k_{r}\widetilde{\psi}(k_{r})J_{0}(k_{r}r)e^{-i\Omega t}$, and $\widetilde{\psi}(k_{r})$ is the spectrum. Here $k_{r}$ and $\Omega$ are no longer independent variables, but are instead related via the particular spectral trajectory on the light-cone [Supplementary Material]. Although this spectral trajectory is a conic section whose kind is determined by the spectral tilt angle $\theta$, it can nevertheless be approximated in the narrowband paraxial regime by a parabola in the vicinity of $k_{r}\!=\!0$:
\begin{equation}\label{Eq:Parabola}
\frac{\Omega}{\omega_{\mathrm{o}}}=\frac{k_{r}^{2}}{2k_{\mathrm{o}}^{2}(1-\widetilde{n})},
\end{equation}
where $\widetilde{n}\!=\!\cot{\theta}$ is the wave-packet group index in free space. By setting $k_{r}\!=\!k\sin{\varphi(\omega)}$, where $\varphi(\omega)$ is the propagation angle for $\omega$ as shown in Fig.~\ref{Fig:Concept}(a), we have $\varphi(\omega)\!\approx\!\eta\sqrt{\tfrac{\Omega}{\omega_{\mathrm{o}}}}$, which is \textit{not} differentiable at $\Omega\!=\!0$ \cite{Hall21OE1,Hall21OE2}; here $\widetilde{n}\!=\!1-\tfrac{\sigma}{2}\eta^{2}$, $\sigma\!=\!1$ in the superluminal regime, and $\sigma\!=\!-1$ in the subluminal regime. In other words, non-differentiable angular dispersion (AD) is required to produce a propagation-invariant ST wave packet. This result is similar to that for ST light-sheets \cite{Kondakci17NP} except that the transverse coordinate $x$ is now replaced with the radial coordinate $r$.
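The following short sketch (ours; all numerical values are illustrative only) evaluates Eq.~\ref{Eq:Parabola} and the resulting propagation angle, making the square-root (non-differentiable) behavior at $\Omega\!=\!0$ explicit:
\begin{verbatim}
import numpy as np

c = 3e8
lam_o = 800e-9
omega_o = 2*np.pi*c/lam_o
k_o = omega_o/c
theta = np.deg2rad(60)            # superluminal: 45 < theta < 90 deg
n_tilde = 1/np.tan(theta)         # group index, v = c*tan(theta)
eta = np.sqrt(2*(1 - n_tilde))    # sigma = +1 (superluminal)

Omega = np.linspace(0, 1e-4, 200)*omega_o
k_r = k_o*np.sqrt(2*(1 - n_tilde)*Omega/omega_o)  # from Eq. (1)
phi = eta*np.sqrt(Omega/omega_o)  # propagation angle (rad);
                                  # d(phi)/d(Omega) diverges at Omega = 0
print(np.allclose(phi**2, eta**2*Omega/omega_o))  # square-root law: True
\end{verbatim}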
The representation in Fig.~\ref{Fig:Concept} is particularly useful in identifying a path towards synthesizing 3D ST wave packets. When $45^{\circ}\!<\!\theta\!<\!90^{\circ}$, the ST wave packet is superluminal $\widetilde{v}\!>\!c$, $\Omega$ is positive, and $\omega_{\mathrm{o}}$ is the minimum allowable frequency in the spectrum. When viewed in $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the wavelengths are arranged in concentric circles, with long wavelengths (low frequencies) at the center, and shorter wavelengths (higher frequencies) extending outward. On the other hand, when $0^{\circ}\!<\!\theta\!<\!45^{\circ}$, the ST wave packet is subluminal $\widetilde{v}\!<\!c$, $\Omega$ is negative, and $\omega_{\mathrm{o}}$ is the maximum allowable frequency in the spectrum. The wavelengths are again arranged in concentric circles in $(k_{x},k_{y},\tfrac{\omega}{c})$-space -- but in the opposite order: short wavelengths are close to the center and longer wavelengths extend outward. For both subluminal and superluminal 3D ST wave packets, each $\omega$ is associated with a single radial spatial frequency $k_{r}(\omega)$, and is related to it via the relationship in Eq.~\ref{Eq:Parabola}. This representation indicates the need for arranging the wavelengths in concentric circles with square-root radial chirp, and then converting the spatial spectrum into physical space via a spherical lens. Moreover, adding a spectral phase factor $e^{i\ell\chi}$, where $\ell$ is an integer and $\chi$ is the azimuthal angle in spectral space, produces OAM in physical space [Supplementary Material].
Closed-form expressions can be obtained for 3D ST wave packets by applying Lorentz boosts to an appropriate initial field \cite{Belanger86JOSAA,Saari04PRE,Longhi04OE,Kondakci18PRL}. For example, starting with a monochromatic beam $E_{\mathrm{o}}(r,z;t)$, a subluminal 3D ST wave packet at a group velocity $\widetilde{v}$ is obtained by the Lorentz boost $E(r,z;t)\!=\!E_{\mathrm{o}}(r,\tfrac{z-\widetilde{v}t}{\sqrt{1-\beta^{2}}};\tfrac{t-\widetilde{v}z/c^{2}}{\sqrt{1-\beta^{2}}})$, where $\beta\!=\!\tfrac{\widetilde{v}}{c}$ is the normalized boost velocity. On the other hand, closed-form expressions for superluminal 3D ST wave packets can be obtained by applying a Lorentz boost to the `needle beam' in \cite{Parker16OE}. The time-averaged intensity is $I(r,\varphi,z)\!\propto\!\int\!dk_{r}\,\,k_{r}\sqrt{k_{\mathrm{o}}^{2}+k_{r}^{2}}|\widetilde{\psi}(k_{r})|^{2}J_{\ell}^{2}(k_{r}r)$, which is independent of $\varphi$ even if the field is endowed with OAM. In the case of 2D ST light-sheets, the time-averaged intensity separates into a sum of a constant background pedestal and a spatially localized feature at the center. A similar decomposition is not possible for 3D ST wave packets. However, using the asymptotic form for Bessel functions that is valid far from $r\!=\!0$, we have:
\begin{equation}
I(r)\propto\frac{1}{\pi r}\int\!dk_{r} \sqrt{k_{\mathrm{o}}^{2}+k_{r}^{2}}|\widetilde{\psi}(k_{r})|^{2}+\frac{(-1)^{\ell}}{\pi r}\int\!dk_{r}\sqrt{k_{\mathrm{o}}^{2}+k_{r}^{2}}|\widetilde{\psi}(k_{r})|^{2}\sin{(2k_{r}r)},
\end{equation}
where the first term is a pedestal decaying at a rate of $\tfrac{1}{r}$, and the second term tends to be localized closer to the beam center. In the vicinity of $r\!=\!0$, the two terms merge and cannot be separated. The spatio-temporal intensity profile of such a 3D ST wave packet is depicted in Fig.~\ref{Fig:Concept}(c): two conic field structures emanate from the wave-packet center, such that the profile is X-shaped in any meridional plane containing the optical axis, and the intensity profile is circularly symmetric in any transverse plane.
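As a numerical illustration of the time-averaged intensity (our own sketch; the Gaussian radial spectrum and the parameter values are assumptions made for illustration only):
\begin{verbatim}
import numpy as np
from scipy.special import jv

k_o = 2*np.pi/800e-9                  # rad/m
k_c, dk = 0.06e6, 0.03e6              # center/width of the spectrum (assumed)
k_r = np.linspace(1.0, 0.2e6, 2000)
spec = np.exp(-((k_r - k_c)/dk)**2)   # |psi(k_r)|^2, Gaussian (assumed)

ell = 0
r = np.linspace(1e-7, 200e-6, 400)
KR, R = np.meshgrid(k_r, r, indexing='ij')
integrand = KR*np.sqrt(k_o**2 + KR**2)*spec[:, None]*jv(ell, KR*R)**2
I = np.trapz(integrand, k_r, axis=0)  # time-averaged intensity vs r
\end{verbatim}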
\subsection*{Synthesizing ST wave packets localized in all dimensions}
Central to converting a generic pulsed beam into a ST wave packet localized in all dimensions is the construction of an optical scheme that can associate each wavelength $\lambda$ with a particular azimuthally symmetric spatial frequency $k_{r}(\lambda)$ and arrange the wavelengths in concentric circles with the order prescribed in Eq.~\ref{Eq:Parabola} [Fig.~\ref{Fig:SynthesisMethod}(a)]. This system realizes two functionalities in succession: producing a particular wavelength sequence, and changing the coordinate system. These are implemented via the three-stage strategy outlined in Fig.~\ref{Fig:SynthesisMethod}(b). In the first stage, the spectrum of a plane-wave pulse is resolved along one spatial dimension. At this point, the field is endowed with linear spatial chirp and the wavelengths are arranged in a fixed sequence. The second stage rearranges the wavelengths in a new prescribed sequence. This spectral transformation is \textit{tunable}; that is, a wide range of spectral structures can be obtained from a fixed input. In the third stage, a 2D conformal transformation converts the coordinate system to map the rectilinear chirp into a radial chirp; i.e., lines corresponding to different wavelengths at the input are converted into circles at the output \cite{Bryngdahl74JOSA,Hossack87JOMO}. Because the spectral transformation in the second stage is tunable, the 2D coordinate transformation can be held fixed. In this way, we obtain arbitrary (including non-differentiable) AD in two dimensions.
The layout of the experimental setup is depicted in Fig.~\ref{Fig:Setup}. We start off in the first stage with pulses from a Ti:sapphire laser (pulse width $\approx\!100$~fs and bandwidth $\approx\!10$~nm at a central wavelength of $\approx\!800$~nm). Because a flat phase front is critical for successfully implementing the subsequent transformations, the use of conventional surface gratings is precluded, and we utilize instead a double-pass configuration through a volume chirped Bragg grating (CBG). The CBG resolves the spectrum horizontally along the $x$-axis and introduces linear spatial chirp so that $x_{1}(\lambda)\!=\!\alpha(\lambda-\lambda_{\mathrm{o}})$, where $\alpha$ is the linear spatial chirp rate \cite{Glebov14SPIE}, $\lambda_{\mathrm{o}}$ is a fixed wavelength, and the bandwidth utilized is $\Delta\lambda\!\approx\!0.3$~nm. It is crucial that this task be achieved with high spectral resolution. Previous studies have shown that the critical parameter determining the propagation distance of ST wave packets is the `spectral uncertainty' $\delta\lambda$: the finite uncertainty in the association between spatial and temporal frequencies \cite{Yessenov19OE}. Our measurements indicate that the optimal spectral uncertainty after the CBG arrangement is $\delta\lambda\!\sim\!35$~pm, which is achieved for a 2-mm input beam width [Supplementary Material Fig.~S11].
The second stage of the synthesis strategy is a 1D spatial transformation along the $x$-axis to rearrange the wavelength sequence, thereby implementing a \textit{spectral} transformation. Specifically, each wavelength $\lambda$ is transposed from $x_{1}(\lambda)$ at the input via a logarithmic mapping to $x_{2}(\lambda)\!=\!A\ln{(\tfrac{x_{1}(\lambda)}{B})}$ at the output. This transformation is realized via two phase patterns implemented by a pair of spatial light modulators (SLMs) to enable tuning the transformation parameters $A$ and $B$. This particular `reshuffling' of the wavelength sequence pre-compensates the exponentiation included in the subsequent coordinate transformation. By tuning the value of $B$, we can vary the group velocity $\widetilde{v}$ over the subluminal and superluminal regimes [Supplementary Table~S1].
In the third stage we perform a log-polar-to-Cartesian coordinate transformation: $(x_{2},y_{2})\rightarrow(r,\varphi)$ via the 2D mapping: $r(\lambda)\!=\!C\exp{(-\tfrac{x_{2}(\lambda)}{D})}$ and $\varphi=\tfrac{y_{2}}{D}$ \cite{Bryngdahl74JOSA,Hossack87JOMO}. The exponentiation here is pre-compensated by the logarithmic mapping in the 1D spectral transformation, and the wavelength at position $x_{2}(\lambda)$ at the input is converted into a circle of radius $r(\lambda)\!\propto\!(\lambda-\lambda_{\mathrm{o}})^{A/D}$ at the output. This 2D coordinate transformation was developed decades ago \cite{Bryngdahl74JOSA,Hossack87JOMO}, and was recently revived as a methodology for sorting OAM modes \cite{Berkhout10PRL,Lavery12OE}. We operate the system in reverse (lines-to-circles, rather than the more typical circles-to-lines \cite{Berkhout10PRL}), and we make use of a polychromatic field (rather than a monochromatic field). The exponent of the chirp rate depends only on the ratio $\tfrac{A}{D}$, so that setting $D\!=\!2A$ yields $r(\lambda)\!\propto\!\sqrt{\lambda-\lambda_{\mathrm{o}}}$ in accordance with Eq.~\ref{Eq:Parabola}. The wavelengths are arranged with square-root radial chirp, thereby realizing the required non-differentiable AD.
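As a cross-check, composing the two maps as stated gives the power law $r\!=\!C\,(x_{1}/B)^{-A/D}$, so the sign of $A$ (set by the orientation of the SLM phase) determines the sign of the exponent, and $|A/D|\!=\!1/2$ yields the square-root chirp. A minimal sketch (ours, in arbitrary units, with the sign chosen to make the exponent $+1/2$):
\begin{verbatim}
import numpy as np

alpha, A, B, C, D = 1.0, -0.5, 10.0, 4.77, 1.0  # |A| = D/2 (sign: see text)

lam = np.linspace(1e-3, 0.3, 100)   # lam - lam_o (nm), within the band
x1 = alpha*lam                      # stage 1: linear spatial chirp
x2 = A*np.log(x1/B)                 # stage 2: 1D spectral transformation
r = C*np.exp(-x2/D)                 # stage 3: log-polar map
print(np.allclose(r, C*np.sqrt(x1/B)))  # True: square-root radial chirp
\end{verbatim}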
The 2D coordinate transformation is performed with two different embodiments: using a pair of diamond-machined refractive phase plates \cite{Lavery12OE}, and using a pair of diffractive phase plates \cite{Li19OE}, which yielded similar performance. Because both of these realizations are stationary, the values of $C$ and $D$ are fixed. The data reported in Fig.~\ref{Fig:SpectralMeasurements} through Fig.~\ref{Fig:2DFieldMeasurements} made use of the refractive phase plates with $C\!=\!4.77$~mm and $D\!=\!1$~mm. Moreover, fixing the value of $D$ entails in turn fixing the value of $A$ to maintain $A\!=\!D/2$. The group velocity $\widetilde{v}\!=\!c/\widetilde{n}$ is tuned over the subluminal and superluminal regimes by varying $B$, whereby $\widetilde{n}\!\approx\!1-\tfrac{4.5}{B}$, with $B$ in units of mm [Supplementary Material].
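A quick consistency check (ours) of this tuning rule against the group velocities reported below:
\begin{verbatim}
for B in (10.0, -10.0):   # B in mm
    n_tilde = 1 - 4.5/B
    print(B, 1/n_tilde)   # B = 10 -> v/c ~ 1.8; B = -10 -> v/c ~ 0.69
\end{verbatim}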
This experimental strategy provides two pathways for introducing OAM into the 3D ST wave packet. One may utilize a conventional spiral phase plate to imprint an OAM order $\ell$ after the 2D coordinate transformation and before the final Fourier-transforming lens. Another approach, which we implemented here, is to add at the output of the 1D spectral transformation a \textit{linear} phase distribution along $y$ extending from 0 to $2\pi\ell$, which is subsequently wrapped around the azimuthal direction after traversing the 2D coordinate transformation, thereby realizing OAM of order $\ell$ \cite{Li19OE}.
For the sake of benchmarking, we also synthesized pulsed Bessel beams with separable spatio-temporal spectrum by circumventing the spectral analysis and 1D spectral transformation, and sending the input laser pulses directly to the 2D coordinate transformation. To match the temporal bandwidth of the pulsed Bessel beams to that of the 3D ST wave packets, we spectrally filter $\Delta\lambda\!=\!0.3$~nm from the input spectrum via a planar Fabry-P{\'e}rot cavity.
\subsection*{Characterizing 3D ST wave packets}
To verify the structure of the synthesized 3D ST wave packet, we characterize the field in four distinct domains: (1) the spatio-temporal spectrum to verify the square-root radial chirp [Fig.~\ref{Fig:SpectralMeasurements}]; (2) the time-averaged intensity to confirm diffraction-free propagation along $z$ [Fig.~\ref{Fig:Time-averagedIntensity}]; (3) time-resolved intensity measurements to reconstruct the wave-packet spatio-temporal profile and estimate the group velocity [Fig.~\ref{Fig:Time-ResolvedIntensity}]; and (4) complex-field measurements to resolve the spiral phase of the ST-OAM wave packets [Fig.~\ref{Fig:2DFieldMeasurements}].
\noindent\textbf{Spectral-domain characterization.} We measure the spatio-temporal spectrum by scanning a single-mode fiber connected to an optical spectrum analyzer across the spectrally resolved field profile. We scan the fiber along $x_{1}$ after the spectral analysis stage and verify the linear spatial chirp [Supplementary Figure~S10], and then scan the fiber along $x_{2}$ after the 1D spectral transformation to confirm the implemented change in spatial chirp. The measurement is repeated for superluminal ($B=10$~mm, $\widetilde{v}\!\approx\!1.8c$) and subluminal ($B=-10$~mm, $\widetilde{v}\!\approx\!0.7c$) wave packets, both with temporal bandwidth $\Delta\lambda\approx0.3$~nm, pulse width of $\sim6$~ps, and $\lambda_{\mathrm{o}}\!=\!796.1$~nm. After the 2D coordinate transformation, the spectrum is arranged radially along an annulus rather than a rectilinear domain, as shown in Fig.~\ref{Fig:SpectralMeasurements}(a). By calibrating the conversion $x_{2}\!\rightarrow\!r$ engendered by the 2D coordinate transformation, and combining with the measured spatial chirp $x_{2}(\lambda)$ at its input, we obtain the radial chirp $k_{r}(\lambda)$ as shown in Fig.~\ref{Fig:SpectralMeasurements}(b) [Supplementary Material Figure~S15]. We find at each radial position a narrow spectrum ($\delta\lambda\!\approx\!50$~pm) whose central wavelength $\lambda_{\mathrm{c}}$ shifts quadratically with $r$, but with differently signed curvature for the superluminal and subluminal cases [Fig.~\ref{Fig:SpectralMeasurements}(c)].
\noindent\textbf{Propagation-invariance of the intensity distribution.} The time-averaged intensity profile $I(x,y,z)\propto\int\!dt|E(x,y,z;t)|^{2}$ is captured by scanning a CCD camera along the propagation axis $z$ after the Fourier transforming lens (Fig.~\ref{Fig:Setup}). For each wave packet, we plot in Fig.~\ref{Fig:Time-averagedIntensity} the intensity distribution (at a fixed axial plane $z\!=\!30$~mm) in transverse and meridional planes. As a point of reference, we start with a pulsed Bessel beam whose spatio-temporal spectrum is separable, where the spatial bandwidth is $\Delta k_{r}\!=\!0.02$~rad/$\mu$m and is centered at $k_{r}\!\approx\!0.06$~rad/$\mu$m [Fig.~\ref{Fig:Time-averagedIntensity}(a)]. Here, the full temporal bandwidth $\Delta\lambda$ is associated with each spatial frequency $k_{r}$. The finite spatial bandwidth $\Delta k_{r}$ renders the propagation distance finite \cite{Durnin87PRL}, and we observe a Bessel beam comprising a main lobe of width $\Delta r\approx30~\mu$m (FWHM) accompanied by several side lobes, which propagates for a distance $L_{\mathrm{max}}\approx50$~mm. For comparison, the Rayleigh range of a Gaussian beam with a similar size and central wavelength is $z_{\mathrm{R}}\!\approx\!1$~mm. By further increasing $\Delta k_{r}$ to 0.07~rad/$\mu$m while remaining centered at $k_{r}\!\approx\!0.06$~rad/$\mu$m as shown in Fig.~\ref{Fig:Time-averagedIntensity}(b), the axial propagation distance is reduced proportionately to $L_{\mathrm{max}}\approx15$~mm, and the side lobes are diminished.
Now, rather than the separable spatio-temporal spectra for pulsed Bessel beams [Fig.~\ref{Fig:Time-averagedIntensity}(a,b)], we utilize the structured spatio-temporal spectra associated with 3D ST wave packets in which each $k_{r}$ is associated with a single $\lambda$ [Fig.~\ref{Fig:SpectralMeasurements}], whose spatial bandwidths are all $\Delta k_{r}\!=\!0.07$~rad/$\mu$m centered at $k_{r}\!\approx\!0.06$~rad/$\mu$m, similarly to the pulsed Bessel beam in Fig.~\ref{Fig:Time-averagedIntensity}(b). Despite the large spatial bandwidth, the one-to-one correspondence between $k_{r}$ and $\lambda$ curtails diffraction, leading to an increase in the propagation distance [Fig.~\ref{Fig:Time-averagedIntensity}(c-e)]. The subluminal 3D ST wave packet ($\widetilde{v}\!=\!0.7c$) in Fig.~\ref{Fig:Time-averagedIntensity}(c) propagates for $L_{\mathrm{max}}\approx60$~mm, which is a $4\times$ improvement compared with the separable Bessel beam and a $60\times$ improvement compared with a Gaussian beam of the same spatial bandwidth. We observe a similar behavior for a superluminal 3D ST wave packet ($\widetilde{v}\!=\!1.8c$) in Fig.~\ref{Fig:Time-averagedIntensity}(d), and a superluminal ST-OAM wave packet ($\widetilde{v}\!=\!1.3c$) with $\ell=1$ in Fig.~\ref{Fig:Time-averagedIntensity}(e).
\noindent\textbf{Reconstructing the spatio-temporal profile and measuring the group velocity.} The spatio-temporal intensity profile $I(x,y,z;t)\!=\!|E(x,y,z;t)|^{2}$ of the 3D ST wave packet is reconstructed by placing the synthesizer (Fig.~\ref{Fig:Setup}) in one arm of a Mach-Zehnder interferometer, while the initial $100$-fs plane-wave pulses from the laser traverse an optical delay line $\tau$ in the reference arm [Fig.~\ref{Fig:Time-ResolvedIntensity}(a)]. By scanning $\tau$ we reconstruct the spatio-temporal intensity profile in a meridional plane from the visibility of spatially-resolved interference fringes recorded by a CCD camera when the 3D ST wave packet and the reference pulse overlap in space and time. The reconstructed time-resolved intensity profile $I(0,y,z;t)$ of the 3D ST wave packets corresponding to those in Fig.~\ref{Fig:Time-averagedIntensity}(c-e) are plotted in Fig.~\ref{Fig:Time-ResolvedIntensity}(b-d) at multiple axial planes, which reveal clearly the expected X-shaped profile that remains invariant over the propagation distance $L_{\mathrm{max}}$. In all cases, the on-axis pulse width, taken as the FWHM of $I(0,0,0;t)$, is $\Delta t\!\approx\!6$~ps. The spatio-temporal intensity profile of the superluminal ST-OAM wave packet with $\ell=1$ in Fig.~\ref{Fig:Time-ResolvedIntensity}(d) reveals a similar X-shaped profile, but with a central null instead of a peak, as expected from the helical phase structure associated with the OAM mode.
A subtle distinction emerges between the subluminal and superluminal wave packets regarding the axial evolution of their spatio-temporal profile. It can be shown that in the presence of finite spectral uncertainty $\delta\lambda$, the realized ST wave packet can be separated into the product of an ideal ST wave packet traveling indefinitely at $\widetilde{v}$ and a long `pilot envelope' traveling at $c$. The finite propagation distance $L_{\mathrm{max}}$ is then a consequence of temporal walk-off between the ST wave packet and the pilot envelope \cite{Yessenov19OE}. For subluminal ST wave packets, this results initially in a `clipping' of the leading edge of the wave packet [Fig.~\ref{Fig:Time-ResolvedIntensity}(b) at $z\!=\!20$~mm], and ultimately a clipping of the trailing edge of the ST wave packet as the faster pilot envelope catches up with it [Fig.~\ref{Fig:Time-ResolvedIntensity}(b) at $z\!=\!40$~mm]. The opposite behavior occurs for the superluminal ST wave packet in Fig.~\ref{Fig:Time-ResolvedIntensity}(c,d).
This experimental methodology also enables us to estimate the group velocity $\widetilde{v}$ \cite{Kondakci19NC,Bhaduri20NatPhot}. After displacing the CCD camera until the interference fringes are lost due to the mismatch between $\widetilde{v}\!=\!c\tan{\theta}$ for the ST wave packets and the reference pulses traveling at $\widetilde{v}\!=\!c$, we restore the interference by inserting a delay $\Delta t$ [Fig.~\ref{Fig:Time-ResolvedIntensity}(e)], which allows us to estimate $\widetilde{v}$ for the 3D ST wave packet. By tuning $B$, we record a broad span of group velocities in the range from $\widetilde{v}\approx0.7c$ to $\widetilde{v}\approx1.8c$ in free space [Fig.~\ref{Fig:Time-ResolvedIntensity}(f)].
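Our reading of this estimate, as a hedged sketch (the numbers below are illustrative, not measured values): displacing the camera by $\Delta z$ introduces a relative group delay $\Delta t\!=\!\Delta z/\widetilde{v}-\Delta z/c$ between the wave packet and the luminal reference, from which $\widetilde{v}$ follows:
\begin{verbatim}
def v_estimate(dz, dt, c=3e8):
    # dt is the delay that restores the fringes after displacing by dz
    return dz/(dt + dz/c)

print(v_estimate(0.03, -0.044e-9)/3e8)  # ~ 1.8 (superluminal example)
\end{verbatim}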
\noindent\textbf{Field amplitude and phase measurements.} Lastly, we modify the measurement system in Fig.~\ref{Fig:Time-ResolvedIntensity}(a) by adding a small relative angle between the propagation directions of the 3D ST wave packets and the reference pulses, and make use of off-axis digital holography \cite{Sanchez-Ortiga14AO} to reconstruct the amplitude $|\psi(x,y,z;\tau)|$ and phase $\phi(x,y,z;\tau)$ of their complex field envelope $\psi(x,y,z;t)\!=\!|\psi(x,y,z;t)|e^{i\phi(x,y,z;t)}$ [Supplementary Material]. We reconstruct the complex field at a fixed axial plane $z\!=\!30$~mm for the time delays: $\tau=-5$, 0, and 5~ps [Fig.~\ref{Fig:2DFieldMeasurements}]. First, we plot the results for $|\psi(x,y,z;\tau)|$ and phase $\phi(x,y,z;\tau)$ for a superluminal 3D ST wave packet ($\widetilde{v}=1.1c$) with no OAM ($\ell\!=\!0$). At the pulse center $\tau=0$, the field is localized on the optical axis, whereas at $\tau=\pm5$~ps the field spreads away from the center [Fig.~\ref{Fig:2DFieldMeasurements}(a)]. For $\tau\!\neq\!0$ we find a spherical transverse phase distribution that is almost flat at $\tau\!=\!0$, similar to what one finds during the axial evolution of a Gaussian beam in space through its focal plane \cite{Porras17OL}.
After adding the OAM mode $\ell\!=\!1$ to the field structure, a similar overall behavior is observed for the superluminal ST-OAM wave packet except for two significant features. First, a dip is observed on-axis in Fig.~\ref{Fig:2DFieldMeasurements}(b), in lieu of the central peak in Fig.~\ref{Fig:2DFieldMeasurements}(a), as a result of the phase singularity associated with the OAM mode. Second, the phase at the wave-packet center $\phi(x,y,z;0)$ at $z\!=\!30$~mm is almost flat, while a helical phase front corresponding to OAM of order $\ell\!=\!1$ emerges as we move away from $\tau\!=\!0$. Finally, we plot in Fig.~\ref{Fig:2DFieldMeasurements}(c,d) iso-amplitude surface contours ($0.6\times$ and $0.15\times$ the maximum amplitude $I_{\mathrm{max}}$) for the two 3D ST wave packets in Fig.~\ref{Fig:2DFieldMeasurements}(a,b). We find a closed surface in Fig.~\ref{Fig:2DFieldMeasurements}(c) when $\ell\!=\!0$, and a doughnut structure in Fig.~\ref{Fig:2DFieldMeasurements}(d) when $\ell\!=\!1$ for the first contour $I\!=\!0.6I_{\mathrm{max}}$ that captures the structure of the wave-packet center. The second contour for $I\!=\!0.15I_{\mathrm{max}}$ captures the conic structure emanating from the wave-packet center that is responsible for the characteristic X-shaped spatio-temporal profile of all propagation-invariant wave packets in the paraxial regime.
\section*{Discussion}
We have demonstrated a general procedure for spatio-temporal spectral modulation of pulsed optical fields that is capable of synthesizing 3D ST wave packets localized in all dimensions. At the heart of our experimental methodology lies the ability to sculpt the angular dispersion of a generic optical pulse in two transverse dimensions. Crucially, this approach produces the non-differentiable angular dispersion necessary for propagation invariance \cite{Yessenov21ACSPhot}. Because such a capability has proven elusive to date, AD-free X-waves have been the sole class of 3D propagation-invariant wave packets conclusively produced in free space. Unfortunately, X-waves can exhibit only minuscule changes in the group velocity with respect to $c$ (typically $\Delta\widetilde{v}\!\sim\!0.001c$) in the paraxial regime, and only superluminal group velocities are supported. Furthermore, ultrashort pulses of width $\sim\!20$~fs are required to observe a clear X-shaped profile \cite{Grunwald2003PRA}, and OAM-carrying X-waves have not been realized to date. Even more stringent requirements are necessary for producing focus-wave modes, and consequently they have not been synthesized in three dimensions to date. By realizing instead propagation-invariant 3D ST wave packets, an unprecedented tunable span of group velocities has been realized, clear X-shaped profiles are observed with pulse widths in the picosecond regime, and they outperformed spectrally separable pulsed Bessel beams of the same spatial bandwidth with respect to their propagation distance and transverse side-lobe structure. In addition, we demonstrated propagation-invariant ST-OAM wave packets with tunable group velocity in free space.
Further optimization of the experimental layout is possible. We made use of four phase patterns to produce the target spatio-temporal spectral structure. It is conceivable that this spectral modulation scheme can be performed with only three phase patterns, or perhaps even fewer. Excitingly, a new theoretical proposal suggests that a single non-local nanophotonic structure can produce 3D ST wave packets through a process of spatio-temporal spectral filtering \cite{Guo21Light}. This theoretical proposal indicates the role nanophotonics is poised to play in reducing the complexity of the synthesis system, potentially without recourse to filtering strategies.
Finally, efforts in the near future will be directed to reducing the spectral uncertainty $\delta\lambda$ and concomitantly approaching $\theta\!\rightarrow\!45^{\circ}$ to increase the propagation length to the kilometer range \cite{Bhaduri19OL}. With access to 3D ST wave packets, previous work on guided ST modes in planar wave-guides \cite{Shiri20NC} can be extended to conventional single-mode and multi-mode waveguides \cite{Guo21PRR}, and potentially to optical fibers. Moreover, the localization in both transverse dimensions provided by 3D ST wave packets opens new avenues for nonlinear optics by increasing the intensity with respect to 2D ST wave packets, for introducing topological features such as spin texture in momentum space \cite{Guo21Light}, and for the exploration of spatio-temporal vortices and polarization singularities \cite{Bliokh12PRA}. Our findings point therefore to profound new opportunities provided by the emerging field of space-time optics \cite{Shiri20NC,Guo21Light,Guo21PRR,Shaltout19Science}.
\clearpage
\section{Representation of 3D space-time wave packets on the surface of the light-cone}
In our previous work on space-time (ST) wave packets in the form of light-sheets \cite{Kondakci16OE,Kondakci17NP,Kondakci19NC,Yessenov19PRA,Yessenov19OE,Yessenov19OPN}, we make heavy use of the representation of the wave-packet spectral support domain on the surface of the light-cone. This is a useful visualization tool that provides physical intuition with regards to the structure and behavior of ST wave packets. In this Section, we briefly review this representation in the reduced-dimension case of ST light sheets \cite{Kondakci17NP}, where the field is localized along one transverse dimension and extended uniformly along the other. We refer to these field structures as 2D ST wave packets (one transverse dimension and one longitudinal dimension). We then proceed to show that such a representation can also be gainfully employed with minor changes for ST wave packets localized in all dimensions. We refer to such field structures as 3D ST wave packets (two transverse dimensions and one longitudinal dimension). Therefore, the wealth of results that have amassed over the past few years based on this conceptual framework \cite{Yessenov19OPN} can be appropriated for the new 3D ST wave packets investigated here.
\subsection{Light-cone representation for 2D ST wave packets (light sheets)}
When the field is held uniform along one transverse dimension (say $y$), then the dispersion relationship in free space is $k_{x}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$, where $k_{x}$ and $k_{z}$ are the transverse and longitudinal components of the wave vector along $x$ and $z$, respectively, $\omega$ is the temporal frequency, and $c$ is the speed of light in vacuum. This relationship is represented geometrically by the surface of a cone that we refer to as the light-cone. A monochromatic plane wave $e^{i(k_{x}x+k_{z}z-\omega t)}$ is represented by a point on the surface of the light-cone (Fig.~\ref{Fig:STLightSheet}). In general, a pulsed beam $E(x,z;t)$ is expressed as a product of a slowly varying envelope $\psi(x,z;t)$ and a carrier term $e^{i(k_{\mathrm{o}}z-\omega_{\mathrm{o}}t)}$, where $\omega_{\mathrm{o}}$ is a fixed temporal frequency, and $k_{\mathrm{o}}\!=\!\omega_{\mathrm{o}}/c$ is its associated wave number. The envelope is written in terms of an angular spectrum as follows:
\begin{equation}\label{Eq:2DGeneral}
\psi(x,z;t)=\iint\!dk_{x}d\Omega\widetilde{\psi}(k_{x},\Omega)e^{i\{k_{x}x+(k_{z}-k_{\mathrm{o}})z-\Omega t\}},
\end{equation}
where $\Omega\!=\!\omega-\omega_{\mathrm{o}}$, and the spatio-temporal spectrum $\widetilde{\psi}(k_{x},\Omega)$ is the 2D Fourier transform of $\psi(x,0;t)$. The spectral support domain for a pulsed beam or wave packet corresponds in general to a 2D area on the surface of the light-cone \cite{Kondakci17NP}; see Fig.~\ref{Fig:STLightSheet}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.5cm]{FigS1.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for 2D ST wave packets (light sheets) on the light-cone surface. (a) The spatio-temporal spectrum is restricted to the intersection of the light-cone $k_{x}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$ with a tilted spectral plane in the region $k_{z}\!>\!0$. The spectral projection onto the $(k_{z},\tfrac{\omega}{c})$-plane is a straight line, and that onto the $(k_{x},\tfrac{\omega}{c})$-plane is a segment of a conic section. (b) The spatio-temporal intensity profile in the plane $z\!=\!0$, $I(x,z=0;t)$, and the transverse intensity profile in the same $z\!=\!0$ plane at the wave-packet center $t\!=\!0$, and off-center $t\!>\!0$. In the $(x,t)$-plane, the spatio-temporal intensity profile is X-shaped. In the $(x,y)$-plane, the 2D ST wave packet at $t\!=\!0$ takes the form of a light sheet that is localized along the $x$-axis and extends uniformly along the $y$-axis.}
\label{Fig:STLightSheet}
\vspace{12mm}
\end{figure}
For the special case of a propagation-invariant 2D ST wave packet, the spectral support domain is confined to the intersection of the light-cone with a spectral plane that is parallel to the $k_{x}$-axis and makes an angle $\theta$ with the $k_{z}$-axis, which is thus given by the equation:
\begin{equation}
\Omega=(k_{z}-k_{\mathrm{o}})c\tan{\theta},
\end{equation}
where we refer to $\theta$ as the spectral tilt angle. Consequently, there is a one-to-one relationship between $\Omega$ and $|k_{x}|$, which takes the form of a parabola in the narrowband ($\Delta\omega\!\ll\!\omega_{\mathrm{o}}$, where $\Delta\omega$ is the temporal bandwidth) and paraxial ($\Delta k_{x}\!\ll\!k_{\mathrm{o}}$, where $\Delta k_{x}$ is the spatial bandwidth) limits:
\begin{equation}\label{Eq:Parabola1D}
\frac{\Omega}{\omega_{\mathrm{o}}}=\frac{k_{x}^{2}}{2k_{\mathrm{o}}^{2}(1-\widetilde{n})},
\end{equation}
where $\widetilde{n}\!=\!\cot{\theta}$ is the wave packet group index in free space. Because of this correspondence between $\Omega$ and $|k_{x}|$, the spatio-temporal spectrum has a reduced dimensionality with respect to a conventional pulsed beam $\widetilde{\psi}(k_{x},\Omega)\!\rightarrow\!\widetilde{\psi}(k_{x})\delta(\Omega-\Omega(k_{x}))$, where $\Omega\!=\!\Omega(k_{x})$ is given by Eq.~\ref{Eq:Parabola1D}. The wave-packet envelope now takes the simpler integral form:
\begin{equation}\label{Eq:2DSTenvelope}
\psi(x,z;t)=\int\!dk_{x}\,\,\widetilde{\psi}(k_{x})\,\,e^{ik_{x}x}\,\,e^{-i\Omega(t-z/\widetilde{v})}=\psi(x,0;t-z/\widetilde{v}),
\end{equation}
which is a 2D ST wave packet that travels rigidly in free space at a group velocity $\widetilde{v}\!=\!c\tan{\theta}$.
The spectral projection of the 2D ST wave packet onto the $(k_{z},\tfrac{\omega}{c})$-plane is a straight line that makes an angle $\theta$ with the $k_{z}$-axis and intersects with the light-line $k_{z}\!=\!\tfrac{\omega}{c}$ at the point $(k_{z},\tfrac{\omega}{c})\!=\!(k_{\mathrm{o}},k_{\mathrm{o}})$. The corresponding spectral projection onto the $(k_{x},\tfrac{\omega}{c})$-plane is a segment of a conic section: an ellipse when $0^{\circ}\!<\!\theta\!<\!45^{\circ}$ or $135^{\circ}\!<\!\theta\!<\!180^{\circ}$, whereupon $|\widetilde{v}|\!<\!c$; a hyperbola when $45^{\circ}\!<\!\theta\!<\!135^{\circ}$, whereupon $|\widetilde{v}|\!>\!c$; a straight tangent line when $\theta\!=\!45^{\circ}$ and $\widetilde{v}\!=\!c$, corresponding to a plane-wave pulse; and a parabola when $\theta\!=\!135^{\circ}$ and $\widetilde{v}\!=\!-c$. We refer to ST wave packets associated with the range $0^{\circ}\!<\!\theta\!<\!45^{\circ}$ as subluminal, with $45^{\circ}\!<\!\theta\!<\!90^{\circ}$ as superluminal, and with $\theta\!>\!90^{\circ}$ (whereupon $\widetilde{v}\!<\!0$) as negative-$\widetilde{v}$ ST wave packets.
Note that causal emission and propagation require that only the values $k_{z}\!>\!0$ be considered, so that the light-cone half corresponding to the acausal backward-propagating components $k_{z}\!<\!0$ is eliminated from consideration \cite{Shaarawi00JPA,Yessenov19PRA}.
The one-to-one correspondence between $|k_{x}|$ and $\omega$ has a crucial impact on the form of the axial evolution of the time-averaged intensity $I(x,z)\!=\!\int\!dt\,|E(x,z;t)|^{2}\!=\!\int\!dt\,|\psi(x,z;t)|^{2}$. Substituting for $\psi(x,z;t)$ from Eq.~\ref{Eq:2DSTenvelope} we obtain:
\begin{equation}
I(x,z)=\int\!dk_{x}|\widetilde{\psi}(k_{x})|^{2}+\int\!dk_{x}\widetilde{\psi}(k_{x})\widetilde{\psi}^{*}(-k_{x})e^{i2k_{x}x}=I_{\mathrm{o}}+I_{1}(2x),
\end{equation}
where $I_{\mathrm{o}}\!=\!\int\!dk_{x}|\widetilde{\psi}(k_{x})|^{2}$ and $I_{1}(x)$ is the Fourier transform of $\widetilde{\psi}(k_{x})\widetilde{\psi}^{*}(-k_{x})$. In other words, $I(x,z)$ is altogether independent of the axial coordinate $z$, and is formed of the sum of a constant background pedestal term $I_{\mathrm{o}}$ and a localized spatial feature at $x\!=\!0$. Moreover, the height of the localized spatial feature cannot exceed the height of the pedestal. This structure of the intensity profile for 2D ST wave packets has been borne out in previous measurements \cite{Kondakci17NP,Yessenov19OE}. It is important to note that this structure is unique to 2D ST wave packets. The time-averaged intensity of 3D ST wave packets cannot be separated into a sum of a pedestal and a localized central feature (see main text).
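This cancellation can be verified directly. The following sketch (our own, with an assumed Gaussian spectrum) sums over the $\pm k_{x}$ pairs that share a common $\Omega$, and hence a common $k_{z}$, confirming that $I(x,z)$ carries no $z$-dependence:
\begin{verbatim}
import numpy as np

c = 3e8
k_o = 2*np.pi/800e-9
omega_o = c*k_o
n_tilde = 0.55                                 # superluminal example
k = np.linspace(1e3, 0.07e6, 300)              # k_x > 0; -k_x is its partner
Omega = omega_o*k**2/(2*k_o**2*(1 - n_tilde))  # parabolic spectral relation
k_z = k_o + n_tilde*Omega/c                    # common to +k_x and -k_x
psi = np.exp(-(k/0.03e6)**2)                   # assumed symmetric spectrum

x = np.linspace(-200e-6, 200e-6, 201)

def I_of_x(z):
    # each Omega contributes a z-independent term, since the common factor
    # exp(i(k_z - k_o)z) has unit modulus
    zphase = np.exp(1j*(k_z - k_o)[:, None]*z)
    a = zphase*(psi[:, None]*np.exp(1j*k[:, None]*x[None, :])
                + psi[:, None]*np.exp(-1j*k[:, None]*x[None, :]))
    return np.sum(np.abs(a)**2, axis=0)

print(np.allclose(I_of_x(0.0), I_of_x(0.05)))  # True for any z
\end{verbatim}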
\subsection{Spectral representation for 3D fields on the light-cone surface}
\subsubsection{Conventional optical fields}
When both transverse coordinates $x$ and $y$ are retained, we have the general dispersion relationship $k_{x}^{2}+k_{y}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$ in free space. This relationship is represented mathematically by a hypercone in 4D, which cannot be visualized in 3D space. However, because we are mostly interested in cylindrically symmetric fields, we can write the dispersion relationship as $k_{r}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$, where $k_{r}\!=\!\sqrt{k_{x}^{2}+k_{y}^{2}}$ is the radial wave number. This dispersion relationship can indeed be represented in 3D space. Because $k_{r}$ is positive-valued only, in contrast to $k_{x}$ in the case of the 2D ST wave packets that can take on either positive or negative values, only the quarter of the light-cone corresponding to $k_{r}\!>\!0$ \textit{and} $k_{z}\!>\!0$ need be retained here.
A wave packet is once again given in terms of a carrier term and a slowly varying envelope $E(x,y,z;t)\!=\!e^{i(k_{\mathrm{o}}z-\omega_{\mathrm{o}}t)}\psi(x,y,z;t)$, where:
\begin{equation}
\psi(x,y,z;t)=\iiint\!dk_{x}dk_{y}d\Omega\,\,\widetilde{\psi}(k_{x},k_{y},\Omega)\,\,e^{i\{k_{x}x+k_{y}y+(k_{z}-k_{\mathrm{o}})z-\Omega t\}},
\end{equation}
and $\widetilde{\psi}(k_{x},k_{y},\Omega)$ is the 3D Fourier transform of $\psi(x,y,0;t)$. We switch to transverse polar coordinates $(r,\varphi)$ in physical space and to the corresponding polar coordinates $(k_{r},\chi)$ in Fourier space. In physical space we have the relationships:
\begin{equation}
r=\sqrt{x^{2}+y^{2}},\:\:\:\varphi=\arctan{\left(\frac{y}{x}\right)},\:\:\: x=r\cos{\varphi},\:\:\:y=r\sin{\varphi};
\end{equation}
and in Fourier space we have
\begin{equation}
k_{r}=\sqrt{k_{x}^{2}+k_{y}^{2}},\:\:\:\chi=\arctan{\left(\frac{k_{y}}{k_{x}}\right)},\:\:\:k_{x}=k_{r}\cos{\chi},\:\:\:k_{y}=k_{r}\sin{\chi}.
\end{equation}
We can thus rewrite the angular spectrum of the envelope as follows:
\begin{equation}
\psi(r,\varphi,z;t)=\iiint\!dk_{r}d\chi d\Omega\,\,\, k_{r}\widetilde{\psi}(k_{r},\chi,\Omega)\,\,e^{ik_{r}r\cos{(\varphi-\chi)}}\,\,e^{i(k_{z}-k_{\mathrm{o}})z}\,\,e^{-i\Omega t};
\end{equation}
the integral over $\chi$ extends from 0 to $2\pi$, that over $k_{r}$ from 0 to $\infty$, and that over $\Omega$ from $-\Delta\omega/2$ to $\Delta\omega/2$.
We can separate the spatio-temporal spectrum $\widetilde{\psi}(k_{r},\chi,\Omega)$ with respect to the radial and azimuthal coordinates $k_{r}$ and $\chi$, respectively, as follows:
\begin{equation}
\widetilde{\psi}(k_{r},\chi,\Omega)=\sum_{\ell=-\infty}^{\infty}\widetilde{\psi}_{\ell}(k_{r},\Omega)e^{i\ell\chi},
\end{equation}
whereupon the wave packet envelope can be expressed as:
\begin{equation}
\psi(r,\varphi,z;t)=\sum_{\ell=-\infty}^{\infty}e^{i\ell\varphi}\iint\!dk_{r}d\Omega\;\;k_{r}\widetilde{\psi}_{\ell}(k_{r},\Omega)J_{\ell}(k_{r}r)e^{i(k_{z}-k_{\mathrm{o}})z}e^{-i\Omega t};
\end{equation}
where we have made use of the identity $2\pi J_{\ell}(x)=\int_{-\pi}^{\pi}dy\,e^{i(x\sin{y}-\ell y)}$, and $J_{\ell}(\cdot)$ is the $\ell^{\mathrm{th}}$-order Bessel function of the first kind.
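A quick numerical check (ours) of this identity, with illustrative arguments:
\begin{verbatim}
import numpy as np
from scipy.special import jv

x, ell = 3.7, 2
y = np.linspace(-np.pi, np.pi, 20001)
lhs = 2*np.pi*jv(ell, x)
rhs = np.trapz(np.exp(1j*(x*np.sin(y) - ell*y)), y)
print(np.isclose(lhs, rhs.real), abs(rhs.imag) < 1e-8)  # True True
\end{verbatim}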
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.5cm]{FigS2.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain of a monochromatic Bessel beam in 3D space on the surface of the light-cone. (a) A point on the surface of the light-cone $k_{r}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$ at coordinates $(k_{r},k_{z},\tfrac{\omega}{c})$ corresponds to a monochromatic Bessel beam of the form $E(r,z;t)\!=\!J_{0}(k_{r}r)e^{i(k_{z}z-\omega t)}$. (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the point on the light-cone in (a) takes the form of a horizontal iso-frequency circle of radius $k_{r}$ at a height $\tfrac{\omega}{c}$. (c) In physical space at an axial plane $z\!=\!0$, the intensity $I(x,y,0;t)$ is uniform along $t$ and takes the form of a Bessel function $J_{0}^{2}(k_{r}r)$ in the $(x,y)$-plane at any $t$.}
\label{Fig:BesselBeam}
\vspace{12mm}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{FigS3.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for a monochromatic beam in 3D space on the surface of the light-cone. (a) The spectral support domain is restricted to the circle on the light-cone surface at its intersection with a horizontal iso-frequency plane $\omega\!=\!\omega_{\mathrm{o}}$. (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain is a horizontal iso-frequency disc at $\omega\!=\!\omega_{\mathrm{o}}$. The disc radius corresponds to the maximum spatial frequency in (a). (c) In physical space at $z\!=\!0$, the intensity $I(x,y,0;t)$ is uniform along $t$ with the transverse beam profile given by the plot in the $(x,y)$-plane at any $t$.}
\label{Fig:GaussianBeam}
\vspace{12mm}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{FigS4.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for a plane-wave pulse in 3D space on the surface of the light-cone. (a) The spectral support domain is restricted to the straight-line tangent at $k_{r}\!=\!0$ on the light-cone surface. (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain is a vertical line along the $\tfrac{\omega}{c}$-axis. (c) In physical space at an axial plane $z\!=\!0$, the intensity $I(x,y,0;t)$ is uniform everywhere at fixed $t$. The overall intensity drops away from $t\!=\!0$ according to the pulse linewidth.}
\label{Fig:PulsedPlaneWave}
\vspace{12mm}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{FigS5.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for a conventional pulsed beam in 3D space on the surface of the light-cone. (a) The spectral support domain is a 2D area on the light-cone surface. (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain is a cylindrical volume whose radius and height correspond to the spatial and temporal bandwidths of the wave packet. (c) In physical space at $z\!=\!0$, the intensity $I(x,y,0;t)$ varies in intensity along $t$ according to the pulse linewidth, with the transverse beam profile as prescribed in the plot in the $(x,y)$-plane at $t\!=\!0$.}
\label{Fig:PulsedBeam}
\vspace{12mm}
\end{figure}
One simplification is obtained by assuming that the spatio-temporal spectrum is azimuthally symmetric, which corresponds to setting $\ell\!=\!0$, and the summation over $\ell$ is removed. This assumption is equivalent to setting $\widetilde{\psi}(k_{r},\chi,\Omega)\!\rightarrow\!\widetilde{\psi}(k_{r},\Omega)$, which is thus independent of $\chi$, whereupon:
\begin{equation}\label{Eq:3DGeneralAzimuthallySymmetric}
\psi(r,\varphi,z;t)=\iint\!dk_{r}d\Omega \,\,\,k_{r}\widetilde{\psi}(k_{r},\Omega)J_{0}(k_{r}r)\,\,e^{i(k_{z}-k_{\mathrm{o}})z}\,\,e^{-i\Omega t}=\psi(r,z;t),
\end{equation}
where $J_{0}(\cdot)$ is the zeroth-order Bessel function of the first kind. As a result of the azimuthal symmetry of the spectrum, the wave-packet envelope is also azimuthally symmetric in physical space. We will address shortly the more general case of fields with azimuthal variation.
Comparing Eq.~\ref{Eq:2DGeneral} for $\psi(x,z;t)$ to Eq.~\ref{Eq:3DGeneralAzimuthallySymmetric} for $\psi(r,z;t)$, we find that both have 2D spatio-temporal spectra: $\widetilde{\psi}(k_{x},\Omega)$ for the former and $\widetilde{\psi}(k_{r},\Omega)$ for the latter. The light-cone in $(k_{r},k_{z},\tfrac{\omega}{c})$-space can thus be used to represent the spectral support domain for $\psi(r,z;t)$. Here, each point on the light-cone at coordinates $(k_{r},k_{z},\tfrac{\omega}{c})$ corresponds to a monochromatic Bessel beam $E(r,z;t)\!=\!J_{0}(k_{r}r)e^{i(k_{z}z-\omega t)}$ rather than a monochromatic plane wave; see Fig.~\ref{Fig:BesselBeam}.
In the case of light sheets in which the field is uniform along $y$, the light-cone in Fig.~\ref{Fig:STLightSheet}(a) suffices to capture the complete description of the spectral support domain. On the other hand, in the case of fields in 3D space, the light-cone in Fig.~\ref{Fig:BesselBeam}(a) does \textit{not} convey the whole picture because we collapsed all the plane waves having spatial-frequency pairs $(k_{x},k_{y})$ into their radial counterpart $k_{r}$. The picture can be completed by adding a second spectral representation in $(k_{x},k_{y},\tfrac{\omega}{c})$-space. Each point with coordinates $(k_{x},k_{y},\tfrac{\omega}{c})$ corresponds to a monochromatic plane wave $e^{i(k_{x}x+k_{y}y+k_{z}z-\omega t)}$, where $k_{z}\!=\!\sqrt{(\tfrac{\omega}{c})^{2}-k_{x}^{2}-k_{y}^{2}}$. Although $k_{z}$ is not represented in this space explicitly, it can nevertheless be found by referring to the light-cone in Fig.~\ref{Fig:BesselBeam}(a). The single point representing a monochromatic Bessel beam on the light-cone in Fig.~\ref{Fig:BesselBeam}(a) corresponds to a horizontal circle of radius $k_{r}$ in $(k_{x},k_{y},\tfrac{\omega}{c})$-space in Fig.~\ref{Fig:BesselBeam}(b).
The representation in $(k_{x},k_{y},\tfrac{\omega}{c})$-space is particularly useful for wave packets in 3D space because it provides the spatio-temporal spectral structure required to synthesize the field in question. For a monochromatic Bessel beam, Fig.~\ref{Fig:BesselBeam}(b) points to the well-known approach for producing such a beam by inserting a spatial filter in the Fourier domain in the form of a thin annulus, followed by a spherical converging lens \cite{Durnin87PRL}.
More generally, the spectral support domain for a monochromatic beam at frequency $\omega_{\mathrm{o}}$ is restricted to the circle at the intersection of the light-cone with a horizontal iso-frequency plane $\omega\!=\!\omega_{\mathrm{o}}$; see Fig.~\ref{Fig:GaussianBeam}(a). In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain lies in the horizontal plane $\omega\!=\!\omega_{\mathrm{o}}$, and takes the form of a disc. The central point at $k_{x}\!=\!k_{y}\!=\!0$ corresponds to the point on the light-line in Fig.~\ref{Fig:GaussianBeam}(a). The radius of the disc in Fig.~\ref{Fig:GaussianBeam}(b) corresponds to the maximum radial spatial frequency in Fig.~\ref{Fig:GaussianBeam}(a).
As another example, consider a pulsed plane wave that lacks transverse spatial features. The spectral support domain of this field in $(k_{r},k_{z},\tfrac{\omega}{c})$-space lies along the light-line $k_{z}\!=\!\tfrac{\omega}{c}$ [Fig.~\ref{Fig:PulsedPlaneWave}(a)], whereupon $\widetilde{\psi}(k_{r},\Omega)\!\rightarrow\!\widetilde{\psi}(\Omega)$, and therefore:
\begin{equation}
\psi(r,z;t)=\int\!d\Omega\,\widetilde{\psi}(\Omega)e^{-i\Omega(t-z/c)}=\psi(r,0;t-z/c).
\end{equation}
The spectral support domain in $(k_{x},k_{y},\tfrac{\omega}{c})$-space lies along the $\tfrac{\omega}{c}$-axis, with $k_{x}\!=\!k_{y}\!=\!0$ [Fig.~\ref{Fig:PulsedPlaneWave}(b)].
Finally, the most general conventional azimuthally symmetric pulsed beam or wave packet has a 2D spectral support domain on the surface of the light-cone in $(k_{r},k_{z},\tfrac{\omega}{c})$-space as shown in Fig.~\ref{Fig:PulsedBeam}(a). Typically, $\widetilde{\psi}(k_{r},\Omega)$ is separable with respect to $k_{r}$ and $\Omega$, $\widetilde{\psi}(k_{r},\Omega)\!\rightarrow\!\widetilde{\psi}_{r}(k_{r})\widetilde{\psi}_{t}(\Omega)$. The spectral support domain in $(k_{x},k_{y},\tfrac{\omega}{c})$-space as shown in Fig.~\ref{Fig:PulsedBeam}(b) takes the form approximately of a cylindrical volume centered on the $\tfrac{\omega}{c}$-axis, whose radius is the maximal extent of the spatial spectrum $\widetilde{\psi}_{r}(k_{r})$, and whose height is the extent of the temporal spectrum $\widetilde{\psi}_{t}(\Omega)$.
\subsubsection{3D ST wave packets}
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{FigS6.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for a superluminal 3D ST wave packet on the surface of the light-cone. (a) The spectral support domain is restricted to the hyperbola on the light-cone surface at its intersection with a spectral plane that is parallel to the $k_{r}$-axis and makes an angle $\theta$ with the $k_{z}$-axis ($45^{\circ}\!<\!\theta\!<\!90^{\circ}$). (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain is a 2D surface in the form of one half of a two-sheet hyperboloid (an elliptic hyperboloid). (c) In physical space at $z\!=\!0$, the intensity $I(x,y,0;t)$ takes the form of two cones emanating from the central feature at $t\!=\!0$. In any meridional plane such as $x\!=\!0$, the spatio-temporal intensity profile is X-shaped. In the $(x,y)$-plane at $t\!=\!0$, the intensity profile is centered at $x\!=\!y\!=\!0$. At $t\!\neq\!0$, the profile in the $(x,y)$-plane is an annulus whose radius increases with $t$.}
\label{Fig:SuperlumST}
\vspace{12mm}
\end{figure}
When considering 3D ST wave packets that are localized along all dimensions, such that the time-averaged intensity takes the form of an axial needle \cite{Turunen10PO,FigueroaBook14,Parker16OE}, the spectral support domain is once again restricted to the intersection of the light-cone $k_{r}^{2}+k_{z}^{2}\!=\!(\tfrac{\omega}{c})^{2}$ with the spectral plane $\Omega\!=\!(k_{z}-k_{\mathrm{o}})c\tan{\theta}$, as in the case of the 2D ST wave packets, where the plane is parallel to the $k_{r}$-axis and makes an angle $\theta$ with the $k_{z}$-axis. The group velocity of the propagation-invariant 3D ST wave packet is $\widetilde{v}\!=\!c\tan{\theta}$:
\begin{equation}
\psi(r,\varphi,z;t)=\sum_{\ell=-\infty}^{\infty}e^{i\ell\varphi}\int\!dk_{r}\,\,k_{r}\widetilde{\psi}_{\ell}(k_{r})J_{\ell}(k_{r}r)e^{-i\Omega(t-z/\widetilde{v})}=\psi(r,\varphi,0;t-z/\widetilde{v}).
\end{equation}
The impact of $\theta$ on the shape of the conic section resulting from this intersection is the same as that for the 2D ST wave packet or light sheets. We depict the superluminal scenario in Fig.~\ref{Fig:SuperlumST}(a) where $45^{\circ}\!<\!\theta\!<\!90^{\circ}$, $\widetilde{v}\!>\!c$, and the spectral support domain is a hyperbola. Alternatively, a subluminal wave packet where $0^{\circ}\!<\!\theta\!<\!45^{\circ}$ and $\widetilde{v}\!<\!c$ is shown in Fig.~\ref{Fig:SublumST}(a), which has an ellipse for its spectral support domain. In the narrowband paraxial regime, the conic section can be approximated in the vicinity of $k_{r}\!=\!0$ by a parabola:
\begin{equation}\label{Eq:Parabola3D}
\frac{\Omega}{\omega_{\mathrm{o}}}=\frac{k_{r}^{2}}{2k_{\mathrm{o}}^{2}(1-\widetilde{n})}.
\end{equation}
Introducing this quadratic relationship between $\Omega$ and $k_{r}$ into a pulsed beam is the goal of our synthesis methodology. Note that for small bandwidths, this also entails a quadratic relationship between $\lambda-\lambda_{\mathrm{o}}$ and $k_{r}$, where $\lambda_{\mathrm{o}}\!=\!\tfrac{2\pi}{k_{\mathrm{o}}}$.
\begin{figure}[]
\begin{center}
\includegraphics[width=8.5cm]{FigS7.png}
\end{center}\vspace{-5mm}
\caption{Representation of the spectral support domain for a subluminal 3D ST wave packet on the surface of the light-cone. (a) The spectral support domain is restricted to the ellipse on the light-cone surface at its intersection with a spectral plane that is parallel to the $k_{r}$-axis and makes an angle $\theta$ with the $k_{z}$-axis ($0^{\circ}\!<\!\theta\!<\!45^{\circ}$). (b) In $(k_{x},k_{y},\tfrac{\omega}{c})$-space, the spectral support domain is a 2D surface in the form of a ellipsoid of revolution (a spheroid). (c) In physical space at $z\!=\!0$, the intensity $I(x,y,0;t)$ takes the form of two cones emanating from the central feature at $t\!=\!0$. The structure of the spatio-temporal intensity profile is similar to that in Fig.~\ref{Fig:SuperlumST}(c).}
\label{Fig:SublumST}
\vspace{12mm}
\end{figure}
In the case of the 3D ST wave packets, the structure of the spectral representation in $(k_{x},k_{y},\tfrac{\omega}{c})$-space is particularly instructive, as shown in Fig.~\ref{Fig:SuperlumST}(b) for the superluminal case and in Fig.~\ref{Fig:SublumST}(b) for its subluminal counterpart. Consider first the superluminal 3D ST wave packet in Fig.~\ref{Fig:SuperlumST}(b). Each point $(k_{r},k_{z},\tfrac{\omega}{c})$ along the hyperbola on the surface of the light-cone in Fig.~\ref{Fig:SuperlumST}(a) corresponds to a circle of radius $k_{r}$ at a height $\tfrac{\omega}{c}$ in Fig.~\ref{Fig:SuperlumST}(b). Because each $k_{r}$ is associated with a different $\omega$, the circles in Fig.~\ref{Fig:SuperlumST}(b) are all located at different heights, thus forming a 2D surface rather than the 3D volume in Fig.~\ref{Fig:PulsedBeam}(b) for a conventional pulsed beam. Because the spectral trajectory on the light-cone surface in Fig.~\ref{Fig:SuperlumST}(a) is a hyperbola, the surface in Fig.~\ref{Fig:SuperlumST}(b) is one half of a two-sheet hyperboloid (an elliptic hyperboloid) centered on the $\tfrac{\omega}{c}$-axis.
In physical space, as shown in Fig.~\ref{Fig:SuperlumST}(c), the spatio-temporal intensity profile at a fixed axial plane $z\!=\!0$ takes the form of a central peak at $x\!=\!y\!=\!0$ and $t\!=\!0$, with two cones centered on the $t$-axis emanating from this peak. Consequently, in any meridional plane, for example $x\!=\!0$, the intensity profile is X-shaped, similar to its 2D ST wave packet counterpart [Fig.~\ref{Fig:STLightSheet}(c)]. The shape of the cross section of the intensity profile is time-dependent: at $t\!=\!0$ it is localized at $x\!=\!y\!=\!0$, whereas at $t\!\neq\!0$ it takes the form of an annulus whose radius increases with $t$.
The corresponding graphs for a subluminal 3D ST wave packet are plotted in Fig.~\ref{Fig:SublumST}(b,c). Because the intersection of the spectral plane with the light-cone is an ellipse, the spectral support domain in $(k_{x},k_{y},\tfrac{\omega}{c})$-space is an ellipsoid of revolution, or spheroid, as shown in Fig.~\ref{Fig:SublumST}(b). Depending on the value of $\theta$, this spheroid may be oblate or prolate. The threshold value $\tan{\theta}\!=\!\tfrac{1}{\sqrt{2}}$ separates these two regimes. Indeed, at $\tan{\theta}\!=\!\tfrac{1}{\sqrt{2}}$, the projection of the spectral support domain on the light-cone surface onto the $(k_{r},\tfrac{\omega}{c})$-plane is a circle, and the spectral support domain in $(k_{x},k_{y},\tfrac{\omega}{c})$-space is a sphere. Of course, in all cases only the plane-wave components corresponding to $k_{z}\!>\!0$ are physically meaningful.
Lastly, the spatio-temporal intensity profile of a subluminal 3D ST wave packet [Fig.~\ref{Fig:SublumST}(c)] in general resembles that of its superluminal counterpart [Fig.~\ref{Fig:SuperlumST}(c)].
\clearpage
\section{Synthesis of 3D space-time wave packets}
The experimental methodology to synthesize 3D ST wave packets shown in Fig.~3 in the main text is expanded in more technical detail here in Fig.~\ref{Fig:ST_setup}. Our strategy consists of three stages as outlined in Fig.~\ref{Fig:ST_setup}(a):
\begin{enumerate}
\item Spectral analysis
\item A tunable 1D spectral transformation
\item A fixed 2D coordinate transformation
\end{enumerate}
We start off with femtosecond plane-wave pulses from a mode-locked Ti:sapphire laser (Tsunami; Spectra Physics) of width $\approx\!100$~fs and bandwidth $\Delta\lambda\!\approx\!10$~nm centered at a wavelength of $\approx\!800$~nm. The pulses are directed to the first stage of the synthesis system: spectral analysis using a volume chirped Bragg grating (CBG), which spatially resolves the spectrum \cite{Kaim13SPIE,Glebov14SPIE}. A double-pass through the CBG produces a linear spatial chirp but with a flat phase front. The spectrally resolved wave front from the CBG arrangement is then fed to the second stage: a 1D conformal mapping implemented by a pair of spatial light modulators (SLM; Meadowlark 1920$\times$1080 series) that produces a logarithmic coordinate transformation to yield a logarithmic spatial chirp. The transformed wave front is then directed to a fixed log-polar-to-Cartesian coordinate transformation. This 2D coordinate transformation maps a line at its input into a circle at its output \cite{Bryngdahl74JOSA,Hossack87JOMO,Berkhout10PRL}, and is implemented by means of two refractive \cite{Lavery12OE} or diffractive \cite{Sung06AO} phase plates. The combination of the 1D spectral transformation and the 2D coordinate transformation produces a field endowed with a quadratic radial chirp. Finally, a spherical converging lens performs an optical Fourier transform along both transverse dimensions to yield 3D ST wave packets. We proceed to provide a detailed description of each stage of this novel spatio-temporal synthesis setup.
\subsection{Spectral analysis: Volume chirped Bragg grating (CBG)}
\subsubsection*{Introducing spatial chirp}
The goal of the first stage in the setup is to spatially resolve the spectrum but retain a flat phase-front, which is necessary for the successful operation of the subsequent coordinate transformations. A conventional surface grating is therefore \textit{not} suitable for our purposes because the resolved spectrum does \textit{not} have a flat phase. Instead, we make use of an arrangement based on a volume CBG to achieve this goal.
The CBG is a reflective Bragg grating \cite{SalehBook07} with a multilayered structure having a linearly varying periodicity $\Lambda(z)$ along the longitudinal axis $z$. Consequently, different wavelengths are reflected from different depths $z$ within the grating volume \cite{Glebov14SPIE}. As a result, the CBG introduces a \textit{spectral chirp} into normally incident pulses, thus stretching the plane-wave pulse in time [Fig.~\ref{Fig:CBG_configuration}(a)]. For this reason, volume CBGs are widely used in high-power chirped pulse amplification (CPA) systems, where they are well-known for their high damage threshold and their ability to introduce extremely large spectral chirp \cite{Liao07OE,Sun16OE}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17cm]{ST_synthesis_setup.png}
\end{center}\vspace{-5mm}
\caption{Setup for synthesizing 3D ST wave packets localized in all dimensions. (a) A conceptual layout of the setup identifying its three main stages: spectral analysis, a tunable 1D spectral transformation, and a fixed 2D coordinate transformation. (b) Detailed setup with a chirped Bragg grating (CBG) in a double-pass configuration, followed by a tunable 1D spectral transformation implemented via SLM$_1$ and SLM$_2$, and a fixed 2D coordinate transformation implemented via phase plates PP$_1$ and PP$_2$. The camera CCD$_{1}$ characterizes the 3D ST wave packets in physical space $(x,y,z)$, and CCD$_{2}$ in the Fourier domain $(k_{x},k_{y},\lambda)$.}
\label{Fig:ST_setup}
\vspace{12mm}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=11cm]{CBG_configuration.png}
\end{center}\vspace{-5mm}
\caption{Spectral analysis via volume chirped Bragg gratings (CBGs). (a) When a plane-wave pulse is normally incident on a CBG, different wavelengths are reflected from different depths within the CBG. Spectral chirp is thus introduced into the pulse, but with no spatial chirp. (b) When a pulse is obliquely incident on the CBG, both spectral and spatial chirps are introduced. (c) By traversing a pair of identical CBGs with reversed chirp rates, the spectral chirp acquired by the plane-wave pulse from the first CBG is undone by the second, while doubling the spatial chirp. On the right-hand side in each panel, we sketch the change in the spatial distribution of the spectrum and the pulse profile before and after the CBG configuration.}
\label{Fig:CBG_configuration}
\vspace{12mm}
\end{figure}
Our goal is to produce \textit{spatial} chirp rather than \textit{spectral} chirp. At oblique incidence, however, both spectral \textit{and} spatial chirps are introduced; i.e., the pulse is stretched longitudinally in time, and the spectrum is resolved transversely in space [Fig.~\ref{Fig:CBG_configuration}(b)]. The spatial chirp can be determined easily from two parameters: (1) the chirp rate of the CBG structure $\beta\!=\!\frac{d\lambda}{dz}\!=\!2n\frac{d\Lambda}{dz}$, where $n$ is the average refractive index of the CBG; and (2) the incident angle $\phi_{1}$ with respect to the normal to the grating structure. After reflecting from the CBG at oblique incidence, the spectrum is spatially resolved along the $x$-axis such that each wavelength $\lambda$ is located at a position $x(\lambda)$ given by:
\begin{equation}
x(\lambda)=\frac{1}{\beta}(\lambda-\lambda_{\mathrm{o}})\zeta(\phi_{1}),
\end{equation}
where $\zeta(\phi_{1})\!=\!\frac{n\sin{2\phi_1}}{n^{2}-\sin^{2}{\phi_1}}$, $\lambda_{\mathrm{o}}\!=\!2n\Lambda_{\mathrm{o}}$ is the central wavelength, $\Lambda_{\mathrm{o}}\!=\!\Lambda(L/2)$ is the central periodicity of the CBG, and $L$ is its length along the direction of the chirp. Here $x(\lambda)$ is the transverse spatial displacement each wavelength experiences with respect to $\lambda_{\mathrm{o}}$.
For our purposes here, we aim to retain the spatial chirp while eliminating the accompanying spectral chirp \cite{Kaim13SPIE}, which we achieve by directing the field to an identical CBG placed in a reversed geometry with respect to the first one \cite{Glebov14SPIE}; see Fig.~\ref{Fig:CBG_configuration}(c). The output from CBG$_{1}$ first passes through a $4f$ imaging system that flips the field along $x$ and thus reverses the sign of the spatial chirp. Consequently, CBG$_{2}$, whose chirp has the opposite sign ($\beta_{2}\!=\!-\beta_{1}$), doubles the spatial chirp while cancelling out the spectral chirp introduced by CBG$_{1}$. As a result, we obtain a spatially resolved spectrum with a flat phase front.
In our setup, we used a folded configuration in which the beam is first incident obliquely on one port of the CBG, and the reflected and flipped field is then directed to the second port of the same device at the same incident angle $\phi_{1}$ [Fig.~\ref{Fig:ST_setup}(b); Spectral analysis]. This double-pass configuration produces a spatially resolved spectrum with the following distribution:
\begin{equation}\label{Eq:CBG_spatial_chirp}
x_{1}(\lambda)=\frac{2}{\beta}(\lambda-\lambda_{\mathrm{o}})\zeta(\phi_{1})=\alpha(\lambda - \lambda_{\mathrm{o}}),
\end{equation}
where $\alpha\!=\!\frac{2}{\beta}\zeta(\phi_{1})$ is the linear spatial chirp rate.
In our experiments, we made use of a CBG (OptiGrate L1-021) with a central periodicity of $\Lambda_{\mathrm{o}}\!=\!270$~nm, a chirp rate of $\beta\!=\!-30$~pm/mm, an average refractive index of $n=1.5$, and a length of $L\!=\!34$~mm. The input beam is incident at an angle $\phi_{1}\!\approx\!16^{\circ}$ with respect to the normal to the CBG entrance surface. We measure the spatially resolved spectrum by scanning a single-mode fiber (Thorlabs 780HP) connected to an optical spectrum analyzer (OSA; Advantest AQ6317B). The measured spectrum after the double-pass configuration is plotted in Fig.~\ref{Fig:Spectrum_afterCBG}, which verifies that the CBG produces a linear spatial chirp of rate $\alpha\!=\!-22.2$~mm/nm (from Eq.~\ref{Eq:CBG_spatial_chirp}) centered at $\lambda_{\mathrm{o}}\!=\!796.1$~nm. Therefore, at the output of the spectral-analysis stage we have a collimated, spectrally resolved optical field.
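A minimal sketch (Python/NumPy) of this calculation, using the nominal CBG parameters quoted above; the result is consistent in magnitude with the measured chirp rate, with the residual difference attributable to alignment tolerances:
\begin{verbatim}
import numpy as np

beta = -30e-3                  # CBG chirp rate [nm/mm] (i.e., -30 pm/mm)
n = 1.5                        # average refractive index of the CBG
phi1 = np.deg2rad(16.0)        # angle of incidence

# Geometric factor zeta(phi_1) for oblique incidence
zeta = n * np.sin(2 * phi1) / (n**2 - np.sin(phi1)**2)

# Double-pass spatial chirp rate: x_1 = alpha * (lambda - lambda_o)
alpha = 2 * zeta / beta
print(f"zeta = {zeta:.3f}, alpha = {alpha:.1f} mm/nm")
# -> alpha ~ -24 mm/nm, to be compared with the measured -22.2 mm/nm
\end{verbatim}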
\begin{figure}[t!]
\begin{center}
\includegraphics[width=7cm]{Linear_chirp_spectrum.png}
\end{center}\vspace{-5mm}
\caption{The measured spatial spread of the spectrum after the spectral-analysis stage in Fig.~\ref{Fig:ST_setup}(b). The measurements confirm a linear spatial chirp over a temporal bandwidth of $\Delta\lambda\approx0.8$~nm spread over $\approx18$~mm in space horizontally along $x_{1}$. The straight line is a theoretical fit, and the symbols are data points.}
\label{Fig:Spectrum_afterCBG}
\vspace{12mm}
\end{figure}
\subsubsection*{Spectral uncertainty induced by CBGs}
A critical parameter characterizing ST wave packets in general is the so-called `spectral uncertainty' $\delta\lambda$: the unavoidable `fuzziness' in the association between spatial frequencies ($k_{x}$ for 2D ST wave packets and $k_{r}$ for 3D ST wave packets) and the wavelength $\lambda$. In the case of a CBG, its spectral resolving power (the smallest spectral shift that can be resolved) determines $\delta\lambda$ \cite{Loewen18Book}. In the case of surface gratings, the spectral uncertainty at a central wavelength $\lambda_{\mathrm{o}}$ is given by $\delta\lambda\!=\!\lambda_{\mathrm{o}}/N$, where $N$ is the number of grating rulings covered by the incident beam; $\delta\lambda$ thus depends on the beam size and the grating period. To minimize $\delta\lambda$ with a surface grating of fixed ruling density, it is therefore desirable to use the largest possible incident beam size. In contrast, the spectral resolving power of a CBG at normal incidence $\delta\lambda_{\mathrm{BG}}$ -- and thus the associated spectral uncertainty -- is determined by its refractive-index modulation contrast and the grating length \cite{Glebov14SPIE}. Furthermore, an additional contribution to the spectral uncertainty $\delta\lambda_{\mathrm{GE}}$ arises at oblique incidence due to a geometric effect stemming from the finite spatial width of the input beam. When a beam of finite transverse width $w$ is obliquely incident on the CBG, the shifted spectral intervals reflected from different points within the CBG overlap, thereby leading to an additional contribution to the spectral uncertainty that is proportional to the input beam width, $\delta\lambda_{\mathrm{GE}}\propto w$. The total spectral uncertainty of the CBG at oblique incidence is estimated to be $\delta\lambda_{\mathrm{CBG}}\!=\!\sqrt{(\delta\lambda_{\mathrm{BG}})^{2}+(\delta\lambda_{\mathrm{GE}})^{2}}$. When the beam passes through the CBG twice, a factor-of-2 \textit{drop} is expected in the spectral uncertainty due to the doubling of the total CBG length traversed by the beam.
In our experiments, we measured $\delta\lambda_{\mathrm{CBG}}$ as a function of the input beam width $w$ after traversing the CBG once [Fig.~\ref{Fig:UncertaintyOfCBG}(a)] and twice [Fig.~\ref{Fig:UncertaintyOfCBG}(b)]. The beam width $w$ is tuned using a variable-width aperture preceding the CBG [Fig.~\ref{Fig:ST_setup}(b)]. The measurements confirm that $\delta\lambda_{\mathrm{CBG}}$ increases with $w$ as expected. However, this trend ends when $w\!\sim\!2$~mm: below this beam size, the spectral uncertainty begins to increase rather than decrease further, thereby leading to a valley in the plots in Fig.~\ref{Fig:UncertaintyOfCBG}(a,b) in the vicinity of $w\!\sim\!2$~mm. This unexpected increase in $\delta\lambda_{\mathrm{CBG}}$ at small $w$ is most likely caused by diffraction inside the CBG resulting from the small input beam size. The optimum beam width is thus $w\!\approx\!2$~mm, which yields a minimum spectral uncertainty of $\delta\lambda_{\mathrm{CBG}}\approx35$~pm in the double-pass configuration. The synthesis experiments we performed made use of $w\!\approx\!2$~mm, and thus we take $\delta\lambda\geq35$~pm for the 3D ST wave packets produced.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=15cm]{CBG_uncertainty.png}
\end{center}\vspace{-5mm}
\caption{Spectral uncertainty for the CBG. (a) The measured spectral uncertainty $\delta\lambda_{\mathrm{CBG}}$ from the CBG while varying the input beam width in a single-pass configuration, and (b) in a double-pass configuration. The vertical dashed lines in (a) and (b) identify the operating point for our experiments.}
\label{Fig:UncertaintyOfCBG}
\vspace{12mm}
\end{figure}
\subsection{Tunable 1D spectral transformation}
The second stage of the 3D ST wave-packet synthesis setup is the tunable 1D spectral transformation. This is a coordinate transformation performed along the horizontal $x$-axis. Because the wavelengths are arranged linearly along $x$ after the CBG and the field is uniform along $y$, this 1D coordinate transformation results in a reshuffling of the arrangement of the wavelengths along $x$, thereby producing a spectral transformation. This spectral transformation aims at achieving two goals: (1) pre-compensating for the exponentiation in the subsequent 2D coordinate transformation; and (2) tuning the group velocity $\widetilde{v}$ of the synthesized 3D ST wave packet.
The $x$-axis at the input plane is labelled $x_{1}$ and that at the output plane $x_{2}$. The targeted transformation then takes the form:
\begin{subequations}\label{Eq:1DTransform}
\begin{align}
x_{2}=&A\,\,\ln\left(\frac{x_{1}}{B}\right),\\
y_{2}=&y_{1},
\end{align}
\end{subequations}
where $A$ and $B$ are the transformation parameters, the physical significance of which will be discussed shortly. The uniform field distribution along $y$ remains intact. This conformal mapping is implemented by means of two phase patterns placed at the input and output planes and separated by a distance $d_{1}$. The first phase distribution $\Phi_{1}(x_{1},y_{1})$ at the input plane performs the particular transformation given in Eq.~\ref{Eq:1DTransform}. In other words, the wavelength $\lambda$ located at position $x_{1}$ is now located at position $x_{2}$. However, such a transformation does not produce a collimated field. The second phase distribution $\Phi_{2}(x_{2},y_{2})$ placed at the output plane collimates the transformed wave front to yield an afocal transformation. Usually, a lens is placed between the two phase plates in a $2f$-configuration. In our experiments, we do not make use of a lens and instead distribute the quadratic phase associated with such a lens between the two phase plates. The required phase profiles $\Phi_{1}(x_1,y_{1})$ and $\Phi_{2}(x_{2},y_{2})$ -- both of which are independent of $y$ -- can be derived using the methodology outlined in \cite{Hossack87JOMO}, and they take the form:
\begin{subequations}\label{Eq:1DTPhase}
\begin{align}
\Phi_{1}(x_{1},y_{1})=&\frac{kA}{d_{1}}\left[ x_{1}\ln\left(\frac{x_{1}}{B}\right)-x_{1}\right] - \frac{k x_{1}^{2}}{2d_{1}},\\
\Phi_{2}(x_{2},y_{2})=&\frac{kAB}{d_{1}}\exp{\left(\frac{x_{2}}{A}\right)}-\frac{k x_{2}^{2}}{2d_{1}},
\end{align}
\end{subequations}
where $k=\frac{2\pi}{\lambda}$ is the wave number. In the setup, $\Phi_{1}(x_1,y_{1})$ and $\Phi_{2}(x_{2},y_{2})$ are displayed on two phase-only SLMs with $d_{1}\!=\!400$~mm [Fig.~\ref{Fig:ST_setup}(b)]. Because the phase profiles according to Eq.~\ref{Eq:1DTPhase} depend only on $x$ and not $y$, 1D SLMs could be used here in principle. However, as we show later, utilizing 2D SLMs provides the possibility of also modulating the field along $y$, which translates into an azimuthal modulation after the subsequent 2D coordinate transformation.
In our experiments, we fix $A\!=\!0.5$~mm and tune $B$ over the range $B\!=\![-15,20]$~mm to control the group velocity of the 3D ST wave packets, as explained in detail later. See Fig.~\ref{Fig:TranformationPlot}(a) for a plot of the relationship between $x_{1}$ and $x_{2}$ when making use of these values.
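As an illustration, a Python/NumPy sketch of how the two phase profiles of Eq.~\ref{Eq:1DTPhase} might be sampled on an SLM pixel grid and wrapped for display; the pixel pitch and grid geometry are illustrative assumptions rather than the exact SLM specifications:
\begin{verbatim}
import numpy as np

lam = 800e-9                         # design wavelength [m]
k = 2 * np.pi / lam
A, B, d1 = 0.5e-3, 5e-3, 0.4         # A, B [m] and separation d1 [m]

# Illustrative SLM grid (1920 x 1080 pixels, 8-um pitch assumed)
x = (np.arange(1920) - 960) * 8e-6
y = (np.arange(1080) - 540) * 8e-6
X, Y = np.meshgrid(x, y)             # both profiles are independent of y

# Phi_1 is defined only where x_1/B > 0, so the sign of B selects
# which half of the spatially chirped spectrum is addressed.
with np.errstate(invalid="ignore", divide="ignore"):
    Phi1 = (k * A / d1) * (X * np.log(X / B) - X) - k * X**2 / (2 * d1)
Phi2 = (k * A * B / d1) * np.exp(X / A) - k * X**2 / (2 * d1)

# Wrap to (-pi, pi] before display on the phase-only SLMs
Phi1_w, Phi2_w = np.angle(np.exp(1j * Phi1)), np.angle(np.exp(1j * Phi2))
\end{verbatim}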
\begin{figure}[t!]
\begin{center}
\includegraphics[width=13cm]{1D_and_2D_transf.png}
\end{center}\vspace{-5mm}
\caption{Spatial transformations associated with the 1D spectral transformation and the 2D coordinate transformation. (a) Plot of the transformation from the input plane $x_{1}$ to the output plane $x_{2}$ of the tunable 1D spectral transformation. We make use of the parameter values $A\!=\!0.5$~mm, and $B\!=\!-5$~mm (blue line) or $B\!=\!5$~mm (dashed red line). (b) Plot of the relationship between $x_{3}$ at the input plane to the radius $r$ at the output plane for the fixed 2D coordinate transformation. We make use of the parameter values $D\!=\!2A\!=\!1$~mm and $C\!=\!4.77$~mm.}
\label{Fig:TranformationPlot}
\vspace{12mm}
\end{figure}
The field after this 1D spectral transformation is imaged by a one-to-one $4f$ system comprising two spherical lenses $L_{\mathrm{s}3}$ and $L_{\mathrm{s}4}$, as shown in Fig.~\ref{Fig:ST_setup}(b), from the output plane of the 1D spectral transformation $(x_{2},y_{2})$ to the input plane ($x_{3},y_{3}$) of the 2D coordinate transformation. Thus, the beam is flipped along the $x$ and $y$ axes after this $4f$ system: $x_{3}=-x_{2}$ and $y_{3}=-y_{2}$. This is important to keep in mind for determining the required transformation parameters $A$ and $B$ based on the desired group velocity $\widetilde{v}$. In addition, a spatial filter is placed in the Fourier plane of the $4f$ system to eliminate the undesired zeroth-order field component resulting from the limited efficiency of SLM$_{1}$ and SLM$_{2}$.
\subsection{Fixed 2D coordinate transformation}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=9cm]{2D_transform.png}
\end{center}\vspace{-5mm}
\caption{Principle of operation for the fixed 2D coordinate transformation. (a) For a monochromatic input field in the form of a vertical rectangular strip, the 2D transformation converts it into an annulus. (b) When the vertical rectangular strip is shifted horizontally, an annulus of different radius is formed at the output, which is nevertheless concentric with the annulus in (a). (c) In our experiments, the input field is a spatially extended spectrally resolved wave front. The transformation converts the input into an annulus with the wavelengths arranged radially in circles.}
\label{Fig:2DTransform}
\vspace{12mm}
\end{figure}
The third stage of the spatio-temporal synthesis strategy for 3D ST wave packets is a fixed 2D coordinate transformation; see Fig.~\ref{Fig:ST_setup}(b) and Fig.~\ref{Fig:2DTransform}. This transformation performs a conformal mapping from log-polar to Cartesian coordinate systems. The input plane is spanned by Cartesian coordinates $(x_{3},y_{3})$ and the output by $(x_{4},y_{4})$. The transformation maps the input Cartesian coordinate system $(x_{3},y_{3})$ to a polar coordinate system at the output $(x_{3},y_{3})\rightarrow(r,\varphi)$, where $r\!=\!\sqrt{x_{4}^{2}+y_{4}^{2}}$ and $\varphi\!=\!\arctan{(\tfrac{y_{4}}{x_{4}})}$; see Fig.~\ref{Fig:TranformationPlot}(b). The transformation is given explicitly in the following form \cite{Bryngdahl74JOSA,Hossack87JOMO,Saito83OC}:
\begin{subequations}\label{Eq:2DTransform}
\begin{align}
r=&C\exp{\left(-\frac{x_{3}}{D}\right)}, \\
\varphi=&\frac{y_{3}}{D}.
\end{align}
\end{subequations}
The transformation parameter $D$ is chosen to map the vertical size of the input field $y_{3}\!=\![-y_{3}^{\mathrm{max}},y_{3}^{\mathrm{max}}]$ to the range $\varphi\!=\![-\pi,\pi]$, and thus we impose $D=\frac{y_{3}^{\mathrm{max}}}{\pi}$; the value of $C$ is selected based on the aperture size of the optics used. The 2D coordinate transformation therefore maps a vertical line located at $x_{3}$ at the input plane into a circle of radius $r$ at the output plane. Shifting the location of the line horizontally along $x_{3}$ at the input thus results in a radial expansion or shrinkage of the circle radius at the output according to the direction of the shift at the input [Fig.~\ref{Fig:2DTransform}(a,b)]. If the input beam is a rectangle of width $\Delta x_{3}$, the transformed beam is an annulus of radial thickness $\Delta r$, where the inner and outer radii of the annulus correspond to the two vertical boundaries of the input rectangle.
This 2D coordinate transformation is implemented by two 2D phase patterns (each depending on both transverse coordinates, in contrast to the 1D spectral transformation) separated by a distance $d_{2}$: $\Phi_{3}$ at the input plane $(x_{3},y_{3})$ and $\Phi_{4}$ at the output plane $(x_{4},y_{4})$. For the mapping given in Eq.~\ref{Eq:2DTransform}, the phase patterns take the following form \cite{Hossack87JOMO,Berkhout10PRL,Lavery12OE}:
\begin{subequations}\label{Eq:2DTPhase}
\begin{align}
\Phi_{3}(x_{3},y_{3})=& - \frac{kCD}{d_{2}}\exp{\left(-\frac{x_{3}}{D}\right)}\cos{\left(\frac{y_{3}}{D}\right)} - \underbrace{\frac{k (x_{3}^{2}+y_{3}^{2})}{2d_{2}}}_{\textrm{lens term}},\\
\Phi_{4}(x_{4},y_{4})=& \frac{kD}{d_{2}}\left[y_{4}\,\mathrm{atan2}(y_{4},x_{4})-x_{4}\ln{\left(\frac{\sqrt{x_{4}^{2}+y_{4}^{2}}}{C}\right)} +x_{4}\right] - \underbrace{\frac{k (x_{4}^{2}+y_{4}^{2})}{2d_{2}}}_{\textrm{lens term}},
\end{align}
\end{subequations}
where $\mathrm{atan2}(y_{4},x_{4})$ is the two-argument arctangent function. Usually, a spherical lens is placed midway between the two phase patterns in a $2f$ configuration. Instead, the appropriate quadratic phases are added to $\Phi_{3}$ and $\Phi_{4}$; these are the last terms in Eqs.~\ref{Eq:2DTPhase}(a) and \ref{Eq:2DTPhase}(b) \cite{Lavery12OE}.
The 2D transformation can be implemented by making use of \textit{diffractive} optics \cite{Lavery11IOP,Berkhout11OL,Berkhout10PRL,Li19OE} or \textit{refractive} optics \cite{Lavery12OE}. We exploited both types of phase plates in our experiments to imprint the desired phase profiles in Eq.~\ref{Eq:2DTPhase}: diamond-edged refractive phase plates \cite{Lavery12OE} and analog diffractive phase plates \cite{Li19OE}.
\subsubsection*{Refractive phase plates}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=10cm]{Refractive_PP.png}
\end{center}\vspace{-5mm}
\caption{Refractive phase plates for implementing the 2D coordinate transformation. (a) Height profile of the refractive element 1, and (b) of the refractive element 2. The phase plates are designed such that the aperture size is $2y_{3}^{\mathrm{max}}=8$~mm, the distance separating them in the setup is $d_{2}=310$~mm, and the transformation parameters are $C=4.77$~mm, $D=1$~mm.}
\label{Fig:RefractivePP}
\vspace{12mm}
\end{figure}
The refractive optical elements used in our experiments are similar to those outlined by Lavery \textit{et al.} in \cite{Lavery12OE}, in which the transformation parameters are $C=4.77$~mm, $D\!=\!\tfrac{3.2}{\pi}\!\approx\!1$~mm, and $d_{2}=310$~mm. Each phase plate is made of the polymer PMMA (poly(methyl methacrylate)) with accurately manufactured height profiles $Z_{1}(x_{3},y_{3})$ and $Z_{2}(x_{4},y_{4})$ to imprint the required phase profiles given in Eq.~\ref{Eq:2DTPhase}. The phase encountered by light at a wavelength $\lambda$ traversing a height $Z$ of a material of refractive index $n$ -- with respect to the phase encountered over the same distance in vacuum -- is given by $\Phi\!=\!2\pi(n-1)Z/\lambda$. Thus, the height profile of the first element is $Z_{1}(x_{3},y_{3})=\frac{\lambda}{2\pi(n-1)}\Phi_{3}(x_{3},y_{3})$ [Fig.~\ref{Fig:RefractivePP}(a)] and that of the second element is $Z_{2}(x_{4},y_{4})=\frac{\lambda}{2\pi(n-1)}\Phi_{4}(x_{4},y_{4})$ [Fig.~\ref{Fig:RefractivePP}(b)]. Note that each surface height is wavelength-independent, and dispersion effects in the material manifest themselves as a change in the focal length $d_{2}$ of the integrated lens for different wavelengths. Hence, in the experiment the system can be tuned to a specific wavelength by changing the distance between the two elements.
The elements were diamond-machined using a Nanotech 3-axis (X,Z,C) ultra-precision lathe (UPL) in combination with a Nanotech NFTS6000 fast tool servo (FTS) system. The machined PMMA surfaces had a radius of 5.64~mm; the machining used an angular spacing of $1^{\circ}$, a radial spacing of 5~$\mu$m, a spindle speed of 500~RPM, a roughing feed rate of 5~mm/minute with a cut depth of 20~$\mu$m, and a finishing feed rate of 1~mm/minute with a cut depth of 10~$\mu$m \cite{Dow91PE}. The total sag height difference for each part was relatively small ($\approx\!115$~$\mu$m for surface 1 and $\approx\!144$~$\mu$m for surface 2). The transmission efficiency of the combination of the elements is $\approx\!85\%$.
\subsubsection*{Diffractive phase plates}
The diffractive phase plates were fabricated in fused silica using Clemson University facilities. The fabrication process is outlined in \cite{Sung06AO}: a binary phase grating is written on a stepper mask with an electron beam, and this analog mask is subsequently transferred into a fused silica substrate with projection lithography. The phase grating period is designed to be larger than the cutoff period of the projection stepper for higher diffraction orders, so only the zeroth-order diffracted light from the stepper is transmitted. The transmission coefficient of the stepper light is then a function of the duty cycle of the electron-beam-patterned binary phase grating. The spatial intensity distribution of light in the wafer plane can be controlled with a spatial duty-cycle function, which exposes the I-line resist with a spatially varying analog intensity profile. This allows fabrication of analog diffractive optics with a single exposure from the stepper, rather than binary $2^{n}$-level diffractive optics, resulting in high-efficiency optics. The transmission efficiency of the combination of the two faces is $\approx\!92\%$.
The design parameters for the analog diffractive phase plates are chosen as follows: $D\!=\!\frac{7}{\pi}\!\approx\!2.2$~mm, $C\!=\!6$~mm, $\lambda_{\mathrm{o}}\!=\!798$~nm, and $d_{2}\!=\!225$~mm. These design parameters were optimized so the paraxial approximation remains valid over the desired transformation range of 5~mm.
\begin{table}[b!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
& Group velocity $\widetilde{v}$ & $B$ \\ \hline\hline
Positive-subluminal & $0<\widetilde{v}<c$ & $B<0$ \\ \hline
Positive-luminal & $\widetilde{v}=c$ & $B=\infty$ \\ \hline
Positive-superluminal & $c<\widetilde{v}<\infty$ & $\frac{|\alpha|\lambda_{o}C^2}{f^{2}}<B<\infty$ \\ \hline
Infinite-$\widetilde{v}$ & $\widetilde{v}=\infty$ & $B=\frac{|\alpha|\lambda_{o}C^2}{f^{2}}$ \\ \hline
Negative-superluminal & $\widetilde{v}<-c$ & $\frac{|\alpha|\lambda_{o}C^2}{2f^{2}}<B<\frac{|\alpha|\lambda_{o}C^2}{f^{2}}$\\ \hline
Negative-luminal & $\widetilde{v}=-c$ & $B=\frac{|\alpha|\lambda_{o}C^2}{2f^{2}}$ \\ \hline
Negative-subluminal & $-c<\widetilde{v}<0$ & $0<B<\frac{|\alpha|\lambda_{o}C^2}{2f^{2}}$ \\ \hline
\end{tabular}
\end{center}
\caption{Conversion from the desired group velocity $\widetilde{v}$ to the required transformation parameter $B$.}
\label{Table:vgVsB}
\end{table}
\subsection{Spatial Fourier transform}
After the 2D coordinate transformation, the spatially resolved wavelengths are arranged radially, $r(\lambda)$. A spherical converging lens L$_{\mathrm{s}5}$ [Fig.~\ref{Fig:ST_setup}(b)] performs a 2D Fourier transform to produce the 3D ST wave packet in physical space. In effect, each spatial position is now mapped into a spatial frequency $k_{r}(\lambda)\!=\!k\tfrac{r}{f}$, where $f$ is the focal length of the lens. Because of the small bandwidth $\Delta\lambda$ utilized in our experiments, we can use the approximation $k\!\approx\!k_{\mathrm{o}}$ in the scaling of the radial spatial frequency.
\subsection{Full system analysis}
We can now combine all the stages in our synthesis system to identify the role played by the various parameters involved: the chirp rate $\alpha$ from the CBG; the parameters $A$ and $B$ from the tunable 1D spectral transformation; the parameters $C$ and $D$ from the fixed 2D coordinate transformation; and the focal length $f$ of the Fourier-transforming lens. After the CBG we have $x_{1}(\lambda)\!=\!\alpha(\lambda-\lambda_{\mathrm{o}})$; after the tunable 1D spectral transformation we have $x_{2}(\lambda)\!=\!A\ln{\{\tfrac{x_{1}(\lambda)}{B}\}}$; after the $4f$ imaging system we have $x_{3}\!=\!-x_{2}$; after the fixed 2D coordinate transformation we have $r(\lambda)\!=\!C\exp{\{-\tfrac{x_{3}(\lambda)}{D}\}}$; and, finally, after the Fourier-transforming lens we have $k_{r}\!=\!k_{\mathrm{o}}\tfrac{r(\lambda)}{f}$. Combining all these steps, we have:
\begin{equation}
k_{r}(\lambda)=C\frac{k_{\mathrm{o}}}{f}\left(\frac{\alpha}{B}(\lambda-\lambda_{\mathrm{o}})\right)^{A/D}.
\end{equation}
To obtain the desired radial chirp from Eq.~\ref{Eq:Parabola3D}, we must impose the constraint $\tfrac{D}{A}\!=\!2$, whereupon:
\begin{equation}\label{Eq:SpectralCorrelation}
k_{r}(\lambda)=C\frac{k_{\mathrm{o}}}{f}\sqrt{\frac{\alpha}{B}(\lambda-\lambda_{\mathrm{o}})}.
\end{equation}
The parameter $C\!=\!4.77$~mm is determined by the size of the optics (the aperture size is $2y_{3}^{\mathrm{max}}\!=\!8$~mm), $f\!=\!300$~mm, $\alpha\!=\!-22.2$~mm/nm is determined by the CBG, and $\lambda_{\mathrm{o}}\!\approx\!798$~nm. This leaves $B$ as the sole free parameter determining the chirp rate, and thus also the group velocity $\widetilde{v}$.
By comparing the result in Eq.~\ref{Eq:SpectralCorrelation} to the target spatio-temporal spectrum for 3D ST wave packets given in Eq.~\ref{Eq:Parabola3D}, we can obtain the group index:
\begin{equation}\label{Eq:ExpGroupIndex}
\widetilde{n}=1+\left(\frac{C^2\alpha\lambda_{\mathrm{o}}}{f^2}\right)\frac{1}{B}=1-\frac{4.5~\mathrm{mm}}{B}.
\end{equation}
As a result, we can generate ST wave packets with group velocities from subluminal $(B<0)$ to superluminal $(B>C^{2}|\alpha|\lambda_{\mathrm{o}}/f^2)$, and in principle negative group velocities $(0<B<C^{2}|\alpha|\lambda_{\mathrm{o}}/f^2)$. See Table~\ref{Table:vgVsB} for the conversion between the desired group-velocity ranges and the required value of the parameter $B$.
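This conversion is simple to script; the following Python sketch inverts Eq.~\ref{Eq:ExpGroupIndex} to obtain the value of $B$ required for a target group velocity (the example velocities are arbitrary):
\begin{verbatim}
C, f = 4.77, 300.0          # transformation parameter and focal length [mm]
alpha = -22.2               # CBG spatial chirp rate [mm/nm]
lam_o = 798.0               # central wavelength [nm]

K = C**2 * alpha * lam_o / f**2        # = -4.5 mm

def B_for_group_velocity(v_over_c):
    """Return B [mm] yielding v = v_over_c * c (luminal case excluded)."""
    n_tilde = 1.0 / v_over_c           # group index
    return K / (n_tilde - 1.0)

for v in (0.5, 2.0, -2.0, -0.5):       # sub-, super-, and negative examples
    print(f"v = {v:+.1f}c  ->  B = {B_for_group_velocity(v):+.2f} mm")
\end{verbatim}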
\subsection{Synthesizing pulsed Bessel beams with separable spatio-temporal spectrum}
To compare the performance of 3D ST wave packets with that of the conventional pulsed Bessel beams shown in Fig.~5(a-b) in the main text, we bypass the spectral analysis and 1D spectral transformation stages and send the input laser pulses directly to the 2D transformation. Consequently, separable pulsed Bessel beams are produced. In addition, we spectrally filter $\Delta\lambda\approx0.3$~nm of the input pulses to obtain a spectral bandwidth comparable to that of the 3D ST wave packets.
\clearpage
\section{Characterization of space-time wave packets}
We characterize the 3D ST wave packets in four domains:
\begin{enumerate}
\item The spatio-temporal spectral plane $(k_{x},k_{y},\lambda)$ or $(k_{r},\lambda)$, to confirm the presence of the desired spectral structure.
\item In physical space we follow the evolution of the time-averaged intensity $I(x,y,z)$ along the $z$-axis to verify the diffraction-free propagation of the 3D ST wave packets.
\item We reconstruct the spatio-temporal intensity profile $I(x,y,z;t)$ using time-resolved linear interferometry, which also enables us to estimate the group velocity $\widetilde{v}$.
\item The amplitude and phase of the complex-field envelope $\psi(x,y,z;t)\!=\!|\psi(x,y,z;t)|e^{i\phi(x,y,z;t)}$ are reconstructed using off-axis digital holography.
\end{enumerate}
\subsection{Spatio-temporal spectrum measurements}
The spatio-temporal spectrum $|\widetilde{\psi}(k_{x},k_{y};\lambda)|^{2}$ is captured at CCD$_2$ (The ImagingSource, DMK 33UX178) as shown in Fig.~\ref{Fig:ST_setup}, which corresponds to the Fourier plane ($k_{x},k_{y})$ of the synthesized 3D ST wave packets [Fig.~4(a) in the main text]. Because the camera cannot distinguish between the various wavelengths, we resolve the temporal spectrum in two steps. We first scan the fiber tip connected to an OSA along the horizontal axis $x_{2}$ after the 1D spectral transformation and determine the spatial chirp $x_{2}(\lambda)$. The spatial chirp after the $4f$ imaging system is identical to $x_{2}(\lambda)$ except for a spatial flip [Fig.~\ref{Fig:SpectrumCalibration}(a)]. In a second step, we verify experimentally the impact of the 2D coordinate transformation by scanning a vertical slit horizontally along $x_{3}$ and measuring the radius of the annulus formed at the output [Fig.~\ref{Fig:SpectrumCalibration}(b)]. By combining these two measurements we obtain the spatial chirp along the radial direction $r(\lambda)$ after the 2D coordinate transformation [Fig.~\ref{Fig:SpectrumCalibration}(c)]. Finally, we obtain the spatio-temporal spectrum $k_{r}(\lambda)$ [Fig.~\ref{Fig:SpectrumCalibration}(d) and Fig.~4(iii) in the main text] by converting from physical space to Fourier space via $k_{r}\!=\!k\frac{r}{f}$, where $k\!=\!\frac{2\pi}{\lambda}$ is the wave number and $f\!=\!300$~mm is the focal length of the Fourier-transforming lens L$_{\mathrm{s}5}$. The solid curves correspond to theoretical predictions and the dots correspond to data points.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17cm]{Spectrum_calibration.png}
\end{center}\vspace{-5mm}
\caption{Evolution of the spatio-temporal spectrum of the 3D ST wave packets through the synthesis setup. (a) Measured relationship between $\lambda$ and the transverse coordinate $x_{3}$ after the 1D spectral transformation. (b) Measured relationship between $x_{3}$ before the 2D coordinate transformation and $r$ after it. (c) By combining (a) and (b), we obtain the relationship between $\lambda$ and $r$ after the 2D coordinate transformation. (d) Transforming the relationship in (c) to one between $\lambda$ and the radial spatial frequency $k_{r}$.}
\label{Fig:SpectrumCalibration}
\vspace{12mm}
\end{figure}
\subsection{Time-averaged intensity measurements}
The axial evolution of the time-averaged intensity profile $I(x,y,z)\!=\!\int\!dt|\psi(x,y,z;t)|^{2}$ is obtained in physical space by scanning CCD$_{1}$ (The ImagingSource, DMK 27BUP031) along the propagation axis $z$ [Fig.~\ref{Fig:ST_setup}(b)]. This measurement confirms the diffraction-free propagation of the 3D ST wave packets, as shown in Fig.~5 in the main text.
\subsection{Time-resolved intensity measurements}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=14cm]{ST_characterization.png}
\end{center}\vspace{-5mm}
\caption{Schematic illustration of the setup for reconstructing the spatio-temporal profile of 3D ST wave packets and estimating their group velocity. The spatio-temporal synthesis arrangement from Fig.~\ref{Fig:ST_setup}(b) is placed in one arm of a Mach-Zehnder interferometer, and an optical delay $\tau$ is placed in the reference arm that is traversed by the short pulses from the initial laser source.}
\label{Fig:ST_characterization}
\vspace{12mm}
\end{figure}
The spatio-temporal intensity profile of the 3D ST wave packet is measured using a Mach-Zehnder interferometer in which the spatio-temporal synthesis arrangement is placed in one arm. The second (reference) arm is traversed by the original short plane-wave laser pulses from the source (pulse width $\sim\!100$~fs) that encounter an optical delay line $\tau$ [Fig.~\ref{Fig:ST_characterization}]. The two wave packets (the 3D ST wave packet and the reference pulse) propagate co-linearly after they are merged at the beam splitter BS$_{2}$. When the two wave packets overlap in space and time, spatially resolved interference fringes are recorded by CCD$_{1}$ at a fixed axial position $z$, from whose visibility we extract the spatio-temporal profile at that plane. By scanning the delay $\tau$, we can reconstruct the spatio-temporal intensity profile $I(x=0,y,z;\tau)$ at the fixed axial plane $z$. Using this approach, we obtained the wave packet profiles shown in Fig.~6(b-d) in the main text (see \cite{Kondakci19NC,Bhaduri20NatPhot} for further details).
To estimate the group velocity $\widetilde{v}$ of the 3D ST wave packet, we first arrange for the 3D ST wave packet and the reference pulse to overlap at $z\!=\!0$ as described above. We then axially displace CCD$_{1}$ to a different axial position $\Delta z$, which results in a loss of interference visibility due to the group-velocity mismatch between the 3D ST wave packet traveling at $\widetilde{v}\!=\!c\tan{\theta}$ (where $\theta$ is the spectral tilt angle) and the reference pulse traveling at $\widetilde{v}\!=\!c$. The interference is retrieved, however, by adjusting the delay by $\Delta t$ in the reference arm, from which we obtain the relative group delay between the two wave packets and thence the group velocity $\widetilde{v}\!=\!\tfrac{\Delta z}{\Delta t}$ for the 3D ST wave packet [Fig.~6(e) in the main text]. By repeating this procedure for ST wave packets with different $\theta$ (by varying the parameter $B$), we obtain the data plotted in Fig.~6(f) in the main text.
The uncertainty in the group-velocity measurement is estimated using the propagation-of-errors principle \cite{Bevington02BOOK}. In our case, the largest contribution to errors in estimating $\widetilde{v}$ stems from the uncertainty $\delta t$ in estimating the group delay $\Delta t$, which is limited by the pulse width $\Delta T$; we take $\delta t\!=\!\Delta T/10$. The error in the estimated value of $\widetilde{v}$ is $\delta\widetilde{v}\!=\!|\tfrac{\partial\widetilde{v}}{\partial t}|\delta t\!=\!\tfrac{\widetilde{v}^2}{\Delta z}\delta t$. Using this relationship, we calculate the error bars in Fig.~6(f) in the main text.
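A short sketch of this estimate (Python; the numbers below are illustrative placeholders, not the measured values):
\begin{verbatim}
C_MM_PER_PS = 0.299792458              # speed of light [mm/ps]

def group_velocity(dz_mm, dt_ps, pulse_width_ps):
    """Estimate v = dz/dt and its propagated uncertainty, taking the
    delay uncertainty to be one tenth of the pulse width."""
    v = dz_mm / dt_ps                  # [mm/ps]
    delta_t = pulse_width_ps / 10.0
    delta_v = v**2 / dz_mm * delta_t   # dv = (v^2 / dz) * dt
    return v / C_MM_PER_PS, delta_v / C_MM_PER_PS   # in units of c

v_c, dv_c = group_velocity(dz_mm=30.0, dt_ps=80.0, pulse_width_ps=4.0)
print(f"v = {v_c:.3f}c +/- {dv_c:.3f}c")
\end{verbatim}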
\subsection{Measurements of the field amplitude and phase for 3D ST wave packets}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=17cm]{Phase_retrieval.png}
\end{center}\vspace{-5mm}
\caption{Schematic depiction of the off-axis holography strategy for reconstructing the complex field for 3D ST wave packets. (a) The system is similar to that in Fig.~\ref{Fig:ST_characterization} except that the reference pulse is spatially tilted with respect to the 3D ST wave packet. (b) Outline of the procedure for estimating the field amplitude and phase from the spatially resolved interference fringes.\\}
\label{Fig:Phase_retrieval}
\vspace{12mm}
\end{figure}
Finally, we make use of off-axis digital holography \cite{Cuche99OL,Cuche00AO,Sanchez-Ortiga14AO} to obtain the field amplitude and phase of 3D ST wave packets at fixed locations along $z$ and at fixed instances in time $\tau$. This is especially crucial for measuring the phase of the OAM-carrying 3D ST wave packets shown in Fig.~7(b) in the main text.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=13cm]{Measured_phase.png}
\end{center}\vspace{-5mm}
\caption{Measured phase profiles for 3D ST wave packets with and without OAM. (a) Measured transverse phase profile $\phi(x,y)$ (first row) and on-axis phase profile $\phi(x,y=0)$ along the $x$-axis (second row) at delays $\tau\approx0,\pm5$~ps for a superluminal 3D ST wave packet. Here $z\!=\!30$~mm and $\ell=0$; i.e., there is no OAM structure. (b) Same as (a) except that $\ell=1$; i.e., OAM structure is introduced into the field.}
\label{Fig:Measured_phase}
\vspace{12mm}
\end{figure}
We make use of the off-axis digital holography (ODH) methodology \cite{Cuche99OL,Cuche00AO,Sanchez-Ortiga14AO} to reconstruct the complex field of 3D ST wave packets $\psi(x,y,z;t)\!=\!|\psi(x,y,z;t)|e^{i\phi(x,y,z;t)}$. For this purpose, the same Mach-Zehnder configuration from the previous section is exploited but with a slight modification -- we add a small angle between the propagation directions of the short reference pulse and the 3D ST wave packets, which are arranged to overlap in space and time at a fixed axial position $z$ [Fig.~\ref{Fig:Phase_retrieval}(a)]. The interference pattern captured on CCD$_{1}$ contains a constant background term and an interference term of interest [Fig.~\ref{Fig:Phase_retrieval}(b)]. Following the ODH algorithm, we perform a digital fast Fourier transform (FFT)
to separate the constant background from the interference term. By digitally isolating the first diffraction order of the Fourier-transformed image, we access the interference term that contains the complex field. Finally, the inverse FFT of the re-centered first order gives the amplitude $|\psi(x,y,z;\tau)|$ and phase $\phi\{\psi(x,y,z;\tau)\}$ of the 3D ST wave packets at that axial location $z$ and time $\tau$ (see \cite{Cuche99OL,Cuche00AO,Sanchez-Ortiga14AO} for more details). By repeating this procedure for several instances $\tau$ we obtain the data plotted in Fig.~7 of the main text.
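The digital part of this procedure reduces to a few lines; a Python/NumPy sketch, where the location and size of the crop window around the first diffraction order are free parameters chosen by inspecting the Fourier-transformed interferogram:
\begin{verbatim}
import numpy as np

def odh_reconstruct(interferogram, order_center, half_width):
    """Recover the complex envelope from an off-axis interferogram."""
    F = np.fft.fftshift(np.fft.fft2(interferogram))
    r0, c0 = order_center              # pixel location of the first order
    h = half_width
    crop = F[r0 - h:r0 + h, c0 - h:c0 + h]   # isolate the first order
    # Re-center the order (removing the tilt carrier) and pad to full size
    Fc = np.zeros_like(F)
    nr, nc = F.shape
    Fc[nr//2 - h:nr//2 + h, nc//2 - h:nc//2 + h] = crop
    field = np.fft.ifft2(np.fft.ifftshift(Fc))
    return np.abs(field), np.angle(field)    # |psi| and phi
\end{verbatim}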
The measured phase structure is plotted in Fig.~\ref{Fig:Measured_phase}(a) for $\ell\!=\!0$ and in Fig.~\ref{Fig:Measured_phase}(b) for $\ell\!=\!1$. In the first case, in the absence of OAM, the phase distribution takes a Gaussian form, whose curvature increases as we move away from the wave-packet center at $\tau\!=\!0$. We highlight the phase along the center of the wave front, $y\!=\!0$, especially where the intensity of the field is appreciable (between the two vertical dotted lines). Here the phase is flat at $\tau\!=\!0$ and becomes Gaussian as $\tau$ increases. This is to be expected for a monochromatic Gaussian beam going through its focal point (where the phase front is flat). However, this is seen here in the time domain, which is a manifestation of so-called `time diffraction' \cite{Porras17OL,Kondakci18PRL,Yessenov20PRLveiled}.
In the presence of non-zero OAM, a helical phase distribution is superimposed on the above-described behavior of the phase front. At $\tau\!=\!0$ the helical phase front flattens out, but it reappears as we move away from the wave-packet center [Fig.~\ref{Fig:Measured_phase}(b)]. Once again, this is the expected behavior in space for a diffracting OAM mode when examined from the beam waist outward, but it is observed here in the time domain.
\clearpage
\section{Introduction}
\vspace{-8pt}
Modern society has refined the condition of solitude to the point where countless seniors, marginalized because of their age, have magically disappeared: left to their own devices, these individuals fade from social life and essentially live in a parallel world.
The COVID-19 crisis and resulting lockdowns have both entrenched this phenomenon and helped to reveal how widespread it really is. Can artificial intelligence (AI) help to reconnect generations by making them part of a transgenerational art experience? At the crossroads of laughter—an act of communication between two individuals—and artificial intelligence—a purely functional entity—can we rediscover our humanity?
\vspace{-8pt}
\paragraph{An interactive experience.}
The end-goal of this project is to connect people via an interactive web experience driven by synthetic laughter. Using our models, we
will explore the phenomenon of empathy triggered by the sound of laughter, the relationship between individual memory and laughter, and how the sound of laughter evolves over a lifetime.
\vspace{-8pt}
\paragraph{Laughter generation for advancing audio synthesis research.}
With stunning advancements in image synthesis~\cite{karras2017progressive, karras2019style, karras2020analyzing, karnewar2020msg}, Generative Adversarial Networks (GANs) ~\cite{goodfellow2014generative} have gained the attention of researchers in the field of audio synthesis~\cite{donahue2018adversarial, engel2019gansynth, kumar2019melgan, binkowski2019high}. Synthesizing audio opens new doors for musicians and artists and enables them to expand their repertoire of expression~\cite{donahue2018adversarial}.
Despite significant progress by the ML community on methods for audio synthesis, there have been only a few attempts on the topic of laughter synthesis~\cite{mancini2013laugh}, and none leveraging modern approaches such as GANs.
Compared to speech, laughter is made challenging by its many context-dependent attributes, such as emotions~\cite{schroder2001emotional}, age, and gender. Moreover, compared to well-studied topics like speech synthesis, there are no established evaluation methods for synthesized laughter.
Thus, laughter synthesis has the potential to become a standard benchmark in unconditional audio synthesis.
\vspace{-8pt}
\paragraph{Related work.}
Previous work in the field of laughter generation~\cite{mori2019conversational} involves the use of oscillatory systems~\cite{sundaram2007automatic}, formant synthesis~\cite{oh2013lolol}, articulatory speech synthesis~\cite{lasarcyk2007imitating}, and hidden Markov models (HMMs)~\cite{urbain2014arousal}. Recently, some researchers have also used deep learning~\cite{mori2019conversational, tits2020laughter} methods for laughter synthesis. GANs are advantageous in learning a compact latent space allowing for interpolation, mixing, and style transfer, as well as emotional analysis. In this paper, we propose to use GANs for the purpose of unconditional laughter generation and manipulation (LaughGANter). Our aim is to enable a unique interactive art experience that surprises and connects through the primordial intimacy of our laughter interacting and juxtaposed with others.
\vspace{-8pt}
\section{Methodology}
\vspace{-8pt}
We adapt the Multi-Scale Gradient GAN (MSG-GAN)~\cite{karnewar2020msg} for laughter synthesis. Among popular image synthesis methods such as DCGAN~\cite{radford2015unsupervised}, ProgressiveGAN~\cite{karras2017progressive}, and StyleGAN~\cite{karras2019style}, we choose MSG-GAN: LaughGANter employs multi-scale gradients
on a DCGAN architecture to address the training instability prevalent in GANs. Progressive growing of network resolutions is avoided to limit the hyperparameters to be tuned (e.g. training schedule, learning rates for each resolution, etc.), while the multi-scale discriminator penalizes intermediate and final layer outputs of the generator.
We refer the reader to~\cite{karnewar2020msg} for an in-depth study of the MSG-GAN architecture. Concisely, the generator ($G$) samples a random vector $z$ from a normal distribution and outputs $x=G(z)$. The generated samples are fed into the discriminator ($D$), along with real samples, in order to measure the divergence. We perform \textit{pixel normalization} after every layer in $G$, and employ the \textit{Relativistic Average Hinge} loss~\cite{jolicoeur2018relativistic} in $D$. Moreover, inspired by~\cite{oord2016wavenet, odena2016deconvolution}, we explored the impact of \textit{induced receptive field expansion}, adding residual blocks with dilations after each upsampling layer in $G$, which exponentially increases the model's receptive field and can lead to better long range correlation in audio data.
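For concreteness, a minimal PyTorch sketch of two of these ingredients -- pixel normalization and the relativistic average hinge loss -- as they might appear in our training loop; tensor shapes and function names are illustrative, not the exact LaughGANter implementation:
\begin{verbatim}
import torch
import torch.nn.functional as F

def pixel_norm(x, eps=1e-8):
    # Normalize each feature vector across channels (after every G layer)
    return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + eps)

def rahinge_d_loss(real_logits, fake_logits):
    # Relativistic average hinge loss for the discriminator
    r_f = real_logits - fake_logits.mean()
    f_r = fake_logits - real_logits.mean()
    return (F.relu(1.0 - r_f).mean() + F.relu(1.0 + f_r).mean()) / 2

def rahinge_g_loss(real_logits, fake_logits):
    # Generator counterpart: the roles of real and fake are swapped
    r_f = real_logits - fake_logits.mean()
    f_r = fake_logits - real_logits.mean()
    return (F.relu(1.0 + r_f).mean() + F.relu(1.0 - f_r).mean()) / 2
\end{verbatim}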
\vspace{-8pt}
\paragraph{Categorical Conditional Generation.}
A more directed data generation process is employed through a conditional adaptation of MSG-GAN~\cite{mirza2014conditional}, facilitating laughter representation learning given additional context beyond unlabeled laughter (e.g. gender, age, humor style, etc.). Here, categorical information augments the latent noise vector in $G$ and each of the multi-scale vectors within $D$ through concatenation with an embedding of the context information.
\vspace{-10pt}
\begin{figure}[t]
\label{fig:spec}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{spec8000.png}
\vspace{.5cm}
\caption{\small Generated Mel spectrogram}
\end{subfigure}
\hspace{2pt}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width=\textwidth]{real1.png}
\vspace{.5cm}
\caption{ \small Real Mel spectrogram}
\end{subfigure}
\hspace{2pt}
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{FID-paper.pdf}
\caption{\small FID score for LaughGANter}
\end{subfigure}
\caption{(a)-(b) Log-magnitude spectrograms for real and generated samples show similar features on qualitative analysis and (c) FID score for LaughGANter decreases during training, as the diversity of the generated samples approaches that of the training data distribution.}
\label{fig:results}
\end{figure}
\section{Experiments}
\vspace{-8pt}
\paragraph{Setup.}
Our model is implemented in PyTorch. We use a laughter dataset containing 2145 laughter samples collected by the National Film Board of Canada. Samples are 1--8~s long (22.05~kHz mono), and were collected (and labeled) from subjects of different ages and genders (55\% male, 45\% female; 93\% adult, 6\% child, 1\% teen). The audio data is augmented using a random combination of additive noise, shifting, and changing pitch and duration (using \texttt{pyrubberband}). This data is then converted to Mel spectrograms and fed into the model. In addition to qualitative evaluation, i.e., listening to generated samples, we use the Fréchet inception distance (FID)~\cite{heusel2017gans} to assess the diversity of the generated samples compared to the training dataset. Instead of the Inception features used in the original FID score, we use features from a classifier (gender and age group) trained on the spectrograms of our laughter dataset.
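A sketch of this FID computation (Python, with SciPy's matrix square root); \texttt{real\_feats} and \texttt{fake\_feats} are placeholder names for arrays of embeddings extracted from the spectrogram classifier:
\begin{verbatim}
import numpy as np
from scipy import linalg

def fid(real_feats, fake_feats):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows are samples, columns are feature dimensions)."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real         # discard tiny imaginary parts
    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
\end{verbatim}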
\vspace{-8pt}
\clearpage
\section{Introduction}
The rapid progress of observational cosmology in recent years has been fuelled by an abundance
of accurate observations \citep{Riess1998, Perlmutter1999, Eisenstein2005, Cole2005, wmap9,
Anderson2012, Alam17,Planck2018,eboss2021}. The $\Lambda$CDM model has emerged
as the new cosmological paradigm, being able to simultaneously describe all state-of-the-art observations.
However, the two components making up the majority of the total energy budget of the Universe
today in this model, dark energy and dark matter, remain poorly understood.
As it provides the most precise parameter constraints to date, the best-fitting
$\Lambda$CDM model to the cosmic microwave background (CMB) observations by the
\textit{Planck} satellite \citep{Planck2018} has become synonymous with the `standard cosmological model'.
The comparison of predictions for the expansion history of the Universe, $H(z)$, and the rate at which
cosmic structures form at later times, $f(z)$, with observations at lower redshifts
serves as a powerful test of the $\Lambda$CDM paradigm.
While a broad range of observations are in agreement with the CMB predictions,
the increasingly precise measurements from the cosmic distance ladder \citep{Riess2018,Riess_2019},
as well as the ever-increasing weak gravitational lensing data sets
\citep{Hildebrandt_2016, Abbott2017b, Hikage2019}, display hints of tension with Planck $\Lambda$CDM.
In particular, local direct probes seem to prefer a greater expansion rate of the Universe
(the `$H_0$ tension') while, less significantly, weak gravitational lensing measurements prefer
a lower amplitude of matter fluctuations (the `$\sigma_8$ tension') than predicted by Planck. There
is still no conclusive evidence on what drives these differences in the recovered values, and
it is not uncommon for proposed solutions to the $H_0$ tension to make the $\sigma_8$ tension
worse \citep[see, for example,][]{Hill_2020}.
With the lack of consensus on where (and whether) the inconsistencies with Planck $\Lambda$CDM
arise, the cosmological community has put in an increased effort in assessing the internal consistency
between different data sets within the $\Lambda$CDM scenario. Galaxy clustering allows to probe and
distinguish how the underlying cosmology affects the background expansion of the Universe and its
effects on the structure growth through baryon acoustic oscillations (BAO) and redshift-space
distortions (RSD). Both of these effects set important features of the two-point correlation function,
which can be fit to obtain summary statistics that carry compressed cosmological information.
BAO set the angular scale of the acoustic peak, allowing us to probe the distance--redshift relation.
On the other hand, RSD provide information about structure growth through galaxy peculiar
velocities, whose effect on the amplitude of the power spectrum is commonly characterised by
the product of the logarithmic growth rate $f$ and the RMS linear perturbation theory variance
$\sigma_8$ \citep[although see][for the problems caused by this approach]{sanchez2020let}.
While analyses based solely on RSD and BAO summary statistics allow excellent internal
consistency tests and may help constrain beyond-$\Lambda$CDM scenarios, it has been shown that
they do not preserve all the information of the full measurement, in particular losing the additional constraining power available from its shape \citep{ShapeFit}.
Other analyses, therefore, make use of the information recovered from fitting the full shape of two-point
clustering measurements, either in Fourier or configuration space, directly comparing models against
data \citep{Tr_ster_2020, dAmico_2020, Ivanov_2020}. These analyses tend to lose some of
the immediate interpretability of the summary statistics but instead allow us to obtain constraints
on cosmological parameters directly, independently of external data sets. This type of
analysis has, therefore, recently received attention as a way to test the consistency
between large-scale structure (LSS) and CMB measurements.
The $\sigma_8$ value recovered from the full shape analysis of the correlation function wedges of
BOSS DR12 galaxies by \citet{Tr_ster_2020} is 2.1$\sigma$ lower than Planck's
prediction, with the difference increasing to 3.4$\sigma$ when weak lensing measurements from the
Kilo-Degree Survey, KV450, are added, indicating that there may be some consistent discrepancy
between CMB predictions and low redshift observations. This is also consistent with the more
recent analysis by \citet{kids1000} where BOSS galaxies are used as lenses in the so
called `$3\times2$pt' analysis (a set of three correlation functions consisting of autocorrelation of the lens
galaxy positions, source galaxy shapes and the cross-correlation of the two), which finds a
$\sim\!3\sigma$ discrepancy with \textit{Planck}'s value of $S_8=\sigma_8\sqrt{\Omega_m/0.3}$, which
combines $\sigma_8$ with the matter density $\Omega_m$ in a way that minimises correlation
between the two parameters. This result is consistent with the findings from other major weak
lensing surveys (Dark Energy Survey \citep{desyr3} and Hyper Suprime-Cam \citep{Hikage2019}),
even though, most recently, DES reported consistency between their $3\times2$pt $\Lambda$CDM
constraints and those of Planck when the full parameter space is considered. Furthermore,
$\sigma_8$ may not be an entirely appropriate parameter for assessing consistency among the
different surveys, as \citet{sanchez2020let} have shown that it is affected by the different posterior
distributions of the Hubble parameter $h$ recovered by different analyses. Alternatively, one may define
$\sigma_{12}$, the RMS linear density fluctuation measured on a fixed scale of $12\,{\rm Mpc}$. In this work we adopt this
notation and use $\sigma_{12}$ both to accurately characterise the amplitude of the power spectrum
today and to assess the consistency among the probes considered.
In this work we are, therefore, interested in building upon \citet{Tr_ster_2020} and exploring
whether the discrepancy between the low-redshift probes and Planck within the $\Lambda$CDM
model holds when extending the redshift range probed by the clustering measurements with the
addition of eBOSS quasar clustering. We provide the joint constraints from the full shape analysis
of BOSS galaxy and eBOSS quasar clustering on their own, as well as in combination with weak
lensing information. For our weak lensing data set we use the $3\times 2$pt measurements from the
Dark Energy Survey Year 1 \citep[DES Y1,][]{Abbott2017b} release, which both cover a larger area
than KV450 and include
galaxy clustering and galaxy-galaxy lensing as well as the shear-only measurements. If the tension
seen between the low-redshift probes and Planck is purely statistical, adding more data should not
only tighten the posterior contours but also bring the constraints into better agreement. The
results from an equivalent analysis with KiDS-450 shear measurements are available in
Appendix~\ref{appendix:kids}.
In addition to expanding our data sets we also aim to redefine the parameter space following
\citet{Sanchez2021}, who distinguish between `shape' and `evolution' cosmological parameters.
In such parameter space, $\sigma_8$ is replaced by $\sigma_{12}$, as discussed above,
and the relative matter and dark energy densities ($\Omega_{\rm{m}}$, $\Omega_{\rm{DE}}$) are
replaced by their physical counterparts ($\omega_{\rm{m}}$, $\omega_{\rm{DE}}$).
The advantage of this choice is two-fold: first, the derived constraints do not depend on the
posterior of $h$ of the particular analysis and can, therefore, be directly compared with
constraints from other data sets and, second, the effect that each of the parameters has
on the power spectrum is clear, with evolution parameters affecting its amplitude and
shape parameters determining the shape.
We provide a more detailed description of the parameter space we use (including the
prior choices) in Sec. \ref{section:methods}, together with a summary of our data and models,
and illustrate how it compares with its $h$-dependent equivalent in Sec. \ref{sec:clustering},
where we also present our cosmological constraints from BOSS and eBOSS. The results
obtained when adding DES are further presented in Sec.~\ref{sec:joint}. We finish with a
discussion of our results in Sec.~\ref{section:discussion} and present our conclusions
in Sec.~\ref{section:conclusions}.
\section{Methodology}
\label{section:methods}
This work is an extension of \citet{Tr_ster_2020} and largely follows the
same structure and methods: we assume
flat $\Lambda$CDM cosmology and obtain the joint low-redshift parameter
constraints by combining the likelihoods
for each data set considered independently. Our model for anisotropic galaxy
and quasar clustering measurements
follows that described in \citet{S17} for the so-called `full shape' analysis
(with the exception of the matter power spectrum model) whereas for the
`$3\times 2$pt' analysis (galaxy shear, galaxy-galaxy lensing, and galaxy clustering)
we use the model described in \citet{Abbott2017b}. In this section we summarise
the data and models used with a more detailed description available in the references above.
The measurements used here are the same as in the respective original analyses and have, therefore,
been tested against various systematics and include the appropriate corrections.
\subsection{Galaxy and QSO clustering measurements}
\label{sec:data}
The Sloan Digital Sky Survey (SDSS) has mapped the large-scale structure of the Universe
thanks to the accurate measurements by the double-armed spectrographs \citep{SDSS_spectro}
at the Sloan Foundation Telescope at Apache Point Observatory \citep{SDSS_telescope}.
Throughout its different stages \citep{York2000, SDSS_III, SDSS_IV} the SDSS has provided
redshift information on millions of galaxies and quasars.
We consider clustering measurements in configuration space from two data sets:
the galaxy samples of BOSS \citep{BOSS}, corresponding to SDSS DR12 \citep{Alam2015, Reid2016},
and the QSO catalogue from eBOSS \citep{eBOSS}, contained in SDSS DR16 \citep{sdss_dr16,ross2020}.
In each case, the information from the full anisotropic correlation function
$\xi(s,\mu)$, where $s$ denotes the comoving pair separation and
$\mu$ represents the cosine of the angle between the separation vector and the line of sight, was compressed into different but closely related statistics.
We analyse the clustering properties of the combined BOSS galaxy sample using the
measurements of \citet{S17}, who employ the clustering wedges statistic \citep{Kazin2012},
$\xi_{\Delta\mu}(s)$, which corresponds to the average of $\xi(s,\mu)$ over the interval
$\Delta\mu=\mu_{2}-\mu_{1}$, that is
\begin{equation}
\xi_{\Delta\mu}(s)= \frac{1}{\Delta \mu}\int^{\mu_2}_{\mu_1}{\xi(\mu,s)}\,{{\rm d}\mu}.
\label{eq:wedges}
\end{equation}
\citet{S17} measured three wedges by splitting the $\mu$ range from 0 to 1 into
three equal-width intervals. We consider their measurements in two redshift bins, with
$0.2 < z < 0.5$ (the LOWZ sample) and $0.5 < z < 0.75$ (CMASS), corresponding to the effective redshifts
$z_{\rm eff} = 0.38$ and 0.61, respectively.
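As an illustration, equation~(\ref{eq:wedges}) amounts to a simple average over $\mu$ bins. The following Python sketch (the function name and array layout are ours, purely for illustration, and not part of any published pipeline) computes three equal-width wedges from a binned estimate of $\xi(s,\mu)$:
\begin{verbatim}
import numpy as np

def clustering_wedges(xi_smu, n_wedges=3):
    # xi_smu: array of shape (n_s, n_mu) holding xi(s, mu) on a
    # regular mu grid covering [0, 1]; equation (eq:wedges) then
    # reduces to a mean over the mu bins inside each wedge.
    n_mu = xi_smu.shape[1]
    per_wedge = n_mu // n_wedges
    return [xi_smu[:, i * per_wedge:(i + 1) * per_wedge].mean(axis=1)
            for i in range(n_wedges)]
\end{verbatim}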
The covariance matrices, $\mathbfss{C}$, of these data were estimated using the set of 2045
{\sc MD-Patchy} mock catalogues described in \citet{Kitaura2016}.
These measurements were also used in the analysis of \citet{Tr_ster_2020} and the
recent studies of the cosmological implications of the KiDS 1000 data set \citep{kids1000, troster2021}.
For the eBOSS QSO catalogue we use the measurements of \citet{Hou2021}, who considered
the Legendre multipoles given by
\begin{equation}
\xi_{\ell}(s) =\frac{2\ell+1}{2} \int^{1}_{-1} \xi(\mu,s) \mathcal{L}_{\ell}(\mu) \,{\rm d}\mu,
\label{eqn:multipoles}
\end{equation}
where $\mathcal{L}_{\ell}(\mu)$ denotes the $\ell$-th order Legendre polynomial.
We consider the multipoles $\ell = 0,\, 2, \, 4$ obtained using the redshift range
$0.8 < z < 2.2$, with an effective redshift $z_{\rm eff} = 1.48$.
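Equation~(\ref{eqn:multipoles}) can likewise be evaluated numerically from a binned correlation function; a minimal sketch, assuming a regular $\mu$ grid covering $[-1, 1]$ (again with illustrative names of our own):
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def legendre_multipoles(xi_smu, mu, ells=(0, 2, 4)):
    # Project xi(s, mu), of shape (n_s, n_mu), onto the Legendre
    # polynomials following equation (eqn:multipoles).
    return {ell: 0.5 * (2 * ell + 1)
            * np.trapz(xi_smu * eval_legendre(ell, mu), mu, axis=1)
            for ell in ells}
\end{verbatim}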
The covariance matrix of these measurements was obtained using
the set of 1000 mock catalogues described in \citet{Zhao2021}.
Besides the QSO sample used here, the full
eBOSS data set contains two additional tracers, the luminous red galaxies (LRG)
and emission line galaxies (ELG) samples
\citep[for the corresponding BAO and RSD analyses, see][]{Bautista2021, GilMarin2020,
deMattia2021,Tamone2020}.
These samples overlap in redshift with each other and with the galaxies from BOSS.
We therefore restrict our analysis of eBOSS data to the QSO sample, which, in combination with BOSS,
covers the maximum possible redshift range while ensuring that the clustering measurements can
be treated as independent in our likelihood analysis.
We treat the measurements from BOSS and eBOSS as in the original analyses of \citet{S17} and
\citet{Hou2021}. We restrict our analysis to pair separations within the range
$20\,h^{-1}{\rm Mpc} < s< 160\,h^{-1}{\rm Mpc}$. We assume a Gaussian likelihood
for each set of measurements, in which the covariance matrices are kept fixed.
We account for the impact of the finite number of mock catalogues used
to derive $\mathbfss{C}$ \citep{kaufman1967,Hartlap2007,Percival2014}.
The large number of mock catalogues used ensures that the effect of the
noise in $\mathbfss{C}$ on the obtained cosmological constraints corresponds to a modest
correction factor of less than 2 per cent.
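For concreteness, the de-biasing of the precision matrix follows the standard prescription of \citet{Hartlap2007}; a minimal sketch (the function name is ours, and the additional correction factors of \citealt{Percival2014} are not included):
\begin{verbatim}
import numpy as np

def debiased_precision(cov, n_mocks):
    # Hartlap (2007) factor correcting the inverse of a covariance
    # matrix estimated from a finite number of mock catalogues;
    # e.g. n_mocks = 2045 for BOSS and 1000 for the eBOSS QSOs.
    n_data = cov.shape[0]
    alpha = (n_mocks - n_data - 2.0) / (n_mocks - 1.0)
    return alpha * np.linalg.inv(cov)
\end{verbatim}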
\subsection{Modelling anisotropic clustering measurements}
\label{section:model}
Our modelling of the full shape of the Legendre multipoles and clustering wedges of the anisotropic
two-point correlation function largely follows the treatment of \citet{S17}, with some
important differences.
We compute model predictions of the non-linear matter power spectrum, $P_{\rm{mm}}(k)$,
using the Rapid and Efficient SPectrum
calculation based on RESponSe functiOn approach \citep[{\sc respresso},][]{respresso}.
The key ingredient of {\sc respresso} is the response function, $K(k,q)$, which
quantifies the variation of the non-linear matter power spectrum at scale $k$ induced by a change
of the linear power at scale $q$ as
\begin{equation}
K(k,q)\equiv q\frac{\partial P_{\rm{mm}}(k)}{\partial P_{\rm{L}}(q)}.
\end{equation}
\citet{NisBerTar1712} presented a phenomenological model for $K(k,q)$ based on renormalised
perturbation theory \citep{regpt}, which gives a good agreement with simulation results
over a wide range of scales for $k$ and $q$.
The response function allows us to obtain $P_{\rm{mm}}(k)$ for arbitrary cosmological parameters
$\pmb{\theta}$ based on a measurement from N-body simulations of a fiducial cosmology
$\pmb{\theta}_{\rm{fid}}$ as
\begin{equation}
\begin{split}
P_{\rm{mm}}(k|\pmb{\theta}) &= P_{\rm{mm}}(k|\pmb{\theta}_{\rm{fid}})\int {\rm d}\ln q\,K(k,q)\\
&\times [P_{\rm{L}}(q|\pmb{\theta})-P_{\rm{L}}(q|\pmb{\theta}_{\rm{fid}})].
\end{split}
\label{eq:resp}
\end{equation}
The choice of $\pmb{\theta}_{\rm{fid}}$ in {\sc respresso} corresponds to the best-fitting
$\Lambda$CDM model to the
\textit{Planck} 2015 data \citep{Planck2015}. Equation (\ref{eq:resp}) is most accurate for cosmologies
that are close to $\pmb{\theta}_{\rm{fid}}$. For cosmologies further away from the
fiducial, its accuracy can be improved by performing a multi-step reconstruction.
\citet{biasmodel} showed that {\sc respresso}
outperforms other perturbation theory based models
in terms of the range of validity and accurate recovery of mean posterior values.
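Schematically, a single reconstruction step of equation~(\ref{eq:resp}) can be written as follows. This is a simplified sketch of the response-function idea under the assumption of a simple midpoint rule for the $\ln q$ integral, not the actual {\sc respresso} implementation, which performs the multi-step reconstruction mentioned above:
\begin{verbatim}
import numpy as np

def response_update(pk_mm_fid, K_kq, q, pk_lin_new, pk_lin_fid):
    # pk_mm_fid: fiducial non-linear P_mm(k) from N-body simulations
    # K_kq:      response function K(k, q), shape (n_k, n_q)
    # The integral over d(ln q) in equation (eq:resp) is evaluated
    # on the q grid with a simple midpoint rule.
    dlnq = np.gradient(np.log(q))
    dP_lin = pk_lin_new - pk_lin_fid
    return pk_mm_fid + np.sum(K_kq * dP_lin * dlnq, axis=1)
\end{verbatim}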
Following the notation of \citet{Eggemeier2019}, we describe the relation between the galaxy density
fluctuations, $\delta$, and the matter density fluctuations, $\delta_{\rm m}$, at one loop in terms of
the four-parameter model
\begin{equation}
\delta = b_1\delta_{\rm m}+\frac{b_2}{2}\delta_{\rm m}^2+\gamma_2\mathcal{G}_2(\Phi_v)+\gamma_{21}\mathcal{G}_2(\varphi_1, \varphi_2)+\ldots ,
\end{equation}
where the first two terms represent contributions from linear and quadratic local bias, while the
remaining ones correspond to non-local terms.
Here, $\mathcal{G}_2$ is the Galileon operator of the normalised velocity potential $\Phi_v$,
$\varphi_1$ is the linear Lagrangian perturbation potential, and $\varphi_2$ is a second-order
potential that accounts for the non-locality of the gravitational evolution,
\begin{align}
\mathcal{G}_2(\Phi_v)&=(\nabla_{ij}\Phi_v)^2 - (\nabla^2\Phi_v)^2,\\
\mathcal{G}_2(\varphi_1, \varphi_2)&=\nabla_{ij}\varphi_2\nabla_{ij}\varphi_1 - \nabla^2\varphi_{2}\nabla^2\varphi_{1}.
\end{align}
Two-point statistics alone do not constrain $\gamma_2$ well, because $\gamma_2$ enters at higher order and is degenerate with $\gamma_{21}$. Therefore, we set the value
of this parameter in terms of the linear bias $b_1$ using the quadratic relation
\begin{equation}
\gamma_{2}(b_1) = 0.524-0.547b_1+0.046b_1^2,
\label{eq:gamma2}
\end{equation}
which describes the results of \citet*{tidal1} using excursion set theory.
\citet{biasmodel} showed that this relation is more accurate for tracers with $b_1\gtrsim 1.3$
than the one obtained under the assumption of local bias in Lagrangian space used in \citet{S17}.
The value of $\gamma_{21}$ can also be derived in terms of $b_1$ under the
assumption of the conserved evolution of
galaxies (hereafter co-evolution) after their formation
as \citep{Fry9604,CatLucMat9807,CatPorKam0011,Chan2012}
\begin{equation}
\gamma_{21}= - \frac{2}{21}(b_1-1)+\frac{6}{7}\gamma_2.
\label{eq:coevol}
\end{equation}
This relation was thoroughly tested against constraints derived from a combination of power spectrum and bispectrum data in \citet{Eggemeier2021}, and found to be in excellent agreement for BOSS galaxies. In addition to this, in Sec.~\ref{section:validation}, we confirm that the use of this relation gives an accurate description
of the results of N-body simulations and we therefore implement it in our analysis of the BOSS and
eBOSS data. In this way, the only required free bias parameters in our recipe are $b_1$ and $b_2$, while the non-local bias terms can be fully expressed in terms of the linear bias through
equations (\ref{eq:gamma2}) and (\ref{eq:coevol}).
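In practice, equations~(\ref{eq:gamma2}) and (\ref{eq:coevol}) make the non-local bias parameters deterministic functions of $b_1$, for example:
\begin{verbatim}
def gamma2(b1):
    # excursion-set relation of equation (eq:gamma2)
    return 0.524 - 0.547 * b1 + 0.046 * b1**2

def gamma21(b1):
    # co-evolution relation of equation (eq:coevol)
    return -2.0 / 21.0 * (b1 - 1.0) + 6.0 / 7.0 * gamma2(b1)
\end{verbatim}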
Our description of the effects of RSD matches that of \citet{S17}.
Following \citet{RSD} and \citet{Taruya_2010}, we write the two-dimensional
redshift-space power spectrum as
\begin{equation}
P(k,\mu) = W_\infty(i f k \mu) \, P_{\rm novir}(k,\mu),
\label{Prsd}
\end{equation}
where the `no-virial' power spectrum, $P_{\rm novir}(k,\mu)$, is computed using the
one-loop approximation and includes three terms, one representing a non-linear version of the
Kaiser formula \citep{Kaiser}
and two higher-order terms that include the contributions of the cross-spectrum and
bispectrum between densities and velocities.
Besides the non-linear matter power spectrum, $P_{\rm novir}(k,\mu)$ also requires the
velocity-velocity and matter-velocity power spectra, which we compute using the empirical
relations measured from N-body simulations by \citet{Bel2019}.
The function $W_{\infty}(\lambda=ifk\mu)$ represents the
large-scale limit of the generating function of the pairwise velocity distribution, which accounts for
non-linear corrections due to fingers-of-God (FOG) or virial motions and can be parametrised
as \citep{S17}
\begin{equation}
W_{\infty}(\lambda)=\frac{1}{\sqrt{1-\lambda^2a^2_{\rm{vir}}}}\,\exp\left(\frac{\lambda^2\sigma^2_v}{1-\lambda^2\mathnormal{a}^2_{\rm{vir}}}\right),
\end{equation}
where $a_{\rm{vir}}$ is a free parameter characterizing the kurtosis of the small-scale velocity
distribution, and $\sigma_{\rm{v}}$ is the one-dimensional linear velocity dispersion defined
in terms of the linear matter power spectrum as
\begin{equation}
\sigma_v^2 \equiv \frac{1}{6\pi^2}\int {\rm d}k\,P_{\rm L}(k).
\label{eq:sigmav}
\end{equation}
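Both ingredients of this FOG factor are straightforward to evaluate; a minimal sketch of $\sigma_v$ from equation~(\ref{eq:sigmav}) and of $W_\infty(\lambda)$ as given above, with $\lambda^2 = -(fk\mu)^2$ made explicit (function names are ours):
\begin{verbatim}
import numpy as np

def sigma_v(k, pk_lin):
    # one-dimensional linear velocity dispersion, equation (eq:sigmav)
    return np.sqrt(np.trapz(pk_lin, k) / (6.0 * np.pi**2))

def W_inf(f, k, mu, a_vir, sig_v):
    # large-scale limit of the pairwise velocity generating function
    lam2 = -(f * k * mu)**2          # lambda^2 with lambda = i f k mu
    denom = 1.0 - lam2 * a_vir**2
    return np.exp(lam2 * sig_v**2 / denom) / np.sqrt(denom)
\end{verbatim}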
We also account for the impact of the non-negligible redshift errors of the QSO sample
following \citet{Hou_2018}, who showed that this effect can be correctly described by
including an additional damping factor to the power spectrum of equation~(\ref{Prsd})
of the form $\exp\left(-k\mu \sigma_{\rm err}\right)$, where $\sigma_{\rm{err}}$
is treated as an additional free parameter.
Finally, the Alcock-Paczynski distortions \citep{Alcock1979} due to the difference
between the true and fiducial cosmologies
are accounted for by introducing the geometric distortion factors
\begin{align}
q_{\bot} &=D_{\rm{M}}(z_{\rm eff})/D_{\rm{M}}'(z_{\rm eff}),\\
q_{\parallel} &=H'(z_{\rm eff})/H(z_{\rm eff}).
\end{align}
Here, $D_{\rm{M}}(z)$ is the comoving angular diameter distance and $H(z)$ is the Hubble parameter,
with primed quantities corresponding to the fiducial cosmology used to convert redshifts to distances.
The distortion factors are then applied to rescale the pair separations $s$ and the cosines of the angles
between the separation vector and the line of sight, $\mu$, such that
\begin{align}
s &=s'\sqrt{ q_{\parallel}^2\mu'^2+q^2_{\bot}(1-\mu'^2)},\\
\mu &=\mu'\frac{q_\parallel}{\sqrt{q_{\parallel}^2\mu'^2+q^2_{\bot}(1-\mu'^2)}}.
\end{align}
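A compact implementation of this rescaling reads as follows (note the square root in the relation for $s$; the function name is ours):
\begin{verbatim}
import numpy as np

def ap_rescale(s_fid, mu_fid, q_par, q_perp):
    # map fiducial (s', mu') to true (s, mu) via the AP factors
    root = np.sqrt(q_par**2 * mu_fid**2
                   + q_perp**2 * (1.0 - mu_fid**2))
    return s_fid * root, mu_fid * q_par / root
\end{verbatim}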
In summary, our model of the clustering wedges from BOSS requires three free parameters,
$b_1$, $b_2$, and $a_{\rm vir}$, with the values of $\gamma_2$ and $\gamma_{21}$ given
in terms of $b_1$ using equations~(\ref{eq:gamma2}) and (\ref{eq:coevol}). This is one less
free parameter than in the original analysis of \citet{S17}. The Legendre multipoles of the eBOSS
QSO sample require the addition of $\sigma_{\rm{err}}$, leading to a total of four free parameters.
\subsection{Additional data sets}
We complement the information from our clustering measurements with the
$3\times2$pt measurements from DES Y1 \citep{Abbott2017b}.
We also use the shear measurements from the Kilo-Degree Survey \citep[KiDS-450,][]{Hildebrandt_2016}
and present the results in Appendix~\ref{appendix:kids}.
The source galaxy samples from DES are split into four redshift bins, spanning the redshift range of
$0.2<z\leq1.3$. In addition to shear measurements from the source galaxies, the DES Y1 data set
also includes galaxy clustering and galaxy-galaxy lensing two-point correlation function measurements,
as well as the lens redshift distributions for five redshift bins in the range of $0.15<z<0.9$.
Our scale cuts for these measurements match those of \citet{Abbott2017b}.
For our $3\times 2$pt analysis, we use the DES likelihood as implemented in \textsc{CosmoMC}
\citep{Lewis_2002}, which corresponds to the model described in \citet{Abbott2017b}. The likelihood includes
models for the two-point correlation functions describing galaxy-galaxy lensing, galaxy
clustering, and cosmic shear. The correlation functions are modelled making use of Limber and
flat-sky approximations \citep{Limber1, Limber2, Limber3, Limber4} with the non-linear power
spectrum obtained using \textsc{HMCode} \citep{HMcode}
as implemented in \textsc{camb} \citep{Lewis_2000}.
The smallest angular separations considered correspond to a comoving scale of $8\,h^{-1}{\rm Mpc}$.
The intrinsic alignment is modelled using a `non-linear linear'
alignment recipe \citep{IA1, IA2}.
The model also includes a treatment for multiplicative shear bias and photometric redshift uncertainty.
The former is accounted for by introducing multiplicative bias terms of the form $(1+m^i)$ for
each bin $i$ for shear and galaxy-galaxy lensing. The latter is modelled by the shift parameters
$\delta z^i$ assigned to each bin for both source and lens galaxies. Finally, baryonic effects are
not included as they are expected to be below the measurement errors for the range of scales
considered in the analysis. For all the weak lensing nuisance parameters we impose the same
priors as the ones listed in \citet{Abbott2017b}.
Additionally, we test the consistency of the low-redshift LSS measurements with
the latest CMB temperature and polarization power spectra from the \textit{Planck}
satellite \citep{Planck2018}, to which we refer simply as `Planck'. We do not include CMB lensing information.
We use the public nuisance parameter-marginalised
likelihood \texttt{plik\_lite\_TTTEEE+lowl+lowE} for all Planck constraints \citep{Planck2018}.
\begin{table}
\centering
\caption{Priors used in our analysis. $U$ indicates a flat uniform prior within the specified range.
The priors on the cosmological and clustering nuisance parameters match those of
\citet{Tr_ster_2020} with the exception of $n_{\rm s}$, for which the allowed range is widened.
The priors on the nuisance parameters of weak lensing data sets match those
of \citet{Abbott2017b}.}
\label{tab:priors}
\begin{tabular}{cc}
\hline
Parameter & Prior \\
\hline
\hline
\multicolumn{2}{c}{Cosmological parameters} \\
\hline
$\Omega_{\rm{b}}h^2$ & $U(0.019, 0.026)$ \\
$\Omega_{\rm{c}}h^2$ & $U(0.01, 0.2)$ \\
100$\Theta_{\rm{MC}}$ & $U(0.5, 10.0)$ \\
$\tau$ & $U(0.01, 0.8)$ \\
$\ln(10^{10}A_{\rm{s}})$ & $U(1.5, 4.0)$ \\
$n_{\rm s}$ & $U(0.5, 1.5)$ \\
\hline
\multicolumn{2}{c}{Clustering nuisance parameters} \\
\hline
$b_1$ & $U(0.5, 9.0)$ \\
$b_2$ & $U(-4.0, 8.0)$ \\
$a_{\text{vir}}$ & $U(0.0, 12.0)$ \\
$\sigma_{\rm err}\text{ (eBOSS only)}$ & $U(0.01, 6.0)$ \\
\hline
\end{tabular}
\end{table}
\subsection{Parameter spaces and prior ranges}
\label{sec:parameters}
Our goal is to obtain constraints on the parameters of the standard $\Lambda$CDM
model, which corresponds to a flat universe, where dark energy is
characterized by a constant equation of state parameter $w_{\rm DE} = -1$.
Assuming also a fixed total neutrino mass of $\sum m_{\nu} =0.06\,{\rm eV}$, this
model can be described by the parameters
\begin{equation}
\pmb{\theta} = \left(\omega_{\rm b},\omega_{\rm c},\omega_{\rm DE}, A_{\rm s}, n_{\rm s} \right).
\label{eq:base_lcdm}
\end{equation}
These are the present-day physical energy densities of baryons, cold dark matter, and
dark energy, and the amplitude and spectral index of the primordial power spectrum
of scalar perturbations at the pivot wavenumber of $k_0= 0.05\,{\rm Mpc}^{-1}$.
Additional parameters can be derived from the set of equation~(\ref{eq:base_lcdm}).
The dimensionless Hubble parameter, $h$, is determined by the sum of all the
physical energy densities. For a $\Lambda$CDM model, this is
\begin{equation}
h^2 = \omega_{\rm b} + \omega_{\rm c} + \omega_{\nu} + \omega_{\rm DE}.
\label{eq:hubble}
\end{equation}
It is also common to express the contributions of the various energy components in terms
of the density parameters
\begin{equation}
\Omega_i = \omega_i/h^2,
\label{eq:Omegas}
\end{equation}
which represent the fraction of the total energy density of the Universe corresponding to
a given component $i$. The overall amplitude of matter density fluctuations is often
characterized in terms of $\sigma_{8}$, the linear-theory RMS mass fluctuations in spheres
of radius $R=8\,h^{-1}{\rm Mpc}$.
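For reference, the mapping from the physical densities to the derived quantities of equations~(\ref{eq:hubble}) and (\ref{eq:Omegas}) is trivial to evaluate. In the sketch below, the default neutrino density $\omega_\nu \approx 0.06/93.14 \approx 0.00064$, corresponding to $\sum m_\nu = 0.06\,{\rm eV}$, is an illustrative assumption on our part:
\begin{verbatim}
def derived_parameters(omega_b, omega_c, omega_de, omega_nu=0.00064):
    # equations (eq:hubble) and (eq:Omegas) for flat LCDM
    h2 = omega_b + omega_c + omega_nu + omega_de
    h = h2**0.5
    Omega_m = (omega_b + omega_c + omega_nu) / h2
    Omega_de = omega_de / h2
    return h, Omega_m, Omega_de
\end{verbatim}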
A common property of these parameters is their dependence on the value of $h$. The issues associated with this dependence are discussed in detail by \citet{sanchez2020let}
and can be summarised as follows.
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figs/hpost2.pdf}
\caption{Upper panel: one-dimensional marginalised posteriors for $h$ for the different data sets considered in this work (with the
priors used in this analysis). Lower panel: the corresponding posteriors of the physical value of $(8/h)\,{\rm Mpc}$,
the scale used to define $\sigma_8$. Any distance defined in $h^{-1}{\rm Mpc}$ units will correspond to a range of physical scales,
as determined by the posterior of $h$. If the posterior is prior-limited, as is the case for weak lensing, the choice of prior will also
influence the range of physical scales that contribute to $\sigma_8$. On the other hand, the effect is much smaller for a
narrow Gaussian $h$ posterior: for Planck, $\sigma_8$ corresponds to a scale of approximately $12\,{\rm Mpc}$.}
\label{fig:hpost}
\end{figure}
The main consequence of using quantities that depend on $h$ in cosmological analyses is that this
complicates the comparison of constraints derived from probes that lead to different posterior
distributions on $h$.
This can be illustrated most straightforwardly by considering $\sigma_8$, which is
defined in terms of a scale in $h^{-1}{\rm Mpc}$ units. As done by \citet{sanchez2020let},
the one-dimensional marginalised posterior distribution for $h$ can be used to obtain the
corresponding posterior for $(8/h)\,{\rm Mpc}$ to explore what physical distances this
radius corresponds to. Figure~\ref{fig:hpost} repeats this simple exercise
for the data sets considered in this work. As expected, the range of scales recovered in each case depends heavily on the
type of probe considered: Planck displays an extremely narrow posterior at the physical
scale of approximately $12\,{\rm Mpc}$, while the remaining probes cover varying ranges, especially when the posterior of $h$ is
simply limited by the imposed prior, as is the case for the weak lensing data sets.
The solid line in Figure \ref{fig:sigmar} shows the density
field variance $\sigma_R$ as a function of the scale $R$ in a Planck $\Lambda$CDM Universe.
The shaded areas indicate the range of physical scales covered by the posterior distributions of
$(8/h)\,{\rm Mpc}$ for DES, BOSS, and Planck shown in Fig.~\ref{fig:hpost}. The issue with $\sigma_8$ is then that its marginalised value corresponds to a weighted average of
$\sigma_R$ over a range of scales that is different for each data set.
A further complication is that the value of $h$ also has an impact on the amplitude of $\sigma_R$. As discussed in \citet{sanchez2020let}, these issues can be avoided by considering the variance of
the density field on a reference scale in Mpc such as $\sigma_{12}$, which is equivalent to $\sigma_8$
but is defined on a physical scale of $12\,{\rm Mpc}$. We, therefore, opt to focus on $\sigma_{12}$
and on quantities that carry no explicit dependence on the Hubble constant $h$, enabling us to
appropriately combine and compare the constraints from our data sets.
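Concretely, $\sigma_{12}$ follows from the same integral that defines $\sigma_8$, but with a top-hat window of fixed radius $12\,{\rm Mpc}$ and wavenumbers in ${\rm Mpc}^{-1}$; a minimal sketch:
\begin{verbatim}
import numpy as np

def sigma_R(k, pk_lin, R=12.0):
    # k in 1/Mpc and pk_lin in Mpc^3; R = 12 gives sigma_12, while
    # R = 8/h (with this k convention) would reproduce sigma_8.
    x = k * R
    W = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3  # top-hat window
    return np.sqrt(np.trapz(k**2 * pk_lin * W**2, k)
                   / (2.0 * np.pi**2))
\end{verbatim}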
\begin{figure}
\centering
\includegraphics[width=0.95\columnwidth]{figs/sigma_R_squareshade_planck_font_ycut_cut.pdf}
\caption{The standard deviation of linear matter fluctuations, $\sigma_R$, measured in spheres
of physical radius $R$ in Mpc for a Planck $\Lambda$CDM universe. The shaded
areas indicate the ranges that $(8/h)\,{\rm Mpc}$ corresponds to for BOSS, DES, and Planck, based on the posteriors in Fig.~\ref{fig:hpost}. When $R$ is defined in $h^{-1}{\rm Mpc}$, as is the case for
$\sigma_8$, the value measured is, in fact, a weighted average of $\sigma_R$ over a range of $R$. }
\label{fig:sigmar}
\end{figure}
\begin{figure*}
\includegraphics[width=0.99\columnwidth]{figs/gamma2coev_gamma21_minerva.pdf}
\includegraphics[width=0.99\columnwidth]{figs/gamma2coev_gamma21_or.pdf}
\caption{Flat $\Lambda$CDM constraints derived from mean measurements of \textsc{Minerva} (left)
and \textsc{OuterRim} (right) HOD samples using the model described in Sec.~\ref{section:model}
while freely varying the non-local bias parameter $\gamma_{21}$ (grey contours) and when
its value is fixed using the co-evolution relation of equation~(\ref{eq:coevol}) (red). The dashed lines
mark the true input parameter values. Both cases recover the input cosmology well, but
the co-evolution relation yields slightly more accurate and precise constraints.}
\label{fig:or_validation}
\end{figure*}
We obtain the posterior distribution of all these parameters by performing Monte Carlo Markov chain
(MCMC) sampling with {\sc CosmoMC} \citep{Lewis_2002}, which uses {\sc CAMB} to calculate the
linear-theory matter power spectra \citep{Lewis_2000}, adapted to compute the theoretical model of
our anisotropic clustering measurements described in Sec.~\ref{section:model}.
{\sc CosmoMC} uses as basis parameters the set
\begin{equation}
\pmb{\theta}_{\rm base} = \left(\omega_{\rm b},\omega_{\rm c},\Theta_{\rm MC}, A_{\rm s}, n_{\rm s} \right),
\label{eq:base_cosmomc}
\end{equation}
where $\Theta_{\rm MC}$ corresponds to 100 times the approximate angular size of the
sound horizon at recombination.
With the exception of the physical baryon density, we assign flat uninformative priors to all the
parameters of equation~(\ref{eq:base_cosmomc}) as was done in \citet{Tr_ster_2020}.
Our prior for $\omega_{\rm b}$ has to be restrictive, as our clustering measurements cannot
constrain this parameter by themselves. Nevertheless, it is chosen to be approximately
25 times wider than the constraints on this parameter derived from \textit{Planck} data alone \citep{Planck2018}.
Even though we do not sample the Hubble parameter $h$, we still need to specify the range of
allowed values: our chosen range $0.5 < h < 0.9$ is wider than that of the KiDS-450 analysis of \citet{Hildebrandt_2016}
and comparable to the one used in the DES-YR1 fiducial analysis of \citet{Abbott2017b}.
\citet{Joudaki_2016} showed that the prior on $h$ has no impact on the significance of the
$\sigma_8$ tension. We list all the priors used in this analysis in Table \ref{tab:priors}.
\citet{Sanchez2021} classified all cosmological parameters into shape and evolution parameters.
The former are parameters that control the shape of the linear-theory power spectrum expressed
in Mpc units. The latter only affect the amplitude of $P_{\rm L}(k)$ at any given redshift.
This means that the effect of all evolution parameters is degenerate:
for a given set of shape parameters, the linear power spectra of all possible
combinations of evolution parameters that lead to the same value of $\sigma_{12}(z)$
are identical. This behaviour is inherited by the non-linear matter power spectrum predicted
by {\sc respresso}, which depends exclusively on $P_{\rm L}(k)$.
However, the full model of $P(k,\mu)$ does not follow this simple degeneracy due to the effect
of bias, RSD and AP distortions.
Of the parameters listed in equation~(\ref{eq:base_lcdm}), $\omega_{\rm b}$, $\omega_{\rm c}$,
and $n_{\rm s}$ are shape parameters, while $\omega_{\rm DE}$ and $A_{\rm s}$ are
purely evolution parameters. Other quantities such as $h$ or $\Omega_i$ represent a
mixture of both shape and evolution parameters.
Present-day CMB measurements can constrain the values of most shape parameters with high
accuracy, with posterior distributions that are well described by a multivariate Gaussian,
independently of the evolution parameters being explored. On the other hand, clustering
measurements on their own provide only weak constraints on the values of the shape parameters.
However, if the shape parameters are fixed, clustering data can provide precise measurements
of the evolution parameters.
To test the impact of the additional information on the shape of the linear power spectrum,
along with the priors described above, we use another set of priors to explore the constraints
on the evolution parameters. For these runs, we impose Gaussian priors on the cosmological
parameters that control the shape of the linear
power spectrum: $\omega_{\rm b}$, $\omega_{\rm c}$, and $n_{\rm s}$.
We derived the covariance matrix and mean values for these priors
from our Planck-only posterior distributions.
We refer to these constraints as the `Planck shape' case.
\begin{table*}
\centering
\caption{Marginalised posterior constraints (mean values with 68 per cent confidence intervals) derived from the full
shape analysis of BOSS + eBOSS clustering measurements on their own, as well as in combination with the
$3\times 2$pt measurements from DES Y1 and the CMB data from Planck. We present two sets of constraints: our main results derived with wide priors, as listed in Table \ref{tab:priors}, and the `Planck shape' constraints obtained by imposing narrow Gaussian priors on the cosmological parameters controlling the shape of the linear power spectrum: the physical baryon density $\omega_{\rm{b}}$, the physical cold dark matter density $\omega_{\rm{c}}$, and the spectral index $n_{\rm s}$, as discussed in Sec.~\ref{sec:parameters}. }
\label{tab:constraints}
\begin{tabular}{cccccc}
\hline
\multicolumn{4}{c|}{Wide priors} & \multicolumn{2}{c}{Gaussian priors on $\omega_{\rm{b}}$, $\omega_{\rm{c}}$, $n_{\rm s}$} \\
\hline
Parameter & BOSS~+~eBOSS & \makecell{BOSS~+~eBOSS\\+ DES} & \makecell{BOSS + eBOSS\\+ DES + Planck} & BOSS~+~eBOSS& \makecell{BOSS~+~eBOSS\\+ DES}\\
\hline
$\sigma_{12}$ & 0.805 $\pm$ 0.049 & 0.795 $\pm$ 0.035 & 0.789 $\pm$ 0.008 & 0.785 $\pm$ 0.039 & 0.766 $\pm$ 0.019\\
$\omega_{\rm{m}}$ & 0.134 $\pm$ 0.011 & 0.131 $\pm$ 0.011 & 0.141 $\pm$ 0.001 & 0.143 $\pm$ 0.001 & 0.142 $\pm$ 0.001\\
$\omega_{\rm{DE}}$ & 0.328 $\pm$ 0.020 & $0.327 \pm 0.020$ & 0.327 $\pm$ 0.006 & 0.327 $\pm$ 0.012 & 0.335 $\pm$ 0.011\\
$\ln(10^{10}A_{\rm s})$ & 3.129 $\pm$ 0.147 & 3.137 $\pm$ 0.125 & 3.041 $\pm$ 0.016 & 3.011 $\pm$ 0.099 & 2.976 $\pm$ 0.054\\
$n_{\rm s}$ & 1.009 $\pm$ 0.048 & 1.001 $\pm$ 0.047 & 0.970 $\pm$ 0.004 & 0.966 $\pm$ 0.004 & 0.967 $\pm$ 0.004\\
\hline
$\sigma_8$ & 0.814 $\pm$ 0.044 & 0.803 $\pm$ 0.028 & 0.803 $\pm$ 0.007 & 0.800 $\pm$ 0.039 & 0.785 $\pm$ 0.021 \\
$\Omega_{\rm{m}}$ & 0.290 $\pm$ 0.013 & 0.286 $\pm$ 0.012 & 0.301 $\pm$ 0.005 & 0.304 $\pm$ 0.0081 & 0.298 $\pm$ 0.007\\
$h$ & $0.679 \pm 0.021$ & $0.677 \pm 0.021$ & $0.6838 \pm 0.0041$ & $0.686 \pm 0.009$ & $0.691 \pm 0.008$ \\
$S_8$ & 0.801 $\pm$ 0.043 & 0.783 $\pm$ 0.020 & 0.805 $\pm$ 0.011 & 0.805 $\pm$ 0.042 & 0.783 $\pm$ 0.020\\
\hline
\end{tabular}
\end{table*}
\subsection{Model validation}
\label{section:validation}
As we are using an updated prescription for the modelling of both the non-linear matter power
spectrum and galaxy bias compared to the previous work of
\citet{Tr_ster_2020}, we want to assess whether it can recover unbiased cosmological parameter estimates, using mock data
based on numerical simulations as a testing ground. We do so by applying our model to
the mocks that were used for model validation in the original analyses: the \textsc{Minerva}
simulations \citep{Grieb2016,Lippich2019} for a BOSS-like sample and
\textsc{OuterRim} \citep{Heitmann_2019} for an eBOSS-like data set.
\textsc{Minerva} mocks are produced from a set of 300 N-body simulations with $1000^3$ particles
and a box size of $L=1.5\,h^{-1}{\rm Gpc}$. The snapshots
at $z=0.31$ and $z=0.57$ were used to create halo catalogues with a minimum halo mass
of $M_{\rm{min}}=2.67\times 10^{12}h^{-1}\rm{M}_\odot$,
which were populated with synthetic galaxies using the halo occupation distribution (HOD)
model by \citet{Zheng_2007} with parameters designed to reproduce the clustering
properties of the LOWZ and CMASS galaxy samples from BOSS.
The \textsc{OuterRim} \citep{Heitmann_2019} simulation uses $10\,240^3$ dark matter
particles to trace the dark matter density field in a $L=3\,h^{-1}{\rm Gpc}$ size box.
We use a set of 100 mock catalogues constructed from the snapshot at $z=1.433$, which
was populated using an HOD model matching the clustering of the eBOSS QSO sample and
tested extensively in the mock challenge \citep[labeled as HOD0 in][]{Smith_mocks}.
These realizations include catastrophic redshift failures at a rate of 1.5 per cent, which
corresponds to that of the eBOSS quasars.
We measured the mean clustering wedges of the samples from \textsc{Minerva} and the
Legendre multipoles from \textsc{OuterRim} with the same binning and range of scales as those of
the real data from BOSS and eBOSS and computed their corresponding theoretical covariance matrices
using the Gaussian recipe of \citet{Grieb2016}. We analysed these measurements using the same
nuisance and cosmological parameter priors as for our final results and tested the validity
of the model described in Sec.~\ref{section:model} with and without the assumption of the
co-evolution relation for $\gamma_{21}$ of equation~(\ref{eq:coevol}). We performed a
joint fit of the two BOSS-like samples from \textsc{Minerva} while the \textsc{OuterRim} measurements,
which correspond to a different cosmology, were analysed separately. Figure \ref{fig:or_validation}
shows the posterior distributions recovered from these measurements, which are in excellent
agreement with the true input cosmology for all cases (shown by the dashed lines).
Nevertheless, we find that setting the value of $\gamma_{21}$ according to
equation (\ref{eq:coevol}) recovers the true parameter values more accurately for both
samples and results in tighter constraints than when it is freely varied. We therefore adopt this
approach in the analysis of the clustering measurements from BOSS and eBOSS.
\begin{figure*}
\includegraphics[width=0.99\columnwidth]{figs/tris8_fin3.pdf}
\includegraphics[width=0.99\columnwidth]{figs/tris12_fin2.pdf}
\caption{Marginalised posterior contours in the `traditional' and $h$-independent parameter spaces
from the Legendre multipoles of eBOSS QSO sample (orange) and the clustering wedges of
BOSS DR12 galaxies (light blue) for a flat $\Lambda$CDM model. The joint constraints are shown in green,
with Planck in dark blue for comparison. }
\label{fig:boss+eboss_s8}
\end{figure*}
\section{Results}
\label{section:results}
Our main results come from the combination of the full shape analyses of the
BOSS galaxy clustering wedges and eBOSS QSO Legendre multipoles described in Sec.~\ref{sec:data}.
We also present combined late-Universe constraints obtained from the joint analysis of these clustering
measurements with the $3\times 2$pt data set from DES Y1.
For comparison, in Appendix~\ref{appendix:kids} we present the constraints obtained using instead the
cosmic shear measurements from KiDS-450, which lead to similar results.
As we find a good agreement between BOSS~+~eBOSS~+~DES and Planck, we also present
the parameter constraints obtained from the combination of all four data sets.
These constraints are summarized in Table \ref{tab:constraints} and are discussed in
Sects.~\ref{sec:clustering}--\ref{sec:joint}.
\subsection{Clustering constraints}
\label{sec:clustering}
Here we present the main result of our work: the combined flat $\Lambda$CDM constraints
from the anisotropic clustering measurements from BOSS and eBOSS.
Fig.~\ref{fig:boss+eboss_s8} shows the posterior distributions for BOSS and eBOSS separately
(light blue and orange contours, respectively) as well as their combined constraints (green contours) for
two sub-sets of cosmological parameters. For comparison, we also show the Planck-only
constraints in dark blue. The panels on the left show the results on the more traditional
parameter set of $\sigma_8$, $\Omega_{\rm{m}}$, and $H_0$ whereas the ones on the right
correspond to the alternative basis discussed in Sec.~\ref{sec:parameters} of $\sigma_{12}$,
$\omega_{\rm{m}}$, and $\omega_{\rm{DE}}$.
Regardless of the parameter space considered, we find all of our data sets to be in good
agreement with each other. The largest deviation between the joint BOSS~+~eBOSS constraints
and those recovered from Planck can be observed in the matter density $\Omega_{\rm{m}}$, which
displays a difference at the 1.7$\sigma$ level. Nevertheless, this deviation does not indicate a similarly significant
disagreement in the physical matter density preferred by these probes, as the
value of $\omega_{\rm m}$ recovered by our clustering constraints matches that
of Planck within $0.8\sigma$.
This suggests that the differences seen in $\Omega_{\rm{m}}$ are related to the
posterior distributions on $h$ recovered from these data sets.
Indeed, looking at our $h$-independent parameter space, we see
that the marginalised constraint of the physical dark energy density also differs from the
value preferred by Planck by $0.8\sigma$,
with clustering measurements preferring slightly higher $\omega_{\rm{DE}}$, which translates into a
higher value for $H_0$ and a lower $\Omega_{\rm{m}}$.
\citet{Tr_ster_2020} found that the clustering measurements from the BOSS wedges prefer a
2.1$\sigma$ lower value of $\sigma_8$ than Planck. Here we confirm this preference for
lower values, albeit with much lower significance due to the differences in the modelling of the
power spectrum, for both $\sigma_8$ and $\sigma_{12}$
(consistent with Planck at the 1.1$\sigma$ and 1.3$\sigma$ level, respectively). The increased
consistency between these results is mainly due to the tighter constraints enabled by the use
of the co-evolution relation of equation~(\ref{eq:coevol}), which restricts the
allowed region of the parameter space to higher values of $\sigma_8$ and $\sigma_{12}$, as can be seen in Figure \ref{fig:or_validation}.
The constraints on $\sigma_8$ and $\sigma_{12}$ recovered from eBOSS are at similar levels
of agreement with Planck; however, the values recovered are 1.3$\sigma$ and 1.2$\sigma$
\textit{higher} than the CMB results. This is also consistent with the most recent analyses by
\citet{Hou2021} and \citet{Neveux_ebossqso}, who found the inferred growth rate $f\sigma_8$ to be $\sim$2$\sigma$ higher than the
$\Lambda$CDM model with the best-fitting Planck parameters. The combination of the
clustering measurements from BOSS and eBOSS is, therefore, in overall excellent agreement
with Planck, with differences at the level of 0.05$\sigma$ for $\sigma_{8}$ and
0.04$\sigma$ for $\sigma_{12}$.
As discussed in Sec.~\ref{sec:parameters}, the shape parameters
$\omega_{\rm{b}}$, $\omega_{\rm{c}}$, and $n_{\rm s}$ are all tightly constrained by \textit{Planck}
with posterior distributions that are in complete agreement with those inferred from the other
cosmological probes considered here.
We can, therefore, study the improvement in the constraints on the evolution parameters
$\omega_{\rm DE}$ and $A_{\rm s}$ that are obtained from the LSS probes when
the shape of the power spectrum is constrained to match that of Planck's cosmology. As described in Sec.~\ref{sec:parameters}, we achieve this by adding an informative Gaussian prior on the shape parameters
based on our Planck runs and repeating our analysis with an otherwise identical setup.
The results of this exercise are shown in Figure \ref{fig:boss+eboss_planckshape}. As
the two data sets were already in good agreement across the parameter space, including the shape
parameters, imposing additional priors simply adds constraining power on the degenerate
evolution parameters, most notably $\omega_{\rm{DE}}$ (degenerate with $\omega_{\rm{m}}$),
which is recovered to be slightly higher than the Planck value to compensate the slight shifts in
$\sigma_{12}$ and $\text{ln}(10^{10}A_{\rm s})$ to lower values.
\subsection{Consistency with Planck}
When looking at marginalised posteriors we are limited by our selection of the parameter space as
well as the associated projection effects and, while we can use the standard deviation to quantify
agreement on a particular parameter value, this becomes inappropriate when larger parameter
spaces are considered. We, therefore, wish to further explicitly quantify the agreement between
eBOSS~+~BOSS and Planck using a tension metric, as has become standard in cosmological analyses.
First, we want to establish agreement over the whole parameter space considered. In order to do this,
we use the suspiciousness tension metric, $S$, introduced by \citet{PhysRevD.100.043504}. The
main advantages of using suspiciousness include the fact that it measures the agreement between
two data sets across the entire parameter space, similarly to the Bayes factor $R$. However, unlike
$R$, the suspiciousness is by construction insensitive to prior widths, as long as the posterior is not
prior-limited. Given two data sets, A and B, the suspiciousness quantifies the mismatch between
them by comparing the relative gain in confidence in data set A when data set B is added (as
measured by $R$) with the unlikeliness of the two data sets ever matching as measured by
the information ratio $I$, that is
\begin{equation}
\ln S=\ln R-\ln I.
\label{eq:def_lnS}
\end{equation}
Following the method described in \citet{kids1000}, we redefine $\ln R$ and $\ln I$
in terms of the expectation values of the log-likelihoods $\left<\ln\mathcal{L}\right>$ and evidences
$Z$. The evidences, however, cancel out and we are able to calculate $S$ from the expectation values only:
\begin{equation}
\ln S =\left<\ln\mathcal{L}_{\rm A+B}\right>_{P_{\rm A+B}}-\left< \ln\mathcal{L}_{\rm A}\right>_{P_{\rm A}}-\left<\ln\mathcal{L}_{\rm B}\right>_{P_{\rm B}}.
\end{equation}
The value of $S$ can then be interpreted using the fact that, for Gaussian posteriors, the
difference $d-2\ln S$, where $d$ is the Bayesian model dimensionality, is $\chi^2_d$
distributed. We calculate $d$ for each of the data sets separately, $d_{\rm A}$ and $d_{\rm B}$, and their combination,
$d_{\rm A+B}$, as described in \citet{PhysRevD.100.043504} and combine the results as
$d=d_{\rm A}+d_{\rm B}-d_{\rm A+B}$.
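The conversion from $\ln S$ and $d$ to a tension probability can then be sketched as follows (assuming Gaussian posteriors, as stated above; the function name is ours):
\begin{verbatim}
from scipy import stats

def tension_from_suspiciousness(ln_S, d):
    # d - 2 ln S is chi^2_d distributed for Gaussian posteriors
    p = stats.chi2.sf(d - 2.0 * ln_S, df=d)  # tension probability
    n_sigma = stats.norm.isf(p / 2.0)        # two-tailed equivalent
    return p, n_sigma
\end{verbatim}
With $\ln S \simeq 0.04$ and $d \simeq 4.5$, this sketch reproduces the probability and significance quoted below.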
Applying this procedure to eBOSS~+~BOSS and Planck, we find $\ln S=0.041\pm 0.16$ with a Bayesian dimensionality
of $d=4.5\pm 0.1$, which correctly indicates that there are approximately 5 cosmological parameters shared
between the two data sets. This can then be related to a tension probability of $p = 0.52 \pm 0.05$
or a tension of $0.64 \pm 0.07\sigma$, which is consistent with the $0.76 \pm 0.05\sigma$
tension between Planck and BOSS alone found by \citet{Tr_ster_2020} and
indicates a good agreement between these data sets.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{figs/bossebossHST_fin22.pdf}
\caption{When an informative prior is imposed on BOSS~+~eBOSS for the shape parameters
$\omega_{\rm b}$, $\omega_{\rm c}$, and $n_{\rm s}$ so as to match the power spectrum
shape obtained by Planck, the recovered constraints (in green) on the evolution parameters
are in a good agreement with Planck (dark blue) with a slightly more significant deviation in
$\omega_{\rm{DE}}$ only: BOSS~+~eBOSS prefer a $\sim$1.2$\sigma$ higher value of
$\omega_{\rm{DE}}$ than Planck. }
\label{fig:boss+eboss_planckshape}
\end{figure}
In addition to the suspiciousness, we want to use a tension metric that allows for a greater
control to focus only on a selected subset of parameters. For this purpose, we use the update
difference-in-mean statistic, $\mathcal{Q}_{\rm{UDM}}$, as described in \citet{raverihu19} and
implemented in \textsc{tensiometer}\footnote{https://github.com/mraveri/tensiometer}
\citep{lemos2020assessing}.
This statistic extends the simple difference in means, where the difference in mean parameter values $\hat{\pmb{\theta}}$
measured by two data sets is weighted by their covariance $\mathbfss{C}$. The `update' in UDM refers to the
fact that instead of comparing data set A with data set B, we consider the updated information
in the combination ${\rm A+B}$ with respect to A by means of
\begin{equation}
\mathcal{Q}_{\rm{UDM}} = \left(\hat{\pmb{\theta}}_{\rm A+B}-\hat{\pmb{\theta}}_{\rm A}\right)^t\left(\mathbfss{C}_{\rm A}-\mathbfss{C}_{\rm A+B}\right)^{-1} \left(\hat{\pmb{\theta}}_{\rm A+B}-\hat{\pmb{\theta}}_{\rm A}\right).
\end{equation}
This has the advantage of the posterior of ${\rm A+B}$ being more Gaussian than that of B alone.
For Gaussian-distributed parameters, $\mathcal{Q}_{\rm{UDM}}$ is chi-square distributed with
a number of degrees of freedom given by the rank of
$\left(\mathbfss{C}_{\rm A}-\mathbfss{C}_{\rm A+B}\right)$. The calculation of
$\mathcal{Q}_{\rm{UDM}}$ may be performed by finding the Karhunen--Lo\`{e}ve (KL) modes
of the covariances and re-expressing the cosmological parameters in this basis. This
transformation allows us to reduce the sampling noise by imposing a limit on the eigenvalues
of the modes that are considered, in this way cutting out those that are dominated by
noise (which represent the directions in which adding B does not improve
the constraints with respect to A). The number of remaining modes corresponds to
the number of degrees of freedom with which $\mathcal{Q}_{\rm{UDM}}$ is distributed. For our tension
calculations we, therefore, only select the modes $\alpha$ whose eigenvalues $\lambda_{\alpha}$ satisfy:
\begin{equation}
0.05 < \lambda_{\alpha}-1<100.
\end{equation}
This corresponds to requiring that a mode of the base data set is updated by at least 5 per cent.
We subsequently find that two modes are constrained when Planck is updated by each of the probe
combinations considered in this work (BOSS~+~eBOSS and BOSS~+~eBOSS~+~DES).
For BOSS~+~eBOSS we get $\mathcal{Q}_{\rm{UDM}} = 2.0$ for the full parameter space, resulting
in a `tension' with Planck of $0.90\sigma$, only slightly higher than what $S$ suggests.
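The procedure described above can be sketched in a few lines (the actual computation is performed with \textsc{tensiometer}; here the KL modes are obtained from the generalised eigenvalue problem $\mathbfss{C}_{\rm A}\,\phi = \lambda\,\mathbfss{C}_{\rm A+B}\,\phi$, following \citealt{raverihu19}):
\begin{verbatim}
import numpy as np
from scipy import linalg, stats

def q_udm(theta_A, cov_A, theta_AB, cov_AB):
    # KL modes of the two covariances; eigh normalises the
    # eigenvectors such that phi^T C_{A+B} phi = identity
    lam, phi = linalg.eigh(cov_A, cov_AB)
    keep = (lam - 1.0 > 0.05) & (lam - 1.0 < 100.0)  # mode cut
    dtheta = phi[:, keep].T @ (theta_AB - theta_A)
    q = np.sum(dtheta**2 / (lam[keep] - 1.0))
    dof = int(keep.sum())
    p = stats.chi2.sf(q, df=dof)
    return q, dof, stats.norm.isf(p / 2.0)
\end{verbatim}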
\begin{figure}
\includegraphics[width=0.99\columnwidth]{figs/liltens_fin22.pdf}
\caption{`Low-redshift' constraints (orange) for flat $\Lambda$CDM obtained from
combining BOSS~+~eBOSS clustering (green) with DES $3\times 2$pt (light blue), compared
with Planck (dark blue). While we obtain good consistency overall, we note the
slight discrepancy between the low-redshift probes and the Planck contours in the
$\log(10^{10}A_{\rm{s}})$--$\sigma_{12}$ and $\omega_{\rm m}$--$\sigma_{12}$
projections, reminiscent of the tension seen in the $\sigma_8$--$\Omega_{\rm m}$ plane. }
\label{fig:lowz}
\end{figure}
\subsection{Joint analysis with DES data}
\label{sec:joint}
Following \citet{Tr_ster_2020}, we want to further investigate the constraints from multiple
low-redshift probes together by adding a weak lensing data set, in this case the
$3\times2$pt measurements from DES Y1. \citet{Tr_ster_2020} used the suspiciousness statistic
to show that the combination of BOSS clustering and KiDS-450 shear measurements is
in $\sim$2$\sigma$ tension with Planck. The most recent KiDS-1000 $3\times2$pt analysis
\citep{kids1000}, where the BOSS galaxy sample was used for galaxy clustering and
galaxy-galaxy lensing measurements, also found a similar level of tension when the entire
parameter space is considered. As DES Y1 measurements have no overlap with either
BOSS or eBOSS, we can treat these data sets as independent and easily combine them to
test whether we also find a similar trend.
The resulting constraints are presented in Fig.~\ref{fig:lowz}. We confirm that
DES is in good agreement with eBOSS~+~BOSS (with $\ln S=-1.08\pm 0.05$, which
corresponds to a $1.3\pm 0.08\sigma$ tension) and it is, therefore, safe to combine
them. The addition of DES data to the analysis provides only slightly tighter constraints
with respect to eBOSS~+~BOSS, with the greatest improvement in $\sigma_{12}$, and
an overall good agreement with Planck.
\begin{figure}
\includegraphics[width=0.99\columnwidth]{figs/liltensHST_fin22.pdf}
\caption{
Constraints on flat $\Lambda$CDM models from the full combination of low-redshift
probes (DES~+~BOSS~+~eBOSS) obtained after imposing a Planck prior on shape parameters
$n_{\rm s}, \omega_{\rm b}$ and $\omega_{\rm c}$ (orange contours). The constraints
from the original uninformative prior analysis (grey contours) and Planck (dark blue contours) are shown for
comparison. The results show similar trends as in the case of BOSS~+~eBOSS.
There is a shift to higher values of $\omega_{\rm DE}$ that leads to a lower power
spectrum amplitude today, $\sigma_{12}$, and, to a lesser extent, a lower
$\log(10^{10} A_{\rm s})$.}
\label{fig:lowz_shape}
\end{figure}
Nevertheless, it is worth noting that, when considering the two dimensional
posterior projections, there are two parameter combinations in particular for
which the $1\sigma$ contours of DES~+~BOSS~+~eBOSS and Planck do not overlap.
The slight discrepancy we observe in the $\omega_{\rm{m}}$ -- $\sigma_{12}$ plane
is reminiscent of the `$\sigma_8$ tension' seen in $\Omega_{\rm{m}}$ -- $\sigma_{8}$
and is larger than the discrepancy displayed by either of the probes individually.
In addition to that, we also see a similarly slight disagreement in the
$\ln(10^{10}A_{\rm s})$ -- $\sigma_{12}$ plane.
The projection of $A_{\rm s}$
with $\sigma_{12}$ (as opposed to $\sigma_8$) allows us to recover the tight degeneracy
between the two parameters, which exposes how, for a given present-day clustering amplitude,
low-redshift probes prefer a higher initial power spectrum amplitude.
We find that adding the DES Y1 $3\times 2$pt measurements worsens the agreement
with Planck with respect to the results obtained from the combination of BOSS and eBOSS alone.
We obtain a suspiciousness of $\ln S=-1.86 \pm 0.14$, corresponding to a tension of
$1.54 \pm 0.10\sigma $.
When considering the UDM statistic across the entire shared parameter space, we
find $\mathcal{Q}_{\rm{UDM}}=6.3$ distributed with 2 degrees of freedom, which translates
into a tension at the $\sim$2.0$\sigma$ level. As for the case of the clustering-only constraints,
$\mathcal{Q}_{\rm{UDM}}$ indicates a greater level of tension than $S$.
\citet{lemos2020assessing} found that the DES Y1 $3\times 2$pt measurements alone
are in a 2.3$\sigma$ tension with Planck, as measured by $\mathcal{Q}_{\rm{UDM}}$. This increases
to $2.4 \pm 0.02\sigma$ when using the suspiciousness statistic. These levels of tension are
comparable with what we find from the full combination of low-redshift probes. The tension between
Planck and weak lensing data sets is usually interpreted as a reflection of tension in the parameter
combination $S_8 = \sigma_8(\Omega_{\rm{m}}/0.3)^{0.5}$, which is taken to describe the
`lensing strength'. The $S_8$ value that we recover from the joint low-redshift probes is also about 2$\sigma$ lower than the Planck constraint (see Table \ref{tab:constraints}). Nevertheless, as we see in Figure \ref{fig:lowz}, there is a comparable discrepancy in the $\log(10^{10}A_{\rm{s}})$--$\sigma_{12}$ plane.
We can use $\mathcal{Q}_{\rm{UDM}}$ in order to quantify and compare the level of tension present in
these two-dimensional projections by calculating it for a subset of shared parameter space. We find that the
amount of tension in both $\Omega_{\rm{m}}$--$\sigma_{8}$ and its $h$-independent equivalent
is $\sim$2.0$\sigma$, whereas $\log(10^{10}A_{\rm{s}})$--$\sigma_{12}$ displays a slightly higher tension
of $\sim$2.1$\sigma$.
We also repeated our fitting procedure with an additional Gaussian prior on the parameters controlling
the shape of the power spectrum to be consistent with Planck, as described in Sec.~\ref{sec:parameters}.
The resulting posteriors are shown in Fig.~\ref{fig:lowz_shape}. We observe the same general trends as
from the analysis of our clustering data alone discussed in Sec.~\ref{sec:clustering}. However,
the prior on the shape parameters leads to larger shifts in the evolution parameters.
This is expected, as DES data on their own cannot constrain the shape parameters well. Adding the
informative priors breaks the degeneracies between shape and evolution parameters and increases
the constraining power significantly. This,
in turn, exposes any discrepancies in the evolution parameters. The values of
$\sigma_{12}$ and $\ln(10^{10}A_{\rm s})$ preferred by our low-redshift probes
when an informative prior is imposed are, respectively, $1.89\sigma$ and $1.22\sigma$ lower
than the corresponding Planck values. Meanwhile, the recovered value for
$\omega_{\rm{DE}}$ is $1.73\sigma$ higher.
\section{Discussion}
\label{section:discussion}
The flat $\Lambda$CDM constraints from the low-redshift probes presented in Sec.~\ref{section:results}
show a consistent picture. Updating the power spectrum model and
supplementing the clustering measurements with eBOSS data brings the joint BOSS~+~eBOSS constraints
to a better agreement with Planck than the BOSS-only results from \citet{Tr_ster_2020}.
These constraints are not significantly modified when these data are combined with DES,
resulting in a good overall consistency with Planck across the entire parameter space, as indicated by
both $S$ and $\mathcal{Q}_{\rm{UDM}}$.
Nevertheless, when considering specific two-dimensional projections we still see intriguing differences,
mainly driven by the lensing data.
Although the constraints in the $\sigma_{12}$ -- $\omega_{\rm{m}}$ plane obtained using BOSS~+~eBOSS
and DES data separately do not show the discrepancy with Planck that characterizes the results in their
$h$-dependent counterparts of $\sigma_8$ and $\Omega_{\rm{m}}$,
the full combination of low-redshift probes tightens the degeneracy between these parameters
and leads to constraints that are just outside the region of the parameter space preferred by Planck.
We also see differences in the $\log(10^{10}A_{\rm s})$ -- $\sigma_{12}$ plane between DES and
Planck, which are inherited by the full combination of low-redshift data sets.
The tight relation between these parameters, which is not seen when using $\sigma_8$, illustrates
the closer link between $\sigma_{12}$ and the overall amplitude of density fluctuations obtained
by eliminating the ambiguity caused by the dependency on $h$.
For a given value of $\sigma_{12}$, Planck measurements prefer a lower initial amplitude of density
fluctuations than DES, suggesting a discrepancy in the total growth of structures predicted by these
two data sets.
Within the context of a $\Lambda$CDM model, the key parameter controlling the
growth of structure at low redshift is the physical dark energy density. Indeed, as
can be seen in Fig.~\ref{fig:lowz} the posterior distribution of $\omega_{\rm DE}$ recovered
from DES extends to significantly higher values than the one obtained using Planck CMB measurements.
The tendency of the low-redshift data to prefer a higher value of $\omega_{\rm DE}$ than
that of Planck can be seen more clearly in the results obtained after imposing a prior on the
shape parameters shown in Fig.~\ref{fig:lowz_shape}. In this case, we find $\omega_{\rm DE} = 0.335 \pm 0.011$
using BOSS~+~eBOSS~+~DES while Planck data lead to $\omega_{\rm DE} = 0.3093 \pm 0.0093$.
A higher value of $\omega_{\rm DE}$ corresponds also to a higher value of $h$.
Therefore, this difference is also interesting in the context of the Hubble tension, as many of
the proposed solutions to this issue focus on modifying the dark energy component.
The analysis of the consistency between low- and high-redshift data has largely focused on
the comparison of constraints on $S_8$, which depends on the present-day value of $\sigma_8$.
Fig.~\ref{fig:S8_z} shows the redshift evolution of $S_8(z)$ predicted by Planck and
the combination of all low-redshift data sets. These curves are consistent at
high redshift during matter domination and start to diverge at $z < 1$, reaching a difference
at the 2$\sigma$ level at $z = 0$.
However, as this redshift is not probed by any LSS data set, the value of $S_8(z=0)$
is an extrapolation based on the assumption of a $\Lambda$CDM background evolution.
Extending this extrapolation to $a > 1$, the difference between the two cosmologies
continues to increase and becomes even more significant.
Therefore, quoting the statistical significance of any discrepancy in the recovered
values of $S_8(z=0)$ might not be the best characterization of the difference in the
cosmological information content of these measurements.
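For reference, the redshift scaling shown in Fig.~\ref{fig:S8_z} follows from the linear growth factor of a flat $\Lambda$CDM model. The sketch below illustrates the computation of $S_8(z) = \sigma_8(z)\left(\Omega_{\rm m}/0.3\right)^{1/2}$; the parameter values are illustrative placeholders, not the posteriors of this analysis.
\begin{verbatim}
# Linear growth factor D(a) in flat LCDM and the implied S8(z).
import numpy as np
from scipy.integrate import quad

Om, sigma8_0 = 0.31, 0.81            # illustrative values

def E(a):                            # H(a) / H0
    return np.sqrt(Om / a**3 + 1.0 - Om)

def D(a):                            # growth factor, D ~ a in matter era
    integral, _ = quad(lambda x: (x * E(x)) ** -3, 1e-4, a)
    return 2.5 * Om * E(a) * integral

def S8(z):
    a = 1.0 / (1.0 + z)
    return sigma8_0 * D(a) / D(1.0) * np.sqrt(Om / 0.3)

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, round(S8(z), 3))
\end{verbatim}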
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{figs/S8z_square.pdf}
\caption{Comparison of the inferred mean values of $S_8(z)$ (solid lines) and their corresponding 68
per-cent confidence levels (shaded areas) for the combination of BOSS~+~eBOSS~+~DES
(orange) and Planck (blue).}
\label{fig:S8_z}
\end{figure}
As discussed before, DES and Planck data appear to prefer different evolutions for the
growth of cosmic structure, which in a $\Lambda$CDM universe depends on
$\omega_{\rm m}$ and $\omega_{\rm DE}$. As the former is exquisitely constrained by
Planck for general parameter spaces, the latter is perhaps the most interesting parameter
to consider. As $\omega_{\rm DE}$ is constant in redshift
for a $\Lambda$CDM universe, the deviations in the value of this parameter recovered
from different data sets could be used as an indication of their consistency within
the standard cosmological model.
\section{Conclusions}
\label{section:conclusions}
We obtained constraints on the parameters of the standard $\Lambda$CDM model
from the joint analysis of anisotropic clustering measurements in configuration space
from BOSS and eBOSS. In particular, we used the information of the
full shape of the clustering wedges of the final BOSS galaxy samples obtained
by \citet{S17} and the Legendre multipoles of the eBOSS DR16 QSO
catalogue of \citet{Hou2021}.
We updated the recipes to describe the non-linear matter power spectrum and the
non-local bias parameters with respect to those used in the BOSS-only analyses of
\citet{S17} and \citet{Tr_ster_2020}.
We directly compared our theoretical predictions for different cosmologies against the
BOSS and eBOSS clustering measurements, without the commonly used RSD and
BAO summary statistics.
We focused on cosmological parameters that can be classified
either as shape or evolution parameters \citep{Sanchez2021}, such as the physical
matter and dark energy densities, instead of other commonly used quantities
such as $\Omega_{\rm m}$ and $\Omega_{\rm DE}$ that depend on the
value of $h$.
Our constraints from the combination of BOSS~+~eBOSS represent improvements ranging from
20 to 25 per-cent with respect to those of \citet{Tr_ster_2020} and are in excellent agreement
with Planck, with the suspiciousness and updated difference in means tension metrics
indicating agreement at the level of 0.64$\sigma$ and 0.90$\sigma$, respectively.
We combined the clustering data from BOSS and eBOSS with the $3\times 2$pt correlation
function measurements from DES Y1 to obtain joint low-redshift cosmological constraints
that are also consistent with the $\Lambda$CDM Planck results,
albeit with larger deviations (1.54$\sigma$ and 2.00$\sigma$ differences as inferred
from $S$ and $\mathcal{Q}_{\rm{UDM}}$, respectively).
We do see interesting discrepancies in certain parameter combinations at the level of
$2\sigma$ or more, such as the $\Omega_{\rm{m}}$ -- $\sigma_{8}$ and
$\omega_{\rm{m}}$ -- $\sigma_{12}$ planes, and,
more significantly, in the $\log( 10^{10}A_{\rm{s}})$ -- $\sigma_{12}$ projection.
For a given value of $\sigma_{12}$, low-redshift probes (mostly driven by DES) prefer
a higher amplitude of primordial density fluctuations than Planck, indicating differences
in the total growth of structure predicted by these data sets.
We further tested the impact of imposing a Gaussian prior on
$\omega_{\rm b}$, $\omega_{\rm c}$, and $n_{\rm s}$ representing
the constraints on these shape parameters recovered from Planck data.
Such a prior leads to a significant improvement in the constraints on the evolution
parameters, such as $\omega_{\rm DE}$ and $A_{\rm s}$.
In this case, we find that the full combination of low-redshift data sets prefers
a value of the physical dark energy density $\omega_{\rm{DE}}$ that is
1.7$\sigma$ higher than that preferred by Planck.
This discrepancy, which is also related to the amount of structure growth preferred by
these data sets, offers an interesting link with the $H_0$ tension, as it points to a higher value of
$h$ being preferred by the low-redshift data.
The advent of new large, high-quality data sets such as the Dark Energy Spectroscopic
Instrument \citep[DESI,][]{desi_survey}, the ESA space mission {\it Euclid} \citep{Laureijs2011},
and the Legacy Survey of Space and Time (LSST) at the Rubin Observatory \citep{Ivezic2019},
will allow us to combine multiple probes and significantly tighten our cosmological constraints.
The discussion of the consistency between different data sets has so far been centred on
the comparison of constraints on $S_8(z=0)$.
As we move on to the analysis of Stage IV data sets, it would be beneficial to shift our
focus from best constrained parameter combinations within a $\Lambda$CDM scenario to
quantities that more closely represent the cosmological information content of those data,
or that have a more direct physical interpretation.
\section*{Acknowledgements}
We would like to thank Benjam\'in Camacho, Daniel Farrow,
Martha Lippich, Tilman Tr\"oster, and Marco Raveri
for their help and useful suggestions.
This research was supported by the Excellence Cluster ORIGINS,
which is funded by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation) under
Germany's Excellence Strategy - EXC-2094 - 390783311.
G.R. acknowledges support from the National Research Foundation of Korea (NRF) through Grants No. 2017R1E1A1A01077508 and No. 2020R1A2C1005655 funded by the Korean Ministry of Education, Science and Technology (MoEST).
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS Collaboration including the
Brazilian Participation Group, the Carnegie Institution for Science,
Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics,
Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) /
University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory,
Leibniz Institut f\"ur Astrophysik Potsdam (AIP),
Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg),
Max-Planck-Institut f\"ur Astrophysik (MPA Garching),
Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE),
National Astronomical Observatories of China, New Mexico State University,
New York University, University of Notre Dame,
Observat\'ario Nacional / MCTI, The Ohio State University,
Pennsylvania State University, Shanghai Astronomical Observatory,
United Kingdom Participation Group,
Universidad Nacional Aut\'onoma de M\'exico, University of Arizona,
University of Colorado Boulder, University of Oxford, University of Portsmouth,
University of Utah, University of Virginia, University of Washington, University of Wisconsin,
Vanderbilt University, and Yale University.
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017, 177.A-3018, 179.A-2004, and 298.A-5015.
\section*{Data availability}
The clustering measurements from BOSS and eBOSS used in this analysis are publicly available
via the SDSS Science Archive Server (https://sas.sdss.org/).
\bibliographystyle{mnras}
\section{Introduction}
The standard model for active galactic nuclei (AGN) posits the existence of a geometrically thin, optically thick accretion disk \citep{Shakura73} around a supermassive black hole (SMBH) as the primary source of continuum emission across ultraviolet (UV) through optical wavelengths. However, the application of the standard thin disk model to normal AGNs is still under debate, and many aspects of AGN accretion physics and emission processes remain poorly understood \citep[e.g.,][]{Collin02,Kishimoto08,Antonucci13,Antonucci15,Lawrence18}.
Although enormous progress has been made recently in interferometric observations that can resolve the sub-parsec infrared emission-line regions of quasars \citep{GravityCollaboration18}, the primary UV-optical emitting regions of AGN accretion disks have angular sizes too small to be resolved by any current facility. Fortunately, two indirect approaches, i.e., microlensing of gravitationally lensed quasars \citep[e.g.,][]{Morgan10} and continuum reverberation mapping \citep[see a recent review;][]{Cackett21}, are capable of resolving structure on the scale of the accretion disk by making use of time-domain information. The former utilizes multi-band photometry of microlensed quasars to constrain the temperature profiles of accretion disks based on the microlensing-perturbation-induced variations of multi-band flux ratios against wavelength \citep[e.g.,][]{Poindexter08,Morgan10,Blackburne11}. The latter measures the inter-band time lags by conducting photometric monitoring campaigns to infer the disk size within the context of the ``lamp-post reprocessing'' model \citep[e.g.,][]{Cackett07}, in which UV-optical variability is interpreted as resulting from disk reprocessing of X-ray emission from a corona located above the disk's central regions. Intensive disk reverberation mapping (IDRM) programs have successfully resolved UV-optical continuum reverberation lags in a growing number of sources in recent years, although the connection between the X-ray and UV/optical variability remains unclear \citep[e.g.,][]{Mchardy14,Shappee14,Edelson15,Fausnaugh16,Edelson17,Kokubo18,Fausnaugh18,Edelson19,Cackett18,Cackett20,HernandezSantisteban20,Lobban20}.
A key result of those continuum reverberation mapping (RM) campaigns is that the UV-optical-NIR lags generally follow a trend consistent with the expected $\tau \propto \lambda^{4/3}$ dependence for reprocessing by a thin disk, but typically with a normalization $\sim$ 3 times larger than the prediction from the standard disk model, suggesting the possibility that disks may be substantially larger than expected. Similarly, disk sizes measured from microlensed quasars also indicate larger radii than anticipated from standard disk models \citep[e.g.,][]{Morgan10,Blackburne11,Mosquera13}. Suggested explanations for the long continuum reverberation lags have included modifications to accretion disk structure and reprocessing geometry \citep{Gardner17} or models in which UV-optical variations originate from temperature fluctuations within the disk itself rather than from reprocessing \citep[e.g.,][]{Cai18,Sun20a,Sun20b}, although \citet{Kammoun21} have argued that lamp-post reprocessing by a standard Novikov-Thorne disk model \citep{Novikov73} can reproduce the observed lag spectra with a more extended corona height than usually assumed.
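The $\tau \propto \lambda^{4/3}$ scaling itself is straightforward to evaluate: for a temperature profile $T(R) \propto R^{-3/4}$, the radius whose emission peaks at a given wavelength scales as $R \propto \lambda^{4/3}$, and the light-travel lag follows. A minimal sketch, in which the normalization $\tau_0$ at the reference wavelength is an assumed placeholder (physically it depends on black hole mass, accretion rate, and geometry):
\begin{verbatim}
# Thin-disk reprocessing lag, tau ~ lambda^(4/3); tau0 at the
# reference wavelength lam0 is an assumed placeholder.
def disk_lag_days(lam, tau0=1.0, lam0=3000.0):
    return tau0 * (lam / lam0) ** (4.0 / 3.0)

for lam in (3000.0, 5100.0, 8000.0):   # wavelengths in Angstrom
    print(lam, round(disk_lag_days(lam), 2))
\end{verbatim}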
Another possibility is that continuum emission originating from spatial scales larger than the accretion disk may be responsible for reverberation lags in excess of simple model predictions.
It is well known that the UV-optical continua of AGN include reprocessed emission from the broad-line region (BLR) in addition to the dominant disk emission \citep{Malkan82,Wills85,Maoz93,Korista01}. The most visible evidence of this BLR emission is the contribution of hydrogen free-bound emission to the ``small blue bump'' feature spanning $\sim2200-4000$~\AA, which also includes \ion{Fe}{2} emission blends and other features. The strong Balmer continuum emission below the Balmer jump at 3647~\AA\ (all wavelengths are in vacuum through this paper) is just one portion of the overall nebular diffuse continuum (DC) emission, which consists of free-free, free-bound, and scattered continuum emission spanning all wavelengths from the UV through near-infrared \citep{Korista01}. Recently, the DC emission has been examined in detail by \citet{Lawther18} and \citet{Korista19}, who carried out photoionization modeling to assess the flux spectrum and lag spectrum of DC emission over a broad range of physical conditions of BLR clouds, finding that the DC contribution to the total continuum emission can be as high as $\sim40\%$ at wavelengths below the Balmer jump. Since the DC emission arises from the BLR, its reverberation response will introduce an additional delay signal to measured continuum lags. Continuum reverberation mapping campaigns have identified a distinct excess in lag in the $U$-band spectral region in several objects that has been identified with this DC emission \citep[e.g.,][]{Edelson15, Fausnaugh16, Cackett18, Cackett20, HernandezSantisteban20}. If the observed $U$-band excess lags do arise from DC emission from the BLR, this implies that the continuum lags across all wavelengths are also affected by the DC component, and other AGN monitoring results have suggested a substantial or even dominant contribution of DC emission to AGN optical variability \citep{Chelouche19,Cackett21b,Netzer21}.
Even if the DC component does not dominate the optical continuum reverberation lags, it certainly contributes to the lag spectrum, and determining the magnitude of its contribution is essential for isolating the wavelength-dependent lags of the accretion disk itself. To model precisely the impact of the DC component on the lag spectrum, we need to isolate the contributions of the DC and disk emission in total flux as well as in the wavelength-dependent variability. In the optical, the strongest feature in the DC spectrum appears at the Balmer jump. However, the Balmer jump spectral region contains a multitude of other emission lines including \ion{Fe}{2} emission blends as well as high-order Balmer emission lines \citep{Wills85}, making it difficult to obtain a unique determination of the DC emission strength in total flux.
The Paschen jump at 8206 \AA\ provides another possible diagnostic of the DC contribution that is easily accessible to observations for low-redshift AGN. In contrast to the complex blend of emission features in the small blue bump region, the Paschen jump is relatively uncontaminated by other emission lines. The strength of the Paschen jump in DC spectra is expected to be much weaker than that of the Balmer jump \citep{Lawther18, Korista19}, and it has consequently received much less attention than the Balmer jump region as a diagnostic of nebular continuum emission, but with data of high S/N the Paschen spectral region could still allow for a useful and independent assessment of the DC strength that avoids the degeneracies inherent in modeling the small blue bump region. \citet{Malkan82} proposed that the Paschen jump strength in AGN should be ``almost completely washed out'' as a result of dilution by the featureless continuum and the blended flux from broadened high-order Paschen emission lines. Higher-quality data from later near-infrared spectroscopic programs \citep[e.g.,][]{Osterbrock92, Rodriguez-Ardila00} also indicated that there was no obvious Paschen jump in Seyfert 1 galaxy spectra, although these programs did not derive quantitative constraints on the nebular continuum contribution.
Prompted by recent developments in AGN continuum reverberation mapping, we have revisited the question of whether the DC Paschen jump is detectable in AGN spectra. In this work, we use spectra of nearby AGN from the Space Telescope Imaging Spectrograph (STIS) on the Hubble Space Telescope (HST) to fit multi-component models to the Paschen jump spectral region and examine the constraints that can be derived on the DC contribution. While there are only a small number of existing STIS spectra covering the Paschen jump in unobscured Seyfert 1 galaxies, these spectra have the advantage (compared with ground-based spectra) of excluding nearly all host galaxy starlight, thanks to the 0\farcs1 or 0\farcs2 slit width of STIS, and this largely eliminates degeneracies between AGN and stellar continuum components when fitting models to the data. We describe our sample selection and spectral decomposition method in \S \ref{sec:sample}. The fitting results and DC contributions are demonstrated in \S \ref{sec:results}. Finally, we discuss the implications of the spectral modeling in \S \ref{sec:dis} and conclude in \S \ref{sec:con}.
\section{Sample and Spectral Decomposition} \label{sec:sample}
\subsection{Sample, observations, and reductions}
The sample for this work is based on the subset of reverberation-mapped AGN \citep{BentzKatz15} at redshifts $z < 0.1$, to ensure that the continuum redward of the Paschen jump and the Pa$\epsilon$ line (important for constraining the contribution of the high-order Paschen series) falls within the STIS G750L band. A search of the HST archive for STIS observations with the G750L grating yielded a sample of six nearby Seyfert 1 galaxies (see Table \ref{tab:sample}). In addition to these six objects, there are also archival G750L spectra of a few reverberation-mapped radio-loud sources (including 3C 120, 3C 382, and 3C 390.3), but we did not include these in our sample in order to avoid objects in which jet emission might contribute to the optical continuum.
Most of our targets were observed by HST in a single visit except for Mrk 110 and NGC 4593 (see Table \ref{tab:sample}). In order to measure the wavelength-dependent lags and constrain the DC contribution, Mrk 110 and NGC 4593 were observed over multiple visits coordinated with recent reverberation mapping campaigns. Mrk 110 \citep{Vincentelli21} was observed on three occasions (2017 Dec 25, 2018 Jan 3, and 2018 Jan 10), while NGC 4593 \citep{Cackett18} was observed approximately daily from 2016 Jul 12 to 2016 Aug 6 with 26 successful observations.
Spectra obtained with the STIS G750L grating cover the wavelength range from 5240 to 10270~\AA\ with an average dispersion of 4.92~\AA/pixel, and a spatial scale of 0\farcs05 pixel$^{-1}$. We downloaded the raw STIS data and calibration files from the HST archive and reprocessed the data to improve cosmic-ray and bad pixel cleaning and to apply fringing corrections. We applied charge transfer inefficiency corrections to the data with the stis\_cti package\footnote{\url{https://www.stsci.edu/hst/instrumentation/stis/data-analysis-and-software-tools/pixel-based-cti}} in addition to the standard pipeline. STIS CCD spectra taken at wavelengths $>$7000~\AA\ are impacted by fringing, caused by interference of multiple reflections between two surfaces of the CCD \citep{Goudfrooij98}. We defringed the G750L spectra with contemporaneously obtained fringe flats according to standard STIS data reduction procedures.\footnote{\url{https://stistools.readthedocs.io/en/latest/defringe_guide.html}}
One-dimensional spectra were extracted from the wavelength- and flux-calibrated CCD frames using an extraction width of 0.35 arcsec. For each AGN other than NGC 4593 and NGC 3227, we combined the individual extracted spectra to obtain a mean flux spectrum and error spectrum to be used for spectral fitting. For NGC 4593, we found that the quality of the fringe correction varied over the monitoring campaign, and we chose to use only the epoch having the cleanest fringe correction (2016 Jul 21), rather than averaging together data from different visits.
The calibrated spectra were then corrected for Galactic extinction (see Table \ref{tab:sample}) based on the \citet{Schlegel98} dust map and the \citet{Fitzpatrick99} extinction law assuming $R_{V}=3.1$, and transformed to the AGN rest frame.
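A minimal sketch of these two steps, assuming the publicly available {\tt extinction} package for the \citet{Fitzpatrick99} curve (the function and variable names are illustrative):
\begin{verbatim}
# Remove Milky Way extinction and shift to the AGN rest frame.
import numpy as np
import extinction

def deredden_and_restframe(wave, flux, a_v, z, r_v=3.1):
    # wave: float64 array of observed wavelengths in Angstrom
    a_lam = extinction.fitzpatrick99(wave, a_v, r_v)  # A_lambda [mag]
    flux_corr = flux * 10.0 ** (0.4 * a_lam)          # undim the flux
    # The (1+z) rescaling of the flux density is omitted here,
    # as only the spectral shape matters for the fits below.
    return wave / (1.0 + z), flux_corr
\end{verbatim}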
All of these AGN also have exposures in STIS UV and optical settings obtained contemporaneously with the G750L data. For this work, we primarily make use of the G750L spectra. While fits to a broader spectral range can further constrain the DC contribution, broad-band spectral models are subject to a variety of other degeneracies and ambiguities related to the spectral shape of the accretion disk continuum, the emission from \ion{Fe}{2} blends, and numerous other emission features that would need to be modeled in order to obtain precise fits to the small blue bump spectral region. Our focus in this paper is on the more restricted question of whether the DC component can be detected and constrained in the Paschen jump region, and we defer the larger and more challenging problem of broad-band spectral modeling to future work.
\begin{deluxetable*}{lccccccccc}[htb]
\tablecaption{Sample information \label{tab:sample}}
\tablecolumns{10}
\tablewidth{0pt}
\tablehead{
\colhead{Name} &
\colhead{Obs. date} &
\colhead{Exp. time $\times$ n } &
\colhead{Slit width} &
\colhead{PID} &
\colhead{$z$} &
\colhead{log $M_{\rm BH}$ }&
\colhead{log $L_{\rm 5100,AGN}$}&
\colhead{$\dot{m}$}&
\colhead{$A_{\rm V}$}\\
\colhead{} &
\colhead{} &
\colhead{(s)} &
\colhead{(\arcsec)} &
\colhead{} &
\colhead{}&
\colhead{(\ensuremath{M_{\odot}})}&
\colhead{ (erg s$^{-1}$)}&
\colhead{}&
\colhead{(mag)}
}
\startdata
Mrk 110 & 2017 Dec 25 -- 2018 Jan 10 &60 $\times$ 9 & 0.2 & 15413 &0.035&7.29 &43.62&0.157 &0.021 \\
Mrk 493 & 2017 Aug 28 &532 $\times$ 7 & 0.2 & 14744 &0.031&6.12 &43.11&0.718 &0.065 \\
Mrk 509 & 2017 Oct 22 &50 $\times$ 4 & 0.2 & 15124 &0.034&8.05 &44.13&0.009 &0.152 \\
NGC 3227& 2000 Feb 8 &120 $\times$ 1 & 0.2 & 8479 &0.004&6.78 &42.24&0.021&0.059 \\
NGC 4151& 2000 May 28 &720 $\times$ 3 & 0.1 & 8136 &0.003&7.56 &42.09&0.002 &0.071\\
NGC 4593& 2016 Jul 21 &288 $\times$ 1 & 0.2 & 14121&0.008&6.89 &42.87&0.070 &0.065\\
\enddata
\tablecomments{For NGC 4593, the date listed corresponds to the observation from the \citet{Cackett18} monitoring campaign for which we obtained the cleanest fringe correction, and only this observation was used for spectral fitting. The reverberation-based black hole masses and AGN continuum luminosities (given as $\log[\lambda L_\lambda]$ at 5100~\AA) are obtained from \cite{BentzKatz15}, and the normalized Eddington accretion rate is based on $L_{\rm Bol} = 9.26L_{\rm 5100,AGN}$. PID lists the HST program ID for each observation. The extinctions ($A_{\rm V}$) are based on \citet{Schlegel98} and assume $R_{V}=3.1$.}
\end{deluxetable*}
\begin{figure}
\centering
\includegraphics[width=9.cm]{model.pdf}
\caption{Models of diffuse continuum (DC) emission and high-order Paschen lines. Upper panel: DC spectrum predicted by photoionization models for NGC 5548 \citep{Korista19}, used in our model fits. Lower panel: the DC model and the BLR Paschen-series emission lines up to $n = 164$, over the range 8000--8500~\AA. The Paschen lines have been broadened with a Gaussian kernel with $\sigma = 65$ km s\ensuremath{^{-1}}.}
\label{fig:model}
\end{figure}
\subsection{Spectral decomposition method}
\label{sec:method}
We fit models to the data using the spectral fitting code {\tt PyQSOFit} \citep{Guo18,Shen19}, modified to incorporate additional model components. All of the components, including accretion disk emission (modeled as either a power law or a standard thin disk model), nebular DC, host galaxy starlight, \FeII\ emission, and high-order Paschen lines with blended \SIII\ lines, are modeled together from 6800 to 9700~\AA\ after masking out unrelated prominent emission lines, and the model is optimized by $\chi^2$ minimization.
For the DC emission, we use a model spectrum from \cite{Korista19}. This spectrum is based on local optimally-emitting cloud photoionization models for the BLR in NGC 5548, assuming a 100\% cloud coverage of the source's sky, hydrogen column density $\log(N_\mathrm{H}/\mathrm{cm}^{-2}) = 23$, and hydrogen gas density $\log(n_\mathrm{H}/\mathrm{cm}^{-3})$ integrated from 8 to 12 dex. For this work, we use only the red portion of this model ranging from 6800 to 9700~\AA. In this wavelength range, the DC spectrum primarily consists of free-bound and free-free emission. Figure \ref{fig:model} displays the full DC model over 1000--10000~\AA, including the strong scattered Ly$\alpha$ feature and Balmer jump as well as the Paschen jump. The model fitting routine modifies the DC spectrum by two free parameters: the Doppler velocity broadening and the flux normalization factor. Velocity broadening is implemented through convolution by a Gaussian kernel after rebinning the DC spectrum to a linear grid in $\log\lambda$. Following convolution, the broadened spectrum is rebinned to match the wavelength scale of the data.
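A minimal sketch of this broadening step (function and variable names are illustrative):
\begin{verbatim}
# Velocity-broaden a template: rebin onto a grid linear in
# log(lambda), where a fixed velocity width corresponds to a fixed
# pixel width, convolve with a Gaussian, and rebin back.
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458

def broaden(wave, flux, sigma_v, n=20000):
    loglam = np.linspace(np.log(wave[0]), np.log(wave[-1]), n)
    f_log = np.interp(loglam, np.log(wave), flux)
    dv = (loglam[1] - loglam[0]) * C_KMS     # km/s per pixel
    f_broad = gaussian_filter1d(f_log, sigma_v / dv)
    return np.interp(np.log(wave), loglam, f_broad)
\end{verbatim}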
High-order Paschen lines need to be included for accurate fitting of this spectral region \citep{Malkan82,Wills85,Guseva06,Reines10}. An important parameter here is the BLR gas density, because high gas densities set an upper limit to the number of Paschen series lines and also shift the wavelength of the Paschen jump slightly redward. Ionization potentials are calculated for isolated atoms, but at high densities atoms are subject to the electric fields of neighboring atoms/ions; as the density increases, these perturbations become increasingly important and the number of bound levels diminishes. The ionization potential is effectively lowered, which shifts the Paschen jump and the limit of the high-order lines to longer wavelengths. Here we consider Paschen lines from Pa$\epsilon$ ($n = 8$, 9548.5~\AA, where $n$ represents the upper energy level of the electron transition) to $n = 164$ for the BLR. This upper limit of $n=164$ for bound states is an estimate corresponding to a gas density of $n_\mathrm{H} = 10^{10}\ \mathrm{cm}^{-3}$, at which levels with larger $n$ would have bound-state orbits larger than the typical separation between atoms. The flux ratios of the first 50 Paschen lines are derived from \cite{Hummer87}, assuming a typical BLR with an electron temperature of 15,000 K and a density of $\rm 10^{10}\ cm^{-3}$ (case B). However, $n = 50$ may still be insufficient in some scenarios \citep{Kovacevic14}, since the slope formed by the blended high-order lines starts to decrease before it reaches the Paschen edge, especially when the line dispersion is not large enough to cover the discontinuity. We therefore extrapolate the line intensities to $n = 164$ ($\lambda=8208.66$~\AA) by fitting a polynomial to the intensities up to $n = 50$, as sketched below. The Paschen-series line spectrum is shown in the lower panel of Figure \ref{fig:model}. In the model fits, the Paschen emission-line spectrum shares the same Doppler velocity broadening as the DC spectrum, while its overall flux normalization is a free parameter.
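A minimal sketch of this extrapolation, in which the log-space cubic fit is our own illustrative choice and {\tt ratios\_n8\_to\_50} stands in for the tabulated \cite{Hummer87} intensities (43 values, for $n = 8$ to 50):
\begin{verbatim}
import numpy as np

def extrapolate_series(ratios_n8_to_50, n_max=164, deg=3):
    n_known = np.arange(8, 51)
    coeffs = np.polyfit(n_known, np.log(ratios_n8_to_50), deg)
    n_all = np.arange(8, n_max + 1)
    return n_all, np.exp(np.polyval(coeffs, n_all))

def paschen_wavelength(n):
    # Rydberg formula with R_H = 1.09678e-3 per Angstrom (vacuum);
    # n = 8 gives 9548.6 A and n = 164 gives ~8208.7 A.
    return 1.0 / (1.09678e-3 * (1.0 / 9.0 - 1.0 / n**2))
\end{verbatim}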
The high-density gas will also shift the Paschen continuum jump slightly redward: shifts of $+3.1$ \AA\ for the free-bound limit and of $+0.31$ \AA\ for the high-order Paschen lines are expected. Because these small wavelength shifts make almost no difference to the fitting results once the spectral models are broadened to the velocity widths of the BLR (several thousand km s$^{-1}$), we neglected them in the following analysis.
Other continuum components include the accretion disk spectrum, host-galaxy starlight, and dust emission. The accretion disk spectrum is modeled using either a power-law (PL) spectrum (a reasonable first approximation over this limited wavelength range) or a standard thermally emitting accretion disk model \citep[][hereafter SSD]{Shakura73} with an outer radius ranging from 500 to 10,000 Schwarzschild radii ($R_{\rm g}$, allowed to vary as a free parameter) and a fixed inner radius of 3$R_{\rm g}$. The power-law slope of the $F_\lambda$ spectrum from the SSD model in our fitting range is around $-7/3$, slightly depending on the black hole mass ($M_{\rm BH}$) and the normalized Eddington accretion rate ($\dot{m}$), which is estimated from the AGN continuum luminosity at 5100~\AA\ by assuming $L_{\rm Bol} = 9.26L_{\rm 5100, AGN}$ \citep{Richards06}. The reverberation-based BH mass, continuum luminosity at 5100~\AA, and $\dot{m}$ of each object are listed in Table \ref{tab:sample}. The potential contribution of starlight is modeled using simple stellar population (SSP) model spectra from \citet[][hereafter BC03]{Bruzual03}, allowing for a linear combination of model spectra spanning a range in age to achieve the best fit to the data. To model emission from hot dust, which is expected to add a small amount of the continuum flux in this spectral region \citep{Honig14}, we use a single-temperature blackbody model\footnote{Although there will be continuum contributions from dust with a range of grain sizes and chemical compositions, and from thermally emitting gas \citep{Korista19}, we use the simplest model to represent the dust emission due to the limited wavelength coverage of our STIS data.}, with a free temperature in the range 1200--1900~K \citep{Netzer15} and a free normalization factor.
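For reference, the $\dot{m}$ values in Table \ref{tab:sample} follow from this bolometric correction together with the standard Eddington luminosity; a minimal sketch:
\begin{verbatim}
# Normalized accretion rate from L_5100 and M_BH, using
# L_bol = 9.26 L_5100 and L_Edd = 1.26e38 (M_BH / M_sun) erg/s.
def eddington_ratio(log_L5100, log_MBH):
    L_bol = 9.26 * 10.0 ** log_L5100     # erg/s
    L_edd = 1.26e38 * 10.0 ** log_MBH    # erg/s
    return L_bol / L_edd

print(round(eddington_ratio(43.62, 7.29), 3))  # Mrk 110: 0.157
\end{verbatim}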
Emission lines other than the Paschen-series lines were mostly masked out from the fits, except for two [\ion{S}{3}] lines blended with high-order Paschen lines. These [\ion{S}{3}] lines ($\lambda\lambda$9068, 9531~\AA) were modeled as single Gaussians. We do not include iron emission templates in these fits since \ion{Fe}{2} emission lines do not contribute significantly beyond 7000~\AA.
\begin{figure*}
\centering
\hspace*{-1cm}
\includegraphics[width=20.cm]{J1228+4407.pdf}
\caption{Spectral fitting for SDSS J1228$+$4407, a star-forming galaxy at $z = 0.0007$ with a known Paschen jump from nebular continuum emission. The dereddened spectrum (black) is modeled with a synthetic host component (yellow) from BC03, diffuse continuum (DC) from Starburst99 (cyan), and Paschen lines to $n = 300$ (brown). The residuals (grey) are shifted downward for clarity and unrelated emission lines are masked. The blue dashed line indicates the Paschen jump at 8206~\AA.}
\label{fig:sdss}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17.3cm]{spec.pdf}
\caption{Spectral decompositions of the six AGN in our sample, for the PL$+$SingleG model fits. The components are the same as in Figure \ref{fig:sdss}, except for a power-law component for the disk and single Gaussian profiles for the blended [\ion{S}{3}] lines. In these fits, the hosts are modeled with a 5 Gyr-old SSP model, but with zero flux in all objects. Primary emission lines and Paschen lines (Paschen jump) are marked with grey and blue dotted (dashed) lines, respectively. Some fringing residuals are still apparent at the longer wavelengths.
}
\label{fig:fit}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=18.cm]{frac.pdf}
\caption{\emph{Left:} The fractional contribution of the DC component as a function of wavelength, from the PL$+$SingleG model. The DC contribution at wavelengths shortward of the Paschen jump is around 10\% to 50\% with respect to the total DC$+$disk continuum. \emph{Right:} The range in DC fraction at 8000~\AA\ for each AGN, based on the range of model results listed in Table \ref{tab:models}.}
\label{fig:frac}
\end{figure*}
\section{Spectral fitting results}\label{sec:results}
\subsection{Star-forming galaxy with a known Paschen jump}\label{sec:sdss}
Star-forming galaxies often exhibit a strong Paschen jump originating from \ion{H}{2} regions \citep[e.g.,][]{Guseva06,Reines10}. As a test of our model-fitting approach, we first apply the fitting method to a Galactic-extinction-corrected spectrum of a star-forming galaxy in the rest frame. For this test, we selected one galaxy with a clear Paschen jump, SDSS J1228$+$4407 at $z$ = 0.0007 (SDSS Plate-MJD-FiberID: 1371-52821-0059), from the sample of \citet{Guseva06}. To model the host-galaxy stellar component, we used a linear combination of 39 SSP models from BC03 and allowed the fit to optimize the weights for each component. As shown in Figure \ref{fig:sdss}, an obvious spectral break is present at the Paschen edge.
For this model fit, we incorporated a separate nebular continuum spectral model appropriate to the physical conditions of a star-forming galaxy (rather than an AGN BLR) using the Starburst99 models\footnote{\url{https://www.stsci.edu/science/starburst99/docs/default.htm}} \citep{Leitherer99}. The Starburst99 model is based on an instantaneous burst of $10^4~M_{\odot}$ with a Kroupa initial mass function (0.1$-$100~$M_{\odot}$) \citep{Kroupa01}, the Geneva evolutionary tracks with high mass loss, and the Pauldrach/Hillier atmospheres \citep{Hillier98,Pauldrach01}. The Paschen-series emission lines were also separately obtained from \cite{Hummer87} using case B with an estimated electron temperature of 10,000 K from \cite{Guseva06} and a density of $\rm 10^2\ cm^{-3}$ estimated from the [\ion{S}{2}] line ratio $f(6716)/f(6731) \sim 1.3$ \citep{Osterbrock89}. The high-order Paschen lines are extrapolated to $n = 300$ ($\lambda=8206.07$~\AA) rather than the $n=164$ limit used for our BLR models, due to the lower gas density in the \ion{H}{2} region environment. In the best-fitting model, the starlight component is dominated by two SSP models, having stellar population ages of 900 Myr and 9 Gyr, with a flux ratio of $\sim$2:1 at 8000~\AA; that is, the starlight is dominated by the younger population.
Figure \ref{fig:sdss} shows that the model fit to the star-forming galaxy spectrum successfully matches the overall continuum shape across the Paschen jump region as the sum of a starlight spectrum and the nebular continuum, except in a narrow ``notch'' in the model where the opposite spectral jumps from the nebular continuum and the blended high-order Paschen lines intersect. This small gap in the model may be the result of an imperfect match of the model to the velocity broadening of the DC component, or an imperfect estimation of the blended high-order Paschen lines based on our simple extrapolation from the $n\leq50$ line intensities, given that our model fit slightly underestimates the peak fluxes of the individual Paschen lines at $\lambda>8350$~\AA. Additionally, the absorption component around the Paschen jump in the synthetic host could partially account for the flux deficit. In the model fit, the DC contribution is nearly equal to the host-galaxy flux at wavelengths just below 8206~\AA, consistent with model-fitting results from other studies of similar targets \citep{Reines10}. The strong nebular Paschen jump feature in this galaxy spectrum, easily visible in total flux, stands in strong contrast to the AGN spectra that we discuss below.
\subsection{Model fits to AGN spectra}
Figure \ref{fig:fit} displays the STIS spectra of the six AGN in our sample. Unlike the star-forming galaxy spectrum, there is no obvious sign of a jump or discontinuity at the Paschen edge in any of these spectra. To a first approximation, the continuum in each object appears smooth and featureless across this wavelength range. Here we apply the spectral decomposition method outlined above in Section \ref{sec:method} to these six STIS spectra.
To test the sensitivity of the results to the assumptions made in constructing the model, we carried out several iterations of the fit with different model components. In each case, the DC component and high-order Paschen lines were incorporated as described in Section \ref{sec:method}. First, a simple model consisting of a power-law (PL) AGN continuum and a single age stellar population model (``SingleG'') was tested as a reference. We expect that the STIS spectra are AGN dominated and the host contribution should be small due to the small aperture size adopted: for example, \cite{Vincentelli21} estimated that the host contribution to the Mrk~110 STIS spectrum over 7000--9500~\AA\ is less than 5\%. Furthermore, we detect no evidence of high-order Paschen absorption lines that might be expected from a post-starburst stellar population. Thus, we model the host using a single 5 Gyr-old, solar-metallicity stellar population model, since that is generally expected to be present in the host galaxy bulge and there is little spectral difference between the 5 Gyr-old and other older SSP models. Then we considered variations of this model that included (1) a more physically motivated disk component by using the SSD continuum instead of a single PL; (2) a contribution of thermal dust emission from the torus. The model combinations that we tested include: PL$+$SingleG$+$Dust, SSD$+$SingleG, and SSD$+$SingleG$+$Dust.
Focusing primarily on the continuum components in the fit, we first mask out unrelated broad and/or narrow emission lines, including \ion{He}{1} $\lambda\lambda$7065,7281, [\ion{Ar}{3}] $\lambda$7136, [\ion{Ar}{4}] $\lambda$7171, [\ion{O}{2}] $\lambda\lambda$7320,7330, \ion{O}{1} $\lambda$8446, and \ion{Ca}{2} $\lambda\lambda\lambda$8498,8542,8662, where these lines are found to be present in the STIS data (see Figure \ref{fig:fit}). The fits are then carried out over the rest wavelength range 6800 -- 9700~\AA, over a total number of pixels between 1541 and 1586 (after masking) for each object.
The spectral fitting results for models with a power-law AGN continuum and single stellar population are displayed along with the STIS data in Figure \ref{fig:fit}. As expected, the host fraction in each object is zero when a 5 Gyr-old SSP model is used. The model fits also demonstrate that, while the DC contribution can be significant shortward of the Paschen jump (also see below and Table \ref{tab:models}), its ``step-down'' feature can be almost perfectly offset by the excess of the blended and broadened high-order Paschen lines. These results demonstrate that a smooth and featureless continuum can still be compatible with a substantial contribution of DC flux around 8000~\AA, as predicted by \citet{Malkan82}.
The left panel of Figure \ref{fig:frac} illustrates the fractional contribution of the DC component with respect to the total flux for these initial model fits. The DC fraction from the BLR at 8000~\AA\ ranges from $\sim$ 10 to 50\% in different objects, with a median value of $\sim$ 20\%. This significant contribution of DC is consistent with discoveries in star-forming galaxies \citep{Guseva06,Reines10}. However, the results of these fits are not unique, as the inferred DC fraction depends on the assumptions made for the other continuum components (AGN disk emission, starlight, and dust). Table \ref{tab:models} and the right panel of Figure \ref{fig:frac} illustrate the range in the inferred DC fraction for each object for the different model variants described above. The range in fitting results for different model variants indicates that the uncertainty in the DC fraction is dominated by the choices made for other continuum components, similar to the conclusions of \citet{Vincentelli21}. This range is a factor of $\sim$ 1 to 3 for the six objects, much larger than the typical statistical uncertainties in DC fraction ($\lesssim$ 5\%) from the Monte Carlo error analysis procedure applied to each model fit. It is worth noting that the 22 -- 29\% DC fraction of Mrk 110 is consistent with the 12 -- 30\% DC fraction in $i$ band found by \cite{Vincentelli21}, indicating that our Paschen jump fit yields a similar result as the full spectral fit over 1500 -- 10000~\AA\ in this object. (The Vincentelli et al.\ model fit to the STIS spectrum of Mrk 110 did not include blended high-order Balmer and Paschen lines; instead, the regions around the Balmer and Paschen jumps were simply masked out from their fit.)
The choice of disk spectrum (PL or SSD) substantially alters the DC fraction in all cases, while the dust contribution is usually minor (almost zero for PL) and only slightly affects the DC fraction, except in Mrk 493 for the SSD model fit. The best-fitting PL slopes (close to the observed spectral slope, see Table \ref{tab:models}) in all cases are much shallower than the expected slope of $-7/3$ from the SSD model, such that the spectral fitting can only obtain a satisfactory fit with the SSD continuum by lowering its contribution. Only in Mrk 110 is the DC fraction similar between the PL and SSD models, primarily due to the relatively steep spectral slope of the data. In addition, if we employ more complex SSP models\footnote{We also tested more complex SSP models: a linear combination of three SSP models with ages of 100 Myr, 900 Myr, and 5 Gyr, together with the PL model. The young stellar population only has small contributions in NGC~3227 and NGC~4151, which increases the DC contribution by a fraction of $\sim$ 3\% to 6\%.} that include a young stellar population that could mimic the AGN disk continuum, this will naturally reduce the disk fraction and therefore increase the relative DC contribution. We consider the PL$+$SingleG model as our fiducial result, as this model is the simplest and it yields the best reduced $\chi^2$ for all six objects. Moreover, it is worth emphasizing that we can consider the DC fractional contribution in the PL$+$SingleG model as a lower limit based on the current DC model from \citet{Korista19}, since it gives the fraction necessary to balance the high-order Paschen lines in the simplest model if no obvious Paschen jump appears in total flux, and any additional components will usually reduce the disk contribution and thus increase the DC fraction.
One important note of caution for the interpretation of these fitting results is that the DC model is based on photoionization modeling tailored to the properties of NGC 5548 \citep{Korista19} and not computed specifically for each object in our sample. Differences in physical conditions within the BLR, such as the gas metallicity and the distributions of gas number density and column density, can alter the DC shape and strength \citep{Korista19}. Furthermore, there is a relatively large uncertainty in estimating the ionizing photon flux from an optical continuum measurement, and this quantity enters the photoionization model through the mapping between incident flux and radius. Our modeling results depend on the simplifying assumption that the DC spectral shape and hydrogen ionizing photon flux are intrinsically similar among the objects in our sample, apart from the DC component's fractional contribution to the flux and its velocity broadening. Moreover, a single PL fit is based on the asymptotic spectral energy distribution (SED) behavior of a disk with infinite radius, and is not expected to be an accurate model for the actual disk continuum except as an approximation over very restricted wavelength ranges. A real disk SED is expected to turn downward at wavelengths in the near-IR, asymptoting toward the Rayleigh-Jeans tail of the last annulus in the disk. This would allow for potentially greater contributions from the DC and/or the dusty torus. Finally, winds originating from the central accretion disk region and BLR would produce faint broad wings to lines but would be otherwise hard to detect \citep{Dehghanian20}. The optical/NIR emission from the wind could be significant and would manifest as warm free-free emission with recombination features superimposed.
\subsection{Validation of DC models}
\subsubsection{Comparing with photoionization predictions}
One important consistency check for these spectral fitting results is whether the inferred DC strength is compatible with the fluxes of BLR emission lines such as Ly$\alpha$ and the Balmer lines. We can carry out this check using photoionization modeling results from \citet{Korista19}.
To complement the STIS G750L spectra for our AGN sample, we also collected and combined the available STIS data in the G140L and G430L grating settings, for observations taken contemporaneously with the G750L data. After correcting for the Galactic extinction, we conducted local fits to the spectral regions surrounding Ly$\alpha$ and \hb\ with {\tt PyQSOFit} following methods similar to \cite{Shen19}. The broad Ly$\alpha$ and \hb\ are separated from the narrow components for luminosity estimation. Table \ref{tab:line} presents the measurements of these line luminosities along with continuum luminosities at 1215~\AA\ ($L_{\rm 1215}$). The Ly$\alpha$ line for Mrk 110 is unavailable due to the lack of G140L spectra. The continuum luminosity of Mrk 110 at 1215~\AA\ is estimated by extrapolating the power-law continuum from a near-UV spectral fit to the STIS G230L data. The far-UV spectrum of NGC 3227 (both continuum at 1215~\AA\ and Ly$\alpha$) is heavily reddened by small dust grains intrinsic to the AGN \citep{Crenshaw01,Mehdipour21}, especially at wavelengths below 2000~\AA, and we thus exclude it in our comparison with photoionization models. The blue wing of Ly$\alpha$ is strongly affected by broad and/or narrow absorption lines for all objects in our sample, and its red wing is blended with \ion{N}{5} $\lambda$1240. Despite masking these absorption lines, it is still very challenging to estimate precisely the intrinsic line profile and to separate the broad and narrow emission components. Consequently, the measurement uncertainty of Ly$\alpha$ is estimated to be $\sim$ 20\%, much higher than those of \hb\ and $L_{\rm 1215}$ ($\lesssim$ 5\%).
To compare the DC fluxes with H$\beta$, we extrapolated the DC models (see Table \ref{tab:models}) for each object based on the Paschen jump region spectral fits to derive the DC luminosities at 5200~\AA. The DC luminosity ranges (based on the set of model fits listed in Table \ref{tab:models}) are listed in Table \ref{tab:line}, where $L_\mathrm{DC,5200}$ refers to $\lambda L_\lambda$ for the DC component at $\lambda=5200$~\AA.
\begin{figure}
\hspace{-1cm}
\centering
\includegraphics[width=9.cm]{check.pdf}
\caption{STIS spectra of the six AGN in our sample, shown with extrapolated DC fluxes from different models. The STIS spectra are obtained from the G430L and G750L gratings. The DC spectra correspond to different model fits listed in Table \ref{tab:models}. The DC components are velocity broadened according to their original fitting results. The Balmer and Paschen jumps are marked with dashed grey lines.}
\label{fig:check}
\end{figure}
For NGC 5548, \cite{Korista19} predicted that the ratio $L_{\rm DC, 5200}/L_{\rm incident, 1215}$ is $\sim$ 0.2 in a typical BLR environment (see their Figures 1 and 9, and our Table \ref{tab:line}). Furthermore, the DC luminosity at 5200~\AA\ should be comparable to the line luminosity of Ly$\alpha$ (e.g., $L_{\rm DC, 5200}/L_{\mathrm{Ly}\alpha} \sim $ 1), and the line luminosity of \hb\ is expected to be one order of magnitude lower than that of Ly$\alpha$ or $L_{\rm DC, 5200}$ \citep[see Figure 2 in][]{Korista19}. These ratios are approximate and also depend on the BLR environment assumptions (e.g., gas density, gas distribution, and metallicity), but they provide a general consistency check on the DC strengths derived from our model fits to the STIS data. In our comparison, we assume the total continuum luminosity at 1215~\AA\ is similar to the incident one (i.e., $L_{\rm 1215} \approx L_{\rm incident,1215}$), because \citet{Korista19} predict that the diffuse/total continuum flux ratio is $<0.1$ in the continuum region around the Ly$\alpha$ line.
In Table \ref{tab:line}, we list the measured luminosity ratios of our sample, as well as those of NGC 5548 as a reference. Aside from the anomalous case of NGC 3227 which is impacted by heavy extinction, the ratios are $L_{\rm DC, 5200}/L_{\rm 1215}$ $\sim$ 0.1 $-$ 0.6, $L_{\rm DC, 5200}/L_{\mathrm{Ly}\alpha}$ $\sim$ 1 $-$ 5, and $L_{\rm H\beta}/L_{\rm DC, 5200}$ $\sim$ 0.1 $-$ 0.3, respectively. These are broadly consistent with the photoionization predictions considering the potential AGN host reddening, different photoionization properties, and measurement uncertainties of Ly$\alpha$. This general consistency between the predicted and observed luminosity ratios between the DC strength and BLR hydrogen lines provides additional support for our conclusions regarding the range of inferred DC strengths in the Paschen jump region.
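These ratios follow directly from the logged luminosities in Table \ref{tab:line}; a minimal sketch, using the Mrk 509 entries as the example input:
\begin{verbatim}
def ratio(log_num, log_den):
    return 10.0 ** (log_num - log_den)

log_L1215, log_Lya, log_Hbeta = 44.4, 43.5, 42.5   # Mrk 509
for log_DC in (42.9, 43.5):          # range of the DC model fits
    print(round(ratio(log_DC, log_L1215), 2),      # 0.03 -- 0.13
          round(ratio(log_DC, log_Lya), 2),        # 0.25 -- 1.00
          round(ratio(log_Hbeta, log_DC), 2))      # 0.40 -- 0.10
\end{verbatim}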
\subsubsection{Comparing with total UV/optical flux}
Another important consistency check is to use the results of our model fits to extrapolate the inferred DC flux to shorter wavelengths, and compare with the observed AGN flux. If the DC flux greatly exceeds the total observed flux at shorter wavelengths (e.g., below the Balmer jump), this would indicate that our method of fitting models to the Paschen jump region does not provide an accurate indication of the true DC flux level. If the extrapolated DC component accounts for a substantial fraction of the observed continuum flux below the Balmer jump, without exceeding the observed flux, then the normalization of the DC component inferred from the Paschen-jump region may be reasonable.
Figure \ref{fig:check} presents a broader spectral range for our sample of AGN from STIS data in the G430L and G750L gratings, together with the extrapolated DC flux from three versions of our model fits. (The DC fluxes are identical in the PL$+$SingleG models with or without dust included, so we present only the dust-free version of the model fit.) For the three models presented, the DC fluxes are usually very similar, since adding additional components only changes the relative contribution of the disk (PL or SSD), except in the case of Mrk 509, where the normalization of the DC component in the SSD$+$SingleG model differs substantially from the other two model fits. The DC component reaches its maximum in $f_\lambda$ around the Balmer edge, then decreases towards shorter wavelengths as shown in Figure \ref{fig:model}. We found that the DC fluxes are all comparable to or significantly lower than the total flux around the Balmer edge, indicating that our model results pass the qualitative consistency check described above. For the case of NGC 3227, the extrapolated DC flux just below the Balmer jump is essentially equal to the total observed continuum flux. However, NGC 3227 has substantial internal extinction within the AGN host that has not been corrected in the STIS data, and this extinction suppresses the observed flux at blue wavelengths.
In principle, carrying out multicomponent fits to the full STIS wavelength range would provide more stringent constraints on the DC contribution in these AGN. However, complete broad-band spectral fitting from the UV through near-IR remains a major challenge, requiring accurate modeling of the overall disk emission spectrum including possible departures from SSD model predictions, the pile-up of high-order Balmer lines near the Balmer jump, the \ion{Fe}{2} emission blends, and internal extinction within the AGN. Such modeling is beyond the scope of this work, but the STIS data for these six objects represents the best available testing ground for broad-band AGN spectral fitting spanning Ly$\alpha$ through the Paschen jump region.
\begin{deluxetable*}{ccccc}[htb]
\tablecaption{DC fraction (\%) at 8000 \AA\ in different models \label{tab:models}}
\tablecolumns{5}
\tablewidth{0pt}
\tablehead{
\colhead{Name} &
\colhead{PL + SingleG (slope)} &
\colhead{PL + SingleG + Dust (slope)} &
\colhead{SSD + SingleG} &
\colhead{SSD + SingleG + Dust}
}
\startdata
Mrk 110 &22.2 ($-$1.95)&22.2 ($-$1.95) &21.0 &28.6 \\
Mrk 493 &17.7 ($-$0.85)&17.7 ($-$0.85) &13.0 &53.6\\
Mrk 509 &11.1 ($-$1.62)&11.1 ($-$1.62) &36.0 &35.6\\
NGC 3227&30.5 ($-$0.61)&30.5 ($-$0.61) &49.4 &55.3 \\
NGC 4151&25.3 ($-$1.47)&25.3 ($-$1.47) &32.0 &36.8\\
NGC 4593&45.4 ($-$0.58)&45.4 ($-$0.58) &60.5 &69.8 \\
\enddata
\tablecomments{The spectra are modeled with a power-law (PL; the slopes $\alpha$ in $f_{\lambda} \propto \lambda^{\alpha}$ are listed in parentheses) or a standard accretion disk model (SSD) for the disk component. Starlight is modeled by a single 5 Gyr-old, solar-metallicity stellar population model (SingleG), and the last two models include thermal dust emission.}
\end{deluxetable*}
\begin{deluxetable*}{cccccccc}[htb]
\tablecaption{Continuum and line luminosities \label{tab:line}}
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{Name} &
\colhead{Log $L_{\rm 1215}$} &
\colhead{Log $L_{\rm Ly\alpha}$} &
\colhead{Log $L_{\rm \hb}$} &
\colhead{Log $L_{\rm DC, 5200}$} &
\colhead{$L_{\rm DC, 5200}/L_{\rm 1215}$} &
\colhead{$L_{\rm DC, 5200}/L_{\mathrm{Ly}\alpha}$} &
\colhead{$L_{\rm H\beta}/L_{\rm DC, 5200}$}\\
\colhead{} &
\colhead{(erg s$^{-1}$)} &
\colhead{(erg s$^{-1}$)} &
\colhead{(erg s$^{-1}$)} &
\colhead{(erg s$^{-1}$)} &
\colhead{} &
\colhead{} &
\colhead{}
}
\startdata
NGC 5548 & 43.6 & 42.6 & 41.4 & 42.9 & 0.20 & 2.00 & 0.32 \\
\hline
Mrk 110 & 44.1 & -- & 42.4 & 43.8 $-$ 43.9 & 0.50 $-$ 0.63 & -- & 0.03 $-$ 0.04 \\
Mrk 493 & 43.1 & 42.3 & 40.9 & 42.1 $-$ 42.6 & 0.10 $-$ 0.32 & 0.79 $-$ 1.99 & 0.02 $-$ 0.06 \\
Mrk 509 & 44.4 & 43.5 & 42.5 & 42.9 $-$ 43.5 & 0.03 $-$ 0.13 & 0.25 $-$ 1.00 & 0.10 $-$ 0.40 \\
NGC 4151 & 43.0 & 41.8 & 41.2 & 41.9 $-$ 42.1 & 0.08 $-$ 0.13 & 1.26 $-$ 1.99 & 0.13 $-$ 0.20 \\
NGC 4593 & 42.3 & 41.4 & 40.9 & 41.9 $-$ 42.1 & 0.40 $-$ 0.63 & 3.16 $-$ 5.01 & 0.06 $-$ 0.10 \\
\enddata
\tablecomments{The emission-line luminosities are measured from STIS spectra, and the DC luminosity ranges at 5200~\AA\ are obtained from extrapolation of different DC models in Figure \ref{fig:fit}. The continuum luminosity of Mrk 110 at 1215~\AA\ is extrapolated from its PL continuum fitting of the near-UV spectrum. NGC 3227 is excluded from this comparison due to its high UV extinction. }
\end{deluxetable*}
\section{Discussion}\label{sec:dis}
The Paschen jump is one of a few prominent features of the diffuse nebular continuum spectrum expected in AGN, yet it is not apparent at all in high-quality, host-free HST spectra. None of the six AGN presented in this work exhibit any evidence of a spectral break or discontinuity at the Paschen edge at 8206~\AA, and to the best of our knowledge, no previous studies have reported the discovery of Paschen jumps in other broad-line AGN. The results of our model fits show that a plausible explanation for the smoothness of the spectra across the jump is that the DC discontinuity is smoothed out by the pile-up of high-order Paschen lines, as previously anticipated by \citet{Malkan82}, and also by the potential presence of gas at very high density \citep[see below;][]{Vincentelli21}. In comparison with star-forming galaxies exhibiting a clear Paschen jump \citep[e.g.,][]{Guseva06,Reines10,Gunawardhana20}, the Doppler velocity broadening in broad-line AGN is also a key factor in smoothing out the jump. Our fits assume the same velocity broadening for the DC and the Paschen lines, as both are expected to arise from the BLR, and the satisfactory fitting results across the Paschen jump are consistent with the DC and the Paschen lines originating from similar radii in the BLR. However, we note that the assumption of a uniform velocity broadening for all of the Paschen-series lines is an oversimplification. Based on a model for NGC 5548, the line formation radius is expected to be smaller for higher-order lines, decreasing by a factor of $\sim2$ from the low-order to high-order lines; thus the velocity width of high-order lines should be correspondingly broader (by a factor of $\sim2^{1/2}$ under the assumption of virial motion of BLR clouds). This effect will further smooth out the Paschen edge feature contributed by the high-order lines.
Compared with the Paschen jump, the Balmer jump excess is much more prominent in total flux spectra of AGN, but is also blended with \ion{Fe}{2} emission, high-order Balmer lines, and other emission lines. A notable feature of the small blue bump excess is that it typically extends to somewhat longer wavelengths than the 3647~\AA\ wavelength of the Balmer jump (e.g., $\sim$ 3800~\AA), as can be seen in the composite quasar spectrum of \citet{VandenBerk01}, for example. This ``bump'' feature extending redward of 3647~\AA\ is unlikely to be due to the velocity broadening of the DC Balmer jump excess, as it would require a much larger broadening width than the widths of the Balmer emission lines. As discussed by \cite{Vincentelli21}, the extension of the small blue bump redward of the Balmer jump is likely due to the presence of high-density gas (e.g., $\rm 10^{12-13}\ cm^{-3}$) within the BLR making the free-bound jump slightly redshifted, as a result of the finite gas density effect\footnote{As previously described, the wavelength of the Paschen jump is dependent on gas density due to the reduction in the number of bound levels of hydrogen atoms in high-density gas. If there exists a broad range in gas densities in the BLR, the Paschen jump effectively becomes a series of jumps each with a different wavelength that when summed together appear as a smooth decline toward longer wavelengths, rather than an abrupt jump.} as mentioned in \S \ref{sec:method}, and the \citet{Korista19} models do not incorporate this effect. Nevertheless, our model fits show that the Paschen jump region does not appear to have a corresponding excess, and the Paschen region can be modeled adequately with the combination of a DC model and high-order Paschen lines all broadened by the same velocity width.
Recently, intensive disk RM experiments have revealed strong evidence of a $U$-band excess around the Balmer jump in the lag spectrum, as well as a mild 8000~\AA\ excess in the vicinity of the Paschen jump in some objects \citep[e.g., NGC 5548, NGC 4593, Fairall 9;][]{Fausnaugh16,Cackett18,HernandezSantisteban20}. These two excesses are important clues to the unique asymmetric Balmer and Paschen jumps of the DC component. Thus, searching for a change in reverberation lag across the Paschen jump, combined with modeling the spectral shape across the jump in both total flux and variable (rms) flux, could be a useful diagnostic to constrain the underlying accretion disk lag spectrum and test disk reprocessing models \citep{Korista19}. However, a complete physical interpretation of the lag spectra would require models that account for the lags of all of the variable components in this region, including the disk, the DC, the high-order Paschen lines, and the dust continuum. Our results indicate a lower limit to the DC fraction, based on the specific set of spectral models considered here, ranging from 10\% to 50\% with respect to the total continuum flux, and this DC contribution should appreciably affect the continuum lag measurements (particularly if it is at the upper end of this estimated range), to a degree that depends on physical properties of the BLR gas as well as on the variability amplitude and characteristic variability timescale of the driving continuum \citep{Korista19}. According to the photoionization calculations of \citet{Korista19}, in the $i$-band spectral region, the relative importance of the DC component to the measured inter-band continuum delays is nearly proportional to its fractional contribution to the total flux (see their Figure 11). This implies that the ratio of total continuum lag to disk continuum lag will be $\tau_{\rm disk+DC}/\tau_{\rm disk}$ $\sim$ 1.1 to 1.5. However, quantitative estimation of $\tau_{\rm disk}$ will require both specific photoionization modeling that is beyond the scope of this paper and variability models incorporating the lag response of all time-varying spectral components, and this work will be deferred to a future analysis incorporating broader spectral coverage spanning the Balmer and Paschen jumps. According to the recent modeling of Fairall 9 \citep{HernandezSantisteban20} and NGC 5548 \citep{Lawther18}, the DC component indeed increases the inter-band lags to some degree, but is probably still insufficient to explain the factor of $\sim3$ discrepancy between disk sizes inferred from continuum reverberation mapping and simple model predictions for standard disks \citep{Shakura73}. We note that even in the context of other disk reprocessing scenarios \citep[e.g.,][]{Kammoun21}, the contribution of the time-variable DC emission to the overall continuum lag spectrum must be taken into account for detailed comparison of models with reverberation data. Alternatively, other scenarios have been proposed in which the observed lags are dominated by the BLR rather than by the disk: \citet{Netzer21} recently examined the continuum lag spectra of several AGN and concluded that reprocessing by radiation pressure confined BLR clouds can entirely explain the observed lag-spectra both in shape and magnitude, without requiring any lag contribution from the disk.
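As a back-of-envelope illustration of the near-proportionality quoted above (and not a substitute for photoionization modeling), if the DC boost to the measured lag scales with its flux fraction $f$, the lag ratio is roughly $1+f$:
\begin{verbatim}
# Back-of-envelope sketch of lag dilution: assumes the DC contribution to
# the inter-band lag is proportional to its flux fraction f, following the
# near-proportionality of Korista & Goad (2019), so the ratio is ~ 1 + f.
for f in (0.10, 0.25, 0.50):
    print(f"DC fraction {f:.2f} -> tau_(disk+DC)/tau_disk ~ {1.0 + f:.2f}")
\end{verbatim}
This reproduces the quoted range of $\sim$1.1 to 1.5 for DC fractions of 10\% to 50\%.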
\section{Conclusions}\label{sec:con}
We performed spectral decompositions for six nearby type 1 AGN at $z < 0.1$ to evaluate the fractional contribution of the nebular diffuse continuum \citep[DC,][]{Korista19} to the total flux. HST STIS spectra taken with narrow slit apertures exclude the vast majority of the circumnuclear starlight from the data, allowing a much cleaner test for the presence of a Paschen jump than would be possible with ground-based data. Our main findings are as follows:
\begin{enumerate}
\item In each case, the Paschen jump is imperceptible in the spectra of these unobscured AGN. Our model fits imply that the DC Paschen jump, which is expected to be present in the data, can be balanced by the excess flux of high-order Paschen lines in the vicinity of 8206~\AA\ (Figure \ref{fig:fit}), as expected by \cite{Malkan82}. These two spectral components from the BLR can combine to produce a virtually featureless and smooth total continuum across the Paschen edge. This behavior stands in contrast to the Balmer jump region, where the much stronger DC Balmer jump combined with \ion{Fe}{2} emission gives rise to the easily recognized small blue bump excess in total flux.
\item The DC emission originating from the BLR makes a significant contribution to the total flux at 8000~\AA\ in our sample, at least 10\% to 50\% in different cases (Figures \ref{fig:fit} and \ref{fig:frac}), although our modeling is subject to substantial systematic uncertainty primarily due to the spectral shape of the AGN disk continuum emission. The DC emission may still be responsible for a discontinuity in continuum reverberation lag across the Paschen jump that could potentially serve as a useful diagnostic if it can be measured accurately in future reverberation mapping campaigns.
\end{enumerate}
In the future, we will further explore broad-band spectral fitting incorporating both the Balmer and Paschen jumps to further constrain the DC contribution. Furthermore, the host galaxy and torus contributions can be better constrained using data extending to longer wavelengths in the near-infrared to reduce the degeneracies in the spectral decompositions, although this could introduce other modeling challenges due to differing spatial aperture sizes for UV/optical and infrared spectroscopic data. Together, continuum reverberation mapping and detailed spectral decompositions across a broad wavelength range can provide important new insights into the physics of DC emission as well as AGN accretion disk reprocessing.
\acknowledgements
We thank Matthew Malkan and Ari Laor for the useful discussions about the physics of the Paschen jump. We acknowledge the contributions of additional co-investigators to the proposal for HST program 15124 including Rick Edelson, Michael Fausnaugh, Jelle Kaastra, and Bradley Peterson. Research at UCI has been supported in part by NSF grant AST-1907290. Support at UCI for HST programs 14744 and 15124 was provided by NASA through grants from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. EC acknowledges funding support from HST program number 15413 (for Mrk 110) and NSF grant AST-1909199. LCH is supported by the National Science Foundation of China (11721303, 11991052) and the National Key R\&D Program of China (2016YFA0400702). MCB gratefully acknowledges support from the NSF through grant AST-2009230. VU acknowledges funding support from the NASA Astrophysics Data Analysis Program Grant \# 80NSSC20K0450. MV gratefully acknowledges support from the Independent Research Fund Denmark via grant number DFF 8021-00130. GJF acknowledges support by NSF (1816537, 1910687), NASA (ATP 17-ATP17-0141, 19-ATP19-0188), and STScI (HST-AR-15018 and HST-GO-16196.003-A).
JMG gratefully acknowledges support from NASA under the awards 80NSSC17K0126 and 80NSSC19K1638.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard \& Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\section{Introduction}
To date, a wide variety of neural networks have been developed and are being applied in many fields (classification, generation, detection, etc.).
However, there remain tasks that humans can perform but neural network models cannot.
Among the distinctive abilities of humans are imagination and the creation of something new.
In this study, we design an Imagine Networks model based on artificial association networks.
It is designed by sequencing the structures that humans use when they imagine.
Several abilities are needed in order to imagine.
(1) The first is recognition. If information is not recognized, it cannot be utilized: we need to recognize which object is which, and we learn this through classification or clustering.
(2) The second is deduction.
We create propositions, combine them into compound propositions, and solve problems through the relationships between the main object and other objects.
We also need to learn what results follow when objects are combined according to a principle.
(3) The third is memory.
Humans have a memory input device, the hippocampus, which stores information in the brain.
The memory device can retrieve desired information and use it in the deduction process.
It is also possible to recall information from experience without any input, much as when we close our eyes and cover our ears.
(4) The fourth is a selection process that seeks the optimal reward.
This selector is built on a reinforcement learning structure such as Q-learning~\cite{watkins1992q}, which is known to resemble structures in the human brain; when the selector chooses another object via the reinforcement model, it should be something better.
(5) The last part is the discriminator.
This model discriminates whether a generated sample is the target sample we wish to obtain.
The discriminator learns the conditions required to be the target sample and checks at every step whether they are satisfied.
It is built like the discriminator of a GAN~\cite{goodfellow2020generative} and determines whether the generated sample is what we want.
Through these components, we aim to understand the principled relationships between existing objects and new ones.
A key characteristic of this network is that it is data-driven and uses an association tree data structure.
Therefore, a generated data sample can be fed back into the association model and used to generate hierarchical and relational information.
We conducted experiments in a reinforcement learning environment to train this model.
And, much as humans simulate scenarios in their heads, we generated samples that perform simulation within the network.
\section{Related works}
\label{sec:related_works}
\paragraph{Artificial Association Networks \cite{kim2021association}}
This is the first study of artificial association networks (AANs).
It makes it possible to learn various datasets simultaneously, and it introduces various sensory organs together with the association area where information is integrated.
Instead of a fixed sequence of layers, the network learns according to a tree structure using a data structure called an association tree.
The data structure is defined as $\mathbf{AN} = \{x, t, \mathbf{A}_c, \mathbf{C}\}$, and propagation is carried out using recursive convolutions called DFC and DFD.
\paragraph{Associational Deductive Networks \cite{kim2021deductive}}
This study models the role of the frontal lobe and investigates how to utilize the information that is transmitted from the association area; representatively, it is designed to be responsible for the ability to deduce and think.
The model feeds the result of the previous proposition as input to the next proposition in order to combine various principles.
\paragraph{Associational Memory Networks \cite{kim2021memory}}
This model is designed to store root vectors.
In that study, short-term memory is used to solve the class-imbalance problem, and long-term memory is used to create distributions of objects.
\paragraph{Q-learning \cite{watkins1992q}}
The agent learns what information to retrieve from memory, or what action to perform in the current state, in order to obtain the optimal reward.
If the state space is continuous, models such as DQN~\cite{mnih2013playing} and SAC~\cite{haarnoja2018soft} can be used; a minimal sketch of tabular Q-learning follows.
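The sketch below shows a minimal tabular Q-learning loop as a concrete reference for the selector used later; the toy grid environment and all hyperparameters are illustrative assumptions rather than the settings of our experiments.
\begin{verbatim}
import numpy as np

# Minimal tabular Q-learning sketch (Watkins & Dayan 1992). The toy 4x4
# grid environment and all hyperparameters are assumptions for illustration.
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.2
rng = np.random.default_rng(0)

def step(s, a):
    # Hypothetical dynamics: move on a clipped index; reward at state 15.
    s2 = min(max(s + (-4, 4, -1, 1)[a], 0), n_states - 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for episode in range(300):
    s, done, t = 0, False, 0
    while not done and t < 200:            # step cap keeps episodes finite
        explore = rng.random() < eps or Q[s].max() == 0.0
        a = int(rng.integers(n_actions)) if explore else int(Q[s].argmax())
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s, t = s2, t + 1
\end{verbatim}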
\paragraph{GAN \cite{goodfellow2020generative}}
The main concept is that a generator produces a sample and a discriminator determines whether the sample is the desired one.
\section{Imagine Networks}
\label{sec:imagine}
\begin{figure}[h]
\centering
{\includegraphics[width=0.80\textwidth]{figure/imaginenet.png}}
\caption{ Ideal Imagine Modeling }
\label{fig:long-term}
\vspace{-10px}
\end{figure}
Imagine Networks are created by combining various neural network models. This structure can be viewed as carrying out abilities attributed to the frontal lobe. In addition, the association tree generated in the frontal lobe learns relational and hierarchical information, and the generated association tree is used again as input to the association model.
\subsection{Eureka! Learning}
\label{sec:imagine}
\paragraph{Recognition}
This process extracts the feature vector of an object. Classification and clustering are representative recognition-learning tasks in which each object produces a different feature vector.
\paragraph{Memorization}
This process uses information learned in the past by recalling memories without any input. Short-term memory is stored for each class of objects, and each class has a distribution used for generation.
\paragraph{Deduction}
Deduction performs a prediction task on the proposition result that appears when objects are combined; this is like learning a simple proposition. This step can be replaced by a style-transfer model or a similar component.
\paragraph{Reinforcement learning}
Q-learning uses the currently recognized information as the state and selects which other information to combine with it through deduction.
Through deduction the information is combined, and the optimal reward is obtained.
\paragraph{Discriminator}
The discriminator determines whether the currently recognized information is what we want and instructs the Q-learning model to stop once the desired information appears.
\section{Experimental results}
\label{sec:result}
\begin{figure}[h]
\centering
{\includegraphics[width=0.90\textwidth]{figure/imagine-learning.png}}
\caption{ Eureka Training }
\label{fig:recognition-task}
\vspace{-10px}
\end{figure}
\subsection{Imagination in A Reinforcement Learning Environment}
Let us apply the above description to a reinforcement learning environment.
First, we created a queue for each state in short-term memory, and samples from the environment are stored there as tuples of (state number (label), screen, action, reward, next state number (label), next screen, next action, done).
We then drew batch-size samples from short-term memory and used them to train the recognition, memory, deduction, discriminator, and agent models; a sketch of this memory follows.
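The per-state short-term memory can be sketched minimally as below; the queue length and the uniform per-state sampling scheme are assumptions for illustration, with tuple fields following the list above.
\begin{verbatim}
from collections import defaultdict, deque
import random

# Per-state short-term memory as queues. The maxlen and the uniform
# per-state sampling are illustrative assumptions.
memory = defaultdict(lambda: deque(maxlen=1000))

def store(sample):
    # sample = (state, screen, action, reward,
    #           next_state, next_screen, next_action, done)
    memory[sample[0]].append(sample)

def sample_batch(batch_size):
    # Draw uniformly across non-empty state queues to balance classes.
    states = [s for s in memory if memory[s]]
    return [random.choice(memory[random.choice(states)])
            for _ in range(batch_size)]
\end{verbatim}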
\subsubsection{How to train Imagine Networks}
\paragraph{Recognition}
In the recognition process, the current screen is the input and the number of the current state is recognized.
Recognition in this experiment means being aware of the current state. Since the state space in this experiment is discrete, we numbered each state and labeled samples by state, and performed supervised learning to predict the state number from the screen image of the state.
\paragraph{Memorization}
In the memorization process, the observed root vector is stored in the memory networks, and its distribution is learned together with a decoder.
The memory network learns to generate the screen of each state.
Therefore, we stored image information generated from the environment in short-term memory and learned the images in long-term memory. Each state's information is stored as a distribution, and we can generate screen information by entering the number of the desired state, as in the sketch below.
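A minimal sketch of such a state-conditioned decoder is given below; the network sizes and the $84\times84$ screen shape are assumptions for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of long-term memory as a state-conditioned decoder: a one-hot
# state number is decoded into a screen. All sizes are assumptions.
n_states, screen_hw = 16, 84
decoder = nn.Sequential(
    nn.Linear(n_states, 128), nn.ReLU(),
    nn.Linear(128, screen_hw * screen_hw))
state = torch.eye(n_states)[torch.tensor([3])]      # query state number 3
screen = decoder(state).view(1, screen_hw, screen_hw)
\end{verbatim}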
\paragraph{Reinforcement learning}
In the agent process, the root vector encodes the current state information, and the state space is discrete.
Therefore, we can build a Q-table over the states, encode the optimal action to be performed in the current state, and use it as input to the deduction model.
In the future, this process could be changed to perform an action of the form ``which sample should be retrieved from memory''.
Reinforcement learning in this experiment is stochastic and serves to generate data from the environment.
The selector chooses actions so as to find the shortest path.
\paragraph{Deduction}
In the deduction process, we now have the root vector of the current state, the action information to be performed, and the next-screen information that appears when the action is executed.
The next-screen information becomes the next root vector, so the deduction model learns the relation state + action = next state.
In this experiment, deduction receives the current state information and action information as input and predicts the root vector containing the next-state information.
This is similar to passing an action to the environment's step function and obtaining the next state; a sketch follows.
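Below is a sketch of the deduction step as a learned forward model; the root-vector dimension, the architecture, and the random placeholder targets are assumptions, not the settings of our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of deduction as a forward model: root vector + one-hot action
# -> next root vector. Dimensions and targets are placeholders.
class DeductionModel(nn.Module):
    def __init__(self, state_dim=64, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, 128), nn.ReLU(),
            nn.Linear(128, state_dim))
    def forward(self, root, action):
        return self.net(torch.cat([root, action], dim=-1))

model = DeductionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
root = torch.randn(32, 64)
action = torch.eye(4)[torch.randint(0, 4, (32,))]
next_root = torch.randn(32, 64)                     # placeholder targets
loss = nn.functional.mse_loss(model(root, action), next_root)
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}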
\paragraph{Discriminator}
In the discriminator process, we train a network that classifies whether the current state is an end state.
The discriminator in this experiment indicates whether the current state is the final target endpoint each time the agent acts.
Therefore, the discriminator was trained using the done flag (whether the game was over or not), as in the sketch below.
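A corresponding sketch of the end-state discriminator, trained against stored done flags, is given below; the sizes and the random placeholder labels are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

# Sketch of the end-state discriminator: binary classification of root
# vectors against stored `done` flags. Sizes/labels are placeholders.
disc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
roots = torch.randn(32, 64)
done = torch.randint(0, 2, (32, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(disc(roots), done)
opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}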
\subsubsection{Imagine Result}
\begin{figure}[h]
\centering
{\includegraphics[width=0.80\textwidth]{figure/imagine-result.png}}
\caption{ A Generated Sample }
\label{fig:imagine-result}
\vspace{-10px}
\end{figure}
The trained network no longer draws data from the environment, yet the following simulations become possible.
The figure shows a generated sample with four characteristics:
image generation from state numbers and root vectors (memory), continuous scenes (deduction), the shortest path (reinforcement learning), and detection of the end state (discriminator).
\paragraph{Discussion: How do we think creatively?}
\begin{equation}
G \oplus G \to G^{c} \cap G'
\label{eqn:grouptheory1}
\end{equation}
Creative samples appear when elements are moved into another set, without the elements being closed under any operation.
If an element moves from the current set $G$ to $G^{c}$, it departs from existing knowledge, reaches a different set space ($G'$), and creates something new. And since such a sample is produced by deduction, it is theoretically valid.
We believe something new may be created by designing the conditional expression, training the discriminator, and generating a sample.
\section{Conclusion}
We are designing an agent model that behaves similarly to the human brain by combining the various networks developed to date. The purpose of this study is as follows: ``Let's create a brain that thinks like humans in an environment similar to human life.'' This paper is part of a series. The next paper is ~.
\bibliographystyle{unsrt}
\section{Introduction}
The long distance dynamics of certain physical systems exhibit universal behaviors. Typically, these infrared structures are dictated by the underlying symmetries of the theory. For example, scattering amplitudes in gauge theory and gravity conform to soft theorems which are direct consequences of charge and energy-momentum conservation, respectively \cite{Low:1958sn,Weinberg:1965nx, Burnett:1967km, Bern:2014vva}. Another notable instance is the Adler zero \cite{Adler:1964um}, which mandates the vanishing of soft pion amplitudes due to non-linearly realized, spontaneously broken chiral symmetries.
These known examples suggest that universal soft behavior should only be expected of theories exhibiting some incarnation of enhanced symmetry. In this paper we show that this intuition is incorrect. In particular, we derive a soft theorem that applies to a {\it completely general theory} of interacting scalar fields. Our results are applicable in the presence of masses, potential interactions, and higher-derivative couplings, sans any underlying linear or non-linear symmetries.
The key strategy in our analysis is to interpret scattering amplitudes not merely as functions of external kinematics, but also as functions of the vacuum expectation values (VEVs) of scalar fields. As is well-known, every scalar field theory is endowed with a notion of geometry in which the scalars parameterize coordinates of an underlying field-space manifold \cite{Meetz:1969as,Honerkamp:1971sh,Honerkamp:1971xtx, Ecker:1971xko,Alvarez-Gaume:1981exa,Alvarez-Gaume:1981exv,Boulware:1981ns,Howe:1986vm}. Furthermore, since on-shell scattering amplitudes are physical quantities, they are necessarily functions of the corresponding geometric invariants \cite{Dixon:1989fj,Alonso:2015fsp,Alonso:2016oah,Nagai:2019tgi,Cohen:2021ucp}.
As we will prove, the soft limit $q\rightarrow 0$ for a massless scalar of a tree-level $(n+1)$-particle amplitude is related to the tree-level $n$-particle amplitude by a {\it geometric soft theorem} of the schematic form,
\eq{
\lim_{q\rightarrow 0} \Amp{n+1} \sim \left(\nabla + \frac{\nabla m^2}{p^2-m^2} \right) \Amp{n} \, ,
}{eq:soft_thm_schematic}
where $\nabla$ is a covariant derivative in {\it field space} and $p$ and $m$ are the momentum and mass of an exchanged state. The geometric soft theorem has an elegant and intuitive physical interpretation. The soft limit of the massless particle is equivalent to an infinitesimal shift of the VEV of the corresponding scalar. From this perspective, the first term in \Eq{eq:soft_thm_schematic} corresponds to the variation of interaction vertices and internal propagators with respect to the VEV. The second term arises from the variation of amputated external propagators---in particular the masses appearing therein---with respect to the VEV.
Even though \Eq{eq:soft_thm_schematic} holds in general, it is illuminating to consider its application to systems of particular interest. For instance, we study examples with an arbitrary potential, as well as two- or higher-derivative interactions. Our results also hold for any theory with linear or non-linear symmetries. The latter case describes the dynamics of spontaneous symmetry breaking, where the potential is absent and the field-space manifold is a coset. When the coset manifold exhibits isometries, the right-hand side of the soft theorem is in general non-zero but is directly related to the associated Killing vectors. If the coset is also symmetric, then the right-hand side is zero, thus proving the Adler zero condition. We also show how the dilaton soft theorem is a special case of \Eq{eq:soft_thm_schematic}.
As another application, we derive new theorems describing amplitudes when multiple particles are taken soft, either consecutively or simultaneously, in a general theory with vanishing potential. We describe how these multiple-soft theorems characterize the local curvature of field space. In addition, we derive on-shell recursion relations that crucially leverage the geometric soft theorem. Remarkably, many theories that are lacking in any enhanced symmetry properties are still on-shell constructible via these methods.
The remainder of this paper is structured as follows. In \Sec{sec:geometry}, we briefly describe the geometry of scalar field space. The bulk of this discussion is review, but we do present some new results, {\it e.g.}~how the wavefunction normalization factors in the Lehmann, Symanzik and Zimmermann (LSZ) reduction formula \cite{Lehmann:1954rq} have the geometric interpretation as tetrads that toggle between fields and physical scattering states.
Afterwards, in \Sec{sec:theorem} we present an explicit definition of the geometric soft theorem and prove it. Next, in \Sec{sec:examples} we verify the validity of this soft theorem for salient examples, including theories exhibiting general potential, two-derivative, and higher-derivative interactions. For the case of theories with symmetry, we dedicate a longer discussion in \Sec{sec:coset}, {\it e.g.}~describing how the symmetries of the Lagrangian are described by Killing vectors whose Lie derivatives annihilate all scattering amplitudes. In addition, we explore the soft limits of Nambu-Goldstone bosons (NGBs) and explain how the Adler zero and soft dilaton theorem are simple corollaries of our geometric soft theorem.
Finally, in \Sec{sec:multisoft} and \Sec{sec:recursion}, we derive multiple-soft theorems and on-shell recursion relations.
\section{Geometry of Scalars}
\label{sec:geometry}
To begin, let us present a brief primer on the underlying geometric structure of scalar field theories. The bulk of our discussion will recapitulate known facts, though we will describe some new results, in particular relating to the geometry of fields versus physical scattering states.
The upshot of this exposition is that on-shell scattering amplitudes can be expressed purely in terms of the natural geometric invariants appropriate to scalar field space.
\subsection{Lagrangian and Field Basis}
Any theory of scalars $\Phi^{I}$ is described by a Lagrangian of the form
\eq{
L = \tfrac{1}{2} g_{IJ}(\Phi) \partial_\mu \Phi^I \partial^\mu \Phi^J - V(\Phi)
+ \tfrac{1}{4} \lambda_{IJKL}(\Phi) \partial_\mu \Phi^I \partial^\mu \Phi^J \partial_\nu \Phi^K \partial^\nu \Phi^L + \cdots,
}{L_general}
where $g_{IJ}(\Phi)$ is a symmetric, non-degenerate matrix function of the scalars encoding all possible two-derivative couplings. Furthermore, $V(\Phi)$ is a general potential while $\lambda_{IJKL}(\Phi)$ and
the terms in ellipses denote general higher-derivative operators. For convenience we will refer to the indices $I,J, K$, etc.~as flavors, even in the absence of any underlying symmetry.
The scalar fields are maps from points in spacetime to points in a field space (or target space) describing a manifold $\cal M$. Under field redefinitions (or coordinate transformations in field space) the scalar fields transform as
\eq{
\Phi^I \quad \to \quad \Phi'^I \, ,
}{}
where $\Phi'$ is a function of $\Phi$, while their derivatives transform as tensors via
\eq{
\partial_\mu \Phi^I \quad \to \quad \frac{\partial \Phi'^I}{\partial \Phi^J} \partial_\mu \Phi^J\,.
}{}
As is well-known, $g_{IJ}(\Phi)$ can be identified as a metric on $\cal M$ that transforms as a tensor under field redefinitions,
\eq{
g_{IJ}(\Phi) \quad \to\quad \frac{\partial \Phi^K}{\partial \Phi'^I} \frac{\partial \Phi^L}{\partial \Phi'^J} g_{KL}(\Phi')\,.
}{}
The metric in turn defines a corresponding covariant derivative (or Levi-Civita connection), $\nabla_I$, together with Christoffel symbols, $\Gamma_{IJK}(\Phi)$, and Riemann curvature, $R_{IJKL}(\Phi)$.
Under what conditions is the coupling for a given Lagrangian interaction actually a tensor in field space? Obviously, the potential is a scalar
\eq{
V(\Phi) &\quad \to \quad V(\Phi') \, ,
}{}
while the higher-derivative coupling from \Eq{L_general} is a tensor,
\eq{
\lambda_{IJKL}(\Phi) &\quad \to \quad \frac{\partial \Phi^P}{\partial \Phi'^I} \frac{\partial \Phi^Q}{\partial \Phi'^J} \frac{\partial \Phi^R}{\partial \Phi'^K} \frac{\partial \Phi^S}{\partial \Phi'^L} \lambda_{PQRS}(\Phi') \, .
}{}
In general, any Lagrangian interaction that depends solely on $\Phi^I$ and its first derivative $\partial_\mu \Phi^I$ will appear with a coupling that is a tensor. This is true because $\partial_\mu \Phi^I$ is already itself a tensor. There can, however, exist interactions with {\it more} than one derivative per field, {\it e.g.}~involving the non-tensorial object $\partial_\mu \partial_\nu \Phi^I$. The couplings associated with these interactions are not tensors. As is well known, on-shell scattering amplitudes are invariant under changes of field basis, so these non-tensorial couplings will always enter in combinations that behave precisely as tensors.
\subsection{Vacuum}
Of course, in order to compute physical quantities it will be necessary to choose a vacuum at which the fields reside. To this end we define
\eq{
\Phi^I = v^I + \phi^I \, ,
}{}
where $\phi^I$ describe dynamical fluctuations about constant VEVs of the scalar fields, $v^I$. All of the dynamics of $\phi^I$ are dictated by the Lagrangian couplings and their derivatives evaluated at the VEV. So for example, their dispersion relations are controlled by $g_{IJ}(v)$ and $\partial_I \partial_J V(v)$---so crucially we have {\it not} assumed that the scalar fields are canonically normalized or that the mass matrix is diagonalized. We will return to this point later on. Meanwhile, the cubic interactions arise from $\partial_I g_{JK}(v)$ and $\partial_I \partial_J \partial_K V(v)$, and so on and so forth. Since the vacuum is assumed to be stable, the tadpole $\partial_I V(v)$ is zero.
In general, off-shell quantities such as the interaction vertices or correlators are {\it not} covariant under field redefinitions. However, as noted previously, observables such as the on-shell scattering amplitudes are indeed covariant.
For notational convenience, we will sometimes denote symmetrized covariant derivatives of the potential by
\eq{
V_{I_1 \cdots I_n}(v) = \nabla_{(I_1} \cdots \nabla_{I_n)} V(v)\, ,
}{}
where the right-hand side is defined to include a $1/n!$ symmetry factor. We emphasize that the derivatives here are taken with respect to the VEVs rather than the dynamical fields. Hereafter this will be assumed throughout, unless otherwise stated. Since $V_I(v)=0$ for a stable vacuum, we also find that $V_{IJ}(v) = \partial_I\partial_J V(v)$ and $V_{IJK}(v) = \nabla_I V_{JK}(v)$.
\subsection{Fields vs.~States}
The scalar fields $\phi^I$ we have discussed thus far are {\it flavor eigenstates}\footnote{Strictly speaking, this is a slight abuse of nomenclature since these states are not even canonically normalized on account of $g_{IJ}(v)$ being completely arbitrary.}. This basis is particularly useful for exhibiting the underlying geometry of field space. On the other hand, we know that physical particles that undergo scattering are actually {\it mass eigenstates}. Naively, it might seem a somewhat formal exercise to construct the explicit mapping between these bases. However, this is actually crucial for any analysis of soft limits in amplitudes. The reason for this is that the soft limit requires the identification of a massless particle, which in turn requires the existence of a well-defined notion of mass. In other words, it is not actually possible to take the soft limit of a general flavor eigenstate.
To construct the relationship between flavor and mass eigenstates we have to {\it canonically normalize and diagonalize} the linearized equation of motion for the scalar fields,
\eq{
\left( g_{I J}(v) \Box + V_{I J}(v) \right) \phi^{I}(x) =0 \,.
}{}
Amusingly, this is achieved with the aid of another well-known geometric object: the local orthonormal basis. In particular, we introduce a tetrad $e_i{}^I(v)$ which flattens the metric via
\eq{
g_{IJ}(v) e_{i}{}^{I}(v) e_{j}{}^{J}(v) = \delta_{ij}\, .
}{eq:vielbein}
Hereafter, the indices $i,j,k$, etc.~will denote tetrad indices---or equivalently, mass eigenstate indices---that are raised and lowered using $\delta^{ij}$ and $\delta_{ij}$, respectively. The inverse tetrad is related to the tetrad via $e^i{}_I(v) = e_{j}{}^{J}(v)\, \delta^{ij} g_{JI}(v)$.
\Eq{eq:vielbein} shows that the tetrads canonically normalize the kinetic terms of the scalar fields. Meanwhile, in the tetrad basis the mass matrix is
\eq{
V_{I J}(v) e_{i}{}^{I}(v) e_{j}{}^{J}(v) = V_{i j}(v) = m^2_i(v) \delta_{ij} \,,
}{}
where in the last equality we have made the additional assumption that the tetrad also diagonalizes the mass matrix. This is always possible by composing an arbitrary tetrad with a suitable orthogonal rotation.
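The construction described above is straightforward to realize numerically. The following sketch, with arbitrary placeholder matrices standing in for $g_{IJ}(v)$ and $V_{IJ}(v)$, builds the tetrad from a Cholesky factorization of the metric composed with the orthogonal rotation that diagonalizes the mass matrix:
\begin{verbatim}
import numpy as np

# Numerical sketch of the tetrad: flatten g_IJ(v) and simultaneously
# diagonalize V_IJ(v). The matrices g and V2 are arbitrary placeholders.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
g = A @ A.T + 3.0 * np.eye(3)     # positive-definite field-space metric
B = rng.normal(size=(3, 3))
V2 = B @ B.T                      # toy mass matrix V_IJ(v)

L = np.linalg.cholesky(g)         # g = L L^T
E0 = np.linalg.inv(L).T           # flattens g: E0^T g E0 = identity
m2, O = np.linalg.eigh(E0.T @ V2 @ E0)
E = E0 @ O                        # tetrad e_i^I as the columns of E
assert np.allclose(E.T @ g @ E, np.eye(3))       # canonical kinetic term
assert np.allclose(E.T @ V2 @ E, np.diag(m2))    # diagonal masses m_i^2
\end{verbatim}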
The mass eigenstates $|p^i\rangle $ which describe scattering particles are labelled by a momentum $p$ and tetrad index $i$. The overlap of this mass eigenstate with a flavor eigenstate field is proportional to the tetrad,
\eq{
\langle p^i |\phi^J(x) | 0\rangle = e^{i J}(v)\, e^{ip \cdot x}\,,
}{}
where the on-shell condition is $p^2 =m^2_i(v)$.
Last but not least, let us define the $n$-particle scattering amplitude,
\eq{
\Amp{n}^{i_1 \cdots i_n}(p_1, \cdots , {p_n}) \delta^D(p_1 +\cdots + p_n) &= \langle p_1^{i_1} \cdots p_n^{i_n} | 0\rangle \, ,
}{}
which is obtained via LSZ reduction of the $n$-particle correlator,
\eq{
\langle p_1^{i_1} \cdots p_n^{i_n} | 0\rangle
= (-i)^{n+1} \left[ \prod_{a=1}^n \lim_{p_a^2\rightarrow m_{i_a}^2}
(p_a^2- m_{i_a}^2) e^{i_a}{}_{I_a}
\right]
\langle T \phi^{I_1}(p_1) \cdots \phi^{I_n}(p_n) \rangle \, .
}{eq:LSZ}
This perspective offers a new geometric interpretation for the wavefunction normalization factors in the LSZ reduction formula: they are tetrad factors.
Since our analysis leans heavily on the geometry of field space, it will sometimes be more convenient to transform amplitudes from mass eigenstate to flavor eigenstate,
\eq{
\Ampu{n}_{I_1 \cdots I_n}(p_1, \cdots , {p_n}) = \left[ \prod_{a=1}^n e_{i_a I_a} \right] \Amp{n}^{i_1 \cdots i_n}(p_1, \cdots , {p_n}) \, ,
}{eq:A_tetrad}
and vice versa.
It should be obvious from \Eq{eq:A_tetrad} and its inverse that it is mechanically trivial to toggle between field space and tetrad indices, or equivalently, between flavor and mass eigenstate. Indeed, for any expression written in terms of geometric invariants, this operation is nothing more than a lexicographical relabelling of indices.
Let us comment briefly on our choice of basis for explicit computations. Of course, since the Lagrangian is composed of objects with field-space indices, the same will be true of the Feynman rules derived from this Lagrangian. Thus, for practical calculations it is simplest to use those Feynman rules to directly compute the amplitude in flavor eigenstate, $\Ampu{n}_{I_1 \cdots I_n}$, and in the end to transform to the amplitude in mass eigenstate, $ \Amp{n}^{i_1 \cdots i_n}$. The latter is the physical quantity that is subject to our soft theorem.
\section{Geometric Soft Theorem}
\label{sec:theorem}
Armed with a framework describing the underlying geometry of scalar fields, we are now ready to introduce the main result of this paper: a geometric soft theorem valid for any massless state in a general theory of scalars.
\subsection{Statement of Theorem}
Our claim is that the soft limit of the tree-level $(n+1)$-particle amplitude is universally related to the tree-level $n$-particle amplitude via\footnote{We believe that this soft theorem actually applies non-perturbatively in the case of vanishing potential.}
\eq{
\lim_{q\rightarrow 0}\Amp{n+1}^{i_1\cdots i_n i}
=& \phantom{{}+{}} \nabla^i \Amp{n}^{i_1\cdots i_n}
+ \sum_{a=1}^n
\frac{ \nabla^i V^{i_a}{}_{j_a}}{(p_a +q)^2 - m_{j_a}^2} \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu} \right)
\Amp{n}^{i_1 \cdots j_a \cdots i_n} \, .
}{eq:soft_thm}
Here the soft particle is a massless scalar carrying momentum $q$ and labelled by index $i$, and we have dropped all contributions at ${\cal O}(q)$ and higher.
As we will show, this result is applicable even in the presence of massive spectator states and arbitrary higher-derivative operators.
Let us comment briefly on the technical application of this soft theorem. First, we emphasize again that the covariant derivative $\nabla^i$ is computed with respect to the VEVs and {\it not} the dynamical scalars, as should be obvious since the latter do not even appear in the amplitude. Second, in order to ensure that the soft limit itself maintains total momentum conservation, we send $q\rightarrow 0$ only after eliminating the momentum of some auxiliary leg from the $(n+1)$-particle amplitude.
Consequently, the differential operator $ q^\mu \frac{\partial}{\partial p_a^\mu}$ should also be applied with that same prescription. A similar approach is needed in the application of subleading soft theorems in gauge theory and gravity \cite{Bern:2014vva}. We will elaborate on this point later on.
\Eq{eq:soft_thm} is a simple geometric restatement of the usual physical intuition: a soft scalar parameterizes a shift of the VEVs. In particular, the first term in \Eq{eq:soft_thm} describes the effect of a shift of the VEV on coupling constants and internal propagators, while the second term corresponds to the effect of this shift on external propagators. Notably, the latter disappears when the theory does not have a potential.
\subsection{Proof of Theorem}
We are now ready to prove the soft theorem in \Eq{eq:soft_thm}. To begin, consider the Euler-Lagrange equations of motion for the fields $\phi^I$ that fluctuate about $v^I$,
\eq{
\partial_\mu \mathcal{J}_I^\mu = \frac{\delta L}{\delta \phi^I } \qquad \textrm{where} \qquad \mathcal{J}_I^\mu = \frac{\delta L}{\delta \partial_\mu \phi^I } \, .
}{}
By construction, the dependence of the Lagrangian on the scalar field and on the VEV enter identically, since $\Phi^I = v^I + \phi^I$. The equations of motion thus imply that
\eq{
\partial_\mu \mathcal{J}_I^\mu = \partial_I L\, ,
}{eq:EOM}
where the derivative on the right-hand side is with respect to the VEV.
We will show that the soft theorem in \Eq{eq:soft_thm} follows from evaluating \Eq{eq:EOM} between scattering states.
To begin, let us expand the Lagrangian in \Eq{L_general} about $\Phi^I = v^I + \phi^I$, yielding
\eq{
L = & \phantom{{}+{}} \tfrac{1}{2}\left(g_{IJ}(v)+ \partial_K g_{IJ}(v) \phi^K +\cdots \right) \partial_\mu \phi^I \partial^\mu \phi^J \\
& - \left(V(v)+\partial_I V(v) \phi^I + \tfrac{1}{2} \partial_I \partial_J V(v) \phi^I \phi^J +\cdots \right) + \cdots \, ,
}{L_general_exp}
where the ellipses denote terms higher order in fields or higher order in derivatives which will not affect our analysis.
From \Eq{L_general_exp} we compute the left- and right-hand sides of \Eq{eq:EOM},
\eq{
\partial_\mu \mathcal{J}^\mu_I &= \Box \phi_I + \partial_K g_{IJ} (v)\partial_\mu ( \phi^K \partial^\mu \phi^J ) + \cdots \,, \\
\partial_I L &= \tfrac{1}{2} \partial_I g_{JK}(v) \partial_\mu \phi^J \partial^\mu \phi^K - \partial_I \partial_J V(v) \phi^J - \tfrac{1}{2} \partial_I \partial_J \partial_K V(v) \phi^J \phi^K + \cdots \,.
}{ops}
We will be interested in the overlap of the above operators with $n$-particle scattering states. These operators carry a free index $I$ which we identify with that of the soft particle. Furthermore, the momentum $q$ entering through each operator corresponds to that of the soft particle. So we assume that $q^2=0$ and consider the $q\rightarrow 0$ limit. For convenience, let us define the $n$-particle overlap with an operator $\cal O$ to be
\eq{
\langle {\cal O} \rangle \delta^D (p_1 +\cdots + p_n) = \lim_{q\rightarrow 0} \int d^D x e^{iqx} e_{i_1 I_1} \cdots e_{i_n I_n} \langle p_1^{i_1} ,\cdots p_n^{i_n} | {\cal O}(x) |0\rangle \, .
}{}
Now taking the $n$-particle overlap of the terms in the first line of \Eq{ops}, we obtain
\eq{
\langle \Box \phi_I\rangle &= \lim_{q\rightarrow 0}
\Ampu{n+1}_{I_1 \cdots I_n I} \,,
}{}
which implements LSZ reduction on the soft particle to give the on-shell $(n+1)$-particle amplitude. Meanwhile, we also find that
\eq{
\langle \partial_K g_{IJ} \partial_\mu ( \partial^\mu \phi^J \phi^K)\rangle &= \tfrac{1}{2} \langle \partial_K g_{IJ} \Box (\phi^J \phi^K)\rangle - \tfrac{1}{2} \langle \partial_K g_{IJ} \partial_\mu ( \phi^J \overset{\leftrightarrow}{\partial^\mu} \phi^K)\rangle \\
&= - \tfrac{1}{2} \langle \partial_K g_{IJ} \partial_\mu ( \phi^J \overset{\leftrightarrow}{\partial^\mu} \phi^K)\rangle_{\rm ext}
\, ,
}{}
dropping the term involving $\Box(\phi^I \phi^J)$ since it is proportional to $q^2=0$. Moreover, since the term $\partial_\mu ( \phi^J \overset{\leftrightarrow}{\partial^\mu} \phi^K)$ scales manifestly as ${\cal O}(q)$, we know that as $q\rightarrow 0$ it can only contribute when inserted on an external propagator of a lower-point amplitude. Only in this case can a soft pole going as ${\cal O}(1/q)$ arise from the external propagator, thus yielding a non-vanishing contribution. We denote contributions of this type with a corresponding subscript, so the operator insertion $\langle {\cal O} \rangle_{\rm ext}$ corresponds to the diagram in Fig.~\ref{fig:O_ext}. Putting all of these pieces together we obtain
\eq{
\langle \partial_\mu \mathcal{J}^\mu_I \rangle &= \lim_{q\rightarrow 0}
\Ampu{n+1}_{I_1 \cdots I_n I}
- \tfrac{1}{2} \langle \partial_K g_{IJ} \partial_\mu ( \phi^J \overset{\leftrightarrow}{\partial^\mu} \phi^K)\rangle_{\rm ext}\, ,
}{EOM_LHS}
for the $n$-particle overlap of the first line of \Eq{ops}.
\begin{figure}
\centering
\begin{tikzpicture}
\coordinate (d) at (-3.75, 0);
\coordinate (c) at (-3, 0);
\coordinate (a) at (-2, 1);
\node[left] at (d) {$\langle {\cal O} \rangle_{\rm ext} \quad = \quad \mathlarger{ \sum_{a=1}^n}$};
\node[left] at (c) {$a$};
\coordinate (f) at (1, 1);
\coordinate (l) at (1, -1);
\coordinate (m4) at (-2,0);
\coordinate (mn) at (-0.25, 0);
\coordinate (d1) at (1.00, 0);
\coordinate (d2) at (0.85, 0.5);
\coordinate (d3) at (0.85, -0.5);
\draw [hard] (c) -- (m4);
\draw [hard] (mn) -- (m4);
\draw [hard] (f) -- (mn);
\draw [hard] (l) -- (mn);
\draw[fill=lightgray, opacity=1] (mn) circle (0.65);
\node at (mn) {$\Amp{n}$};
\draw[fill=white, opacity=1] (m4) circle (0.2);
\node[below] at (-2,-0.2) {${\cal O}(q) $};
\draw[fill=black, opacity=1] (d1) circle (0.02);
\draw[fill=black, opacity=1] (d2) circle (0.02);
\draw[fill=black, opacity=1] (d3) circle (0.02);
\end{tikzpicture}
\caption{Diagrams computing $\langle {\cal O}\rangle_{\rm ext}$, which sums over the insertion of an operator ${\cal O}$ on each external leg $a$ of the $n$-particle amplitude.}
\label{fig:O_ext}
\end{figure}
Let us now compute the $n$-particle overlap of the second line of \Eq{ops}. It will be crucial to distinguish between terms in the Lagrangian which are quadratic in fields, corresponding to kinetic and mass terms, versus those which are cubic order or higher, corresponding to interaction vertices. The latter contribute to $\langle \partial_I L \rangle$ through $n$-particle diagrams in which the coupling in each interaction vertex is replaced with the {\it derivative} of that coupling with respect to the VEV. Meanwhile, the former contribute to $\langle \partial_I L \rangle$ through $n$-particle diagrams in which a propagator is replaced with the {\it derivative} of that propagator---or more precisely, the derivative of the kinetic and mass parameters in that propagator---with respect to the VEV. Thus we learn that
\eq{
\langle \partial_I L \rangle &= \partial_I
\Ampu{n}_{I_1 \cdots I_n }
+ \tfrac{1}{2} \langle \partial_I g_{JK} \partial_\mu \phi^J \partial^\mu \phi^K - \partial_I \partial_J \partial_K V \phi^J \phi^K \rangle_{\rm ext}\,,
}{EOM_RHS}
where we have used that $\langle \partial_I \partial_J V(v) \phi^J\rangle$ will eventually vanish when transforming to mass eigenstate basis since the soft particle is assumed to be massless. Indeed, if the soft particle were massive then a shift of the VEV would induce a tadpole, which would be inconsistent.
The first term in this expression arises from insertions of $\partial_I L$ into interaction vertices or internal propagators. Those graphical elements are the only objects that appear in the scattering amplitude, since external propagators are amputated. Meanwhile, the second term appears when $\partial_I L$ is inserted on an external propagator, corresponding to the deformation of the LSZ operation from a shift of the VEV.
Equating \Eq{EOM_LHS} and \Eq{EOM_RHS} by the equations of motion and rearranging terms, we obtain
\eq{
\lim_{q\rightarrow 0}
\Ampu{n+1}_{I_1 \cdots I_n I}
& =
\partial_I \Ampu{n}_{I_1 \cdots I_n }
+ \tfrac{1}{2} \langle \partial_K g_{IJ} \partial_\mu ( \phi^J \overset{\leftrightarrow}{\partial^\mu} \phi^K) + \partial_I g_{JK}\partial_\mu \phi^J \partial^\mu \phi^K - \partial_I \partial_J \partial_K V \phi^J \phi^K \rangle_{\rm ext} \\
& =
\partial_I \Ampu{n}_{I_1 \cdots I_n }
- \langle \Gamma_{K IJ} \phi^J \Box \phi^K + \tfrac{1}{2} \partial_I V_{J K} \phi^J \phi^K \rangle_{\rm ext} \,,
}{}
where we have used that the Christoffel symbol is $ \Gamma^I{}_{JK} = \tfrac12 g^{IL} (\partial_J g_{LK} + \partial_Kg_{LJ} - \partial_Lg_{JK})$. Note that the operator inside the brackets is the cubic vertex for the scalar fields.
By rewriting $\Box$ as the linearized wave equation plus a difference term, we find that the latter contribution exactly combines with $ \partial_I V_{J K} $ to make it covariant, so
\eq{
\lim_{q\rightarrow 0}
\Ampu{n+1}_{I_1 \cdots I_n I}
& =
\partial_I \Ampu{n}_{I_1 \cdots I_n }
- \langle \Gamma_{K IJ} \phi^J (\Box \phi^K + V^{K}{}_{ L} \phi^L) + \tfrac{1}{2} \nabla_I V_{ J K} \phi^J \phi^K \rangle_{\rm ext} \,.
}{}
As noted earlier, the remaining operator insertions should be evaluated on an external propagator connecting to an $n$-particle subamplitude. By construction, we have arranged so that one of the terms will immediately pinch a neighboring propagator, so
\eq{
- \langle \Gamma_{K IJ} \phi^J (\Box \phi^K + V^{K}{}_{L} \phi^L) \rangle_{\rm ext} = -\sum_{a=1}^n \Gamma^{J_a}{}_{ I_a I}\Ampu{n}_{I_1 \cdots J_a \cdots I_n} \, .
}{}
This then implies the following relation between the soft limit of the $(n+1)$-particle amplitude and the $n$-particle amplitude.
\eq{
\lim_{q\rightarrow 0}
\Ampu{n+1}_{I_1 \cdots I_n I}
& =
\nabla_I \Ampu{n}_{I_1 \cdots I_n } - \tfrac{1}{2} \langle \nabla_I V_{ J K} \phi^J \phi^K \rangle_{\rm ext} \,.
}{eq:soft_thm_part1}
The last term corresponds to all contributions to the $(n+1)$-particle amplitude which arise from the cubic potential vertex attached to an $n$-particle subamplitude. Notably, the latter is actually an {\it off-shell} object, since one of the external legs has accumulated the soft momentum $q$. The potential contribution can be evaluated explicitly, and is
\begin{align}
- \tfrac{1}{2} \langle V_{I J K} \phi^J \phi^K \rangle_{\rm ext} &= \sum_{a=1}^n \nabla_I V_{I_a }{}^{J_a}
\Delta(p_a+q)_{J_a}{}^{K_a}
\Ampu{n}_{I_1 \cdots K_a \cdots I_n}(p_1,\cdots, p_a+q, \cdots, p_n) \label {eq:soft_thm_part2}\\
&= \sum_{a=1}^n \nabla_I V_{I_a }{}^{J_a}
\Delta(p_a+q)_{J_a}{}^{K_a}\left(1+ q^\mu \frac{\partial}{\partial p_a^\mu} \right)
\Ampu{n}_{I_1 \cdots K_a \cdots I_n}(p_1,\cdots, p_a, \cdots, p_n) \, ,\nonumber
\end{align}
where $\Delta(p)_I{}^J = (\delta^I{}_J p^2 - V^I{}_J)^{-1}$ is the propagator in the flavor eigenstate basis.
In the first line, we have technically taken an off-shell continuation of the $n$-particle amplitude since the external leg carrying the operator insertion carries momentum $p_a +q$ accrued from the momentum $q$ of the soft particle. Since this object is off-shell, one might rightly worry that this expression is not invariant under changes of field basis. As we will see in the next section, however, these off-shell contributions magically cancel amongst each other in the full soft theorem. Moreover, in the second line above we have simply rewritten the shifted momentum as an operator $1+ q^\mu \partial /\partial p_a^\mu$ acting on the on-shell $n$-particle amplitude. Combining \Eq{eq:soft_thm_part1} and \Eq{eq:soft_thm_part2} and trivially mapping to tetrad basis, we obtain the claimed geometric soft theorem in \Eq{eq:soft_thm}.
\subsection{Field Basis Independence}
It is not a priori obvious why the soft theorem in \Eq{eq:soft_thm} is a sensible on-shell operation. The left-hand side is the limit of an on-shell quantity, but the right-hand side is the {\it differential} of an on-shell quantity. If the derivative in question does not preserve the on-shell conditions then one could rightly worry that the resulting expression is inherently off-shell and thus dependent on choice of field basis. Here we address those concerns and show that they are not a problem.
First, we study the question of total momentum conservation. Consider the addition of ``zero'' to the $n$-particle amplitude by adding
\eq{
\delta \Amp{n}^{i_1\cdots i_n} = (p_1 +\cdots + p_n)^\mu {\cal O}^{i_1\cdots i_n}_\mu \, ,
}{eq:zero1}
which is simply the total momentum contracted with an arbitrary tensor. By construction, $\delta \Amp{n}^{i_1\cdots i_n}=0$ is zero on-shell, so we would be permitted to include or discard it at our convenience. However, let us instead consider the action of the soft theorem in \Eq{eq:soft_thm} on this contribution. Obviously, $\nabla^i \delta \Amp{n}^{i_1\cdots i_n}=0$ since $\nabla^i$ commutes with total momentum. On the other hand, it is evident that $q^\mu \frac{\partial}{\partial p_a^\mu} \delta \Amp{n}^{i_1\cdots i_n}\neq0$. Thus, the soft theorem naively does not preserve total momentum conservation.
The resolution to this issue is that the soft limit itself requires a prescription in order to preserve momentum conservation. This is because sending $q\rightarrow 0$ with all other momenta fixed does not preserve total momentum conservation. Consequently, the soft limit must be defined with the prescription that the momentum of some auxiliary particle---distinct from the soft particle---has been eliminated from the $(n+1)$-particle amplitude to begin with. The same prescription should then be applied to the $n$-particle amplitude, in which case \Eq{eq:zero1} is algebraically zero.
Second, let us consider the on-shell conditions. Again we add ``zero'' to the $n$-particle amplitude, this time via
\eq{
\delta \Amp{n}^{i_1\cdots i_n} = \Delta^{-1}(p_a)^{i_a}{}_{j_a} {\cal O}^{i_1\cdots j_a \cdots i_n} = (\delta^{i_a}{}_{j_a} p_a^2 - V^{i_a}{}_{j_a}) {\cal O}^{i_1\cdots j_a \cdots i_n} \, ,
}{eq:zero2}
which is the on-shell condition for particle $a$ contracted with an arbitrary tensor. Plugging $\delta \Amp{n}^{i_1\cdots i_n}$ into the soft theorem in \Eq{eq:soft_thm}, we find that the first term is
\eq{
-\nabla^i V^{i_a}{}_{j_a} {\cal O}^{i_1\cdots j_a \cdots i_n} \, ,
}{}
while the second term evaluates to
\eq{
\nabla^i V^{i_a }{}_{j_a}
\Delta(p_a+q)^{j_a}{}_{k_a} \Delta^{-1}(p_a+q)^{k_a}{}_{l_a} {\cal O}^{i_1\cdots l_a \cdots i_n} = \nabla^i V^{i_a}{}_{j_a} {\cal O}^{i_1\cdots j_a \cdots i_n} \, ,
}{}
where we have ignored all terms in which the differentials act on factors other than the on-shell condition. Remarkably, we see that the first and second terms exactly cancel, so $\delta \Amp{n}^{i_1\cdots i_n} $ has no effect on the soft theorem.
In general, one might worry that a factor of the on-shell condition can appear in the $n$-particle amplitude in more complicated ways than in \Eq{eq:zero2}. For example, the on-shell condition might appear quadratically or higher. Fortunately, this scenario is not an issue because the differential operators will always leave a residual factor that is linear or higher in the on-shell condition, which will in turn vanish. Another possibility is that the on-shell condition enters deep inside the $n$-particle amplitude, for instance through an interaction vertex. However, even in this case, any factor of $p_a^2$ which appears can be reassociated with the $\delta^{i_a}{}_{j_a}$ that necessarily sits out in front of the amplitude, in which case \Eq{eq:zero2} applies once again.
To summarize, we have shown that the soft theorem in \Eq{eq:soft_thm} preserves on-shell kinematics, and is thus a sensible operation. As a corollary, all quantities on the left- and right-hand sides of this formula will appear in combinations which are invariant under changes of field basis.
\section{Examples}
\label{sec:examples}
In this section we apply the geometric soft theorem to some concrete examples. Of course, one can study a theory of maximal generality such as the one defined in \Eq{L_general}, which includes arbitrary potential terms as well as two- and higher-derivative interactions. In fact, we have verified via explicit calculation that our soft theorem is valid up to seven-particle scattering in this general theory. Unsurprisingly, the resulting amplitude expressions are quite complicated and not particularly illuminating, so it will be more instructive to instead consider the soft theorem in some explicit examples in which only a subset of the couplings is active.
\subsection{General Potential}
To start, consider a theory with a flat field-space manifold but with an arbitrary potential,
\eq{
L &= \tfrac{1}{2} \delta_{IJ} \partial_{\mu} \Phi^{I} \partial^{\mu} \Phi^{J} - V(\Phi) \, .
}{}
Since the field-space metric is flat, the tetrad is trivial and the covariant derivative, $\nabla_I$, and partial derivative, $\partial_I$, are interchangeable.
A straightforward computation yields the three-, four-, and five-particle scattering amplitudes,
\eq{
\Amp{3}^{i_1 i_2 i_3} =& - V^{i_1 i_2 i_3} \,,\\
\Amp{4}^{i_1 i_2 i_3 i_4} =& - V^{i_1 i_2 i_3 i_4} - \sum_j \left ( \frac{ V^{ i_1 i_2 j }V_{j}{}^{i_3 i_4} }{s_{12} - m_j^2} +\frac{ V^{ i_2 i_3 j}V_{j}{}^{i_1 i_4} }{s_{23} - m_j^2} + \frac{ V^{ i_1 i_3 j}V_{j}{}^{i_2 i_4} }{s_{13} - m_j^2} \right) \,, \\
\Amp{5}^{i_1 i_2 i_3 i_4 i_5} =& - V^{i_1 i_2 i_3 i_4 i_5} -
\sum_{j} \left(\frac{V^{i_{1} i_{2} j} V_{j}{}^{i_3 i_4 i_5}}{s_{12} - m^{2}_{j}} + {\rm perm.}\right) \\ &- \sum_{j,k} \left(\frac{V^{i_1 i_2 j} V_{j}{}^{ i_3 k} V_{k}{}^{i_4 i_5}}{(s_{12} - m^{2}_{j})(s_{45} - m^{2}_{k})} + {\rm perm.} \right) \, ,
}{}
where here and throughout we define the Mandelstam invariants to be $s_{ij} = (p_i + p_j)^2$ and $s_{ijk} = (p_i + p_j + p_k)^2$.
We can now verify the validity of the soft theorem in \Eq{eq:soft_thm}. For example, consider the $p_4\rightarrow 0$ limit of the four-particle amplitude. The four-particle contact contribution $- V^{i_1 i_2 i_3 i_4}$ maps correctly onto $-\partial^{i_4} V^{i_1 i_2 i_3}$. Furthermore, the four-particle factorization contributions map correctly onto the terms in \Eq{eq:soft_thm} involving the cubic potential terms. As noted previously, these terms have the interpretation as the variation of the external propagators with respect to the VEV. The soft theorem similarly holds for the $p_5 \rightarrow 0$ soft limit of the five-particle amplitude, and so on and so forth.
The above analysis incorporates all possible potential operators. It is of course natural to ask what happens if we restrict to a subset of interactions, for example as would arise in $ \Phi^4$ theory. However, we should emphasize here that our geometric soft theorem requires the inclusion of {\it all interactions} generated by an infinitesimal shift of the VEV. This is necessary because the covariant derivative effectively probes a small region in field space about the VEV. So to apply our geometric soft theorem to $\Phi^4$ theory one would need to compute amplitudes in a neighborhood of the original theory in field space, which in effect requires including $\Phi^3$ operators as well.
\subsection{Two-Derivative Interactions}
\label{subsec:twoder}
Next, let us examine a theory of massless scalars with general two-derivative interactions and no potential, also known as the non-linear sigma model (NLSM). The Lagrangian is
\eq{
L &= \tfrac{1}{2} g_{IJ}(\Phi) \partial_{\mu} \Phi^{I} \partial^{\mu} \Phi^{J}
= \tfrac{1}{2} (\delta_{IJ} - \tfrac{1}{3} R_{IKJL} \phi^K\phi^L - \tfrac{1}{6} \nabla_K R_{ILJM}\phi^K \phi^L \phi^M +\cdots) \partial_{\mu} \phi^{I} \partial^{\mu} \phi^{J} \, ,
}{L_GNLSM}
where in the second equality we have expanded about the VEV using the analog of Riemann normal coordinates. This choice of field basis makes the dependence on geometric quantities manifest. Note that the three-particle vertex is identically zero, since the Christoffel connection evaluated at the vacuum is zero in Riemann normal coordinates. This fact can also be understood purely from the perspective of amplitudes: the on-shell three-particle amplitude in a derivatively coupled theory of scalars is always zero, so the corresponding Lagrangian term can be eliminated by a field redefinition.
It is a simple but tedious exercise to compute the three-, four-, five-, and six-particle scattering amplitudes,
\eq{
\Amp{3}^{i_1 i_2 i_3} \phantom{{}+{}} =& \phantom{{}+{}} 0 , \\
\Amp{4}^{i_1 i_2 i_3 i_4} \phantom{{}+{}} =& \phantom{{}+{}} R^{i_1 i_3 i_2 i_4 } s_{34} + R^{i_1 i_2 i_3 i_4} s_{24} , \\
\Amp{5}^{i_1 i_2 i_3 i_4 i_5} \phantom{{}+{}} =& \phantom{{}+{}} \nabla^{i_3} R^{i_1 i_4 i_2 i_5} s_{45} + \nabla^{i_4} R^{i_1 i_3 i_2 i_5} s_{35} + \nabla^{i_4} R^{i_1 i_2 i_3 i_5} s_{25} \\& + \nabla^{i_5} R^{i_1 i_3 i_2 i_4} s_{34} + \nabla^{i_5} R^{i_1 i_2 i_3 i_4} (s_{24} + s_{45}) , \\
\Amp{6}^{i_1 i_2 i_3 i_4 i_5 i_6} \phantom{{}+{}} =& - \tfrac{1}{72} \left(R^{i_1 i_3 i_2 j} s_{12} + R^{i_1 i_2 i_3 j} s_{13} \right) \frac{1}{s_{123}} \left( R_{j}^{\;\;i_6 i_5 i_4} s_{46} + R_{j}^{\;\;i_5 i_6 i_4} s_{45}\right)\\&
+ \tfrac{1}{108} \left(R^{i_1 i_3 i_2 j} (s_{12} - \tfrac{1}{6}s_{123}) + R^{i_1 i_2 i_3 j} (s_{13} - \tfrac{1}{6}s_{123}) \right) \left( R_{j}^{\;\; i_6 i_5 i_4} + R_{j}^{\;\; i_5 i_6 i_4} \right) \\&
+ \tfrac{1}{90} R^{i_1 i_6 i_5 j} R_{j}^{\;\; i_2 i_3 i_4} s_{13} + \tfrac{1}{80} \nabla^{i_6} \nabla^{i_5} R^{i_1 i_2 i_3 i_4} s_{13}
+ {\rm perm}\, .
}{GNLSM_amps}
We are now equipped to verify the validity of the soft theorem in \Eq{eq:soft_thm} for the two-derivative theory.\footnote{Note that in all subsequent analysis it is necessary to reduce to a minimal basis of Riemann curvature tensors and their derivatives using the first and second Bianchi identities. Furthermore one must place covariant derivatives in a canonical order.}
The $p_4 \rightarrow 0$ limit of the four-particle amplitude is zero,
\eq{
\lim_{p_4\rightarrow 0} \Amp{4}^{i_1 i_2 i_3 i_4} &=0 = \nabla^{i_4} \Amp{3}^{i_1 i_2 i_3 } \, ,
}{}
which is consistent since the three-particle amplitude vanishes identically. Meanwhile, taking the $p_5 \rightarrow 0$ limit of the five-particle amplitude yields
\eq{
\lim_{p_5\rightarrow 0} \Amp{5}^{i_1 i_2 i_3 i_4 i_5} &= \nabla^{i_5} R^{i_1 i_3 i_2 i_4} s_{34} + \nabla^{i_5} R^{i_1 i_2 i_3 i_4} s_{24} = \nabla^{i_5} \Amp{4}^{i_1 i_2 i_3 i_4} \, ,
}{GNLSM_5pt}
which again accords with the soft theorem.
Last of all, the $p_6 \rightarrow 0$ limit of the six-particle amplitude also satisfies the soft theorem in a quite non-trivial fashion, as factorizable and non-factorizable terms conspire to give the covariant derivative of the five-particle amplitude. While the above analysis applies to any two-derivative theory, we will discuss in \Sec{sec:coset} the case where the soft limit vanishes, {\it i.e.}~there is an Adler zero.
\subsection{Higher-Derivative Interactions}
\label{subsec:lambda}
Last but not least, let us consider the two-derivative theory augmented by the leading higher-derivative interaction. This theory is described by the Lagrangian,
\eq{
L &= \tfrac{1}{2} g_{IJ}(\Phi) \partial_{\mu} \Phi^{I} \partial^{\mu} \Phi^{J} +\tfrac{1}{4}\lambda_{IJKL}(\Phi) \partial_\mu \Phi^I \partial^\mu \Phi^J \partial_\nu \Phi^K \partial^\nu \Phi^L \, .
}{LhigherD}
The $\lambda$-dependent contributions to three-, four-, five-, and six-particle scattering amplitudes are
\eq{
\Amp{3,\lambda}^{i_1 i_2 i_3}
=& \phantom{{}+{}} 0 \,, \\
\Amp{4,\lambda}^{i_1 i_2 i_3 i_4}
=& \phantom{{}+{}} \tfrac{1}{2} s_{12} s_{34} \lambda^{i_1 i_2 i_3 i_4} + \tfrac{1}{2} s_{13} s_{24} \lambda^{i_1 i_3 i_2 i_4} + \tfrac{1}{2} s_{23} s_{14} \lambda^{i_2 i_3 i_1 i_4} \,, \\
\Amp{5,\lambda}^{i_1 i_2 i_3 i_4 i_5}
=& \phantom{{}+{}} \tfrac{1}{2} s_{12} s_{34} \nabla^{i_5} \lambda^{i_1 i_2 i_3 i_4} + \tfrac{1}{2} s_{13} s_{24} \nabla^{i_5} \lambda^{i_1 i_3 i_2 i_4} + \tfrac{1}{2} s_{23} s_{14} \nabla^{i_5} \lambda^{i_2 i_3 i_1 i_4} \\
& + \tfrac{1}{2} s_{23} s_{45} \nabla^{i_1} \lambda^{i_2 i_3 i_4 i_5} + \tfrac{1}{2} s_{24} s_{35} \nabla^{i_1} \lambda^{i_2 i_4 i_3 i_5} + \tfrac{1}{2} s_{34} s_{25} \nabla^{i_1} \lambda^{i_3 i_4 i_2 i_5} \\
& + \tfrac{1}{2} s_{13} s_{45} \nabla^{i_2} \lambda^{i_1 i_3 i_4 i_5} + \tfrac{1}{2} s_{14} s_{35} \nabla^{i_2} \lambda^{i_1 i_4 i_3 i_5} + \tfrac{1}{2} s_{34} s_{15} \nabla^{i_2} \lambda^{i_3 i_4 i_1 i_5} \\
& + \tfrac{1}{2} s_{12} s_{45} \nabla^{i_3} \lambda^{i_1 i_2 i_4 i_5} + \tfrac{1}{2} s_{14} s_{25} \nabla^{i_3} \lambda^{i_1 i_4 i_2 i_5} + \tfrac{1}{2} s_{24} s_{15} \nabla^{i_3} \lambda^{i_2 i_4 i_1 i_5} \\
& + \tfrac{1}{2} s_{12} s_{35} \nabla^{i_4} \lambda^{i_1 i_2 i_3 i_5} + \tfrac{1}{2} s_{13} s_{25} \nabla^{i_4} \lambda^{i_1 i_3 i_2 i_5} + \tfrac{1}{2} s_{23} s_{15} \nabla^{i_4} \lambda^{i_2 i_3 i_1 i_5} \, , \\
\Amp{6,\lambda}^{i_1 i_2 i_3 i_4 i_5 i_6}
=&
- \tfrac{1}{32} \left( s_{12} ( s_{13} + s_{23})\lambda^{i_1 i_2 i_3 j} \right) \frac{1}{s_{123}}\left(s_{45} ( s_{46} + s_{56})\lambda_{j}^{\;\; i_6 i_5 i_4 } \right)
\\ &
+ \tfrac{1}{24} \left( s_{12} ( s_{13} + s_{23})\lambda^{i_1 i_2 i_3 j} \right) \frac{1}{s_{123}} \left(R_{j}^{\;\; i_6 i_5 i_4} (s_{46} - \tfrac{1}{3}s_{456}) + R_{j}^{\;\; i_5 i_6 i_4} (s_{45} - \tfrac{1}{3}s_{456}) \right)
\\ &
+ \tfrac{1}{48} s_{12}s_{34} \left(R^{i_4 i_6 i_5 j} \lambda_{j}^{\;\; i_3 i_2 i_1} + R^{i_2 i_6 i_5 j} \lambda_{j}^{\;\; i_1 i_4 i_3 } \right)
+ \tfrac{1}{32}s_{12}s_{34} \nabla^{i_6} \nabla^{i_5} \lambda^{i_1 i_2 i_3 i_4} + {\rm perm.}
\,,
}{}
which should be added to the two-derivative amplitudes computed in \Eq{GNLSM_amps}. In accordance with \Eq{eq:soft_thm}, the $p_{4} \rightarrow 0$ soft limit of the four-particle amplitude vanishes. Furthermore, the $p_{5} \rightarrow 0$ soft limit for the five-particle scattering amplitude also satisfies the soft theorem.
The soft theorem also holds for the $p_6 \rightarrow 0$ soft limit of the six-particle amplitude, with delicate cancellations between contact terms and factorizable terms.
\section{Theories with Symmetry}
\label{sec:coset}
Historically, soft theorems were discovered in the context of spontaneous symmetry breaking. For each broken generator of the internal symmetry, a NGB emerges at low energies. This state transforms non-linearly under the symmetry and can exhibit certain universal soft behaviors.
For these reasons we dedicate the present section solely to the application of our geometric soft theorems to theories with symmetry. Our analysis will apply to any theory whose scalars exhibit a symmetry of linear or non-linear type. The case of NGBs will be a subset of our results. As we will see, our framework offers a new unified perspective on well-established phenomena such as the Adler zero and the dilaton soft theorem.
\subsection{Geometry of Symmetry}
Let us consider a general theory of scalars that is invariant under the following symmetry
transformation of the fields,
\eq{
\Phi^I \quad\to \quad \Phi^I + {\cal K}^I(\Phi)\, .
}{eq:globalsymm}
The symmetry in question may be linear or non-linear. Invariance implies that the Lagrangian is unchanged under \Eq{eq:globalsymm}. Applying the transformation to the general Lagrangian in \Eq{L_general}, we find that the couplings are effectively shifted by
\eq{
g_{IJ}(\Phi) &\quad\rightarrow\quad g_{IJ}(\Phi) + {\cal L}_{\cal K} g_{IJ}(\Phi) \, ,\\
V(\Phi) &\quad\rightarrow\quad V(\Phi) + {\cal L}_{\cal K} V(\Phi) \,, \\
\lambda_{IJKL}(\Phi) &\quad\rightarrow\quad \lambda_{IJKL}(\Phi) + {\cal L}_{\cal K} \lambda_{IJKL}(\Phi) \,,
}{}
and so on, where ${\cal L}_{\cal K}$ is the Lie derivative with respect to ${\cal K}^I(\Phi)$. Since each term in \Eq{L_general} has a different derivative structure, each term is separately invariant, so
\eq{
\mathcal{L}_{\cal K} g_{IJ}(v) = \mathcal{L}_{\cal K} V(v) = \mathcal{L}_{\cal K} \lambda_{IJKL}(v) = 0\, ,
}{eq:KillV}
where we have chosen to evaluate these expressions at the VEV. Crucially, we observe that the first condition in \Eq{eq:KillV} implies that
\eq{
\mathcal{L}_{\cal K} g_{IJ}(v) = \nabla_I {\cal K}_J(v) + \nabla_J {\cal K}_I(v) = 0\,,
}{eq:killingeq}
so ${\cal K}^I(v)$ is in fact a Killing vector of the field-space manifold at the VEV.
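As a quick concrete check of \Eq{eq:killingeq} (a sketch of our own, not from the original text), one can verify symbolically that a rotation Killing vector of the round two-sphere metric annihilates the metric under the Lie derivative, which is equivalent to the Killing equation for the Levi-Civita connection.
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # round metric on S^2

# Killing vector for rotations about the x-axis: (K^theta, K^phi)
K = [sp.sin(phi), sp.cos(phi)*sp.cos(theta)/sp.sin(theta)]

# (L_K g)_ij = K^k d_k g_ij + g_kj d_i K^k + g_ik d_j K^k
def lie_g(i, j):
    return (sum(K[k]*sp.diff(g[i, j], x[k]) for k in range(2))
            + sum(g[k, j]*sp.diff(K[k], x[i]) for k in range(2))
            + sum(g[i, k]*sp.diff(K[k], x[j]) for k in range(2)))

assert all(sp.simplify(lie_g(i, j)) == 0
           for i in range(2) for j in range(2))
\end{verbatim}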
Note that if a tensor is annihilated by $ {\cal L}_{\cal K} $, then so too is the covariant derivative of that tensor, so in particular
\eq{
{\cal L}_{\cal K} {\cal O}_{I_1\cdots I_n}(v) = 0 \qquad \textrm{implies that} \qquad {\cal L}_{\cal K} \nabla_J {\cal O}_{I_1\cdots I_n}(v) = 0 \, .
}{}
Altogether, this means that the tensor couplings in the Lagrangian, as well as their covariant derivatives, are all annihilated by $ {\cal L}_{\cal K} $. Since on-shell amplitudes are composed precisely out of these objects, they too are annihilated,
\begin{align}
\label{eq:LieDerAmplitude}
\mathcal{L}_{{\cal K}}\Amp{n}^{i_{1} \cdots i_{n}} = {\cal K}_{i} \nabla^{i} \Amp{n}^{i_{1} \cdots i_{n}} - \sum^n_{a=1} \nabla_{j_a} {\cal K}^{i_a} \Amp{n}^{i_{1} \cdots j_a \cdots i_{n}} =0 \, .
\end{align}
This is the geometric statement that scattering exhibits the symmetry.
Thus far, we have not drawn any distinctions between linear and non-linear symmetries. To do so, let us expand
\Eq{eq:globalsymm} about the VEV using $\Phi^I = v^I + \phi^I$, yielding the symmetry transformation
\eq{
\phi^I \quad \rightarrow \quad \phi^I + {\cal K}^I(v) + \partial_J {\cal K}^I(v)\phi^J + \cdots \, .
}{}
If the symmetry is linear at the VEV, then ${\cal K}^I(v)=0$ and we identify $\partial_J {\cal K}^I(v) = T^I{}_J$ as the corresponding generator. In this case \Eq{eq:LieDerAmplitude} becomes
\eq{
\sum^n_{a=1} T^{i_a}{}_{j_a} \Amp{n}^{i_{1} \cdots j_a \cdots i_{n}} = 0 \, ,
}{}
which is the usual Ward identity for the amplitude associated with an unbroken symmetry.
If, on the other hand, the transformation has an affine component, ${\cal K}^I(v)\neq 0$, then the symmetry is non-linear. Moreover, using that the Lie derivative of the covariant derivative of the potential is zero, we find that
\eq{
0= {\cal L}_{\cal K} V_I(v) = {\cal K}^J \nabla_J V_I(v) + \nabla_I {\cal K}^J V_J(v) = {\cal K}^J V_{IJ}(v) \,.
}{}
This implies that the Killing vector is a null eigenvector of the mass matrix, and thus defines a massless state. The massless particle associated with ${\cal K}^I(v)$ is defined by ${\cal K}_i(v) |p^i\rangle$. For obvious reasons, we dub this massless particle a NGB, since this is its identity in theories of spontaneous symmetry breaking. In the next section, we derive the corresponding soft theorem for this NGB.
\subsection{Nambu-Goldstone Bosons}
We are now ready to study the interplay between symmetry and soft limits. We restrict to the case of non-linear symmetries in which the spectrum includes a massless NGB. For simplicity we also assume a vanishing potential.
In this case, the soft limit of the NGB is given by
\begin{align}
\lim_{q\rightarrow 0} {\cal K}_{i} \Amp{n+1}^{i_{1}\cdots i_n i} = {\cal K}_{i} \nabla^{i} \Amp{n}^{i_{1}\cdots i_{n}} \, ,
\end{align}
which is simply the geometric soft theorem in \Eq{eq:soft_thm} contracted against the Killing vector corresponding to that NGB.
Combining the geometric soft theorem in \Eq{eq:soft_thm} together with the fact that the Lie derivative of the amplitude with respect to the Killing vector is zero in \Eq{eq:LieDerAmplitude}, we obtain\footnote{This formulation of the geometric soft theorem can also be proven directly using current algebra arguments for the Noether current, $ j_\mu{} = {\cal K}(\Phi)_{I} \partial_\mu \Phi^I $, associated with the symmetry transformation.}
\begin{align}
\lim_{q\rightarrow 0} {\cal K}_{i} \Amp{n+1}^{i_{1}\cdots i_n i} = \sum^n_{a=1} \nabla_{j_a} {\cal K}^{i_a} \Amp{n}^{i_{1} \cdots j_a \cdots i_{n}} \, ,
\label{eq:softK}
\end{align}
which is a {\it non-zero} soft theorem. In particular, the soft limit of the $(n+1)$-particle amplitude produces a linear combination of $n$-particle amplitudes.
Non-zero soft theorems have appeared recently in \cite{Kampf:2019mcd} in the context of two-derivative theories of NGBs, and indeed those are a subcase of the soft theorem described above. Our framework clarifies the content of these non-zero soft theorems by presenting them in terms of geometric quantities which are invariant under changes of field basis.
A broad class of NGB theories describe the symmetry breaking pattern of a group $G$ to a subgroup $H$. The corresponding effective theory is of course given by a NLSM whose target space is the coset $G/H$.
Such a manifold is endowed with a set of Killing vectors, ${\cal K}^I_\alpha(\Phi)$ for $\alpha = 1, \cdots, \text{dim}(G)$, which descend from the right-invariant vector fields on the group manifold $G$. These Killing vectors satisfy
\begin{equation}
[ {\cal K}_\alpha , {\cal K}_\beta ] = f_{\alpha\beta}{}^\gamma {\cal K}_\gamma\, ,
\end{equation}
which are the commutation relations of the Lie algebra of $G$.
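For instance (an illustration of our own, under the conventions above), for the coset $S^2 \cong SO(3)/SO(2)$ the three rotation Killing vectors close into the $\mathfrak{so}(3)$ algebra, which can be checked directly:
\begin{verbatim}
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]

# Killing vectors of S^2 ~ SO(3)/SO(2): rotations about x, y, z axes
Kx = [sp.sin(phi),  sp.cos(phi)*sp.cos(theta)/sp.sin(theta)]
Ky = [sp.cos(phi), -sp.sin(phi)*sp.cos(theta)/sp.sin(theta)]
Kz = [sp.Integer(0), sp.Integer(1)]

def bracket(A, B):   # [A, B]^i = A^j d_j B^i - B^j d_j A^i
    return [sum(A[j]*sp.diff(B[i], x[j]) - B[j]*sp.diff(A[i], x[j])
                for j in range(2)) for i in range(2)]

def equal(A, B):
    return all(sp.simplify(a - b) == 0 for a, b in zip(A, B))

# so(3) algebra: [Kx, Ky] = Kz and cyclic permutations
assert equal(bracket(Kx, Ky), Kz)
assert equal(bracket(Ky, Kz), Kx)
assert equal(bracket(Kz, Kx), Ky)
\end{verbatim}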
Consider a specific point on $G/H$ labelled by the VEV of the fields, $v^I$. The full set of Killing vectors ${\cal K}_\alpha(v)$ can be subdivided into the set of \emph{unbroken} generators ${\cal K}_a(v)$ and \emph{broken} generators ${\cal K}_i(v)$. We define the unbroken generators by
\eq{
{\cal K}_a(v)={\cal T}_a(v) \qquad \textrm{for} \qquad
a= 1,\cdots,\text{dim}(H) \, ,
}{}
which satisfy the commutation relations of the Lie algebra of $H$,
\begin{equation}
[ {\cal T}_a , {\cal T}_b ] = f_{ab}{}^c {\cal T}_c{} \, .
\end{equation}
Since the unbroken generators stabilize $v^I$, they vanish precisely at that value of the VEV, so
\eq{
{\cal T}_a^I(v) = 0\,.
}{eq:Tzero}
Note that the identity of the unbroken generators will in general change for different values of the VEV, but the above equation will still hold. Moreover, \Eq{eq:Tzero} only holds at the VEV and not at every point on the manifold, so in general covariant derivatives of the unbroken generators will not be zero at the VEV.
On the other hand, the broken generators are defined by
\eq{
{\cal K}_i(v)={\cal X}_i(v) \qquad \textrm{for} \qquad i = 1,\cdots, \text{dim}(G/H) \, ,
}{}
which do not vanish at the VEV and which satisfy the commutation relations
\begin{align}
[ {\cal T}_a , {\cal X}_i ] &= f_{ai}{}^j {\cal X}_j \label{eq:commutatorGH_TX}\,, \\
[ {\cal X}_i , {\cal X}_j ] &= f_{ij}{}^a {\cal T}_a + f_{ij}{}^k {\cal X}_k \,. \label{eq:commutatorGH_XX}
\end{align}
Note that we have used the indices $i,j,k,$ etc.~for the broken generators because they precisely span the tangent space of the field manifold.
At last we are ready to derive a geometric soft theorem for a theory of $G/H$ symmetry breaking. In particular, we consider \Eq{eq:softK}
where the soft NGB is defined by the Killing vector ${\cal X}_{j}$ for a broken generator, so
\eq{
\lim_{q\rightarrow 0} ({\cal X}_{j})_{i} \Amp{n+1}^{i_{1} \dots i_{n} i} = \sum^n_{a=1} \nabla_{j_a} ({\cal X}_{j})^{i_a} \Amp{n}^{i_{1} \cdots j_a \cdots i_{n}}\,.
}{eq:softX}
In order to simplify the right-hand side we need to compute the quantity $\nabla_{j_a} ({\cal X}_{j})^{i_a}$.
To this end we define $(t_\alpha)_\beta{}^\gamma = f_{\beta\alpha}{}^\gamma$, which are the generators of the adjoint representation of $G$.
The Killing vectors at an arbitrary point $\Phi^I = v^I + \phi^I$ on the field manifold are related to those at $v^I$ by the adjoint action of $G$ \cite{Boulware:1981ns}, which in components are
\begin{align}
{\cal K}_\alpha{}^i(\Phi) &= [e^{\phi^j t_j}]_\alpha{}^\beta \, {\cal K}_\beta{}^i(v) = [e^{\phi^j t_j}]_\alpha{}^k \, {\cal X}_k{}^i(v) \nonumber \\
&
= \delta_\alpha{}^k \, {\cal X}_k{}^i(v) + \phi^j f_{\alpha j}{}^{k} \, {\cal X}_k{}^i(v) + \phi^j \phi^l f_{\alpha j}{}^{\gamma} f_{\gamma l}{}^{k} \, {\cal X}_k{}^i(v) + \cdots \, ,
\end{align}
where we have used \Eq{eq:Tzero} to drop the unbroken generators, which vanish at the VEV.
We will also need the components of the spin connection, which are given in \cite{Camporesi:1990wm}:
\eq{
\omega_{ij}{}^k(\Phi) = \tfrac12 f_{ij}{}^k + f_{aj}{}^k {\cal K}^a{}_i(\Phi)\,.
}{}
Combining the above expressions we find that
\eq{
\nabla_i ({\cal X}_j)_k &= \partial_i ({\cal X}_j)_k + \omega_{ik}{}^l ({\cal X}_{j})_l \\
&= f_{ki}{}^l ({\cal X}_j)_l + \tfrac12 f_{ik}{}^l({\cal X}_j)_l + f_{ak}{}^l \, {\cal T}^a{}_i \, ({\cal X}_j)_l = -\tfrac12 f_{ik}{}^l ({\cal X}_j)_l \, ,
}{eq:dkfijk}
where again we have used the vanishing of the unbroken generators in \Eq{eq:Tzero}. Plugging \Eq{eq:dkfijk} back into \Eq{eq:softX}, we obtain the geometric soft theorem for a $G/H$ coset space,
\begin{align}
\lim_{q\rightarrow 0} \Amp{n+1}^{i_{1} \dots i_{n} i} = -\tfrac12 \sum^n_{a=1} f_{j_a}{}^{i_ai} \Amp{n}^{i_{1} \cdots j_a \cdots i_{n}} \, ,
\label{eq:softGH}
\end{align}
where we have stripped off the contraction with ${\cal X}_j$ since it appears on both sides of the soft theorem, which must hold for any $j$. Note that the right-hand side of \Eq{eq:softGH} depends solely on the structure constants of broken generators in \Eq{eq:commutatorGH_XX}.
This result has an interesting geometric interpretation. In addition to the Levi-Civita connection, coset manifolds are endowed with a distinct ``$H$-connection'', $\overline \nabla$, which is also metric compatible but not torsion free \cite{KobayashiNomizu}. The components of the torsion $S= \overline \nabla - \nabla$ are given by the structure constants \cite{Camporesi:1990wm},
\eq{
S_{i}{}^{jk} = \tfrac12 f_i{}^{kj}\,.
}{}
Thus, when the geometric soft theorem in \Eq{eq:softGH} is non-zero, the right-hand side measures the torsion of the $H$-connection of $G/H$. For a detailed discussion of torsion in NLSM see \cite{Braaten:1985is}.
\subsection{Adler Zero Revisited}
Let us now turn to the scenario in which the right-hand side of the geometric soft theorem is vanishing, {\it i.e.}~exhibits an Adler zero. We will see how our approach offers a new geometric perspective on the classic soft theorems of the NLSM \cite{Adler:1964um}.
To begin, let us turn to the general two-derivative theory described in Sec.~\ref{subsec:twoder}, assuming an Adler zero condition for all amplitudes. For the soft limit of the five-particle amplitude in \Eq{GNLSM_amps} this implies that $\nabla^{i_5} R^{i_1 i_2 i_3 i_4} = 0$, while for six-particle scattering this implies that $\nabla^{(i_5}\nabla^{i_6)} R^{i_1 i_2 i_3 i_4} = 0$. For $n$-particle scattering, the Adler zero enforces that $\nabla^{(i_5} \cdots \nabla^{i_n)} R^{i_1 i_2 i_3 i_4} = 0$. In the limit that $n$ goes to infinity, this implies that all symmetric derivatives of the Riemann curvature are zero at the VEV. This is equivalent to saying that $\nabla^{i_5} R^{i_1 i_2 i_3 i_4}=0$ at {\it any arbitrary} point in field space, and so the Riemann curvature is covariantly constant. This is possible if and only if the manifold is a locally symmetric space or symmetric coset \cite{Nomizu,Helgason}. In conclusion, there is an Adler zero \emph{if and only if} the field-space manifold is symmetric. The fact that the Adler zero only holds for symmetric cosets is known \cite{Low:2014nga, Cheung:2020tqz}, but our result explains that this is the only class of target space manifolds for which it holds.
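As a sanity check of the covariantly-constant-curvature criterion (again a sketch of our own, not part of the original analysis), one can verify that the round two-sphere, a symmetric space, satisfies $\nabla_e R^a{}_{bcd} = 0$ everywhere:
\begin{verbatim}
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection
Gam = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], x[j])
         + sp.diff(g[l, j], x[i]) - sp.diff(g[i, j], x[l]))/2
         for l in range(n))) for j in range(n)] for i in range(n)]
       for k in range(n)]

# Riemann tensor R^a_{bcd}
def R(a, b, c, d):
    expr = sp.diff(Gam[a][d][b], x[c]) - sp.diff(Gam[a][c][b], x[d])
    expr += sum(Gam[a][c][e]*Gam[e][d][b] - Gam[a][d][e]*Gam[e][c][b]
                for e in range(n))
    return sp.simplify(expr)

# Covariant derivative nabla_e R^a_{bcd}
def covR(e, a, b, c, d):
    expr = sp.diff(R(a, b, c, d), x[e])
    expr += sum(Gam[a][e][f]*R(f, b, c, d) - Gam[f][e][b]*R(a, f, c, d)
                - Gam[f][e][c]*R(a, b, f, d) - Gam[f][e][d]*R(a, b, c, f)
                for f in range(n))
    return sp.simplify(expr)

idx = range(n)
assert all(covR(e, a, b, c, d) == 0 for e in idx for a in idx
           for b in idx for c in idx for d in idx)
\end{verbatim}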
A natural question is then: in what circumstances do higher-derivative deformations of the NLSM preserve the Adler zero \cite{Rodina:2021isd}? Our soft theorem provides a clear answer. The Adler zero is maintained by any higher-derivative coupling that is a covariantly constant tensor in field space. For instance, the general four-derivative interaction in the Lagrangian in \Eq{LhigherD}
only preserves the Adler zero if $\nabla^{i_5} \lambda^{i_1 i_2 i_3 i_4} = 0$ at all points in field space.
Amusingly, if the field-space manifold is a symmetric space then products of Riemann tensors satisfy additional sets of identities such as
\eq{
0 = [\nabla^{i_1},\nabla^{i_2}] R^{i_3 i_4 i_5 i_6} = R^{i_1 i_2 i_3}{}_{j}R^{j i_4 i_5 i_6} + R^{i_1 i_2 i_4}{}_{j}R^{i_3 j i_5 i_6} + R^{i_1 i_2 i_5}{}_{j}R^{i_3 i_4 j i_6} + R^{i_1 i_2 i_6}{}_{j}R^{i_3 i_4 i_5 j}\,,
}{eq:Jacobi}
which follow from the fact that the Riemann curvature is covariantly constant.
This reduces the number of independent tensor structures that can be constructed. For $n$-particle scattering, this number is $(n-2)!$, which exactly coincides with the number of independent color structures in gauge theory \cite{Kleiss:1988ne, DelDuca:1999rs}. This concordance was required in order for color-kinematics duality and the double copy \cite{Kawai:1985xq, Bern:2008qj,Bern:2010ue,Bern:2019prr} to be applicable to amplitudes in the NLSM on a symmetric coset.
Yet another way to understand the Adler zero is in terms of Killing vectors. By definition, a symmetric space is a coset whose generators exhibit a $\mathbb{Z}_2$ parity under which the broken generators are odd, so
\eq{
{\cal T}_a \to {\cal T}_a \qquad \textrm{and} \qquad \quad {\cal X}_i \to - {\cal X}_i\,.
}{}
Compatibility with this parity requires that the commutation relations take the form
\begin{align}
[ {\cal T}_a , {\cal X}_i ] &= f_{ai}{}^j {\cal X}_j \label{eq:commutatorGHsym_TX}\,, \\
[ {\cal X}_i , {\cal X}_j ] &= f_{ij}{}^a {\cal T}_a \, , \label{eq:commutatorGHsym_XX}
\end{align}
which is to say that $f_{ij}{}^k=0$ in \Eq{eq:commutatorGH_XX}.
As a consequence, the covariant derivatives of broken generators are zero \cite{Eschenburg},
\eq{
\nabla_i {\cal K}_j{}^I (v) = \nabla_i {\cal X}_j{}^I (v) = 0\,,
}{eq:dkvzero}
which can be shown using \Eq{eq:dkfijk}. Using \Eq{eq:softK} or \Eq{eq:softGH}, we see that right-hand side of the soft theorem vanishes for symmetric spaces, thus yielding the Adler zero.
Our reformulation of the Adler zero in terms of Killing vectors reveals an unnecessary assumption in the usual current algebra proof of the Adler zero for two-derivative theories without a potential \cite{Adler:1964um}. In particular, it is {\it not necessary} to require the absence of three-point interactions and linear terms in the symmetry transformation. So their presence cannot ever preclude a vanishing soft limit. Indeed, by solving the Killing equation \eqref{eq:killingeq} as a power series about the VEV and using \Eq{eq:dkvzero},
\eq{
{\cal K}_I(\Phi) &= {\cal K}_I(v) + \Gamma^J{}_{IK}(v) {\cal K}_J(v) \phi^K + \cdots\, ,
}{}
and also noting that
\eq{
g_{IJ}(\Phi) &= g_{IJ}(v) + 2\,\Gamma_{(IJ)K}(v) \, \phi^K + \cdots \, ,
}{}
we see that the linear term of the symmetry transformation \Eq{eq:globalsymm} and the three-point two-derivative interaction are both related to the Christoffel symbol. In fact, the relation is such that their effects precisely cancel in an amplitude, which can be easily confirmed by performing a field redefinition to Riemann normal coordinates, where the Christoffel symbol is zero. This simultaneously eliminates both the cubic interaction and the linear term in the symmetry transformation.
In hindsight this had to be true given that the presence of three-point interactions is a field-basis-dependent statement, whereas the Adler zero is not.
Last but not least, it is natural to ask whether there are alternative ways that an Adler zero can arise on the right-hand side of \Eq{eq:softK}. For example, is it possible to have $ \nabla_{j} {\cal K}^{i} = 0$ at {\it all} points in field space? This is equivalent to asking whether the Killing vector associated with the NGB can be covariantly constant. While this is possible, it is simple to show that such a degree of freedom couples rather trivially. Since the Lie derivative ${\cal L}_{\cal K}$ annihilates all coupling tensors and their covariant derivatives, then so too does $n$ powers of the Lie derivative, which for covariantly constant ${\cal K}$ is equal to ${\cal L}_{\cal K}^n= {\cal K}^{i_1}\cdots {\cal K}^{i_n} \nabla_{(i_1} \cdots \nabla_{i_n)}$. The fact that every coupling is annihilated by ${\cal L}_{\cal K}^n$ implies that there is a field basis in which the coupling tensors have no dependence on the corresponding NGB. So for example in the Lagrangian in \Eq{L_general}, the couplings $g_{IJ}$, $V$, and $\lambda_{IJKL}$ would have no dependence on the NGB, although the NGB could still appear derivatively coupled through $\partial_\mu \Phi^I$.
\subsection{Dilatons}
The soft dilaton theorem \cite{Callan:1970yg,Boels:2015pta,Huang:2015sla,DiVecchia:2015jaq} is actually
a corollary of our geometric soft theorem. To understand why, let us construct the effective Lagrangian for a dilaton coupled to an arbitrary theory of scalar ``matter'' fields. By definition, the dilaton is a compensator for scale transformations, so one can construct its interactions via the so-called Stueckelberg trick. In particular, starting from the scalar matter field Lagrangian, we apply a scale transformation and then promote the scale parameter to a dynamical dilaton.
To start, consider a theory of scalar matter fields described by \Eq{L_general}. Previously, we expanded $\Phi^I = v^I + \phi^I$ about an arbitrary VEV. In the case of the dilaton this is not permitted, however. The reason for this is that the dilaton is by definition the compensator for {\it all} scales, including VEVs. Said another way, if there are any fields in the theory that acquire VEVs other than the dilaton, then these states will necessarily mix with the dilaton. To simplify our analysis, we eliminate this mixing from the start by assuming $v^I = 0$ for the scalar matter fields.
In this basis the scalar matter field Lagrangian is
\eq{
L = \tfrac{1}{2} g_{IJ}(\phi) \partial_\mu \phi^I \partial^\mu \phi^J -V(\phi) + \cdots \, ,
}{eq:L_matt}
where the ellipses denote higher-derivative interactions. Next, consider a scale transformation,
\eq{
x^\mu \quad \to \quad \xi^{{1}/{\Delta}} x^\mu \qquad \textrm{and} \qquad \phi^{ I} \quad \to \quad \xi^{-1} \phi^{ I} \, ,
}{eq:scale_transform}
where by convention we have chosen $\xi$ to have the scaling dimension $\Delta = \frac{D-2}{2}$ of a scalar in $D$ dimensions. Applying \Eq{eq:scale_transform}
to \Eq{eq:L_matt} and promoting the scale parameter $\xi$ to a dynamical dilaton,
we obtain the dilaton effective Lagrangian,
\eq{
L &= \tfrac{1}{2} \partial_\mu \xi \partial^\mu \xi + \tfrac{1}{2} g_{IJ}(\xi^{-1} \phi) \partial_\mu \phi^I \partial^\mu \phi^J - \xi^{D/\Delta}V(\xi^{-1}\phi) + \cdots \, .
}{}
Note that the dilaton kinetic term is fixed by its scaling dimension. Naively, the appearance of inverse powers of the dilaton may seem peculiar but this is not an issue because the VEV of the dilaton, $\langle \xi\rangle$, is assumed to be non-zero.
Let us now study the dilaton effective Lagrangian expanded about the vanishing VEVs of the scalar matter fields. First, we consider the two-derivative terms,
\eq{
\tfrac{1}{2} g_{IJ}(\xi^{-1} \phi)\, \partial_\mu \phi^I \partial^\mu \phi^J = \tfrac{1}{2} (g_{IJ}(0)+\partial_K g_{IJ}(0)\,\xi^{-1} \phi^K+ \cdots)\, \partial_\mu \phi^I \partial^\mu \phi^J \, .
}{}
Crucially, the lowest order coupling of the dilaton is to {\it three} scalar matter fields. Hence, the theory includes a dilaton-matter-matter-matter coupling but no dilaton-matter-matter coupling. The absence of the latter then implies that any Christoffel symbol involving the dilaton---which encodes the corresponding cubic vertices---is zero. This implies the geometric statement,
\eq{
\nabla_{\langle \xi \rangle} = \partial_{\langle \xi \rangle} \, ,
}{}
so covariant and partial derivatives with respect to the VEV of the dilaton are one and the same. More invariantly, this means that the dilaton parameterizes a flat direction in field space that non-linearly realizes scale transformations.
Second, let us consider the expansion of the potential terms about the VEVs, yielding
\eq{
\xi^{D/\Delta}V(\xi^{-1}\phi) = \cdots + \tfrac{1}{2} \xi^{2/\Delta} \partial_I \partial_J V(0) \phi^I \phi^J +\cdots \, .
}{}
Here we see that the mass term for the scalar matter field is simply dressed by a factor of $\xi^{2/\Delta}$. This is of course expected, since the dilaton modulates the mass spectrum uniformly. It will be useful later to realize then that
\eq{
\nabla_{\langle \xi \rangle } V_{IJ} = \tfrac{2}{\Delta \langle \xi\rangle } V_{IJ} \, ,
}{}
so the covariant derivative of the mass matrix with respect to the dilaton VEV acts as a simple multiplicative factor.
At last, we are now equipped to apply the geometric soft theorem in \Eq{eq:soft_thm} to the case of the dilaton. The soft limit of an amplitude with a single dilaton and $n$ scalar matter fields is
\eq{
\lim_{q\rightarrow 0} \Amp{n+1}^{i_1\cdots i_n \langle \xi \rangle}
&= \nabla^{\langle \xi \rangle} \Amp{n}^{i_1\cdots i_n}
+ \sum_{a=1}^n
\frac{ \nabla^{\langle \xi \rangle} V_{j_a}^{i_a }}{(p_a +q)^2 - m_{j_a}^2} \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu} \right)
\Amp{n}^{i_1 \cdots j_a \cdots i_n} \\
& = \partial_{\langle \xi \rangle} \Amp{n}^{i_1\cdots i_n}
+ \tfrac{1}{\Delta \langle \xi \rangle} \sum_{a=1}^n
\frac{m_{i_a}^2}{p_a\cdot q} \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu} \right)
\Amp{n}^{i_1 \cdots i_a \cdots i_n} \, .
}{eq:soft_dilaton_thm}
The above expression is secretly identical to the dilaton soft theorem described in \cite{Callan:1970yg, Boels:2015pta, Huang:2015sla, DiVecchia:2015jaq}. To understand why, recall that dimensional analysis implies that
\eq{
\left(\Delta \langle\xi\rangle \partial_{\langle\xi\rangle} + \sum_{a=1}^n p_a^\mu \frac{\partial}{\partial p_a^\mu} \right) \Amp{n}^{i_1 \cdots i_n} = (D- n\Delta)\Amp{n}^{i_1 \cdots i_n}\, .
}{eq:dim_analysis}
The left-hand side extracts the total mass dimension of the amplitude by counting the overall powers of momenta and dimensionful coupling constants. By definition, the latter enter everywhere with factors of the dilaton VEV. Plugging \Eq{eq:dim_analysis} into \Eq{eq:soft_dilaton_thm} to eliminate $\partial_{\langle \xi \rangle}$, we obtain the soft dilaton theorem in its more standard form,
\eq{
\lim_{q\rightarrow 0} \Amp{n+1}^{i_1\cdots i_n \langle \xi \rangle}
& = \frac{1}{f} \left[D- n\Delta - \sum_{a=1}^n \left( p_a^\mu \frac{\partial}{\partial p_a^\mu} +
\frac{m_{i_a}^2}{p_a\cdot q} \left(1+ q^\mu \frac{\partial}{\partial p_a^\mu} \right) \right) \right]
\Amp{n}^{i_1 \cdots i_a \cdots i_n} \, ,
}{}
where we have defined the decay constant of the dilaton, $f = \Delta \langle \xi \rangle$ \cite{DiVecchia:2017uqn}.
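The dimensional-analysis identity in \Eq{eq:dim_analysis} can be checked on a toy example (our own illustration with hypothetical couplings): take $D=4$, $\Delta = 1$, $n = 4$, and a four-point amplitude built from a cubic coupling $c_3 \langle\xi\rangle$, a mass $m^2 = c_m \langle\xi\rangle^2$ and a dimensionless quartic $c_4$, so that every dimensionful constant carries its compensating power of the dilaton VEV.
\begin{verbatim}
import sympy as sp

s, xi = sp.symbols('s xi', positive=True)
c3, c4, cm = sp.symbols('c3 c4 cm')

D, Delta, n = 4, 1, 4
A4 = -c4 - (c3*xi)**2/(s - cm*xi**2)

# sum_a p_a . d/dp_a acts as 2 s d/ds on a function of a single
# Mandelstam invariant, since s is quadratic in the momenta.
lhs = Delta*xi*sp.diff(A4, xi) + 2*s*sp.diff(A4, s)
rhs = (D - n*Delta)*A4
assert sp.simplify(lhs - rhs) == 0
\end{verbatim}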
\section{Multiple-Soft Theorems}
\label{sec:multisoft}
In this section we consider multiple-soft theorems that govern the behavior of amplitudes as a set of particles are taken soft, either consecutively or simultaneously. In the former, the soft particles exhibit a hierarchy of softness that dictates their ordering. In the latter, all soft particles are on the same footing. Throughout this analysis we focus on the leading non-trivial order, dropping contributions ${\cal O}(q)$ and higher.
Furthermore, for simplicity we consider scalar field theories in which the potential is vanishing.
\subsection{Consecutive Double-Soft Theorem}
The consecutive double-soft theorem is trivially obtained by applying the geometric soft theorem in \Eq{eq:soft_thm} on a pair of particles in sequence. A quantity of particular interest is the {\it commutator} of consecutive double-soft limits, which a trivial calculation shows to be
\eq{
\left[\lim_{q_a\to 0}, \lim_{q_b\to 0}\right]\Amp{n+2}^{i_1 \cdots i_n i_a i_b}
&= [\nabla^{i_a},\nabla^{i_b}]\Amp{n}^{i_1 \cdots i_n} = \sum _{c\neq a,b}R^{i_{a}i_{b}i_{c}}{}_{j_c} \Amp{n}^{i_1 \cdots j_c \cdots i_n} \, .
}{eq:consDoubleSoft}
The above expression is beautifully intuitive, since the commutator of soft limits measures the change of the amplitude when transported around an infinitesimal square in field space---thus probing the local curvature. Note also that \Eq{eq:consDoubleSoft} holds generally, {\it i.e.}~for arbitrary two- and higher-derivative interactions.
For the special case of the NLSM on a symmetric manifold, the right-hand side of \Eq{eq:consDoubleSoft} sums to zero by the Jacobi identities, which are automatically satisfied by the Riemann curvature on account of \Eq{eq:Jacobi}. This is necessary for consistency with the Adler zero in the single-soft limit, which directly implies that the consecutive double-soft limit is also vanishing.
\subsection{Simultaneous Double-Soft Theorem}
Next, let us consider the simultaneous double-soft theorem, whereby a pair of particles is taken soft at the same rate. Such a setup has previously been studied in the NLSM \cite{Weinberg:1966kf,Arkani-Hamed:2008owk,Cachazo:2015ksa,Du:2015esa,Low:2015ogb}, both at leading and subleading order. Here we prove that the leading double-soft limit for a general scalar theory is
\eq{
\lim_{q_a, q_b\to 0}\Amp{n+2}^{i_1 \cdots i_n i_a i_b}
&= \tfrac12 \sum _{c\neq a,b} \frac{s_{ac}-s_{bc}}{s_{ac}+s_{bc}} R^{i_{a}i_{b}i_{c}}{}_{j_c} \Amp{n}^{i_1 \cdots j_c \cdots i_n} + \nabla^{(i_a}\nabla^{i_b)} \Amp{n}^{i_1 \cdots i_n} \,.
}{eq:doubleSimSoft}
The first term already appears in the NLSM but the second term appears for a general theory without a potential. One can verify directly that this double-soft theorem holds for the amplitudes in Secs.~\ref{subsec:twoder} and \ref{subsec:lambda}.
\begin{figure}
\centering
\begin{tikzpicture}
\coordinate (c) at (-3, 0);
\coordinate (b) at (-2.5, 1);
\coordinate (a) at (-1.5, 1);
\node[above] at (a) {$q_a$};
\node[above] at (b) {$q_b$};
\node[above] at (c) {$p_c$};
\node[above] at (-1, 0 ) {$\tilde p_c$};
\coordinate (f) at (1, 1);
\coordinate (l) at (1, -1);
\coordinate (m4) at (-2,0);
\coordinate (mn) at (0, 0);
\node at (mn) {$ \Amp{n}$};
\node at (m4) {$\tilde A_4$};
\draw [hard] (c) -- (m4);
\draw [hard] (mn) -- (m4);
\draw [soft] (b) -- (m4);
\draw [soft] (a) -- (m4);
\draw [hard] (f) -- (mn);
\draw [hard] (l) -- (mn);
\coordinate (d1) at (1.00, 0);
\coordinate (d2) at (0.85, 0.5);
\coordinate (d3) at (0.85, -0.5);
\draw[fill=lightgray, opacity=1] (mn) circle (0.65);
\draw[fill=lightgray, opacity=1] (m4) circle (0.45);
\draw[fill=black, opacity=1] (d1) circle (0.02);
\draw[fill=black, opacity=1] (d2) circle (0.02);
\draw[fill=black, opacity=1] (d3) circle (0.02);
\end{tikzpicture}
\caption{Diagram containing potentially singular terms in the simultaneous double-soft theorem.}
\label{fig:sing_doublesoft}
\end{figure}
Our proof of \Eq{eq:doubleSimSoft} is a generalization of the analysis in \cite{Low:2015ogb} for the NLSM. The amplitude is organized into potentially singular terms, which have diagrammatic structure as in Fig.~\ref{fig:sing_doublesoft}, and regular terms, which we denote by $\cal R$, as follows
\begin{align}
\Amp{n+2}^{i_{1} \cdots i_{n} i_a i_b}(p_1, \cdots, p_n, q_a, q_b) =& \phantom{{}+{}} \sum_{c\neq a,b} \Amp{4}^{i_a i_b i_c }{}_{j_c}(q_a, q_b,p_c, -\tilde p_c) \frac{1}{\tilde{p}_c^2} \Amp{n}^{i_{1} \cdots j_c \cdots i_{n} }( p_1, \cdots , \tilde p_c, \cdots p_n) \nonumber \\
& + {\cal R}^{i_{1} \cdots i_{n} i_a i_b}(p_1, \cdots, p_n, q_a, q_b)\,, \label{eq:dssplit}
\end{align}
where $\Amp{n}$ is the amputated $n$-particle current with the momentum $\tilde p_c = p_c -q_a -q_b$ taken off-shell. The function ${\cal R}$ is a remainder function that is local in the soft momenta.
By explicit calculation we find that the four-particle current is
\eq{
\Amp{4}^{i_a i_b i_c }{}_{j_c} =&
\phantom{{}+{}}
\tfrac12 R^{i_ai_bi_c }{}_{j_c} ( s_{ac} - s_{bc})
+
\tfrac16 (R^{ i_c i_a i_b }{}_{j_c} + R^{i_c i_bi_a}{}_{j_c}) ( s_{ac} + s_{bc} - 2s_{ab}) \\
& + \tfrac{1}{2} s_{ab} s_{c\tilde c} \lambda^{i_a i_b i_c}{}_{j_c} + \tfrac{1}{2} s_{ac} s_{b\tilde c} \lambda^{i_a i_c i_b}{}_{j_c} + \tfrac{1}{2} s_{bc} s_{a\tilde c} \lambda^{i_b i_c i_a}{}_{j_c}
\,,
}{eq:A4}
as shown in Fig.~\ref{fig:sing_doublesoft}. Here we have explicitly included the four-derivative coupling, $\lambda$, but all of our results will equally apply when including other higher-derivative interactions.
Dropping contributions subleading in the soft expansion, we can effectively set $s_{ab}=s_{a\tilde c}=s_{b\tilde c}=s_{c\tilde c}=0$ and $\tilde p_c^2= s_{ac}+s_{bc}$, while setting the momenta $\tilde p_c$ in the subamplitudes equal to $p_c$. Combining \Eq{eq:A4} with \Eq{eq:dssplit} we obtain
\begin{align}
\lim_{q_a,q_b\to 0} \Amp{n+2}^{i_{1} \cdots i_{n} i_a i_b} =& \phantom{{}+{}} \tfrac{1}{2} \sum_{c\neq a,b} \frac{s_{ac}-s_{bc}}{s_{ac}+s_{bc}} R^{i_a i_b i_c}{}_{j_c} \Amp{n}^{i_{1} \cdots j_c \cdots i_{n} } \nonumber \\
&+\tfrac{1}{6} \sum_{c\neq a,b} (R^{ i_c i_a i_b}{}_{j_c} + R^{i_c i_bi_a}{}_{j_c}) \Amp{n}^{i_{1} \cdots j_c \cdots i_{n} }
+ \lim_{q_a,q_b\to 0} {\cal R}^{i_{1} \cdots i_{n} i_a i_b} \, . \label{eq:dsstep}
\end{align}
Note that the four-derivative coupling, $\lambda$, does not affect the result at this order. The same is true of other higher-derivative couplings.
To simplify the above expression, we compute the {\it anti-commutator} of consecutive soft limits of \Eq{eq:dssplit}, yielding
\eq{
\tfrac12 \left\{ \lim_{q_a\to 0}, \lim_{q_b\to 0} \right\} \Amp{n+2}^{i_{1} \cdots i_{n} i_a i_b} &= \tfrac16 \sum_{c\neq a,b} (R^{ i_c i_a i_b}{}_{j_c} + R^{i_c i_bi_a}{}_{j_c}) \Amp{n}^{i_{1} \cdots j_c \cdots i_{n} }+ \lim_{q_a,q_b\to 0} {\cal R}^{i_{1} \cdots i_{n} i_a i_b} \,.
}{eq:anticomm1}
We now compare this to the anti-commutator of consecutive soft limits computed directly from our geometric soft theorem,
\eq{
\tfrac12 \left\{ \lim_{q_a\to 0}, \lim_{q_b\to 0} \right\} \Amp{n+2}^{i_{1} \cdots i_{n} i_a i_b} &= \nabla^{(i_a}\nabla^{i_b)}\Amp{n}^{i_1 \cdots i_n}\,.
}{eq:anticomm2}
Combining \Eq{eq:dsstep} with \Eq{eq:anticomm1} and \Eq{eq:anticomm2}, we obtain the claimed simultaneous double-soft theorem in \Eq{eq:doubleSimSoft}.
\subsection{Simultaneous Triple-Soft Theorem}
It is straightforward to generalize our proof of the simultaneous double-soft theorem to a triplet of particles.
Applying parallel reasoning, we can prove a new simultaneous triple-soft theorem,
\eq{
\lim_{q_a, q_b,q_c\to 0} & \Amp{n+3}^{i_1 \cdots i_n i_a i_b i_c}
\\
=& \sum _{d\neq a,b,c}\left( \frac{1}{2} \frac{s_{ad} - s_{bd}}{s_{ad} + s_{bd}} R^{i_a i_b i_d}{}_{j_d} \nabla^{i_c} + \frac{1}{3}\frac{s_{ad} - s_{bd}}{s_{ad} + s_{bd}+s_{cd}} \nabla^{i_c}R^{i_a i_b i_d}{}_{j_d}\right)
\Amp{n}^{i_{1} \cdots j_d \cdots i_{n} }
\\
&+
\frac{1}{3} \frac{s_{ac} - s_{bc}}{s_{ab} + s_{ac}+ s_{bc}} R^{i_a i_b i_c}{}_{j_d} \nabla^{j_d}
\Amp{n}^{i_{1} \cdots i_{n} }
+ (a \leftrightarrow c) + (b \leftrightarrow c)
+
\nabla^{(i_a}\nabla^{i_b}\nabla^{i_c)} \Amp{n}^{i_{1} \cdots i_{n} }
\, ,
}{}
which is applicable to any theory without a potential. Note that this formula has no analog in the NLSM, for which all amplitudes with odd numbers of particles are trivially zero. We have checked explicitly that this triple-soft theorem holds for seven-particle amplitudes in theories without a potential.
\section{On-Shell Recursion Relations}
\label{sec:recursion}
It has long been appreciated that the Adler zero condition can be leveraged to bootstrap the scattering amplitudes of the NLSM \cite{Susskind:1970gf,Osborn:1969ku}. More recently, similar logic has been applied to a broader class of scalar effective field theories which exhibit enhanced symmetries \cite{Cheung:2014dqa, Cheung:2016drk}, including Dirac-Born-Infeld (DBI) theory and the special Galileon (SG) theory \cite{Hinterbichler:2015pqa}. Notably, this soft bootstrap can be systematized using on-shell recursion relations \cite{Cheung:2015ota} that directly exploit the soft behavior of amplitudes. See \cite{Kampf:2013vha, Luo:2015tat,Elvang:2017mdq,Elvang:2018dco,Cheung:2018oki,Low:2019ynd,Kampf:2021bet,Kampf:2021tbk} for other recent work on the soft bootstrap.
These earlier explorations all suggest an intimate correlation between enhanced symmetry, vanishing soft limits, and on-shell constructibility. In the present work, we have shown that completely generic theories exhibit a geometric soft theorem even in the absence of symmetry. Interestingly, in many cases the soft limit does not even vanish. As we will see, we can still exploit the geometric soft theorem to build an on-shell recursion relation for any arbitrary two-derivative scalar field theory. A priori, it is rather unintuitive that such generic theories should be on-shell constructible. However, one should bear in mind that the extra amplitude data that is implicitly input into the recursion is the behavior of amplitudes in a small neighborhood about the VEV.
Before we begin, let us review the original Britto-Cachazo-Feng-Witten (BCFW) recursion relations \cite{Britto:2004ap,Britto:2005fq}. We deform a pair of external momenta,
\eq{
p_{1} \rightarrow p_{1} + z q \qquad {\rm and} \qquad p_{2} \rightarrow p_{2} - z q \, ,
}{}
where $z$ is a complex number and $q$ is a reference momentum satisfying $q \cdot p_1 = q\cdot p_2= q^2 =0$. We then apply Cauchy's theorem to write the on-shell $n$-particle scattering amplitude as
\eq{
\Amp{n}(0) = \frac{1}{2\pi i} \oint \frac{dz}{z} \Amp{n}(z) = -\sum_{\alpha} {\rm Res}_{z=z_{\alpha}} \left(\frac{\Amp{n}(z)}{z} \right) \, .
}{}
The right-hand side is composed of a sum of residues corresponding to on-shell factorization diagrams and a contribution at $z=\infty$. If the residue at $z=\infty$ is zero then $\Amp{n}(0)$ can be calculated recursively from on-shell lower-particle amplitudes.
Theories with derivative interactions typically have poor high-energy behavior, in which case the residue at $z=\infty$ will generically be non-zero. Nevertheless, the residue at infinity can actually be {\it eliminated} if the soft limit is known \cite{Cheung:2015ota}. To do so, we define a soft momentum shift,
\eq{
p_{a} \rightarrow p_{a} \left( 1-z c_{a} \right) ,
}{soft_shift}
with the condition that the deformed momenta should maintain total momentum conservation.\footnote{For $n>D+1$ it is possible to find distinct $c_{i}$ for generic $p_{i}$.} Now we apply Cauchy's theorem to a slightly modified integrand,
\eq{
\Amp{n}(0) = \frac{1}{2\pi i} \oint \frac{dz}{z} \frac{\Amp{n}(z)}{F_{n,m}(z)} = - \sum_{\alpha} {\rm Res}_{z=z_{\alpha}^\pm} \left( \frac{\Amp{n}(z)}{z F_{n,m}(z) } \right) ,
}{eq:softrec}
where we have defined the form factor
\eq{
F_{n,m} (z) = \prod^{n}_{a=1} (1 - c_{a} z)^{m} \, .
}{}
In the case of a vanishing soft limit, the poles generated by $F_{n,m}(z)$ in the denominator are cancelled by the zeros of the amplitude. Hence, the only residues which contribute arise from factorization channels, each with a residue equal to the product of lower-particle amplitudes as dictated by unitarity,
\eq{
\Amp{n}(z) = \frac{A_{L}(z) A_R(z)}{P_\alpha^2(z)} + \cdots \, ,
}{}
where the propagator denominator is
\eq{
P_\alpha^2(z) = P_\alpha^2 + 2 z P_\alpha\cdot Q_\alpha + z^2 Q_\alpha^2 = P_\alpha^2\, (1-z/z^+_\alpha)(1-z/z^-_\alpha)\,,
}{}
where $P_\alpha = \sum_a p_a$ and $Q_\alpha = - \sum_a c_a p_a $. Here $z_\alpha^\pm$ are the two roots of the quadratic polynomial $P_\alpha^2(z)$.
Thus the resulting recursion formula is
\eq{
\Amp{n}(0) = \sum_{\alpha} \frac{A_{L}(z_\alpha^+) A_R(z_\alpha^+)}{(1-z_\alpha^+/z_\alpha^-) F_{n,m}(z_\alpha^+) } + (z_\alpha^+ \leftrightarrow z_\alpha^-) \, ,
}{}
and can be used to construct all scattering amplitudes in the NLSM, DBI, and SG.
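The factorized form of the deformed propagator quoted above is elementary but easy to get wrong by a sign; a short symbolic check (our own, with arbitrary rational values standing in for $P_\alpha^2$, $P_\alpha\cdot Q_\alpha$ and $Q_\alpha^2$) is:
\begin{verbatim}
import sympy as sp

z = sp.Symbol('z')
# Illustrative rational stand-ins for P^2, P.Q and Q^2
P2, PQ, Q2 = sp.Rational(3), sp.Rational(17, 10), sp.Rational(-11, 5)

prop = P2 + 2*PQ*z + Q2*z**2
zp, zm = sp.solve(prop, z)              # the roots z_alpha^{+-}

fact = P2*(1 - z/zp)*(1 - z/zm)         # claimed factorized form
assert sp.simplify(fact - prop) == 0
\end{verbatim}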
On the other hand, when the soft limit of the amplitude is known but non-zero, then one can construct a generalized version of soft on-shell recursion relations \cite{Luo:2015tat}. We instead consider
\begin{align}
\Amp{n}(0) &= \frac{1}{2\pi i} \oint \frac{dz}{z} \frac{\Amp{n}(z)}{F_{n,1}(z)} \nonumber \\
&= - \sum_{\alpha} {\rm Res}_{z=z_{\alpha}^\pm} \left( \frac{\Amp{n}(z)}{z F_{n,1}(z) } \right) - \sum_{a} {\rm Res}_{z=1/c_a} \left( \frac{\Amp{n}(z)}{z F_{n,1}(z) } \right) ,
\label{eq:softrecsub}
\end{align}
where in addition to the factorization poles we need to include the additional residues at $z=1/c_a$ from the poles in $F_{n,1}(z)$.
At last, we are equipped to use our geometric soft theorem to derive on-shell recursion relations. For simplicity, let us consider a general two-derivative theory of scalars without a potential, as defined in \Eq{L_GNLSM}. Crucially, in such a theory the soft limit need not vanish. In this case, each residue in the last term of \Eq{eq:softrecsub} becomes
\eq{
{\rm Res}_{z=1/c_a} \left( \frac{\Amp{n}(z)}{z F_{n,1}(z) } \right)= \frac{\nabla_{i_a} \Amp{n-1}(1/c_a)}{\Pi_{b\neq a} ( 1 - c_{b}/c_{a})}\,.
}{}
This yields the on-shell recursion relation
\eq{
\Amp{n}(0) = \sum_{\alpha} \frac{A_{L}(z_\alpha^+) A_R(z_\alpha^+)}{(1-z_\alpha^+/z_\alpha^-) F_{n,1}(z_\alpha^+) } + (z_\alpha^+ \leftrightarrow z_\alpha^-) + \sum_a \frac{\nabla_{i_a} \Amp{n-1}(1/c_a)}{\Pi_{b\neq a} ( 1 - c_{b}/c_{a})} \, ,
}{eq:recursion_two_deriv}
which is valid for a general two-derivative scalar theory.
Now let us check \Eq{eq:recursion_two_deriv} for the very simplest case of building the five-particle amplitude in terms of the four-particle amplitude via on-shell recursion.
To more easily construct the momentum shift in \Eq{soft_shift} we specialize to three-dimensional kinematics. Note that this choice actually introduces no loss of generality, since the amplitudes of two-derivative scalar theories are degree-one polynomials in Mandelstam variables, so they cannot include a three-dimensional Gram determinant. An explicit solution for the $c_{a}$ is \cite{Luo:2015tat}
\begin{align}
c_{1} &= s_{23} (s_{14} s_{23} - s_{13} s_{24} - s_{12} s_{34}) \,, \nonumber \\
c_{2} &= s_{13} (- s_{14} s_{23} + s_{13} s_{24} - s_{12} s_{34})\,, \nonumber \\
c_{3} &= s_{12} (- s_{14} s_{23} - s_{13} s_{24} + s_{12} s_{34})\,, \\
c_{4} &= 2 s_{12} s_{13} s_{23}\,, \nonumber \\
c_{5} &= 0 \, . \nonumber
\end{align}
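As a cross-check of this kinematic setup (a numerical sketch of our own; the specific momenta are arbitrary), one can generate on-shell massless five-point kinematics in three dimensions, obtain a valid set of $c_a$ with $c_5 = 0$ directly as the kernel of the $3\times4$ matrix of momenta, and verify that the shifted momenta remain conserved and on-shell:
\begin{verbatim}
import numpy as np

def mink_sq(p):   # Minkowski square in D = 3, signature (+,-,-)
    return p[0]**2 - p[1]**2 - p[2]**2

# Massless five-point kinematics in D = 3, all-outgoing convention
E, E3, th3, th4 = 1.0, 0.7, 0.8, 2.3
c34 = np.cos(th3 - th4)
E4 = 2*E*(E3 - E)/(E3*(1 - c34) - 2*E)   # chosen so that p5^2 = 0

p1 = -np.array([E,  E, 0.0])             # incoming legs carry minus signs
p2 = -np.array([E, -E, 0.0])
p3 = E3*np.array([1.0, np.cos(th3), np.sin(th3)])
p4 = E4*np.array([1.0, np.cos(th4), np.sin(th4)])
p5 = -(p1 + p2 + p3 + p4)
mom = [p1, p2, p3, p4, p5]
assert all(abs(mink_sq(p)) < 1e-12 for p in mom)

# Momentum conservation of the shift p_a -> p_a (1 - z c_a) requires
# sum_a c_a p_a = 0; with c_5 = 0 the c_a span the kernel of the
# 3 x 4 matrix whose columns are p_1, ..., p_4.
M = np.column_stack([p1, p2, p3, p4])
c = np.append(np.linalg.svd(M)[2][-1], 0.0)

z = 0.37                                  # arbitrary deformation parameter
shifted = [p*(1 - z*ca) for p, ca in zip(mom, c)]
assert np.allclose(sum(shifted), 0.0)                  # still conserved
assert all(abs(mink_sq(p)) < 1e-12 for p in shifted)   # still massless
\end{verbatim}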
There are no contributions to \Eq{eq:recursion_two_deriv} from factorization channels because the three-particle amplitude vanishes. Thus, the five-particle amplitude arises purely from the residues at $z = 1/c_{a}$. Explicitly, the five-particle amplitude is
\eq{
\Amp{5}(0) &= \sum_{a=1}^{4} \frac{ \nabla_{i_a} \Amp{4,a}(1/c_{a})}{\Pi^{4}_{b=1,b\neq a} ( 1 - c_{b}/c_{a})} \, ,
}{eq:A5_recursion}
where $\Amp{4,a}(1/c_{a})$ is the four-particle amplitude not including particle $a$, and with momenta shifted by $z\rightarrow 1/c_{a}$. It is straightforward to check that \Eq{eq:A5_recursion} agrees precisely with the five-particle scattering amplitude we computed previously in \Eq{GNLSM_amps}. We have also checked numerically that \Eq{eq:recursion_two_deriv} produces the correct six-particle amplitude.
Via repeated application of these on-shell recursion relations, any $n$-particle amplitude can be expressed in terms of the four-particle amplitude in \Eq{GNLSM_amps}. Again, it may seem peculiar that this is even possible, since a general two-derivative scalar theory does not exhibit any enhanced symmetry. However, the key point here is that the four-particle amplitude depends on the Riemann curvature as a function of the VEV. This quantity, together with its covariant derivatives, encodes all of the properties of the underlying manifold. So implicitly, the seed of the on-shell recursion relation is not simply the four-particle amplitude, but the four-particle amplitude evaluated at {\it all points} on the manifold.
\section{Conclusions}
In this paper we have described a systematic framework for recasting the dynamics of scattering in terms of the geometry of field space. On-shell scattering amplitudes are physical quantities, so they must be invariant under changes of field basis. Hence, they can be expressed in terms of a set of corresponding geometric invariants. In particular, we have shown quite generally how a variety of geometric objects are actually avatars for familiar physical concepts, including LSZ reduction (the tetrad), Ward identities (the Lie derivative), single-soft limits (the covariant derivative), and multiple-soft limits (the curvature).
Regarding soft structure, we have shown how the geometry of field space mandates a universal soft theorem applicable to any theory of scalars, even including masses, potential interactions, and higher-derivative couplings. In general, the soft limit is non-vanishing. For theories with symmetry, we have shown how the soft theorem is exactly dictated by the associated Killing vectors in field space. When the symmetries are affine, the corresponding theories describe NGBs, and we show under what circumstances there exists an analog of the Adler zero. We also show how the dilaton soft theorem is another corollary of our geometric soft theorem. Last but not least, we have also derived new double- and triple-soft theorems and showed how to leverage our results to derive on-shell recursion relations that implement the soft bootstrap for a much broader class of theories.
The present work offers numerous avenues for future analysis. First and foremost is the generalization of our results to an even broader class of theories. In particular, it is obvious that the geometry of scalar field space will persist even with inclusion of spectator particles with spin, such as fermions and vectors. Hence, the soft limits of scalars will be universal in those theories as well. On the other hand, a more ambitious goal would be to apply our reasoning to the soft limit of particles with spin, as in the Weinberg soft theorems for gauge theory and gravity, and in non-relativistic or condensed matter systems.
Second, it would be interesting to understand the broader class of invariances under {\it derivative} field redefinitions. In particular, it is known that on-shell scattering amplitudes are unchanged by changes of field basis involving derivatives. What are the geometric invariants associated with these changes of basis? To our knowledge, there exists no unified framework to understand this structure.
Third, there is the question of further applications for scattering amplitudes. A natural question is whether there exist subleading generalizations of our geometric single- and multiple-soft theorems. Another question is whether one can construct on-shell recursion relations that leverage both kinematics as well as the geometry of field space. In such a construction, one would shift not only the external kinematics of an amplitude, but {\it also} the VEVs that dictate the couplings and masses that appear within it.
\newpage
\begin{center}
{\bf Acknowledgments}
\end{center}
\noindent
We are grateful to Lance Dixon, Maria Derda, Aneesh Manohar, and Ira Rothstein for useful discussions and comments on the paper.
We also thank Maria Derda, Elizabeth Jenkins, Aneesh Manohar, and Michael Trott for collaboration on related projects.
C.C., A.H.,~and J.P.-M. are supported by the DOE under grant no.~DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics.
\bibliographystyle{utphys-modified.bst}
\section{Selective review of post-census method}
Internationally and historically speaking, one can distinguish three main approaches when it comes to the provision of post-census population statistics: Demographic Balancing Equation, Central Population Register and Adjusted Population Dataset. Some selected examples of each approach are briefly reviewed below.
\subsection{Demographic Balancing Equation}
The Demographic Balancing Equation (DBE) allows one to update census population statistics, based on vital statistics, available data on internal and external migration, and various group-quarter (GQ) or other special populations (e.g. military personnel); a minimal sketch of the updating step is given after the list below.
\begin{itemize}[leftmargin=4mm]
\item In reality the approach is based on administrative data on births, deaths, internal migration and GQ or special populations. Nevertheless, for various reasons, longitudinal linkage of the relevant data to the base (census) population at the individual level is typically not undertaken in practice. The basis of production is a yearly constructed population hypercube rather than, say, a one-number census-like population dataset.
\item Survey data are often required for the external migration component. Indirect residual estimation methods are sometimes used for certain migration groups and special populations. In the UK context, it may be possible to make greater use of the continuous coverage survey under the DBE approach, as will be commented later.
\end{itemize}
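A minimal sketch of the DBE updating step (with hypothetical component figures, purely for illustration) is:
\begin{verbatim}
def dbe_update(base_pop, births, deaths, net_internal,
               net_international, gq_change):
    """Demographic Balancing Equation: roll the (census) base
    population forward one year using the component data."""
    return (base_pop + births - deaths
            + net_internal + net_international + gq_change)

# Hypothetical inputs for one area and one reference year
pop_next = dbe_update(base_pop=120_000, births=1_300, deaths=1_100,
                      net_internal=-400, net_international=650,
                      gq_change=50)
\end{verbatim}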
\subsubsection{Example of USA}
In the USA, the Population Estimates Program (PEP) uses the DBE approach, supplemented by the American Community Survey (ACS) for foreign-born immigration.
The PEP produces population estimates at three levels: national, state and county, by characteristics of sex, age, race and Hispanic origin; see e.g. CBS (2019). At the first stage, the national characteristics, state total, and county total estimates are created. At the second stage, estimates of states and counties by characteristics are obtained, by raking of the lower-level DBE-estimates to the controlled estimates at a higher level. Linear interpolation between two successive yearly estimates generates the seemingly continuous Population Clock, which can be viewed at \url{http://www.census.gov/popclock/}.
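To illustrate the second-stage raking step (a sketch of our own with made-up numbers, not the actual PEP production code), iterative proportional fitting adjusts a seed table of lower-level estimates until its margins match the controlled totals:
\begin{verbatim}
import numpy as np

def rake(seed, row_targets, col_targets, iters=100, tol=1e-10):
    """Iterative proportional fitting: scale a seed table so its row
    and column margins match the control totals."""
    T = seed.astype(float).copy()
    for _ in range(iters):
        T *= (row_targets / T.sum(axis=1))[:, None]
        T *= (col_targets / T.sum(axis=0))[None, :]
        if (np.abs(T.sum(axis=1) - row_targets).max() < tol and
                np.abs(T.sum(axis=0) - col_targets).max() < tol):
            break
    return T

# Hypothetical county-by-characteristic DBE estimates, raked to
# controlled county totals (rows) and characteristic totals (columns)
seed = np.array([[52., 31.], [18., 44.], [25., 30.]])
adjusted = rake(seed, row_targets=np.array([85., 60., 55.]),
                col_targets=np.array([100., 100.]))
\end{verbatim}
Raking preserves the interaction structure of the seed table while enforcing the higher-level controls, which is precisely why it is suited to combining lower-level DBE estimates with controlled totals.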
The DBE-estimates are essentially based on administrative data, except for foreign-born immigration, which uses the ACS. In particular, for state and county total estimates, one calculates county-to-county net domestic migration based on four sources: Internal Revenue Service tax return data for ages 0-64, Medicare enrollment data from Centers of Medicare and Medicaid Services for ages 65+, Social Security Administration's Numerical Identification File for all ages, and change in the GQ population.
The ACS uses an address frame, and aims primarily at producing attribute statistics similar to the census long-form format. Two ACS weights are computed for households and persons, respectively. The household unit weight is produced first, based on sampling design inclusion probabilities reweighted for nonresponse and then raked to county/group of county level totals; person weight is then produced by raking. See e.g. CBS (2014).
\subsubsection{Comments on UK}
The DBE approach is used to produce the mid-year population estimates (MYE) in the UK; see e.g. ONS (2018). The key dissemination levels are national and local authority. Internal migration data are derived from administrative sources. For international migration one makes use of the International Passenger Survey, while administrative sources are used to distribute immigrants to each local authority.
One anticipates the relevant administrative sources to be enhanced in the coming years. The most relevant sources include the Benefits and Income data, the Patient Register, Education data at all levels, the Council Tax data, and so on. In particular, updates of home addresses in the Drivers' License database may potentially prove to be an important addition regarding internal migration. Moreover, Migrant Work Scan data can provide information on all overseas nationals who have registered for and been allocated a National Insurance Number. Together with Exit Checks, these could be expected to greatly improve the external migration data in future.
Going forward, two important questions need to be considered for potential improvements of the estimation of DBE components in the UK context.
\begin{itemize}[leftmargin=4mm]
\item Can the ability to link individual-level administrative data across sources improve the quality of address or locality information, and extend the range of sources to be used? For instance, in the USA, one splits the non-GQ population into ages 0-64 and 65+, each with its designated source, which is simple and practical only if individual-level linkage is infeasible, but not otherwise.
\item Can greater use be made of the coverage survey in the DBE approach in the UK, compared to the ACS in the USA? The question is natural, especially provided linkage to the administrative data. But even without linkage, it may be possible to combine area-level administrative and survey migration data, e.g. by means of generalised structure preserving estimation for small area compositions.
\end{itemize}
\subsection{Central Population Register}
In a number of countries that produce register-based census-like population statistics, Central Population Register (CPR) is used for continuous updating of neighbourhood population statistics at very low aggregation levels. A wide range of statistics can be produced with much greater ease and lower cost, including timely population dynamics, detailed migration flows and household statistics that can inform policy makers, researchers, businesses and general public. See e.g. some statistics produced by Statistics Norway at \url{https://www.ssb.no/befolkning/faktaside/befolkningen#blokk-1}.
There are two key political and cultural premises of the CPR approach: infrastructure and population concept. The CPR approach is only feasible given the necessary legal framework, uptake of universal person identification in public services, and adequate time labelling of relevant demographic events and their recording in the CPR. It also requires the population to be counted at their \emph{de jure} instead of \emph{de facto} address. The latter poses a key challenge to the relevance of the statistics, in which respect statistical adjustment may still be necessary for certain topics. See e.g. Zhang (2011) and Zhang and Fosen (2018) for a discussion of register-based household statistics.
\subsection{Adjusted Population Dataset}
In some countries, the CPR does not have the desired accuracy to warrant the CPR approach directly, often due to a lack of updated migration data. Moreover, in countries where a CPR does not exist at all, it may still be possible to construct a statistical Population Dataset (PD), based on linkage and integration of relevant administrative registers, which can yield national population counts that are similar to the census population estimates, as is e.g. the case in the UK (ONS, 2015), New Zealand and Australia. In either case, statistical adjustment is then necessary. Although nearly all the examples below are based on adjustment of the CPR, we shall refer to this approach as \textit{Adjusted Population Dataset (APD)}, in anticipation of future developments that can enable the approach in those countries that do not have a CPR at all.
The main issues for adjustment are \emph{erroneous enumeration}, \emph{missing enumeration} and \emph{misplacement} in the PD. Erroneous and missing enumerations cause over- and under-counting at the national level, respectively; whereas misplacement causes inter-locality over- and under-counting simultaneously but does not cause over- or under-counting at the national level. Below we give some examples of the different cases.
\subsubsection{APD for misplacement in Israel}
The CPR in Israel has only negligible over- and under-counting errors at the national level. The Israeli Integrated Census 2008 was conducted chiefly to adjust for misplacement in the CPR. See e.g. Nirel and Glickman (2009). All the persons registered in a Statistical Area (SA) according to the CPR were given a weight, such that their weighted total equals the estimated population size in that SA. There are over 3000 SAs in Israel.
For post-census population statistics, a person's weight would remain the same, as long as the person stays in the same SA. A person would in principle take on the weight of the destination SA, if the person registers a move to another SA in the CPR. In this way, the pattern of misplacement adjustment in each SA is preserved, while the CPR count of persons registered in the SA naturally varies over time.
Notice that in reality the basic idea expounded above needs to be refined in several ways. For instance, the weight may vary across different sub-populations within an SA. Moreover, ad hoc adjustment may be required, if one detects an abnormal change in a given SA, often due to delays of previous or new housing developments.
\subsubsection{APD for erroneous enumeration in Latvia and Estonia}
Erroneous enumerations due to a lack of updating of emigration were evidenced in the last census in both Latvia and Estonia. Each country developed a new method for post-census population statistics; the two methods are similar in some respects while differing in others.
The initial census 2011 enumeration returned a population count of 2.075 million in Latvia, which was about 7\% lower than the CPR count. The Central Statistical Bureau of Latvia worked out an adjustment method based on statistical classification and migration mirror statistics (CSBL, 2019). By combining the census data with relevant administrative data including the CPR, approximately 100 000 persons were added to the census enumeration from the administrative data. Treating the imputed census enumeration as the labelled units, supervised learning by logistic regression yielded a predicted probability of erroneous enumeration for each eligible person in the CPR. The fitted model was used in the post-census years to adjust the CPR population count.
The imputation of census under-enumeration was similar in Estonia, which added about 30 000 persons from the relevant administrative sources to the initial census enumeration of about 1.3 million in total. A residency index was developed for the post-census years, which is updated on a yearly basis (Tiit and Maasing, 2016). First, an extended population $(U_+)$ is constructed, which has negligible under-coverage errors. Next, 27 administrative sources are used to construct a Sign-of-Life score $X(k,t-1)$ for person $k\in U_+$ in year $t-1$, including special care, parental leave, dental care, digital prescription, prison visit, change of vehicle, residence permit, and so on. The residency index for person $k$ in year $t$, denoted by $R(k,t)$, is calculated from $R(k,t-1)$ and $X(k,t-1)$ as
\[
R(k, t) = d\cdot R(k, t-1) + g\cdot X(k, t-1)
\]
where $d$ is the stability rate and $g$ the sign-of-life rate, the values of which are heuristically chosen to yield plausible updated population counts.
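To fix ideas, a minimal Python sketch of this updating rule is given below. The values of $d$ and $g$, and the clamping of the index to $[0,1]$, are illustrative assumptions, not the rates actually used by Statistics Estonia.
\begin{verbatim}
# A minimal sketch of the residency index update (illustrative only).
def update_residency_index(R_prev, X_prev, d=0.9, g=0.5):
    # d: stability rate, g: sign-of-life rate; both values here are
    # hypothetical, chosen only for illustration.
    R = d * R_prev + g * X_prev
    # Assumption: the index is kept within [0, 1].
    return min(max(R, 0.0), 1.0)

# A person with a high previous index and full signs of life:
print(update_residency_index(0.8, 1.0))  # 1.0 after clamping
\end{verbatim}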
\subsubsection{APD for erroneous and missing enumerations in Italy}
The population census in Italy is moving to a `permanent' census, which will produce annual population statistics instead of the previous decennial cycle, using information from administrative sources integrated with sample surveys. Moreover, the first-phase sample for population statistics will provide the frame for the main social survey samples, which are negatively coordinated at the second phase.
The first-phase sample has two components. Component A consists of a sample of Enumeration Areas or addresses selected from an Integrated Address File. Component L is selected from a list of households (in the CPR), to provide reliable information on the `census' variables that are not available from the administrative sources. The two components A and L will amount to a yearly sample size of about 400,000 households and 1,000,000 persons, respectively, drawn from 2850 out of 7950 municipalities.
Population estimates will be obtained by an extended Dual System Estimation method, accounting for both under-coverage and over-coverage errors of the CPR. It is planned that all the eligible persons in the CPR will be given a weight, such that the weighted totals are equal to the estimated population sizes in the respective municipalities.
\subsubsection{APD for missing enumerations in Ireland}
The traditional population census in Ireland takes place every 5 years. Despite the absence of a CPR, the CSO has developed an alternative estimation methodology at the national level, based purely on administrative sources; see e.g. Dunne (2015), Zhang and Dunne (2017). The Irish APD approach is unique internationally, in that it operates in such a way that one only needs to deal with the missing enumerations at the national level.
The core of Dual System Estimation consists of two lists, both subject to missing enumerations only. The CSO constructs a Sign-of-Life register, called the Person Activity Register (PAR), based on observed activities across a range of administrative sources, such that the PAR is expected to have only negligible over-counting errors, whereas it can have systematic under-coverage errors in different parts of the usual resident population. Next, the Drivers' License Database (DLD) provides a plausible second cross-sectional enumeration list, based on the fact that each holder needs to renew the license every 10 years. As discussed in Zhang (2019b), the two lists PAR and DLD can satisfy the assumptions of Dual System Estimation, which differ from the assumptions traditionally held for estimation based on censuses and census coverage surveys (Wolter, 1986).
\section{Fractional counting and rolling}
The CPR approach is unlikely to be feasible in the UK in the near future beyond 2021. Provided linkage across the sources, the APD approach will encompass the DBE approach, now that all the administrative data for component estimation can be brought together into an integrated PD. The key challenge then, for making greater use of the combined administrative and survey data, will be to achieve an individual-based estimation methodology (such as in Israel, Latvia and Estonia), instead of population hypercube-based estimation (such as currently envisaged in Italy and Ireland). This requires above all two extensions to the existing APD approach:
\begin{itemize}[leftmargin=6mm]
\item whereas the current Israeli approach focuses only on misplacement and the Latvian and Estonian approaches only on erroneous enumeration, one needs to be able to account for more than one type of error in the APD approach for the UK;
\item one needs a more rigorous methodology for the updating of weights, score or index over time, for the individuals in the PD.
\end{itemize}
Below, to address the first issue, we outline a theory of \emph{fractional counting} for population statistics; regarding the second issue, we discuss the basic ideas for the rolling or incremental learning of the fractional counters.
\subsection{Fractional counting for misplacement}
To focus on the idea, suppose for the moment that the PD, denoted by $\mathcal{P}$, is only subject to misplacement, where $\mathcal{P} = U$ and $U$ is the target population.
\paragraph{Sign-of-Life (SoL) addresses} For each person $k$ in $\mathcal{P}$, one finds the possible distinct addresses, at which the person may be located according to all available administrative sources. For instance, a student may have home address (of the parents) in addition to an address in the Higher Education loan register. Or a person may have different addresses in the Patient Register and the Council Tax register. Notice that depending on the data available and the aggregation detail required for population statistics, the address may identify a coarser-than-dwelling location, such as post code or municipality. We shall refer to such addresses as the SoL-addresses.
\paragraph{SoL-address classifier and predictor} Let ${\bf a}_k$ be the $q$-vector containing all the available SoL-addresses of person $k$ in $\mathcal{P}$. Let ${\bf z}_k$ contain all the relevant auxiliary data, such as known family relationships, emigration status, previous addresses, work or study place, and so on. A SoL-address \textit{classifier} is given by
\[
{\bf y}_k = g({\bf a}_k, {\bf z}_k) \in \{ 0, 1\}^q\qquad\text{where}\quad {\bf y}_k^{\top} {\bf 1} = 1
\]
i.e. one and only one of the available addresses is chosen as the address for person $k$, such that the corresponding component of ${\bf y}_k$ is set to 1 and all the other components are set to 0. Moreover, a SoL-address \textit{predictor}, or \emph{fractional counter}, is given by
\[
\boldsymbol{\mu}_k = h({\bf a}_k, {\bf z}_k) \in [0, 1]^q\qquad\text{where}\quad \boldsymbol{\mu}_k^{\top} {\bf 1} = 1
\]
i.e. each component can take a value from 0 to 1 and all the components sum to 1. The idea is for each component of $\boldsymbol{\mu}_k$ to be the probability that the corresponding address is the true usual resident address of person $k$, denoted by adr$_k$.
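As a concrete illustration, the Python sketch below turns arbitrary real-valued address scores, however they may be obtained from $({\bf a}_k, {\bf z}_k)$, into a vector satisfying $\boldsymbol{\mu}_k^{\top} {\bf 1} = 1$; the softmax mapping is one possible choice for this step, not a prescribed part of the method.
\begin{verbatim}
import numpy as np

def fractional_counter(scores):
    # Map q real-valued address scores to probabilities summing to 1.
    # The scoring model itself (a function of a_k and z_k) is left
    # unspecified; softmax is used purely for illustration.
    e = np.exp(scores - np.max(scores))  # subtract max for stability
    return e / e.sum()

mu_k = fractional_counter(np.array([2.0, 0.5, -1.0]))
# mu_k lies in [0,1]^3 and mu_k.sum() equals 1
\end{verbatim}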
\paragraph{Population size based on fractional counting} Based on the individual classifier for all $k\in \mathcal{P}$, the population count of locality $i$, for $i = 1, ..., m$, is given by
\begin{equation}\label{cls}
\widehat{N}_i^C = \sum_{k\in \mathcal{P}} {\bf y}_k^{\top} \boldsymbol{\delta}_k \qquad\text{and}\qquad \boldsymbol{\delta}_k = \boldsymbol{\delta}({\bf a}_k \in A_i)
\end{equation}
where $A_i$ denotes the set of admissible addresses for the $i$th locality, and $\boldsymbol{\delta}({\bf a}_k \in A_i)$ is the $q$-vector with each component taking value 1 if the corresponding address belongs to $A_i$ and 0 otherwise. Whereas, based on the fractional counter $\boldsymbol{\mu}_k$ and the method of \emph{fractional counting}, the population count of the same locality is given by
\begin{equation}\label{pred}
\widehat{N}_i^P = \sum_{k\in \mathcal{P}} \boldsymbol{\mu}_k^{\top} \boldsymbol{\delta}_k \qquad\text{and}\qquad \boldsymbol{\delta}_k = \boldsymbol{\delta}({\bf a}_k \in A_i)
\end{equation}
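The two counting rules \eqref{cls} and \eqref{pred} are easily contrasted numerically. The sketch below uses made-up values of ${\bf y}_k$, $\boldsymbol{\mu}_k$ and $\boldsymbol{\delta}_k$ for three persons with $q = 2$ addresses each.
\begin{verbatim}
import numpy as np

# Hypothetical inputs: rows are persons, columns are SoL-addresses.
mu    = np.array([[0.7, 0.3], [0.4, 0.6], [0.9, 0.1]])  # counters
y     = np.array([[1, 0], [0, 1], [1, 0]])              # classifiers
delta = np.array([[1, 0], [1, 0], [0, 1]])              # in A_i or not

N_C = np.einsum('kq,kq->k', y, delta).sum()   # classifier count: 1
N_P = np.einsum('kq,kq->k', mu, delta).sum()  # fractional count: 1.2
\end{verbatim}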
\paragraph{Properties of fractional counting} Prediction by classifier \eqref{cls} resembles election by majority vote, where the winner takes \emph{all} the votes regardless of the margin over the votes for the loser. This causes bias in the population statistics. Prediction by fractional counting \eqref{pred} aims to avoid this problem. It is unbiased for any $N_i$, provided
\begin{equation} \label{unbias}
\begin{cases} \boldsymbol{1}^{\top} \mbox{Pr}(\mbox{adr}_k = {\bf a}_k) = 1 \\
\boldsymbol{\mu}_k = \boldsymbol{\mu}({\bf a}_k, {\bf z}_k) \end{cases}
\end{equation}
The first condition ensures that the true address (adr$_{k}$) can only be one of the SoL-addresses, insofar as misplacement is the only problem at hand. The second condition then ensures that the probabilities of $\mbox{adr}_k = \boldsymbol{a}_k$ are entirely determined by ${\bf a}_k$ and $\boldsymbol{z}_k$. In reality the matter will depend on the covariates $\boldsymbol{z}_k$ and how well $\boldsymbol{\mu}_k$ in \eqref{pred} is modelled. Given the $\boldsymbol{\mu}_k$'s, the prediction variance of $\widehat{N}_i^P$ by \eqref{pred} is
\[
V(\widehat{N}_i^P - N_i) = \sum_{k\in \mathcal{P}} \boldsymbol{\mu}_k^{\top} \boldsymbol{\delta}_k \big( 1 - \boldsymbol{\mu}_k^{\top} \boldsymbol{\delta}_k \big)
\]
where we assume that $\delta(\mbox{adr}_k \in A_i)$ is independent across different persons, conditional on the $({\bf a}_k, {\bf z}_k)$'s. However, it is possible to allow for clustering effects in the variance calculation, depending on the model underlying $\boldsymbol{\mu}_k$, such as when it always assigns the same $\boldsymbol{\mu}_k$ to all the persons in the same family and family relationship is part of $\boldsymbol{z}_k$.
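Continuing the numerical sketch above, the prediction variance under the independence assumption follows directly from the person-level probabilities $\boldsymbol{\mu}_k^{\top}\boldsymbol{\delta}_k$.
\begin{verbatim}
p = np.einsum('kq,kq->k', mu, delta)  # p_k = mu_k' delta_k
V = np.sum(p * (1.0 - p))             # prediction variance of N_i^P
\end{verbatim}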
\paragraph{Producing social statistics} Population and sub-population totals based on fractional counting can provide calibration totals for social surveys in the same way as the MYEs. Consider register-based attribute statistics. Denote by $\epsilon_k$ the value of interest for $k\in \mathcal{P}$, and $\hat{\epsilon}_k$ the corresponding register-based value. In cases where $\epsilon_k$ is observed without error, one can simply set $V(\hat{\epsilon}_k - \epsilon_k) = 0$; otherwise the variance will be positive in cases of model-based prediction of $\epsilon_k$ based on register sources. The population and fractional counting totals in the $i$th locality are given, respectively, as
\[
t_i = \sum_{k\in \mathcal{P}} \delta_k \epsilon_k \qquad \text{and}\qquad \hat{t}_i = \sum_{k\in \mathcal{P}} \hat{\mu}_k \hat{\epsilon}_k
\]
where $\delta_k = \delta(\mbox{adr}_k \in A_i)$ as defined in \eqref{cls} and \eqref{pred}, and $\hat{\mu}_k = \boldsymbol{\mu}_k^{\top} \boldsymbol{\delta}_k = \widehat{\mbox{Pr}}(k \in U_i)$ by the fractional counter. We have unbiased $\hat{t}_i$, for $i = 1, ..., m$, provided
\[
E(\hat{\mu}_k - \delta_k) = 0 \quad\text{and}\quad E(\hat{\epsilon}_k - \epsilon_k) =0 \quad\text{and}\quad
(\hat{\epsilon}_k, \epsilon_k) \perp (\hat{\mu}_k, \delta_k)~,
\]
since $E(\hat{t}_i - t_i) = \sum_{k\in \mathcal{P}} E(\hat{\mu}_k \hat{\epsilon}_k - \delta_k \epsilon_k)$, where
\[
E(\hat{\mu}_k \hat{\epsilon}_k - \delta_k \epsilon_k) =
E(\hat{\mu}_k \hat{\epsilon}_k - \delta_k \hat{\epsilon}_k + \delta_k \hat{\epsilon}_k - \delta_k \epsilon_k) = 0 ~.
\]
Provided independence across different persons, the prediction variance is given as
\[
V(\hat{t}_i - t_i) = \sum_{k\in \mathcal{P}} V(\hat{\mu}_k \hat{\epsilon}_k - \delta_k \epsilon_k)
\]
Again, it is possible to relax the independence assumption and allow for clustering effects in the variance calculation.
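In the same vein, and continuing the numerical sketch above, a register-based total and its prediction variance can be computed as below, for the special case where $\epsilon_k$ is observed without error, so that the per-person variance reduces to $\epsilon_k^2\, p_k (1 - p_k)$; the values of $\hat{\epsilon}_k$ are hypothetical.
\begin{verbatim}
eps_hat = np.array([10.0, 20.0, 30.0])  # register-based values
t_hat   = np.sum(p * eps_hat)           # fractional counting total

# V(eps_hat_k - eps_k) = 0, so placement is the only uncertainty:
V_t = np.sum(eps_hat**2 * p * (1.0 - p))
\end{verbatim}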
\subsection{Initiation in the absence of missing enumerations}
It is natural to initiate the fractional counters in connection with the next census. The pre-2011 MYEs were retrospectively adjusted upwards given the census population estimates. This suggests that the administrative sources for the DBE component estimation may suffer from some under-coverage. Nevertheless, we shall assume that going forward the enhanced administrative sources will enable one to construct the PD as an extended population, denoted by $\mathcal{P}$, which has negligible under-coverage of the population, denoted by $U$. However, for any in-scope person, we do not require the first condition of \eqref{unbias} to hold, in anticipation of possible weakness of the SoL-address sources. That is, for each person $k\in \mathcal{P} \cap U$, we allow for a probability of being \emph{displaced}, denoted by
\[
\xi_k = 1 - \boldsymbol{1}^{\top} \mbox{Pr}(\mbox{adr}_k = {\bf a}_k) \geq 0 ~.
\]
For supervised learning of the \emph{fractional counter}, one ideally needs to label everyone in $\mathcal{P}$ first as erroneous or not, then in the latter case as displaced or not, and finally, in the case of not displaced, which component of ${\bf a}_k$ equals $\mbox{adr}_k$. The situation following the UK census 2021 is more ragged, due to the presence of multiple errors, as depicted in Table \ref{tab-initial}.
\begin{table}[ht]
\begin{center}
\caption{Initiation of fractional counters in $\mathcal{P}$ based on census}
\begin{tabular}{cccccccccccc} \hline
\multicolumn{8}{c}{Fractional counter} & & & & Enum. \\ \cline{1-8}
\multicolumn{4}{c}{Placement} & $~$ & Displaced & $~$ & Erroneous & $~$ & $\mathcal{P}$ & $~$ & Census \\
\cline{4-4} \cline{6-6} \cline{8-8} \cline{10-10} \cline{12-12}
& & & \multicolumn{1}{|r|}{1 adr} & & \multicolumn{1}{|c|}{$\xi_1$} & & \multicolumn{1}{|c|}{$\theta_1$} &
& \multicolumn{1}{|c|}{1} & & \multicolumn{1}{|c|}{1} \\ \cline{3-4}
& & \multicolumn{2}{|r|}{2 adr} & & \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} &
& \multicolumn{1}{|c|}{$\vdots$} & Core & \multicolumn{1}{|c|}{$\vdots$} \\ \cline{2-4}
& \multicolumn{3}{|r|}{3 adr} & & \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} &
& \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} \\ \cline{1-4}
\multicolumn{4}{|r|}{$\cdots\quad\cdots\quad\vdots$} & & \multicolumn{1}{|c|}{$\xi_{N_c}$} & & \multicolumn{1}{|c|}{$\theta_{N_c}$} &
& \multicolumn{1}{|c|}{$N_c$} & & \multicolumn{1}{|c|}{$N_c$} \\ \cline{1-4} \cdashline{5-12}
& & & \multicolumn{1}{|r|}{1 adr} & & \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} &
& \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} \\ \cline{3-4}
& & \multicolumn{2}{|r|}{2 adr} & & \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} &
& \multicolumn{1}{|c|}{$\vdots$} & (Non- & \multicolumn{1}{|c|}{$N_L$} \\ \cline{2-4} \cline{12-12}
& \multicolumn{3}{|r|}{3 adr} & & \multicolumn{1}{|c|}{$\vdots$} & & \multicolumn{1}{|c|}{$\vdots$} &
& \multicolumn{1}{|c|}{$\vdots$} & core) & $\widehat{N}$ \\ \cline{1-4}
\multicolumn{4}{|r|}{$\cdots\quad\cdots\quad\vdots$} & & \multicolumn{1}{|c|}{$\xi_{N_p}$} & & \multicolumn{1}{|c|}{$\theta_{N_p}$} & & \multicolumn{1}{|c|}{$N_p$} \\ \cline{1-4} \cline{6-6} \cline{8-8} \cline{10-10} \\
\multicolumn{8}{c}{$\underbrace{\sum \boldsymbol{\mu}_k^{\top} \boldsymbol{1} + \sum \xi_k = \widehat{N}, \quad \widehat{N} + \sum \theta_k = N_p}_\text{Benchmarking}$}
\end{tabular} \label{tab-initial}
\end{center}\end{table}
Reading from right to left, the census enumerations are numbered as 1, ..., $N_L$, where $N_c$ of them can be linked to $\mathcal{P}$. The linked part of $\mathcal{P}$ is referred to as the \emph{core} of PD, denoted by $\mathcal{P}_c$. One observes $\delta(k\in U) = 1$ and $\delta(\mbox{adr}_k = \boldsymbol{a}_k)$ for all $k\in \mathcal{P}_c$, based on the census data. One can treat $\mathcal{P}_c$ as a non-probability sample from $\mathcal{P}$. Under the assumption of non-informative selection, i.e.
\[
\boldsymbol{\mu}(\boldsymbol{a}_k, \boldsymbol{z}_k | k\in \mathcal{P}_c) = \boldsymbol{\mu}(\boldsymbol{a}_k, \boldsymbol{z}_k | k\in \mathcal{P} \cap U) ~,
\]
one can estimate $\boldsymbol{\mu}(\boldsymbol{a}_k, \boldsymbol{z}_k | k\in \mathcal{P} \cap U)$ consistently from $\mathcal{P}_c$. This allows one to populate the probabilities $\xi_k$ and $\boldsymbol{\mu}_k$ for all $k\in \mathcal{P}$, in the case of $k\in U$. Moreover, these probabilities can be benchmarked to the census population estimates, denoted by $\widehat{N}$, at any aggregation level or for any sub-population of choice. There are different ways of benchmarking; see e.g. Favre et al. (2005) for a method that can incorporate unit-level constraints.
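For instance, under the non-informative selection assumption, the $\boldsymbol{\mu}_k$'s could be estimated from the core by any supervised learner that produces class probabilities. The sketch below uses multinomial logistic regression on stand-in data, with the observed census address index as the label; the feature construction from $(\boldsymbol{a}_k, \boldsymbol{z}_k)$ is left unspecified.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_core = rng.normal(size=(500, 4))     # stand-in features from (a_k, z_k)
y_core = rng.integers(0, 3, size=500)  # observed true-address index, q = 3

model = LogisticRegression(max_iter=1000).fit(X_core, y_core)

# Populate mu_k for all of P (here: five stand-in persons);
# each row of mu_all sums to 1, as required of a fractional counter.
X_all  = rng.normal(size=(5, 4))
mu_all = model.predict_proba(X_all)
\end{verbatim}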
The estimation of the probability of erroneous enumeration, denoted by $\theta_k$ for $k\in \mathcal{P}$, requires a different approach. Some possibilities are given below.
\begin{itemize}[leftmargin=5mm]
\item One can replicate the approach of Latvia and Estonia, whereby one labels a subset of $\widehat{N}$ persons in $\mathcal{P}$, denoted by $\mathcal{P}_U \subset \mathcal{P}$. The additional non-core persons in $\mathcal{P}_U \setminus \mathcal{P}_c$ are the ones judged to be the most likely in-scope persons in $\mathcal{P}\setminus \mathcal{P}_c$ or, equivalently, the persons in $\mathcal{P}\setminus \mathcal{P}_U$ are the most likely out-of-scope persons in $\mathcal{P}\setminus \mathcal{P}_c$.
\item One can use the population hypercube estimated from the census and census coverage survey, and assign the probabilities $\theta_k$ according to the cell of person $k$, for $k\in \mathcal{P}$.
\item One can draw a probability sample $s$ from $\mathcal{P}\setminus \mathcal{P}_c$ and obtain $\delta(k\in U)$ for $k\in s$, and use the combined sample $\mathcal{P}_c \cup s$ to estimate $\theta_k$, for $k\in \mathcal{P}$.
\end{itemize}
Benchmarking of the estimated $\theta_k$'s may be necessary, denoted by $\widehat{N} + \sum \theta_k = N_p$, in the case of individual-based estimation. Finally, notice that international migration and other special populations may need to be treated separately from the above.
\subsection{Basic ideas of rolling}
By \emph{rolling} we mean that in principle the fractional counters can be updated in a nearly continuous manner over time, just as $\mathcal{P}$ itself. It seems most similar to \emph{incremental learning} in the statistical machine learning literature. Below we first summarise the data that can be made available for rolling, and then discuss the basic ideas in the parametric and algorithmic settings, respectively.
\subsubsection{Data for rolling}
Let $\mathcal{P}_t$ be the PD at time $t$, where $t\geq 0$, and $\mathcal{P}_0$ is the initiation PD, say, at the census time point. Denote by $\mathcal{L}_t$ the set of labelled persons, where $\mathcal{L}_t \subseteq \mathcal{P}_t$, on which supervised learning can be based. Without loss of generality, one can partition $\mathcal{L}_t$ into three parts:
\begin{equation} \label{data}
\mathcal{L}_t = S_t \cup \mathcal{B}_t \cup \mathcal{A}_t
\end{equation}
where $S_t$ denotes the subset of persons that are associated with known inclusion probabilities, and $\mathcal{B}_t$ denotes the other persons for whom we observe updated labels of (erroneous, displaced, placement), and $\mathcal{A}_t$ the rest for whom we have only the labels from $t-1$. \textit{It will be important and helpful to enhance the collection, organisation and usage of all the relevant data, across the ONS, in order to facilitate efficient rolling and enable the transition to a sustainable system for population statistics in the long term.}
\paragraph{Coverage survey} The planned post-census coverage survey is a source of $S_t$. The observations in $S_t$ can obviously be used for updating of $\boldsymbol{\mu}$ and $\xi$, pertaining to the probabilities of displaced and placement. Depending on the actual design and estimation method, it may as well be possible to update the erroneous enumeration probability $\theta$, especially if it is possible to administer follow-up surveys at the sampled addresses in the coverage survey.
\paragraph{On-going surveys} As mentioned before, the planned approach at Istat is to draw household samples for the main social surveys from their first-phase sample for population statistics. Unless the ONS adopts the same approach, there will be other labelled persons from the on-going social surveys, in addition to the coverage survey. Notice that the fieldwork protocol in the on-going surveys will need to be enhanced to ensure the quality of the data collected for rolling. Whether these additional labelled persons can be classified as part of $S_t$ or $\mathcal{B}_t$ depends on the actual sampling design.
\paragraph{Administrative sources} Updating in the relevant administrative sources can generate labelled persons. For instance, updating of the Council Tax register, the home addresses in Drivers' License database, the PAYE register, and so on, can all provide data for $\mathcal{B}_t$. The distinction of core and non-core in $\mathcal{B}_t$ can be relevant at least in the near future.
\subsubsection{Rolling in the parametric setting} \label{EBP}
For a parametric setting of the fractional counters, suppose the relevant probabilities are given by the inverse of the logistic link function, denoted by
\[
\boldsymbol{\pi}(\boldsymbol{x}_k, \boldsymbol{\beta}) = E(\boldsymbol{y}_k | \boldsymbol{x}_k ; \boldsymbol{\beta})
\]
for person $k$, where $\boldsymbol{y}_k$ is the vector of indicators whose components sum to 1, and $\boldsymbol{x}_k$ is the vector of known covariates, and $\boldsymbol{\beta}$ the logistic regression coefficients. Suppose the initial $\boldsymbol{\beta}_0$ are estimated based on a large dataset in connection with the census, denoted by $\hat{\boldsymbol{\beta}}_0$, with associated variance $\hat{\boldsymbol{\Sigma}}_0$.
At the next time point $t=1$, one could refit the model using the labelled persons in $\mathcal{L}_1$, of which $\mathcal{D}_1 = S_1\cup \mathcal{B}_1$ are associated with updated observations of $\boldsymbol{y}_{1,k}$, for $k\in \mathcal{D}_1$. This assumes $\boldsymbol{y}_k$ remains the same at $t=0, 1$, for any non-updated person $k\in \mathcal{L}_1 \setminus \mathcal{D}_1$, which may be problematic if the lack of updating is due to delays or errors in the sources. One could refit the model using only the data associated with $\mathcal{D}_1$, under some suitable assumption of non-informative selection of $\mathcal{D}_1$ from $\mathcal{P}_1$. The estimation precision is then determined by the size of $\mathcal{D}_1$, which is much smaller than the initiation dataset $\mathcal{L}_0$, so that the uncertainty of the estimated $\boldsymbol{\beta}_1$ will be much larger than that of $\hat{\boldsymbol{\beta}}_0$.
Consider empirical Bayes (best) prediction (EBP) under the hierarchical model:
\begin{gather*}
E(\boldsymbol{y}_{1,k} | \boldsymbol{x}_{1,k} , \boldsymbol{\beta}_1) = \boldsymbol{\pi}(\boldsymbol{x}_{1,k}, \boldsymbol{\beta}_1) \\
\boldsymbol{\beta}_1 \sim N(\hat{\boldsymbol{\beta}}_0, \hat{\boldsymbol{\Sigma}}_0)
\end{gather*}
where the normal distribution is motivated by the large size of the initiation dataset $\mathcal{L}_0$. At the lower level, the population dynamics which change the parameter $\boldsymbol{\beta}_0$ at time 0 to $\boldsymbol{\beta}_1$ at time 1 are modelled as a `random' departure from the previous `position' $\hat{\boldsymbol{\beta}}_0$ with variance $\hat{\boldsymbol{\Sigma}}_0$. This differs in concept from a fully Bayesian approach, where the hyper-parameters of the prior distribution of $\boldsymbol{\beta}_1$ need not have any empirical connotation.
Assuming IID observations over $\mathcal{D}_1$, we obtain the prediction function for $\boldsymbol{\beta}_1$ as
\[
f(\boldsymbol{\beta}_1 | \boldsymbol{y}_{1,\mathcal{D}_1}, \boldsymbol{x}_{1,\mathcal{D}_1} ; \hat{\boldsymbol{\beta}}_0, \hat{\boldsymbol{\Sigma}}_0) =
\frac{\prod_{k\in \mathcal{D}_1} f(\boldsymbol{y}_{1,k} | \boldsymbol{x}_{1,k}, \boldsymbol{\beta}_1) \cdot
\phi(\boldsymbol{\beta}_1; \hat{\boldsymbol{\beta}}_0, \hat{\boldsymbol{\Sigma}}_0)}{\prod_{k\in \mathcal{D}_1} f(\boldsymbol{y}_{1,k} | \boldsymbol{x}_{1,k})}
\]
Let $\hat{\boldsymbol{\beta}}_1$ and $\hat{\boldsymbol{\Sigma}}_1$ be the prediction mean and variance of $\boldsymbol{\beta}_1$, respectively. In this way the lower-level model is updated to $\boldsymbol{\beta}_2 \sim N(\hat{\boldsymbol{\beta}}_1, \hat{\boldsymbol{\Sigma}}_1)$, by which the model is rolled forward and ready to be used for updating at $t=2$.
The EBP approach can thus facilitate the rolling of fractional counters, without the extra and potentially problematic assumption that $\boldsymbol{y}_k$ remains the same for $k\in \mathcal{L}_1 \setminus \mathcal{D}_1$, or the loss of efficiency incurred when estimating $\boldsymbol{\beta}_1$ based only on $\mathcal{D}_1$. It achieves stability over time, balancing between the signals from $\mathcal{D}_t$ and the inertia of $N(\hat{\boldsymbol{\beta}}_{t-1}, \hat{\boldsymbol{\Sigma}}_{t-1})$: any value of $\boldsymbol{\beta}_t$ far from $\hat{\boldsymbol{\beta}}_{t-1}$ is `weighted down' by $\phi$, compared to estimation based only on $f(\boldsymbol{y}_{t,k} | \boldsymbol{x}_{t,k}, \boldsymbol{\beta}_t)$.
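A minimal sketch of one such rolling step is given below, for a binary outcome and with a normal (Laplace) approximation to the prediction distribution; the optimiser and the approximation are implementation choices, not part of the EBP formulation itself.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def roll_beta(X, y, beta0, Sigma0):
    # One rolling step: MAP update of logistic coefficients under
    # the prior N(beta0, Sigma0); binary outcome for simplicity.
    P0 = np.linalg.inv(Sigma0)
    def neg_log_post(b):
        eta = X @ b
        nll = np.sum(np.logaddexp(0.0, eta) - y * eta)
        d = b - beta0
        return nll + 0.5 * d @ P0 @ d
    beta1 = minimize(neg_log_post, beta0, method="BFGS").x
    # Laplace approximation: posterior precision = Hessian at the mode.
    prob = 1.0 / (1.0 + np.exp(-(X @ beta1)))
    H = X.T @ (X * (prob * (1.0 - prob))[:, None]) + P0
    return beta1, np.linalg.inv(H)  # (beta_hat_1, Sigma_hat_1)
\end{verbatim}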
\subsubsection{Rolling in the algorithmic setting} \label{learning}
Machine learning or statistical machine learning has a vast and rapidly growing literature. There does not exist a unified framework of the different approaches. Below we first list some classifications and concepts that seem relevant, before we illustrate and discuss the basic ideas of decision tree updating in the present context.
With respect to the logical basis of learning, a distinction is made between transduction and induction. Transduction, or transductive inference, is reasoning from observed (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction seems most interesting in situations where the predictions of the transductive model are not achievable by any inductive model. However, transductive algorithms that seek to predict discrete labels tend to be derived by adding partial supervision to a clustering algorithm, while supervised learning is generally considered to be a form of induction.
With respect to the context of learning, one commonly distinguishes among unsupervised learning (without labelled units), supervised learning (based on labelled units), and reinforcement learning (in an interactive environment). The last type of algorithm enables an agent to learn by trial and error using feedbacks from its own actions and experiences, such as in gaming. Semi-supervised learning is a class of techniques that typically use a small amount of labelled data with a large amount of unlabelled data, as many researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce considerable improvement in learning accuracy.
With respect to the process of learning, broadly speaking there are two ways to train a model. A static model is trained offline, once, and then used as-is for a while. A dynamic model is trained online, where data is continually entering the system and incorporated into the model through frequent updates.
\begin{itemize}[leftmargin=4mm]
\item In active learning, one seeks to select the most informative unlabelled instances and ask an omniscient oracle for their labels, in order to retrain a learning algorithm to maximise accuracy. Clearly, the selection mechanism can be designed to resemble audit sampling for model validation or improvement.
\item In incremental learning, the data is generated by an external process, and continually used to further train the model, i.e. without the model being completely retrained using all the available data at any given time point. The motivation may be technical, e.g. the data becomes available only gradually over time or its size is out of system memory limits. A central concern is to allow the model to adapt to new data without forgetting its existing knowledge. Some incremental learners have built-in parameters or assumptions that control the relevancy of new and old data. See e.g. Ade and Deshmukh (2013), Gepperth and Hammer (2016).
\end{itemize}
Regarding the validity of the trained model, data or concept shift is a term one finds in the machine learning literature, which is said to occur when the joint distribution of inputs and outputs differs between training and test stages. Covariate shift, a particular case of data shift, occurs when only the input distribution changes. An example is email spam filtering, which may fail to include spams (for training) that are not detected by the filter used to classify spams. Relevant statistical concepts developed for various informative selection mechanisms do not seem to have attracted much attention here.
\subsubsection{Rolling of a decision tree} \label{tree}
The Very Fast Decision Tree (VFDT) system is one of the most successful algorithms for mining data streams (Domingos and Hulten, 2000). Its main innovation is the use of the Hoeffding bound to decide how many examples (observations) it is necessary to observe before installing a split-test at a leaf of the tree. Splitting the leaf makes only a local change to the tree, since the prediction of units ending at another leaf is not affected. The theoretical result refers to the maximum expected disagreement between the Hoeffding tree and the asymptotic batch tree given infinite observations. However, the asymptotic batch tree is hardly our present interest, where one may assume that the population structure (hence the tree itself) must change over time, and the target is not the tree that one would grow given infinite observations that span infinitely over time.
\begin{align*}
x\leq c ~ & : ~ x> c \\
\swarrow \hspace{8mm} & \hspace{10mm} \searrow \\
\{ 1, 0, 0, 0\} \hspace{6mm} & \hspace{10mm} \{ 0, 1, 1, 1\}
\end{align*}
To illustrate the issue, consider the above split for two leaves in the tree grown at $t-1$. Let there be one observation $(x', y')$ at $t$ passing this way. Now,
\begin{itemize}[leftmargin=6mm]
\item if $x' \leq c$ and $y' = 1$, or $x' > c$ and $y' = 0$, then $(x', y')$ may be considered to provide `negative' evidence to the current tree, but perhaps not necessarily so if the value 1 in the left leaf or 0 in the right leaf happens to be observed a long time ago;
\item if $x' \leq c$ and $y' = 0$, or $x' > c$ and $y' = 1$, then $(x', y')$ may be considered to provide `positive' evidence to the current tree, but perhaps not necessarily so if a value 0 in the left leaf or 1 in the right leaf happens to be observed a long time ago.
\end{itemize}
In other words, for the rolling of a decision tree that evolves over time (subject to concept shift), more considerations are required than the number of observations and the discriminant measure of the split.
It seems that one may still need to use part of the updated observations in $\mathcal{D}_t$ for training and part of them for validation. Let $M_t$ denote the updated tree, and $M_{t-1}$ the tree grown at $t-1$. At least two measures may be needed:
\begin{itemize}[leftmargin=16mm]
\item[$\Delta_{\epsilon}$:] how much better $M_t$ predicts for the updated units in $\mathcal{D}_t$ than $M_{t-1}$,
\item[$\Delta_M$:] how much change $M_t$ predicts for the non-updated units in $\mathcal{P}_t \setminus \mathcal{D}_t$.
\end{itemize}
Since $M_{t-1}$ yields 0 in terms of both measures, one may need to balance between the two measures when growing $M_t$. For instance, one may choose to maximise $\Delta_{\epsilon}$ subject to an upper bound on $\Delta_M$, or minimise $\Delta_M$ subject to a lower bound on $\Delta_{\epsilon}$.
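A Python sketch of the two measures is given below, for any pair of fitted classifiers with a predict() method; how $\Delta_{\epsilon}$ is then traded against $\Delta_M$ when growing $M_t$ remains the design choice discussed above.
\begin{verbatim}
import numpy as np

def delta_measures(M_old, M_new, X_upd, y_upd, X_rest):
    # Delta_eps: accuracy gain of M_new over M_old on the updated
    # units in D_t; Delta_M: share of the non-updated units (P_t
    # minus D_t) whose prediction changes under M_new.
    delta_eps = (np.mean(M_new.predict(X_upd) == y_upd)
                 - np.mean(M_old.predict(X_upd) == y_upd))
    delta_m = np.mean(M_new.predict(X_rest) != M_old.predict(X_rest))
    return delta_eps, delta_m
\end{verbatim}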
\section{Register-based auditing-assisted approach} \label{RBAA}
Greater resource savings can be achieved in the future, if population statistics are produced from administrative data and on-going surveys under the APD approach, while a purposely designed coverage survey is used only from time to time for \emph{auditing}. There are two necessary elements to such a \textit{register-based auditing-assisted} APD approach.
\paragraph{Rolling without coverage surveys} At some stage one needs to be able to exclude the envisaged regular coverage survey data from $\mathcal{L}_t$ in \eqref{data}, and only use the updated observations from other on-going surveys and administrative sources for the rolling of fractional counters, whether it is in the parametric or algorithmic setting.
This is not as unthinkable as it may seem at first. For instance, it may be noticed that the existing DBE approach is essentially a register-based APD approach, based on an estimated population hypercube. The Irish APD approach suggests it may be possible to estimate the population hypercube purely based on administrative data, albeit using a different methodology than DBE component estimation. In the register-based APD approaches of Israel, Latvia and Estonia, individual counters are produced by different methods, none of which uses any regular coverage surveys. The Norwegian household register provides another example, where decision rules are applied to individual-level data from the administrative sources. So the question that matters is how good a register-based individual-level APD approach can become in the UK.
\paragraph{Auditing inference} Zhang (2018) contrasts the register-based APD approach with the APD approach based on combining registers and a coverage survey. Under the purely register-based approach, ``an estimator, denoted by $\hat{N}_n$, can be calculated under a statistical model, using multiple incomplete administrative registers, where $N$ denotes the unknown population size and $n$ the generic size of the available datasets.'' It is suggested that, regardless of how effective the rolling of $\hat{N}_n$ may be, one is unlikely ``to have $\hat{N}_n / N \stackrel{P}{\rightarrow} 1$ under some asymptotic setting, as $n, N\rightarrow \infty$''. Audit sampling will be necessary, in order to ``validate the model underlying $\hat{N}_n$, ..., which is affected by the sampling error of the auditing survey''. This raises the challenge of audit sampling inference.
Using disaggregation of the Consumer Price Index based on proxy household expenditure measures obtained from transaction data, Zhang (2019a) develops an audit sampling inference approach for big-data statistics. Generically speaking, let $\theta_0$ be the true scalar parameter value of interest. Let $\theta^*$ be a point estimate based on big data, such that its variance is negligible compared to its potential bias for all practical purposes. One can test $H_0: \theta^* = \theta_0$ based on audit sampling. However, an accuracy measure is needed, even if the null hypothesis cannot be rejected at the chosen level. A general dilemma in this context is the following. Let $\hat{\theta}$ be an unbiased estimator of $\theta_0$ based on audit sampling, and let $\widehat{V}(\hat{\theta})$ be an unbiased estimator of its audit sampling variance. An unbiased estimator of the mean squared error (MSE) of $\theta^*$ can be given as
\[
\widehat{\mbox{MSE}} = (\theta^* - \hat{\theta})^2 - \widehat{V}(\hat{\theta}) ~.
\]
However, when the bias $\theta^* - \theta_0$ is small, auditing may fail to yield a meaningful measure if the audit sampling variance is not small enough, in which case one easily obtains $\widehat{\mbox{MSE}} < 0$ as the result. To overcome the dilemma, Zhang (2019a) proposes a novel accuracy measure to replace the standard MSE. If this is feasible in the present context, then one can employ an audit sample that is much smaller than the envisaged coverage survey sample.
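In code, the estimator and its failure mode are immediate; the numbers below are made up.
\begin{verbatim}
def mse_hat(theta_star, theta_hat, v_hat):
    # Unbiased MSE estimator of the big-data estimate theta_star,
    # given an unbiased audit estimate theta_hat with estimated
    # variance v_hat.
    return (theta_star - theta_hat)**2 - v_hat

print(mse_hat(10.2, 10.0, 0.5))  # -0.46: negative when the bias is small
\end{verbatim}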
\paragraph{Summary} In the long-term perspective, the greatest gains can be achieved via a gradual transition from an APD approach that requires coverage survey sampling to a register-based auditing-assisted APD approach. Audit sampling aims to validate the register-based statistics, and to generate a meaningful accuracy measure, for which one can use a much smaller sample size than the envisaged coverage survey sample. To enable the transition, it will be important to start the development as soon as possible, so that one can test and refine the required methods and obtain the necessary experience and confidence over time.
\section{Key topics for methodology development}
Four inter-related topics for methodological development emerge from the above:
\begin{itemize} \setstretch{1.0}
\item data linkage for longitudinal PD;
\item rolling or incremental learning, including benchmarking, design of coverage survey, and parallel development of register-based auditing-assisted APD approach;
\item appropriate uncertainty propagation or accuracy measure in various scenarios;
\item methodology for producing social statistics in the new environment.
\end{itemize}
\subsection{Longitudinal PD}
\textit{Generic scalable linkage methodology is a premise to the provision of UK neighbourhood population statistics}.
In the first instance, the relevant longitudinal administrative data should be linked to form the longitudinal PD; in the next instance, longitudinal data linkage is the ability to link the longitudinal PD to other open or free data, as well as relevant sample surveys (often longitudinal themselves).
Davis-Kean et al. (2017) project an ambitious outlook for the ESRC UK longitudinal study resources. In particular, the aim is to standardise the designs of the various longitudinal surveys so that they all use the same \emph{longitudinal population register} (i.e. a population spine) as their sampling frame, with all ESRC research-related linkage of different administrative and survey data sources harmonised to this spine. As an example of such a constructed longitudinal population spine, in countries that do not have a population register to start with, one may refer to the Integrated Data Infrastructure (IDI) at Statistics New Zealand (SNZ, 2018).
Although the Fellegi and Sunter (FS) methodology for record linkage has proven to be very useful in practice (e.g. Owen et al., 2015), it does have some theoretical issues.
\begin{itemize}[leftmargin=6mm]
\item Applying the Likelihood Ratio Test to all the pairs $A\times B$ in files $A$ and $B$ creates a multiple comparison problem. The acceptable pairs require deduplication, e.g. when both $(ab)$ and $(ab')$ are above the acceptance threshold. It is difficult to link multiple files in a transitive manner, e.g. that $(ab)$ in $A\times B$ and $(bc)$ in $B\times C$ are links does not necessarily entail $(ac)$ will be accepted as a link when looking at $A\times C$.
\item The joint distribution of all the $n_A n_B$ comparison scores is ill-defined, if one treats e.g. the comparison scores for $(ab)$ and $(ab')$ as if they were independent of each other. The so-called maximum likelihood estimator of the parameters of the $m$- and $u$-probabilities (Jaro, 1989) is biased in reality; see e.g. Fortini and Tuoto (2019).
\end{itemize}
Entity resolution provides a theoretically more attractive formulation (e.g. Christen, 2012), where the set of (unique) entities underlying the separate datasets is envisaged as a latent spine of unknown size, and each record (in any dataset) is attached to one and only one latent entity on the spine. In this way, the records in different files are linked to each other or deduplicated, provided they are attached to the same latent entity, in a transitive manner regardless of the number of datasets involved. There are a few applications of the entity resolution perspective under the Bayesian paradigm of computation (e.g. Tancredi and Liseo, 2011; Stoerts et al., 2017), although there is nothing intrinsically Bayesian about this perspective to record linkage. Lack of scalability has been a central challenge to the proposed methodology so far; it is not yet feasible e.g. to link the population census file with the patient register.
Scalable linkage methods for multiple population-size datasets are important to the creation of the longitudinal PD. Moreover, the generic ability to link multiple files in a transitive manner can be expected to improve the quality of statistical information harnessed in the linked dataset. Replacing the FS paradigm for record linkage with the entity resolution perspective can provide an angle for innovative approaches.
\subsection{Rolling or incremental learning}
\textit{The methodology of rolling or incremental learning needs to be studied and decided upon.}
Firstly, one needs to find out how rolling by EBP in the parametric setting (Section \ref{EBP}) works out in practice. Next, it is possible that algorithmic learning (Section \ref{learning}) can provide a more flexible and powerful predictive modelling approach. However, incremental learning in the presence of concept shift (i.e. the model changes over time) does not yet have an established approach in the literature. Methodological developments in this respect will be necessary, in case one would like to pursue algorithmic learning.
Whether one adopts parametric or algorithmic learning, there are at least three other relevant aspects that seem worth attention, as discussed below.
Firstly, one may naturally wish to benchmark the updated fractional counters towards the estimated population hypercube from combined register and coverage survey data, as during their initiation (Table \ref{tab-initial}). A possibility is to only apply benchmarking when producing population statistics to be disseminated, say, on a yearly basis in the years immediately after 2021. In other words, benchmarking and rolling of fractional counters can have different frequencies. The methodology and practice need to be established.
Secondly, how can one make effective use of the coverage survey, and can active learning be related to its design? By active learning, one seeks to observe the unlabelled instances that are most effective for model validation or improvement. For instance, in the present context, it seems reasonable that one should give higher sample representation to the non-core part of the PD, or to the persons who are judged to have weak fractional counters, e.g. placement probabilities $\boldsymbol{\mu}_k$ that are not `close' to a dummy vector, or a relatively large probability of being displaced ($\xi_k$) or out-of-scope ($\theta_k$). Regardless of the exact characterisation, this suggests that one may need a method for follow-up surveys of some addresses, given the PD-status of the persons at different addresses.
Thirdly, `parallel' learning of register-based fractional counters (Section \ref{RBAA}) requires a different approach, where the coverage survey data are only used indirectly for updating of the model, but not directly as labelled observations for supervised learning. This means that one would like to combine supervised learning from $\mathcal{L}_t \setminus \mathcal{C}_t$, where $\mathcal{C}_t$ denotes the coverage survey sample, and heuristic updating of the fitted model, based on the evidence in $\mathcal{C}_t$. Such heuristic updating differs from benchmarking, where the latter requires estimates using the coverage survey. For instance, one may envisage heuristic updating in the form of a decision tree, where the paths close to the root are determined by stable decision rules evidenced from the comprehensive coverage survey data, while the observations in $\mathcal{L}_t \setminus \mathcal{C}_t$ are only used for the splits and sprouting near the leaves.
\subsection{Uncertainty propagation or accuracy measure}
\textit{Theoretical conceptualisation and practical method for uncertainty propagation or accuracy measures are needed in several scenarios}.
Firstly, for the initiation of fractional counters, the core part of PD that can be linked to the census enumerations will be used for the estimation of displacement and misplacement probabilities, whereas the census enumerations and the census coverage survey will be used for estimating the probability of erroneous enumeration. It needs to be verified whether conditional uncertainty propagation given the estimated fractional counters can sensibly capture the various underlying variations, and if not, how it might be modified to produce plausible accuracy measures in a practical manner.
As explained in Section \ref{EBP}, uncertainty propagation from parameter updating to fractional counting given the parameter can be based on a coherent scheme under the rolling of parametric EBP. The matter is less clear under incremental algorithmic learning. Suppose the fitted model is given as a decision tree, which is updated by a constrained optimisation method (Section \ref{tree}). Conditional uncertainty propagation given the updated tree is straightforward, just as when the fractional counters are given parametrically. But how can one incorporate the uncertainty of the tree updating itself? One can introduce some kind of bootstrap. But at which level should one allow the replicate trees to vary from each other: simply where the new splits are created, or higher up?
Finally, the methodology of audit sampling inference needs to be worked out for register-based fractional counting (Section \ref{RBAA}). It will be possible to treat the coverage survey sample, or part of it, as the audit sample, whether the coverage survey is designed to accommodate active learning or not. This is necessary in order to provide the statistical argument for the transition towards a register-based auditing-assisted APD approach in the long term, in replacement of the more costly continuous coverage survey.
\subsection{Producing social statistics in the new environment}
Though not central to this report, it must be pointed out that
\textit{the production of social statistics requires a broad perspective to design and estimation in the new environment}.
The specifics of the future provision of population statistics will change considerably in the UK. Traditional MYEs with decennial census updating will no longer be the foundation of social statistics on temporally varying topics and phenomena of interest. It would be narrow-minded and ineffective to simply replace the population benchmarks, albeit produced under a different APD approach beyond 2021, while keeping the same design strategies across the spectrum of social survey programmes.
For instance, as mentioned earlier, a two-phase approach is currently being developed in Italy, where the first-phase sample targets mainly the population statistics, and the major social survey samples are selected as negatively coordinated second-phase samples. It is yet unclear whether this is a suitable solution in the UK. Neither is it necessarily the most effective approach, generally speaking or in the long run, when it comes to the combined use of the coverage survey and other social surveys.
The coverage survey as currently envisaged may no longer be necessary, given the transition to a register-based auditing-assisted APD approach to population statistics. How to combine audit sampling and on-going social surveys will then be a different overall design question. Indeed, greater use of administrative data is expected to extend to the area of social statistics as well. It is thus perhaps appropriate to envisage a future landscape of register-based auditing-assisted population \emph{and} social statistics.
\section{Introduction}
At the dawn of modern cosmology, one of the reasons Einstein first rejected the Friedmann solution was that it contains an initial singularity. When he was forced to accept it, he claimed that the initial singularity in the Friedmann universe was General Relativity (GR) pointing us to its own limits of applicability. In fact, GR is generally plagued with singularities~\cite{fried1,fried2,fried3}.
New physics must take place near the singularity, where the curvature of space-time and the energy-density of the matter fields attain immensely high values. This new physics is not evident, because the singularities appear under very general and reasonable assumptions.
There are classical extensions to GR that can be proposed, such as non-minimal couplings between matter and gravitational degrees of freedom, the addition of curvature squared terms in the gravitational Lagrangian, and the presence of exotic matter fields, many of them coming from effective actions taking into account quantum effects that can eliminate the cosmological singularity; see~\cite{bounce-classical1,bounce-classical2,bounce-classical3,bounce-classical4,bounce-classical5,bounce-classical6,bounce-classical7,bounce-classical8,bounce-classical9,bounce-classical10,bounce-classical11,bounce-classical12,bounce-classical13,reviews1,reviews2} for some reviews. Another route, in analogy with the procedure adopted to treat the singularities present in the classical description of matter (the instability of the classical model of the atom, the divergence of the electromagnetic field near the electron), is to expect that a proper and full quantization of GR could eliminate these singularities. However, a consensual theory of quantum gravity is not yet available, with many proposals still under construction~\cite{qg1,qg2,qg3,qg4}. Nevertheless, in the case of cosmology, as observations indicate that the Universe was nearly homogeneous when it was very hot and dense, with small inhomogeneous perturbations around this very symmetric state~\cite{CMB,nucleo}, one can design an effective quantum theory for cosmology, where the complete configuration space of GR and matter fields, called superspace, is reduced to a subset containing only the homogeneous and perturbation degrees of freedom, called midi-superspace, where the technical problems surrounding quantum gravity are dramatically simpler. This line of investigation is called quantum cosmology~\cite{qc1,qc2,qc3,qc4,qc5,qc6,qc7,qc8,qc9,qc10,qc11,qc12,qc13,qc14}. The rigorous mathematical justification for this reduction is not yet known,
but it is expected that, if one is not very close to the Planck length, this approach can not only furnish the right corrections to the classical cosmological models in the extreme physical situations around the singularity, but it can also teach us the sort of properties a complete theory of quantum gravity might have. Some relevant questions are: What is a singularity in quantum space-time? Does a classical singularity survive quantization? How can the classical limit be reached?
What is the meaning of time in quantum space-time?
Nevertheless, beyond all the problems surrounding quantum gravity, there is an extra fundamental question concerning the application of quantum theory to cosmology. As we know, in the usual Copenhagen interpretation~\cite{bohr,hei,von}, the wave function gives the probability density amplitude for an external observer to measure one of the eigenvalues of a Hermitian operator describing an observable of a physical system in state $|\Psi\rangle$. In the measuring process, the system must interact with a measuring apparatus. In the quantum description of the whole process, the total wave function describing the system and apparatus bifurcates into many branches, each one containing one of the possible results of the measurement. However, at the end of the measurement process, just one value is obtained; hence, the total wave function must collapse into one of the branches. This is a non-unitary, non-linear process that cannot be described by the unitary quantum evolution. The intervention of the classical observer imposes a break on the quantum description, bringing to actual existence the many potentialities the quantum state describes. Of course, one cannot apply this picture to the Universe as a whole, as, by the definition of Universe, there is nothing external to it that can bring to actual existence all the potentialities described in a quantum state of the Universe. In this scenario, quantum cosmology does not make any sense; it cannot describe the objective reality we experience in the world; it is an empty theory. One should then give up applying quantum theory to cosmology in order to solve the classical cosmological problems. This is a good example that corroborates an important criticism of Einstein's concerning quantum theory in the Copenhagen framework~\cite{einstein}: `Contemporary quantum theory … constitutes an optimum formulation of [certain] connections … [but] offers no useful point of departure for future developments.'.
Fortunately, there are alternative quantum theories. One can cite the Many Worlds Interpretation (MWI)~\cite{eve}, where the wave function does not collapse and all potentialities are realized, each in its own branch, the branches being unaware of each other; or the Spontaneous Collapse approach, where the unitary Schr\"odinger evolution is supplemented with a non-linear evolution in which the collapse of the wave function takes place physically~\cite{rim,pen}; among others. In both approaches, there is no need for an external agent to turn the quantum potentialities into actual facts. { These alternative quantum theories have been used in quantum cosmology; for some examples, see~\cite{qc3,mwiqc2,rimqc1,rimqc2,chqc1,chqc2}.}
The framework we will use here is de Broglie--Bohm quantum theory~\cite{bohm,hol,duerr}. In this approach, the point in configuration
space describing the degrees of freedom of the physical system and the measuring apparatus is supposed to exist,
independently of any observations. This point refers to particle positions and/or field configurations. In the splitting of the total wave function, the point in configuration space will enter
into one of the branches, depending on the initial positions
before the measurement interaction, which are unknown. The other branches will be empty, and it can be shown~\cite{bohm,hol,duerr} that the empty waves can never interact with the actual degrees of freedom describing the physical system, measuring apparatus, or any other external agent.
Hence, no
observer can be aware of these empty branches. We thus have
an effective but not real collapse, as the empty waves continue to exist, but now, contrary to MWI, with no multiplication of actual worlds. There is only one actual world and a profusion of innocuous empty waves.
As in the MWI and Spontaneous Collapse approaches, the presence of an external agent is not necessary for understanding quantum measurements, and the quantum dynamics are always valid. In these frameworks, quantum theory can be viewed as the fundamental theory of Nature, applicable to all physical systems, including the Universe itself, from which classical mechanics is a by-product, under certain physical conditions. Hence, in these formulations of quantum theory, quantum cosmology makes sense, and it can be~studied.
In this paper, I will summarize the results that were obtained with the application of the dBB quantum theory to quantum cosmology. In fact, the assumption that particle positions and/or field configurations are actual makes quantum cosmology conceptually simpler. The concepts of quantum singularities, how they can be removed, and the classical limit can be easily obtained. The notion of time emerges naturally from timeless quantum dynamics, and the Schr\"{o}dinger equation for quantum inhomogeneous perturbations in quantum homogeneous backgrounds is dramatically simplified under the dBB assumptions, allowing the calculation of many cosmological observable quantities. Finally, a sound interpretation of the wave function of the Universe emerges in this framework; see \mbox{Section \ref{sec3}}. { In technical terms, however, the dBB quantum theory adds further computational difficulties, as it may require the calculation of the quantum trajectories of the quantum degrees of freedom, which can be extremely hard in general. However, in the framework of cosmology, the necessary computations to be performed are not difficult, and they help the construction of simple extensions of the quantum equations for quantum cosmological perturbations, as we will see in this paper.}
The review will be divided as follows: In Section \ref{sec2}, I will summarize the main features of the dBB quantum theory. In Section \ref{sec3}, I will apply it to quantum cosmology, considering first the homogeneous background, in order to discuss the singularity problem, and then the inhomogeneous perturbations, where a notion of time emerges. In Section \ref{sec4}, I will present some important results concerning the evolution of quantum cosmological perturbations in quantum backgrounds without singularities, and their confrontations with observations and inflation. In Section \ref{sec5}, I will show that the dBB approach explains, in a quite simple way, an old controversy: the quantum-to-classical transition of the quantum inhomogeneous cosmological perturbations that evolved to form all the structures in the Universe, which are, of course, classical. I finish in Section \ref{sec6} with a discussion and conclusions.
\section{The de Broglie--Bohm Quantum Theory}\label{sec2}
A good way to motivate the construction of the dBB quantum theory in the framework of non-relativistic particles is through the words of John Stewart Bell~\cite{bell}:
`The kinematics of the world, in this orthodox picture, is given by a
wave function for the quantum part, and classical variables - variables which {\em have} values - for the classical part:
$(\Psi(t,q ...), X(t) ...)$. The $X$’s are somehow macroscopic. This is not
spelled out very explicitly. The dynamics is not very precisely
formulated either. It includes a Schr\"{o}dinger equation for the
quantum part, and some sort of classical mechanics for the
classical part, and `collapse’ recipes for their interaction.
It seems to me that the only hope of precision with the dual $(\Psi,x)$
kinematics is to omit completely the shifty split, and let both $\Psi$ and $x$
refer to the world as a whole. Then the $x$’s must not be confined to
some vague macroscopic scale, but must extend to all scales’.
Following Bell's proposal, particle positions must also be considered in order to completely determine the state of a quantum system. Then, besides the Schr\"{o}dinger equation for $\Psi$, one must postulate an equation for $x$. From the Schr\"{o}dinger equation for a single non-relativistic particle in the coordinate representation,
\begin{equation}
\label{bsc}
i \hbar \frac{\partial \Psi (x,t)}{\partial t} = \biggl[-\frac{\hbar ^2}{2m} \nabla ^2 +
V(x)\biggr] \Psi (x,t),
\end{equation}
where $V(x)$ is the classical potential, and writing $\Psi = R \exp (iS/\hbar)$, one obtains the following two real equations:
\begin{equation}
\label{bqp}
\frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m} + V
-\frac{\hbar ^2}{2m}\frac{\nabla ^2 R}{R} = 0,
\end{equation}
\begin{equation}
\label{bpr}
\frac{\partial R^2}{\partial t}+\nabla \cdot \left(R^2 \frac{\nabla S}{m}\right) = 0.
\end{equation}
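For completeness, the intermediate algebra is as follows: inserting $\Psi = R \exp (iS/\hbar)$ into Equation~\eqref{bsc} and separating the result into real and imaginary parts, one finds
\begin{equation}
-R\frac{\partial S}{\partial t} = -\frac{\hbar^2}{2m}\left[\nabla^2 R - \frac{R(\nabla S)^2}{\hbar^2}\right] + VR, \qquad
\hbar\frac{\partial R}{\partial t} = -\frac{\hbar}{2m}\left(2\nabla R \cdot \nabla S + R\,\nabla^2 S\right),
\end{equation}
which reduce to Equations~\eqref{bqp} and \eqref{bpr} after dividing the first by $-R$ and multiplying the second by $2R/\hbar$.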
Equation \eqref{bqp} looks like a Hamilton--Jacobi equation for $S$, with an extra term at the end. On the other hand, Equation~\eqref{bpr} can be viewed as the continuity equation for a density distribution of an ensemble of particles given by $R^2$, with $\nabla S/m$ being the velocity field of this ensemble (the quantum current divided by $R^2$). Hence, both equations suggest the following postulates~\cite{bohm,hol,duerr}:
(i) Quantum particles follow objectively real trajectories $x(t)$. They must satisfy the so-called guidance equation:
\begin{equation}
p=m\dot{x}=\nabla S,
\label{guidance1}
\end{equation}
or, equivalently, as first proposed by de Broglie,
\begin{equation}
v(x(t),t)\equiv \dot{x}=\frac{J}{R^2},
\label{guidance2}
\end{equation}
where $v$ is the velocity field, and $J$ is the usual quantum current $J={\rm{Im}}(\hbar\Psi^*\nabla\Psi/m)$.
(ii) The particles are never separated from a quantum field
$\Psi$, which acts on them through Equation~\eqref{guidance1} and satisfies the Schr\"{o}dinger
Equation~(\ref{bsc}).
These are two first-order equations in time, which demand knowledge of initial positions $x_0$ and initial quantum field configurations $\Psi (x,0)$ to be solved uniquely. Initial field configurations can usually be obtained through the preparation of the quantum system, by measuring a complete set of observables in the system. The initial position of the particle, instead, cannot be obtained without disturbing the quantum system. Hence, one cannot exactly know the position of the particle; it is the hidden variable of the theory.
{ Note that solving Equation~\eqref{guidance1} can be very difficult, especially in a many-particle system or field theory. However, as we will see below, both Equations (\ref{bsc}) and \eqref{guidance1} lead to the same probabilistic results as in Copenhagen quantum theory; hence, one can use the usual mathematical techniques to derive these probabilities. Furthermore, in quantum cosmology, what one usually needs is the quantum trajectories (also called the Bohmian trajectories) of the background geometry only, which are not difficult to calculate. Even in the case of quantum inhomogeneous perturbations, as they are supposed to originate from an adiabatic vacuum quantum state, their quantum trajectories are also simple to calculate, as we will see in Section \ref{sec5}. Hence, obtaining solutions from Equation~\eqref{guidance1} will not be problematic in the physical situations with which we are dealing in this paper.}
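To make this concrete in the simplest setting, the following minimal numerical sketch (my own illustration, with all parameter values assumed) integrates the guidance Equation~\eqref{guidance2} for a free Gaussian wave packet, and compares the result with the known analytic Bohmian trajectories $x(t) = x_0\,\sigma(t)/\sigma_0$, where $\sigma(t)$ is the packet width:
\begin{verbatim}
# Minimal sketch (illustrative parameter values): Bohmian trajectories
# for a free 1D Gaussian packet, integrating the guidance equation
# dx/dt = J/R^2 = (hbar/m) Im[(dpsi/dx)/psi].
import numpy as np
from scipy.integrate import solve_ivp

hbar = m = sigma0 = 1.0

def psi(x, t):
    # Exact free evolution of a Gaussian centered at x = 0 with zero
    # mean momentum and initial width sigma0.
    alpha = 1.0 + 1j * hbar * t / (2.0 * m * sigma0**2)
    return ((2.0 * np.pi * sigma0**2)**(-0.25) / np.sqrt(alpha)
            * np.exp(-x**2 / (4.0 * sigma0**2 * alpha)))

def velocity(t, x, eps=1e-6):
    # Guidance equation, with the spatial derivative taken numerically.
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2.0 * eps)
    return (hbar / m) * np.imag(dpsi / psi(x, t))

t_end = 5.0
for x0 in (0.5, 1.0, 2.0):
    sol = solve_ivp(velocity, (0.0, t_end), [x0], rtol=1e-8)
    exact = x0 * np.sqrt(1.0 + (hbar * t_end / (2.0 * m * sigma0**2))**2)
    print(f"x0 = {x0}: numerical {sol.y[0, -1]:.4f}, analytic {exact:.4f}")
\end{verbatim}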
Equations~(\ref{bqp}) and \eqref{guidance1} can indeed be viewed as a Hamilton--Jacobi description of the particle, which is subject to a new quantum potential, besides the classical potential $V$, given by
\begin{equation}
\label{qp}
Q \equiv -\frac{\hbar ^2}{2m}\frac{\nabla ^2 R}{R}.
\end{equation}
From both Equations~(\ref{bqp}) and \eqref{guidance1}, one can then obtain the equation of motion:
\begin{equation}
\label{beqm}
m \frac{d^2 x}{d t^2} = -\nabla V - \nabla Q.
\end{equation}
In a statistical ensemble of particles in the quantum
state $\Psi$, if the probability density for the unknown initial
position is given by $P(x_0)=R^2(x=x_0,t=t_0)$, Equation~(\ref{bpr})
guarantees that $R^2(x,t)$ will give the distribution of positions
at any time, and all the statistical predictions of quantum mechanics are recovered. The distribution $R^2$ is called a typical
distribution~\cite{duerr}. Note, however, that, in a fundamental theory describing the dynamics of quantum particles, there is no logical connection between
the distribution of the unknown initial positions with $R^2$. Nevertheless,
whenever $P\neq R^2$, Equations~(\ref{bpr})
and \eqref{guidance1} make $P$ rapidly relax to $R^2$, at least at a coarse-grained level, in many circumstances. This is an analog of the $H$-theorem of statistical mechanics
applied to quantum mechanics; see~\cite{valentini1} for details.
Hence, it seems that the dBB dynamics push physical systems to the typical distribution $P=R^2$, also called the quantum equilibrium distribution.
Note that, if one can find physical systems that
have not relaxed to $P=R^2$, then their statistical predictions will
not agree with conventional quantum mechanics, and the dBB theory could be tested.
The possibility of the existence of such systems, such as relic gravitational
waves, is now under investigation~\cite{valentini2}.
In conclusion, probabilities are not fundamental
in this theory, and if the tendency to quantum equilibrium is really general, then it may not be necessary to postulate the Born rule, as it could be obtained
through the dynamics themselves.
Let us make some final comments:
(a) The $\Psi$ field guides the particle motion through Equation~\eqref{guidance1}, whose effects can be encoded in the quantum potential $Q$. In the case of a many-particle system, when $\Psi$ is entangled, $Q$ can be highly non-local. This is very important, because Bell's inequalities~\cite{bell-paper}, together with Aspect's experiments~\cite{aspect}, show that, in general, a quantum theory must be non-local, which is the case for the dBB quantum theory. Additionally, while solving the Schr\"{o}dinger equation, boundary conditions are usually imposed on it, as in the two-slit experiment. Hence, the $\Psi$ field contains this information, which is transmitted to the particle motion through Equation~\eqref{guidance2}. In other words, although there is no classical potential along the route of the quantum particle towards the screen, the quantum potential is not null; it encodes the information contained in $\Psi$, leading to a Bohmian trajectory that is quite complicated. When taken together, the different Bohmian trajectories of an ensemble of quantum particles with an initial position distribution given by $R^2(x,t=0)$ yield the interference pattern typical of the two-slit experiment. Contextuality is also present in the dBB quantum theory.
(b) The classical limit is very simple: we have only to find
the conditions for having $Q\approx0$ when compared with the classical kinetic and
potential energy terms.
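As a simple illustration (my own, assuming a Gaussian profile), for an amplitude $R \propto \exp [-x^2/(4\sigma^2)]$, the quantum potential \eqref{qp} is
\begin{equation}
Q = \frac{\hbar^2}{4m\sigma^2}\left(1-\frac{x^2}{2\sigma^2}\right),
\end{equation}
which is of order $\hbar^2/(4m\sigma^2)$ inside the packet; it becomes negligible, compared with the classical kinetic and potential energies, for large masses or wide, slowly varying packets, and Equation~\eqref{beqm} then reduces to Newton's second law.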
(c) Note that, although assuming the ontology of the position of particles in space through the new proposed guidance relation (\ref{guidance1}),
the dBB theory has, at most, the same number of postulates as the Copenhagen interpretation, since it dispenses with the collapse postulate. If the Born rule can also be justified within this framework, which is still under debate~\cite{duerr,valentini1,valentini}, then the dBB theory is logically simpler than the Copenhagen interpretation, as it would have one postulate fewer.
A detailed analysis of the dBB theory in the context of quantum field theory can be found in Refs.~\cite{hol,oqed,goldPRL,gold2,struyveN}.
Generally, it is assumed that field configurations are actual. The probabilistic predictions are in accordance with Poincar\'{e} invariance, but the hidden Bohmian evolution of the fields may violate this symmetry.
Let us end this section in the same way it began, with some words of John Bell~\cite{bell}:
`In 1952 I saw the impossible done. It was in papers by David Bohm.
… the subjectivity of the orthodox version, the
necessary reference to the ‘observer,’ could be eliminated. . . . But why
then had Born not told me of this ‘pilot wave’? If only to point out
what was wrong with it? Why did von Neumann not consider it? . . .
Why is the pilot wave picture ignored in text books? Should it not be
taught, not as the only way, but as an antidote to the prevailing complacency?
To show us that vagueness, subjectivity, and indeterminism,
are not forced on us by experimental facts, but by deliberate theoretical
choice?’
\section{The de Broglie--Bohm Theory Applied to Quantum Cosmology: Background and~Perturbations}\label{sec3}
The Hamiltonian describing the gravitational field and all the non-gravitational degrees of freedom in the Universe has the following structure:
\vspace{-6pt}
\begin{eqnarray}
\label{ham-gen}
H &=&\int \mathrm{d}^3x \{N(x){\cal{H}}_0[h(x),\pi_h(x),\varphi(x),\pi_{\varphi}(x)]
\nonumber \\ &+&
N^i(x){\cal{H}}_i[h(x),\pi_h(x),\varphi(x),\pi_{\varphi}(x)]\},
\end{eqnarray}
where $N$ and $N^i$ are Lagrange multipliers, the so-called lapse and shift functions; $h$ denotes the gravitational (geometric) degrees of freedom, usually the space metric of the space-like hypersurfaces; $\varphi$ represents all non-gravitational degrees of freedom; and $\pi_h,\pi_{\varphi}$ are their respective conjugate momenta. The quantities ${\cal{H}}_0$ and ${\cal{H}}_i$ are the super-Hamiltonian and super-momentum constraints, which originate from the invariance of the full theory under general time and space coordinate transformations. They are constrained to vanish, ${\cal{H}}_0 \approx 0$, ${\cal{H}}_i \approx 0$, where the symbol $\approx$ means `weak equality', in the sense that they are zero, but the Poisson brackets between them and other canonical variables may not be zero.
The Hamilton equations in terms of the Poisson brackets arise as usual:
\begin{eqnarray}
\label{ham-eq}
\dot{h}(x) &=& \{h(x),H\} , \;\;\;\;\; {\dot{\pi}}_h(x) = \{\pi_h(x),H\} \nonumber \\
\dot{\varphi}(x) &=& \{\varphi (x),H\} , \;\;\;\;\; {\dot{\pi}}_{\varphi}(x) = \{\pi_{\varphi} (x),H\},
\end{eqnarray}
yielding the field evolutions in terms of an arbitrary coordinate time $t$. Note that {the Hamiltonian $H$ is null} due to the constraint equations, a feature of any theory that is invariant under time reparametrizations. {All the constraints are first class: the Poisson brackets among themselves are null in the region of phase space where they are satisfied.}
Following the Dirac quantization procedure for constrained systems, the first class constraints, when turned into operators acting on a Hilbert space, must annihilate the wave functional of the Universe $\Psi$ (expressed in the field representation):
\begin{equation}
\label{WDW}
{\cal{H}}_0[\hat{h}(x),\hat{\pi}_h(x),\hat{\varphi}(x),\hat{\pi}_{\varphi}(x)] \Psi[h(x),\varphi(x)]=0,
\end{equation}
and
\begin{equation}
\label{super-momentum}
{\cal{H}}_i[\hat{h}(x),\hat{\pi}_h(x),\hat{\varphi}(x),\hat{\pi}_{\varphi}(x)] \Psi[h(x),\varphi(x)]=0.
\end{equation}
The Schr\"{o}dinger equation $i\partial\Psi/\partial t = \hat{H}\Psi$ only tells us that the wave functional does not explicitly depend on time, as long as
$\hat{H}\Psi=0$ due to the constraint equations (\ref{WDW},\ref{super-momentum}). We are now using natural units $\hbar=c=1$.
Equation \eqref{super-momentum} just implies that $\Psi$ is invariant under space coordinate transformations of the fields. Equation \eqref{WDW} is the so-called Wheeler--DeWitt equation~\cite{WDW}.
These quantum equations render the interpretation of $\Psi$ quite obscure. First, as we have seen, time has disappeared. It is believed that it is hidden in the Wheeler--DeWitt equation, in which one field degree of freedom will play the role of a physical clock. However, apart from some exceptions, as we will see, this clock variable is not transparent. In general, the Wheeler--DeWitt equation has a Klein--Gordon structure~\cite{WDW}, which makes it difficult to assign a probabilistic interpretation for $\Psi$, as is well known. Some investigations have tried to put the Wheeler--DeWitt equation into a Schr\"odinger form, but, when possible, it was achieved only in an implicit form; see~\cite{kuchar} for a discussion on these issues.
However, when one uses the dBB quantum theory, a quite reasonable interpretation of the wave functional of the Universe emerges. As we have seen, in the dBB theory, one also imposes the guidance relations to the actual field configurations, which are supposed to be objectively real. Looking at Equation~\eqref{guidance1}, one can formally write
\begin{equation}
\label{guidance-fields}
\pi_h(x) = \frac{\delta S[h,\varphi]}{\delta h(x)}, \;\;\;\;\;
\pi_{\varphi}(x) = \frac{\delta S[h,\varphi]}{\delta \varphi(x)},
\end{equation}
where $\pi_{h}(x),\pi_{\varphi}(x)$ are the canonical momenta of $h,\varphi$, which can be expressed in terms of the time derivatives of $h,\varphi$, as usual. For instance, in GR, the canonical momentum conjugate to the $3$-metric $h^{ij}$ reads
\begin{equation}
\label{37}
\pi _{ij} = \frac{\delta L_{GR}}{\delta ({\dot{h}}^{ij})} =
- h^{1/2}(K_{ij}-h_{ij}K),
\end{equation}
where $K_{ij}$ is the extrinsic curvature given by
\begin{equation}
\label{33}
K_{ij} = \frac{1}{2N}(2D_{(i}N_{j)}-{\dot{h}}_{ij}) ,
\end{equation}
and $D_i$ is the three-dimensional covariant derivative.
The quantity $S[h,\varphi]$ is the phase of the wave functional of the Universe $\Psi[h,\varphi]$. Hence, Equation~\eqref{guidance-fields} yields the evolution of all the fields in terms of coordinate time $t$ once one knows $\Psi$. This suggests a nomological interpretation of the wave functional of the Universe: it yields the laws of Nature, in the same way as a Lagrangian and/or a Hamiltonian does in classical mechanics; {see~\cite{goldNom}.} Consequently, the wave functional of the Universe has nothing to do with probabilities, which is quite sensible, as one is talking about a single system, the Universe. Furthermore, it is not surprising that the wave functional of the Universe does not depend on an explicit external time parameter and that the equation that determines it is not generally suitable for inducing a probability measure. However, it would be helpful, in this way of thinking, to find boundary conditions for Equation~\eqref{WDW} where a particular solution emerges as {the} wave functional of the Universe, from which the dynamics of all fields are obtained. Some proposals are under discussion; see, for instance,~\cite{hawking,vilenkin}.
The next natural question to pose is: how do probabilities emerge in this conceptual framework? Of course, they should arise when one considers subsystems contained in the Universe, where probabilities can naturally be defined. Indeed, in the dBB approach, one can use the notion of conditional wave functions in order to describe subsystems. Let us suppose that the Universe contains only two fields, $\varphi_1$ and $\varphi_2$. Hence, the wave functional is given by $\Psi[\varphi_1,\varphi_2]$. Suppose one can calculate the Bohmian trajectory for $\varphi_1 \rightarrow \varphi_1(t)$.
Then, one can define the conditional wave functional $\Psi_c[t,\varphi_2]=\Psi[\varphi_1(t),\varphi_2]$, which gives all the information about the evolution of $\varphi_2$. Under certain conditions, the original equation for $\Psi$ becomes a Schr\"odinger equation for $\Psi_c$.
In this situation, quantum equilibrium arises~\cite{duerr,valentini,valentini2}, and one can understand $|\Psi_c|^2$ as a probability measure for subsystems discriminated only by $\varphi_2$, in accordance with daily quantum mechanics.
The use of the dBB quantum theory was essential for constructing this whole conceptual framework. Let us now apply it to quantum cosmology. In this reduced framework, most of the problems associated with the full theory, which are still unsolved, are simpler to handle. As we mentioned in the Introduction, cosmological observations inform us that the Universe was very homogeneous and isotropic when it was very hot and dense, with small inhomogeneous perturbations over this very symmetric state. Hence, one restricts the configuration space to the domain in which the geometry of space-time is given by
\begin{equation}
\label{perturb}
g_{\mu\nu}(t,{\bf {x}})=\bar{g}_{\mu\nu}(t)+h_{\mu\nu}(t,{\bf {x}}),
\end{equation}
where
\begin{equation}
\label{linha-fried}
{\rm {d}} s^{2}=\bar{g}_{\mu\nu}(t) {\rm {d}} x^{\mu} {\rm {d}} x^{\nu}=N^{2}(t) {\rm {d}} t^2 -
a^{2}(t)\delta_{ij} {\rm {d}} x^{i} {\rm {d}} x^{j},
\end{equation}
and
\begin{eqnarray}
\label{perturb-componentes}
h_{00}(t,{\bf {x}})&=&2N^{2}(t)\phi(t,{\bf {x}}) \nonumber \\
h_{0i}(t,{\bf {x}})&=&-N(t)a(t)B_{,i}(t,{\bf {x}}) \\
h_{ij}(t,{\bf {x}})&=&2a^{2}(t)\left[\psi(t,{\bf {x}})\gamma_{ij}-E_{,ij}(t,{\bf {x}})\right], \nonumber
\end{eqnarray}
where ${}_{,i}$ denotes $\partial/\partial x^i$, and all the quantities in Equation~\eqref{perturb-componentes} are assumed to be very small when compared with the background degrees of freedom.
Note that I am assuming flat space-like homogeneous and isotropic hypersurfaces just for simplicity; all the calculations can be generalized for spherical and hyperbolic cases. Additionally, it is a good approximation, as indicated by cosmological observations.
Additionally, for simplicity, I will consider just one matter degree of freedom, described by the scalar field
\begin{equation}
\label{fluid-velocity}
\varphi(t,{\bf {x}}) = \bar{\varphi}(t) + \delta\varphi(t,{\bf {x}}),
\end{equation}
where $\bar{\varphi}(t)$ is the background field and $\delta\varphi(t,{\bf {x}})$ its first order perturbation.
One should also consider vector and tensor perturbations. Vector perturbations are usually not relevant in inflation and bounce scenarios; see~\cite{vector} for details. Tensor perturbations, or primordial gravitational waves, are represented by the transverse-traceless spatial tensor $h_{ij}^{\rm TT}(t,{\bf {x}})$. Their treatment is technically similar to, but much easier than, that of the scalar perturbations. The main results for them will be presented below. For details, see~\cite{PPNGW,PPNGW2}.
I will take a conservative point of view, in which the gravitational field dynamics are described by GR, and the matter field, by a scalar field minimally coupled to gravity, described by the general Lagrangian density $p(X,\varphi)$, where $X=g^{\mu\nu}\partial_{\mu}\varphi \; \partial_{\nu}\varphi/2$. When $p=X^n$, it describes a perfect fluid with equation of state $p=w\rho$, where $p$ is the pressure, $\rho$ is the energy density, and $w=1/(2n-1)$ is constant. This Lagrangian density can also describe a canonical scalar field $p=X-V(\varphi)$. In all cases, $p$ is the pressure associated with the scalar field; see~\cite{mukh-book}. Hence, the action reads
\begin{equation}
\mathcal{S}= \mathcal{S}_{_\mathrm{GR}} + \mathcal{S}_\mathrm{fluid}
= -\frac{1}{2 l_{P}^2} \int \sqrt{-g} R {\rm {d}}^4 x - \int \sqrt{-g} p(X,\varphi) {\rm {d}}^4 x,
\label{action}
\end{equation}
where $ l_{P}=(8\pi G_N)^{1/2}\equiv\sqrt{6}\kappa$ is the Planck length in natural units.
By inserting Equations~(\ref{perturb})--(\ref{perturb-componentes}) into the action (\ref{action}), one can construct the Hamiltonian of the system up to second order in the perturbation expansion, through the usual Legendre transformations. After solving the super-momentum constraint and performing suitable canonical transformations, without ever using the background equations of motion, the Hamiltonian can be generally written as
\begin{equation}
\label{hfinal-vinculos-escalares-hidro} H=N\left[ H_{(0)} +
H_{(2)}\right] ,
\end{equation}
where $H_{(0)}$ and $H_{(2)}$ are the zeroth and second order Hamiltonians, yielding the background and linear cosmological perturbation dynamics, respectively. Note that their sum is constrained to zero, which is a consequence of the invariance of GR under time reparametrizations. Let us now show in detail the two interesting cases of a perfect fluid and a canonical scalar field.
\subsection{Perfect Fluids}
For perfect fluids, one has $p=X^n$. The calculations of $H_{(0)}$ and $H_{(2)}$ lead to (see~\cite{PPNscalar} for details)
\begin{equation}
\label{h00} H_{(0)}\equiv
-\frac{P_{a}^{2}}{4a}+\frac{P_{T}}{a^{3\omega}},
\end{equation}
and
\begin{equation}
\label{h02} H_{(2)}\equiv \frac{1}{2a^{3}}\int
{\rm {d}}^{3}x\pi^{2}({\bf {x}})+\frac{a\omega}{2} \int {\rm {d}}^{3}x v^{,i}({\bf {x}})v_{,i}({\bf {x}}).
\end{equation}
In the background Hamiltonian, $P_T$ arises from the canonical transformation
\begin{equation}
T= \frac{1}{c(1+w)} \frac{\varphi}{ p_{\varphi}^{w}} , \qquad P_{T}= c p_\varphi^{1+w},
\label{can}
\end{equation}
where $p_{\varphi}$ is the momentum conjugate to $\varphi$ and
$c=1/(w\sqrt{2}^{1+3w}n^{1+w})$ is a constant. It is thus connected to the matter degree of freedom. Note, however, that it appears linearly in the Hamiltonian; hence, it can be understood as being canonically conjugate to a clock time $T$. Indeed, this is physically motivated from the definition of $T$ in terms of $\varphi$ and $p_{\varphi}$, and the fact that $\varphi$ is a velocity field potential, $V_{\mu} = \partial_{\mu} \varphi / (2X)^{1/2}$.
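A quick check shows that the transformation \eqref{can} is indeed canonical for the pair $(T,P_T)$:
\begin{equation}
\{T,P_T\} = \frac{\partial T}{\partial \varphi}\frac{\partial P_T}{\partial p_\varphi}-\frac{\partial T}{\partial p_\varphi}\frac{\partial P_T}{\partial \varphi} = \frac{1}{c(1+w)p_\varphi^{w}}\,c(1+w)p_\varphi^{w}-0 = 1.
\end{equation}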
The second order part, $H_{(2)}$, yields the dynamics of the perturbation field, $v({\bf x})$, which emerges as the single perturbation degree of freedom left.
When quantizing the system, as explained above, the operator version of the first class constraints must annihilate the wave functional
$\Psi[T,a,v({\bf x})]$,
\begin{equation}
\label{schroedinger}
(\hat{H}_{(0)} + \hat{H}_{(2)})\Psi=0.
\end{equation}
This is a case where Equation~\eqref{schroedinger}
assumes a Schr\"odinger form, because a natural time $T$ emerges from the degrees of freedom of the fluid
\begin{eqnarray}
\label{schr-fluid}
i\frac{\partial}{\partial T}\Psi &=&\frac{1}{4} \left\{
a^{(3w-1)/2}\frac{\partial}{\partial a} \left[
a^{(3w-1)/2}\frac{\partial}{\partial a}\right]
\right\}\Psi \nonumber \\&-&\biggl[\frac{a^{3w-1}}{2}\int
d^3x\frac{\delta^2}{\delta v^2({\bf x})}-\frac{a^{3w+1}w}{2}\int
d^3x v^{,i}({\bf x})v_{,i}({\bf x})\biggr]\Psi .
\end{eqnarray}
{Note that the operator ${\hat{P}}_a^2$ present in $\hat{H}_{(0)}$ is multiplied by $a^{3w-1}$ in Equation~\eqref{schr-fluid} (which can be understood as a particular case of the DeWitt metric~\cite{WDW}), yielding a factor ordering ambiguity. When ${\hat{P}}_a$ becomes a differential operator, there is one particular factor ordering that turns $ a^{3w-1} {\hat{P}}_a^2$ into a covariant one-dimensional Laplacian; hence, it is covariant under coordinate redefinitions. As it is a one-dimensional Laplacian, and one-dimensional manifolds are flat, there exists a special function of $a$ that plays the role of a Cartesian coordinate, in which this term can be written as a simple second order derivative. It reads}
\begin{equation}
\label{transf}
\chi=\frac{2}{3} (1-\omega)^{-1} a^{3(1-\omega)/2}.
\end{equation}
{I chose this particular factor ordering when writing Equation~\eqref{schr-fluid}, and I will use the variable $\chi$ in Section \ref{sec4} in order to solve Equation~\eqref{schr-fluid}.}
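Indeed, since $d\chi/da = a^{(1-3w)/2}$, one has $a^{(3w-1)/2}\,\partial/\partial a = \partial/\partial \chi$, and the kinetic term of Equation~\eqref{schr-fluid} collapses to
\begin{equation}
\frac{1}{4}\, a^{(3w-1)/2}\frac{\partial}{\partial a}\left[a^{(3w-1)/2}\frac{\partial}{\partial a}\right] = \frac{1}{4}\frac{\partial^2}{\partial \chi^2}.
\end{equation}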
I will assume that the background is not entangled with the perturbations
\begin{equation}
\label{ansatz1} \Psi[a,T,v({\bf x})]=\Psi_{(0)}(a,T)\Psi_{(2)}[T,v({\bf x})].
\end{equation}
As a consequence, Equation~\eqref{schr-fluid} leads to the equation
\begin{eqnarray}
\label{scrhoedinger-separado-fundo} &&i\frac{\partial}{\partial
T} \Psi_{(0)}(a,T)=\nonumber\\&&\frac{1}{4} \left\{
a^{(3w-1)/2}\frac{\partial}{\partial a} \left[
a^{(3w-1)/2}\frac{\partial}{\partial a}\right] \right\}
\Psi_{(0)}(a,T),
\end{eqnarray}
{for the zeroth order wave function $\Psi_{(0)}(a,T)$. As we will see in the next section, wave function solutions of this zeroth order equation yield, in the dBB quantum theory, a Bohmian trajectory $a(T)$ through the guidance equations. In this context, the second order equation for the perturbations described by the wave functional $\Psi_{(2)}[v({\bf x}),T]$ can be written as}
\vspace{-12pt}
\begin{eqnarray}
\label{scrhoedinger-separado-perturb}
&&i\frac{\partial}{\partial
T} \Psi_{(2)}[T,v({\bf x})]=
-\frac{a^{(3w-1)}(T)}{2}\int
d^3x\frac{\delta^2}{\delta v^2({\bf x})}\Psi_{(2)}[T,v({\bf x})]+\nonumber\\
&&\frac{w
a^{(3w+1)}(T)}{2}\int d^3x v^{,i}({\bf x})v_{,i}({\bf x})\Psi_{(2)}[T,v({\bf x})].
\end{eqnarray}
Hence, Equation~\eqref{scrhoedinger-separado-perturb} becomes a time-dependent Schr\"odinger equation for $\Psi_{(2)}$ when we substitute $a\rightarrow a(T)$.
One can further perform the time-dependent unitary transformation
\begin{equation}
\label{unitarias} U=\exp\left\{i\int
d^3x\left[\frac{\dot{a}(T)v({\bf x})}{2a(T)}-\frac{(v({\bf x})\pi({\bf x})+\pi({\bf x}) v({\bf x}))}{2}\ln (a(T))\right]\right\},
\end{equation}
yielding
the functional Schr\"odinger equation for the perturbations
\begin{equation}
i\frac{\partial\Psi_{(2)}[v({\bf x}),\eta]}{\partial \eta}= \int \mathrm{d}^3 x
\left(-\frac{1}{2} \frac{\delta^2}{\delta v^2({\bf x})} +
\frac{w}{2}v_{,i}({\bf x}) v^{,i}({\bf x}) - \frac{{a''}}{2a}v^2({\bf x}) \right)
\Psi_{(2)}[v({\bf x}),\eta], \label{schroedinger-conforme}
\end{equation}
written in terms of conformal time $d\eta=a^{3w-1}d T$ { (the cosmic proper time $\tau$ satisfies $d\tau = a^{3w} dT$; see Section \ref{sec4} for details; hence, $d\eta = d\tau /a$)}, and the new quantum variable $\bar{v}({\bf x})=av({\bf x})$, the usual gauge invariant
Mukhanov--Sasaki variable defined in~\cite{mukh-book} (we have omitted the
bars),
\begin{equation}
\label{vdefinition}
v({\bf {x}})=\frac{{a}^{\frac{1}{2}(3w-1)}}{\sqrt{6}}\biggl( \delta\varphi({\bf {x}})+\frac{2\sqrt{6}\sqrt{(w+1)P_{T}}}{{P}_{a}\sqrt{w}} {a}^{2-3w} {\psi}({\bf {x}})\biggr),
\end{equation}
expressed in terms of the background variables, and the perturbation fields $\psi({\bf {x}}),\delta\varphi({\bf {x}})$. It is connected to the gauge invariant Bardeen
potential $\Phi({\bf x})$ (see~\cite{mukh-book}) through
\begin{equation}
\label{vinculo-simples}
\Phi^{,i}\,_{,i}({\bf x}) =
-\frac{3\sqrt{(\omega+1)\bar{\rho}}}{2\sqrt{\omega}}a
\biggl(\frac{v({\bf x})}{a}\biggr)' ,
\end{equation}
where the prime denotes a derivative with respect to conformal time, and $\bar{\rho}$ is the background energy density.
Equation \eqref{schroedinger-conforme} is the usual functional Schr\"odinger equation for quantum linear perturbations in cosmological models with a single perfect fluid satisfying $p=w\rho$. However, the scale factor appearing in Equation~\eqref{schroedinger-conforme} is not the classical one but the Bohmian solution $a(\eta)$. This interpretation of Equation~\eqref{schroedinger-conforme} is only possible within the dBB theory, in which a Bohmian trajectory $a(\eta)$ can be defined. In other frameworks, where $a$ is a background quantum degree of freedom, the physical understanding of the Wheeler--DeWitt Equation~\eqref{schr-fluid} using the ansatz \eqref{ansatz1} implying Equation~\eqref{schroedinger-conforme} becomes conceptually much more intricate, if possible.
The dynamical equation for the quantum operator $\hat{v}({\bf x})$ in the Heisenberg picture~reads
\begin{equation}
\label{ff}
\hat{v}''({\bf x})-\omega \hat{v}^{,i}_{\ ,i}({\bf x})-\frac{a''}{a}\hat{v}({\bf x})=0.
\end{equation}
{The Fourier modes $v_{\bf k}$,}
\begin{equation}
\label{mode}
v({\bf x})=\int{\frac{d^3k}{(2\pi)^{3/2}}v_{\bf k} \textrm e^{\textrm i {\bf k} \cdot {\bf x}}},
\end{equation}
{evolve as}
\begin{equation}
\label{equacoes-mukhanov} v''_k+\biggl(\omega
k^2-\frac{{a''}}{a}\biggr)v_k=0.
\end{equation}
These equations have the same form as the equations for scalar
perturbations obtained in~\cite{mukh-book}. However, the function $a(\eta)$
is no longer a classical solution of the background equations but a
quantum Bohmian trajectory of the quantized background. Hence, different power spectra of quantum cosmological perturbations may emerge. In Section \ref{sec5}, we will present the background and quantum perturbation solutions concerning this case.
\subsection{The Canonical Scalar Field}
For the canonical scalar field, the calculations of $H_{(0)}$ and $H_{(2)}$ now yield (again without ever using the classical background equations; see~\cite{sandro-scalar} for details),
\begin{equation}
H_{(0)} = \frac{1}{\textrm e^{3\alpha}}\left[ -\frac{\Pi_\alpha^2}{2} + \frac{\Pi_\varphi^2}{2} + \textrm e^{6\alpha} V(\varphi) \right],
\label{h00fi}
\end{equation}
and
\begin{equation}
\label{h02fi}
H_{(2)}= \frac{1}{2}\int d^3x \left({\pi_v}^2 + {v}^{,i} {v}_{,i}+ 2\frac{z'}{z} {\pi_v}{v}\right),
\end{equation}
where we absorbed $\kappa$ in redefinitions of the scalar field and time in order to deal with dimensionless quantities. We set $a=\textrm e^{\alpha}$; $v({\bf x})$ is, again, the usual Mukhanov--Sasaki variable~\cite{mukh-book},
\begin{equation}
\label{vinculo-simples2}
v({\bf x}) = a\biggl(\delta\varphi({\bf x}) + \frac{{\varphi} ' \phi({\bf x})}{\cal{H}}\biggr) ,
\end{equation}
with primes denoting derivatives with respect to conformal time $\eta$, $\mathcal{H} = a'/a = \alpha'$, and $\pi_v$ is its canonical momentum. The background quantities $\Pi_\alpha$ and $\Pi_\varphi$ are the momenta canonically conjugate to background variables $\alpha$ and $\varphi$, respectively, and $N$ plays the role of a Lagrangian multiplier. Finally, $z$ is a background function defined as $z=a\varphi '/\cal{H}$.
As before, when quantizing the system, one obtains the Wheeler--DeWitt equation
\begin{equation}
\label{split-h0-fi}
(\hat{H}_{(0)}+\hat{H}_{(2)}) \Psi = 0.
\end{equation}
Supposing that the background evolution is not affected by the quantum perturbations through some quantum entanglement, one sets
\begin{equation}
\label{ansatz}
\Psi[\alpha,\varphi,v({\bf x})] = \Psi_0(\alpha,\varphi)\Psi_2[\alpha,\varphi,v({\bf x})].
\end{equation}
Note that, in this case, $\Psi_2$ depends on both $\alpha,\varphi$, as there is no explicit background variable playing the role of time.
The zeroth order part of Equation~\eqref{split-h0-fi},
\begin{equation}
\label{minibacks1}
\hat{H}_{(0)} \Psi_0 = 0,
\end{equation}
yields wave solutions that, in the dBB framework, lead to the Bohmian trajectories $\alpha(t)$ and $\varphi(t)$. They will be presented in Section \ref{sec5}.
Having a Bohmian solution for the background, guided by $\Psi_0$, one can now construct the conditional wave equation to describe the perturbations as
\begin{equation}
\label{cond-fi}
\chi[v({\bf x}),t] = \Psi_2[\alpha(t),\varphi(t),v({\bf x})].
\end{equation}
Using the guidance equations naturally coming from the zeroth order Hamiltonian~\eqref{h00fi} (in the time gauge $N=e^{3\alpha}$)
\begin{equation}
\dot \varphi = \partial_\varphi S , \quad \dot \alpha = - \partial_\alpha S ,
\label{s6-s}
\end{equation}
one obtains
\begin{equation}
\label{essential}
-\left(\frac{\partial S_0}{\partial \alpha}\right)\left(\frac{\partial \Psi_2}{\partial \alpha}\right) +
\left(\frac{\partial S_0}{\partial \varphi}\right)\left(\frac{\partial \Psi_2}{\partial \varphi}\right)=\dot{\alpha}\left(\frac{\partial \Psi_2}{\partial \alpha}\right) +
\dot{\varphi}\left(\frac{\partial \Psi_2}{\partial \varphi}\right)
= \frac{\partial \chi}{\partial t}.
\end{equation}
Using Equation~\eqref{essential} in \eqref{split-h0-fi}, implementing a time-dependent canonical transformation, similar to what was done in the perfect fluid case (see~\cite{felipe-scalar}), together with one assumption that I will describe soon, one obtains the
Schr\"odinger equation
\begin{equation}
\label{xo2}
\textrm i \frac{\partial \chi(v,\eta)}{\partial \eta} = \frac{1}{2}\int d^3x \left[ \hat{\pi}^2 + \hat{v}^{,i}\hat{v}_{,i}+ \frac{z'}{z} \left( \hat{\pi}\hat{v}+ \hat{v}\hat{\pi}\right)\right] \chi(v,\eta).
\end{equation}
Going to the Fourier modes $v_{\bf k}$ of the Mukhanov--Sasaki variable, defined as in Equation~\eqref{mode}, one obtains the mode equation
\begin{equation}\label{eqv2}
v_k''+ \left(k^2-\frac{z''}{z}\right)v_k = 0.
\end{equation}
As in the perfect fluid case, Equation (\ref{eqv2}) has the same form as the usual equations for the modes in classical backgrounds, but now, the background time functions present in it are the Bohmian trajectories. This can give rise to different effects in the region where the quantum effects
on the background are important, which can propagate to the classical~region.
Equation \eqref{xo2} was obtained under the hypothesis that there is never quantum entanglement between background and the perturbations. When the background behaves classically, this seems to be correct, as the semi-classical calculations, which rely on this hypothesis, yield the observed spectra of perturbations. When the background is quantum, around the bounce, there is nothing imposing the absence of quantum entanglement during this period. In this case, the assumption relies on simplicity. Note that it would be quite interesting to relax the hypothesis of the absence of quantum entanglement around the bounce and investigate its observational consequences.
Finally, I would like to emphasize that, in the case of the scalar fields, there is no degree of freedom that emerges as a possible clock in the original Wheeler--DeWitt equation; see Equations~(\ref{h00fi}),~(\ref{h02fi}) and (\ref{split-h0-fi}). Nevertheless, I was able
to construct a Schr\"odinger equation for the perturbations using the conditional wave function \eqref{cond-fi}. The assumption
of the existence of a Bohmian background quantum trajectory was essential for achieving this goal; see Equation~(\ref{essential}). {The procedure is analogous to what is performed in semi-classical quantum gravity~\cite{kieferSC}, where a notion of time emerges from a combination of the classical background variables (from an equation similar to (\ref{essential})), yielding a background clock, and a Schr\"odinger functional equation for the quantum non-gravitational degrees of freedom is obtained. Within the dBB approach, this can also be performed for a quantum background.} Hence, this is a concrete example, with physical implications, of what was discussed in Section \ref{sec2}. The original Wheeler--DeWitt equation does not have a Schr\"odinger form; it has, rather, a Klein--Gordon form; hence, no notion of probability naturally emerges.
However, in the dBB approach, this difficulty is not an insurmountable obstacle to further calculations, as the wave functional leads to the Bohmian trajectories for the background through the
guidance relations. These trajectories, which are assumed to be actual trajectories, can then be used to construct the conditional wave function for the perturbations,
yielding a Schr\"odinger equation for them. In this case, there is a typical probability distribution~\cite{duerr}, the Born distribution, which can also be the dynamical attractor of any reasonable probability distribution, at a cross-grained level, which is called the quantum equilibrium distribution (see~\cite{valentini}). Consequently, we get back to the standard quantum theory of cosmological perturbations, described by a wave functional with a probabilistic interpretation, but now, the mode perturbations evolve
in a background, the Bohmian background quantum trajectory, which does not always satisfy the background classical GR equations. For other approaches to quantum perturbations in quantum backgrounds, see~\cite{lqcp1,lqcp2,lqcp3}.
\section{Quantum Bouncing Backgrounds and Their Cosmological Perturbations}\label{sec4}
In this section, I will present some examples of background Bohmian solutions that are free of singularities and the features of their cosmological perturbations. I will focus on two matter contents: perfect fluids and a canonical field with an exponential potential.
Perfect fluids can model, quite well, the hot Universe, especially a radiation fluid with $w\approx 1/3$, which usually dominates this very hot phase, not only because of the massless fields present but also because the massive particles have their rest energy completely negligible at high temperatures. Another possibility is that the Universe becomes so dense that the sound velocity of the fluid becomes close to the speed of light, the so-called stiff matter. A canonical scalar field can represent this state if its dynamics are such that its potential energy becomes negligible with respect to its kinetic energy, yielding $p \approx \rho$. This is the case of the exponential potential, which also has other nice properties, as we will see.
\subsection{The Perfect Fluid}
In Section \ref{sec3}, I obtained the Schr\"odinger equations for the background wave function and the perturbation wave functional:
\begin{eqnarray}
\label{scrhoedinger-separado-fundo2} &&i\frac{\partial}{\partial
T} \Psi_{(0)}(a,T)=\nonumber\\&&\frac{1}{4} \left\{
a^{(3w-1)/2}\frac{\partial}{\partial a} \left[
a^{(3w-1)/2}\frac{\partial}{\partial a}\right] \right\}
\Psi_{(0)}(a,T),
\end{eqnarray}
and
\begin{equation}
i\frac{\partial\Psi_{(2)}[v({\bf x}),\eta]}{\partial \eta}= \int \mathrm{d}^3 x
\left(-\frac{1}{2} \frac{\delta^2}{\delta v^2({\bf x})} +
\frac{w}{2}v_{,i}({\bf x}) v^{,i}({\bf x}) - \frac{{a''}}{2a}v^2({\bf x}) \right)
\Psi_{(2)}[v({\bf x}),\eta], \label{schroedinger-conforme2}
\end{equation}
yielding the normal mode $v_k$ equation
\begin{equation}
\label{equacoes-mukhanov1} v''_k+\biggl(\omega
k^2-\frac{{a''}}{a}\biggr)v_k=0.
\end{equation}
Let us first solve the zeroth order equation. The guidance equations are
\begin{equation}
\label{guidancec} {\dot T} = \frac{N}{a^{3w}} , \qquad \dot{a}=-\frac{N}{2a} \frac{\partial S}{\partial a}.
\end{equation}
Note that, as the resulting Bohmian trajectories have objective reality, the characterization of singularities is very simple and direct, as in classical cosmology: it appears when $a(T)=0$, when space shrinks to zero. {Note, also, that I am treating $T$ in the same way as $a$, with its own guidance equation, and without fixing $N$ from the beginning. The guidance equation for $T$ implies that $dT = N dt/a^{3w}=d\tau /a^{3w}$, where $\tau$ is proper cosmic time. Usually, the time gauge is fixed before quantization, by choosing $N=a^{3w}$, yielding $T=t$ and $d\tau=a^{3w} d T$; hence, both methods are equivalent. As we will see in the sequel, combining both guidance equations of Equation~\eqref{guidancec} yields the same guidance equation for $a$ in terms of $T$, independently of $N$, as it would be obtained if we had fixed $N$ a priori.}
The dynamics can be simplified using the transformation
\begin{equation}
\chi=\frac{2}{3} (1-\omega)^{-1} a^{3(1-\omega)/2},
\end{equation}
to obtain
\begin{equation}
\textrm i\frac{\partial\Psi(\chi,T)}{\partial T}= \frac{1}{4} \frac{\partial^2\Psi(\chi,T)}{\partial \chi^2} \label{es202}.
\end{equation}
This is the time-reversed Schr\"odinger equation for a one-dimensional free particle with mass $2$ constrained to the positive axis. As it has a Schr\"odinger form, it is possible, in this case, to obtain the Born rule for $\Psi$ if one imposes the condition
\begin{equation}
\label{cond27}
\Psi \bigl|_{\chi=0} = c \frac{\partial\Psi}{\partial
\chi}\Biggl|_{\chi=0},
\end{equation}
with $c$ being a real constant. For these wave functions, one can assert that $|\Psi|^2 d\chi$ is the probability measure for the scale factor, as the boundary condition imposes that the total probability is preserved in time. Another good property of condition \eqref{cond27} is that the Bohmian trajectories coming from wave functions satisfying it are free of singularities~\cite{falciano-santini} because the probability flux $J_\chi \sim {\textrm{Im}}\left(\Psi^* \frac{\partial \Psi}{\partial \chi}\right)$ associated with these wave functions is null at $\chi=0$, so no trajectories can cross $a=0$.
Note that, in the dBB theory, it is not necessary to have a probabilistic interpretation for $\Psi$; hence, one could work with wave functions that do not satisfy boundary condition \eqref{cond27}. In this case, singularities may be obtained, as in plane wave solutions, where the Bohmian trajectories are always classical, hence containing a singularity.
A wave function that satisfies condition (\ref{cond27}) can be obtained by imposing that, at $T=0$, it is the Gaussian
\begin{equation}
\label{initial}
\Psi^{(\mathrm{init})}(\chi)=\biggl(\frac{8}{T_b\pi}\biggr)^{1/4}
\exp\left(-\frac{\chi^2}{T_b}\right) ,
\end{equation}
where the constant $T_b$ sets the width of the Gaussian. The wave solution for all times, written in terms of $a$, reads~\cite{qc2}:
\begin{eqnarray}
\label{psi1t}
\Psi(a,T)&=&\left[\frac{8 T_b}{\pi\left(T^2+T_b^2\right)}
\right]^{1/4}
\exp\biggl[\frac{-4T_b a^{3(1-\omega)}}{9(T^2+T_b^2)(1-\omega)^2}\biggr]
\nonumber\\
&\times&\exp\left\{-\textrm i\left[\frac{4Ta^{3(1-\omega)}}{9(T^2+T_b^2)(1-\omega)^2}
+\frac{1}{2}\arctan\biggl(\frac{T_b}{T}\biggr)-\frac{\pi}{4}\right]\right\}.
\end{eqnarray}
Taking the two equations in \eqref{guidancec}, one can write a guidance equation describing the dynamics of the scale factor in terms of $T$,
\begin{equation}
\frac{da}{dT} = - \frac{a^{3w-1}}{2} \frac{\partial S}{\partial a}
\end{equation}
or
\begin{equation}
\frac{d \chi}{dT} = - \frac{1}{2} \frac{\partial S}{\partial \chi}.
\end{equation}
Substituting the phase $S$ of (\ref{psi1t}) in these guidance equations, one obtains the Bohmian trajectories
\begin{equation}
\label{at} a(T) = a_b
\left[1+\left(\frac{T}{T_b}\right)^2\right]^\frac{1}{3(1-\omega)} .
\end{equation}
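Explicitly, the intermediate step is simple: written in terms of $\chi$, the phase of Equation~\eqref{psi1t} reads $S = -T\chi^2/(T^2+T_b^2) + \textrm{terms independent of}\ \chi$, so that
\begin{equation}
\frac{d\chi}{dT} = -\frac{1}{2}\frac{\partial S}{\partial \chi} = \frac{T\chi}{T^2+T_b^2}
\quad\Longrightarrow\quad
\chi(T) = \chi_b\sqrt{1+\left(\frac{T}{T_b}\right)^2},
\end{equation}
which, through the transformation \eqref{transf}, is precisely Equation~\eqref{at}.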
This is a bounce solution without singularities for any initial value $a_b \neq 0$. It tends to the classical solution when $T/T_b\rightarrow\pm\infty$. Hence, the constant $T_b$ provides the time scale of the bounce and the quantum effects. The solution (\ref{at}) can be obtained for other initial wave functions~\cite{falciano-santini}.
The case $w=1/3$ describes a radiation fluid. Adjusting the free parameters conveniently, the solution \eqref{at} can reach the classical Friedmann evolution at energy scales larger than the nucleosynthesis energy scale, when the standard cosmological model begins to be tested by observations. Hence, it is a sensible cosmological model describing the radiation-dominated era, which is free of singularities.
Nevertheless, a complete cosmological model must also contain a pressureless component, describing dark matter and baryons (dark energy will be treated later). This extension was accomplished in~\cite{falciano-santini}, yielding
\begin{equation}
\label{ascalefactor}
a(\eta_s)=a_0\left(\dfrac{\Omega_{m0}}{4}\,\eta_s^{2} + \sqrt{\dfrac{1}{x_b^{2}}+\Omega_{r0}\,\eta_s^{2}}\right),
\end{equation}
where $x_b=a_0/a_b$, $a_0$ is the scale factor today, $a_b$ is the scale at the bounce, and $\Omega_{m0}$ and $\Omega_{r0}$ are the usual
dimensionless densities ($\Omega = \rho/\rho_c$) of dust and radiation, respectively, where $\rho_c$ is the critical density.
I have also introduced the dimensionless conformal time, $\eta_s=(a_0/R_{H_0})\eta$, where $R_{H_0}=1/H_0$ is the Hubble radius
today. The scale factor in Equation~\eqref{ascalefactor} describes a universe
dominated by dust in the far past, which contracts up to radiation
domination. Near the singularity, quantum effects become
relevant, and a quantum bounce takes place, eliminating the singularity. The universe is then launched to an expanding phase, reaching the usual standard classical radiation and dust phases. As we will see, the presence of dust is important not only for completeness but also because it is necessary to yield a scale invariant spectrum of scalar perturbations.
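Note that, far from the bounce, $|\eta_s| \gg 1/(x_b\sqrt{\Omega_{r0}})$, Equation~\eqref{ascalefactor} reduces to
\begin{equation}
a(\eta_s) \approx a_0\left(\frac{\Omega_{m0}}{4}\,\eta_s^{2} + \sqrt{\Omega_{r0}}\,|\eta_s|\right),
\end{equation}
which is the classical Friedmann solution for dust plus radiation written in conformal time; the quantum corrections are confined to a small neighborhood of the bounce.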
The curvature scale at the bounce reads
\begin{equation}\label{lb}
\left. L_{b} \equiv \dfrac{1}{\sqrt{R}}\right\vert_{\eta_s=0} = \left.\sqrt{\dfrac{a^{3}}{6a''}}\right\vert_{\eta_{s}=0},
\end{equation}
where $R$ is the Ricci scalar. It cannot be very close to the Planck length, because near these very small scales, a complete theory of quantum gravity~\cite{qg1,qg2,qg3,qg4} must be invoked. The Wheeler--DeWitt quantization we are using should be understood as a good effective theory for quantum gravity only at larger length scales. Hence, using the values $H_{0}=70
\,\text{km\,\,s}^{-1}\,\text{Mpc}^{-1}$
and $\Omega_{r0}\approx8\times 10^{-5}$, the imposition that $L_b$ should be a few orders of magnitude larger than the Planck length implies that $x_{b} < 10^{31}$. Additionally, as mentioned above, the bounce should
occur before the beginning of the nucleosynthesis era,
implying that $x_{b}\gg 10^{11}$. Collecting these two limits yields
\begin{equation}\label{xblimit}
10^{11}\ll x_{b} < 10^{31}.
\end{equation}
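As a rough consistency check of this upper bound (my own back-of-the-envelope estimate; for $x_b \gg \Omega_{m0}/\Omega_{r0}$, Equations~\eqref{ascalefactor} and \eqref{lb} give $L_b \approx R_{H_0}/(x_b^2\sqrt{6\,\Omega_{r0}})$):
\begin{verbatim}
# Back-of-the-envelope check of the upper bound on x_b, assuming
# L_b ~ R_H0 / (x_b^2 * sqrt(6*Omega_r0)) and requiring L_b > N*l_P.
R_H0 = 1.32e26        # Hubble radius today in meters (H0 = 70 km/s/Mpc)
Omega_r0 = 8e-5
l_P = 1.6e-35         # Planck length in meters
N = 1e3               # "a few orders of magnitude" above l_P (assumed)

x_b_max = (R_H0 / (N * l_P * (6 * Omega_r0)**0.5))**0.5
print(f"x_b < {x_b_max:.1e}")   # ~6e29, consistent with x_b < 10^31
\end{verbatim}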
Let us now calculate the mode solutions characterizing the cosmological perturbations of Equation~\eqref{equacoes-mukhanov1}
\begin{equation}
\label{equacoes-mukhanov2}
v''_k+\biggl(\omega
k^2-\frac{{a''}}{a}\biggr)v_k=0,
\end{equation}
in the quantum bounce background given in Equation~\eqref{at}.
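Before turning to the analytic treatment, note that this integration is straightforward to perform numerically. The following sketch (my own illustration; the wavenumber is an assumed value) integrates Equation~\eqref{equacoes-mukhanov2} for radiation, $w=1/3$, for which $\eta = T$ and the Bohmian background \eqref{at} gives $a''/a = 1/(1+\eta^2)^2$ in units $a_b=T_b=1$, starting from the adiabatic vacuum mode \eqref{v} deep in the contracting phase:
\begin{verbatim}
# Integrate v_k'' + (w k^2 - a''/a) v_k = 0 through the quantum bounce
# a(eta) = sqrt(1 + eta^2) (w = 1/3, units a_b = T_b = 1).
import numpy as np
from scipy.integrate import solve_ivp

w, k = 1.0/3.0, 0.1        # equation of state; illustrative wavenumber
kbar = np.sqrt(w) * k

def a_pp_over_a(eta):
    # For a = sqrt(1 + eta^2): a''/a = 1/(1 + eta^2)^2.
    return 1.0 / (1.0 + eta**2)**2

def rhs(eta, y):
    v = y[0] + 1j * y[1]
    ddv = -(w * k**2 - a_pp_over_a(eta)) * v
    return [y[2], y[3], ddv.real, ddv.imag]

eta0 = -1.0e3                                   # deep in the contraction
v0 = np.exp(1j * kbar * eta0) / np.sqrt(kbar)   # adiabatic vacuum mode
dv0 = 1j * kbar * v0
sol = solve_ivp(rhs, (eta0, 1.0e3),
                [v0.real, v0.imag, dv0.real, dv0.imag],
                rtol=1e-8, atol=1e-10)
v_end = sol.y[0, -1] + 1j * sol.y[1, -1]
print("|v_k| deep in the expanding phase:", abs(v_end))
\end{verbatim}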
Far from the bounce, when $|T|\gg |T_b|$, Equation~(\ref{equacoes-mukhanov2}) reads,
\begin{equation}
\label{Modes} v_k'' +\left[ \omega k^2
+\frac{2(3\omega-1)}{(1+3\omega)^2\eta^2}\right]v_k = 0.
\end{equation}
The solution is
\begin{equation}
\label{Bessel} v_k = \sqrt{|\eta|} \left[ c_1(k) H^{(1)}_\nu
(\bar{k}|\eta|)+ c_2(k)
H^{(2)}_\nu(\bar{k}|\eta|)\right],
\end{equation}
with
$$ \nu = \frac{3(1-\omega)}{2(3\omega+1)}, $$ where $H^{(1,2)}$ are Hankel
functions, $\bar{k}\equiv \sqrt{\omega}k$, and we are considering the far past of the contracting phase, $\eta \ll -1$. In order to obtain spectrum predictions from the above result, one needs to select one solution from Equation~\eqref{Bessel} by fixing $c_1(k)$ and $c_2(k)$.
In the case of inflationary models, all the wavelengths of cosmological interest were much smaller than the Hubble radius at least $60$ $e$-folds before the end of inflation; see Figure~\ref{inf}. Long before that, any perturbation around the homogeneous background was deep inside the Hubble volume, and it would rapidly fade away, justifying the assumption that only quantum vacuum fluctuations survive. Hence, an adiabatic vacuum state (close to the Bunch--Davies de Sitter vacuum state) is chosen as the initial quantum state of quantum cosmological perturbations. During cosmic evolution, these perturbation scales become bigger than the Hubble radius before and during re-heating, becoming smaller than the Hubble radius again in the expanding decelerating phase. The power spectrum is calculated, with results that remarkably agree with observations~\cite{CMB}. Note, however, that the transition from the quantum description to the classical evolution giving rise to the classical structures in the Universe is very subtle and controversial, needing a clear explanation. This will be done in the next section in the context of the dBB quantum theory.
\begin{figure}[H]
\includegraphics[width=.9\columnwidth]{Inflation2.pdf}
\caption{Comparison between evolution
of Hubble radius and cosmological scales in inflation. The green straight lines are the perturbation scales; the black solid line is the Hubble radius. The horizontal axis depicts $\log (a)$.} \label{inf}
\end{figure}
The cosmic evolution of the quantum Bohmian bouncing solutions obtained above is completely different from the inflationary solution. As we have seen, they contain a long-standing decelerating contracting phase, implying that they are naturally free of the particle horizon and flatness issues, and they are smoothly connected through a quantum bounce to the usual radiation-dominated expanding phase of the standard cosmological model, as can be seen from solution \eqref{ascalefactor}. However, the qualitative evolution of the perturbation scales is very similar. In a decelerating contracting phase, cosmological scales evolve as $\lambda_{\rm phys}\equiv \lambda a \propto \tau^{2/[3(1+w)]}$, while the sound Hubble radius evolves as $R^{S}_{H} = w^{1/2}/H \propto \tau$, where $\tau$ is proper cosmic time, and $H=a'/a^2$ is the Hubble function. Hence, in the far past of the contracting phase, where $|\tau|H_0 w^{-1/2}\gg1$, $H_0$ being the Hubble function today, all the scales of cosmological interest were much smaller than the sound Hubble radius as long as $2/[3(1+w)]<1$, or $w>-1/3$, which is exactly the condition for a deceleration. {Put another way, expressing these quantities in terms of the scale factor, one finds that the sound Hubble radius (or the Hubble radius itself) evolves as $R_H \propto a^{3(1+w)/2}$, while the physical cosmological scales are $\lambda_{\rm phys} = \lambda a$. For $w>-1/3$, the Hubble radius grows faster with $a$ than the cosmological scales, implying that, in the far past of the contracting phase of such bouncing models, the cosmological scales were deep inside the Hubble radius. The cases of cosmological interest are dust, radiation and the cosmological constant, in which the exponents appearing in $R_H \propto a^{3(1+w)/2}$ are $3/2, 2, 0$, respectively. In the course of cosmic evolution during contraction, the perturbation scales will eventually become larger than the sound Hubble radius before the bounce. Near the bounce, however, the Hubble radius is no longer a good geometrical scale to which to compare the cosmological scales, since the Hubble radius, by definition, diverges at the bounce. Indeed, from Equation~\eqref{equacoes-mukhanov2}, one can see that the really physically important geometrical scale to which the cosmological scales must be compared is proportional to the curvature scale $l_c \equiv R^{-1/2}$ ($wk^2\approx a''/a \Rightarrow \lambda_{\rm phys}^2 = a^2/k^2 \approx w a^3/a''=w/R\equiv w l_c^2$), where $R$ is the Ricci scalar of the background. The curvature scale generally coincides with the Hubble radius during classical contraction and expansion, but they behave very differently during the bounce. In fact, the curvature scale has a smooth behavior, without ever diverging. \mbox{Figure \ref{bou}} shows a qualitative comparison between the cosmological scales $\lambda_{\rm phys}\equiv \lambda a$ and the curvature scale $l_c$ in a bouncing model dominated by dust and radiation, plotting $\ln(l_c)$ and $\ln(\lambda_{\rm phys})$ against $\ln(a)$. I normalized $a$ such that the scale factor at the bounce is $1$ ($a_b=1$). The negative (positive) horizontal axis corresponds to the contracting (expanding) phase, respectively. During classical evolution, the curvature scale coincides with the Hubble radius, but it behaves differently during the quantum bounce.
Note that the cosmological scales are much smaller than the curvature scale in the far past of the contracting phase; they become larger than the curvature scale during contraction at different times, and they become smaller than the curvature scale again only in the expanding phase. During the period when they are larger than the curvature scale, the perturbations become amplified, yielding the structures in the Universe, as we will see.}
Hence, one can say that decelerating contraction and the bounce play the role of the accelerating phase and re-heating in inflationary models; compare Figure~\ref{inf} with Figure~\ref{bou}. Furthermore, as the sound Hubble sphere in the far past of the contracting phase contains an immensely large space volume and a tiny matter energy density, and as the Universe evolves very slowly because the Hubble time scale $1/H$ is very large, the effective physical universe that affects such perturbation scales is very close to the flat Minkowski space-time. Consequently, any small classical perturbation around this homogeneous, almost-flat background would rapidly dissipate away, and, as in inflation, only quantum vacuum fluctuations would survive (see~\cite{novo}), justifying, again, the choice of an adiabatic vacuum state as the initial quantum state of quantum cosmological perturbations, which is now close to the Minkowski vacuum quantum state. Hence, the qualitative justification for imposing vacuum initial conditions for the cosmological perturbations in inflation and bouncing models is similar, although the physical ambiences justifying them are completely different. Note, however, that the quantum perturbations in bouncing models must be dynamically evolved through a different background, especially through the bounce, which generally involves new physics, with possible different observational consequences, as we will see.
\begin{figure}[H]
\includegraphics[width=.7\columnwidth]{Bounce-Fig.pdf}
\caption{{ Qualitative comparison between evolution of the curvature scale, in blue, and cosmological scales, in red, in bouncing models. During classical evolution, the curvature scale coincides with the Hubble radius. The scale factor at the bounce is normalized to one; hence, the origin corresponds to the bounce, where the scale factor attains its minimal value. The negative and positive horizontal axis correspond to the contracting and expanding phases, respectively. In the plot, the transitions from dust to radiation domination, and the bounce itself, are qualitatively depicted by sharp transitions. In reality they are smooth, but it does not alter the physical conclusions presented in the text.}} \label{bou}
\end{figure}
The modes characterizing an (almost) Minkowski vacuum state are given by
\begin{equation}
v_{k}^{(\mathrm{ini})} =
\frac{\exp{(i \bar{k} \eta)}}{\sqrt{\bar{k}}}.
\label{v}
\end{equation}
The asymptotic expansion of the Hankel functions for $k|\eta|\gg 1$
that fits solution \eqref{Bessel} with~\eqref{v} implies that
$$ c_1=0 \quad \hbox{and} \quad c_2= l_{P} \sqrt{\frac{3\pi}{2}}
\exp\left[-i\frac{\pi}{2} \left(\nu+\frac{1}{2}\right)\right].
$$
In order to propagate the solution through the bounce up to the expanding phase, one expands the solutions of Equation~\eqref{Modes} in powers of $k^2$ according to
the formal solution~\cite{mukh-book}
\begin{eqnarray}
\frac{v_k}{a} & = & A_1(k)\biggl[1 - \omega k^2 \int^{\eta} \frac{d\bar
\eta}{a^2\left(\bar \eta\right)} \int^{\bar{\eta}}
a^2\left(\bar{\bar{\eta}}\right)d\bar{\bar{\eta}}\biggr]\nonumber
\\ &+& A_2(k) \biggl[\int^\eta\frac{d\bar{\eta}}{a^2} - \omega k^2
\int^\eta \frac{d\bar{\eta}}{a^2} \int^{\bar{\eta}} a^2
d\bar{\bar{\eta}} \int^{\bar{\bar{\eta}}}
\frac{d\bar{\bar{\bar{\eta}}}}{a^2} \biggr] +\;\dots\;, \label{solform}
\end{eqnarray}
where I have omitted the terms of order $\mathcal{O}(k^{j\geq 4})$. The quantity $v/a$ is the curvature perturbation $\zeta$~\cite{mukh-book}. This solution is adequate when $wk|\eta|\ll 1$. The solution \eqref{Bessel} is also valid in this regime, as long as the bouncing solution \eqref{at} is still in the classical regime. This is true for all scales of cosmological interest because they cross the sound Hubble radius (which happens when $wk|\eta|\approx 1$) while the Universe is still very large and quantum effects are completely negligible.
Hence, in this region, one can match the solution \eqref{solform} with solution \eqref{Bessel}, and obtain
the coefficients $A_1(k)$ and $A_2(k)$ from the coefficients $c_1(k)$ and $c_2(k)$, which were fixed by the vacuum initial condition \eqref{v}. They read
\begin{eqnarray}
A_1 &\propto&
\biggl(\frac{\bar{k}}{k_0}\biggr)^\frac{3\left(1-\omega\right)}
{2\left(3\omega+1\right)},\label{A1}\\ A_2 &\propto& \biggl(\frac{\bar{k}}{k_0}
\biggr)^\frac{3\left(\omega-1\right)}{2\left(3\omega+1\right)},\label{A2}
\end{eqnarray}
where $k^{-1}_0=T_0a_0^{3\omega-1}=L_b$, and $L_b$ is the curvature scale at the bounce. Propagating the solution \eqref{solform} up to the expanding phase, and relating it to the Bardeen potential $\Phi({\bf x})$ through the known formula
\begin{equation}
\label{vinculo-simples2}
\Phi^{,i}\,_{,i}({\bf x}) =
-\frac{3 l_{P}^2\sqrt{(\omega+1)\bar{\rho}}}{2\sqrt{\omega}}a
\biggl(\frac{v({\bf x})}{a}\biggr)' ,
\end{equation}
one obtains, in the expanding phase for $T\gg T_b$,
\begin{equation}
\Phi_k \propto
k^\frac{3\left(\omega-1\right)}{2\left(3\omega+1\right)}
\biggl[\mathrm{const.}+\frac{1}{\eta^{(5+3\omega)/(1+3\omega)}}\biggr].
\end{equation}
The constant mode contains a mixing of the coefficients $A_1$ with $A_2$ in the expanding phase, but the $A_2$ coefficient is multiplied by a large constant, dominating over $A_1$; see~\cite{large-sandro}.
Hence, calculating the power spectrum of the Bardeen potential,
\begin{equation}
\label{PS} \mathcal{P}_\Phi \equiv \frac{2 k^3}{\pi^2}
\left| \Phi_k \right|^2,
\end{equation}
which is connected to the anisotropies of the CMBR and fixed by observations, one obtains
\begin{equation}
\mathcal{P}_\Phi \propto k^{n_{_\mathrm{S}}-1},
\label{powspec}
\end{equation}
where
\begin{equation}
\label{indexS} n_{_\mathrm{S}} = 1+\frac{12\omega}{1+3\omega}.
\end{equation}
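As a quick numerical illustration of Equation~\eqref{indexS} (the numbers are mine, chosen for orientation): exact dust, $\omega=0$, gives exact scale invariance, $n_{_\mathrm{S}}=1$, while a slightly negative $\omega=-2.9\times 10^{-3}$ gives
\begin{equation*}
n_{_\mathrm{S}} = 1+\frac{12\times(-2.9\times 10^{-3})}{1-8.7\times 10^{-3}} \approx 0.965,
\end{equation*}
the red tilt favored by observations.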
In the case of gravitational waves, the
equation for the modes $\mu_k = a h_k$, where $h_k$ is the mode related to the amplitude of the wave, can be obtained very easily, because gravitational waves are gauge invariant; see~\cite{PPNGW2} for details. It is given by
\begin{equation}
\label{mu} \mu_k''+\left( k^2 -\frac{a''}{a} \right)\mu_k =0.
\end{equation}
The power spectrum is
\begin{equation}
\label{PT} \mathcal{P}_h \equiv
\frac{2 k^3}{\pi^2}\left| \frac{\mu_k}{a} \right|^2 \propto k^{n_{_\mathrm{T}}},
\end{equation}
and it reads
\begin{equation}
\label{indexT} n_{_\mathrm{T}} = \frac{12\omega}{1+3\omega}.
\end{equation}
One can see from Equation~\eqref{indexS} that, for $\omega\approx 0$ (dust), one obtains a nearly scale-invariant spectrum for both tensor and scalar perturbations~\cite{PPNscalar}. This is a general result for bouncing models~\cite{Allen2004,bounce-classical8,mbounce2,mbounce3}: if the contracting phase of a smooth bouncing model is dominated at large scales by a matter field satisfying $w=p/\rho\approx 0$, then the power spectrum of long wavelength scalar perturbations in the expanding phase is nearly scale invariant. {However, there is a problem if the matter field is a fluid in which $w=c_s^2$ because, in this case, $w$ must be positive and one cannot obtain a red-tilted spectrum, as observed. In the case of a canonical scalar field, this is not the case because $w$ is independent of $c_s^2$, and it can be made negative, as we will see in the next subsection. Nevertheless, one can circumvent this problem even in the fluid case.} Note that it is not
necessary that the dust fluid dominates at all times. As we have seen above, the
$k$-dependence of $A_1$ and $A_2$ is obtained far from the
bounce, when the modes cross the sound Hubble radius, $\bar{k}\eta\approx 1$, and they do not change in a possible transition from matter
to radiation domination in the contracting phase or across a smooth
bounce. The effect of the bounce is to mix the two
coefficients, and the constant mode acquires, in the expanding phase, the scale-invariant piece. Hence, the bounce itself may be dominated by another fluid,
such as radiation. In fact, the more complete bounce solution \eqref{ascalefactor} also yields an almost scale-invariant spectrum of adiabatic cosmological perturbations. Its amplitude reads~\cite{large-sandro,2-fluids}
\begin{equation}
\label{ampl-ad}
A_S \approx 10^{-2} \frac{l_p^2}{R_{H_0}^2} \frac{x_b^2}{\Omega_{r0}c_s^5},
\end{equation}
where $c_s$ is the value of the sound velocity characterizing the adiabatic perturbation when the perturbation scale crosses the sound Hubble radius. Note that the amplitude becomes larger for smaller values of the sound velocity. Indeed, the perturbation modes grow faster after they cross the sound Hubble radius, which shrinks as $c_s$ becomes small. Hence, they cross this scale earlier for smaller $c_s$ and have more time to grow.
As $A_S \approx 2.09 \times 10^{-9}$ (see~\cite{CMB}), and using Equation~\eqref{xblimit}, one obtains
\begin{equation}\label{cslimit}
10^{-16} \leq c_s < 10^{-10}
\end{equation}
for the sound velocity. Note,
however, that there are two fluids; hence, the sound velocity for the adiabatic perturbations reads:
\begin{equation}
\label{sound-ad}
c_s^2 = \frac{w(\rho_m + p_m) + (\rho_r + p_r)/3}{\rho_T + p_T},
\end{equation}
where $w$ is the equation-of-state parameter of the dust fluid, and the indices $m,r,T$ designate the dust, radiation, and total energy densities and pressures, respectively. Hence, $c_s^2 \approx |w|\ll 1$ only when dust dominates. For small scales that cross the sound Hubble radius very late, near radiation domination, the power spectrum amplitude is highly suppressed because $c_s$ is no longer in the range \eqref{cslimit}; it tends to increase up to $1/\sqrt{3}$ when radiation begins to be important. Hence, the spectrum must be slightly red-tilted due to the presence of radiation, and the parameters may be fitted with CMBR observations.
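To see how Equation~\eqref{ampl-ad} leads to the range \eqref{cslimit} in practice, here is a minimal numerical sketch; the specific values of $l_p/R_{H_0}$, $\Omega_{r0}$ and $x_b$ below are illustrative assumptions of mine, not numbers fixed by the text:
\begin{verbatim}
# Sketch: solve Eq. (ampl-ad) for the sound speed c_s.
A_s         = 2.09e-9   # observed scalar amplitude
lp_over_RH0 = 1.2e-61   # Planck length / Hubble radius today (assumed)
Omega_r0    = 8.0e-5    # radiation density parameter today (assumed)
x_b         = 3.0e30    # scale-factor ratio a_0/a_b at the bounce (assumed)

# A_s ~ 1e-2 (l_p/R_H0)^2 x_b^2 / (Omega_r0 c_s^5)  =>  isolate c_s
c_s = (1e-2 * lp_over_RH0**2 * x_b**2 / (Omega_r0 * A_s)) ** 0.2
print(f"c_s ~ {c_s:.1e}")   # ~ 1e-10 for these inputs
\end{verbatim}
Since $c_s\propto x_b^{2/5}$, shallower bounces (smaller $x_b$) push $c_s$ towards the lower end of the range \eqref{cslimit}.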
Note that, as tensor perturbations have $c_s=1$, and as they evolve similarly to scalar perturbations, their amplitudes must be very small in comparison with scalar perturbations, being unobservable at large scales or very small frequencies. However, they might be observable at larger frequencies. Indeed, we have calculated the strain spectrum of the stochastic background of relic gravitons in such models~\cite{denis}, and we have shown that the resulting amplitude is too small to be detected by any gravitational wave detector, except possibly in the frequency range of 10--100 Hz, as can be seen from Figure~\ref{GW1}. However, detecting stochastic gravitational waves in this frequency range, if it is possible at all, is a hard technical challenge.
\begin{figure}[H]
\includegraphics[width=0.7\textwidth]{figureGW1.pdf}
\caption{The figure shows a comparison of our results, labeled by ${\bar{\eta}}_b$ (the smaller this
parameter, the bigger the energy scale of the bounce, and the value $10^{-30}$ is only two orders of magnitude
away from the Planck scale) with experimental sensitivities of LIGO's 5th run, Advanced LIGO, and the forthcoming LISA and Einstein Telescope, and a prediction of the upper limits on the spectrum of primordial gravitational waves generated in inflationary models.}
\label{GW1}
\end{figure}
Up to now, I have not considered dark energy (DE), which seems to be accelerating the present Universe~\cite{lss,crs}. In the case of inflationary models, DE is irrelevant, because the initial conditions for the perturbations and their subsequent evolution are set at very small scales, where DE does not play any role. However, in bouncing models, vacuum initial conditions for quantum cosmological
perturbations are set in the far past of the contracting phase of these models, when the Universe was very large and almost flat, and DE may be relevant at such large scales, as it is in the expanding phase of our Universe. In fact, in the case of the standard $\Lambda$CDM model, where DE is a cosmological constant, the {curvature scale, or} sound Hubble radius, stops evolving linearly in cosmic time and tends to a constant at large scales. Hence, going back in time, as the large perturbation scales grow following a time power-law, $\tau^{2/[3(1+w)]}$, they become larger than the Hubble radius again, and an adiabatic Minkowski vacuum prescription for their initial conditions becomes problematic; see Figure~\ref{Blam}. One possible solution to this problem is to try to define a Minkowski adiabatic vacuum in the period of time when the cosmological constant is not relevant,
but the Universe is still very large, with a Hubble radius larger than the scales of cosmological interest. However, these cosmological scales are not much smaller than the length scale
associated with the value of the cosmological constant given in the $\Lambda$CDM standard cosmological model; hence,
the spectrum of cosmological perturbations at these scales can be influenced by its presence.
In fact, we have shown in~\cite{Maier2011}, analytically and numerically, that, in a bouncing model containing a dust fluid ($w\approx 0$) and a cosmological constant, an
almost scale-invariant spectrum of long-wavelength perturbations is also obtained, but it is now affected by the presence of the cosmological constant. It
induces small oscillations and a small running towards a red-tilted spectrum in these scales; see Figure~\ref{fig3}.
Hence, small oscillations in the spectrum of temperature fluctuations may arise in the cosmic background radiation at large scales, superimposed to the usual acoustic~oscillations.
\begin{figure}[H]
\includegraphics[width=.6\columnwidth]{Blamb1.pdf}
\caption{{ Qualitative comparison between evolution of the curvature scale, in blue, and cosmological scales, in red, in bouncing models with a cosmological constant. During classical evolution, the curvature scale coincides with the Hubble radius. The scale factor at the bounce is normalized to one; hence, the origin corresponds to the bounce, where the scale factor attains its minimal value. The negative and positive horizontal axis correspond to the contracting and expanding phases, respectively. In the plot, the transitions from dust to radiation domination and dust to cosmological constant domination, and the bounce itself, are qualitatively depicted by sharp transitions. In reality, they are smooth, but it does not alter the physical conclusions presented in the text.}}
\label{Blam}
\end{figure}
\vspace{-6pt}
\begin{figure}[H]
\includegraphics[width=.8\columnwidth]{Lambda1.pdf}
\caption{Numerical results for $n_S(k)$ in the presence of a cosmological constant. The solid line shows the result obtained using $\Omega_{\Lambda}=0.7$; the dashed line, that for $\Omega_{\Lambda}=10^{-3}$; and the dotted line, that for $\Omega_{\Lambda}=10^{-6}$.
The oscillations become smaller for
smaller $\Omega_\Lambda$, indicating that they arise because of the presence of the cosmological constant.}
\label{fig3}
\end{figure}
In the next subsection, I will present the scalar field case. Concerning primordial gravitational waves, as the sound velocity associated with the scalar perturbations of a canonical scalar field is $1$, the scalar and tensor perturbations evolve approximately in the same way in classical bouncing models, rendering
the ratio of the tensor to scalar perturbations of order $1$, $r=T/S \approx 1$~\cite{CaiGW1,CaiGW2}, which is ruled out by observations. I will show that, in the case of a quantum bounce, the quantum effects near the bounce increase the scalar perturbations with respect to tensor perturbations, yielding $r < 0.1$, as observed. Furthermore, using an exponential potential, the problem with dark energy mentioned above is circumvented, as we will see.
\subsection{Canonical Scalar Field}
Consider a canonical scalar field, $p=X-V(\varphi)$, in which
\begin{equation}
\label{def_pot}
V(\varphi) = V_0 \textrm e^{-\lambda {\bar \kappa} \varphi},
\end{equation}
where $V_0$ and $\lambda$ are constants. ${\bar \kappa}^2 = 6 \kappa^2 = 8\pi G$, so that $\lambda$ is dimensionless.
Exponential potentials have been widely studied in cosmology, as they can model primordial inflation, the present acceleration of the Universe, and matter bounces. Their scalar field dynamics in expanding Friedmann backgrounds contain an attractor where the ratio between the pressure and the energy density is constant, $w=p/\rho=(\lambda^2-3)/3$. Hence, by adjusting $V_0$ and $\lambda$, they can be used to describe the above cosmological scenarios.
Restricting ourselves to matter bounces, $w\approx 0$, one must set $\lambda \approx \sqrt{3}$. Note that, as $w$ is not related to the sound speed squared of scalar perturbations, as in the fluid case, it is not restricted to being positive: it can have a small negative value in order to give $n_s = 1 + 12w/(1+3w) \approx 0.97$.
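For concreteness, one can invert this relation (a short algebraic step, using $w=(\lambda^2-3)/3$): $n_s-1=12w/(1+3w)$ gives
\begin{equation*}
w=\frac{n_s-1}{15-3n_s},\qquad \lambda^2=3(1+w)=\frac{2(7-n_s)}{5-n_s};
\end{equation*}
for $n_s=0.9652$, this yields $w\approx-2.9\times 10^{-3}$ and $\lambda^2\approx 2.9914$, anticipating the fit quoted at the end of this subsection.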
Let us first present the classical dynamics of canonical scalar fields with exponential potential. Using cosmic proper time, $N=1$, one can define the variables
\begin{equation}\label{mud1}
x = \frac{{\bar \kappa} }{\sqrt{6}H}\dot{\varphi}, \qquad
y = \frac{{\bar \kappa} \sqrt{V_M}}{\sqrt{3}H},
\end{equation}
where
\begin{equation}
H=\frac{\dot a}{a} = {\dot \alpha}
\end{equation}
is the Hubble parameter. This choice dramatically simplifies the Friedmann equations as follows:
\begin{equation}\label{sist1c}
\frac{\mathrm{d} x}{\mathrm{d} \alpha} = -3 \left(x-\frac{\lambda}{\sqrt{6}}\right) \left(1-x\right)\left(1+x\right),
\end{equation}
\begin{equation}
\label{xyFried}
x^2 + y^2 = 1.
\end{equation}
The ratio $w = p/\rho$ is given by
\begin{equation}
w = 2x^2-1. \label{x_y_con}
\end{equation}
The critical points of this system are listed in Table~\ref{tab_crit}; see~\cite{heard-wands}. At the critical points $x=\pm 1$, one has $p=\rho$ (the potential is negligible with respect to the kinetic term), and the scalar field behaves as stiff matter. These points correspond to the space-time singularity $a=0$. The critical points with $x=\lambda/\sqrt{6}\approx 1/\sqrt{2}$ imply that $w=0$ (see Equation~\eqref{x_y_con}), or $p=0$, and the scalar field behaves as dust matter. They
are attractors (repellers) in the expanding (contracting) phase, corresponding to very large, slowly expanding (contracting) universes, where space-time is asymptotically flat in time. Additionally, from Equation~\eqref{x_y_con}, one can see that, at $x=0$, the scalar field behaves like dark energy: $w=-1$, $p=-\rho$.
\begin{specialtable}[H]
\tablesize{\small}
\caption{Critical points of the planar system defined by \eqref{sist1c} and \eqref{xyFried}.}
\label{tab_crit}
\setlength{\cellWidtha}{\columnwidth/3-2\tabcolsep+0.0in}
\setlength{\cellWidthb}{\columnwidth/3-2\tabcolsep+0.0in}
\setlength{\cellWidthc}{\columnwidth/3-2\tabcolsep+0.0in}
\scalebox{1}[1]{\begin{tabularx}{\columnwidth}{>{\PreserveBackslash\centering}m{\cellWidtha}>{\PreserveBackslash\centering}m{\cellWidthb}>{\PreserveBackslash\centering}m{\cellWidthc}}
\toprule
\emph{\textbf{x}} & \emph{\textbf{y}} & \emph{\textbf{w}} \\
\midrule
$-1$ & $0$ & $1 $\\
\midrule
$1$ & $0$ & $ 1$ \\
\midrule
$\frac{\lambda }{\sqrt{6}}$ &$ -\sqrt{1-\frac{\lambda ^2}{6}}$ & $\frac{1}{3} \left(\lambda ^2-3\right)$ \\
\midrule
$\frac{\lambda }{\sqrt{6}}$ & $\sqrt{1-\frac{\lambda ^2}{6}}$ & $\frac{1}{3} \left(\lambda ^2-3\right)$ \\
\bottomrule
\end{tabularx}}
\end{specialtable}
The expanding solutions evolve from a Big Bang singularity, when the scalar field behaves as stiff matter, up to an asymptotic future where the scalar field behaves as dust. In one of the possibilities, the scalar field passes through a dark energy phase.
The contracting solutions evolve from an asymptotic past dust-dominated contraction, ending in a Big Crunch singularity, when the scalar field behaves as stiff matter. Again, in one of the possibilities, the scalar field passes through a dark energy phase.
These classical possibilities are shown in Figure~\ref{phase-classical}. The Friedmann Equation \eqref{xyFried} restricts the trajectories to a circle. The upper and lower semicircles are disconnected, as the $S_{\pm}$ points are singularities, and they show the expanding and contracting solutions, respectively. The points $M_{\pm}$ are the dust attractor and repeller points, respectively.
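To make this flow concrete, the following minimal sketch (my own illustration, not part of the original analysis of~\cite{heard-wands}) integrates Equation~\eqref{sist1c} numerically for $\lambda=\sqrt{3}$ on the expanding branch:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

lam = np.sqrt(3.0)             # slope of the exponential potential (matter bounce)
x_c = lam / np.sqrt(6.0)       # dust critical point, x = 1/sqrt(2)

def rhs(alpha, x):
    # dx/dalpha = -3 (x - lam/sqrt(6))(1 - x)(1 + x), Eq. (sist1c)
    return -3.0 * (x - x_c) * (1.0 - x) * (1.0 + x)

# expanding branch: start near the stiff-matter repeller x = +1
sol = solve_ivp(rhs, (0.0, 10.0), [0.999], rtol=1e-8, atol=1e-10)
x_f = sol.y[0, -1]
print(f"x -> {x_f:.4f} (attractor {x_c:.4f}), w -> {2*x_f**2 - 1:.4f}")
\end{verbatim}
Running it shows $x\to 1/\sqrt{2}$, i.e., $w\to 0$: the expanding solutions indeed approach the dust attractor $M_+$ within a few e-folds.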
\begin{figure}[H]
\includegraphics[width=0.6\textwidth]{phase_space_full_DE.pdf}
\caption{Phase space for the planar system defined by \eqref{sist1c} and \eqref{xyFried}. The critical points are indicated by $M_\pm$ for a
dust-type effective equation of state, and $S_\pm$ for a
stiff-matter equation of state. Note that the region $y < 0$ shows the contracting solutions, while the
$y > 0$ region presents the expanding solutions. Lower and upper quadrants are not physically
connected, because there is a
singularity in~between.} \label{phase-classical}
\end{figure}
Let us now quantize this background model and perturbations around it, using the results of Section \ref{sec3}.
One first has to solve the background Wheeler--DeWitt Equation \eqref{minibacks1}, where ${\hat{H}}_{(0)}$ is the operator version of the classical background Hamiltonian
\begin{equation}
H_{(0)} = \frac{1}{\textrm e^{3\alpha}}\left[ -\frac{\Pi_\alpha^2}{2} + \frac{\Pi_\varphi^2}{2} + \textrm e^{6\alpha} V(\varphi) \right],
\end{equation}
where $V(\varphi)$ is the exponential potential. Exact solutions were found in~\cite{colin17} with their respective Bohmian trajectories, which are non-singular bouncing solutions. These solutions can also be obtained in a more simplified way by noting that, near the singularity, the scalar field behaves as stiff matter, the potential can be neglected, and solutions to this case were found in~\cite{qc7}, whose trajectories around the bounce are shown in \mbox{Figure~\ref{quantum-bounce}}. Note that, for large scale factors, $\alpha \gg 1$, the classical stiff matter behavior is recovered, $x \approx \pm 1$, and from there on, the Bohmian trajectories become classical. They can then be appropriately matched with the classical solutions presented in Figure~\ref{phase-classical}. This was done in~\cite{bacalhau17}, yielding the same qualitative picture.
In fact, both results, together with some general arguments based on the Bohmian configuration space, where no trajectories can cross (see~\cite{colin17}), imply that the only possible Bohmian bounce solutions are those that connect the region around $S_{\pm}$ with the region around $S_{\mp}$. Figure~\ref{quantum-bounce} shows a concrete example of a quantum bounce transiting from $S_+$ to $S_-$. Hence, the possible Bohmian bouncing scenarios are:
\begin{itemize}
\item[(A)]
A long classical dust contraction, which traverses a dark energy phase and realizes a stiff matter quantum bounce, directly expanding afterwards to an asymptotically
dust matter expanding phase, without passing through a dark energy phase.
\item[(B)]
A long classical dust contraction, without traversing a dark energy phase, which realizes a stiff matter quantum bounce and expands to a dark energy phase, ending in an asymptotically
dust expanding phase.
\end{itemize}
\begin{figure}[H]
\includegraphics[width=0.6\textwidth]{quantum_pe.pdf}
\caption{Phase space for the quantum bounce~\cite{bacalhau17}. The bounces in the figure connect regions around $S_+$ in the contracting phase with
regions around $S_-$ in the expanding phase.} \label{quantum-bounce}
\end{figure}
Case B is the physically interesting solution. First, it contains a dark energy phase in the expanding era, allowing the description of the present observed acceleration of the Universe. Second, there is no dark energy phase in the contracting era. The model has a long-standing dust contraction, where space-time is almost flat in its asymptotic past, allowing the prescription of an adiabatic Minkowski vacuum as the initial state for quantum cosmological perturbations and avoiding the problem concerning the quantum vacuum prescription for the initial quantum state of cosmological perturbations when dark energy is present. Hence, the dBB quantum theory yields an example where vacuum initial conditions for quantum cosmological perturbations can be easily imposed in bouncing models with dark energy. {Other physical effects can lead to bouncing models with a dark energy phase~\cite{leh,caiU,od}, with similar properties. The relevant aspect of the present model is that a single canonical scalar field, with a quite simple potential, was capable of modeling not only a dark energy phase at large scales in the expanding phase of the bouncing model but also a pressureless field that dominates the asymptotic past of the contracting phase of the same model.}
Having solved the background Wheeler--DeWitt Equation~\eqref{minibacks1} and found the relevant background Bohmian trajectories, let us now calculate the amplitudes of scalar perturbations and primordial gravitational waves in this background.
As we have seen in Section \ref{sec3}, quantum primordial gravitational waves are described by the variable $\mu$, whose modes satisfy similar equations to the Mukhanov--Sasaki variable mode $v_k$, with the scale factor $a$ playing the role of $z$:
\begin{equation}
\label{mode-v2}
v_{\bf k}'' + \left(k^2-\frac{z''}{z}\right) v_{\bf k}=0 ,
\end{equation}
\begin{equation}
\label{mode-f2}
\mu_{\bf k}'' + \left(k^2-\frac{a''}{a}\right) \mu_{\bf k}=0 .
\end{equation}
Adiabatic vacuum initial conditions are set using the mode function \eqref{v}, which applies to both $v_k$ and $\mu_k$. The calculations in the Bohmian background were performed in~\cite{bacalhau17}.
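To illustrate how such calculations proceed in practice, the sketch below (my construction; the toy bounce $a(\eta)=\sqrt{1+\eta^2}$ is an assumed profile, not the Bohmian trajectory of~\cite{bacalhau17}) integrates the tensor mode Equation~\eqref{mode-f2} from vacuum initial conditions through the bounce:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0e-2                                 # comoving wavenumber (illustrative)

def a(eta):                                # toy symmetric bounce, a_b = 1 (assumed)
    return np.sqrt(1.0 + eta**2)

def a_dd_over_a(eta):                      # a''/a for the toy a(eta) above
    return 1.0 / (1.0 + eta**2)**2

def rhs(eta, y):                           # mu'' = -(k^2 - a''/a) mu, Eq. (mode-f2)
    mu, dmu = y
    return [dmu, -(k**2 - a_dd_over_a(eta)) * mu]

eta_i = -1.0e3                             # deep in contraction: k^2 >> a''/a there
mu0  = np.cos(k * eta_i) / np.sqrt(2 * k)  # real part of e^{-ik eta}/sqrt(2k)
dmu0 = -k * np.sin(k * eta_i) / np.sqrt(2 * k)

sol = solve_ivp(rhs, (eta_i, 1.0e3), [mu0, dmu0], rtol=1e-9, atol=1e-12)
m, dm = sol.y[0, -1], sol.y[1, -1]
amp = np.sqrt(m**2 + (dm / k)**2)          # envelope of mu_k once k^2 >> a''/a again
print(f"|mu_k| amplification across the bounce ~ {amp * np.sqrt(2 * k):.2f}")
\end{verbatim}
The same integrator, with $z$ in place of $a$, applies to the scalar mode Equation~\eqref{mode-v2}; the physical results quoted below come, of course, from the actual Bohmian background.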
In order to qualitatively understand the final results, let us discuss what happens near the quantum bounce. Approaching the bounce in the contracting phase, the perturbation reaches the super-Hubble behavior, where $z''/z \gg k^2$ and $a''/a \gg k^2$. As shown in Section~\ref{sec3}, in this regime, the solutions for the scalar and tensor perturbations at leading order in a $k^2$ expansion read
\begin{align}\label{solHa}
{\zeta}_k \equiv \frac{v_k}{z} &\approx A^{(1)}_k + A^{(2)}_k \frac{1}{R_H}\int \frac{\mathrm{d}{}\tau}{x^2a^3}, \\
{h}_k \equiv \frac{\mu_k}{a} &\approx B^{(1)}_k + B^{(2)}_k \frac{1}{R_H}\int \frac{\mathrm{d}{}\tau}{a^3},
\end{align}
where $x$ was defined in Equation~\eqref{mud1}.
In the classical contracting phase of case B, one has $0< x< 1/\sqrt{2}$; hence, the evolution of $\zeta_k$ and $h_k$ is very similar, since the two expressions in Equation~\eqref{solHa} differ only by the presence of $x$, implying that $r = T/S \approx 1$. This is the origin of the problem with classical bouncing models with canonical scalar fields. In a quantum bounce, however, the classical Friedmann equations
are no longer satisfied, the evolution is no longer restricted to the circle in Figure~\ref{phase-classical}, and $x$ can assume any value. Indeed, in Figure~\ref{quantum-bounce2}, one can see that there are Bohmian trajectories where $x\propto d\varphi/d\alpha$ is very small. Hence, in this period, the scalar perturbation amplitudes can increase relative to the tensor perturbation amplitudes. Indeed,
this was calculated numerically, and the results are shown in Figure~\ref{zeta_h}. One can see a sharp increase in the scalar perturbation amplitude
around $|\alpha-\alpha_b| \approx 10^{-1}$, where $\alpha_b$ is the value of $\alpha$ at the bounce.
\begin{figure}[H]
\includegraphics[width=.7\columnwidth]{quantum-trajectory.pdf}
\caption{Possible Bohmian trajectories associated with the canonical scalar field with exponential potential. The trajectories yielding relevant amplification of scalar perturbations are set 1 and set 2. The bounces are not deep, but they are steep, with very small $x$.} \label{quantum-bounce2}
\end{figure}
\vspace{-12pt}
\begin{figure}[H]
\includegraphics[width=0.6\textwidth]{zeta_h_evol_set2.pdf}
\caption{Evolution of scalar and tensor perturbations in the background of case B. Scalar
and tensor perturbations grow almost at the same rate during classical contraction, but
at the quantum bounce, the scalar perturbations are enormously enhanced over the tensor perturbations
due to the quantum effects (shown in the inset of the figure). After the bounce, the perturbations
get frozen. The final amplitudes of both perturbations are compatible with observations. The indices
$a$ and $b$ refer to the real and imaginary parts of the perturbation amplitudes.}
\label{zeta_h}
\end{figure}
This is a remarkable result. It shows that features of quantum Bohmian trajectories can lead to observational consequences and explain involved cosmological issues, such as the unwanted large ratios of tensor to scalar
perturbation amplitudes that plague classical bouncing models with canonical scalar fields. Hence, it is a concrete example of how a quantum cosmological effect can be amplified to yield sound observable consequences.
The free parameters of the theory can be adjusted to yield the right amplitudes and
spectral indices of scalar and tensor perturbations. {From Planck observations, one obtains $n_s = 0.9652\pm 0.0042$, implying that $\lambda^2 = 2.9914 \pm 0.0010$. The amplitude of scalar perturbations, $A_s \approx 2.1 \times 10^{-9}$, can be obtained if the curvature scale at the bounce is around $10^3 l_p$. However, the bounce must be steep in order to obtain sufficient amplification of scalar perturbations over tensor perturbations, as shown in Figure~\ref{zeta_h}; see Figure~\ref{quantum-bounce2} and~\cite{bacalhau17} for details.} Hence, applying the dBB quantum theory to quantum cosmology made it possible to obtain a simple and sensible bouncing model with dark energy behavior in the expanding phase, and
correct and well-defined perturbation amplitudes of quantum mechanical origin.
\section{The Quantum-to-Classical Transition of Quantum Cosmological Perturbations}\label{sec5}
As we have seen in Section \ref{sec3}, in both bouncing and inflationary models, the seeds of structure in the Universe are the quantum
fluctuations of an adiabatic vacuum, which is defined when the wavelengths of cosmological interest are deep inside the sound Hubble radius, either in the slow contracting phase of a very large and rarefied universe in the far past of bouncing models or in the quasi-de Sitter expansion of inflationary cosmology. During the evolution of
the Universe, these quantum vacuum fluctuations must become classical fluctuations, as the structures present in the real Universe (galaxies, clusters of galaxies, etc.) are classical.
In the context of the Copenhagen interpretation, it is rather difficult, if not impossible,
to explain this transition. The adiabatic vacuum is a homogeneous and isotropic quantum state. For instance, the mean value of the curvature perturbation $\zeta(x)$ squared in the adiabatic vacuum state $|0\rangle$ is homogeneous:
\begin{equation}
\langle0|\zeta^2(x)|0\rangle = \langle0|T^{\dagger}\zeta(x)T T^{\dagger}\zeta(x)T |0\rangle=\langle0|\zeta^2(x+\delta)|0\rangle,
\end{equation}
where $T$ is the $\delta$ translation unitary operator, and the vacuum state satisfies $T|0\rangle = |0\rangle$.
From another point of view, the temperature anisotropies that are measured in the CMBR~\cite{CMB}, originating from the Sachs--Wolfe effect, are obtained from the Bardeen potential $\Phi$, but how should we understand $\Phi$ in this calculation: as a mean value of the quantum operator corresponding to $\Phi$ (which is zero in the state $|0\rangle$) or as a particular realization of it? And if the latter, which one?
The usual attempts to address these issues argue that, in inflation, the vacuum state is squeezed, yielding a positive Wigner distribution in phase space, which looks like a classical stochastic distribution of realizations of the Universe, with different inhomogeneous configurations. Decoherence avoids interference among the different realizations.
These arguments were severely criticized by many authors~\cite{lyth,mukh-book,weinberg-book,sudarsky}. The quantum state, although squeezed, is still homogeneous, so what breaks its homogeneity? What is the environment of the perturbations in the decoherence picture? The most fundamental question is as follows: in the Copenhagen interpretation, different potentialities are not realities, so how does one of the potentialities become our real Universe? How does a single outcome emerge from the many possible realizations, without a collapse of the wave function, or what defines the role of a measurement in the early Universe? This main issue is ultimately connected with the measurement problem in quantum mechanics, which, as commented on in the Introduction, becomes acute when the physical system is the Universe under the Copenhagen view. It cannot be solved by the arguments above without a collapse postulate, which does not make sense in the physical situation we are facing: we cannot collapse the perturbation wave function because we could not exist without stars!
The dBB quantum theory provides a simple and elegant solution to this very important problem. First of all, remember that, besides the quantum state, there is an actual quantum field describing the cosmological perturbations, which, depending on its initial configuration, will distinguish one of the possible realizations of the Universe with respect to the others, breaking the symmetry of the quantum state. As explained in Section \ref{sec2}, there is no collapse, but one realization is selected by the evolution of the actual perturbation field. Secondly, as will be shown in the sequel, this quantum evolution becomes classical while the Universe evolves, either in inflation or in bouncing models.
As we have seen in Section \ref{sec3}, the Schr\"odinger equation for the perturbations reads
\begin{equation}
\label{xo2}
\textrm i \frac{\partial \Psi(v,\eta)}{\partial \eta} = \frac{1}{2}\int d^3x \left[ \hat{\pi}^2 + \hat{v}^{,i}\hat{v}_{,i}+ \frac{z'}{z} \left( \hat{\pi}\hat{v}+ \hat{v}\hat{\pi}\right)\right] \Psi(v,\eta),
\end{equation}
where $z=a\varphi '/\cal{H}$ is a background function coming from either an inflationary model or a quantum Bohmian trajectory (which may be nonsingular with a bounce), as we will see in the next section.
Going to the Fourier modes $v_{\bf k}$ of the Mukhanov--Sasaki variable,
\begin{equation}
\label{mode}
v({\bf x})=\int{\frac{d^3k}{(2\pi)^{3/2}}v_{\bf k} \textrm e^{\textrm i {\bf k} \cdot {\bf x}}},
\end{equation}
and because of linearity, one can set the product wave function
\begin{equation}
\label{product}
\Psi= \prod_{{\bf k} \in \mathbb{R}^{3+}} \Psi_{\bf k}(v_{\bf k},v^*_{\bf k},\eta),
\end{equation}
where each factor $\Psi_{\bf k}$ satisfies the Schr\"odinger equation
\begin{equation}
\label{sch}
\textrm i\frac{\partial\Psi_{\bf k}}{\partial\eta}=
\left[ -\frac{\partial^2}{\partial v_{\bf k}^*\partial v_{\bf k}}+
k^2 v_{\bf k}^* v_{\bf k}
- \textrm i\frac{z'}{z}\left(\frac{\partial}{\partial v_{\bf k}^*}v_{\bf k}^*+
v_{\bf k}\frac{\partial}{\partial v_{\bf k}}\right)\right]\Psi_{\bf k}.
\end{equation}
The guidance equations are
\begin{equation}
\label{guidance}
v'_{\bf k}= \frac{\partial S_{\bf k}}{\partial v^*_{\bf k}}+\frac{z'}{z}v_{\bf k} .
\end{equation}
The wave function $\Psi_{\bf k}$ associated with the adiabatic vacuum wave functional given by Equation~\eqref{product} reads (see~\cite{polarski} for details)
\end{paracol}
\begin{equation}
\label{psi2}
\Psi_{\bf k} = \frac{1}
{\sqrt{\sqrt{2\pi}|f_k(\eta)|}} \exp{\left\{-\frac{1}{2|f_k(\eta)|^2}|v_{\bf k}|^2 + i \left[\left(\frac{|f_k(\eta)|'}{|f_k(\eta)|}-
\frac{z'}{z}\right)|v_{\bf k}|^2-
\int^\eta \frac{d {\tilde \eta}}{2|f_k({\tilde \eta})|^2}\right]\right\}} ,
\end{equation}
\begin{paracol}{2}
\switchcolumn
where $f_k$ is a solution of the classical mode equation
\begin{equation}
\label{mode-f}
f_{\bf k}'' + \left(k^2-\frac{z''}{z}\right) f_{\bf k}=0,
\end{equation}
with initial conditions $f_k(\eta_i) = \exp{(-i k\eta_i)}/\sqrt{2k}$, where $|\eta_i|$ is an early time in the contracting phase where $k^2 \gg z''/z$ \footnote{In this section, I will denote the Mukhanov--Sasaki mode $v_k$ of Section \ref{sec4} by $f_k$, reserving the name $v_k$ for the Bohmian mode that we are now discussing.}. This state is homogeneous and isotropic. Note that, around $\eta_i$, the wave function $\Psi_k$ reduces to the usual harmonic oscillator ground state wave function for the mode $k$, with ground state energy $E_k=k$.
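To see the ground-state limit explicitly (a short check using Equation~\eqref{psi2}): near $\eta_i$ one has $|f_k|\approx 1/\sqrt{2k}$, hence $|f_k|'\approx 0$, and $|z'/z|\ll k$ there, so that
\begin{equation*}
\Psi_{\bf k}\propto \exp\left(-k|v_{\bf k}|^2\right)\exp\left(-\textrm i k\eta\right),
\end{equation*}
a Gaussian in $v_{\bf k}$ with stationary phase $\textrm e^{-\textrm i E_k\eta}$, $E_k=k$.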
The guidance equations can be integrated to give
\begin{equation}
\label{soly}
v_{\bf k}(\eta) = v_{\bf k}(\eta_i)\frac{|f_k(\eta)|}{|f_k(\eta_i)|},
\end{equation}
independently of the particular form of $f_k(\eta)$.
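Indeed, the integration is a one-line step: the phase of Equation~\eqref{psi2} is $S_{\bf k}=\left(|f_k|'/|f_k|-z'/z\right)|v_{\bf k}|^2+\dots$, so $\partial S_{\bf k}/\partial v^*_{\bf k}=\left(|f_k|'/|f_k|-z'/z\right)v_{\bf k}$, and the guidance Equation~\eqref{guidance} reduces to
\begin{equation*}
v'_{\bf k}=\frac{|f_k|'}{|f_k|}\,v_{\bf k},
\end{equation*}
whose solution is Equation~\eqref{soly}.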
When $k^2 \gg z''/z$, we have seen that, in either inflation or the contracting phase of the bounce scenario, the solution of Equation~\eqref{mode-f} can usually be approximated by
\begin{equation}
\label{fka}
f_k(\eta) \sim \textrm e^{- \textrm i k \eta}\left(1 + \frac{A_k}{\eta} + \mathcal{O}(\eta^{-2}) \right).
\end{equation}
In the simple solutions presented in Section \ref{sec3}, Equation~\eqref{fka} comes from asymptotic Hankel function expansions.
By inserting Equation~\eqref{fka} into Equation~\eqref{soly}, one obtains, for the Bohmian modes,
\begin{equation}
v_{\bf k}(\eta) \sim \left( 1+\frac{{\rm Re} A_k}{\eta} + \dots \right).
\end{equation}
Note that $v_{\bf k}$ is approximately constant, as is usual for Bohmian trajectories corresponding to the ground state of a harmonic oscillator. Hence, the Bohmian mode is completely different from the classical mode: the first is almost static, while the second oscillates. The quantum perturbation field is genuinely quantum.
When the modes get deep inside the potential, $k^2 \ll z''/z$, we learned from Section \ref{sec4} that the classical mode $f_{\bf k}$ is a combination of power-law solutions, which soon becomes dominated by a growing mode. Hence, one has
\begin{equation}
\label{yqq}
f_k (\eta) \sim A_k \eta ^{\beta},
\end{equation}
where $\beta<0$ \footnote{In inflation, this result is direct, while for bouncing models, some care must be taken with the interchange between growing and decaying modes after the bounce, but in the end, the result is the same; see~\cite{qtc2}.}. Hence, as $|f_k|$ equals $f_k$ up to a time-independent complex factor, Equation~\eqref{soly} shows that the Bohmian modes evolve in the same way as the classical modes in this era. The classical limit has been achieved, long before non-linear structures begin to form.
One can also use the quantum potential to investigate the classical limit, constructing it from the wave function Equation~\eqref{psi2} in both eras. It was explicitly shown in~\cite{qtc1,qtc2} that, indeed, when $k^2 \gg z''/z$, the quantum potential dominates the evolution of the perturbations, while for $k^2 \ll z''/z,$ it becomes negligible with respect to the classical potential.
In order to obtain the statistical prediction, one can write the Bohmian field as $v(\eta,{\bf x};v_i)$ such that $v(\eta_i,{\bf x};v_i) = v_i({\bf x})$. If the initial field $v_i$ is distributed according to the quantum equilibrium distribution $|\Psi(v_i,\eta_i)|^2$, we have seen that $v(\eta,{\bf x};v_i)$ will be distributed according to $|\Psi(v,\eta)|^2$. This property is called equivariance. For such an equilibrium ensemble, we can consider the two-point correlation function
\begin{eqnarray}
\label{2-point}
&&\left\langle v(\eta,{\bf x})v(\eta,{\bf x}+{\bf r})\right\rangle_{\rm B} \nonumber \\
&=& \int \mathcal{D} v_i |\Psi(v_i, \eta_i)|^2 v(\eta,{\bf x};v_i) v(\eta,{\bf x} + {\bf r};v_i) \nonumber \\
&=& \int \mathcal{D} v |\Psi(v, \eta)|^2 v({\bf x}) v({\bf x} + {\bf r}).
\end{eqnarray}
The second line expresses the integration over the ensemble of possible initial configurations with distribution $|\Psi(v_i,\eta_i)|^2$, and the step to the third line is a consequence of equivariance.
Using Equations~(\ref{product}) and (\ref{psi2}), one obtains
\begin{equation}
\label{twopoint}
\left\langle {v}({\bf x},\eta) {v}({\bf x} + {\bf r},\eta)\right\rangle_B = \frac{1}{2\pi^2}\int k^2 d{k} \frac{\sin{kr}}{kr} |f_{k}(\eta)|^2 \equiv \frac{1}{2\pi^2}\int d\ln{k} \frac{\sin{kr}}{kr} P(k,\eta),
\end{equation}
where $P(k,\eta) = k^3 |f_{k}(\eta)|^2$ is the power spectrum of $v$.
Note that
\begin{equation}
\label{two-2}
\left\langle {v}({\bf x},\eta) {v}({\bf x} + {\bf r},\eta)\right\rangle_B=\left\langle \hat{v}({\bf x},\eta) \hat{v}({\bf x} + {\bf r},\eta)\right\rangle,
\end{equation}
where the right-hand side is the usual two-point correlation function in the Heisenberg representation, calculated in Section \ref{sec4}.
One can object that Equation~\eqref{2-point} is an average over possible realizations of the Universe and we just see one universe. This can be overcome with the argument that the width of the Gaussian distribution of temperature correlations in the CMBR is small for small angles; {hence, a measurement related to these small correlation angle temperature--temperature anisotropies must be very close to the mean value. Note that, for larger angles (or larger cosmological scales), this is no longer the case, leading to the so-called cosmic variance (a larger imprecision in these larger angle observations).} For details, see~\cite{mukh-book}.
In conclusion, the dBB approach explains, in a very simple and clear way, the quantum-to-classical transition of quantum cosmological perturbations, conceptually and qualitatively, solving an ancient deep problem concerning the evolution of cosmological perturbations of quantum mechanical origin. For other approaches and other quantum effects in the CMBR, see~\cite{cmbq1,cmbq2,cmbq3,cmbq4,cmbq5}.
\section{Discussion and Conclusions}\label{sec6}
As we have seen in this review, de Broglie--Bohm (dBB) quantum theory is very
suitable for quantum cosmology. Many of the issues that plagued the subject for a long time simply disappear:
\begin{itemize}
\item [(1)] The measurement problem is naturally solved, without the necessity of invoking the presence of an external agent outside the quantum physical system, which does not make sense when the physical system is the whole Universe.
\item[(2)] The fact that the usual quantum equations for the wave function of the Universe that emerge from many approaches to quantum gravity do not present a Schr\"odinger form makes it difficult to physically interpret the wave function of the Universe, especially in probabilistic terms~\cite{kuchar,kuchar2}. In the dBB theory, however, the wave function of the Universe $\Psi$ yields the guidance equations, which provide the time evolution of all the quantum particles and fields present in the Universe. Hence, one can assign a nomological interpretation to $\Psi$, as giving the laws of motion for the quantum degrees of freedom, in the same way as Hamiltonians and Lagrangians do. There is no need to talk about probabilities at this level; hence, the quantum equations for $\Psi$ may have any form. When dealing with subsystems in the Universe, one can construct the conditional wave function to describe this subset of fields and particles, which may satisfy a Schr\"odinger-like equation under reasonable assumptions, and a natural probabilistic interpretation in terms of the Born rule emerges for this conditional wave function.
\item[(3)] There is the so-called problem of time in quantum cosmology, as it seems that the quantum theory is timeless~\cite{kuchar2}. This issue is intimately connected with the second one. In the dBB quantum approach, the guidance equations yield a parametric evolution for the fields. Note, however, that the space-time structure that emerges from orderly stacking the fields along this parameter may be very contrived, but it can be calculated; see~\cite{pinto-neto1} for details. Additionally, when going to subsystems described by the conditional wave function, where a Schr\"odinger-like equation holds, a time evolution for the subsystem quantum state emerges.
\item[(4)] As, in the dBB theory, the Bohmian trajectories emerge, the characterization of quantum singularities becomes clear. For instance, in the quantum cosmological models discussed here, the background model is said to be non-singular if the Bohmian trajectory of the scale factor satisfies $a(t)\neq 0$ for all $t$.
\item[(5)] The classical limit is easily obtained, either by the inspection of the quantum potential or by direct comparison between the classical and Bohmian trajectories.
\end{itemize}
The features of the dBB theory, which naturally solve the issues of quantum cosmology presented above, have many important consequences:
\begin{itemize}
\item[(i)] Feature (1) yields a clear understanding of a long standing problem, which is the quantum-to-classical transition of quantum cosmological perturbations in inflation and bouncing models. This was discussed in Section \ref{sec5}.
\item[(ii)] Feature (4) allows a simple identification of non-singular quantum models, as shown in Section \ref{sec4}. All of them present a regular bounce.
\item[(iii)] All the features above yield simple equations for quantum perturbations in quantum backgrounds, which is not an easy task under other approaches~\cite{halli}. These simple equations could be solved, providing sensible bouncing models with inhomogeneous perturbations, in which the presence of a dust fluid (dark matter?) yields an almost scale-invariant spectrum of perturbations, as observed, with the correct amplitudes. Dark energy can also be included, as in the scalar field model of Section \ref{sec4}. In this model, we have seen that a quantum cosmological effect becomes very relevant during the quantum bounce, leading to observable consequences which solve a conflict with observational results that cannot be solved in classical terms, rendering it a viable model to be developed.
\item[(iv)] Feature (5) makes it straightforward to evaluate the parameter limits under which the standard classical Friedmann solution arises from a quantum Bohmian solution.
\end{itemize}
There are many routes of investigation to be deepened, and many new ones to be followed. Some of them are:
\begin{itemize}
\item[(a)] The angular power spectrum of the temperature--temperature correlation function, and the $E$ and $B$ polarization modes corresponding to the bouncing models described here, and other possibilities, must be calculated in great detail, and compared with the most recent CMB results~\cite{planck}, in order to differentiate these models among themselves and from inflation. Additionally, one could try to find typical fingerprints of a quantum cosmological effect, which cannot be found by other methods. One promising example is the scalar field model presented in Section \ref{sec4}.
\item[(b)] In the analysis of more elaborate models, some new observables must be calculated. For instance, in the two-fluid model described in Section \ref{sec5}, one needs to calculate the entropy perturbations. In preliminary calculations~\cite{2-fluids}, as the entropy effective sound velocity is given by
\begin{equation}
\label{sound-ent}
c_e^2 = \frac{w(\rho_r + p_r) + (\rho_m + p_m)/3}{\rho_T + p_T},
\end{equation}
the large scale perturbations become super-Hubble in the dust-dominated era, when $c_e^2=1/3$, and the short scale perturbations in the radiation-dominated era, when $c_e^2=w\approx 0$. Hence, in contrast to the adiabatic perturbations, the large scale entropy perturbations are very small compared to adiabatic perturbations, which is in agreement with observations, but they may be relevant at small scales. Hence, even knowing that small scale perturbations are suppressed by Silk damping, some imprint of this effect may be present. Furthermore, they may also slightly affect the spectral index of large scale adiabatic perturbations. This is work in progress.
\item[(c)] The role of dark energy in bouncing models is very important to understand. In Section \ref{sec5}, I presented a possible solution to the issues raised in Section \ref{sec4}, but there are many other possibilities. In the case that dark energy is a cosmological constant, the problem becomes more contrived, with the possibility of observational consequences. Note that bouncing models offer a unique possibility to learn about dark energy through the primordial power spectrum of cosmological perturbations, which is not the case for inflation.
\item[(d)] The effects of a quantum bounce on non-gaussianities are also a very relevant investigation, with possible observational consequences~\cite{cai2,agullo}.
\item[(e)] The dBB quantum theory, in principle, allows probability distributions that do not obey the Born rule, i.e., that are out of quantum equilibrium. It is difficult to find ordinary physical systems in this situation. In cosmology, this may not be the case. For instance, long wavelength perturbations originating from a vacuum quantum state do not relax quickly to quantum equilibrium, yielding a possible departure from
quantum mechanical predictions~\cite{PVV}. Additionally, one could relax the conditions imposed in the conditional wave function explained in Section V, which would lead to corrections to the effective Schr\"odinger equation for the perturbations in the quantum background regime and a departure from quantum
equilibrium, with possible observational consequences.
\end{itemize}
In conclusion, quantum theory can indeed help cosmology in solving the singularity problem, which plagues all GR classical solutions. Furthermore, in a reverse way, cosmology can also help in understanding quantum theory more deeply. For instance, the alternative Many Worlds Interpretation~\cite{eve} was constructed because the Copenhagen interpretation cannot be applied to quantum cosmology. Additionally, we have seen that another alternative, the dBB quantum theory, yields possible testable predictions and new effects that may distinguish it from other quantum approaches. These possible tests appear only in a cosmological context. Hence, not only does quantum theory help cosmology but, also, cosmology can improve quantum theory.
Let me end with a Louis de Broglie quote~\cite{undivided}:
\begin{quote}
“To try to stop all attempts to pass beyond the
present viewpoint of quantum physics could be
very dangerous for the progress of science and
would furthermore be contrary to the lessons
we may learn from the history of science.
This teaches us, in effect, that the actual state
of our knowledge is always provisional
and that there must be, beyond what is actually
known, immense new regions to~discover”.
\end{quote}
\vspace{6pt}
\funding{This research received no external funding.}
\acknowledgments{NPN acknowledges the support of CNPq of Brazil under grant PQ-IB
309073/\mbox{2017-0.}}
\conflictsofinterest{The authors declare no conflict of interest.}
\end{paracol}
\reftitle{References}
\section{Introduction}
\label{sect:intro}
The surfaces of strong topological insulators (TI) like Bi$_2$Se$_3$, Bi$_2$Te$_3$~and Sb$_2$Te$_3$~carry spin-locked non-degenerate helical surface
states. As long as the surfaces are isolated, i.e., their distance is much larger than the surface state decay length
into the bulk, the 2D dispersion is described by isotropic Dirac cones to lowest order in momentum counted from
the time-reversal invariant (TRI) points, and by warped cones with sixfold symmetry due to higher order terms. They have been verified
indirectly by magnetotransport measurements \cite{taskin:11,taskin:12} as well as directly
by photoemission \cite{hsieh:09,chen:09,kuroda:10,hoefer:14} and surface tunneling \cite{roushan:09,zhang:09,alpichshev:10,okada:11,cheng:12,kohsaka:17,lee:09} experiments; the latter were explained in numerous
theoretical investigations \cite{lee:09,zhou:09,ruessmann:20}.
This simple situation changes in an interesting manner when one considers TI thin films with thickness small enough so that
inter-surface hybridisation of bottom (B) and top (T) surface states occurs. Due to the interaction topological protection for
states close to the Dirac point is lifted and a hybridisation gap in the excitation spectrum opens. This has indeed been verified directly by photoemission \cite{zhang:10,neupane:14} but also by a sudden breakdown of weak anti-localisation in magnetotransport \cite{taskin:11,taskin:12} as function of film thickness when the latter falls below about five quintuple layers (QL). The effective hybridisation between the surface states has been calculated \cite{lu:10,asmar:18,asmar:21} solving the thin film boundary value problem for an effective $\bk\cdot{\bf p}} \def\hg{\rm\hat{g}$ Hamiltonian of the bulk. Due to the constrained film geometry
the effective inter-surface hybridisation $t(d)$ not only decreases exponentially with film thickness $d$ but also generally oscillates
as function of thickness d with an oscillation period depending on the material parameters. This should be again visible in the photoemission gap and in the concommitant oscillation of quasiparticle interference (QPI) patterns predicted in Ref. \onlinecite{thalmeier:20}.\\
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\columnwidth]{stepscheme}
\end{center}
\vspace{-0.5cm}
\caption{Schematic view of the TI step configuration due to the profiled substrate; for simplicity, equal $x$ and $y$ dimensions $2L_0$ of the device are chosen. The TI film thicknesses $d_L$ and $d_R$ to the left and right of the step lead to different hybridisation strengths $t_L$ and $t_R$ between the Dirac cones on the top (T) and bottom (B) surfaces of the TI film. For $t_L\cdot t_R <0$ this leads to the appearance
of a 1D bound state exponentially localized at the step $y\approx0$ and extended along $x$. If the substrate is an s-wave superconductor, Majorana zero modes may appear at the u, d positions. For clarity, film thickness and step size are exaggerated.
}
\label{fig:stepscheme}
\end{figure}
Aside from the modulated excitation gap at the Dirac point the oscillation of $t(d)$ gives rise to another novel and highly interesting scenario which is the subject of this investigation.
Suppose the film thickness is not constant but changes in a steplike manner at a certain lateral position (see Fig.~\ref{fig:stepscheme}). If the thickness to the left $(d_L)$ and
right $(d_R)$ of the step is chosen in such a way that the hybridisations $t_L(d_L)$ and $t_R(d_R)$ have {\it opposite} signs on the two sides of the step, then a helical non-degenerate bound state within the hybridisation gap may appear, which is spatially localized at the step and has a linear dispersion. If found experimentally, this would entail a further interesting speculative possibility: once the substrate of the step-profiled TI is a simple s-wave spin singlet superconductor, the proximity effect will open a superconducting gap in the dispersion of the nondegenerate (spin-locked) step state. This is a typical situation that can create Majorana end states. Such scenarios have previously been investigated, e.g., with TI nanowires on a SC substrate \cite{sau:21,sau:10,stanescu:13,cook:11,cook:12,das:12}, which may need the application of a magnetic flux through the wire. In alternative devices, the wire has to be itself ferromagnetic \cite{livanas:19}, or heterostructures with ferromagnetic layers are used \cite{stanescu:13}. In the present scenario, no magnetic field has to be applied; a properly chosen step of the TI profile on the substrate is sufficient to create the possibility of Majorana states at the step ends. This seems to be a particularly simple way to realise these states in a realistic geometry.
\section{Isotropic Dirac surface state model for TI}
\label{sect:model}
As a starting point for the homogeneous thin film we use the isotropic TI surface state model on the
top (T) and bottom (B) surface of the film. We will use $\alpha, \sigma, \kappa$ indices and associated
Pauli matrix vectors $({\bm\alpha},{\bm\sigma},{\bm\kappa})$ to denote two dimensional T/B surface, $|\uparrow\rangle,|\downarrow\rangle$ spin and $|\pm1\rangle$
helicity spaces. The unit in each space is denoted by $(\alpha_0,\sigma_0,\kappa_0)$.
On a single isolated surface, using the operator basis $\psi^{\dag}_\bk=(c^{\dag}_{\ua\bk},c^{\dag}_{\downarrow\bk})$, the isotropic 2D model Hamiltonian is given by
\begin{equation}
\begin{aligned}
\!
{\cal H}=
\!
\sum_\bk \psi^\dag_\bk h_\bk\psi_\bk;\;\;
h_\bk=v(\bk\times{\bm\sigma})\cdot\hbz =
v\left(
\begin{array}{cc}
0& -ik_-\\
ik_+& 0
\end{array}
\right),
\label{eqn:hamat}
\end{aligned}
\end{equation}
where $\bk=(k_x,k_y)$ is the wave vector counted from the TRI Dirac point, ${\bm\sigma}$ the spin, $\hbz$ the surface normal and
$v$ the velocity; furthermore, $k_\pm=k_x\pm i k_y$ and $k=|\bk|=(k_x^2+k_y^2)^\frac{1}{2}$. This expression is proportional to
the helicity operator $\kappa_{\hat{{\bf k}}}=(\bk\times{\bm\sigma})\cdot\hbz/k$, namely $h_\bk=(vk)\kappa_{\hat{{\bf k}}}$.
The eigenvalues or dispersions of
the Dirac cone and associated eigenvectors in the spin basis $|\uparrow\rangle,|\downarrow\rangle$ are described by
\begin{eqnarray}
\epsilon^\pm_\bk=\pm(vk);\;\;\;
S_\bk=\frac{1}{\sqrt{2}}
\left(
\begin{array}{cc}
1& ie^{-i\theta_\bk}\\
ie^{i\theta_\bk}& 1
\end{array}
\right),
\end{eqnarray}
where the columns of the unitary matrix $S_\bk$ are the helical eigenstates $|\pm1\rangle$. In this basis the helicity operator
is simply represented by the Pauli matrix $\kappa_{\hat{{\bf k}}}=\kappa_z$. Furthermore $\theta_\bk=\tan^{-1}(k_y/k_x)$ is the azimuthal angle of the \bk-vector.
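As a short check (using $k_\pm=k\,e^{\pm i\theta_\bk}$), the columns of $S_\bk$ are indeed eigenvectors of $h_\bk$:
\begin{equation*}
h_\bk
\left(\begin{array}{c} 1 \\ ie^{i\theta_\bk}\end{array}\right)
=vk
\left(\begin{array}{c} 1 \\ ie^{i\theta_\bk}\end{array}\right),
\qquad
h_\bk
\left(\begin{array}{c} ie^{-i\theta_\bk} \\ 1\end{array}\right)
=-vk
\left(\begin{array}{c} ie^{-i\theta_\bk} \\ 1\end{array}\right),
\end{equation*}
so that $S^\dag_\bk h_\bk S_\bk=(vk)\kappa_z$.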
\section{Hybridisation and gap opening of surface states in TI thin films}
\label{sect:film}
In a TI film with thickness $d$ much larger than the surface state decay length, the top and bottom surfaces
may be considered as independent surface states. One only has to keep in mind that the surface normals are
oppositely oriented with respect to the global $\hbz$ direction, i.e. $\hbz_T=-\hbz_B \equiv\hbz$. Therefore, the helicities at a given
energy are also opposite on the T, B surfaces, according to $\kappa^B_{\hat{{\bf k}}}=-\kappa^T_{\hat{{\bf k}}}$, and
therefore $h^B_\bk=-h^T_\bk\equiv- h_\bk$.
However, when the film thickness is reduced (below a few quintuple layers) the T,B surface states overlap and
a hybridisation of equal-spin (and therefore equal-helicity) eigenstates develops. This problem has been treated
and analyzed in great generality in the work of Asmar et al.~\cite{asmar:18} and also in Ref.~\onlinecite{shan:10}. Here we use a simplified model with a $\bk$-independent effective hybridisation element $t(d)$ that depends, however, on film thickness $d$. The thin film surface states Hamiltonian is
then given in spin representation~\cite{thalmeier:20} using $\Psi^{\dag}_\bk=(c^{T\dag}_{\ua\bk},c^{T\dag}_{\downarrow\bk},c^{B\dag}_{\ua\bk},c^{B\dag}_{\downarrow\bk})$,
%
\begin{equation}
\begin{aligned}
{\cal H}
&
=\sum_\bk \Psi^\dag_\bk \hat{h}_\bk\Psi_\bk;\;\;
\\
\hat{h}_\bk
&
=
v(k_x\sigma_y-k_y\sigma_x)\alpha_z+t\sigma_0\alpha_x
\\
&
=
\left(
\begin{array}{cc}
v(k_x\sigma_y-k_y\sigma_x)& t\sigma_0\\
t \sigma_0& -v(k_x\sigma_y-k_y\sigma_x)
\end{array}
\right),
\end{aligned}
\label{eqn:hamfilm}
\end{equation}
%
or in helicity representation according to
%
\begin{equation}
\begin{aligned}
\hat{h}_\bk=\epsilon_\bk\kappa_z\alpha_z+t\kappa_0\alpha_x=
\left(
\begin{array}{cc}
\epsilon_\bk\kappa_z& t\kappa_0\\
t \kappa_0& -\epsilon_\bk\kappa_z
\end{array}
\right).
\end{aligned}
\end{equation}
%
The eigenvalues of the film Hamiltonian are then obtained as
%
\begin{eqnarray}
E^\pm_\bk=\pm[(vk)^2+t(d)^2]^\frac{1}{2}
,
\label{eqn:enhyb}
\end{eqnarray}
which exhibit a thickness-dependent hybridisation gap $t(d)$ at the Dirac point $\bk=0$. Each of these
dispersion branches is twofold degenerate which is inherited from the (T,B) degeneracy of states in
the uncoupled $(t=0)$ case.
The degeneracy is lifted if the bottom surface experiences an effective bias due to the substrate effect. This
can be described by adding a term $\Delta_{su}\sigma_0\alpha_z$ to Eq.~(\ref{eqn:hamfilm}). Then the
split hybridised surface bands are given by
%
\begin{equation}
\begin{aligned}
E^\pm_{\bk 1,2}=\pm[(|vk|\pm|\Delta_{su}|)^2+t(d)^2]^\frac{1}{2},
\end{aligned}
\end{equation}
where the split-band indices $1,2$ refer to $\pm$ inside the square root. Since the above substrate term can in principle
be cancelled by an applied bias voltage $({\rm eV})$ term at the substrate, we will keep the degenerate thin-film model of Eq.~(\ref{eqn:enhyb}). Furthermore, the chemical potential $\mu$ can be controlled by applying a gate voltage at the TI surface.
\subsection{The oscillation model of hybridisation with film thickness}
\label{subsect:hybosc}
The thickness dependent effective hybridisation $t(d)$ may be obtained from the solution of a subtle
boundary value problem for the thin film \cite{asmar:18}, starting from the $\bk\cdot{\bf p}$ Hamiltonian of the bulk bands.
It may be represented by the phenomenological form \cite{asmar:18,thalmeier:20}
\begin{equation}
\begin{aligned}
t(d)=t_0\exp\bigl(-\frac{d}{d_0}\bigr)\sin\bigl(\frac{d}{d'_0}\bigr).
\label{eqn:tosc}
\end{aligned}
\end{equation}
Here the energy $t_0$ and thickness $d_0,d'_0$ scales are determined by the parameters of bulk bands \cite{asmar:18,thalmeier:20}.
As an example we give the theoretical values for Bi$_2$Te$_3$~in terms of natural units $E^*=0.25$ eV and $1\,\mbox{QL}=10.16$\,\AA, respectively, as $(t_0,d_0,d'_0)=(0.80,1.79,0.3)$. As expected, the expression contains an exponential decay with increasing thickness $d$ but, in order to satisfy the boundary conditions, also an oscillatory term, whose physical origin was derived in Ref.~\cite{asmar:18}. The hybridisation is obtained as a perturbation integral of an effective inter-surface tunneling Hamiltonian connecting the uncoupled surface state wave functions. The decay length of the latter perpendicular to the surface may become a complex number, depending on the bulk band parameters. This leads to an oscillating decay of the wave function which is inherited
by the hybridisation integral. We note that the vanishing of $t(d)$ for thin films at the nodes (Fig.~\ref{fig:hybridisation}) does not mean that the surface states should be considered as decoupled, as for large $d$ in the bulk case; rather, it indicates the destructive interference of the two surface wave functions in the hybridisation integral.
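For concreteness, Eq.~(\ref{eqn:tosc}) with the Bi$_2$Te$_3$~parameters quoted above can be tabulated with a few lines of Python (our illustration); the sign changes between integer thicknesses mark the nodes seen in Fig.~\ref{fig:hybridisation}(a):
\begin{verbatim}
# Evaluate the phenomenological hybridisation t(d) of Eq. (tosc);
# units: E* = 0.25 eV for energy, quintuple layers (QL) for thickness.
import numpy as np

t0, d0, dp0 = 0.80, 1.79, 0.3          # (t_0, d_0, d'_0) for Bi2Te3

def t_hyb(d):
    return t0 * np.exp(-d / d0) * np.sin(d / dp0)

for d in range(1, 7):                   # integer film thicknesses [QL]
    print(d, round(float(t_hyb(d)), 4))
\end{verbatim}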
These thickness oscillations of $t(d)$ play an essential role in the present investigation; their consequences have previously been studied in view of their influence on quasiparticle interference (QPI) patterns \cite{thalmeier:20}. According to the theoretical estimation of parameters \cite{asmar:18,asmar:21}, the oscillations of the gap $|t(d)|$ should be pronounced in Bi$_2$Te$_3$~and Sb$_2$Te$_3$~but not in Bi$_2$Se$_3$.
In the latter only half an oscillation period appears, which is strongly damped by the rapid exponential decay of $t(d)$. The observable quantity is the (rectified) oscillation of the inter-surface hybridisation gap $|t(d)|$. Since it can reasonably only be compared for films with identical surface terminations, one is restricted to a discrete set of values for $t(d)$ with integer multiples of quintuple layers. Therefore weak oscillations of $|t(d)|$ may not easily be identified.
The hybridisation gap opening as function of film thickness with integer number of QL has been investigated by ARPES \cite{zhang:10} for Bi$_2$Se$_3$. The predicted exponential decay of $|t(d)|$ with increasing film thickness was observed but no clear evidence for the half oscillation period was found.
It was argued \cite{thalmeier:20} that a modest change of the theoretical bulk band parameters leading to different values of $d_0, d'_0$ in Eq.~(\ref{eqn:tosc}) could account for the suppression of the half oscillation.
In fact charge transfer and bulk band bending effects due to the substrate have not been included in the theoretical model employed here but may influence the surface state energies \cite{zhang:10} and possibly the above oscillation parameters.
However, the origin of the hybridisation gap oscillation is of universal nature, enforced by boundary conditions due to the constrained geometry of thin films \cite{asmar:18}, and therefore it may appear whenever the bulk band parameters of the 3D TI lead to suitable $d_0, d'_0$ scales with $d_0\gg d'_0$, i.e. a considerable number of oscillations before $t(d)$ decays. Therefore the oscillations should appear in TI materials with more favourable scale parameters, such as predicted for Bi$_2$Te$_3$~or Sb$_2$Te$_3$~(Fig.~\ref{fig:hybridisation}a). So far no systematic film-thickness variation of the hybridisation gap in these materials has been investigated with either ARPES or QPI methods.
Due to their universal origin we are confident that the gap oscillations will eventually be found in thin films of a suitable TI material.
This expectation is the starting point for the following investigation of intriguing appearance of 1D topological states for profiled thin film geometry.
\begin{figure}[t]
\includegraphics[width=0.90\columnwidth]{hybridisation}
\caption{(a) Film thickness dependence of inter-surface hybridisation energy in Bi$_2$Te$_3$~for the theoretical model parameters
$(t_0,d_0,d'_0)=(0.8,1.79,0.3)$ corresponding to Ref.~\onlinecite{asmar:18} (in units of $E^*=0.25$ eV for energy and QL for thicknesses \cite{thalmeier:20}). (b) Contour plot of the function
$g(d_L,d_R)$ in Eq.~(\ref{eqn:existence}). In the coloured region a step bound state exists, while it is absent in the white
regions. Darker colours correspond to more tightly bound wave functions in Fig.~\ref{fig:envelope}. The contour lines denote
pairs $(d_R,d_L)$ with $t_R=-t_L$, leading to a symmetric step bound state around $y=0$. The green dots designate pairs with integer thickness [QL] close to this line of symmetry.
}
\label{fig:hybridisation}
\end{figure}
\section{Creation of 1D step bound states inside the TI thin film gap}
\label{sect:1Dbound}
The TI surface states form linearly dispersing 2D bands or Dirac cones inside the bulk gap $\Delta_b$ (Fig.~\ref{fig:gapping}a) of the
3D TI material, which is due to bulk spin-orbit coupling. For sufficiently thin films the Dirac cones themselves
open a gap given by the size of the inter-surface hybridisation (Fig.~\ref{fig:gapping}b). One might ask whether
the creation of topologically protected in-gap states can be repeated by some means in a `Matrjoschka'-like fashion, creating 1D linearly
dispersing bands within the hybridisation gap of the 2D surface states. One way is to create domain walls of some sort on the surface with
suitable properties. The easiest way to achieve a domain wall is by stepping the thin film (e.g. by stepping the substrate) so that the film thickness and hybridisation are different on the two sides of the step (see illustration in Fig.~\ref{fig:stepscheme}). This possibility will be investigated in the following sections.
\subsection{Boundary and existence conditions and dispersion of bound states}
\label{subsect:1Ddisp}
Let us adopt a step geometry with the step extending along the $x$-direction, separating the left (L) and right (R) regions of the surfaces at $y=0$ and the surface normals oriented parallel to $z$ (Fig.~\ref{fig:stepscheme}). Then, assuming a sharp step the $y$-dependent inter-surface hybridisation is given by
\begin{equation}
\begin{aligned}
t(y)=t_L\Theta(-y)+t_R\Theta(y),
\label{eqn:sharps}
\end{aligned}
\end{equation}
where $t_L=t(d_L)$ and $t_R=t(d_R)$; $2L_0$ denotes the sample length along $x$ and $y$ (Fig.~\ref{fig:stepscheme}). For the moment we keep the relative size of $d_L,d_R$ arbitrary. To find out whether a localised state at
the step develops, one has to replace $k_y\rightarrow (-i\partial_y)$ and $t\rightarrow t(y)$ in the Hamiltonian, leading to the effective 1D problem
(with $k_x$ parallel to the step treated as a fixed parameter) described in spin-surface space by
\begin{equation}
\begin{aligned}
\hat{h}(k_x,y)
=&
v(k_x\sigma_y+i\sigma_x\partial_y)\alpha_z+t(y)\sigma_0\alpha_x
\\
=&
\left(
\begin{array}{cc}
v(k_x\sigma_y+i\sigma_x\partial_y)& t(y)\sigma_0\\
t(y) \sigma_0& -v(k_x\sigma_y+i\sigma_x\partial_y)
\end{array}
\right).
\label{eqn:hamstep}
\end{aligned}
\end{equation}
%
If a localised bound state exists at the step it has to fulfil the envelope equation
\begin{equation}
\begin{aligned}
\hat{h}(k_x,y){\bm\phi}(k_x,y)=E{\bm\phi}(k_x,y)
\label{eqn:envelope}
\end{aligned}
\end{equation}
with $|E|<|t_L|,|t_R|$, i.e., lying inside the hybridisation gap of 2D surface states.
It is convenient to introduce rescaled energies $\hat{E}=E/v$ and $\hat{t}=t/v$ which have the dimension
of a wave number or inverse length (units $\mbox{QL}^{-1}$). The wave function is a four-spinor defined by ($tr$ denoting the transpose):
\begin{equation}
\begin{aligned}
{\bm\phi}^{tr}(k_x,y)=
\Big(
\phi^T_\ua(k_x,y),\phi^T_\downarrow(k_x,y),\phi^B_\ua(k_x,y),\phi^B_\downarrow(k_x,y)
\Big).
\end{aligned}
\end{equation}
For the L,R sides of the step we use the following ansatz to solve Eq.~(\ref{eqn:envelope}) $(\lambda=L,R)$:
\begin{equation}
\begin{aligned}
\phi_\lambda(k_x,y)={\bf a}_\lambda e^{ik_xx}e^{ik_y^\lambda y};\;\;
{\bf a}_\lambda^{tr}=(a^{T\lambda}_\ua,a^{T\lambda}_\downarrow,a^{B\lambda}_\ua,a^{B\lambda}_\downarrow),
\label{eqn:wavedef}
\end{aligned}
\end{equation}
where $k_y^L=-i\kappa_L$ and $k_y^R=i\kappa_R$ are given by the inverse decay lengths $\kappa_R, \kappa_L$ of the bound state localised at the step. Inserting into Eq.~(\ref{eqn:envelope}) we obtain
\begin{equation}
\begin{aligned}
\hat{E}^2=k_x^2-\kappa_\lambda^2+\hat{t}_\lambda^2;\;\;\; \kappa^2_L-\kappa^2_R=\hat{t}^2_L-\hat{t}^2_R
.
\label{eqn:energy}
\end{aligned}
\end{equation}
The second equation follows because the first one has to be fulfilled on both sides L and R simultaneously.
The remaining relation to determine $\kappa_\lambda$ is obtained from the boundary condition at the step
according to $\phi_L(k_x,0)=\phi_R(k_x,0)$.
These $\lambda= L,R$ wave functions for step states with energies $\hat{E}$ (Eq.~(\ref{eqn:energy})) are determined by the
solutions of
\begin{widetext}
\begin{eqnarray}
\left(
\begin{array}{cccc}
-\hat{E}& i(\kappa_\lambda-k_x)&\hat{t}_\lambda &0\\
i(\kappa_\lambda+k_x) & -\hat{E} &0&\hat{t}_\lambda\\
\hat{t}_\lambda&0&-\hat{E}& -i(\kappa_\lambda-k_x)\\
0&\hat{t}_\lambda& -i(\kappa_\lambda+k_x)&-\hat{E}
\end{array}
\right)
\left(
\begin{array}{c}
a_\ua^{T\lambda}\\
a_\downarrow^{T\lambda}\\
a_\ua^{B\lambda}\\
a_\downarrow^{B\lambda}
\end{array}
\right)=0,
\end{eqnarray}
\end{widetext}
The matrix has rank 2 and therefore two components, $a_1\equiv a_\ua^{T\lambda}$ and $a_2\equiv a_\downarrow^{T\lambda}$,
may be considered as free parameters
for the solution. The other two components are obtained as
\begin{equation}
\begin{aligned}
a^{B\lambda}_{\ua}
=&
\frac{\hat{E}}{\hat{t}_\lambda}a_1-\frac{i(\kappa_\lambda-k_x)}{\hat{t}_\lambda}a_2;\;\;\;
\\
a^{B\lambda}_{\downarrow}
=&
\frac{\hat{E}}{\hat{t}_\lambda}a_2-\frac{i(\kappa_\lambda+k_x)}{\hat{t}_\lambda}a_1.
\label{eqn:amprelation}
\end{aligned}
\end{equation}
The ratio $a_1/a_2$ is fixed by the continuity condition $\phi_L(k_x,0)=\phi_R(k_x,0)$. From the two equations above we
obtain
\begin{eqnarray}
\begin{aligned}
\frac{a_1}{a_2}
=&i\frac{\hat{t}_L(\kappa_R+k_x)+\hat{t}_R(\kappa_L-k_x)}{\hat{E}(\hat{t}_R-\hat{t}_L)};
\\
\frac{a_2}{a_1}
=&i\frac{\hat{t}_L(\kappa_R-k_x)+\hat{t}_R(\kappa_L+k_x)}{\hat{E}(\hat{t}_R-\hat{t}_L)}.
\label{eqn:ampratio}
\end{aligned}
\end{eqnarray}
Taking the product and using the symmetrised Eq.~(\ref{eqn:energy}) with $\hat{E}^2=k_x^2-\frac{1}{2}[(\kappa_R^2+\kappa_L^2)-(\hat{t}^2_R+\hat{t}^2_L)]$,
we finally arrive at the second relation
\begin{equation}
\begin{aligned}
\frac{1}{2}\bigl[(\kappa_R^2+\kappa_L^2)-(\hat{t}_R^2+\hat{t}_L^2)\bigr]=
\Bigl(\frac{\hat{t}_L\kappa_R+\hat{t}_R\kappa_L}{\hat{t}_R-\hat{t}_L}\Bigr)^2
.
\label{eqn:bcond}
\end{aligned}
\end{equation}
This equation together with the one in Eq.~(\ref{eqn:energy}) may be solved by
expressing them in terms of the (anti-) symmetrised quantities $\frac{1}{2}(\kappa_R\pm\kappa_L)$.
After some algebra one obtains the simple relations
\begin{equation}
\begin{aligned}
\kappa_R
=&
\frac{1}{2}|\hat{t}_R-\hat{t}_L|+\frac{1}{2}(\hat{t}_R+\hat{t}_L)sign(\hat{t}_R-\hat{t}_L);
\\
\kappa_L
=&
\frac{1}{2}|\hat{t}_R-\hat{t}_L|-\frac{1}{2}(\hat{t}_R+\hat{t}_L)sign(\hat{t}_R-\hat{t}_L).
\end{aligned}
\end{equation}
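These expressions may be verified by direct substitution into Eqs.~(\ref{eqn:energy}) and (\ref{eqn:bcond}); a short symbolic sketch (our addition) for the case $\hat{t}_L<0<\hat{t}_R$ reads:
\begin{verbatim}
# Sympy check: kappa_R = t_R, kappa_L = -t_L solves Eqs. (energy) and
# (bcond) in the case t_L < 0 < t_R (rescaled hybridisations written as t).
import sympy as sp

tL, tR = sp.symbols('t_L t_R', real=True)
kR, kL = tR, -tL

print(sp.simplify((kL**2 - kR**2) - (tL**2 - tR**2)))          # -> 0
print(sp.simplify(((kR**2 + kL**2) - (tR**2 + tL**2))/2
                  - ((tL*kR + tR*kL)/(tR - tL))**2))           # -> 0
\end{verbatim}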
For a localised step state within the thin-film hybridisation gap one must have both $\kappa_R,\kappa_L>0$. It is easy
to see from the above expressions that this is only possible if we have the fundamental relation
\begin{widetext}
\begin{equation}
\begin{aligned}
g(d_R,d_L)=\hat{t}_R(d_R)\cdot\hat{t}_L(d_L)<0 :\;\;\;\;
\left\{
\begin{array}{rl}
\hat{t}_L<0<\hat{t}_R \;\;(\tau_{RL}=+1); \;\;\;& \kappa_R=\hat{t}_R;\;\;\; \kappa_L=-\hat{t}_L=|\hat{t}_L|\\
\hat{t}_R<0<\hat{t}_L \;\;(\tau_{RL}=-1); \;\;\;& \kappa_L=\hat{t}_L;\;\;\; \kappa_R=-\hat{t}_R=|\hat{t}_R|
\label{eqn:existence}
\end{array}
\right.
\end{aligned}
,
\end{equation}
\end{widetext}
where we defined $\tau_{RL}=sign(\hat{t}_R-\hat{t}_L)$. Inserting $\kappa_{R,L}$ for these two cases
into Eq.~(\ref{eqn:energy}) for the step state energy, we simply get, after reordering, the two linearly dispersing
energies:
\begin{equation}
\begin{aligned}
E_{k_x\pm}=\pm(vk_x),
\label{eqn:E1D}
\end{aligned}
\end{equation}
which form two linearly dispersing branches of 1D excitations localised at and moving along the step, with
an energy inside the thin-film hybridisation gaps for $|k_x|<\min\{|\hat{t}_R|,|\hat{t}_L|\}$.
We note that
the dispersion relation, i.e. the velocity $v$, is {\it independent} of the size of the hybridisations $t_L,t_R$ and hence of the associated asymmetric decay of the wave function perpendicular to the step.
This non-degenerate step state as enforced by the boundary conditions exists as long as $\hat{t}(y)$ {\it changes sign} when crossing
the step. It is therefore topologically protected as long as this condition is fulfilled. These 1D topological states
inside the gap of hybridised 2D topological thin film states are schematically shown in Fig.~\ref{fig:gapping}(c).
The velocity $v$ of 1D excitations (red) is asymptotically the same
as that of the gapped and hybridised 2D surface excitations (blue).
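As a numerical illustration (ours), the existence condition can be scanned over integer-QL thickness pairs using the phenomenological $t(d)$ of Eq.~(\ref{eqn:tosc}); wherever $t(d_L)t(d_R)<0$, a step state exists with decay constants $\kappa_{L,R}=|\hat{t}_{L,R}|$:
\begin{verbatim}
# Scan thickness pairs for g(d_R,d_L) = t_R * t_L < 0 (Eq. (existence));
# if satisfied, a step bound state exists with kappa_{L,R} = |t_{L,R}|/v.
import numpy as np

t0, d0, dp0 = 0.80, 1.79, 0.3
t = lambda d: t0 * np.exp(-d / d0) * np.sin(d / dp0)

for dL in range(1, 6):
    for dR in range(dL + 1, 7):
        if t(dL) * t(dR) < 0:          # opposite signs across the step
            print((dL, dR), abs(t(dL)), abs(t(dR)))
\end{verbatim}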
\begin{figure}[t]
\begin{center}
\includegraphics[width=\columnwidth]{gapping}
\end{center}
\vspace{-0.5cm}
\caption{Schematic sequence of 2D surface state and 1D step state gappings. (a) Ungapped 2D helical states on isolated surfaces.
(b) Gapping in homogeneous thin film due to inter-surface hybridisation $t(d)$. (c) Appearance of 1D helical step states (red) in profiled TI thin film with velocity $v$ asymptotically equal to that of 2D hybridised surface states (blue). (d) Opening of SC proximity gap in the 1D step states and appearance of zero energy Majorana end states. Here $\Delta_b$ is the overall 3D bulk gap ($\Delta_b/E^*\simeq 1$ for Bi$_2$Te$_3$). Note that for clarity the various gaps $\Delta_b, 2|t(d)|, 2|\Delta'_0|$ are not drawn to scale.
}
\label{fig:gapping}
\end{figure}
\subsection{Eigenvectors and helicity of the step states}
\label{subsect:helicity}
The four amplitudes ${\bf a}_\lambda$ of the wave function in Eq.~(\ref{eqn:wavedef}) are obtained from
Eqs.~(\ref{eqn:amprelation},\ref{eqn:ampratio}) and the condition that $\langle {\bf a}_\lambda^\dag |{\bf a}_\lambda\rangle=1$. They are
the same on both sides L,R. For the two orthogonal states corresponding to $\hat{E}_\pm(k_x)=\pm k_x$, we obtain
\begin{eqnarray}
{\bf a}_+=\frac{1}{2}
\left(
\begin{array}{c}
1\\
i\\
-\tau_{RL}\\
i\tau_{RL}
\end{array}
\right);\;\;\;
{\bf a}_-=\frac{1}{2}
\left(
\begin{array}{c}
1\\
-i\\
\tau_{RL}\\
i\tau_{RL}
\end{array}
\right)
.
\label{eqn:1Dvector}
\end{eqnarray}
The complete normalised wave functions of the two nondegenerate step states are finally given by
\begin{equation}
\phi_\pm(k_x,y)=\frac{1}{\nu_0}{\bf a}_\pm[e^{\kappa_Ly}\Theta(-y)+ e^{-\kappa_Ry}\Theta(y)]e^{ik_xx},
\label{eqn:stepstate}
\end{equation}
where the normalisation is $\nu_0=[L_0(\kappa^{-1}_L+\kappa^{-1}_R)]^\frac{1}{2}$
with $L_0$ denoting half the step length in $x$-direction and $\kappa_{L,R}$ corresponding to the two possible
cases of Eq.~(\ref{eqn:existence}). Examples of the step wave functions for various integer QL thickness pairs
$(d_R,d_L)$ are given in Fig.~\ref{fig:envelope}(a).
These wave functions are eigenstates of the helicity operator.
Since for the step states $\bk=k_x\hat{{\bf x}}$, the latter is given by $\kappa_z=-\sigma_y\frac{1}{2}(1+\alpha_z)+\sigma_y\frac{1}{2}(1-\alpha_z)\equiv -\sigma_y^T+\sigma_y^B$.
The spinors ${\bf a}_\pm$ (and also the total wave functions $\phi_\pm$) then fulfil $\kappa_z{\bf a}_\pm=\mp{\bf a}_\pm$.
This property is inherited from the isolated T,B film states. Because of the 1D character of step states it means the spin is always
locked perpendicular to $\bk=k_x\hat{{\bf x}}$, i.e. parallel to $y$. It is also useful to consider the expectation values of the spin ${\bm\sigma}$.
One finds $\langle {\bf a}_\pm|\sigma_y^T|{\bf a}_\pm\rangle=\pm\frac{1}{2}$ and $\langle {\bf a}_\pm|\sigma_y^B|{\bf a}_\pm\rangle=\mp\frac{1}{2}$. All other spin expectation values vanish. The eigenvectors ${\bf a}_\pm$ define the field operators $\Psi_\lambda(x,y)$ $(\lambda=\pm)$ of helical step states according to (now using $k=k_x$ for 1D states):
\begin{equation}
\begin{aligned}
\Psi_\lambda(x,y)
=&
\sum_k \phi_\lambda^\dag(k,y)\Psi_k
\\
=&
\sum_k\chi_{k\lambda}\frac{1}{\nu_0}
[e^{\kappa_Ly}\Theta(-y)+ e^{-\kappa_Ry}\Theta(y)]e^{ikx},
\end{aligned}
\end{equation}
where the quasiparticle operator algebra for the 1D helical step states is defined by $\chi_{k\lambda}={\bf a}^\dag_\lambda\Psi_k$ which is explicitly
given by (using the abbreviation $\tau=\tau_{RL}=sign(\hat{t}_R-\hat{t}_L)$ of Eq.~(\ref{eqn:1Dvector})):
\begin{equation}
\begin{aligned}
\chi_{k+}
=&\frac{1}{2}(c^T_{k\ua}-ic^T_{k\downarrow}-\tau c^B_{k\ua}-i\tau c^B_{k\downarrow});
\\
\chi_{k-}
=&\frac{1}{2}(c^T_{k\ua}+ic^T_{k\downarrow}+\tau c^B_{k\ua}-i\tau c^B_{k\downarrow}).
\label{eqn:1Dquasi}
\end{aligned}
\end{equation}
They fulfil the canonical anti-commutation relations $\{\chi_{k\lambda},\chi_{k'\lambda'}^\dag\}=\delta_{kk'}\delta_{\lambda\lambda'}$. In terms of these 1D quasiparticle operators the 1D step state Hamiltonian may be written as (cf. Eq.~(\ref{eqn:E1D}))
\begin{equation}
H_{ST}=\sum_{k\lambda}E_{k\lambda}\chi_{k\lambda}^\dag\chi_{k\lambda}.
\label{eqn:1DHam}
\end{equation}
These step states form the basis to construct Majorana end states through the proximity effect originating from the superconducting substrate. Before this, however, we consider the situation for a more realistic step profile.
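The helicity and spin expectation values quoted in this subsection can be confirmed by elementary matrix arithmetic; the following snippet (our illustration, for $\tau_{RL}=+1$) performs this check:
\begin{verbatim}
# Verify kappa_z a_+ = -a_+, kappa_z a_- = +a_-, and the spin expectation
# values <sigma_y^T> = +/-1/2, <sigma_y^B> = -/+1/2 (here tau_RL = +1).
import numpy as np

tau = 1
a_p = 0.5 * np.array([1, 1j, -tau, 1j*tau])     # a_+ of Eq. (1Dvector)
a_m = 0.5 * np.array([1, -1j, tau, 1j*tau])     # a_-

sy = np.array([[0, -1j], [1j, 0]])
Z = np.zeros((2, 2))
syT = np.block([[sy, Z], [Z, Z]])               # sigma_y on the T surface
syB = np.block([[Z, Z], [Z, sy]])               # sigma_y on the B surface
kz = -syT + syB                                 # helicity operator

for sgn, a in ((-1, a_p), (+1, a_m)):
    print(np.allclose(kz @ a, sgn * a),         # helicity eigenvalue
          (a.conj() @ syT @ a).real,            # -> +0.5 / -0.5
          (a.conj() @ syB @ a).real)            # -> -0.5 / +0.5
\end{verbatim}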
\subsection{Extension to bound states for soft steps}
\label{subsect:softstep}
The existence of the 1D step states is not tied to having a sharp step. A softer profile serves the same purpose,
for example replacing Eq.~(\ref{eqn:sharps}) with a soft step of width $w$:
\begin{eqnarray}
\begin{aligned}
\hat{t}(y)=\frac{1}{2}(\hat{t}_R+\hat{t}_L)+\frac{1}{2}(\hat{t}_R-\hat{t}_L)\tanh\frac{y}{w}
\;\rightarrow\;
\left\{
\begin{array}{rl}
&\hat{t}_R; \;\;\; y\gg w \\
&\hat{t}_L; \;\;\; y\ll -w
\end{array}
\right.
\label{eqn:softstep}
\label{eqn:softs}
\end{aligned}
\end{eqnarray}
To solve the wave equation Eq.~(\ref{eqn:envelope}) we now make a smooth envelope function ansatz instead of separating
between the L,R regimes (Eq.~(\ref{eqn:wavedef})). It may be written as
\begin{equation}
\phi_\lambda(k_x,y)=\frac{1}{\nu}{\bf a}_\lambda e^{ik_xx}f(y);
\label{eqn:softwave}
\end{equation}
with
\begin{equation}
\begin{aligned}
f(y)
=
&
\exp
\Big[
-\tau_{RL}\int_0^y\hat{t}(y')dy'
\Big]
\\=&
\exp
\Big[
-\tau_{RL}
\Big(
\bar{\hat{t}}y+\hat{t}'w\ln\cosh\frac{y}{w}
\Big)
\Big].
\label{eqn:softenv}
\end{aligned}
\end{equation}
The form of the spinors ${\bf a}_\lambda$ in Eq.~(\ref{eqn:softwave}) and of the dispersions is the same as for the model with a sharp step.
Here we defined the symmetrised expressions $\bar{\hat{t}}=\frac{1}{2}(\hat{t}_R+\hat{t}_L)$ and $\hat{t}'=\frac{1}{2}(\hat{t}_R-\hat{t}_L)$.
A comparison of step and envelope functions for various widths $w$ is presented in Fig.~\ref{fig:envelope}(b).
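A minimal numerical sketch of the envelope Eq.~(\ref{eqn:softenv}) (our illustration, with assumed values for $\hat{t}_{L,R}$ and $w$ in natural units $v=1$) confirms the decay on both sides of the soft step:
\begin{verbatim}
# Soft-step envelope f(y) of Eq. (softenv), case t_L < 0 < t_R (tau = +1);
# illustrative values in natural units (v = 1, lengths in QL).
import numpy as np

tL, tR, w = -0.12, 0.06, 2.0
tbar, tp = 0.5*(tR + tL), 0.5*(tR - tL)
tau = np.sign(tR - tL)

def f(y):
    return np.exp(-tau * (tbar*y + tp*w*np.log(np.cosh(y/w))))

for y in (-10.0, -2.0, 0.0, 2.0, 10.0):
    print(y, float(f(y)))               # decays on both sides of the step
\end{verbatim}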
\subsection{Influence of the 2D warping term}
\label{subsect:warping}
Our investigation of step states is based on the underlying assumption that the influence of higher-order warping terms
for the 2D surface states can be neglected in the 1D step state formation. Firstly, it is well known how to include them in the surface state formation of thin films, where they modify the isotropic Dirac cones and circular Fermi surface into cones and a Fermi surface with six-pronged `snowflake' shape. A complete theory for homogeneous thin films, including the lowest-order Dirac term (Eq.~(\ref{eqn:hamat})), the warping term and the inter-surface hybridisation on the same footing, has been developed in Ref.~\cite{thalmeier:20}, leading to gapped and warped Dirac cones for the 2D thin film quasiparticles. The remaining question of importance here is how the warping will influence the formation of the 1D step states. We may understand this in a straightforward way by treating the warping as a perturbation (which vanishes for $k_x\rightarrow 0$). In the inhomogeneous film geometry the warping term in spin-surface representation is given by
\begin{eqnarray}
\begin{aligned}
\hat{h}^w(k_x,y)
=&
\lambda D_y(k_x)\sigma_z\alpha_0;
\\
D_y(k_x)=&k_x(k_x^2+3\partial_y^2),
\end{aligned}
\end{eqnarray}
which has to be added to the unperturbed Hamiltonian in Eq.~(\ref{eqn:hamstep}). Here $\lambda$ is the warping parameter (for realistic values in TI see Ref.~\cite{thalmeier:20}). Then, using the 1D step wave functions of Eq.~(\ref{eqn:stepstate}) the correction to the 1D step state dispersion Eq.~(\ref{eqn:E1D}) is given
by
\begin{eqnarray}
\begin{aligned}
\delta E_{k_x\pm}=\int dy \phi^\dag_\pm(k_x,y) \hat{h}^w(k_x,y) \phi_\pm(k_x,y).
\end{aligned}
\end{eqnarray}
Writing the four-component eigenvectors ${\bf a}_\pm$ in Eq.~(\ref{eqn:1Dvector}) in terms of their obvious two-component parts
${\bf a}_\pm^{tr}=({\bm\alpha}^{tr}_\pm,{\bm\beta}^{tr}_\pm)$, we find that the expectation values ${\bm\alpha}^\dag_\pm\sigma_z{\bm\alpha}_\pm=0$ and
${\bm\beta}^\dag_\pm\sigma_z{\bm\beta}_\pm=0$ vanish, and therefore so does the warping correction $\delta E_{k_x\pm}$ to the 1D step state dispersion. We conclude that to first order in the warping scale $\lambda$ the 1D step state Hamiltonian Eq.~(\ref{eqn:1DHam}) is unchanged; therefore we can expect that the possible appearance of Majorana end states in the SC case, as discussed in the next section, is also unaffected by the warping term in this order, for all momenta $k=k_x$. Furthermore, in the limit $k\rightarrow 0$ the warping perturbation effect vanishes intrinsically.
\section{Majorana zero modes in the SC proximity induced gap of 1D step states}
\label{sect:majorana}
In each of the two 1D bands of quasiparticles confined to the step the helical spin locking is protected since
it is inherited from the 2D topological surface states. Therefore they may be considered as spin-locked
1D excitations. If they open a superconducting gap originating from the proximity effect of a superconducting substrate, it is natural to expect \cite{alicea:12,beenakker:13,elliott:15,sato:16,chamon:10,marra:21,laubscher:21} the possible creation of Majorana zero modes (MZM) at the ends of the step line, which would lead to a zero-bias conductance peak in transport and tunneling experiments \cite{das:12,jeon:17,aguado:20}.
Due to the locking of opposite spins in the helical state, an underlying conventional singlet or s-wave superconductor with gap $\Delta_0$ may be used instead of the difficult-to-realise p-wave superconductor necessary in spinless models \cite{alicea:12,cook:11,laubscher:21}. The topological state of a general 1D superconducting fermionic system is given by a topological invariant \cite{kitaev:01,elliott:15} $\cal{M}$$=(-1)^\nu$, where $\nu$ is the number of Fermi points, including band degeneracy, in one half of the BZ for the normal state. For the considered model (Eq.~(\ref{eqn:E1D})) with one nondegenerate band we have $\nu=1$, and then $\cal{M}$$=-1$ characterises a nontrivial topology of the superconducting state which may host Majorana zero modes as end states of the step.
They are described by zero-energy solutions of Bogoliubov-deGennes (BdG) equations
inside the SC gap and are characterised by quasiparticle operators which are identical to their conjugates. Previous scenarios to create Majorana states have mostly involved TI wires \cite{cook:11,cook:12} on top of an s-wave SC, with a flux passing through them to obtain the constituent 1D excitations from which Majorana states are formed. The present proposal is comparatively simple; it just takes a stepped interface between an s-wave SC and a TI, with the step playing the role of the flux-penetrated wire.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{envelope}
\end{center}
\vspace{-0.5cm}
\caption{Step state envelope function $f(y)$ (Eq.~(\ref{eqn:softenv})). (a) For different thicknesses $d_R,d_L$ and a sharp
step ($w=0$). In this case the ordinate is also equal to $|\phi_\pm(k_x,y)|/|\phi_\pm(k_x,0)|$ (Eq.~(\ref{eqn:stepstate})). The
pair $(d_R,d_L)=(2,3)$ is not far from the symmetric step state. (b) For a fixed thickness pair $(2,5)$ but different step
widths, tuned by $w$ [QL].
}
\label{fig:envelope}
\end{figure}
\subsection{Model for 1D BdG Hamiltonian of step states}
Due to the proximity effect the spin-singlet pairs of the substrate can propagate a certain distance into the normal state of the TI, governed by the coherence length of the latter \cite{deGennes:66}. In the pure normal state it is given by $\xi_n=(\hbar v/2k_BT)$, which becomes large for low temperatures, so that one may expect a still sizeable s-wave gap $\Delta_0'< \Delta_0$ on the T,B surfaces of the topological insulator thin film. In order to formulate the effective
BCS Hamiltonian for the 1D step states, however, we have to be aware that the proximity effect works on all 2D TI surface states.
Therefore we first express the 1D Cooper pair operators formed from Eq.~(\ref{eqn:1Dquasi}) in the 2D operator basis. We must keep in mind that the two $(\lambda =\pm)$ 1D dispersions fulfil $E_{-k\lambda}=-E_{k\lambda}=E_{k-\lambda}$; therefore only {\it inter-band} pairing of states with opposite momenta and approximately equal energy is possible. For those pairs we have:
\begin{equation}
\chi_{k+}\chi_{-k-}=\frac{i}{4}\sum_{\alpha=T,B}(c^\alpha_{k\ua}c^\alpha_{-k\downarrow}-c^\alpha_{k\downarrow}c^\alpha_{-k\ua})
=-\chi_{k-}\chi_{-k+}
\label{eqn:interpair}
\end{equation}
This means that the inter-band pairing of spin-locked 1D quasiparticles naturally results from the proximity-induced spin-singlet pairing of the 2D surface states. Hereby we neglected on the right-hand side i) triplet terms with equal amplitude, since they cannot be induced by the s-wave substrate, and ii) inter-(T,B)-surface pairing of 2D helical states, which is difficult to justify on the basis of the proximity effect.
Then the pair amplitude is given by
\begin{equation}
\begin{aligned}
\langle \chi_{k+}\chi_{-k-}\rangle
=&
-\langle\chi_{k-}\chi_{-k+}\rangle
\\=&
\frac{i}{4}\sum_{\alpha=T,B}\langle c^\alpha_{k\ua}c^\alpha_{-k\downarrow}-c^\alpha_{k\downarrow}c^\alpha_{-k\ua}\rangle
\\
\sim&
\frac{i}{4}(\Delta_T+\Delta_B)=i\Delta_0',
\end{aligned}
\end{equation}
where $\Delta_{T,B}$ are the s-wave order parameters on the two TI surfaces introduced by the proximity effect; if $\xi_n\gg d$ they may be almost equal. The proximity-induced superconducting pair potential for the 1D step states is then
\begin{equation}
\begin{aligned}
{\cal H}_{SC}
=
&\,
i\Delta_0'[\chi_{k+}\chi_{-k-}-\chi_{k-}\chi_{-k+}]
\\
&
-i\Delta_0^{'*}[\chi_{-k-}^\dag\chi_{k+}^\dag -\chi_{-k+}^\dag\chi_{k-}^\dag]
.
\end{aligned}
\end{equation}
Introducing the Nambu spinors for the two 1D bands according to ${\bm\chi}_k^{tr}=(\chi_{k+},\chi_{k-},\chi_{-k+}^\dag,\chi_{-k-}^\dag)$ and adding the normal quasiparticle part in Eq.~(\ref{eqn:1DHam}) we obtain for the total 1D step state BCS Hamiltonian
\begin{equation}
\begin{aligned}
&
{\cal H}=
{\cal H}_{ST}+{\cal H}_{SC}
=
\sum_k{\bm\chi}_k^\dag \tilde{h}_k{\bm\chi}_k;\;\;\;
\\
&
\tilde{h}_k=(vk - \mu)\lambda_z\tau_0 -\lambda_y( {\rm Re} \Delta'_0\tau_x+ {\rm Im} \Delta'_0\tau_y).
\end{aligned}
\end{equation}
Here the $\lambda$- and $\tau$ Pauli matrices act in the space of 1D bands $(\pm)$ and in Nambu particle-hole space, respectively. For simplicity we first set $\mu=0$, i.e. the chemical potential lies at the Dirac point in Fig.~\ref{fig:gapping}; the case of general $\mu$ is treated at the end of this section. The explicit matrix form of $\tilde{h}_\bk$ is shown below. The 1D step quasiparticle energies in the superconducting state are then given by
\begin{equation}
\tilde{E}_k=\pm[(vk)^2+|\Delta'_0|^2]^\frac{1}{2},
\end{equation}
where generally $|\Delta'_0|<|t_L|,|t_R|$, i.e. the SC gap is inside the larger 2D hybridisation gap, except when $d$ is close to nodal points of $t_{L,R}(d)$ (Fig.~\ref{fig:hybridisation}a).
\subsection{MZM in-gap states at the step ends}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\columnwidth]{majorana}
\end{center}
\vspace{-0.5cm}
\caption{Density profile $p(x,y)$ of Majorana end states (superposition of $u,d$) on the top (T) surface plane for the cases $(d_L,d_R)=(2,3)$ and $(2,5)$ and for two values of the chemical potential $\mu$. Asymmetry and degree of localisation change notably between the different pairs.
For nonzero $\mu$, MZM oscillations appear according to Eq.~(\ref{eqn:majosc}).
Here we set the SC gap size $|\Delta_0'|=0.05\,[E^*]$, $\mu= 0.25\,[E^*]$ in the lower panel, and the step length $2L_0=80\,[\mbox{QL}]$. For presentation the latter is chosen artificially small.}
\label{fig:majorana}
\end{figure}
Now we will search for zero-energy solutions of the BdG Hamiltonian inside the SC gap whose wave functions are located within the step length $-L_0\leq x \leq L_0$, where $\Delta'_0$ is finite.
Replacing $k\rightarrow -i\partial_x$ in the above Hamiltonian, we obtain the equation
\begin{equation}
\bigl[v\lambda_z\tau_0\partial_x -i\lambda_y({\rm Re}\, \Delta'_0\tau_x+ {\rm Im}\, \Delta'_0\tau_y)\bigr]\tilde{{\bm\phi}}(x)=0
.
\label{eqn:BdGx}
\end{equation}
In the space of four dimensional Nambu spinors ${\bm\chi}_k$ the real space BdG Hamiltonian of Eq.~(\ref{eqn:BdGx}) is represented as the matrix
\begin{equation}
\begin{aligned}
\tilde{h}(x)=
\left(
\begin{array}{cccc}
-iv\partial_x& 0&0&i\Delta_0^{'*}\\
0 & iv\partial_x&-i\Delta_0^{'*}&0\\
0&i\Delta'_0&-iv\partial_x&0\\
-i\Delta'_0&0& 0&iv\partial_x
\end{array}
\right).
\end{aligned}
\end{equation}
Aside from a relative sign, this matrix consists of two identical blocks, due to the relation between the inter-band pairings given in Eq.~(\ref{eqn:interpair}) and the neglect of intra-band pairs.
Using this matrix form, which factorises into two blocks, the BdG equation (Eq.~(\ref{eqn:BdGx})) may be solved by the ansatz of Eqs.~(\ref{eqn:xansatz},\ref{eqn:endstate}), assuming $\Delta'_0(x)=\Delta'_0\Theta(L_0+x)\Theta(L_0-x)$, i.e. constant $\Delta'_0$ within the sample. The two solutions for each block decay exponentially along either $x$ or $-x$. This means they will be
located at one of the ends of the step, at $-L_0$ or $L_0$, which we call $d$ (down) and $u$ (up), respectively (Fig.~\ref{fig:stepscheme}). For the two blocks $A,B$ we arrive at the zero-energy BdG wave functions $(\nu=u,d)$
\begin{equation}
\begin{aligned}
\tilde{{\bm\phi}}_{A}^\nu(x)
=&\,
{\bf a}^\nu_{A}w_\nu(x);
\;\;\;
\tilde{{\bm\phi}}_{B}^\nu(x)={\bf a}^\nu_{B}w_\nu(x);
\\
w_d(x)=
&\,
Ce^{-\lambda(L_0+x)};
\;\;\;
w_u(x)=Ce^{-\lambda(L_0-x)},
\label{eqn:xansatz}
\end{aligned}
\end{equation}
where $C=[2\lambda/(1-\exp(-4\lambda L_0))]^{\frac{1}{2}}$ normalises the states on $[-L_0,L_0]$. The decay length $\lambda^{-1}$ of the end states is given by $\lambda=\frac{|\Delta'_0|}{v}$, which may be written as $\lambda=\frac{1}{\pi}\frac{|\Delta'_0|}{\Delta_0}\frac{v^s_F}{v}\xi^{-1}$, i.e. it is proportional to the inverse BCS coherence length $\xi$ of the SC substrate. Here the first factor describes the gap reduction due to the proximity effect and the second the ratio of the Fermi velocities in the substrate $(v^s_F)$ and in the TI $(v)$. Furthermore, the amplitude vectors of the wave functions are
\begin{equation}
\begin{aligned}
({\bf a}^u_A)^{tr}&=\frac{1}{\sqrt{2}}(1,0,0,1);\;\;
({\bf a}^d_A)^{tr}=\frac{1}{i\sqrt{2}}(1,0,0,-1),
\\
({\bf a}^u_B)^{tr}&=\frac{1}{\sqrt{2}}(0,1,1,0);\;\;
({\bf a}^d_B)^{tr}=\frac{1}{i\sqrt{2}}(0,1,-1,0).
\label{eqn:endstate}
\end{aligned}
\end{equation}
The complete zero-energy wave functions in the SC gap, including the $y$-dependence from Eq.~(\ref{eqn:stepstate}), are then
given by
\begin{eqnarray}
\label{eqn:Mwave}
\tilde{{\bm\phi}}_{A,B}^\nu(x,y)&=&
{\bf a}^\nu_{A,B}w_\nu(x,y)\\
w_\nu(x,y)&=&w_\nu(x)
\frac{1}{\nu_0}[e^{\kappa_Ly}\Theta(-y)+ e^{-\kappa_Ry}\Theta(y)],\nonumber
\end{eqnarray}
where $-L_0\leq x,y\leq L_0$.
The wave functions for the $A,B$ blocks are related by a permutation of the two 1D band states according to $P\tilde{{\bm\phi}}^\nu_{A,B}=\tilde{{\bm\phi}}^\nu_{B,A}$, defined by the permutation operator $P=\lambda_x\tau_0$, which is a symmetry of the Hamiltonian. Therefore the wave functions of the zero-energy states should also be symmetrised by taking the combination $\tilde{{\bm\phi}}^\nu_{A,B}+P\tilde{{\bm\phi}}^\nu_{A,B}$, meaning
\begin{equation}
\begin{aligned}
\tilde{{\bm\phi}}_\nu(x,y) = \tilde{{\bm\phi}}^\nu_A(x,y)+\tilde{{\bm\phi}}^\nu_B(x,y)
\;\;\mbox{and}\;\;
{\bf a}_\nu={\bf a}_A^\nu+{\bf a}_B^\nu
.
\label{eqn:Msymm}
\end{aligned}
\end{equation}
For these symmetrised zero energy wave functions we may construct their corresponding field operators according to
\begin{equation}
\begin{aligned}
\Psi_u
&=\!
\int dx\tilde{{\bm\phi}}^\dag_u(x){\bm\chi}_x
=\!
\int dxw_u(x){\bf a}_u^\dag{\bm\chi}_x
=\!
\int dx w_u(x)\gamma_1,
\\
\Psi_d
&=\!
\int dx\tilde{{\bm\phi}}^\dag_d(x){\bm\chi}_x
=\!
\int dxw_d(x){\bf a}_d^\dag{\bm\chi}_x
=\!
\int dx w_d(x)\gamma_2,
\label{eqn:Mop}
\end{aligned}
\end{equation}
where we used the real space representation ${\bm\chi}_x=\int dke^{-ikx}{\bm\chi}_k$. These operators satisfy the
reality condition $\Psi_u^\dag=\Psi_u$ and $\Psi_d^\dag=\Psi_d$ and
therefore represent two Majorana zero modes separated at the two ends $(d,u)$ of the step length.
Their associated quasiparticle operators $\gamma_1$, $\gamma_2$ corresponding to zero energy states
with BdG wave functions $\tilde{\phi}_\nu(x,y)$ in Eqs.~(\ref{eqn:Mwave},\ref{eqn:Msymm}) are then obtained as
\begin{equation}
\begin{aligned}
\gamma_1
&=
{\bf a}_u^\dag{\bm\chi}_x=\frac{1}{\sqrt{2}}(\chi_{x+}+\chi_{x-}+\chi_{x+}^\dag+\chi_{x-}^\dag);
\\
\gamma_2
&=
{\bf a}_d^\dag{\bm\chi}_x=\frac{i}{\sqrt{2}}(\chi_{x+}+\chi_{x-}-\chi_{x+}^\dag-\chi_{x-}^\dag),
\label{eqn:majo1}
\end{aligned}
\end{equation}
which fulfil the reality condition $\gamma_i^\dag=\gamma_i \;\;(i=1,2)$. These quasiparticle operators for the MZM satisfy the canonical Majorana anti-commutation rules $\{\gamma_{1},\gamma_{1}\}=\{\gamma_{2},\gamma_{2}\}=2$ and $\{\gamma_{1},\gamma_{2}\}=0$.
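These algebraic relations may also be checked by brute force (our illustration), representing the two modes $\chi_{x\pm}$ as fermions on a small Fock space:
\begin{verbatim}
# Matrix check of the Majorana algebra: represent chi_+ and chi_- as two
# fermion modes via a Jordan-Wigner construction and test gamma_i^2 = 1
# (i.e. {gamma_i, gamma_i} = 2) and {gamma_1, gamma_2} = 0.
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)  # single-mode annihilator
sz = np.diag([1., -1.]).astype(complex)
I2 = np.eye(2, dtype=complex)

chi_p, chi_m = np.kron(sm, I2), np.kron(sz, sm)
g1 = (chi_p + chi_m + chi_p.conj().T + chi_m.conj().T) / np.sqrt(2)
g2 = 1j*(chi_p + chi_m - chi_p.conj().T - chi_m.conj().T) / np.sqrt(2)

print(np.allclose(g1 @ g1, np.eye(4)),
      np.allclose(g2 @ g2, np.eye(4)),
      np.allclose(g1 @ g2 + g2 @ g1, 0))        # -> True True True
\end{verbatim}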
The real-space density profile $p(x,y)=|w_u(x,y)+w_d(x,y)|^2$
of these SC in-gap zero-energy states is governed by three (inverse) length scales: i) perpendicular to the step by $\kappa_L(d_L)=sign(t_L)t_L(d_L)/v$ and $\kappa_R(d_R)=sign(t_R)t_R(d_R)/v$ on the left and right, and ii) parallel to the step by $\lambda=|\Delta'_0|/v$. This means that the Majorana density profile can be quite anisotropic and change rapidly as a function of the relative
TI film thicknesses $d_L$, $d_R$ on both sides of the step. In the general case, when both $t_L(d_L)$ and $t_R(d_R)$ are not located close to the zeroes of the oscillatory function in Eq.~(\ref{eqn:tosc}), one has $|\Delta'_0| < |t_L|, |t_R|$ and therefore the Majorana profile is concentrated at the step, with a more gradual decay along the step direction $x$ (Fig.~\ref{fig:majorana}).
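For orientation, the $\mu=0$ profile can be generated with a short script (our illustration; the values of $\kappa_{L,R}$ and $\lambda$ are assumed for demonstration):
\begin{verbatim}
# Unnormalised MZM density profile p(x,y) for mu = 0; kappa_L, kappa_R and
# lambda are illustrative values in natural units (v = 1, lengths in QL).
import numpy as np

kL, kR, lam, L0 = 0.10, 0.08, 0.05, 40.0
x = np.linspace(-L0, L0, 201)
y = np.linspace(-L0, L0, 201)
X, Y = np.meshgrid(x, y)

env_y = np.where(Y < 0, np.exp(kL*Y), np.exp(-kR*Y))  # across the step
w_u = np.exp(-lam*(L0 - X)) * env_y                   # end state at x = +L0
w_d = np.exp(-lam*(L0 + X)) * env_y                   # end state at x = -L0
p = np.abs(w_u + w_d)**2
print(p.shape, float(p.max()))
\end{verbatim}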
Finally, we briefly give the results for the case of a general position of the chemical potential $\mu\neq 0$ inside the hybridisation gap (Fig.~\ref{fig:gapping}(c)), cutting the 1D dispersions at the finite wave vector $k_c=\mu/v$ once in the positive half of the BZ. In the SC state this leads to the quasiparticle bands
\begin{eqnarray}
\begin{aligned}
\tilde{E}^\pm_{k1,2}=\pm[(vk\pm\mu)^2+|\Delta'_0|^2]^\frac{1}{2},
\end{aligned}
\end{eqnarray}
which now have two branches $(1,2)$ emerging from $k_{1,2}=\pm k_c$ and a SC gap $2|\Delta'_0|$, independent of the position of the chemical potential inside the hybridisation gap. The solutions for the MZM end states may be found in analogy to the above derivation. The essential difference lies in the MZM wave functions, now given by
\begin{eqnarray}
\begin{aligned}
w_u(x)=&Ce^{\lambda_1(x-L_0)}e^{i\lambda_2x};
\\
w_d(x)=&Ce^{-\lambda_1(x+L_0)}e^{i\lambda_2x},
\label{eqn:majosc}
\end{aligned}
\end{eqnarray}
where $\lambda_1=|\Delta'_0|/v$ and $\lambda_2=\mu/v$. Thus, in addition to the exponential decay from
the ends there is an oscillation of MZM form factors which becomes faster than the decay for $|\mu|>|\Delta'_0|$.
Analogously to Eq.~(\ref{eqn:Mop}), the self-adjoint field operators of the MZM can be written as
\begin{eqnarray}
\begin{aligned}
\Psi_u=&\frac{1}{2L_0}\int dx[w'_u(x)\gamma_1+w''_u(x)\tilde{\gamma}_1];
\\
\Psi_d=&\frac{1}{2L_0}\int dx[w'_d(x)\gamma_2+w''_d(x)\tilde{\gamma}_2],
\end{aligned}
\end{eqnarray}
where we split $w_{u,d}(x)=w'_{u,d}(x)+iw''_{u,d}(x)$ into real and imaginary parts. We now have an additional Majorana pair $(\tilde{\gamma}_1,\tilde{\gamma}_2)$ complementing Eq.~(\ref{eqn:majo1}), defined by the quasiparticle operators
\begin{equation}
\begin{aligned}
\tilde{\gamma}_1
&=
\frac{1}{i\sqrt{2}}(\chi_{x+}-\chi_{x-}-\chi_{x+}^\dag+\chi_{x-}^\dag);
\\
\tilde{\gamma}_2
&=
\frac{1}{\sqrt{2}}(\chi_{x+}-\chi_{x-}+\chi_{x+}^\dag-\chi_{x-}^\dag).
\end{aligned}
\end{equation}
The doubling is related to the existence of two Bogoliubov excitation bands for $\mu\neq 0$, arising from the $k_{1,2}=\pm k_c$ points. Again the Majorana anti-commutation rules $\{\tilde{\gamma}_{1},\tilde{\gamma}_{1}\}=\{\tilde{\gamma}_{2},\tilde{\gamma}_{2}\}=2$ and $\{\tilde{\gamma}_{1},\tilde{\gamma}_{2}\}=0$ are fulfilled; furthermore we have $\{\gamma_{1},\tilde{\gamma}_{1}\} = \{\gamma_{2},\tilde{\gamma}_{2}\} =0$. The density profiles are $p(x,y) =|w'_u(x,y)+w'_d(x,y)|^2$ and $\tilde{p}(x,y)=|w''_u(x,y)+w''_d(x,y)|^2$, with $w_\nu(x,y)=w_\nu(x)\frac{1}{\nu_0}[e^{\kappa_Ly}\Theta(-y)+ e^{-\kappa_Ry}\Theta(y)]$ as in Eq.~(\ref{eqn:Mwave}). An example of the MZM profile for nonzero $\mu$ is shown in the bottom panel of Fig.~\ref{fig:majorana}, which exhibits the additional oscillatory behaviour.
\section{Conclusion and outlook}
In this work we have shown that the helical surface states in thin films of topological insulators may be manipulated in an
interesting and promising way. It has previously been theoretically derived on general grounds that in thin films the surface
states exhibit hybridisation due to inter-surface wave function overlap. This leads to a hybridisation energy that both decays
exponentially and oscillates with film thickness depending on materials parameters and simultaneously leads to a gapping
of the 2D helical surface states.
This can be exploited in a simple
way by profiling the film thickness in a suitable manner, for example by introducing a step in film thickness via the substrate
such that the inter-surface hybridisation has opposite signs on the two sides of the step. This leads to the appearance
of a novel type of non-degenerate 1D helical state confined spatially to the step, with linear dispersion and again helical spin locking.
These states are protected by the sign change of the hybridisation and their decay perpendicular to the step is controlled by
the modulus of the hybridisation energy. It should be possible to investigate the existence of these 1D step states and their dispersion
by STM spectroscopy. Before this, however, it would be useful to check by STM and ARPES experiments whether homogeneous thin films indeed exhibit the theoretically predicted and prerequisite oscillations in hybridisation energy. These should immediately translate into (rectified) oscillations of the gap size of the coupled surface states. While Bi$_2$Se$_3$~does not seem to exhibit such oscillations, the theoretical band parameters for Bi$_2$Te$_3$~and Sb$_2$Te$_3$~are apparently more favourable for their appearance.
If these conjectures can be experimentally verified in some cases, another highly attractive possibility opens up: when the substrate becomes a simple s-wave superconductor, the proximity effect leads to induced gapping of the 1D helical step states. Since the latter are nondegenerate fermions with spin locking, this can create Majorana-type zero-energy modes inside the proximity-induced gap, which are localised at the ends of the step where the gap drops to zero. Their inverse localisation lengths along and perpendicular to the step are proportional to the SC gap energy and to the left/right inter-surface hybridisation energies, respectively. These Majorana end states should, like the 1D step states themselves, be observable with STM spectroscopy. The presently proposed scenario for creating Majorana states is exceedingly
simple, requiring only a suitably profiled (s-wave superconducting) substrate to create the 1D step states. Since the latter are
already nondegenerate by their helical spin-locked nature, one should not need additional arrangements like applied magnetic fields
or additional ferromagnetic layers, which have been discussed before in the wire-type geometries for creating Majorana states.
The present simple geometry may be replaced by more elaborate ones. For example, the step need not extend over the whole width $[-L_0,L_0]$ of the sample but may be present only for $|x|\leq x_0 <L_0$. When $|x|$ approaches $x_0$, the thicknesses $d_R,d_L$ may be gradually changed on both sides of the step to a common $d_0$ which satisfies $t_R(d_0)=t_L(d_0)=0$. Then the 1D helical step state will also cease to exist at the $\pm x_0$ positions within the thin-film area. This would presumably simplify investigations by the STM method and suppress unwanted effects from the sample ends. Another extension would be a regular array of steps a certain distance $y_0$ apart, leading to a sign change of $t(y)$ with period $2y_0$.
This configuration would be suitable for studying the effects of 1D step state overlap. Finally, one might form a ring-like step, i.e. a quantum dot, with the proper thickness inside and outside the ring to support a 1D ring state.
This should lead to a discretisation of the linear dispersion of the step states, depending on the diameter of the ring and the thickness variation characteristics.
It is therefore certainly worthwhile to study these configurations and the possible in-gap (hybridisation, superconducting) states and their physical consequences further.
\section*{Acknowledgments}
A.~A. acknowledges the support
of the Max Planck- POSTECH-Hsinchu Center for Complex Phase
Materials, and financial support from the National Research
Foundation (NRF) funded by the Ministry of Science of Korea (Grant
No. 2016K1A4A01922028).
\section{Introduction}
\label{sec:intro}
Through the energetic feedback of their intense radiation fields and powerful relativistic jets, active galactic nuclei (AGN) are thought to have a significant influence on the evolution of their host galaxies \citep[see][for reviews]{cat09,fab12,veil20}. These feedback effects are regularly incorporated into semi-analytic models and hydrodynamical simulations of galaxy evolution as a means of regulating and suppressing star formation, from the scales of galaxy nuclei to the circumgalactic environment \citep[][]{sd15}.
The jets of radio AGN are thought to be particularly important in the feedback context. In galaxy halos and dense large-scale environments (groups and clusters; on scales $\sim$10 kpc -- 1 Mpc), they are thought to regulate the cooling of hot gas, preventing the build up of mass in the most massive galaxies and explaining the decline at the upper end of observed galaxy luminosity and stellar mass functions \citep[][]{bow06,cro06,cro16,vog14}. This type of feedback, often referred to as ``jet mode" or ``maintenance mode", is usually linked with AGN fuelled by low Eddington rate, radiatively-inefficient supermassive black hole (SMBH) accretion -- low-excitation radio galaxies (LERGs), identified through their weak optical emission-line spectra \citep[e.g.][]{but10,bh12,hb14}.
The alternative type of feedback in this widely used scheme is ``quasar mode" or ``radiative mode", which is linked with AGN-driven outflows that influence star formation on the scales of galaxy nuclei ($\sim$1--10 kpc). While this has traditionally been associated with radiatively-driven winds in luminous AGN \citep[e.g.][]{kp15}, there is strong observational evidence to suggest that jets are also capable of driving multiphase outflows \citep[e.g.][]{mor05,mor13,mor15,mah16,huse19a,oost19,sant20} or entraining molecular gas \citep[e.g.][]{mcn14,rus17,rus19} on these scales, even in quasar-like AGN \citep[][]{wm18,jar19,oost19}. Radio AGN that are fuelled by high Eddington rate, radiatively-efficient SMBH accretion -- high-excitation radio galaxies (HERGs) or jetted quasars, which exhibit strong optical emission-line spectra -- are therefore particularly interesting objects in the feedback scenario.
A key aspect for the correct implementation of radio AGN into models of galaxy evolution lies in determining how they are triggered.
Deep, ground-based optical imaging studies of powerful radio galaxies (L$_{\rm 1.4GHz}$ $\gtrsim 10^{25}$ W\,Hz$^{-1}$) have revealed high rates of morphological disturbance from galaxy mergers and interactions \citep{heck86,sh89a,sh89b,ram11}, events which efficiently transport material to galaxy centres and could provide the dominant trigger for the nuclear activity \citep[e.g.][]{bh96,gabor16}. The merger rates are particularly high for samples of radio AGN with strong optical emission lines, with 94$^{+2}_{-7}$ per cent\footnote{The uncertainties from our previous studies are here updated to binomial $1\sigma$ confidence intervals from the Bayesian technique of \cite{cam11}.} of strong-line radio galaxies (SLRGs\footnote{SLRGs and WLRGs are selected based on [OIII] emission-line equivalent width (EW$_{\rm [OIII]} > 10$\AA{} and EW$_{\rm [OIII]} < 10$\AA{}, respectively), but show a strong overlap with the HERG and LERG populations \citep[see discussion in][]{tad16}.}) in the 2Jy sample exhibiting clear tidal features \citep{ram11}. The 2Jy SLRGs also show an excess of high-surface-brightness tidal features relative to non-active elliptical galaxies matched to the targets in terms of absolute magnitude \citep[][]{ram12}, and are found to preferentially lie in group-like environments that are well suited to frequent galaxy interactions \citep{ram13}.
On the other hand, evidence for interactions in powerful radio galaxies with radiatively-inefficient AGN is much less frequent, with only 27$^{+16}_{-9}$ per cent of 2Jy weak-line radio galaxies (WLRGs$^2$) showing clear tidal features \citep{ram11}. In addition, analysis at both optical \citep[][]{ram13,sab13} and X-ray wavelengths \citep{ine13,ine15} shows that these objects preferentially lie in dense, cluster-like environments, where high relative galaxy velocities can reduce the merger rate \citep[][]{pb06}. It has been proposed that radiatively-inefficient nuclear activity is instead predominantly fuelled by the prevalent hot gas content that is present in these environments, through direct accretion \citep[e.g.][]{allen06,hard07}, cooling flows \citep[e.g.][]{tad89,baum92,best05b}, or the chaotic accretion of condensing cold gas \citep[][]{gas13,gas15,tremblay16}.
This current picture of the triggering and fuelling of radio AGN is, however, largely based on samples with the highest radio powers, despite the fact that local radio luminosity functions are found to increase steeply towards lower radio powers \citep[][]{ms07,bh12,sad14,sab19}. In addition, the subpopulation of active galaxies that have strong optical emission lines and intermediate radio powers (HERGs and jetted Type 2 quasars with 22.5 $<$ log(L$_{\rm 1.4GHz}$) $< 25.0$ W\,Hz$^{-1}$) show strong evidence for jet-driven multiphase outflows \citep[e.g.][]{tad14b,har15,vm17,jar19}. The broadest ionised gas emission-line profiles for radio AGN in the Sloan Digital Sky Survey (SDSS) are also found to be associated with those in the intermediate radio power regime \citep[][]{mul13}. Study of intermediate-radio-power, radiatively-efficient AGN is thus particularly important for improving our understanding of the role of radio AGN feedback in galaxy evolution.
In the first paper in this series (\citeauthor{pierce19} \citeyear{pierce19}; hereafter Paper I), we used detailed analysis of the optical morphologies of a sample of 30 local HERGs with intermediate radio powers ($z<0.1$; 22.5 $<$ log(L$_{\rm 1.4GHz}$) $< 24.0$ W\,Hz$^{-1}$) to investigate the importance of merger-based triggering in this subpopulation for the first time. When compared with the 2Jy HERGs and quasars, the rates of disturbance in the higher and lower radio power halves of the sample (67$^{+10}_{-14}$ per cent and 40$^{+13}_{-11}$ per cent, respectively) suggested that interactions have decreasing importance for triggering radio AGN towards lower radio powers. However, a relationship with decreasing optical emission-line luminosity could not be ruled out, and it is interesting to note that the hosts of Type 2 quasars with low-to-intermediate radio powers frequently show clear tidal features in deep, ground-based imaging observations \citep{bess12}. The radio-intermediate HERGs were also shown to exhibit a mixture of morphological types, consistent with the idea that the dominant hosts of radiatively-efficient radio AGN change from massive, early-type galaxies at high radio powers \citep[as shown by e.g.][]{mms64} to late-type galaxies at lower radio powers \citep[like Seyfert galaxies; e.g.][]{adams77}.
Here, we use an online interface to expand our morphological analysis to a much larger sample of 155 local active galaxies covering a broad range in both 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity \citep[a proxy for the total AGN power; e.g.][]{heck04}, allowing us to investigate the relationships suggested by the results of Paper I in more detail. The new sample includes high-radio-power 3CR HERGs and LERGs, radio-intermediate HERGs, and Type 2 quasar hosts, imaged using an identical observing setup at a consistent limiting surface brightness depth ($\mu_r$$\sim $27 mag\,arcsec$^{-2}$). The inclusion of both radiatively-efficient and -inefficient radio AGN in the 3CR sample also allows the apparent dichotomy in their dominant triggering and fuelling mechanisms at high radio powers to be tested. Crucially, our observations also allow us to select a large control sample of non-active galaxies matched in stellar mass and redshift from the wide image fields, the morphologies of which were classified blindly alongside those of the targets.
The paper is structured as follows. Details on the samples, observations and image reduction are provided in \S\ref{sec:samp_sel_obs_red}. The online interface used to obtain the morphological classifications is described in \S\ref{sec:methods}, along with the method used to select non-active control galaxies from the target image fields. \S\ref{sec:res} outlines the analysis of the morphological classifications and the subsequent results relating to the rates of disturbance (\S\ref{subsec:dist_rates}), the interaction/merger stage for disturbed galaxies (\S\ref{subsec:features_merger_stage}), and the galaxy morphological types (\S\ref{subsec:morph_types}). A discussion of both the classification methodology and the results is presented in \S\ref{sec:disc}. The study is then summarised in \S\ref{sec:summary}.
A cosmology described by $H_{0} = 73.0$ km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\rm m} = 0.27$ and $\Omega_{\rm \Lambda} = 0.73$ is assumed throughout this paper, for consistency with our previous work.
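For readers reproducing derived quantities such as luminosity distances, this cosmology can be expressed, for example, with astropy (our illustration):
\begin{verbatim}
# Adopted cosmology, e.g. for luminosity distances to the sample objects.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=73.0 * u.km / u.s / u.Mpc, Om0=0.27)
print(cosmo.luminosity_distance(0.1))   # distance to a z = 0.1 source
\end{verbatim}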
\section{Sample selection, observations and reduction}
\label{sec:samp_sel_obs_red}
The samples used in this study comprise powerful 3CR radio galaxies, radio-intermediate HERGs and the hosts of Type 2 quasars, which together encompass a broad range in both 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity. All objects were observed with the Wide-Field Camera (WFC) on the 2.5m Isaac Newton Telescope (INT) at the Observatorio del Roque de los Muchachos, La Palma, using a consistent observing technique. A summary of the redshift, stellar mass, 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity properties for each of the four active galaxy samples studied is provided in Table~\ref{tab:samples_summary}.
\begin{table}
\centering
\small
\caption{A summary of the redshift, stellar mass, 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity properties for the four active galaxy samples studied. The ranges, medians, and standard deviations are presented in the top, middle, and bottom rows, respectively.}
\label{tab:samples_summary}
\begin{tabular}{C{1.15cm}cccc}
\hline
& \makecell{RI-HERG\\low} & \makecell{RI-HERG\\high} & \makecell{3CR} & \makecell{Type 2\\quasars} \\ \hline
\multirow{3}{*}{\textit{z}} & 0.031-0.098 & 0.051-0.149 & 0.050-0.296 & 0.051-0.139 \\
 & 0.074 & 0.110 & 0.174 & 0.111 \\
 & 0.019 & 0.032 & 0.077 & 0.025 \\ \hline
\multirow{3}{*}{$\rm log(M_{*}/M_{\odot}$)} & 10.0-11.4 & 10.7-11.4 & 10.5-12.7 & 10.6-11.4 \\
& 10.7 & 11.2 & 11.4 & 11.0 \\
& 0.3 & 0.2 & 0.4 & 0.2 \\ \hline
\multirow{3}{*}{\makecell{log(L$_{\rm 1.4GHz}$)\\(W\,Hz$^{-1}$)}} & 22.52-23.98 & 24.01-24.88 & 24.67-28.15 & 22.47-26.22 \\
& 23.06 & 24.54 & 26.40 & 23.34 \\
& 0.39 & 0.24 & 0.58 & 0.86 \\ \hline
\multirow{3}{*}{\makecell{log(L$_{\rm[OIII]}$)\\(W)}} & 32.86-34.24 & 33.00-35.06 & 32.67-36.09 & 35.04-35.86 \\
& 33.56 & 33.98 & 34.64 & 35.19 \\
& 0.35 & 0.46 & 0.80 & 0.19 \\ \hline
\end{tabular}
\end{table}
\subsection{Radio-intermediate HERGs}
\label{subsec:RI-HERG_sel}
\begin{table*}
\centering
\caption{\small Basic information for the 28 targets in the RI-HERG high sample described in \S\ref{subsubsec:RI-HERG_high_sel}. Full table available as supplementary material.} \label{tab:RI-HERG_high_target_info}
\begin{tabular}{lcccccccc}
\hline
SDSS ID (Abbr.) & $z$ & \makecell{$A_{\rm r}$\\(mag)} & \makecell{SDSS \textit{r} mag\\(mag)} & \makecell{log(L$_{\rm 1.4GHz}$)\\(W\,Hz$^{-1}$)} & \makecell{log(L$_{\rm [OIII]}$)\\(W)} & \makecell{log(M$_{*}$)\\(M$_{\odot}$)} & Obs. date & \makecell{Seeing \small{FWHM}\\ (arcsec)}\\
\hline
J075244.19$+$455657.4 (J0752+45) & 0.052 & 0.23 & 14.24 & 24.52 & 33.68 & 11.3 & 2018-03-11 & 0.97 \\
J080601.51$+$190614.7 (J0806+19) & 0.098 & 0.11 & 15.46 & 24.55 & 33.93 & 11.4 & 2018-03-11 & 1.32 \\
J081755.21$+$312827.4 (J0817+31) & 0.124 & 0.12 & 16.81 & 24.39 & 34.26 & 11.0 & 2018-03-11 & 1.56 \\
J083655.86$+$053242.0 (J0836+05) & 0.099 & 0.10 & 15.49 & 24.09 & 33.78 & 11.3 & 2018-03-13 & 1.17 \\
J084002.36$+$294902.6 (J0840+29) & 0.065 & 0.15 & 14.66 & 24.73 & 34.33 & 11.2 & 2018-03-11 & 1.33 \\
... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline
\end{tabular}
\end{table*}
The radio-intermediate HERGs were selected from the catalogue of 18,286 local radio galaxies produced by \citet{bh12}. These were selected as radio-intermediate based on their 1.4 GHz radio powers\footnote{In the range 22.5 $<$ log(L$_{\rm 1.4GHz}$) $< 25.0$ W\,Hz$^{-1}$. Studies of local radio luminosity functions for AGN and star-forming galaxies show that the latter decline steeply above L$_{\rm 1.4 GHz} \sim 10^{23}$ W\,Hz$^{-1}$ \citep[e.g.][]{sad02,bh12}, but since all radio sources in the \cite{bh12} catalogue were additionally identified as AGN using their optical properties, a lower radio luminosity limit could be adopted for the selection without significant contamination from star-forming galaxies.}, but were separated into two samples covering different ranges for the analysis in this paper. The optical spectroscopic data were extracted from the value-added catalogue maintained by the Max Planck Institute for Astrophysics and the Johns Hopkins University (the MPA-JHU value-added catalogue), which provides raw measurements and derived properties (e.g. stellar masses and star-formation rates) based on analysis of the SDSS DR7 spectra\footnote{Available at: \url{https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/}.}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{RIHERG_sample_hists_v2.pdf}
\caption{Redshift, stellar mass, 1.4 GHz radio power, and [OIII]$\lambda$5007 emission-line luminosity distributions for the sources in the RI-HERG low (blue) and RI-HERG high (teal) samples.}
\label{fig:RIHERG_hists}
\end{figure}
\subsubsection{The RI-HERG low sample}
\label{subsubsec:RI-HERG_low_sel}
The detailed optical morphologies of the first of the two radio-intermediate HERG samples were investigated in Paper I. The sample comprises all 30 HERGs in the \citet{bh12} catalogue with radio powers in the range 22.5 $<$ log(L$_{\rm 1.4GHz}$) $< 24.0$ W\,Hz$^{-1}$, redshifts $z < 0.1$ and right ascension coordinates ($\alpha$) in the range 07h 15m $< \alpha <$ 16h 45m. These objects were included in the current project to obtain morphological classifications in a manner consistent with the other active galaxy samples studied in this work, so as to allow direct comparison between the results. Their inclusion also allowed the online interface classification method to be compared with the previous detailed approach (see \S\ref{subsec:method_disc}).
Given the lower radio powers of the objects relative to those in the other radio-intermediate HERG sample studied in this work (\S\ref{subsubsec:RI-HERG_high_sel}), this is referred to as the \textit{RI-HERG low} sample from this point forward. Information on the properties of these objects is provided in Paper I and is not included here, for brevity.
\subsubsection{The RI-HERG high sample}
\label{subsubsec:RI-HERG_high_sel}
The second radio-intermediate HERG sample selected from \cite{bh12} consists of all HERGs with radio powers in the range 24.0 $<$ log(L$_{\rm 1.4GHz}$) $< 25.0$ W\,Hz$^{-1}$, redshifts $z<0.15$ and right ascension coordinates in the range 07h 45m $< \alpha <$ 15h 45m. Inspection of Faint Images of the Radio Sky at Twenty-cm \citep[FIRST;][]{bec95} images for all objects in this sample (limiting angular resolution $\sim$5") suggested that the radio emission detected by the NRAO VLA Sky Survey \citep[NVSS;][]{con98} would not be contaminated by other radio sources within its $\sim$45" angular resolution limit. NVSS flux densities were therefore used to derive the radio powers in all cases, due to their better sensitivity to extended, low surface brightness emission. These selection criteria resulted in a final sample of 28 radio-intermediate HERGs that is complete within these constraints.
Some basic properties of these 28 targets are provided in Table~\ref{tab:RI-HERG_high_target_info}, and the distributions of redshift, stellar mass, 1.4 GHz radio power, and [OIII]$\lambda$5007 emission-line luminosity are presented alongside those of the RI-HERG low sample in Figure~\ref{fig:RIHERG_hists}.
\subsection{The 3CR sample}
\label{subsec:3CR_sel}
Our sample of high-radio-power AGN was selected from the \cite{spin85} catalogue of 3CR radio galaxies. The full sample comprises all 84 objects with redshifts in the range 0.05 $< z <$ 0.3. The lower redshift limit restricted the sample to galaxies with high radio powers (L$_{\rm 1.4GHz} \gtrsim 10^{25}$ W\,Hz$^{-1}$), while the upper redshift limit ensured that the imaging observations had sufficient sensitivity and resolution for the detection of faint interaction signatures. However, the morphologies of 11 of the selected sources have been studied in detail in our previous work \citep{ram11,ram12,ram13}, and were thus excluded from the current sample. One further object (3C 452) could not be used for the classification analysis, since the image at its location was ruined by a saturated bright star. The sample studied here comprises the remaining 72 3CR sources.
\begin{table*}
\centering
\caption{\small Basic information for the 72 targets in the 3CR sample described in \S\ref{subsec:3CR_sel}. Full table available as supplementary material.} \label{tab:3CR_info}
\begin{tabular}{lccccccccc}
\hline
Name & \textit{z} & \makecell{log(L$_{\rm 1.4GHz}$)\\(W\,Hz$^{-1}$)} & \makecell{log(L$_{\rm [OIII]}$)\\(W)} & \makecell{log(M$_{*}$)\\(M$_{\odot}$)} & Optical class & Filter & Obs. Date & \makecell{Seeing \small{FWHM}\\ (arcsec)} \\ \hline
3C 20 & 0.174 & 26.88 & 34.52 & 11.3 & HERG & \textit{r} & 2013-08-06 & 1.28 \\
3C 28 & 0.195 & 26.57 & 33.94 & 11.6 & LERG & \textit{r} & 2012-12-13 & 2.34 \\
3C 33 & 0.060 & 25.99 & 35.16 & 11.0 & HERG/Q & \textit{r} & 2012-12-13 & 2.14 \\
3C 33.1 & 0.181 & 26.40 & 35.28 & 11.6 & HERG/Q & \textit{r} & 2013-08-06 & 1.34 \\
3C 35 & 0.067 & 25.39 & 32.99 & 11.0 & LERG & \textit{r} & 2012-12-13 & 1.64 \\
... & ... & ... & ... & ... & ... & ... & ... & ... \\
\hline
\end{tabular}
\end{table*}
The spectroscopic properties of the vast majority of the objects in the sample have been determined by \cite{but09,but10,but11}. The measured [OIII]$\lambda$5007 emission-line luminosities or upper limits\footnote{These are 3$\sigma$ upper limits determined from measurements of the noise level in the spectral regions near to the expected positions of the lines, assuming the instrumental resolution as their width \citep{but09}.} from these studies are listed in Table~\ref{tab:3CR_info} (corrected for Galactic reddening), and were unavailable for only three of the sources: the line was not detected in the spectra of 3C 52 and 3C 130; and 3C 405 (Cygnus A) is not included in the Buttiglione et al. samples. For 3C 52 and 3C 130, the upper limits provided for the H$\alpha$ emission-line fluxes were used to estimate [OIII]$\lambda$5007 luminosity upper limits, under the assumption that the noise levels were equivalent in the two spectral regions.
\cite{but09,but10,but11} also used the spectroscopic data to classify the galaxies as HERGs and LERGs, mostly using the excitation index (EI) scheme from \cite{but10}. In the following cases, however, EI indices could not be calculated due to the lack of detections for key spectral lines: 3C 35; 3C 52; 3C 89; 3C 130; 3C 258; 3C 319; 3C 346; and 3C 438. For these objects, we use the [OIII]$\lambda$5007 equivalent width (EW) method adopted by \cite{bh12} to obtain classifications, with HERGs having EW$_{\rm [OIII]} > 5$\AA{} and LERGs having EW$_{\rm [OIII]} < 5$\AA{}. The continuum levels required for this method were estimated from visual inspection of the spectra presented in \cite{but09} and \cite{but11}. For the objects that lacked clear [OIII]$\lambda$5007 detections, the measured H$\alpha$ fluxes or upper limits were instead used as upper limits on the [OIII]$\lambda$5007 fluxes, for use with the equivalent width criterion (decisive for 3C 52 and 3C 89). When these values were unavailable or were not decisive for the classification, the spectral region near the expected position of the [OIII]$\lambda$5007 line was visually inspected, in order to place an upper limit on its line flux (necessary for 3C 130, 3C 319 and 3C 438).
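For clarity, the equivalent width used in this criterion is the standard ratio of the integrated line flux to the continuum flux density at the position of the line (the symbols below are generic, and are not taken from the cited works):
\[
\mathrm{EW}_{\rm [OIII]} = \frac{F_{\rm [OIII]}}{F_{\lambda,\,\rm cont}}\,,
\]
\noindent
so that an upper limit on $F_{\rm [OIII]}$ translates directly into an upper limit on EW$_{\rm [OIII]}$ for a given continuum level.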
For 3C 405, the [OIII]$\lambda$5007 emission-line luminosity was calculated from the flux measured by \cite{om75} -- corrected for Galactic reddening using the average $E(B-V)$ value of 0.48 determined by \cite{tay03} from optical spectra of the outer regions of the galaxy -- from which it is identified as a HERG using the EW criterion outlined above. The [OIII]$\lambda$5007 luminosity provided by \cite{but10} for 3C 321 is inconsistent with other measurements in the literature, and we instead adopted the value presented in \cite{dick10}.
Basic information on the properties of the 3CR sample is presented in Table~\ref{tab:3CR_info}. The 1.4 GHz radio powers were derived from the 178\,MHz flux densities in \cite{spin85}, assuming a standard spectral index of $\alpha=-0.7$. Stellar mass estimates were derived following the procedure outlined in \S\ref{subsec:mass_calc}. Note that we make no distinction between Type 1 and Type 2 AGN in the 3CR sample, since we find that the presence of a broad-line nucleus in the 11 Type 1 objects does not affect the ability to detect large-scale tidal features. The distributions of redshift, stellar mass, 1.4 GHz radio power, and [OIII]$\lambda$5007 emission-line luminosity are shown separately for the 3CR HERGs and LERGs in Figure~\ref{fig:3CR_hists}.
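As a worked illustration of the radio power conversion (generic notation, not drawn from the catalogues themselves): for a power-law spectrum $S_{\nu} \propto \nu^{\alpha}$ with $\alpha = -0.7$, the monochromatic power scales between the two frequencies as
\[
L_{\rm 1.4\,GHz} = L_{\rm 178\,MHz} \left(\frac{1400\,{\rm MHz}}{178\,{\rm MHz}}\right)^{\alpha} \approx 0.24\,L_{\rm 178\,MHz}\,,
\]
\noindent
i.e. a shift of $\approx -0.63$ dex, with the powers themselves following from the measured flux densities via the standard relation $L_{\nu} = 4 \pi D_{L}^{2} S_{\nu} (1+z)^{-(1+\alpha)}$, where $D_{L}$ is the luminosity distance.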
\begin{figure}
\centering
\includegraphics[width=\linewidth]{3CR_sample_hists.pdf}
\caption{Redshift, stellar mass, 1.4 GHz radio power, and [OIII]$\lambda$5007 emission-line luminosity distributions for the HERG (red) and LERG (blue) sources in the 3CR sample. Objects without mass estimates were not considered when plotting the stellar mass distribution. Those with only upper limits on their [OIII]$\lambda$5007 luminosities were not used for the corresponding distribution.}
\label{fig:3CR_hists}
\end{figure}
\subsection{The Type 2 quasar sample}
\label{subsec:q2_sel}
Our Type 2 quasar sample consists of all 25 objects in the SDSS-selected sample of \cite{reyes08} with [OIII]$\lambda$5007 emission-line luminosities $\rm L_{[OIII]} \geq 10^{8.5}$\,L$_{\odot}$\footnote{Consistent with the Type 2 quasar definition used by \citet{zak03}.} ($\rm L_{[OIII]} \gtrsim 10^{35}$\,W), redshifts $z<0.14$ and right ascension coordinates in the range 00h 30m $< \alpha <$ 12h 30m. These form part of a larger sample of 48 Type 2 quasars (the QSOFEED sample), which is described in detail in Ramos Almeida et al. (2021; in preparation).
As for the radio-intermediate HERG samples, the 1.4 GHz radio powers were derived from their NVSS flux densities whenever possible to avoid sensitivity losses due to resolution effects. However, FIRST flux densities were instead used for J0858+31 and J1218+08, where the FIRST images suggested that the NVSS flux densities would be strongly contaminated by other radio sources. FIRST flux densities were also used for J0818+36 and J1015+00, which were not detected by NVSS. J1036+01 and J1223+08 were not detected by either radio survey. For these objects, upper limits on the 1.4 GHz radio powers were estimated using the 1 mJy beam$^{-1}$ flux density selection limit for the FIRST catalogue \citep{white97}. The sample is found to cover a broad range in radio power and includes 21 radio-intermediate sources ($10^{22.5} <$ L$_{\rm 1.4GHz}$ $< 10^{25}$ W\,Hz$^{-1}$) and two traditionally radio-loud sources (L$_{\rm 1.4GHz}$ $> 10^{25}$ W\,Hz$^{-1}$): J1137+61 and 3C 223\footnote{3C 223 was observed and classified twice as part of both the Type 2 quasar sample and the 3CR sample. The object is considered as part of both samples throughout the analysis, although the classifications obtained from the Type 2 quasar image are used when they are combined, due to the higher image quality. The 1.4 GHz radio power and [OIII]$\lambda$5007 luminosity from Table~\ref{tab:3CR_info} are also used for this object in Table~\ref{tab:q2_target_info}, for consistency.}.
The stellar mass estimates for the hosts of the Type 2 quasars were calculated in the same manner as for the 3CR sources (see \S\ref{subsec:mass_calc}). Basic information for the Type 2 quasar sample is presented in Table~\ref{tab:q2_target_info}. The redshift, stellar mass, 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity distributions for the targets in the sample are presented in Figure~\ref{fig:q2_hists}.
\begin{table*}
\centering
\caption{\small Basic information for the 25 targets in the Type 2 quasar sample described in \S\ref{subsec:q2_sel}. Full table available as supplementary material.} \label{tab:q2_target_info}
\begin{tabular}{lcccccccc}
\hline
SDSS ID (Abbr.) & $z$ & \makecell{$A_{\rm r}$\\(mag)} & \makecell{SDSS \textit{r} mag\\(mag)} & \makecell{log(L$_{\rm 1.4GHz}$)\\(W\,Hz$^{-1}$)} & \makecell{log(L$_{\rm [OIII]}$)\\(W)} & \makecell{log(M$_{*}$)\\(M$_{\odot}$)} & Obs. date & \makecell{Seeing \small{FWHM}\\ (arcsec)}\\
\hline
J005230.59$-$011548.3 (J0052$-$01) & 0.135 & 0.11 & 17.64 & 23.27 & 35.11 & 10.8 & 2020-01-25 & 1.37 \\
J023224.24$-$081140.2 (J0232$-$08) & 0.100 & 0.09 & 16.65 & 23.04 & 35.13 & 10.8 & 2020-01-25 & 1.31 \\
J073142.37+392623.7 (J0731+39) & 0.110 & 0.14 & 16.94 & 22.99 & 35.08 & 11.0 & 2020-01-25 & 1.40 \\
J075940.95+505023.9 (J0759+50) & 0.054 & 0.09 & 15.68 & 23.46 & 35.32 & 10.6 & 2020-01-19 & 1.47 \\
J080224.34+464300.7 (J0802+46) & 0.121 & 0.17 & 16.94 & 23.48 & 35.11 & 11.1 & 2020-01-25 & 1.72 \\
... & ... & ... & ... & ... & ... & ... & ... & ... \\ \hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{q2_sample_hists.pdf}
\caption{Redshift, stellar mass, 1.4 GHz radio power, and [OIII]$\lambda$5007 emission-line luminosity distributions for the galaxies in the Type 2 quasar sample.}
\label{fig:q2_hists}
\end{figure}
\subsection{INT/WFC observations}
\label{subsec:obs}
All deep optical imaging data were obtained using the Wide-Field Camera (WFC) attached to the 2.5m Isaac Newton Telescope (INT) at the Observatorio del Roque de los Muchachos, La Palma. The WFC consists of four thinned, anti-reflection-coated 2048 $\times$ 4100 pixel CCDs separated by gaps of $660 - 1098$ \micron, with a pixel scale of 0.333 arcsec\,pixel$^{-1}$ and a resulting large total field-of-view of 34 $\times$ 34 arcmin$^{2}$.
The vast majority of the images were taken using the WFC Sloan \textit{r}-band filter ($\rm{\lambda_{eff}} = 6240$\,\AA{}, $\Delta \lambda = 1347$\,\AA{}), chosen for consistency with the radio-intermediate HERG observations in Paper I and the observations of radio-loud galaxies in the 2 Jy sample, performed by \citet[][]{ram11}. However, images were taken using the WFC Sloan \textit{i}-band filter ($\rm{\lambda_{eff}} = 7743$\,\AA{}, $\Delta \lambda = 1519$\,\AA{}) for six of the 3CR sources (3C 52, 3C 61.1, 3C 63, 3C 130, 3C 405, 3C 410), in order to reduce the influence of the bright moonlight in the period in which these observations were performed. 3C 198 was observed using the WFC Harris R-band filter ($\rm{\lambda_{eff}} = 6380$\,\AA{}, $\Delta \lambda = 1520$\,\AA{}), which has a different response function to the WFC Sloan \textit{r}-band filter but covers a broadly similar wavelength range.
The observations for the 3CR, RI-HERG high and Type 2 quasar samples were conducted in several separate runs between December 2012 and January 2020, on the dates listed in Tables~\ref{tab:RI-HERG_high_target_info}, \ref{tab:3CR_info} and \ref{tab:q2_target_info}. Observations of the RI-HERG low sample were taken in March 2017, and are described in Paper I. Individual seeing full width at half maximum (FWHM) values for the observations were obtained by averaging the measured FWHM values of foreground stars in the final coadded images (following the reduction outlined in \S\ref{subsec:red}). These values are presented in Tables~\ref{tab:RI-HERG_high_target_info}, \ref{tab:3CR_info} and \ref{tab:q2_target_info}.
The seeing FWHM measurements range from 0.93 to 2.61 arcsec across the four samples, with a median of 1.39 arcsec. The RI-HERG low and RI-HERG high samples were observed with the best typical seeing, with medians of 1.36 and 1.31 arcsec and standard deviations of 0.26 and 0.15 arcsec, respectively. The 3CR and Type 2 quasar samples were typically observed in poorer seeing conditions, with medians of 1.51 and 1.40 arcsec and standard deviations of 0.40 and 0.30 arcsec, respectively. A summary of the seeing measurements for each sample is presented in Table~\ref{tab:lsb_summary}.
The vast majority of the targets were observed using 4 $\times$ 700s exposures, yielding total exposure times of 2800s per target. These integration times were necessary for the detection of low-surface-brightness tidal features, but were divided into separate exposures to avoid target galaxy saturation in individual images. A square dithering pattern was employed for the observations (four pointing offsets of 30 arcsec) in order to cover the gaps in the images introduced by the spacings between the CCDs. Flat-field and image defect corrections were also improved by this process.
Several targets were, however, observed with different total integration times. 3C 405 and 3C 410 lie in crowded fields that contain a large number of stars due to their close proximity to the Galactic plane. The observations of these targets were divided into additional separate exposures that would yield similar total exposure times and reduce the number of saturated stars in individual images: 5 $\times$ 500s for 3C 405 and 7 $\times$ 400s for 3C 410. Only three exposures were obtained for 3C 63 (3 $\times$ 700s), J0818+36 (3 $\times$ 700s) and 3C 458 (3 $\times$ 600s), while five were used for 3C 236, 3C 326 and J0945+17 (each 5 $\times$ 700s), six for 3C 349 (6 $\times$ 700s) and seven for J1015+00 (7 $\times$ 700s). Four shorter exposures were taken for J1223+08 (4 $\times$ 400s), due to time constraints on the night on which this target was observed. In addition, image quality issues meant that only 3 $\times$ 700s individual frames could be used for some targets: J0836+05 and 3C 236 (satellite trails); J0858+31, J0945+17 and J1200+31 (bad CCD striping). Targets with three observations were still dithered sufficiently to overcome the issues related to the WFC CCD gaps.
\subsection{Image reduction and surface brightness depth}
\label{subsec:red}
All processing of the target images, from initial reduction to construction of the final mosaic images, was carried out using \texttt{THELI} (\citeauthor{schirmer13}, \citeyear{schirmer13}; latest version available at \url{https://github.com/schirmermischa/THELI}).
Bias and flat-field corrections for each observation were performed via the subtraction of master bias frames and division by the master flat-field frames, respectively.
For the \textit{i}-band observations, fringing effects were removed using a two-pass image background model. Brighter background features were removed on the first pass, where no detection and masking of objects was performed. Fainter background features were modelled in the second pass, performed after the detection and masking of all objects in the field above a certain threshold (signal-to-noise per pixel of 1.5 for objects with 5 or more connected pixels).
Astrometric solutions for the 3CR, RI-HERG high and Type 2 quasar calibrated images were calculated in \texttt{THELI} through cross-matching object catalogues produced for each image by \texttt{SExtractor} \citep[][]{bertin96} with the GAIA DR2 catalogue (\citeauthor{gaia18}, \citeyear{gaia18}). The all-sky USNO-B1 catalogue (\citeauthor{monet03}, \citeyear{monet03}) was used for the RI-HERG low sample, since these data were reduced with an earlier version of \texttt{THELI} (see Paper I).
A sky model was produced and subtracted from all calibrated images by \texttt{THELI} to correct for remaining variations in the sky background level before final coaddition.
Photometric zero points were calculated in \texttt{THELI} through comparison of the derived instrumental magnitudes for stars in the final coadded image fields with their catalogued magnitudes in the Pan-STARRS1 \citep{cham16} First Data Release\footnote{Note that the analysis in Paper I considered catalogued SDSS \textit{r}-band magnitudes when determining the zero points for the RI-HERG low sample. However, the Pan-STARRS1 magnitudes are used when comparing between the different samples in the current work. Both surveys use the AB magnitude system and the difference between the \textit{r}-band magnitudes is minor, characterised by the following equation: $r_{\rm SDSS} = r_{\rm P1} - 0.001 + 0.011(g-r)_{\rm P1}$ \citep{tonry12}.}. This method has the key advantage of automatically correcting for photometric variability throughout the nights, since the calibration stars are observed at the same time and at the same position on the sky as the galaxy targets. The reliance on average zero points derived from standard star observations is also removed.
Measurements of the surface brightness depth of the observations were obtained using the same procedure as outlined in Paper I, which closely follows that used by \citet{atk13}. These measurements were carried out after all reduction had been performed, including the flattening achieved through the subtraction of the sky background model. The total counts were measured within 40 unique circular apertures of one-arcsecond radius, placed in regions of the sky background with minimal or no influence from the haloes of bright objects in the field.
The standard deviations in the sky background level within the apertures ($\sigma_{\rm sky}$) were converted to apparent magnitudes, using the photometric zero points derived as outlined above, to provide final measurements of the surface brightness depth. The median $3\sigma_{\rm sky}$ surface brightness depth over all \textit{r}-band observations of the targets is 27.0 mag\,arcsec$^{-2}$, with a standard deviation of 0.3 mag\,arcsec$^{-2}$. The value for the \textit{i}-band observations of the 3CR targets is 25.7 mag\,arcsec$^{-2}$ ($3\sigma_{\rm sky}$), with a standard deviation of 0.2 mag\,arcsec$^{-2}$. The differences in individual and total exposure times for some target observations (see \S\ref{subsec:obs}) did not significantly affect the surface brightness depths achieved. Limiting surface brightness measurements for individual targets in the RI-HERG low sample were presented in Paper I. Values for the individual target observations are not presented for the RI-HERG high, 3CR and Type 2 quasar samples. However, a summary of the average values for each sample is presented in Table~\ref{tab:lsb_summary}.
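As a sketch of this conversion (the exact normalisation follows \citealt{atk13}; the notation here is illustrative): for apertures of area $A$ in arcsec$^{2}$ ($A = \pi$ for the one-arcsecond-radius apertures used here), sky-count standard deviation $\sigma_{\rm sky}$ and photometric zero point $ZP$, the $3\sigma_{\rm sky}$ depth is
\[
\mu_{AB}^{3\sigma_{\rm sky}} = ZP - 2.5\,\log_{10}\!\left(\frac{3\,\sigma_{\rm sky}}{A}\right)~{\rm mag\,arcsec^{-2}}\,.
\]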
\begin{table}
\small
\centering
\caption{\small The means, medians and standard deviations of the seeing FWHM (in arcsec) and $3\sigma_{\rm sky}$ limiting surface brightness measurements ($\mu_{AB}$) for the observations of the four samples. The latter values have units of mag\,arcsec$^{-2}$ and are presented in the Pan-STARRS1 AB magnitude photometric system.}
\label{tab:lsb_summary}
\begin{tabular}{ccccccc}
\hline
& & Seeing & & & $\mu_{AB}^{3\sigma_{\rm sky}}$ & \\
& Mean & Median & $\sigma$ & Mean & Median & $\sigma$ \\ \hline
\makecell{RI-HERG low} & 1.42 & 1.36 & 0.26 & 27.0 & 27.0 & 0.3 \\
\makecell{RI-HERG high} & 1.25 & 1.31 & 0.15 & 27.1 & 27.1 & 0.3 \\
\makecell{Type 2 quasars} & 1.44 & 1.40 & 0.30 & 27.0 & 27.0 & 0.3 \\
\makecell{3CR (\textit{r}-band)} & 1.67 & 1.51 & 0.40 & 26.9 & 27.0 & 0.4 \\
\makecell{3CR (\textit{i}-band)} & 1.42 & 1.47 & 0.12 & 25.7 & 25.7 & 0.2 \\ \hline
\end{tabular}
\end{table}
\section{Methodology and control matching}
\label{sec:methods}
\subsection{Stellar mass calculation}
\label{subsec:mass_calc}
Prior to identifying suitable matched control galaxies, estimates of the stellar masses were required for all of the targets to be matched. All objects in the RI-HERG low and RI-HERG high samples had existing stellar mass estimates in the MPA-JHU value added catalogue from which the control galaxies were also extracted\footnote{These were obtained from fitting of the stellar population synthesis models of \cite{bc03} to the SDSS broad-band \textit{ugriz} photometry for the galaxies. The method is similar to that described by \cite{kauf03}, who instead used spectroscopic features to characterise the fits. A comparative discussion of these methods is available at: \url{https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/mass_comp.html}.}, and so no additional calculations were required for these sources.
However, MPA-JHU measurements were not available for all of the objects in the 3CR and Type 2 quasar samples. Initial stellar mass estimates for the galaxies in these samples were instead derived from their Two Micron All Sky Survey \citep[2MASS;][]{skrut06} $K_s$-band luminosities, using the colour-dependent mass-to-light ratio prescription of \cite{bell03} for this waveband. This method was used to derive stellar mass estimates for \textit{all} of the objects in both the 3CR and Type 2 quasar samples (i.e. MPA-JHU values were not used for any of these galaxies), for consistency both within and between the two samples.
The \cite{bell03} mass-to-light ratio prescription for the 2MASS $K_s$-band has the following form (in solar units):
\vspace{-0.2ex}
\begin{equation}
\label{eq:m-to-L}
\mathrm{log}(M/L)_{K_s} = -0.206 + 0.135 \times (B-V)\,.
\end{equation}
\noindent
A $B-V$ colour of 0.95 was assumed for all sources, in line with the expected value for a typical elliptical galaxy at zero redshift \citep[e.g.][]{sh89b}. An additional factor of 0.15 dex was subtracted in order to convert to a Kroupa initial mass function (IMF) for the stellar population \citep{kroupa01}, in accordance with the IMF assumed for the MPA-JHU stellar mass estimates. A final value of $\mathrm{log}(M/L)_{K_s} = -0.228$ was thus used for the calculations.
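Explicitly, substituting $B-V = 0.95$ into Equation~\ref{eq:m-to-L} and applying the IMF offset gives $\mathrm{log}(M/L)_{K_s} = -0.206 + (0.135 \times 0.95) - 0.15 \approx -0.078 - 0.15 = -0.228$.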
Where possible, the ``total" $K_s$-band magnitudes\footnote{Derived by extrapolating the radial surface brightness profile measured within a 20 mag\,arcsec$^{-2}$ elliptical isophote to a radius of around four scale lengths (from S\'ersic-like exponential fitting), in an attempt to account for the flux lost below the background noise.} listed in the 2MASS Extended Source Catalog \citep{jarr00} were used to derive the luminosities for the calculations, in order to ensure that as much as possible of the galaxy flux was encapsulated (XSC magnitudes, hereafter). These values were available for 44 of the 73 3CR sources (60 per cent) and 19 of the 25 Type 2 quasars (76 per cent). For the remaining sources in the two samples, the magnitudes listed in the 2MASS Point Source Catalog \citep[][]{skrut06}, derived from PSF profile fitting, were considered (hereafter PSC magnitudes). These were available for all 6 of the remaining Type 2 quasars and 26 of the 29 remaining 3CR galaxies (90 per cent).
Three of the 3CR sources -- 3C 61.1, 3C 258, and 3C 458 -- did not have either 2MASS XSC or PSC magnitudes available. Stellar mass estimates were not calculated in these cases, and, consequently, the control matching was not performed for these sources. However, these objects were still considered for the morphological classifications and are included in comparisons of the results obtained for the 3CR sample and their matched controls (\S\ref{sec:res}). This was done under the assumption that the stellar masses for these sources, which represent a small minority of the full 3CR sample (4 per cent), are similar to those of the other 3CR objects.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{K-band_mass_vs_MPA-JHU_mass-contour.pdf}
\caption[]{A plot of the $K_s$-band and MPA-JHU stellar mass estimates for the 238,418 galaxies in the MPA-JHU catalogue with 2MASS XSC magnitudes and MPA-JHU stellar mass estimates in the range 10.7 $\leq$ $ \rm log(M_*/M_{\odot}$) $\leq$ 12.0. The line of best fit derived from linear regression is plotted (black), with the grey shaded region representing the one-$\sigma$ error bounds for the fit.}
\label{fig:k_mass_v_mpa_mass}
\end{figure}
The main disadvantage of using the PSC measurements was the loss of sensitivity to emission from the extended regions of the target galaxies. In an attempt to account for this effect, a correction was derived from the average difference between the XSC and PSC magnitudes for the large number of galaxies in the MPA-JHU catalogue that had both measurements available. Considering all MPA-JHU galaxies with redshifts in the range covered by both the 3CR and Type 2 quasar samples ($0.05 < z < 0.3$), a median magnitude difference of $K_s^{\rm PSC} - K_s^{\rm XSC} = 0.811$ was determined, with a standard deviation of 0.311.
Pearson correlation tests revealed only weak correlations between these differences and the measured PSC magnitude or redshift ($r = -0.121$ and $r = 0.100$, respectively), indicating that no strong relationship exists despite the formally small \textit{p}-values expected for a sample of this size. As a result, a fixed value of 0.811 was subtracted from the PSC magnitudes to convert them to estimated XSC magnitudes in all cases, with the latter then being used in the subsequent calculations.
Following this, a Galactic extinction correction was subtracted from the XSC or corrected PSC magnitudes, using the \textit{K}-band values of \cite{sf11}\footnote{Downloaded from the IRSA Galactic Dust Reddening and Extinction online service, available at: \url{https://irsa.ipac.caltech.edu/applications/DUST/}.}. A cosmological \textit{k}-correction was also applied, using the $k(z) \approx (2.1 \pm 0.3)z$ formulation given in \cite{bell03}, which is independent of galaxy spectral type. The corrected magnitudes were then converted to solar luminosities by assuming a solar 2MASS \textit{K$_s$}-band absolute magnitude of 3.27 \citep[][]{will18}, after which Equation~\ref{eq:m-to-L} was applied to obtain the stellar mass estimates. An AGN contribution to the near-infrared light was not subtracted before calculating the stellar masses. However, we do not expect this to have a strong influence on our main results, since: (i) high-resolution near-IR imaging of nearby 3CR radio galaxies showed that the contribution from Type 2 AGN was typically small near $K_s$-band wavelengths \citep[$<$20 per cent of the light in 0.9 arcsec diameter nuclear apertures;][]{ram14a,ram14b}; (ii) Type 1 AGN objects only comprise 15 per cent of the 3CR sample (11/72) and 7 per cent of the full AGN sample (11/154), and Type 1 quasars only 7 per cent and 3 per cent, respectively.
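In compact form, the steps above amount to (a summary sketch assuming the usual distance-modulus convention; $m_{K_s}$ is the XSC or corrected PSC apparent magnitude, $A_{K_s}$ the Galactic extinction and $D_{L}$ the luminosity distance):
\[
M_{K_s} = m_{K_s} - A_{K_s} - k(z) - 5\,\log_{10}\!\left(\frac{D_{L}}{10\,{\rm pc}}\right)\,,
\]
\[
\mathrm{log}\!\left(\frac{M_{*}}{M_{\odot}}\right) = \mathrm{log}(M/L)_{K_s} + 0.4\,\left(3.27 - M_{K_s}\right)\,,
\]
\noindent
where 3.27 is the adopted solar $K_s$-band absolute magnitude and the sign convention for $k(z)$ follows \cite{bell03}.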
As a final step, the same technique was used to determine $K_s$-band stellar mass estimates for all galaxies in the MPA-JHU catalogue with 2MASS XSC magnitudes available. This was done with the goal of obtaining a correction to align the $K_s$-band stellar mass estimates with the MPA-JHU values. Since individual SDSS $g-i$ colours were available for the MPA-JHU galaxies, a second prescription from \cite{bell03} that utilised these measurements was employed in this instance: $\mathrm{log}(M/L)_{K_s} = -0.211 + 0.137 \times (g-i)$. The procedure was otherwise identical to that described above; this again included the subtraction of an additional 0.15 dex to convert to a Kroupa IMF. All XSC-matched galaxies with MPA-JHU stellar mass estimates in the range 10.7 $\leq$ $ \rm log(M_*/M_{\odot}$) $\leq$ 12.0 were considered for the comparison, in approximate agreement with the range of $K_s$-band stellar mass estimates derived for the 3CR and Type 2 quasar samples -- while some of the 3CR sources had $K_s$-band estimates larger than $ \rm log(M_*/M_{\odot}$) = 12.0, this upper limit was chosen because of the large uncertainties determined for MPA-JHU estimates above this value.
The $K_s$-band and MPA-JHU stellar mass estimates for the $\sim$240,000 galaxies considered for the comparison are presented in Figure~\ref{fig:k_mass_v_mpa_mass}. The line of best fit displayed in the figure has the form $ \rm log(M_*/M_{\odot})_{K_s}$ = 0.84\,$ \rm log(M_*/M_{\odot})_{\rm MPA-JHU}$ + 1.77, as derived from linear regression. The typical scatter around the relation is 0.01 dex. No significant evidence that the difference between the stellar mass estimates varies with $g-i$ colour, redshift, 2MASS XSC $K_s$-band magnitude, or the values of either of the stellar mass estimates was found, based on Pearson correlation tests. Consequently, this relation was employed to convert $K_s$-band stellar mass estimates to corresponding MPA-JHU stellar mass estimates in all cases. The final MPA-JHU-equivalent stellar mass estimates for the 3CR and Type 2 quasars are presented in Tables~\ref{tab:3CR_info} and \ref{tab:q2_target_info}, respectively.
\subsection{Control matching procedure}
\label{subsec:control_matching}
The MPA-JHU value-added catalogue contains a large amount of raw and derived data for 927,552 galaxy spectra from SDSS DR7, including the spectroscopic redshifts and stellar mass estimates that were crucial for the control matching. Given the significant overlap between the fields covered by the INT/WFC images and the SDSS DR7 survey footprint, the catalogue provided a suitable ``pool" from which to select matched control galaxies with the same observation properties as the active galaxies (technique, image depth, observing conditions). Prior to performing the control matching, however, the following steps were taken to limit the control pool to galaxies with suitable properties.
\begin{enumerate}
\item \textit{Coordinate restriction.} Galaxies were required to lie within the regions of sky covered by the INT/WFC images of the active galaxies. This was the most restrictive constraint, with only 2,744 of the objects in the catalogue meeting this criterion (0.3 per cent).
\item \textit{Removal of likely AGN.} The MPA-JHU catalogue was constructed solely of objects that had been spectroscopically classified as galaxies in SDSS DR7, with the exception of some objects that had originally been targeted as galaxies but were later classified as quasars. However, the spectral types of both galaxies and quasars were also subclassified based on the properties of their [OIII]$\lambda$5007, H$\alpha$, H$\beta$, and [NII]$\lambda$6583 emission lines (if strongly detected).
All sources that had been subclassified as either AGN or non-star-forming broad-line objects\footnote{Objects with lines detected at the 10\,$\sigma$ level, with velocity dispersion $> 200$ km\,s$^{-1}$ at the 5\,$\sigma$ level.} were removed from consideration. Star-forming or starburst galaxies were not removed. This left 2,615 galaxies in the control pool.
\item \textit{Removal of remaining targets.} The three galaxies from our target samples that had not been identified as AGN based on their SDSS spectra in step (ii) were removed, leaving 2,612 objects.
\item \textit{Removal of duplicate objects.} The MPA-JHU catalogue contains duplicate identifications for a large number of galaxies due to repeat SDSS spectral observations. Any remaining duplicates were taken out of the control pool at this point, which left 2,413 sources available for the matching.
\end{enumerate}
All matched controls used for the analysis were selected from this final restricted pool of 2,413 galaxies. As mentioned, this matching was not performed for the three galaxies in the 3CR sample for which stellar mass estimates could not be obtained from 2MASS magnitudes (\S\ref{subsec:mass_calc}): 3C 61.1; 3C 258; and 3C 458. The Type 2 quasars were also not considered at this stage, although matching was performed for these objects after the classifications had been obtained (see below). A total of 127 active galaxies (58 radio-intermediate HERGs and 69 3CR sources) were therefore considered for control matching. Repeat selections of controls that matched multiple active galaxies were permitted throughout the matching process, in order to maximise the number of possible matches available for each target.
Redshift matches were determined using separate criteria for the radio-intermediate HERG and 3CR samples:
\begin{enumerate}
\item $z_{\rm target} - 0.01 < z_{\rm control} < z_{\rm target} + 0.01$, for matching to the RI-HERG low and RI-HERG high samples;
\item $z_{\rm target} - 0.02 < z_{\rm control} < z_{\rm target} + 0.02$, for matching to the 3CR sample.
\end{enumerate}
\noindent
An increased tolerance was allowed for the 3CR matching because the typically high stellar masses of these sources (median log(M$_*$/M$_{\odot}$) = 11.4) made the selection of matched controls more difficult (see below). All control galaxies with suitable redshifts then needed to meet both of the following two stellar mass criteria:
\begin{enumerate}
\item (log(M$_*$/M$_{\odot}$) + $\sigma$)$_{\rm control}$ $>$ (log(M$_*$/M$_{\odot}$) $-$ $\sigma$)$_{\rm target}$\,;
\item (log(M$_*$/M$_{\odot}$) $-$ $\sigma$)$_{\rm control}$ $<$ (log(M$_*$/M$_{\odot}$) $+$ $\sigma$)$_{\rm target}$\,;
\end{enumerate}
\noindent
i.e. the $1\sigma$ uncertainties on the stellar mass estimates for the target and the control were required to overlap \citep[as in][]{gord19}. A total of 1,581 unique control galaxies were found to meet these selection criteria, an average of $\sim$12 per active galaxy considered. However, since repeat selections of controls were permitted, the true number of matches was in fact much larger than this (8,700).
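For concreteness, the combined redshift and stellar mass criteria can be expressed in a few lines of code. The following is a minimal sketch only (hypothetical names throughout; it is not the implementation actually used for this work):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Galaxy:
    z: float     # spectroscopic redshift
    logM: float  # log10(M*/Msun) stellar mass estimate
    sigM: float  # 1-sigma uncertainty on logM

def is_match(target: Galaxy, control: Galaxy,
             dz: float) -> bool:
    # Redshift criterion: dz = 0.01 for the RI-HERG
    # samples, dz = 0.02 for the 3CR sample.
    if abs(control.z - target.z) >= dz:
        return False
    # Stellar mass criterion: the 1-sigma uncertainty
    # ranges on logM must overlap.
    return (control.logM + control.sigM >
            target.logM - target.sigM) and \
           (control.logM - control.sigM <
            target.logM + target.sigM)

# Example: a target at z = 0.074, logM = 11.2 +/- 0.1
# and a control at z = 0.080, logM = 11.0 +/- 0.15
# satisfy both criteria for dz = 0.01:
print(is_match(Galaxy(0.074, 11.2, 0.1),
               Galaxy(0.080, 11.0, 0.15), 0.01))  # True
\end{verbatim}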
In addition to the three 3CR sources not considered for the matching (listed above), no matches were found for a further 10 of the 3CR sources using these criteria: 3C 130; 3C 234; 3C 323.1; 3C 346; 3C 371; 3C 382; 3C 405; 3C 410; 3C 430; and 3C 433. These sources had stellar mass estimates in the range $\rm 11.3 \leq log(M_*/M_{\odot}) \leq 12.7$, with a median of 11.8, and the lack of success in the matching was found to be driven by the large stellar masses of these objects. Overall, the procedure was thus successful in finding controls for 117 out of the 127 targets considered for the matching (92 per cent).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{control_matching_histograms-ALL_reduced.pdf}
\caption{The redshift (left column) and stellar mass (right column) distributions for the galaxies in each of the four active galaxy samples, alongside the corresponding distributions for their matched controls. For the 3CR sample, the full sample and the objects successfully matched with controls are shown separately.}
\label{fig:control_matching_hists}
\end{figure}
\begin{table}
\centering
\caption{The results of two-sample Kolmogorov-Smirnov (KS) tests performed for the redshift and stellar mass distributions of the active galaxy samples and their respective matched control samples. Both the test statistic ($D$) and the $p$-value are presented in each case. Note that the values listed for the 3CR sample were derived considering only the objects for which control matches were successfully found.}
\label{tab:control_matching_KS}
\begin{tabular}{ccccc}
\hline
& \multicolumn{2}{c}{$z$} & \multicolumn{2}{c}{$ \rm log(M_*/M_{\odot}$)} \\
& $D$ & $p$ & $D$ & $p$ \\ \hline
RI-HERG low & 0.084 & 0.988 & 0.076 & 0.996 \\
RI-HERG high & 0.119 & 0.857 & 0.163 & 0.519 \\
3CR (matched) & 0.075 & 0.922 & 0.095 & 0.726 \\
Type 2 quasars & 0.099 & 0.971 & 0.115 & 0.913 \\
\hline
\end{tabular}
\end{table}
In order to reduce the total number of galaxies requiring classifications, only the five controls with the smallest differences between the target and control stellar mass estimates were considered for each active galaxy. This requirement had the additional advantage of counteracting the potential selection of controls with large uncertainties on their stellar mass estimates. In cases where the target had fewer than five matches, all of the available controls were considered: J1036+38 (RI-HERG low), with 4 matches; 3C 52, with 3 matches; 3C 236, with 2 matches; and 3C 132, 3C 388, 3C 438, J1630+12 (RI-HERG low), J0752+45, J1147+35, and J1436+05 (RI-HERG high), with one match each. A total of 551 control selections (388 unique galaxies, 163 repeats) were made for the 118 matched targets remaining at this point, an average of 4.7 per active galaxy.
A total of 19 of these control galaxies were found to be unsuitable for the classifications due to issues with the images (e.g. bad image regions, defects, crowding/source confusion). In these cases, images for the next closest matches in stellar mass were inspected until a suitable replacement control was found. All 19 controls were successfully replaced, but with 10 additional repeat selections (551 total selections with 173 repeats). This left a final sample of 378 unique control galaxies with images to be used for the classification analysis.
Due to their late inclusion in the project, control matching for the Type 2 quasar sample was performed after the interface classifications had been obtained. The controls for these targets were hence selected from the 378 control galaxies selected as matches to objects in the other active galaxy samples. The criteria used for the matching were identical to those used for the RI-HERG low and RI-HERG high samples: (i) $z_{\rm target} - 0.01 < z_{\rm control} < z_{\rm target} + 0.01$, for the redshift matching; (ii) (log(M$_*$/M$_{\odot}$) + $\sigma$)$_{\rm control}$ $>$ (log(M$_*$/M$_{\odot}$) $-$ $\sigma$)$_{\rm target}$ and (log(M$_*$/M$_{\odot}$) $-$ $\sigma$)$_{\rm control}$ $<$ (log(M$_*$/M$_{\odot}$) $+$ $\sigma$)$_{\rm target}$, for the stellar mass matching.
Using these criteria, it was found that 202 of the 378 controls were also matches to at least one of the galaxies in the Type 2 quasar sample. This included matches for each of the Type 2 quasar objects, with the number of matches for each individual target ranging from 4 to 47. As before, the five controls with the smallest differences between active galaxy and control galaxy stellar mass were selected for the control sample, and repeat selections were allowed. Only four matched controls were available for J1100+08, which were hence all included in the sample. A total of 124 control selections were therefore made for the 25 Type 2 quasar objects (91 unique control galaxies, 33 repeats).
Figure~\ref{fig:control_matching_hists} shows the stellar mass and redshift distributions for each of the active galaxy samples and their respective matched control galaxy samples, which demonstrate the success of the matching.
Two-sample Kolmogorov-Smirnov tests performed on these distributions provided no significant evidence for rejecting the null hypothesis that the targets and their matched controls are drawn from the same underlying distribution, in any of the cases. The results of these tests are presented in Table~\ref{tab:control_matching_KS}.
\subsection{Online classification interface}
\label{subsec:interface}
The online interface used to obtain morphological classifications for the project was made using the Zooniverse Project Builder platform at Zooniverse.org\footnote{Available at: \url{https://www.zooniverse.org/lab}.}, a citizen science web portal that stemmed from the initial Galaxy Zoo project by \cite{lin08,lin11}. Through this interface, eight researchers (all authors except PB, PDG) were blindly shown images of the active galaxies and control galaxies in a randomised order, and were asked to answer multiple choice questions concerning their optical morphologies.
No additional information on the nature of the galaxy to be classified was provided, in order to avoid introducing any biases related to the individual galaxy properties (e.g. active/non-active, target name, stellar mass, redshift, radio power, optical luminosity). However, two scale bars of 10 kpc in size were included on each image to assist with determination of multiple nuclei classifications (see below). The images were centred on the targets and fixed to a size of 200 kpc $\times$ 200 kpc at the redshift of the galaxy in question, in all cases \citep[as in][]{gord19}. The images used for the classifications for all active galaxies (except those in the RI-HERG low sample) are available in the supplementary information.
One disadvantage of carrying out the classifications in this way was that the ability to fully manipulate the image contrast levels \citep[as for those in][]{ram11,ram12,bess12,pierce19,ell19} was lost. The interface was set up with several features that partially accounted for this issue. Firstly, two postage stamp images with different contrast levels were displayed alongside each other for each galaxy: one of high contrast, for clearer identification of high-surface-brightness tidal features and the overall morphological types (spiral/disk, elliptical, etc.); and one of low contrast, for clearer identification of faint morphological structures. The two image contrast levels were chosen manually on a case-by-case basis, with consideration given to the appearance of both the target galaxy structures and the objects and/or image defects within the 200 kpc square surrounding region. In addition, classifiers could zoom in or out and pan around the image to look at specific regions in more detail. Rotation or inversion of the image was also possible.
Using the interface, classifiers were required to answer up to three multiple choice questions related to the morphological appearance of the subject galaxies. The first of these required the classifiers to answer the question ``Does this galaxy show at least one clear interaction signature?" using one of the following options: (i) ``Yes"; (ii) ``No"; or (iii) ``Not classifiable (e.g. image defect, bad region/spike from saturated star)".
The last option was included for cases in which it was not possible to determine whether or not the galaxy was disturbed due to issues with the displayed image (e.g. because of the presence of major image defects). For consistency with the previous study of the RI-HERG low sample (Paper I), dust lanes were included as one of the interaction signature classifications at this stage. However, dust lanes were not considered as a clear signature of a galaxy merger or interaction, and these classifications were not used for the analysis presented in \S\ref{subsec:dist_rates} and \S\ref{subsec:features_merger_stage}.
Should the classifier have answered ``Yes" to this first question, they were then asked to identify the type(s) of interaction signature that they had seen. To do this, they had to answer the question ``What types of interaction signature are visible?" using the following options:
\begin{enumerate}
\item Tail (T) -- a narrow curvilinear feature with roughly radial orientation;
\item Fan (F) -- a structure similar to a tail, but that is shorter and broader;
\item Shell (S) -- a curving filamentary structure with a roughly tangential orientation relative to a radial vector from the main body of the galaxy;
\item Bridge (B) -- a feature that connects the galaxy with a companion;
\item Amorphous halo (A) -- the galaxy halo is misshapen in an unusual way in the image;
\item Irregular (I) -- the galaxy is clearly disturbed, but not in a way that fits any of the alternative classifications;
\item Multiple nuclei (2N, 3N...) -- two or more brightness peaks within a distance of 10\,kpc;
\item Dust lane (D) -- a clear linear dark structure within the galaxy;
\item Tidally interacting companion (TIC) -- a companion galaxy shows clear morphological disturbance that is suggestive of a tidal interaction with the main target (e.g. with direction aligned towards/away from the central target).
\end{enumerate}
\noindent
The classifiers were allowed to select as many options as necessary, to ensure that multiple interaction signatures could be identified for each galaxy when present. These categories were chosen to be consistent with those from the interaction signature classification scheme detailed in Paper I \citep[also with][]{ram11,ram12,bess12}.
A new category was also added to the classification scheme for the current analysis: ``Tidally interacting companion (TIC)". This accounted for cases in which a close companion showed evidence for a tidal interaction with the main target, whether or not the target itself showed clear interaction signatures -- this included cases where the distance limit criterion for the ``Multiple Nuclei (2N, 3N...)" class was not met.
Finally, the classifiers were required to answer the question ``On first impression, what is the morphological type of the galaxy?" using one of the following responses: (i) ``Spiral/disk"; (ii) ``Elliptical"; (iii) ``Lenticular"; (iv) ``Merger (too disturbed to classify as above)"; or (v) ``Unclassifiable (due to image defects, \textit{not} merger)".
Again, these options were chosen to be consistent with the host type classifications obtained from visual inspection of the RI-HERG low sources in Paper I. As for the first question, the last option was included for cases where the classifier thought that issues with the image quality meant that they could not provide an accurate classification.
\section{Analysis and results}
\label{sec:res}
Through the online interface, eight researchers (all authors except PB, PDG) provided morphological classifications for each of the 533 galaxies involved in the project. In the same manner as the morphological classification analysis performed for the RI-HERG low sample (Paper I), each classification was considered as a ``vote" for that particular classification category. A classification was then only accepted when the number of votes it received exceeded a certain threshold, the value of which was dependent on the question considered. In this section, the results related to each of the three classification questions listed in \S\ref{subsec:interface} are addressed in turn. Full classification results for each of the 155 active galaxies studied are presented in the supplementary information.
\subsection{Rates of morphological disturbance}
\label{subsec:dist_rates}
The main goal of the project was to determine how the importance of galaxy mergers and interactions for triggering AGN varies with their radio powers and/or optical emission-line luminosities. This topic was addressed by the first question asked to the classifiers in the online interface: ``Does this galaxy show at least one clear interaction signature?", with the possible answers of ``Yes", ``No" and ``Not classifiable (e.g. image defect, bad region/spike from saturated star)" (see \S\ref{subsec:interface}).
This question was answered by all eight classifiers for every galaxy in the sample. Only one of the listed responses could be selected when answering this question. A threshold of 5 out of 8 votes was chosen as the lower limit for accepting a given classification in all cases (i.e. a simple majority). In this instance, the goal was simply to test whether or not the classifier believed that the galaxy had been disturbed by a merger or interaction. Here, cases in which 5 or more votes were recorded for ``Yes" were taken to confirm that the galaxy was disturbed, and those where 5 or more votes were recorded for ``No" were taken to indicate that the galaxy was not disturbed. Any other distribution of votes (including any number for ``Not classifiable") was considered as an uncertain case.
\subsubsection{Proportions}
\label{subsubsec:q1-props}
Figure~\ref{fig:q1_all_agn_samples} shows the proportions of galaxies classed as disturbed, not disturbed and uncertain for all samples classified using the online interface. The results for the active galaxy samples are presented alongside those for their matched control samples in all cases. The significance of the differences between the active galaxy samples and matched control samples is also shown, as estimated using the two-proportion Z-test. The proportions measured for each of the active galaxy and matched control samples are presented in Table~\ref{tab:q1_props}.
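For reference, the two-proportion Z-statistic used in these comparisons takes the standard pooled form (generic notation: $x_{i}$ is the number of galaxies with a given classification and $n_{i}$ the sample size, so that $\hat{p}_{i} = x_{i}/n_{i}$):
\[
Z = \frac{\hat{p}_{1} - \hat{p}_{2}}{\sqrt{\hat{p}\,(1-\hat{p})\left(\frac{1}{n_{1}} + \frac{1}{n_{2}}\right)}}\,, \qquad \hat{p} = \frac{x_{1} + x_{2}}{n_{1} + n_{2}}\,,
\]
\noindent
with the quoted significances corresponding to the probability of obtaining $|Z|$ under the null hypothesis that the two underlying proportions are equal.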
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Q1-active_gals_vs_matched_controls-new.pdf}
\caption{The proportions of galaxies classed as disturbed, not disturbed or uncertain for each of the samples classified using the online interface. The results for the active galaxy samples are presented alongside those for their respective matched control samples, with the significance of the difference between the measured proportions for each category indicated in all cases (from two-proportion Z-tests). Proportion uncertainties were estimated following the method of \citet[][]{cam11}. The exact proportions are presented in Table~\ref{tab:q1_props}.}
\label{fig:q1_all_agn_samples}
\end{figure}
\begin{table}
\centering
\caption{The proportions of galaxies classed as disturbed, not disturbed and uncertain for all of the active galaxy and matched control samples classified using the online interface, as presented in Figure~\ref{fig:q1_all_agn_samples}. All proportions are expressed as percentages. The results for the 3CR HERG and LERG subsamples are included in separate columns, alongside those found for the full 3CR sample. Proportion uncertainties were estimated following the method of \citet[][]{cam11}.}
\label{tab:q1_props}
\begin{tabular}{C{1.13cm}C{0.88cm}C{0.73cm}C{0.9cm}C{0.73cm}C{0.78cm}C{0.5cm}}
\hline
& \multicolumn{2}{c}{Disturbed} & \multicolumn{2}{c}{\makecell{Not dist.}} & \multicolumn{2}{c}{Uncertain} \\
& AGN & Cont. & AGN & Cont. & AGN & Cont. \\ \hline
\makecell{RI-HERG\\low} & 37 $^{+9}_{-8}$ & 24 $^{+4}_{-3}$ & 53 $\pm$ 9 & 69 $^{+3}_{-4}$ & 10 $^{+8}_{-3}$ & 7 $^{+3}_{-2}$ \\
\makecell{RI-HERG\\high} & 57 $^{+9}_{-10}$ & 35 $^{+5}_{-4}$ & 39 $^{+10}_{-8}$ & 60 $^{+4}_{-5}$ & 4 $^{+7}_{-1}$ & 6 $^{+3}_{-1}$ \\
\makecell{3CR\\ } & 53 $\pm$ 6 & 27 $^{+3}_{-2}$ & 39 $^{+6}_{-5}$ & 67 $\pm$ 3 & 8 $^{+4}_{-2}$ & 7 $^{+2}_{-1}$ \\
\makecell{\textit{-- 3CR}\\\textit{HERGs}} & 66 $^{+7}_{-8}$ & 27 $^{+4}_{-3}$ & 24 $^{+8}_{-5}$ & 66 $^{+3}_{-4}$ & 10 $^{+7}_{-3}$ & 7 $^{+2}_{-1}$ \\
\makecell{\textit{-- 3CR}\\ \textit{LERGs}} & 37 $^{+9}_{-8}$ & 26 $^{+4}_{-3}$ & 57 $^{+8}_{-9}$ & 68 $\pm$ 4 & 7 $^{+8}_{-2}$ & 6 $^{+3}_{-1}$ \\
\makecell{Type 2\\quasars} & 64 $^{+8}_{-10}$ & 26 $\pm$ 4 & 36 $^{+10}_{-8}$ & 69 $\pm$ 4 & -- & 6 $^{+3}_{-1}$ \\ \hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.95\linewidth]{Q1-3CR_HERGs+LERGs_vs_matched_controls-new.pdf}
\caption[The proportions of 3CR HERGs, 3CR LERGs, and their respective matched control galaxies classed as disturbed, not disturbed or uncertain using the online interface.]{As Figure~\ref{fig:q1_all_agn_samples}, but for the HERGs and LERGs in the 3CR sample. In this case, the results for the two samples are presented both alongside each other (first panel) and with those for their respective matched control samples (second and third panels).}
\label{fig:q1_all_HERGs_and_LERGs}
\end{figure*}
From these results, it is seen that the AGN show a preference for disturbed galaxies relative to the matched controls in all cases, supported by the measured proportions for both the disturbed and not disturbed categories. Across the radio AGN samples, the degree of significance for these differences appears to decrease with decreasing radio power, with the most significant excess being found for the 3CR sample ($>$4\,$\sigma$). The Type 2 quasars, however, also show a significant preference for disturbed morphologies relative to their matched controls (3.7\,$\sigma$), suggesting that the optical emission-line luminosity could also be important in this context. In-depth analysis of the relationships with radio power and optical emission-line luminosity is presented in the following subsection.
A previous study of the powerful radio galaxies in the 2Jy sample suggests that the hosts of radiatively-efficient radio AGN (the SLRGs) are typically more likely to be merging/interacting than those of radiatively-inefficient radio AGN \citep[the WLRGs;][]{ram11,ram12,ram13}. In terms of their rates of disturbance, \cite{ram11} found that 94 $^{+2}_{-7}$ per cent of the SLRGs and 27 $^{+16}_{-9}$ per cent of the WLRGs in the sample showed clear signatures of mergers and interactions, proportions that differ at the 4.7\,$\sigma$ level according to a two-proportion Z-test. The inclusion of both HERGs and LERGs in the 3CR sample allows this picture to be tested with a larger sample.
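The two-proportion Z-tests quoted throughout this section follow the standard pooled form; a minimal Python sketch is given below, where the counts in the usage line are hypothetical placeholders rather than the actual sample numbers.
\begin{verbatim}
# Minimal two-proportion Z-test with the pooled standard error.
import numpy as np
from scipy.stats import norm

def two_proportion_z(k1, n1, k2, n2):
    """Z statistic and two-tailed p-value for the null hypothesis
    that two binomial proportions are equal."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = np.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    z = (p1 - p2) / se
    return z, 2.0 * norm.sf(abs(z))

z, p = two_proportion_z(15, 16, 3, 11)  # hypothetical counts
print(f"z = {z:.1f}, p = {p:.3g}")
\end{verbatim}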
Figure~\ref{fig:q1_all_HERGs_and_LERGs} again shows the measured proportions for the disturbed, not disturbed and uncertain categories outlined above, but in this case comparing the results for the 3CR HERG and LERG subsamples and their respective matched control samples (proportions also listed in Table~\ref{tab:q1_props}). These measurements appear to support the picture suggested by the 2Jy results, with HERGs showing an increased preference for disturbed morphologies both relative to their matched controls and to the LERGs. There is also evidence to suggest that the large difference between the disturbance rates for the 3CR sources and their matched control samples (seen in Figure~\ref{fig:q1_all_agn_samples}) is predominantly driven by the HERGs, with the HERG proportions showing $\sim$5\,$\sigma$ differences with respect to those of their matched controls and the LERG proportions only exhibiting $\sim$1\,$\sigma$ differences.
However, the significance of the difference between the HERG and LERG proportions is lower than that found between the 2Jy SLRGs and WLRGs, with the two-proportion Z-test suggesting that the null hypothesis of them being the same can only be rejected at the 2.4\,$\sigma$ level. One caveat with this comparison is that, as mentioned in \S\ref{sec:intro}, the SLRG/WLRG and HERG/LERG classification schemes are not completely equivalent \citep[despite considerable overlap;][]{tad16}. The 2Jy classifications were also obtained through more detailed morphological analysis based on higher quality imaging observations. The effects of these factors on this comparison are discussed in more detail in \S\ref{sec:disc}.
Given that the proportions of galaxies in each sample that were classified as uncertain were small, the trends in the results found for the disturbed galaxies are largely consistent with the opposite trends found for those classified as not disturbed, as can be seen in Figures~\ref{fig:q1_all_agn_samples} and \ref{fig:q1_all_HERGs_and_LERGs}. Therefore, only the galaxies securely classified as disturbed (i.e. above the five-vote threshold) are considered for the remainder of the analysis in this subsection.
\subsubsection{Relationship with radio power and [OIII]$\lambda$5007 luminosity}
\label{subsubsec:q1-RP_and_LOIII}
The detailed morphological analysis of the RI-HERG low sample presented in Paper I suggested that the association between AGN and merging galaxies could be strongly dependent on radio power, but more weakly dependent on optical emission-line luminosity. The significant excesses in the proportions of disturbed galaxies in the 3CR and Type 2 quasar samples, both relative to their respective matched control samples and to the RI-HERG samples, suggested that both properties could be important, but did not provide clear evidence as to which is the main driver of the trend.
The picture suggested by these results is complicated by the fact that there could be some underlying co-dependence between the AGN 1.4 GHz radio powers and [OIII]$\lambda$5007 luminosities. Pearson correlation tests\footnote{Active galaxies with upper limits on either their 1.4 GHz radio powers or [OIII]$\lambda$5007 luminosities were not considered for correlation tests or for the plots in Figure~\ref{fig:q1_dist_enh_vs_RP_OIII}.} suggest that there is a moderate but significant positive correlation between the two parameters in the RI-HERG high and 3CR samples ($r=0.517$, $p=0.005$ and $r=0.533$, $p<10^{-5}$, respectively), but that they are not significantly correlated in the RI-HERG low and Type 2 quasar samples ($r=0.077$, $p=0.648$ and $r=0.118$, $p=0.591$, respectively). This is consistent with the findings of previous studies, where optical emission-line luminosity and radio power are seen to be strongly correlated for powerful radio AGN, but more weakly correlated for those with lower radio powers \citep[e.g.][]{rs91,zb95,best05b}. A significant but weaker positive correlation is found when all of the active galaxy samples are combined ($r=0.315$, $p<10^{-4}$), although this is driven by the stronger correlations seen for the RI-HERG high and 3CR sources, which represent the majority of the total objects (100 out of 155). This suggests that AGN radio power and [OIII]$\lambda$5007 luminosity are not strongly correlated within the sample as a whole.
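The correlation tests described above can be sketched as follows; the arrays are randomly generated stand-ins for the measured luminosities of the detected sources, not the real data.
\begin{verbatim}
# Schematic Pearson test between log radio power and log [OIII]
# luminosity; synthetic stand-in data (upper limits excluded first).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
log_l14 = rng.uniform(22.5, 27.0, size=50)            # stand-in values
log_loiii = 0.4 * log_l14 + rng.normal(0.0, 0.6, 50)  # stand-in values

r, p = pearsonr(log_l14, log_loiii)
print(f"r = {r:.3f}, p = {p:.3g}")
\end{verbatim}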
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{merger_frac_v_RP_OIII-all_active_gals_and_HERGs-props_and_enhs-new_err_v2.pdf}
\caption{The disturbed proportions and enhancement ratios ($f_{\rm AGN}/f_{\rm cont}$) in bins of logarithmic 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity for all active galaxies (left column) and for all HERGs and Type 2 quasars (right column) classified using the online interface. In all cases, the markers represent the median values for the active galaxies in each bin, the ranges of which are indicated by the dotted lines. A line representing equality between the proportions measured for active galaxies and their matched control galaxies is shown on the enhancement plots.}
\label{fig:q1_dist_enh_vs_RP_OIII}
\end{figure}
Figure~\ref{fig:q1_dist_enh_vs_RP_OIII} shows the proportions of galaxies classified as disturbed in bins of 1.4 GHz radio power and [OIII]$\lambda$5007 luminosity for all of the active galaxies in the current project.
These proportions are also expressed as ``enhancements" relative to those measured for their matched controls, i.e. the ratios of the fractions of disturbed active galaxies in each bin to those found for the corresponding matched control galaxies ($f_{\rm AGN}/f_{\rm cont}$). The distributions for all active galaxies (including LERGs) and for all HERGs and Type 2 quasars are presented separately.
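Schematically, the binned disturbed fractions and enhancement ratios can be computed as in the following Python sketch; the bin edges, sample sizes and boolean disturbance flags are illustrative assumptions rather than the actual inputs.
\begin{verbatim}
# Schematic binned disturbed fractions and enhancements (f_AGN/f_cont).
import numpy as np

def binned_fraction(log_lum, disturbed, edges):
    """Fraction of objects flagged as disturbed in each bin."""
    idx = np.digitize(log_lum, edges) - 1
    return np.array([disturbed[idx == i].mean()
                     for i in range(len(edges) - 1)])

edges = np.array([22.5, 24.0, 25.31, 26.73, 28.0])  # assumed bin edges
rng = np.random.default_rng(1)
agn_lum = rng.uniform(22.5, 28.0, 155)              # stand-in values
cont_lum = rng.uniform(22.5, 28.0, 600)
agn_dist = rng.random(155) < 0.45                   # stand-in flags
cont_dist = rng.random(600) < 0.28

enhancement = (binned_fraction(agn_lum, agn_dist, edges) /
               binned_fraction(cont_lum, cont_dist, edges))
print(enhancement)
\end{verbatim}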
Across the full range of radio powers covered, the distributions with 1.4 GHz radio power for the disturbed proportions and their enhancements are not significantly different for the full active galaxy sample and for the HERG and Type 2 quasar objects. Both are consistent with a general increase in the rates of disturbance in the two lower-radio-power bins, but the differences between the proportions in these two bins are not highly significant (at the 1.5\,$\sigma$ level), and this trend is not seen when the enhancement ratios are considered. While the enhancement ratios for both the full active galaxy sample and the HERG/Type 2 quasar subset reveal $\sim$3\,$\sigma$ excesses above equality between the active galaxy and control galaxy disturbance rates in the third radio power bin ($\rm 25.31 \leq log(L_{1.4GHz}) \leq 26.73$ W\,Hz$^{-1}$), no significant excess is observed at higher radio powers.
As found for the distributions with radio power, the disturbed proportions for the full active galaxy sample and the HERG/Type 2 quasar subset are consistent over the full range of [OIII]$\lambda$5007 luminosity covered. Suggestions of a general increase towards higher luminosities are seen, but again have low significance. In this case, however, a positive, seemingly linear trend with [OIII]$\lambda$5007 luminosity is observed in the disturbed enhancement ratio distribution for the HERGs and Type 2 quasars. A Pearson correlation test provides evidence for a strong positive correlation between the disturbed enhancement ratios and the median [OIII]$\lambda$5007 luminosities for the binned HERG/Type 2 quasar data, significant at the 97.2 per cent level: $r_{\rm HERG}=0.972$, $p_{\rm HERG}=0.028$. Bootstrapping analysis shows that the strength of this correlation is not sensitive to the uncertainties in the measured enhancements.
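The bootstrapping check mentioned above amounts to perturbing the binned enhancement ratios within their errors and re-measuring the correlation; a sketch with illustrative stand-in values is given below.
\begin{verbatim}
# Bootstrap: perturb the enhancement ratios within their (assumed
# Gaussian) errors and re-measure the Pearson correlation each time.
import numpy as np
from scipy.stats import pearsonr

med_loiii = np.array([33.6, 34.3, 35.0, 35.7])  # illustrative medians
enh = np.array([1.2, 1.5, 2.1, 2.6])            # illustrative ratios
err = np.array([0.3, 0.3, 0.4, 0.5])            # illustrative errors

rng = np.random.default_rng(2)
r_samples = [pearsonr(med_loiii, rng.normal(enh, err))[0]
             for _ in range(10000)]
print(np.percentile(r_samples, [16, 50, 84]))
\end{verbatim}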
A strong trend with [OIII]$\lambda$5007 luminosity is not seen in the disturbed enhancements for the full active galaxy sample, and a Pearson correlation test in this case indicates no highly significant evidence for a correlation: $r_{\rm full}=0.772$, $p_{\rm full}=0.228$. The relationship is therefore stronger when the LERGs are excluded, and only the HERGs and Type 2 quasar hosts are considered. This suggests that mergers and interactions become increasingly important for triggering radiatively-efficient AGN towards higher optical emission-line luminosities, but that the same relationship does not apply to radiatively-inefficient AGN.
Overall, the results provide evidence that the importance of galaxy mergers and interactions for triggering radiatively-efficient AGN is strongly dependent on [OIII]$\lambda$5007 emission-line luminosity, but not strongly dependent on 1.4 GHz radio power, in contrast with the results from the more detailed analysis of the RI-HERG low sample (Paper I). Given the relationship between [OIII]$\lambda$5007 luminosity and the total AGN power \citep[e.g.][]{heck04}, this supports the idea of an increasing importance of merger-based AGN triggering towards higher bolometric luminosities, as suggested by previous studies in the literature \citep[e.g.][]{tre12}. The observed difference between the distributions of the proportions of disturbed active galaxies and their enhancement ratios also serves to highlight the importance of the control matching process.
\subsubsection{Relationship with stellar mass and redshift}
\label{subsubsec:q1-mass_z}
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{merger_prop_vs_mass_z-controls+targets-matched_and_unmatched.pdf}
\caption[]{The proportions of active galaxies and matched control galaxies classified as disturbed in bins of stellar mass and redshift. The proportions are plotted at the median stellar masses and redshifts of the active galaxies and control galaxies in each bin. Proportion uncertainties were estimated following the method of \citet[][]{cam11}. For the plot against stellar mass, the results for the full active galaxy sample (unfilled points, including the unmatched 3CR sources) and the active galaxies with control matches (filled points) are shown separately.}
\label{fig:q1_dist_frac_vs_mass_z}
\end{figure}
Investigation of the relationship between stellar mass and disturbance rate is important for ensuring that the particularly large stellar masses of the unmatched objects in the 3CR sample did not strongly affect the results outlined in the previous subsections. Figure~\ref{fig:q1_dist_frac_vs_mass_z} shows the distributions of disturbed proportions for the active galaxies and matched control galaxies with both stellar mass and redshift -- note that 3C 130 was not considered for the former plot due to its abnormally large stellar mass relative to the other galaxies in the project (log($\rm M_{*}/M_{\odot}$) = 12.7). The distributions with stellar mass for the matched active galaxies and the full active galaxy sample are shown separately, to illustrate the effect of including the unmatched 3CR objects. The distributions with redshift for these two samples are very similar, and so only the results for the full active galaxy sample are shown, for clarity.
From Figure~\ref{fig:q1_dist_frac_vs_mass_z}, it is seen that the active galaxies have higher rates of disturbance than the matched controls across the full range of stellar mass and redshift, except at the lowest and highest stellar masses. This confirms that the active galaxies are in general more frequently disturbed, as originally indicated by the proportions presented in Table~\ref{tab:q1_props} and Figure~\ref{fig:q1_all_agn_samples}.
No clear trend is visible with redshift for either the active galaxies or matched controls. The highly significant excess in the disturbed fraction for the active galaxies with $0.08 \lesssim z \lesssim0.14$ (4.6\,$\sigma$) is likely caused by the high rates of disturbance found for the RI-HERG high and Type 2 quasar samples, the median redshifts of which both lie within this bin ($z=0.110$ and $z=0.111$, respectively). On the other hand, a positive trend with stellar mass is seen for both the active galaxies and the matched control galaxies. This is steeper for the active galaxies, and therefore the significance of the excess in the disturbed proportions relative to the matched controls is seen to increase with stellar mass in this range: 5.0\,$\sigma$ and 4.3\,$\sigma$ in the third and fourth bins, respectively, for the full active galaxy sample.
Due to the small numbers of active galaxies and control galaxies at the highest stellar masses, the uncertainties on the disturbance proportions are large. As a result, there is no strong evidence to suggest that the inclusion of the unmatched 3CR objects has a significant effect on any differences between the active and control galaxy disturbance rates.
\subsection{Interaction signatures and merger stage}
\label{subsec:features_merger_stage}
The second question in the online interface asked the researchers to identify the specific types of interaction signature that they had seen in the galaxy images, with the goal of better characterising the types and stages of the mergers and interactions identified. Classifiers were asked to answer the question ``What types of interaction signature are visible?" using one or more of the categories outlined in \S\ref{subsec:interface}.
This question was only answered in cases where classifiers had already indicated that clear interaction signatures were visible in the galaxy images by responding ``Yes" to the first question. Multiple responses could be selected, since several different types of interaction signature could be present at the same time. As a result of these two factors, the total number of votes recorded for each galaxy across the different interaction signature categories varied. An interaction signature classification was therefore required to meet two criteria in order to be accepted. Firstly, the threshold of 5 out of 8 votes must have been met for the first question, ensuring that only the galaxies accepted as being disturbed were included in the analysis. Secondly, the majority of the classifiers that had answered ``Yes" to the first question must have voted for that particular interaction signature category, i.e. 3 out of 5, 4 out of 6, 4 out of 7, or 5 out of 8.
The main aim of this question was to determine whether the galaxy appeared to be in the early stages or late stages of a merger or interaction, i.e. prior to or following the coalescence of the galaxy nuclei (``pre-coalescence" or ``coalescence/post-coalescence", respectively). For this purpose, the ``Bridge (B)", ``Tidally interacting companion (TIC)" and ``Multiple nuclei (MN)" classifications were considered to indicate early-stage or pre-coalescence interactions, while the remainder were classified as late-stage or coalescence/post-coalescence interactions (late-stage or post-coalescence hereafter, for brevity). This approach is mostly consistent with that of \cite{ram11,ram12} and \cite{bess12}, with the exception of the new ``Tidally interacting companion (TIC)" category that was added for this work. On the occasions when both early-stage and late-stage interaction signature classifications were accepted, the interaction was considered to be early-stage/pre-coalescence, for consistency with these previous studies.
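In code, the two acceptance criteria and the merger-stage assignment described above amount to something like the following sketch; the category labels follow the abbreviations used in the text, and the example vote dictionary (including the late-stage label ``F'') is hypothetical.
\begin{verbatim}
# Acceptance logic for interaction signature categories and the
# early/late merger-stage assignment described in the text.
EARLY_STAGE = {"B", "TIC", "MN"}  # bridge, tidal companion, mult. nuclei

def accepted_categories(q1_yes, votes):
    """Categories meeting both criteria: the galaxy passed the 5-of-8
    disturbed threshold, and a majority of the 'Yes' voters selected
    the category (i.e. 3/5, 4/6, 4/7 or 5/8)."""
    if q1_yes < 5:
        return set()
    return {cat for cat, n in votes.items() if n > q1_yes / 2}

def merger_stage(q1_yes, votes):
    """Early stage takes precedence if both types are accepted."""
    cats = accepted_categories(q1_yes, votes)
    if not cats:
        return None
    return "pre-coalescence" if cats & EARLY_STAGE else "post-coalescence"

# e.g. 6 'Yes' votes, with 4 votes each for a hypothetical late-stage
# category ("F") and a tidally interacting companion:
print(merger_stage(6, {"F": 4, "TIC": 4}))  # -> pre-coalescence
\end{verbatim}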
Figure~\ref{fig:q2_pre_vs_post_merger} shows the proportions of disturbed radio galaxies, Type 2 quasar objects, and control galaxies with secure interaction signature classifications that are suggestive of early-stage/pre-coalescence or late-stage/post-coalescence events based on the categorisation outlined above. The measured proportions for each sample are provided in Table~\ref{tab:q2_pre_vs_post_merger}.
Late-stage interactions appear to be slightly favoured for the disturbed galaxies in each radio galaxy sample, although the late-stage and early-stage proportions are consistent with being equal for the RI-HERG low sample. This preference is also found when considering the measured proportions for all radio galaxy samples combined: 31 $^{+7}_{-6}$ per cent and 69 $^{+6}_{-7}$ per cent for early- and late-stage interaction signatures, respectively. These results agree well with those found for the powerful radio galaxies in the 2Jy sample, for which 35 $\pm$ 11 per cent and 65 $\pm$ 11 per cent of the disturbed objects with $z < 0.3$ show signatures of pre- and post-coalescence interactions, respectively \citep{ram11}.
\begin{figure}
\centering
\includegraphics[width=0.93\columnwidth]{Q2-all_targets_pre_vs_post_coal_with_cont-new_err_v2.pdf}
\caption{The proportions of disturbed galaxies in each of the active galaxy samples that show evidence for pre-coalescence or post-coalescence interactions. The values for the disturbed galaxies in the full control sample and for all radio galaxy samples combined are also presented. Only galaxies with secure classifications of pre- or post-coalescence interaction signatures were considered for the analysis (see text), and the proportions shown were derived relative to the combined total number of these classifications for each sample. The exact measured proportions are listed in Table~\ref{tab:q2_pre_vs_post_merger}.}
\label{fig:q2_pre_vs_post_merger}
\end{figure}
\begin{table}
\centering
\small
\caption{The proportions of galaxies with interaction signatures that are indicative of pre-coalescence or post-coalescence mergers and interactions, as presented in Figure~\ref{fig:q2_pre_vs_post_merger}. All proportions are expressed relative to the combined total number of secure pre- and post-coalescence classifications within each sample, and are given as percentages. Proportion uncertainties were estimated following the method of \citet[][]{cam11}.}
\label{tab:q2_pre_vs_post_merger}
\begin{tabular}{lcc}
\hline
& Pre-coalescence & Post-coalescence \\ \hline
RI-HERG low & 40 $^{+16}_{-13}$ & 60 $^{+13}_{-16}$ \\
RI-HERG high & 19 $^{+13}_{-6}$ & 81 $^{+6}_{-13}$ \\
3CR (full) & 35 $^{+13}_{-9}$ & 65 $^{+9}_{-13}$ \\
Type 2 quasars & 69 $^{+9}_{-13}$ & 31 $^{+13}_{-9}$ \\
All controls & 59 $^{+5}_{-6}$ & 41 $^{+6}_{-5}$ \\
All radio galaxies & 31 $^{+7}_{-6}$ & 69 $^{+6}_{-7}$ \\
\hline
\end{tabular}
\end{table}
On the other hand, the Type 2 quasar hosts show a preference for early-stage (69 $^{+9}_{-13}$ per cent) relative to late-stage (31 $^{+13}_{-9}$ per cent) interactions. These proportions are, however, consistent with those found for the disturbed galaxies in the control sample, for which 59 $^{+5}_{-6}$ per cent and 41 $^{+6}_{-5}$ per cent were found to have early-stage and late-stage interaction signatures, respectively. Two-proportion Z-tests suggest that the null hypothesis that the proportions of early-stage (or late-stage) interactions in the control sample and Type 2 quasar sample are the same can only be rejected at a confidence level of 0.7\,$\sigma$. Similar fractions of pre-coalescence and post-coalescence interactions were also found for the disturbed hosts of Type 2 quasars with moderate redshifts ($0.3 < z < 0.41$) studied by \cite{bess12}: 47 $\pm$ 13 per cent and 53 $\pm$ 13 per cent, respectively.
The measurements for the Type 2 quasar sample and control sample contrast significantly with those found for the radio galaxies, with the proportions of early-stage (or late-stage) interactions differing from those in the combined radio galaxy sample at confidence levels of $\sim$3\,$\sigma$ in both cases. Note that all of the above results are preserved if the radio galaxies with [OIII]$\lambda$5007 luminosities above the quasar-like threshold ($\rm L_{[OIII]} \geq 10^{35}$ W) are not considered for the analysis -- the early-stage and late-stage proportions become 30 $^{+8}_{-6}$ per cent and 68 $^{+6}_{-8}$ per cent, respectively.
Overall, the interaction signature classifications from the interface therefore tentatively suggest that radio AGN are preferentially triggered in the late stages of galaxy mergers and interactions, while Type 2 quasars are preferentially triggered in their early stages.
Regardless of this, AGN host galaxies with both pre- and post-coalescence interaction signatures are found in each of the active galaxy samples. Therefore, if the galaxy mergers and interactions are responsible for triggering each of the types of AGN considered, this can occur at several different phases during these events.
\subsection{Morphological types}
\label{subsec:morph_types}
The final question in the online interface asked the classifiers to indicate the overall morphological type of the galaxy in the image. Classifiers were required to answer the question ``On first impression, what is the morphological type of the galaxy?" by selecting spiral/disk, elliptical, lenticular, merger (too disturbed to categorise), or indicating that it was unclassifiable due to image issues (\S\ref{subsec:interface}). This was done to test whether the longstanding association between radio AGN and early-type host galaxies varies with radio power, as suggested by the mixed population of late- and early-type galaxies found for the RI-HERG low sample in Paper I.
All classifiers were required to answer this question, and only one of the available responses could be selected for each galaxy. As a result, a threshold of 5 out of 8 votes was again used for accepting a classification. If this was not met for any of the options (excluding the ``Unclassifiable (due to image defects, \textit{not} merger)" category), the galaxy was classed as having an uncertain host type.
\begin{table*}
\centering
\caption{The proportions of galaxies with host types classed as elliptical, spiral/disk, lenticular, merger (too disturbed to place in former categories), or uncertain for all of the active galaxy and matched control samples classified using the online interface. All proportions are expressed as percentages, and the number of objects in each sample is also presented. The results for the 3CR HERG and LERG subsamples are included in separate columns, alongside those found for the full 3CR sample. Proportion uncertainties were estimated following the method of \citet[][]{cam11}.}
\label{tab:q3_props}
\begin{tabular}{cccccccccccc}
\hline
& & \multicolumn{2}{c}{Elliptical} & \multicolumn{2}{c}{\makecell{Spiral/disk}} & \multicolumn{2}{c}{Lenticular} & \multicolumn{2}{c}{Merger} & \multicolumn{2}{c}{Uncertain} \\
& { $N$} & AGN & Cont. & AGN & Cont. & AGN & Cont. & AGN & Cont. & AGN & Cont. \\ \hline
\makecell{RI-HERG low} & { 30} & 17 $^{+9}_{-5}$ & 49 $\pm$ 4 & 47 $\pm$ 9 & 34 $\pm$ 4 & 10 $^{+8}_{-3}$ & 4 $^{+2}_{-1}$ & 0 $^{+6}$ & 1 $^{+2}$ & 27 $^{+9}_{-6}$ & 11 $^{+3}_{-2}$ \\
\makecell{RI-HERG high} & { 28} & 60 $^{+8}_{-9}$ & 56 $^{+4}_{-5}$ & 14 $^{+9}_{-4}$ & 25 $^{+4}_{-3}$ & 0 $^{+6}$ & 3 $^{+2}_{-1}$ & 4 $^{+7}_{-1}$ & 2 $^{+2}_{-1}$ & 21 $^{+10}_{-6}$ & 14 $^{+4}_{-3}$ \\
\makecell{3CR (full)} & { 72} & 86 $^{+3}_{-5}$ & 73 $^{+2}_{-3}$ & 1 $^{+3}$ & 16 $\pm$ 2 & 0 $^{+2}$ & 1 $^{+1}$ & 3 $^{+3}_{-1}$ & 1 $^{+1}$ & 10 $^{+5}_{-2}$ & 9 $\pm$ 2 \\
\makecell{3CR HERGs} & { 41} & 78 $^{+5}_{-8}$ & 78 $\pm$ 3 & 2 $^{+5}_{-1}$ & 14 $^{+3}_{-2}$ & 0 $^{+4}$ & 2 $^{+2}_{-1}$ & 5 $^{+6}_{-2}$ & 1 $^{+1}$ & 15 $^{+7}_{-4}$ & 6 $^{+2}_{-1}$ \\
\makecell{3CR LERGs} & { 30} & 97 $^{+1}_{-7}$ & 71 $^{+3}_{-4}$ & 0 $^{+6}$ & 16 $^{+4}_{-3}$ & 0 $^{+6}$ & 1 $^{+2}$ & 0 $^{+6}$ & 1 $^{+2}$ & 3 $^{+7}_{-1}$ & 11 $^{+3}_{-2}$ \\
\makecell{Type 2 quasars} & { 25} & 52 $^{+9}_{-10}$ & 47 $^{+5}_{-4}$ & 8 $^{+9}_{-3}$ & 37 $^{+5}_{-4}$ & 4 $^{+8}_{-1}$ & 5 $^{+3}_{-1}$ & 12 $^{+10}_{-4}$ & 1 $^{+2}$ & 24$^{+10}_{-6}$ & 9 $^{+3}_{-2}$ \\
\hline
\end{tabular}
\end{table*}
The measured morphological type proportions for all samples are provided in Table~\ref{tab:q3_props}. With the exception of the RI-HERG low sample (27 $^{+9}_{-6}$ per cent early type, 47 $\pm$ 9 per cent late type), it is found that the majority of the galaxies in each radio galaxy sample were classified as having early-type morphologies (elliptical and lenticular; dominated by the former), and small proportions were deemed to have late-type morphologies (spiral or disk-like). Their respective matched controls show relative excesses in late-type morphologies in all cases, with the same exception. The majority of the Type 2 quasar hosts are also classed as ellipticals, and a significant excess in the merger category relative to their matched controls is also found (3.1\,$\sigma$), consistent with the high overall rate of disturbance determined for the sample (64 $^{+8}_{-10}$ per cent; \S\ref{subsec:dist_rates}).
In order to investigate the relationship between AGN host type and radio power, the proportions of active galaxies with morphologies classified as early-type and late-type were compared across the full range of radio powers covered. Here, all galaxies classified as either elliptical or lenticular were considered to be of early-type, and those classed as spirals or disks were considered to be of late-type, consistent with the analysis presented in Paper I.
Figure~\ref{fig:q3_host_types_vs_RP} shows the proportions of early-type and late-type galaxies in bins of 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity for the full active galaxy sample and, separately, for only the HERGs and Type 2 quasars. A strong positive correlation is observed between the early-type proportions and the medians of the 1.4 GHz radio power bins for both the full active galaxy sample ($r_{\rm full}=0.994$, $p_{\rm full}=0.006$) and the HERG and Type 2 quasar subset ($r_{\rm HERG}=0.998$, $p_{\rm HERG}=0.002$), according to Pearson correlation tests. This is coupled with a strong negative correlation for the proportion of late-type galaxies in both cases ($r_{\rm full}=-0.993$, $p_{\rm full}=0.007$ and $r_{\rm HERG}=-0.997$, $p_{\rm HERG}=0.003$). Pearson correlation tests also suggest strong correlations with [OIII]$\lambda$5007 luminosity, although these are of lower significance than the relationships with 1.4 GHz radio power: $r_{\rm full}=0.868$, $p_{\rm full}=0.132$ and $r_{\rm HERG}=0.972$, $p_{\rm HERG}=0.028$ for the early-type proportions, and $r_{\rm full}=-0.978$, $p_{\rm full}=0.022$ and $r_{\rm HERG}=-0.985$, $p_{\rm HERG}=0.015$ for the late-type proportions. In contrast with the rates of disturbance, this suggests that the host types are more strongly linked with the radio power of the AGN than with the optical emission-line luminosity.
\begin{figure}
\centering
\includegraphics[width=0.98\columnwidth]{host_type_frac_v_RP_OIII-all_active_gals_and_HERGs-props-new_err.pdf}
\caption{The proportions of active galaxies classed as having early-type (elliptical or lenticular) and late-type (spiral or disk) morphologies against their 1.4 GHz radio powers (top panels) and [OIII]$\lambda$5007 emission-line luminosities (bottom panels).}
\label{fig:q3_host_types_vs_RP}
\end{figure}
While the control galaxies were matched to the targets in terms of their stellar masses and redshifts, matching of their morphological types was not performed. It is therefore important to check that the differences between the rates of disturbance for the active galaxies and control galaxies were not caused by the general preference for late-type morphologies exhibited by the latter objects (Table~\ref{tab:q3_props}).
Across the full sample of galaxies studied in the project, including all active galaxies and control galaxies, it is found that 30 $^{+3}_{-2}$ per cent of the early types and 26 $\pm$ 4 per cent of the late types were classified as disturbed. A two-proportion Z-test indicates that the null hypothesis that these two proportions are the same can be rejected at the 0.9\,$\sigma$ level. There is thus little evidence for a significant difference between the disturbance rates of the early-type and late-type galaxies studied, and the general excess of late-type galaxies in the control samples relative to the active galaxy samples should therefore not affect the comparison of their disturbance fractions. Matching of the morphological types would hence have had little effect on the results presented in \S\ref{subsec:dist_rates} \citep[as found by][]{gord19}.
Taken together, the results described in this section are consistent with the longstanding idea that the most powerful radio AGN are hosted by massive early-type galaxies. In addition, the fraction of early-type hosts is found to decrease strongly with decreasing radio power, while the proportion of late-type hosts shows the opposite trend. Since the sample is dominated by HERGs and Type 2 quasar objects, this supports the picture of a transition in the dominant host types for radiatively-efficient AGN from massive early-type galaxies at high radio powers to late-type hosts at low radio powers (i.e. like Seyfert galaxies), supporting the results from Paper I.
\section{Discussion}
\label{sec:disc}
\subsection{Comparison with more detailed inspection -- The RI-HERG low sample}
\label{subsec:method_disc}
The online interface provided a means for obtaining morphological classifications of a large sample of active galaxies and matched control galaxies in a time-efficient manner. The standardised image format, set classification questions/categories and the randomisation of the galaxy image presentation also allowed the levels of individual classifier bias to be greatly reduced.
Prior to comparing with results from the literature, however, we here consider how classifications obtained for the RI-HERG low sample using the online interface compare with those obtained from the more detailed visual inspection (i.e. with the ability to manipulate the image contrast and scale as required) in Paper I.
When considering the online interface classifications, it is found that 11 out of 30 (37 $^{+9}_{-8}$ per cent) of the galaxies in the RI-HERG low sample are classed as disturbed, compared with the 16 out of 30 (53 $\pm$ 9 per cent) found from the more detailed analysis. The results from the two methods therefore show good overall agreement, with the same classifications (either both disturbed or both not disturbed) being determined for 77 per cent of the RI-HERG low objects (23 out of 30). Most importantly, this includes the eight galaxies that in Paper I were identified as ``highly disturbed" based on cursory visual inspection, all of which were classified as disturbed by either 7 (J1351+46, J1358+17) or all 8 (J0757+39, J0836+44, J0902+52, J1243+37, J1257+51, J1412+24) of the researchers in the online interface.
Of the remainder, 3 galaxies (10 per cent of the sample) were classified as having an uncertain level of disturbance (4 votes disturbed, 4 votes not disturbed) through the online interface but as disturbed in the more detailed analysis (J0827+12, J1601+43, J1609+13). A further 3 galaxies (10 per cent) had secure classifications as not disturbed from the interface (i.e. meeting the 5-vote threshold) that disagreed with the disturbed classifications from the more detailed analysis (J0725+43, J0911+45, J1236+40). As can be seen from the images presented in Paper I, all of these galaxies exhibit subtle morphological signatures of disturbance, and, because of the limited image manipulation afforded by the interface method, the lack of agreement between the two methods is unsurprising. The final galaxy, J1324+27, was the only object classified as disturbed when using the online interface but as not disturbed in the detailed analysis. This galaxy was a borderline case, however, with 5 votes recorded for disturbed and 3 for not disturbed, and it is seen to exhibit an unusual spiral structure that could be interpreted either as a sign of disturbance or as that of an undisturbed late-type galaxy.
When dividing the sample into two halves by radio power, the two methods give the same proportion of disturbed galaxies for the half with the highest radio powers (10 out of 15; 67 $^{+10}_{-13}$ per cent), but a much reduced proportion is found for the lower-radio-power half from the interface classifications relative to the detailed inspection -- 1 out of 15 (7 $^{+13}_{-2}$ per cent) and 6 out of 15 (40 $^{+11}_{-13}$ per cent), respectively. This suggests that the galaxies in the higher-radio-power half of the sample exhibit higher levels of disturbance than those in the lower-radio-power half, in support of the conclusions drawn based on the ``highly disturbed" galaxies in the sample in Paper I. From the interface classifications, the two-proportion Z-test now indicates that the null hypothesis that the two proportions are equal can be rejected at a confidence level of 3.4\,$\sigma$, compared to the value of 1.5\,$\sigma$ obtained previously. Repeating this analysis in terms of [OIII]$\lambda$5007 luminosity, proportions of 7 out of 15 (47 $\pm$ 17 per cent) and 4 out of 15 (27 $\pm$ 11 per cent) are measured for the high-luminosity and low-luminosity halves of the sample, respectively, a difference only at the 1.1\,$\sigma$ level.
These results hence appear to support the idea that the importance of mergers and interactions for triggering radio AGN is strongly dependent on radio power but more weakly dependent on optical emission-line luminosity, as suggested in Paper I. However, they are based on the measured proportions of disturbed galaxies, which, as shown in \S\ref{subsec:dist_rates}, increase strongly with stellar mass for both active and non-active galaxies. Within the RI-HERG low sample, Pearson correlation tests suggest that there is a moderate but significant correlation between stellar mass and 1.4 GHz radio power ($r=0.569$, $p=0.001$), but no significant correlation between stellar mass and [OIII]$\lambda$5007 luminosity ($r=0.095$, $p=0.617$). While the disturbed proportions for radio-intermediate active galaxies are also positively correlated with radio power in the current project, this is not observed when the matched control galaxy proportions are taken into consideration (i.e. the enhancement ratios in Figure~\ref{fig:q1_dist_enh_vs_RP_OIII}). These factors therefore suggest that the apparent relationship with radio power from the Paper I results is in fact a consequence of an underlying trend with stellar mass in the general galaxy population, showing the importance of performing the control matching.
Considering the results for the host types, it is found that the classifications from the two methods agree for 18 of the 30 galaxies in the sample (60 per cent). However, 8 of the remainder were classed as uncertain (27 per cent), and so secure classifications (with $\geq$5 votes) from the interface only disagreed for 4 of the galaxies (13 per cent). Of this latter group, 3 out of 4 galaxies were classed as lenticular by one of the two methods and as late-type (spirals/disks) by the other, a difference that could be caused by the reduced ability to identify finer structures when using the interface. Interestingly, in the more detailed analysis, the host types for 5 out of the 8 uncertain cases from the interface classifications were deemed too disturbed to classify (the ``Merger" class), which could explain this categorisation. Therefore, although the rate of complete consistency is lower than for the classifications of morphological disturbance, the general agreement still appears to be good between the two methods, given these factors.
Overall, this comparison suggests that the interface classifications provide good sensitivity to major levels of disturbance but not to minor levels, and the derived rates of disturbance should therefore be treated as lower limits. Furthermore, the lower sensitivity to more subtle morphological details could lead to the preferential classification of early-type hosts relative to late-types. However, any limitations introduced by the interface method should affect the active galaxies and their matched control samples equally, and so conclusions based on relative comparisons between the two should be secure in all cases. This again highlights the importance of the control matching process carried out for the current work.
\subsection{The rates of disturbance and AGN triggering}
\label{subsec:mergers_and_triggering_disc}
The importance of galaxy mergers and interactions for triggering AGN has been the subject of much debate. The most widely accepted model suggests that radiatively-efficient AGN (e.g. HERGs/SLRGs/quasars) and radiatively-inefficient AGN (e.g. LERGs/WLRGs) differ in their dominant triggering and fuelling mechanisms \citep[e.g.][]{hb14,yn14}. In this picture, radiatively-efficient AGN are fuelled by a high Eddington rate cold gas flow from a standard accretion disk \citep[c.f.][]{ss73}, and hence a sufficient supply of such gas must be available to the central SMBH in order to initiate and sustain this type of nuclear activity. The strong inflows of gas caused by the tidal forces associated with galaxy mergers and interactions \citep[e.g.][]{bh96,gabor16} therefore provide an attractive mechanism for triggering and fuelling the AGN in these objects.
Radiatively-inefficient AGN are then thought to be fuelled by an optically thin, geometrically thick accretion flow of hotter gas at low Eddington rates \citep[c.f.][]{ny94,ny95,nar05}. At high radio powers, the favoured fuelling mechanisms in this case are often linked with the prevalent hot gas supply in the host galaxy haloes and the dense larger-scale environments in which these active galaxies typically lie \citep[e.g.][]{baum92,best05b,hard07,gas13}, with mergers and interactions thus being relatively less important. The results obtained from the interface classification analysis are here discussed in this context.
\subsubsection{Powerful radio galaxies -- Comparison with the 2Jy sample}
\label{subsubsec:2jy_comp}
Previous deep, ground-based optical observations of powerful radio galaxies have revealed frequent morphological signatures of galaxy mergers and interactions, which, in keeping with the picture outlined above, are found to be more prevalent for those also exhibiting strong optical emission lines \citep{heck86,sh89a,sh89b,ram11}. Furthermore, evidence from HST imaging observations suggests that the hosts of bright AGN with moderate to high radio luminosities also have high merger rates at higher redshifts \citep[$1<z<2.5$;][]{chi15}.
\cite{ram11} found that 94 $^{+2}_{-7}$ per cent of the SLRGs in the 2Jy sample display these signatures; the SLRGs were also found to preferentially lie in the moderate-density group environments that favour the frequent occurrence of these events \citep{ram13}. In contrast, evidence for morphological disturbance was found in only 27 $^{+16}_{-9}$ per cent of the 2Jy WLRGs \citep{ram11}, which were found to be predominantly associated with denser cluster environments \citep{ram13}, where the high relative galaxy velocities can have a negative effect on the merger rate \citep{pb06}.
The rates of disturbance were also found to be considerably lower for non-active early-type galaxies with comparable optical luminosities, redshifts and image depths than for the 2Jy SLRGs, with disturbance fractions of 53 $\pm$ 7 per cent \citep[$z<0.2$, from the OBEY survey;][]{tal09} and 48 $\pm$ 5 per cent \citep[$0.2 \leq z < 0.7$, from the Extended Groth Strip;][]{zhao09} when interaction signatures with the same surface brightness limits were considered \citep[][]{ram12}.
Considering the results obtained for the 3CR HERGs in the current analysis, it is seen that 66 $^{+7}_{-8}$ per cent of the objects are classed as disturbed based on the interface classifications, an excess at the 4.7\,$\sigma$ level relative to their matched controls. While the rate of disturbance is notably lower than the fraction determined for the 2Jy SLRGs at lower redshifts -- 93 $^{+2}_{-13}$ per cent \citep[13 out of 14 objects at $z<0.2$;][]{ram11} -- the excess relative to matched control galaxies in the OBEY survey for the latter is, in fact, less highly significant (at the 3.3\,$\sigma$ level). Both studies therefore suggest significant enhancements in disturbance rate for the hosts of radiatively-efficient AGN at high radio powers. The lower rates of disturbance found for both the 3CR HERGs and their matched controls are then likely accounted for by the reduced sensitivity to low surface brightness tidal features when using the interface method (\S\ref{subsec:method_disc}).
Turning to the 3CR LERGs, it is found that only 37 $^{+9}_{-8}$ per cent are classed as disturbed based on the interface classifications, which is consistent with the value of 20 $^{+17}_{-7}$ per cent determined for 2Jy WLRGs at the same redshifts (2 out of 10 objects at $z<0.2$). In addition, no significant evidence to suggest that the rates of disturbance differ from those of their matched controls is found, in both cases.
The results from the current analysis are therefore consistent with the idea that galaxy mergers and interactions are highly important for triggering the most powerful radio galaxies with radiatively-efficient AGN, but are much less important for triggering those with radiatively-inefficient AGN.
\subsubsection{Comparison with radio-intermediate LERGs}
\label{subsubsec:ri-lerg_comp}
Recently, a study of the optical morphologies of a large sample of low redshift ($z<0.07$) LERGs with mostly intermediate radio powers ($\rm 21.7 < log(L_{1.4GHz}) < 25.8$ W\,Hz$^{-1}$, median $10^{23}$ W\,Hz$^{-1}$) was undertaken by \cite{gord19}. The 282 LERGs were classified alongside 1622 control galaxies matched in stellar mass, redshift and large-scale environment, using a similar online interface technique to that used for the current analysis. While the Dark Energy Camera Legacy Survey \citep[DECaLS;][]{dey19} images used by \cite{gord19} have a fainter limiting surface brightness depth \citep[$\mu_{r} \sim 28$ mag\,arcsec$^{-2}$;][]{hood18} than the INT/WFC images ($\mu_{r} \sim 27$ mag\,arcsec$^{-2}$), this is expected to have little effect on the classifications, given the reduced sensitivity to low-surface-brightness features when using the interface method (see \S\ref{subsec:method_disc}). These results are therefore directly comparable with those obtained from the online interface classifications of the current sample. The main caveat is that the classifiers were able to indicate whether the level of disturbance was ``major" or ``minor" when classifying the objects as disturbed, whereas the disturbed classifications in the current work encompass both.
The overall rates of disturbance determined for the radio-intermediate LERGs and their matched controls, considering classifications of both minor and major disturbances, are 28.7 $\pm$ 1.1 per cent and 27.3 $\pm$ 0.5 per cent, respectively, a difference at a confidence level of $<0.5$\,$\sigma$ \citep{gord19}. Considering the combined interface classification results for the RI-HERG low and RI-HERG high samples from the current analysis ($\rm 22.5 < log(L_{1.4GHz}) < 25$ W\,Hz$^{-1}$; $z < 0.15$), disturbance rates of 47 $^{+7}_{-6}$ per cent and 29 $\pm$ 3 per cent are found for the radio-intermediate HERGs and their matched controls, a difference at the 2.7\,$\sigma$ level. Since the proportions measured for the control samples from both analyses are consistent ($<0.5$\,$\sigma$ difference), this suggests that the merger rates for radio-intermediate HERGs are significantly higher than those for radio-intermediate LERGs -- a two-proportion Z-test indicates that the null hypothesis that the disturbance proportions are equal can be rejected at a confidence level of 2.7\,$\sigma$.
In comparison, the rates of disturbance for the high-radio-power 3CR LERGs (37 $^{+9}_{-8}$ per cent) are consistent with both the radio-intermediate LERGs and the control sample.
Therefore, while the 3CR LERGs have typically higher 1.4 GHz radio powers and [OIII]$\lambda$5007 emission-line luminosities\footnote{Two-sample KS tests suggest that the null hypothesis that the two LERG samples are drawn from the same underlying 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity distributions can be rejected with very high confidence: $D = 0.972$, $p < 10^{-21}$ and $D = 0.852$, $p < 10^{-14}$ (upper limits excluded), respectively.}, there is no strong evidence that the rate of disturbance is significantly increased in this population. In support of this, \citet{ell15} find that their sample of LERGs with predominantly intermediate radio powers shows no significant excess in close pairs and post-merger signatures relative to non-active controls, when both host galaxy properties and environmental structure are accounted for.
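The two-sample KS comparison in the footnote can be reproduced schematically as follows; the arrays are randomly generated stand-ins for the two LERG radio power distributions, not the measured values.
\begin{verbatim}
# Schematic two-sample KS test comparing the radio power distributions
# of the two LERG samples; the arrays are stand-ins, not the real data.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
logp_3cr = rng.normal(26.5, 0.5, size=30)   # stand-in 3CR LERG powers
logp_ri = rng.normal(23.0, 0.8, size=282)   # stand-in RI LERG powers

stat, pval = ks_2samp(logp_3cr, logp_ri)
print(f"D = {stat:.3f}, p = {pval:.3g}")
\end{verbatim}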
Overall, the results from both studies are hence consistent with the idea that galaxy mergers and interactions are generally less important for triggering the nuclear activity in LERGs than in HERGs. In addition, there is little evidence to support the idea that the importance of this triggering mechanism is dependent on either [OIII]$\lambda$5007 emission-line luminosity or 1.4 GHz radio power for radiatively-inefficient radio AGN.
\subsubsection{The importance of mergers for triggering quasars}
\label{subsubsec:quasar_comp}
Models of galaxy mergers and interactions suggest that they offer an effective means for triggering and fuelling quasar activity \citep[e.g.][]{sanders88,dm05,hop08}. However, previous searches for morphological disturbance in the hosts of bright AGN selected in different wavebands have yielded mixed results, both at low to intermediate \citep[e.g.][]{dun03,floyd04,benn08,veil09,cis11,tre12,vill14,hong15,vill17} and higher redshifts \citep[e.g.][]{koc12,koc15,chi15,glik15,mech16,don18,mar19,shah20}.
The high merger rate and the significant enhancement relative to the matched controls measured for our Type 2 quasar hosts (64 $^{+8}_{-10}$ per cent and 3.7\,$\sigma$, respectively) support the idea that mergers and interactions provide the dominant triggering mechanism for luminous AGN. While other studies of the morphologies of Type 2 quasar objects have found both low and high disturbance rates \citep[e.g.][]{greene09,bess12,wyl16,urb19,zhao19}, there is strong evidence to suggest that the surface brightness depth of the observations is particularly important in this context. This issue will be discussed in a subsequent paper that will focus on the importance of mergers for triggering quasar-like nuclear activity (Pierce et al. in prep.), in which the results obtained from further deep imaging observations of Type 2 quasars will also be presented.
\subsection{Host types}
\label{subsec:host_type_disc}
Evidence from many past observational studies suggests that powerful radio AGN ($\rm L_{1.4GHz} \gtrsim 10^{25}$ W\,Hz$^{-1}$) are predominantly associated with massive elliptical galaxies \citep[e.g.][]{mms64,dun03,best05b}. A minority of the objects in the high-flux-density selected 3CR and 2Jy samples do however exhibit disk-like morphologies upon cursory visual inspection, with these typically being found towards the lower end of the radio power range covered \citep{tad16}. Some of these objects, in fact, have intermediate radio powers, which supports previous suggestions that late-type hosts could be more common in this range \citep[e.g.][]{sad14}. Furthermore, the results presented in Paper I showed that HERGs in the intermediate radio power range have a mixture of early-type and late-type morphologies. In combination, these results could suggest that the hosts of radiatively-efficient AGN move towards the predominantly late-type morphologies of Seyfert galaxies \citep[e.g.][]{adams77} at lower radio powers.
The general trends observed for the morphological type proportions in the current analysis provide strong support for this picture. Pearson correlation tests revealed evidence for strong positive correlations with radio power for the proportions of active galaxies classified as early-type in both the full active galaxy sample and the HERG and Type 2 quasar subset, which are coupled with strong decreases in the proportions classified as late-type with increasing radio power -- each of these trends is clearly demonstrated in Figure~\ref{fig:q3_host_types_vs_RP}. While similar trends are also seen with [OIII]$\lambda$5007 emission-line luminosity, these are found to be of lower significance, thus indicating that the host types are more strongly linked with AGN radio power. These results therefore suggest that there is a gradual transition in the dominant host types of radio AGN from early-type galaxies at high radio powers to late-type galaxies at lower radio powers, at least for radiatively-efficient objects. They are also consistent with the idea that secular triggering mechanisms related to galaxy disks \citep[e.g.][]{hq10,hb14} become increasingly important towards lower radio powers, as suggested in Paper I, or lower total AGN powers \citep[as in e.g.][]{tre12}.
Looking at the results for the current Type 2 quasar sample, it is found that the majority of host galaxies are classed as early-type (52 $^{+9}_{-10}$ per cent elliptical, 4 $^{+8}_{-1}$ per cent lenticular) and only 2 out of 25 (8 $^{+9}_{-3}$ per cent) and 3 out of 25 (12 $^{+10}_{-4}$ per cent) are classed as late-type or ``merger", respectively; the remaining 24 $^{+10}_{-6}$ per cent have uncertain host types. A preference for early-type hosts has also been found in other imaging studies of low to intermediate redshift Type 2 quasars, from both visual inspection (\citeauthor{urb19} \citeyear{urb19}; but see \citeauthor{zhao19} \citeyear{zhao19}) and detailed light profile fitting \citep[][]{greene09,wyl16}. The preference for early-type hosts is enhanced when the radio-loud quasar-like AGN in the 3CR sample (with $\rm L_{[OIII]} \geq 10^{35}$ W) are considered together with the Type 2 quasars (73 $^{+6}_{-8}$ per cent), in agreement with previous results that suggest both radio-loud and radio-quiet quasars are predominantly hosted by elliptical galaxies \citep{dun03}. The host type classifications determined using the interface therefore appear to show good general agreement with those from these previous studies, and lend favour to the idea that powerful quasar-like activity in the local universe is largely associated with massive, early-type galaxies.
\section{Summary and conclusions}
\label{sec:summary}
Investigating the mechanisms that trigger AGN is key for correctly implementing their associated feedback processes in current models of galaxy evolution. The jets of radio AGN could be particularly important in this context, although little investigation of the dominant triggering and fuelling mechanisms for the lower radio power sources that comprise the bulk of the local population has been performed.
The morphologies of the radio-intermediate HERGs studied in the first paper in this series \citep[][]{pierce19} suggested that the importance of merger-based AGN triggering was strongly dependent on the radio power associated with the nuclear activity. However, there was some evidence to suggest that the optical emission-line luminosity may also play a role. Using an online classification interface, this paper has expanded the morphological analysis to a much larger sample of active galaxies that encompasses a broad range in both 1.4 GHz radio power and [OIII]$\lambda$5007 emission-line luminosity, allowing the dependence of AGN triggering by galaxy mergers and interactions on these properties to be investigated in more detail. The dependence of host galaxy type on the AGN radio power has also been assessed. This analysis has also been performed for large samples of control galaxies matched to the active galaxies in terms of both stellar mass and redshift, classified blindly alongside the active galaxies in a randomised manner. The main results are as follows.
\begin{itemize}
\item The active galaxies are found to be more frequently disturbed than the matched control galaxies across the full range of stellar masses and redshifts covered by the samples. The most significant excesses are found for the 3CR (4.3\,$\sigma$) and Type 2 quasar samples (3.7\,$\sigma$). In the former case, this is largely driven by the HERGs in the sample, which show a 4.7\,$\sigma$ excess relative to their matched controls. The 3CR LERGs, by comparison, only show a 1.2\,$\sigma$ excess in their disturbance fraction.
\item There is no strong evidence that the rates of disturbance in the active galaxies are correlated with 1.4 GHz radio power when the rates measured for their matched controls are accounted for. In contrast, we find clear evidence that the enhancement in the disturbance rate for the HERGs and Type 2 quasar hosts relative to that of the matched controls ($f_{\rm AGN}/f_{\rm cont}$) increases strongly with [OIII]$\lambda$5007 luminosity: $r=0.972$, $p=0.028$, from a Pearson correlation test. A significant correlation is not found when the 3CR LERGs are included, suggesting that this relation applies only to radiatively-efficient AGN.
\item The disturbed radio galaxies show a preference for post-coalescence interaction signatures relative to pre-coalescence signatures, which may suggest that these objects are more likely to be triggered in the later stages of galaxy mergers. The Type 2 quasar hosts show the opposite preference, indicating that they are more likely to be triggered in the early stages.
\item The AGN in all samples show a preference for early-type rather than late-type host galaxies, with the exception of the RI-HERG low sample (from Paper I). The latter sample exhibits a preference for late-type hosts and a significant deficit ($>3$\,$\sigma$) of early-types relative to its matched controls. The measured morphological type proportions also suggest that the fraction of early-type hosts decreases strongly with radio power, while the fraction of late-type hosts increases. This supports the idea that the dominant host types of radiatively-efficient radio AGN change from early-type galaxies at high radio powers to late-type galaxies at lower radio powers, as suggested in Paper I. This could also suggest that triggering via secular processes in galaxy disks holds more importance for the latter objects.
\end{itemize}
Overall, the measured rates of disturbance imply that the importance of galaxy mergers and interactions for triggering radiatively-efficient AGN (HERGs/Type 2 quasars) is strongly dependent on their optical emission-line luminosities (and hence bolometric luminosities) but not on their radio powers, once the disturbance rate in the underlying galaxy population is accounted for. Moreover, there is particularly strong evidence to suggest that galaxy mergers and interactions provide the dominant triggering mechanism for quasar-like AGN at low-to-intermediate redshifts, regardless of radio power. In contrast, these processes appear to be of much lower importance for triggering radiatively-inefficient radio AGN, since the majority of 3CR LERGs are associated with undisturbed elliptical galaxies.
\section*{Acknowledgements}
The authors thank Mischa Schirmer for advice and assistance concerning image processing with \texttt{THELI}. YG and CO are supported by the National Sciences and Engineering Research Council of Canada (NSERC). CRA acknowledges financial support from the Spanish Ministry of Science, Innovation and Universities (MCIU) under grant with reference RYC-2014-15779, from the European Union's Horizon 2020 research and innovation programme under Marie Sk\l odowska-Curie grant agreement No 860744 (BiD4BESt), from the State Research Agency (AEI-MCINN) of the Spanish MCIU under grants ``Feeding and feedback in active galaxies" with reference PID2019-106027GB-C4 and ``Quantifying the impact of quasar feedback on galaxy evolution (QSOFEED)" with reference EUR2020-112266. CRA also acknowledges support from the Consejería de Econom\'{i}a, Conocimiento y Empleo del Gobierno de Canarias and the European Regional Development Fund (ERDF) under grant with reference ProID2020010105 and from IAC project P/301404, financed by the Ministry of Science and Innovation, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community. PSB acknowledges financial support from the State Research Agency (AEI-MCINN) and from the Spanish MCIU under grant ``Feeding and feedback in active galaxies" with reference PID2019-106027GB-C42.
\section*{Data Availability}
The postage stamp images used for the morphological classifications are available online as supplementary material, along with the full morphological classification results obtained for the active galaxies using the online interface. Additional data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section*{Acknowledgment}
We acknowledge Yan Sun, Guowei Li, Jinfeng Jia, Vidya Madhavan and Eva Andrei for valuable discussions and comments on the manuscript.
\section{Motivation and canonical axion mass}
The QCD axion is one of the most motivated scenarios beyond the Standard Model. This simple extension not only explains in an elegant way the absence of CP violation in the strong interactions~\cite{Peccei:1977hh,Peccei1977} but could also account for the Dark matter abundance~\cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. Furthermore, the QCD axion is a highly predictive scenario since a lot of the properties of this pseudoscalar stem from its pseudo-Goldstone boson nature: both the axion mass
and the couplings to ordinary matter scale as $1/f_a$, where $f_a$ is the
axion decay constant, denoting the scale at which the Peccei-Quinn (PQ) symmetry $U(1)_{\rm PQ}$ is spontaneously broken.
At the heart of the axion solution to the strong CP problem lies the fact that the QCD anomaly is the only source of explicit PQ breaking. As a byproduct, within the so-called canonical axions the $m_a$-$f_a$ relation is fixed by QCD~\cite{Weinberg:1977ma,Wilczek1978},
\begin{align}
m_{a}^{\rm QCD} \simeq \frac{f_{\pi} m_{\pi}}{f_{a}} \frac{\sqrt{m_{u} m_{d}}}{m_{u}+m_{d}}\,,
\label{Eq: axion mass}
\end{align}
where $m_\pi, f_\pi, m_u$ and $m_d$ denote respectively the pion mass, its decay constant, and the up and down quark masses. The strength of the axion couplings to Standard Model (SM) fields is instead model-dependent: it varies with the matter content of the UV complete axion model.
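As a quick numerical illustration of Eq.~(\ref{Eq: axion mass}), the following minimal sketch evaluates it for a fiducial decay constant (the pion and light-quark values below are standard reference numbers inserted purely for illustration):
\begin{verbatim}
import math

m_pi, f_pi = 135.0, 92.0   # pion mass and decay constant [MeV]
m_u, m_d   = 2.2, 4.7      # light quark masses [MeV] (illustrative)
f_a        = 1.0e15        # axion decay constant: 10^12 GeV, in MeV

m_a = (f_pi * m_pi / f_a) * math.sqrt(m_u * m_d) / (m_u + m_d)
print(m_a * 1e6, "eV")     # ~5.8e-6 eV, i.e. a few micro-eV
\end{verbatim}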
In recent years there have been many attempts to enlarge the canonical QCD
axion window, by considering UV completions of the axion effective Lagrangian
which depart from the minimal invisible axion constructions \cite{Zhitnitsky:1980tq,Dine:1981rt,Kim:1979if,Shifman:1979if}. Most approaches actually focused on the possibility of modifying
the Wilson coefficient of specific axion-SM effective operators (see Ref.~\cite{Agrawal:2021dbo} and Refs. therein). A most intriguing possibility consists in departing from the $m_a$-$f_a$ relation in Eq.~(\ref{Eq: axion mass}). Indeed, axions heavier than the canonical QCD axion have long been explored and have been revived in recent years (see e.g. Refs.~\cite{rubakov:1997vp,Gaillard:2018xgk,Csaki:2019vte}).
In contrast, solutions to the strong CP problem with lighter axions were uncharted territory until very recently.
The goal of this work is to study in detail and determine the phenomenological implications of a freshly proposed dynamical --and technically natural-- scenario, which solves the strong CP problem with an axion much lighter than the canonical QCD one~\cite{Hook:2018jle,ZNCPpaper}. Next, we show that dark matter can be accounted for by this extremely light axion, which features a novel production mechanism in the early Universe: the \emph{trapped misalignment mechanism}~\cite{DiLuzio:2021gos}.
\section{The non-linearly realized $Z_{\mathcal N}$ axion}
Let us assume that Nature is endowed with a $Z_{\mathcal N}$ symmetry under which ${\mathcal N}$ copies of the SM are interchanged and which is non-linearly realized by the axion field~\cite{Hook:2018jle,ZNCPpaper},
\begin{align}
Z_{\mathcal N}:\quad &\text{SM}_{k} \longrightarrow \text{SM}_{k+1\,(\text{mod} \,{\mathcal N})} \label{mirror-charges}\\
& a \longrightarrow a + \frac{2\pi k}{{\mathcal N}} f_a\,.
\label{axion-detuned-charge}
\end{align}
Given this symmetry, ${\mathcal N}$ mirror and degenerate worlds linked by the axion field would coexist with the same coupling strengths as in the SM, with the exception of the
effective $\theta_k$-parameters: for each copy $k$ the effective $\theta$-value is shifted by $2\pi/{\mathcal N}$ with respect to that in the
neighboring $k-1$ sector. Thus the total potential for the axion is given by the sum of all the shifted contributions,
\begin{align}
V_{\mathcal N}(\theta_a)=-m_{\pi}^{2} f_{\pi}^{2}\sum_{k=0}^{{\mathcal N}-1} \sqrt{1-\frac{4 m_{u} m_{d}}{\left(m_{u}+m_{d}\right)^{2}} \sin ^{2}\left(\frac{\theta_a}{2}+\frac{\pi k}{{\mathcal N}}\right)}\, ,
\label{Eq:Vsmilga ZN}
\end{align}
where $\theta_a \equiv a / f_a$ is the angular axion field.
Strikingly, the resulting axion is exponentially lighter than the canonical QCD axion in Eq.~(\ref{Eq: axion mass}), because the non-perturbative contributions to its potential from the ${\mathcal N}$ degenerate QCD groups conspire by symmetry to suppress each other~\cite{Hook:2018jle,ZNCPpaper}.
Indeed, it has been shown that this kind of $Z_{\mathcal N}$-symmetric potential has interesting mathematical properties\footnote{See Ref.~\cite{Das:2020arz} for a generalization of the mechanism to non-abelian discrete symmetries.} in the large ${\mathcal N}$ limit, and in this case the total axion potential is given in all generality by a compact analytical formula~\cite{ZNCPpaper},
\begin{align}
\label{Eq: fourier potential large N hyper-compact}
V_{\mathcal{N}}\left(\theta_a\right)
\simeq - \frac{m_a^2 f_a^2}{{\mathcal N}^2} \,\cos ({\mathcal N}\theta_a)\,, \qquad
m_a^2 \, f^2_a \simeq \frac{m_{\pi}^{2} f_{\pi}^{2}}{\sqrt{\pi}} \,\sqrt{\frac{1-z}{1+z}} \,{\mathcal N}^{3/2} \,z^{\mathcal N}\,,
\end{align}
where the exponential suppression of the $Z_{\mathcal N}$ axion mass squared ($\propto 2^{-{\mathcal N}}$) in comparison to the canonical case $\left(m_a^{\rm QCD}\right)^2$ in Eq.~(\ref{Eq: axion mass}) is controlled by the ratio of light quark masses $z\equiv m_u/m_d\simeq 1/2$.
The solution to the strong CP problem in this $Z_{\mathcal N}$ scenario requires ${\mathcal N}$ to be odd.
Overall, the $\sim 10$ orders of magnitude of tuning required by the SM strong CP problem are traded for a $1/{\mathcal N}$ adjustment, where ${\mathcal N}$ can be as low as ${\mathcal N}=3$.
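To make the suppression quantitative, one can compare Eq.~(\ref{Eq: fourier potential large N hyper-compact}) with the canonical mass squared; a minimal numerical sketch (assuming $z=0.48$ purely for illustration) is:
\begin{verbatim}
import math

z = 0.48  # m_u/m_d, illustrative value

def suppression(N):
    # (m_a / m_a^QCD)^2 at fixed f_a; uses the canonical relation
    # (m_a^QCD f_a)^2 = m_pi^2 f_pi^2 z/(1+z)^2
    large_N   = N**1.5 * z**N / math.sqrt(math.pi) \
                * math.sqrt((1.0 - z) / (1.0 + z))
    canonical = z / (1.0 + z)**2
    return large_N / canonical

for N in (3, 11, 21, 47):
    print(N, suppression(N))  # falls off roughly like 2^(-N)
\end{verbatim}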
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figs/AxionPhotonZN_NoDM_proj.pdf}
\includegraphics[width=0.45\textwidth]{figs/FiniteDensity-squared.pdf}
\caption{\small Limits on the axion-photon coupling (left) and on the inverse of the axion decay constant (right) as a function of the axion mass. The {\color{OrangeC2} \bf orange} oblique lines represent the theoretical prediction for the $Z_{\mathcal N}$ axion case (assuming vanishing ``bare'' coupling to photons)
for different ${\mathcal N}$.
See Refs.~\cite{ciaran_o_hare_2020_3932430,ZNCPpaper} for details.}
\label{fig:Photon coupling no DM}
\end{figure}
The crucial properties of such a light axion are generic and do not depend on the details of the putative UV completion. An important byproduct of this construction is an enhancement of all axion interactions which is {\it universal}, that is, model-independent and equal for all axion couplings, at fixed $m_a$, see Fig.~(\ref{fig:Photon coupling no DM}). The detailed exploration of the $Z_{\mathcal N}$ paradigm and of the phenomenological constraints which do not require the axion to account for DM can be found in Ref.~\cite{ZNCPpaper}.
It is particularly enticing that experiments designed {\it a priori} to hunt only for ALPs may in fact be targeting solutions to the strong CP problem. For instance,
ALPS II is shown to be able to probe the $Z_{\mathcal N}$ scenario discussed here down to
${\mathcal N} \sim 25$ for a large enough axion-photon coupling, while IAXO and BabyIAXO may test the whole ${\mathcal N}$ landscape at even smaller values of that coupling, see Fig.~(\ref{fig:Photon coupling no DM}).
Furthermore, highly dense stellar bodies allow one to set even stronger bounds in wide regions of the parameter space. These exciting limits have an added value: they avoid model-dependent assumptions about the axion couplings to SM particles, because they rely exclusively on the anomalous axion-gluon interaction needed to solve the strong CP problem. A dense medium of ordinary matter is a background that breaks the $Z_{\mathcal N}$ symmetry. This hampers the symmetry-induced cancellations in the total axion potential:
the axion becomes heavier inside dense media {\it and} the minimum of the potential is located at $\theta_a=\pi$. The corresponding bounds from present solar data, together with projections for neutron stars, are shown in Fig.~(\ref{fig:Photon coupling no DM}) (right). Moreover, gravitational wave data from NS-NS and BH-NS mergers by LIGO/VIRGO and Advanced LIGO will allow this setup to be probed further \cite{Hook:2017psm,Zhang:2021mks}.
For the sake of illustration, we have also developed two examples of UV-completed models featuring this mechanism~\cite{ZNCPpaper}. Especially interesting is the $Z_{\mathcal N}$ KSVZ model, which is shown to enjoy an improved PQ quality behaviour in large regions of the parameter space, depicted in solid orange lines in Fig.~(\ref{fig:Photon coupling no DM}); see e.g. Refs.~\cite{Redi:2016esr,Gavela:2018paw} for alternative solutions to the PQ quality problem.
\section{$Z_{\mathcal N}$ axion dark matter}
\label{sec:axion_dark_matter}
The evolution of the $Z_{\mathcal N}$ axion field and its contribution to the DM relic abundance depart drastically from the standard case~\cite{DiLuzio:2021gos}.
The cosmological impact of hypothetical parallel ``mirror'' worlds has been studied at length in the literature (for a review, see e.g. Ref.~\cite{Berezhiani:2003xm}).
Crucially, the constraints on the number of effective relativistic species $N_{\text{eff}}$ imply that the mirror copies of the SM must be less populated\footnote{Mechanisms that source such world-asymmetric initial temperatures while preserving the $Z_{\mathcal N}$ symmetry may arise naturally in the cosmological evolution~\cite{Kolb:1985bf,Dvali:2019ewm}.} --cooler-- than the ordinary SM world.
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\textwidth]{figs/TrappedMis.png}
\caption{\small Comparison of the evolution of the axion field in the trapped misalignment case vs. the standard misalignment case.}
\label{fig:Trapped}
\end{figure}
As a consequence of this temperature asymmetry among the worlds, and similarly to the finite-density effects described above, the temperature dependence of the $Z_{{\mathcal N}}$ axion potential presents particular features that modify the production of DM axions via the misalignment mechanism. The scenario results in a novel type of misalignment with a large misalignment angle.
In particular, the relic density is enhanced because the axion field undergoes two stages of oscillations, separated by an unavoidable and drastic --non-adiabatic-- modification of the potential. The axion field is first \emph{trapped} in the wrong minimum (with $\theta=\pi$), which effectively delays the onset of the true oscillations and thus enhances the DM density.
We call this new production mechanism \emph{trapped misalignment}, see Fig.~(\ref{fig:Trapped}).
Furthermore, in some regions of the parameter space, trapped misalignment will automatically source the recently proposed kinetic misalignment mechanism~\cite{Co:2019jts}.
In the latter, a sizeable initial axion velocity is the source of the axion relic abundance as opposed to the conventionally assumed initial misalignment angle.
The early stage of oscillations in the $Z_{\mathcal N}$ axion framework naturally flows into kinetic misalignment.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{figs/AxionPhotonZN_proj}
\includegraphics[width=0.45\textwidth]{figs/AxionEDMcoupling_squared.pdf}
\caption{ Limits on the axion-photon coupling (left) and on the inverse of the axion decay constant (right) as a function of the axion mass, assuming the axion accounts for the entire DM relic density. See Refs.~\cite{ciaran_o_hare_2020_3932430,DiLuzio:2021gos}.}
\label{fig:axionEDM}
\end{figure}
\vspace*{-6pt}
The interplay of the different mechanisms together with the implications of the $Z_{\mathcal N}$ reduced-mass axion
for axion DM searches is studied in detail in \cite{DiLuzio:2021gos}, including the experimental
prospects to probe its coupling to photons,
nucleons, electrons and the nEDM operator. As an example, the bounds and projections for the $Z_{\mathcal N}$ axion, assuming it accounts for all the DM abundance, are depicted in Fig.~(\ref{fig:axionEDM}) for the coupling to photons and to gluons (i.e. $1/f_a$).
The {\color{purpleDMbands}\bf purple} band in Fig.~(\ref{fig:axionEDM}) encompasses the region where the prediction of the $Z_{\mathcal N}$ axion relic density within the different regimes of the trapped misalignment can account for the entire DM abundance.
As a wonderful byproduct of the lower-than-usual $f_a$ values
allowed in the $Z_{\mathcal N}$
axion paradigm to solve the strong CP problem,
all axion-SM couplings are equally enhanced for a given $m_a$. This increases the testability of the theory in current and future experiments, see Fig.~(\ref{fig:axionEDM}). It follows that the $Z_{\mathcal N}$ paradigm is --to our knowledge-- the only true axion theory that could explain a positive signal in CASPEr-Electric phase I and in a large region of the parameter space in phase II.
Moreover, the $Z_{\mathcal N}$ axion scenario includes --to our knowledge-- \emph{the first technically natural axion model of fuzzy DM that can also solve the strong CP problem}.
\vspace*{-10pt}
{\section*{Acknowledgments}
\vspace*{-9pt}
{ \footnotesize P.Q. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2121 \textit{Quantum Universe} -- 390833306 and from the EU H2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements 690575 (RISE InvisiblesPlus) and 674896 (ITN ELUSIVES) and 860881-HIDDeN, as well as from the Spanish Research Agency through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597.
}
}
\bibliographystyle{utphys.bst}
\let\OLDthebibliography\thebibliography
\renewcommand\thebibliography[1]{
\OLDthebibliography{#1}
\setlength{\parskip}{1pt}
\setlength{\itemsep}{3pt plus 0.3ex}
}
\section{Introduction}
\label{sect:intro}
Disk accretion onto an object possessing a material surface (i.e. not a black hole) is an important problem emerging in a variety of astrophysical settings. When the magnetic field of an accretor is weak enough \citep{Ghosh1978,Kon1991}, the accretion flow can extend all the way to its surface. In this case the transfer of incoming matter onto the central object (hereafter ``a star'') must proceed through the so-called {\it boundary layer} (hereafter BL) --- a narrow region between the disk and the accretor, in which the angular velocity of the accreting matter adjusts to the rotation of the central star. In order for that to happen, the gas arriving from the disk must lose its angular momentum inside the BL.
The magneto-rotational instability (MRI; \citealt{VEL59,BAL91}), traditionally invoked as the favored angular momentum transport mechanism in sufficiently ionized accretion disks, cannot enable this process in the BL: it does not operate in this region \citep{PES12}, since the angular velocity of the fluid there necessarily {\it increases} with distance. Instead, \citet{BR12} proposed that a {\it sonic instability}, driving the excitation of acoustic waves by the supersonic shear flow inside the BL, mediates the angular momentum transport in that region in a {\it non-local} fashion, very different from the local $\alpha$-models \citep{SS73} which have been previously invoked in the BL context \citep{POP93,BAL09,Hert2017}. This instability has been subsequently verified to robustly operate within the BL and drive angular momentum transport using hydrodynamic and magneto-hydrodynamic (MHD) simulations \citep{BRS12,BRS13a,BRS13b}.
Recently \citet{Coleman2021} (hereafter \citetalias{Coleman2021}) presented a new, extensive suite of two-dimensional (2D) hydrodynamic simulations of the BLs run for multiple values of the Mach number ${\ensuremath{\mathcal{M}}}\xspace=v_K(R_\star)/c_s$ --- the ratio of the Keplerian velocity $v_K(R_\star)$ at the inner edge of the disk (i.e. at the stellar radius $R_\star$) to the sound speed $c_s$ (which was assumed constant in that work). They carefully analyzed the different waves emerging in the inner disk as a result of the sonic instability operating in the BL, and discovered some new types of modes in the vicinity of the BL, e.g. the vortex-driven modes. That study focused predominantly on the {\it morphological} characteristics of the waves and their correspondence to the known analytical results.
In this work we will use the simulation suite presented in \citetalias{Coleman2021} to characterize the BL from a different angle, namely to explore the angular momentum and mass transport in its vicinity driven by the wave activity. A number of past studies explored the connection between the two transport processes in accretion disks. For example, \citet{BalPap1999} argued that the MRI-driven transport can be characterized as a local $\alpha$-viscosity \citep{SS73}, whereas the transport driven by disk self-gravity is global and cannot be represented using the $\alpha$-ansatz (see also \citealt{Larson1984,Gnedin1995}). A number of studies have also looked at the angular momentum transport by the global spiral waves in disks \citep{Larson1990,Spruit1987,RRR16,AR18}, in particular those driven by embedded planets \citep{GR01,R02}. In the BL context, \citet{BRS12,BRS13a} explored some global transport characteristics of the acoustic waves, whereas \citet{BRS13b} did the same in the MHD case\footnote{In their global, unstratified MHD simulations of the near-BL region \citet{BQ17} found accumulation of accreted material in a belt-like structure in the BL, spinning at sub-Keplerian velocity. To allow the accreted material to join the star this belt must dissipate somehow, but it is not yet clear how that happens. Nevertheless, even with the belt \citet{BQ17} still found substantial wave activity in the inner disk, which is what matters for us in this study.}. {Also, \citet{Dittmann2021} studied how the efficiency of transport varies as the spin of the accreting object changes.}
The aim of our present work is to extend the latter studies by looking at the various characteristics of the wave-driven transport in the vicinity of the BL --- angular momentum and mass fluxes --- in a systematic fashion across the range of ${\ensuremath{\mathcal{M}}}\xspace$ values. {A distinct feature of this work compared to previous studies} is that we examine the transport properties of individual modes emerging in our simulations and determine their variation across the different types of modes. Another goal is to explore the wave-driven evolution of the inner parts of the accretion disk adjacent to the BL.
This work is organized as follows. After briefly describing in \S\ref{sect:data} the simulation suite on which this study is based, we remind the reader the basics of the mass and angular momentum transport in accretion disks in \S\ref{sect:other_diag}. Our simulation results on the two kinds of transport are described in \S\ref{sect:transport_mass} and \ref{sect:transport_AM}, respectively. We examine the different contributions to the angular momentum budget in \S\ref{sect:transport_AM_terms} and describe the correlation between the angular momentum and mass fluxes found in our simulations in \S\ref{sect:Mdot-CS}. The wave-driven evolution of the inner disk is characterized in \S\ref{sect:disk_evol}. Finally, we discuss and summarize our results in \S\ref{sect:disc} and \ref{sect:sum}, respectively.
\section{Description of the numerical data set}
\label{sect:data}
Our present study is based on the data produced by a set of 2D simulations (39 runs in total) presented in \citetalias{Coleman2021}. These runs used \texttt{Athena++}\xspace \citep{athena} to solve hydrodynamic equations with the globally isothermal equation of state (EOS), i.e. $P=\Sigma c_s^2$ for a constant sound speed $c_s$, where $P$ is the vertically integrated pressure and $\Sigma$ is the surface density. Magnetic fields and disk self-gravity have been ignored, {and the accretor was initially non-spinning}.
These simulations are set in polar $r-\phi$ coordinates, with the azimuthal coordinate covering the full $2\pi$. In the radial direction the simulation domain starts below the stellar radius $R_\star$ and extends into the disk out to $4R_\star$. The radial grid is uniformly spaced in $\log r$ to increase resolution in the vicinity of the BL. Simulations were run for every integer value of ${\ensuremath{\mathcal{M}}}\xspace$ in the interval $5\le {\ensuremath{\mathcal{M}}}\xspace\le 15$. For three values, ${\ensuremath{\mathcal{M}}}\xspace=6,9,12$, we ran multiple simulations with different forms of the initial noise spectrum (triggering the sonic instability in the BL) and numerical resolution, to test their impact on the outcomes.
A distinctive feature of our simulations is their detailed, purpose-built, real-time analysis. They employed high-cadence sampling of the outputs, allowing a highly informative analysis to be performed. In particular, we ran on-the-fly fast Fourier transforms (FFTs) of various fluid variables, giving us new, previously unobtainable diagnostic capabilities. This allowed us to detect and characterize a number of different modes present in the system, with their distinct azimuthal wavenumbers $m$ and pattern speeds $\Omega_P$, at every moment of time. The ability to automatically detect and analyze the different modes present in the system is an important improvement of the study presented in \citetalias{Coleman2021} compared to existing works.
In the following, when presenting our results, we will be using units in which $R_\star=1$ and $\Sigma(R_\star)=1$ at $t=0$; in these units the initial profile of the surface density is $\Sigma=r^{-3/2}$. We also set the Keplerian velocity at the surface of the star $v_K(R_\star)$ to unity, which implies that $c_s=\mathcal{M}^{-1}$ and $GM_\star=1$. This choice makes the Keplerian period at $R_\star$ equal to $\tau_{\star}=2\pi$, and we will often express time in the form $t/2\pi$, or in units of $\tau_{\star}$.
\subsection{Main findings of Paper I}
\label{sect:findings}
We now recount the main conclusions of the wave morphology study presented in \citetalias{Coleman2021}, which will allow us to better interpret the results of the current work.
Our suite of simulations reveals a complicated pattern of wave activity in the vicinity of the BL, with a number of different modes operating in the inner disk. In addition to the upper and lower acoustic modes previously described in \citet{BRS13a}, we have also discovered a new type of modes, so called {\it vortex-driven} modes. They appear as global spiral arms extending from the vicinity of the BL into the upper disk, and owe their existence to the localized vortex-like structures that form in the disk next to the BL and launch these density waves.
Another type of modes that we routinely detect in our runs are the so called {\it resonant} modes (owing their existence to a particular geometric resonance condition, \citealt{BRS12}), which are present only in the disk. Similar to the lower acoustic modes, the resonant modes are trapped between the stellar surface and the inner Lindblad resonance, whose location $r_\mathrm{ILR}$ in a Keplerian disk with $\Omega=\Omega_K=(GM_\star/r^3)^{1/2}$ is given by
\ba
r_\mathrm{ILR}=R_\star\left[\frac{\Omega_K(R_\star)}{\Omega_P}\frac{m-1}{m}\right]^{2/3}.
\label{eq:r_ILR}
\ea
In \citetalias{Coleman2021} we also found a general tendency of the azimuthal wavenumber $m$ of the most prominent modes to increase with ${\ensuremath{\mathcal{M}}}\xspace$. These results appear robust with respect to variations of the initial conditions and numerical resolution.
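For concreteness, equation (\ref{eq:r_ILR}) is easy to evaluate numerically. The following minimal sketch works in our code units ($R_\star=GM_\star=1$, so $\Omega_K(R_\star)=1$); the $(m,\Omega_P)$ pairs are the illustrative examples quoted later in \S\ref{sect:transport_AM_modes}:
\begin{verbatim}
# Sketch: inner Lindblad resonance location, eq. (r_ILR),
# in code units where R_star = G M_star = 1.
def r_ILR(m, Omega_P):
    return ((m - 1.0) / (m * Omega_P)) ** (2.0 / 3.0)

print(r_ILR(11, 0.46))   # ~1.6, M=7 lower mode trapped within r ~ 1.5
print(r_ILR(20, 0.30))   # ~2.2, M=9 lower modes (m = 19-21)
\end{verbatim}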
\section{Wave-driven mass and angular momentum transport: basics}
\label{sect:other_diag}
The main goal of the present work is to better understand how matter and angular momentum are transported through the inner disk and the BL, resulting in accretion onto the star. To that effect, we examine how the characteristics of the different modes that we see in our runs are linked to the global evolution of the star-disk system. We do this by exploring the behavior of certain variables derived from our simulations --- mass and angular momentum fluxes --- for different values of ${\ensuremath{\mathcal{M}}}\xspace$ and connecting them to the changes of the disk properties.
In all our simulations transport of mass is tracked by measuring the mass accretion rate,
\begin{align}
\dot{M}\left(r\right)\equiv-
r\int_0^{2\pi} v_r\left(r, \phi\right)\Sigma\left(r, \phi\right)\,{\rm d}\phi=-2\pi r\langle \Sigma v_r\rangle,
\label{eq:Mdot}
\end{align}
{(see Figure \ref{fig:Mdot-split})} where we introduced a shorthand notation
\ba
\langle f(r)\rangle=(2\pi)^{-1}\int_0^{2\pi}f(r,\phi) d\phi
\ea
for azimuthal averaging of any variable $f(r,\phi)$. Accretion rate is defined such that $\dot M>0$ for {\it inflow} of mass towards the star.
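On a discrete polar grid the average in equation (\ref{eq:Mdot}) reduces to a sum over azimuthal cells; a minimal Python sketch of this diagnostic (the array names and layout are our assumptions for illustration, not \texttt{Athena++}\xspace internals) is:
\begin{verbatim}
# Sketch: mass accretion rate Mdot(r), eq. (Mdot), from a 2D snapshot.
# Sigma, v_r: arrays of shape (N_r, N_phi); r: radii, shape (N_r,).
import numpy as np

def mdot(r, Sigma, v_r):
    flux = np.mean(Sigma * v_r, axis=1)   # <Sigma v_r>(r)
    return -2.0 * np.pi * r * flux        # Mdot > 0 for inflow
\end{verbatim}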
To characterize angular momentum transport we adopt the standard procedure of measuring the total angular momentum flux (AMF) $C_{\rm L}$, defined as
\begin{align}
C_{\rm L} = 2\pi r^2 \ave{\Sigma v_r v_\phi},
\label{eq:C_L}
\end{align}
where $v_r$ and $v_\phi$ are the radial and azimuthal velocity components. By introducing a reference azimuthal velocity of the fluid $v_{\phi,0}(r)$, to be specified later, and the azimuthal velocity perturbation $\delta v_\phi=v_\phi-v_{\phi,0}$, we can further decompose $C_{\rm L}$ into the {\it advective} angular momentum flux $C_{\rm A}$, and the {\it wave} angular momentum flux\footnote{Despite the typographic similarities $c_{\rm s}$ and $C_{\rm S}$ are used to denote two distinct quantities: sound speed and wave-induced angular momentum flux, respectively.} (or Reynolds stress) $C_{\rm S}$, defined as follows:
\begin{align}
C_{\rm A} &= 2\pi r^2 \ave{\Sigma v_r}v_{\phi,0}=-\dot M ~r v_{\phi,0},
\label{eq:C_A}\\
C_{\rm S} &= 2\pi r^2 \ave{\Sigma v_r \left(v_\phi-v_{\phi,0}\right)}=2\pi r^2 \ave{\Sigma v_r \delta v_\phi},
\label{eq:C_S}
\end{align}
so that $C_{\rm L}=C_{\rm A}+C_{\rm S}$. In our case the stress contribution $C_{\rm S}$ arises primarily because of the oscillatory wave-like motions and is driven by the wave modes rather than some turbulence, as would be the case for e.g. the MRI.
In this work, when analyzing simulation outputs, we will most often be choosing $v_{\phi,0}$ to be the mean azimuthal velocity of the fluid,
\begin{align}
v_{\phi,0}=\ave{v_\phi},
\label{eq:v_0}
\end{align}
following \citet{Fromang2006} and \citet{Flock2011}, among others. Note that alternative definitions of $C_{\rm S}$ using other choices of $v_{\phi,0}(r)$ are also possible, see e.g. \citet{JU16}, \citet{AR18}, as well as \S\ref{sect:fluxes_rel}.
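The definitions (\ref{eq:C_L})--(\ref{eq:C_S}), with the choice (\ref{eq:v_0}) for $v_{\phi,0}$, translate directly into azimuthal averages of snapshot data; a minimal sketch under the same assumed array layout as above is:
\begin{verbatim}
# Sketch: total, advective and wave angular momentum fluxes,
# eqs. (C_L)-(C_S), with v_phi,0 = <v_phi> as in eq. (v_0).
import numpy as np

def am_fluxes(r, Sigma, v_r, v_phi):
    v0  = np.mean(v_phi, axis=1, keepdims=True)        # <v_phi>(r)
    C_L = 2*np.pi * r**2 * np.mean(Sigma*v_r*v_phi, axis=1)
    C_A = 2*np.pi * r**2 * np.mean(Sigma*v_r, axis=1) * v0[:, 0]
    C_S = 2*np.pi * r**2 * np.mean(Sigma*v_r*(v_phi - v0), axis=1)
    return C_L, C_A, C_S          # by construction, C_L = C_A + C_S
\end{verbatim}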
Because of the oscillatory, intrinsically time-dependent nature of the acoustic mode-driven transport, we find $\dot M$ and the different angular momentum flux contributions to be highly time-dependent. For that reason, we resort to averaging these physical quantities over sufficiently long time intervals to provide a meaningful comparison between them. We describe the mathematical details of these averaging procedures in Appendix \ref{sect:averaging}.
\subsection{Relation between \texorpdfstring{$\dot M$}{Mdot} and \texorpdfstring{$C_{\rm S}$}{CS}}
\label{sect:fluxes_rel}
The mass and angular momentum fluxes through the disk are closely related to each other, which can be demonstrated quite generally starting from the hydrodynamic equations of motion and continuity.
In particular, it was shown in \citet{BRS13a} that by choosing $v_{\phi,0}(r)$ in the {\it mass-weighted} form \citep{BAL98,BalPap1999}
\begin{align}
v_{\phi,0}=v_\Sigma\equiv\frac{\ave{\Sigma v_\phi}}{\ave{\Sigma}},
\label{eq:v_0-weighted}
\end{align}
different from that given by the equation (\ref{eq:v_0}), and defining the wave AMF (\ref{eq:C_S}), or Reynolds stress, $C_{\rm S}$ accordingly (i.e. with $v_{\phi,0}=v_\Sigma$), the equation representing conservation of the angular momentum can be cast in the following form:
\begin{align}
\dot M\frac{\partial l}{\partial r} =\frac{\partial C_{\rm S} }{\partial r}+2\pi r^3\langle\Sigma\rangle \frac{\partial \Omega_0}{\partial t}.
\label{eq:AM_terms}
\end{align}
Here $\Omega_0=v_\Sigma/r$ and $l=rv_\Sigma=\Omega_0 r^2$ are the angular frequency and the specific angular momentum corresponding to the reference azimuthal velocity $v_\Sigma$. If instead of (\ref{eq:v_0-weighted}) we adopted $v_{\phi,0}$ in the form (\ref{eq:v_0}), then the last term in equation (\ref{eq:AM_terms}) would look different.
Equation (\ref{eq:AM_terms}) directly relates the transport of mass --- the term proportional to $\dot M$ in the left hand side --- to the transport of the angular momentum in the disk --- the divergence of the wave angular momentum flux $C_{\rm S}$ in the right-hand side. The second term in the right hand side, usually neglected in studies of accretion processes, arises in disks which evolve sufficiently rapidly for their mean angular frequency $\Omega_0$ to change in time. If we neglect this contribution to the angular momentum balance for a moment, then we find
\begin{align}
\dot M =\left(\frac{\partial l}{\partial r}\right)^{-1} \frac{\partial C_{\rm S} }{\partial r}=\frac{\partial C_{\rm S} }{\partial l}.
\label{eq:AM_terms1}
\end{align}
Note that this expression is fully general and applies for any choice of $v_{\phi,0}$; a particular form of $v_{\phi,0}$ affects the definition of both $C_{\rm S}$ and $l$.
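Equation (\ref{eq:AM_terms1}) suggests a simple consistency check of a measured wave-flux profile; the following sketch (assuming a near-Keplerian $l$ with $GM_\star=1$, and neglecting the time-dependent term of equation (\ref{eq:AM_terms})) estimates $\dot M$ by finite-differencing $C_{\rm S}$ with respect to $l$:
\begin{verbatim}
# Sketch: Mdot = dC_S/dl, eq. (AM_terms1), for a near-Keplerian disk.
import numpy as np

def mdot_from_CS(r, C_S):
    l_K = np.sqrt(r)              # specific angular momentum, G M = 1
    return np.gradient(C_S, l_K)  # Mdot(r)
\end{verbatim}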
In truly viscous disks with kinematic viscosity $\nu$ one should use viscous angular momentum flux $F_J=-2\pi\nu\Sigma r^3 d\Omega/dr$ instead of $C_{\rm S}$ \citep{Lynden1974}. In particular, in disks with radially constant $\dot M$ one finds $F_J=\dot M l$.
However, as we will demonstrate in \S\ref{sect:transport_AM_terms}, the second term in the right hand side of equation (\ref{eq:AM_terms}) often plays an important role near the BL. This contribution to the angular momentum balance has been previously considered in \citet{BRS13a}, and its relation to the evolution of the disk properties was explored in \citet{AR18}.
\section{Transport of mass}
\label{sect:transport_mass}
We start by describing our results for the mass flux $\dot M$. Note that $\dot M$ is closely related to $C_{\rm A}$, see equation (\ref{eq:C_A}), and that in the disk $v_{\phi,0}$ is typically very close to the Keplerian velocity (see \S\ref{sect:omega_evol}). One can then approximate $C_{\rm A}\approx -\dot M l_K$ outside the BL, where $l_K(r)=\sqrt{GM_\star r}$ is the specific angular momentum for a Keplerian disk. The profile of $C_{\rm A}(r)$ is plotted in Figures \ref{fig:AM1}-\ref{fig:AM2} (blue curve in the middle row of each individual subpanel) for simulations at varied ${\ensuremath{\mathcal{M}}}\xspace$ and at different moments of time. It is discussed in more detail in \S\ref{sect:transport_AM_types}, but for now we mention some key features pertinent to the $\dot M$ behavior.
First, in all plotted cases $C_{\rm A}(r)$ (and $\dot M$) exhibits a deep minimum (maximum) close to the BL. This is naturally explained by the enhanced wave activity near the BL, and results in important consequences for the surface density evolution in the inner disk discussed in \S\ref{sect:disk_evol}. Second, in most cases both $C_{\rm A}(r)$ and $\dot M$ substantially diminish in amplitude far from the accretor. We will relate this behavior to the angular momentum fluxes carried by the different types of waves in \S\ref{sect:transport_AM}.
\subsection{\texorpdfstring{$\dot M$}{Mdot} decomposition}
\label{sect:Mdot_decompose}
Our simulations reveal an interesting feature of the $\dot M$ behavior near the BL, that we discuss next. Let us write down $\Sigma(r,\phi)=\langle\Sigma\rangle +\delta\Sigma(r,\phi)$, where $\delta\Sigma(r,\phi)$ is the perturbation of the surface density relative to its azimuthally-averaged value $\langle\Sigma\rangle$. Then the definition (\ref{eq:Mdot}) can be cast as
\begin{align}
\dot{M}\left(r\right)=-2\pi r\langle \Sigma \rangle\langle v_r\rangle-2\pi r\langle \delta\Sigma v_r\rangle,
\label{eq:Mdot1}
\end{align}
illustrating the separation of $\dot M$ into two distinct contributions --- the one due to the advection of the mean surface density $\langle\Sigma\rangle$ (first term) and the contribution due to the correlation between the non-axisymmetric fluctuation of $\Sigma$ and $v_r$. We illustrate the behavior of these contributions in Figure \ref{fig:Mdot-split} for three simulations with different values of ${\ensuremath{\mathcal{M}}}\xspace$. One can see that in all cases both contributions to $\dot M$ are, in general, equally important: they are comparable in magnitude and tend to offset each other so that the total $\dot M$ is considerably lower in magnitude than each of these terms over large radial ranges (which is especially noticeable near the BL). For that reason, one cannot directly connect $\dot M(r)$ to the radial profiles of $\langle v_r\rangle$ and $\langle\Sigma\rangle$.
The importance of the $\dot M$ contribution due to the correlation between $\delta\Sigma$ and $v_r$ is one of the distinctive features of the global, acoustic wave-driven transport. In the classical picture of laminar viscous disk accretion \citep{SS73,Lynden1974} this contribution is identically zero, since the disk is axisymmetric and $\delta\Sigma=0$. In our case this contribution is non-zero in the vicinity of the BL because of the highly correlated $\delta\Sigma$ and $v_r$ for wave-like fluid motions and the nonlinear dissipation of these waves, which leads to damping of the angular momentum flux carried by the waves. Interestingly, $\langle \delta\Sigma v_r\rangle$ is often more regular than $\langle \Sigma \rangle\langle v_r\rangle$, since the latter sometimes exhibits small-scale spatial variability even upon long-term time averaging, see Figure \ref{fig:Mdot-split}b.
We are not aware of any studies mentioning the role of the mass transport term proportional to $\langle \delta\Sigma v_r\rangle$ in simulations featuring turbulence, self-consistently driven by a local hydrodynamic (e.g. the vertical shear instability) or MHD (e.g. MRI) mechanism. Examining this issue may be interesting for figuring out whether in our case the non-zero second term in equation (\ref{eq:Mdot1}) arises due to the intrinsically global nature of the wave-driven transport near the BL (see \S\ref{sect:global}), or it is also present when accretion is mediated by local processes.
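The two terms in equation (\ref{eq:Mdot1}) are straightforward to extract from a snapshot; a minimal sketch (same assumed array conventions as before) is:
\begin{verbatim}
# Sketch: splitting Mdot, eq. (Mdot1), into the mean-advection term
# and the (delta Sigma, v_r) correlation term of Fig. Mdot-split.
import numpy as np

def mdot_split(r, Sigma, v_r):
    Sig_m = np.mean(Sigma, axis=1)                 # <Sigma>(r)
    vr_m  = np.mean(v_r,   axis=1)                 # <v_r>(r)
    corr  = np.mean((Sigma - Sig_m[:, None]) * v_r, axis=1)
    term_mean = -2*np.pi * r * Sig_m * vr_m
    term_corr = -2*np.pi * r * corr
    return term_mean, term_corr    # their sum is the total Mdot
\end{verbatim}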
\begin{figure}
\includegraphics[width=\linewidth]{figs/multi_mdot_split.pdf}
\caption{Different contributions to the total mass accretion rate $\dot M$ (black curve) in equation (\ref{eq:Mdot1}), plotted as a function of $r$ for several simulations (see \citetalias{Coleman2021} for their naming notation). The advective term $-2\pi r\langle \Sigma \rangle\langle v_r\rangle$ is shown in blue, while the diffusive term $-2\pi r\langle \delta\Sigma v_r\rangle$ is the orange curve. Averaging over the interval $t/2\pi=100-200$ is performed.}
\label{fig:Mdot-split}
\end{figure}
\section{Angular momentum transport}
\label{sect:transport_AM}
We now turn to the details of the angular momentum transport in the vicinity of the BL. Figures \ref{fig:AM1},\ref{fig:AM2} illustrate the behavior of various representative variables during three different periods of time for the four representative simulations with ${\ensuremath{\mathcal{M}}}\xspace=7,9,12$ and $15$. The morphological description of the modes emerging in two of the runs --- for ${\ensuremath{\mathcal{M}}}\xspace=9$ (M09.FR.r.a) and ${\ensuremath{\mathcal{M}}}\xspace=15$ (M15.FR.r.a) --- has been provided in \citetalias{Coleman2021}, whereas the ${\ensuremath{\mathcal{M}}}\xspace=7$ (M07.FR.r.a) and ${\ensuremath{\mathcal{M}}}\xspace=12$ (M12.FR.mix.a) runs have not been discussed in detail in \citetalias{Coleman2021}.
For each ${\ensuremath{\mathcal{M}}}\xspace$ we select three time intervals of length $50$ or $100$ orbits and perform averages (as described in Appendix \ref{sect:averaging}) of various angular momentum fluxes --- total $C_{\rm L}$, advective $C_{\rm A}$, and wave $C_{\rm S}$, defined by the equations (\ref{eq:C_L})-(\ref{eq:C_S}) with $v_{\phi,0}=\ave{v_\phi}$ given by the equation (\ref{eq:v_0}). Their radial profiles are shown in the middle row of each panel. As mentioned earlier in \S\ref{sect:transport_mass}, the behavior of the advective flux $C_{\rm A}$ in the disk is closely related to the radial profile of the mass accretion rate $\dot M$.
We also select a representative moment of time within each interval, which illustrates a set of modes typical for that period, and show snapshots of various fluid variables in the top row of the panel corresponding to that time interval. The format is the same as in Figure 2 of \citetalias{Coleman2021}: namely, we show a polar map of the wave amplitude variable $r v_r\sqrt{\Sigma}$ (right), a Cartesian map of the same variable over a reduced radial range to highlight the near-BL details (middle), and a Cartesian map of the perturbation of the vortensity $\zeta=\omega/\Sigma$ (here $\omega=\nabla\times \mathbf{v}$ is the vorticity) relative to its initial value near the BL (left).
\begin{figure*}
\vspace*{-1em}
\includegraphics[width=\linewidth]{figs/am_subpanels_1.png}
\vspace*{-2em}
\caption{Figure illustrating angular momentum flux data for simulations with ${\ensuremath{\mathcal{M}}}\xspace=7$ (run M07.FR.r.a, left panels) and ${\ensuremath{\mathcal{M}}}\xspace=9$ (run M09.FR.r.a, right panels). Each panel (labeled with capital letters) consists of 5 sub-panels, labeled by their subscript and described in the beginning of \S\ref{sect:transport_AM}. Every panel is associated with a time interval over which the AMF data are computed ($t/2\pi$, shown above the panel) and a representative moment of time $T/2\pi$ within this interval, quoted in the text where each panel is discussed.
}
\label{fig:AM1}
\end{figure*}
Finally, the bottom row shows the decomposition of the wave angular momentum flux $C_{\rm S}$ into the contributions provided by the individual Fourier modes of the fluid perturbation, which are dominant through the plotted time interval (in a time integrated sense). To second order in perturbed variables the Fourier contribution $C_{{\rm S},m}$ to $C_{\rm S}$ from the $m$-th mode is \citep{BRS13a}
\begin{align}
C_{{\rm S},m}=2\pi r^2 \ave{\Sigma}\left(v_{r,m} v_{\phi,m}^\ast+v_{r,m}^\ast v_{\phi,m}\right),
\label{eq:C_S_m}
\end{align}
where the asterisks denote complex conjugates, so that $C_{\rm S}\approx \sum_{m=1}^\infty C_{{\rm S},m}$. The particular $C_{{\rm S},m}$ profiles shown in the figure are for the dominant modes chosen according to the procedure described in Appendix \ref{sect:averaging}. The gray curves, representing the sum of only the dominant harmonics indicated in each panel, sometimes show deviations from the full $C_{\rm S}$ (black), computed using equation (\ref{eq:C_S}). These differences arise because a number of other, less significant modes not shown in these plots also contribute to the full $C_{\rm S}$, and also because equation (\ref{eq:C_S_m}) is only second order accurate in fluid perturbations.
The decomposition of $C_{\rm S}$ into contributions from the different azimuthal harmonics allows us to explore the role played by the individual modes (which, once their $m$ is known, can be easily associated with the different types of waves) in transporting the angular momentum in the vicinity of the BL. {This is the key advantage of our present work compared to \citet{BRS13a} and \citet{Dittmann2021}, who did not perform such decomposition. Another improvement is the larger range of Mach numbers explored in our study, providing us with a better understanding of the dependence of transport properties on ${\ensuremath{\mathcal{M}}}\xspace$.}
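A minimal sketch of this per-mode decomposition (using our own normalization conventions, shown for illustration; the on-the-fly FFT machinery used in the actual runs is more elaborate) is:
\begin{verbatim}
# Sketch: per-mode wave flux C_{S,m}, eq. (C_S_m), via azimuthal FFTs.
# With coefficients defined as np.fft.fft(...)/N_phi, the sum over m
# approximates C_S to second order in the perturbations.
import numpy as np

def C_S_modes(r, Sigma, v_r, v_phi, m_max=25):
    N_phi = v_r.shape[1]
    vr_m  = np.fft.fft(v_r,   axis=1) / N_phi
    vp_m  = np.fft.fft(v_phi, axis=1) / N_phi
    pref  = 2*np.pi * r**2 * np.mean(Sigma, axis=1)
    # C_{S,m} = 2 pi r^2 <Sigma> (v_rm v_pm* + c.c.),  m = 1..m_max
    return {m: pref * 2*np.real(vr_m[:, m] * np.conj(vp_m[:, m]))
            for m in range(1, m_max + 1)}
\end{verbatim}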
\subsection{Different AMF contributions}
\label{sect:transport_AM_types}
Middle sub-panels of Figures \ref{fig:AM1}-\ref{fig:AM2} display the advective angular momentum flux $C_{\rm A}$, the wave flux $C_{\rm S}$, and their sum $C_{\rm L}$ on the same scale. A notable feature of the $C_{\rm A}$ curves is the small-scale oscillations that they often exhibit. These oscillations are likely caused by the intrinsic time-variability, since the inner disk is pervaded by the multiple wave modes. This variability ends up manifesting itself in the spatial domain, even though we perform time averaging over a substantial interval of time, see Appendix \ref{sect:averaging}.
One can see that $|C_{\rm A}|$ peaks not too far from the stellar surface and its maximum amplitude is always larger than that of $C_{\rm S}$ by a factor of several. This makes sense since, to zeroth order (certainly in steady state), $C_{\rm A}\propto\dot M \propto \partial C_{\rm S}/\partial r$, see equations (\ref{eq:C_A}) and (\ref{eq:AM_terms}). Since $C_{\rm S}$ typically varies over a narrow region near the BL, its radial derivative can reach high values. Naturally, the narrower the near-BL region over which $C_{\rm S}$ varies, the larger we should expect the difference in amplitudes between $C_{\rm A}$ and $C_{\rm S}$ to be. And indeed, in Figure \ref{fig:AM1} (for ${\ensuremath{\mathcal{M}}}\xspace=7$) we find $C_{\rm S}$ variation in the range $1< r\lesssim 1.5$ to result in the ratio of the maximum amplitudes of $C_{\rm A}$ and $C_{\rm S}$ being $\sim (2-2.5)$, while in Figure \ref{fig:AM2}Fd (for ${\ensuremath{\mathcal{M}}}\xspace=15$) $C_{\rm S}$ variation in the range $1< r\lesssim 1.2$ leads to a maximum ratio of fluxes $\sim 5-6$.
In most cases we find $C_{\rm A}$ to be negative, as expected for mass inflow with $\dot M>0$. Notable exceptions include ${\ensuremath{\mathcal{M}}}\xspace=9$ run at $t/2\pi=400-500$ (Figure \ref{fig:AM1}Fd) and ${\ensuremath{\mathcal{M}}}\xspace=15$ run at $t/2\pi=100-200$ (Figure \ref{fig:AM2}Dd), for which $C_{\rm A}>0$ (i.e. $\dot M<0$) at large distances. This can again be understood on the basis of the equations (\ref{eq:AM_terms}) or (\ref{eq:AM_terms1}), by noticing that in these cases $C_{\rm S}$ is {\it positive and decays} with $r$ far from the star as a result of dissipation {(it will become clear in \S\ref{sect:transport_AM_modes},\ref{sect:dif-modes} that this behavior is caused by the nonlinear damping of the vortex-driven modes)}. As a consequence, $\partial C_{\rm S}/\partial r<0$ there, leading to $\dot M<0$ (outflow) and positive $C_{\rm A}$.
\begin{figure*}
\vspace*{-1em}
\includegraphics[width=\linewidth]{figs/am_subpanels_2.png}
\vspace*{-2em}
\caption{Same as Fig. \ref{fig:AM1} but for ${\ensuremath{\mathcal{M}}}\xspace=12$ (run M12.FR.mix.lc.a, left panels) and ${\ensuremath{\mathcal{M}}}\xspace=15$ (run M15.FR.r.a, right panels). For ${\ensuremath{\mathcal{M}}}\xspace=12$: (A) $T/2\pi=150$, $t/2\pi=100-200$; (B) $T/2\pi=300$, $t/2\pi=250-350$; (C) $T/2\pi=575$, $t/2\pi=550-600$. For ${\ensuremath{\mathcal{M}}}\xspace=15$: (D) $T/2\pi=150$, $t/2\pi=100-200$; (E) $T/2\pi=350$, $t/2\pi=300-400$; (F) $T/2\pi=550$, $t/2\pi=500-600$.
}
\label{fig:AM2}
\end{figure*}
\subsection{Wave AMF \texorpdfstring{${\ensuremath{C_{\rm S}}}\xspace$}{CS} as a function of \texorpdfstring{${\ensuremath{\mathcal{M}}}\xspace$}{M} and \texorpdfstring{$m$}{m}}
\label{sect:transport_AM_modes}
We now describe the behavior of the wave AMF $C_{\rm S}$ for different values of ${\ensuremath{\mathcal{M}}}\xspace$ shown in Figures \ref{fig:AM1},\ref{fig:AM2} in conjunction with the morphological characteristics of the different waves present in the system.
\subsubsection{${\ensuremath{\mathcal{M}}}\xspace=7$ run}
\label{sect:M=7}
We start our description of the ${\ensuremath{C_{\rm S}}}\xspace$ behavior with the ${\ensuremath{\mathcal{M}}}\xspace=7$ run,
shown in Figures \ref{fig:AM1}A-C. At early times\footnote{{Note that by that time a significant depression in surface density has already formed in the inner disk, see Figure \ref{fig:multi_st}a.}} ($t/2\pi=125$) the inner disk is dominated by the $m=11$ lower mode trapped within $r\lesssim 1.5$ (panel (Ab)), which roughly corresponds to the $r_{\rm ILR}$ of this mode (its $\Omega_P\approx 0.46$). This mode gives rise to the negative AMF $C_{{\rm S},11}$, see the blue curve\footnote{The orange curve in that panel corresponding to the $m=10$ lower mode also provides a substantial contribution to ${\ensuremath{C_{\rm S}}}\xspace$ near the star. This mode is not seen in panels (Aa)-(Ac) because the data shown in panels (Ad), (Ae) are integrated over an extended time interval and the $m=10$ signal gets washed out.} in panel (Ae).
Outside the resonant cavity there is a set of rather incoherent vortex-driven spiral arms, excited by a number of chaotic vortices residing at $r\lesssim 1.5$, visible in panel (Aa). These weak arms drive low-amplitude but {\it positive} $C_{\rm S}$ outside $r=1.5$, see panel (Ad).
By $t/2\pi=225$ the amplitude of the wave activity decreases; the $m=11$ lower mode is replaced by the $m=10$ lower mode, which is prominent inside the star but rather weak in the disk, see panel (Bb). As a result, the magnitude of the (negative) segment of the $C_{\rm S}$ curve in the inner disk drops by about two orders of magnitude, and it becomes confined close to the star (panel (Bd)). Also by that time, vortensity evolution produces three prominent vortices at $r\approx 1.1$ (panel (Ba)), which excite strong vortex-driven spirals in the outer disk, increasing the amplitude of $C_{\rm S}>0$ in the outer disk (panel (Bd)).
In the final stages of this ${\ensuremath{\mathcal{M}}}\xspace=7$ simulation, at $t/2\pi=575$, a lower $m=16$ mode clearly shows in the disk at $r\lesssim 1.8$ (inside the star the $m=8$ mode dominates), although with rather modest amplitude (panel (Cb)). This, again, makes ${\ensuremath{C_{\rm S}}}\xspace$ negative close to the star, see the red curve for $C_{{\rm S},16}$ in panel (Ce). Also, the vortensity map in panel (Ca) reveals two strong, azimuthally elongated vortices centered at $r\approx 1.5$. These ``rolls'', as we termed them in \citetalias{Coleman2021}, drive a pair of azimuthally extended vortex-driven spiral arms with $m=2$ in the outer disk. Their $C_{{\rm S},2}>0$ makes ${\ensuremath{C_{\rm S}}}\xspace$ positive in the outer disk (see panel (Ce)), with the amplitude $\approx 2.5$ times higher than at $t/2\pi=225$.
Already this run alone demonstrates that depending on a type of the mode dominating in a particular region of the disk, one can get rather different behaviors of ${\ensuremath{C_{\rm S}}}\xspace$. Our subsequent description of the runs for other values of ${\ensuremath{\mathcal{M}}}\xspace$ will reinforce this conclusion.
\subsubsection{${\ensuremath{\mathcal{M}}}\xspace=9$ run}
\label{sect:M=9}
The ${\ensuremath{\mathcal{M}}}\xspace=9$ run in its early phases ($t/2\pi=150$) reveals a set of lower $m=19-21$ modes confined within $r_{\rm ILR}\approx 2.2$, corresponding to their $\Omega_P\approx 0.3$, see Figure \ref{fig:AM1}Db,Dc. These modes give rise to ${\ensuremath{C_{\rm S}}}\xspace<0$ near the star. A similar picture persists at $t/2\pi=275$, see panels (Eb)-(Ee). At the same time, in the outer disk ${\ensuremath{C_{\rm S}}}\xspace$ stays close to zero: unlike the late stages of the ${\ensuremath{\mathcal{M}}}\xspace=7$ run, coherent vortices do not emerge near the star (see panels (Da), (Ea)), so vortex-driven spirals are very weak.
An interesting feature of the ${\ensuremath{\mathcal{M}}}\xspace=9$ run, pointed out in \citetalias{Coleman2021}, is the emergence of an $m=2$, radially elongated mode with a very low pattern speed $\Omega_P\approx 0.15$. It is readily noticeable in panel (Eb); however, its signal is not present in panel (Ee), which means that its AMF contribution $C_{{\rm S},2}$ is negligible. This is hardly surprising since this mode has radial wavenumber $k_r=0$.
Finally, during the interval $t/2\pi=400-500$ the picture changes dramatically, see panels (Fb),(Fc). Near the star, for $r\lesssim 1.4$, the disk is dominated by the $m=6$ resonant mode (see \citetalias{Coleman2021} for details), whereas in the outer disk there are global $m=5$ spiral arms driven by a set of five vortex rolls located at $r\approx 1.3$, see panel (Fa). The resonant mode gives rise to a small (but negative near the star) AMF contribution $C_{{\rm S},6}$, whereas the vortex-driven modes produce $C_{{\rm S},5}>0$ in the outer disk, similar to the ${\ensuremath{\mathcal{M}}}\xspace=7$ run.
\subsubsection{${\ensuremath{\mathcal{M}}}\xspace=12$ run}
\label{sect:M=12}
The ${\ensuremath{\mathcal{M}}}\xspace=12$ run that we use in this work is different from the run employed in \citetalias{Coleman2021}. This is done in part to illustrate the fact, pointed out in \citetalias{Coleman2021}, that all ${\ensuremath{\mathcal{M}}}\xspace=12$ runs look very similar to each other, in a much more homogeneous way than for any other value of ${\ensuremath{\mathcal{M}}}\xspace$.
Indeed, comparing Figure \ref{fig:AM2}A-C with the Fig. 7 of \citetalias{Coleman2021}, one can see a very familiar pattern: for most of the simulation, including $t/2\pi=150$ and $300$ shown in Figure \ref{fig:AM2}A,B, the inner disk (out to $r\approx 1.6$) is dominated by the lower $m=16$ mode, accompanied by a set of narrow vortex-driven spirals clearly visible in the outer disk. As expected, this pattern results in $C_{{\rm S},16}$ driving ${\ensuremath{C_{\rm S}}}\xspace<0$ near the star, with ${\ensuremath{C_{\rm S}}}\xspace$ becoming positive further out, at $r\gtrsim 1.3-1.5$, see panels (Ae), (Be).
Closer to the end of the run, at $t/2\pi=550$, one can see a combination of a resonant and a lower $m=11$ mode driving ${\ensuremath{C_{\rm S}}}\xspace$ below zero near the star, with the weak vortex-driven modes carrying ${\ensuremath{C_{\rm S}}}\xspace>0$ outside $r\approx 1.3$.
\subsubsection{${\ensuremath{\mathcal{M}}}\xspace=15$ run}
\label{sect:M=15}
The ${\ensuremath{\mathcal{M}}}\xspace=15$ simulation is the highest ${\ensuremath{\mathcal{M}}}\xspace$ run carried out as a part of our simulation suite described in \citetalias{Coleman2021}. Unlike other simulations described so far, this run shows quite substantial upper acoustic mode activity inside the star \citep{BRS13a}, see Figure \ref{fig:AM2}Db,Eb,Fb, although the manifestations of this mode type in the disk are not as clear.
In this run we witness the appearance of 7-8 vortex rolls at $r\approx 1.15$ rather early on, already by $t/2\pi=150$, see panel (Da). These rolls drive a set of global, relatively narrow spiral arms propagating in the outer disk and making ${\ensuremath{C_{\rm S}}}\xspace$ positive beyond $r\approx 1.1$. For $1<r\lesssim 1.1$ we find ${\ensuremath{C_{\rm S}}}\xspace<0$, which could be due to the resonant modes starting to develop in this part of the disk.
These resonant modes do become rather prominent (and more radially extended) at later time, e.g. at $t/2\pi=350$ and $550$, see panels (Eb),(Fb). They keep ${\ensuremath{C_{\rm S}}}\xspace$ negative within $r\approx 1.2$, see panels (Ee),(Fe). At the same time, the vortex rolls appearing earlier in the simulation merge into two (at $t/2\pi=350$) and then one vortex (at $t/2\pi=550$), see panels (Ea), (Fa). They give rise to the vortex-driven mode activity in the outer disk, which maintains positive ${\ensuremath{C_{\rm S}}}\xspace$ there, albeit at a reduced amplitude compared to $t/2\pi=150$.
\subsection{Transport by the different types of modes}
\label{sect:dif-modes}
Examination of the bottom sub-panels in all panels of Figures \ref{fig:AM1}-\ref{fig:AM2} reveals that at any moment of time there is often only one or two modes in the disk that dominate the wave AMF $C_{\rm S}$. For example, in the ${\ensuremath{\mathcal{M}}}\xspace=7$ run at $t/2\pi=100-200$ we find two lower modes with $m=10,11$ to provide the dominant contribution to $C_{\rm S}$ for $r<1.4$ (Figure \ref{fig:AM1}Ae), while at $t/2\pi=550-600$ the vortex-driven $m=2$ mode dominates $C_{\rm S}$ for $r>1.4$ (Figure \ref{fig:AM1}Ce); in the ${\ensuremath{\mathcal{M}}}\xspace=12$ run the lower $m=16$ mode dominates $C_{\rm S}$ for $r<1.5$ at both $t/2\pi=100-200$ and $250-350$ (Figure \ref{fig:AM2}Ae,Be). As these dominant modes have a particular sign of $C_{\rm S}$, they affect disk evolution in a certain way, as described later in \S\ref{sect:disk_evol}.
A wave propagating in the disk carries angular momentum $\Delta J_{\rm w}\propto [\Omega_P-\Omega(r)]$ (see below), so that the {\it flux} of angular momentum associated with it is
\ba
C_{\rm S}\propto k_r\Delta J_{\rm w}\propto k_r[\Omega_P-\Omega(r)],
\label{eq:CSwave}
\ea
where $k_r$ is the radial wavenumber. In particular, an outgoing ($k_r>0$) wave has $C_{\rm S}>0$ in the region where $\Omega_P>\Omega(r)$ (e.g. a vortex-driven mode far from the star), whereas an incoming ($k_r<0$) wave has $C_{\rm S}>0$ in the region where $\Omega_P<\Omega(r)$ (e.g. a lower mode reflected off the ILR). This is what one also finds for the density waves excited by planets, outside and inside of the planetary semi-major axis. On the other hand, in the disk region where $\Omega_P<\Omega(r)$ an outgoing ($k_r>0$) wave has $C_{\rm S}<0$ (e.g. a lower mode propagating away from the star).
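These sign rules can be summarized compactly; the following toy sketch encodes equation (\ref{eq:CSwave}) for the mode types discussed below:
\begin{verbatim}
# Sketch: sign of the wave AMF from eq. (CSwave),
# sgn(C_S) = sgn(k_r) * sgn(Omega_P - Omega).
def sign_CS(k_r, Omega_P, Omega):
    return (1 if k_r > 0 else -1) * (1 if Omega_P > Omega else -1)

print(sign_CS(+1, 0.3, 1.0))  # outgoing lower mode, inner disk: -1
print(sign_CS(-1, 0.3, 1.0))  # reflected (incoming) lower mode: +1
print(sign_CS(+1, 0.9, 0.3))  # vortex-driven mode, outer disk:  +1
\end{verbatim}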
Nonlinear wave damping provides an intuitive way to understand why $\Delta J_{\rm w}\propto {\rm sgn}[\Omega_P-\Omega(r)]$. Let us consider a shock wave with a pattern speed $\Omega_P$. Whenever $\Omega_P>\Omega(r)$ (which is true far from the accretor) the shock overtakes the disk fluid, {\it accelerating} it in the azimuthal direction upon crossing the shock as a result of the shock jump conditions. As a result, disk fluid receives {\it positive} $\Delta J$ from the wave. On the contrary, for $\Omega_P<\Omega(r)$ (fulfilled close to the BL, in the inner disk) the disk fluid catches up with the shock front and {\it decelerates} upon crossing it; this means that the wave has transferred {\it negative} angular momentum to the disk, i.e. that $\Delta J_{\rm w}<0$.
\subsubsection{Transport by the lower acoustic modes}
\label{sect:transport-lower}
Lower modes are trapped inside the resonant cavity at $1<r<r_{\rm ILR}$, and in this part of the disk $\Omega_P<\Omega(r)$. The mode is excited in the BL \citep{BRS13a} and first propagates out (i.e. $k_r>0$) towards the ILR. At the ILR (where its $k_r\to 0$) it gets reflected and propagates back to the stellar surface with $k_r<0$ (see Figure \ref{fig:AM2}Ab for an illustration). {As a result,} the outgoing lower mode has {angular momentum flux} $C_{\rm S}^{\rm out}<0$, whereas the incoming (reflected) one carries $C_{\rm S}^{\rm in}>0$. The full angular momentum flux is the sum of the two,
\ba
C_{\rm S}(r)=C_{\rm S}^{\rm out}(r)+C_{\rm S}^{\rm in}(r).
\label{eq:full-lower-CS}
\ea
Because of the nonlinear wave damping, $|C_{\rm S}^{\rm out}(r)|$ gradually decreases in amplitude as $r$ increases, whereas $|C_{\rm S}^{\rm in}(r)|$ decreases in amplitude with decreasing $r$. As a result, for any $1<r<r_{\rm ILR}$ one finds that the negative $C_{\rm S}^{\rm out}(r)<C_{\rm S}^{\rm out}(r_{\rm ILR})$, whereas the positive $C_{\rm S}^{\rm in}(r)<C_{\rm S}^{\rm in}(r_{\rm ILR})$. Also, $C_{\rm S}^{\rm out}(r_{\rm ILR})=-C_{\rm S}^{\rm in}(r_{\rm ILR})$ since $C_{\rm S}(r_{\rm ILR})=0$. With this in mind, equation (\ref{eq:full-lower-CS}) can be cast as
\ba
C_{\rm S}(r)=\left[C_{\rm S}^{\rm out}(r)-C_{\rm S}^{\rm out}(r_{\rm ILR})\right]+\left[C_{\rm S}^{\rm in}(r)-C_{\rm S}^{\rm in}(r_{\rm ILR})\right]<0,
\label{eq:full-lower-CS1}
\ea
showing that ${\ensuremath{C_{\rm S}}}\xspace$ of the lower modes is always negative, in agreement with Figures \ref{fig:AM1}-\ref{fig:AM2}. As a result, these modes drive mass flow towards the accretor, $\dot M>0$, see \S\ref{sect:disk_evol}.
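The cancellation argument behind equations (\ref{eq:full-lower-CS})-(\ref{eq:full-lower-CS1}) can be verified with a toy damping model, sketched below. The exponential damping law and the scale $\tau$ are purely illustrative (they are not measured from our runs); the reflected branch is normalized so that $C_{\rm S}(r_{\rm ILR})=0$.
\begin{verbatim}
import numpy as np

# Toy model of the lower-mode AMF budget inside the resonant cavity.
r_star, r_ilr = 1.0, 1.6
r = np.linspace(r_star, r_ilr, 200)
tau = 0.5   # assumed nonlinear damping length (illustrative)

# Outgoing branch: C_S < 0, |C_S| decays as r increases away from the star.
cs_out = -np.exp(-(r - r_star) / tau)
# Reflected branch: set so that C_S(r_ILR) = 0; |C_S| decays going back in.
cs_in = -cs_out[-1] * np.exp(-(r_ilr - r) / tau)

cs_total = cs_out + cs_in
assert np.all(cs_total[:-1] < 0)   # net AMF is negative inside the cavity
assert abs(cs_total[-1]) < 1e-12   # and vanishes at the ILR by construction
\end{verbatim}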
\subsubsection{Transport by the resonant modes}
\label{sect:transport-resonant}
Resonant modes are very similar to the lower modes, since they are also trapped in the resonant cavity, where $\Omega_P<\Omega(r)$. They also propagate both towards and away from the ILR. As a result, by applying the same logic as in \S\ref{sect:transport-lower}, we conclude that resonant modes have $C_{\rm S}(r)<0$ and their dissipation gives rise to $\dot M>0$.
Typically, the magnitude of $C_{\rm S}$ for the resonant modes is lower than $|C_{\rm S}|$ of the lower modes. This is caused by the smaller azimuthal wavenumber $m$ of the resonant modes: a lower $m$ implies an azimuthally wider wave profile, slower nonlinear wave steepening and shocking, and weaker dissipation. As a result, for resonant modes $C_{\rm S}^{\rm out}(r)$ and $C_{\rm S}^{\rm in}(r)$ in equation (\ref{eq:full-lower-CS}) are closer in amplitude to each other (but still different in sign) than for the lower modes, resulting in their more substantial cancellation and lower $|C_{\rm S}|$.
\subsubsection{Transport by the vortex-driven modes}
\label{sect:transport-vortex}
Vortex-driven modes have $\Omega_P$ set by the angular frequency of their parent vortices, which is typically high since the vortices reside not too far from the BL. As a result, these modes propagate with $k_r>0$ and $\Omega_P>\Omega(r)$ in the outer disk, {\it outside} the corotation region of the mode. This means that these modes have $C_{\rm S}>0$ and (in steady state) result in $\dot M<0$, i.e. mass {\it outflow} in the outer disk.
As shown in \citetalias{Coleman2021}, vortex-driven waves typically fall into two classes: those produced by the compact, isolated vortices (e.g. see Figure \ref{fig:AM1}Ba,Bb,\ref{fig:AM2}Aa,Ab), and the ones driven by the more azimuthally elongated ``rolls'' (e.g. see Figure \ref{fig:AM1}Ca,Cb,Fa,Fb). The former have small azimuthal width and are superpositions of a number of high-$m$ perturbation modes (see \citetalias{Coleman2021}). As a result, their constituent modes are often not singled out individually in our plots, see e.g. the $C_{\rm S}>0$ segments in the outer disk in Figures \ref{fig:AM1}Be,\ref{fig:AM2}Ae, which are not dominated by a mode with a single value of $m$. On the contrary, the azimuthally extended rolls typically produce waves with a well-defined, low value of $m$. As a result, in such cases a particular $C_{{\rm S},m}$ dominates the full $C_{\rm S}$ in the outer disk, see Figures \ref{fig:AM1}Ce,Fe.
The way in which vortex-driven modes dissipate depends on their amplitude, as well as ${\ensuremath{\mathcal{M}}}\xspace$ and $m$. Higher $m$ accelerates wave evolution into a shock because of the smaller azimuthal scale. For that reason, the vortex-driven $m=2$ mode in the ${\ensuremath{\mathcal{M}}}\xspace=7$ run shown in Figure \ref{fig:AM1}Ce experiences little damping (its $C_{{\rm S},2}$ is essentially constant in the outer disk), whereas the $m=5$ vortex-driven mode in the ${\ensuremath{\mathcal{M}}}\xspace=9$ run shown in Figure \ref{fig:AM1}Fe starts damping appreciably outside $r\approx 2.5$. Note that these modes have similar ${\ensuremath{\mathcal{M}}}\xspace$ and amplitude, so it is the difference in their $m$ that is responsible for the difference in their decay. Also, everything else being equal, higher amplitude and higher ${\ensuremath{\mathcal{M}}}\xspace$ \citep{GR01,R02} of a wave promote its faster shocking and dissipation.
\subsubsection{Transport by the upper acoustic modes}
\label{sect:transport-upper}
Upper acoustic modes show up in the disk early on in low ${\ensuremath{\mathcal{M}}}\xspace$ runs. At high ${\ensuremath{\mathcal{M}}}\xspace$ they also operate inside the star over long periods of time, see \citetalias{Coleman2021}. These modes are evanescent near the BL, where their $\Omega_P<\Omega(r)$, but become propagating outside their Outer Lindblad Resonance, where their $\Omega_P>\Omega(r)$. In that regard, they are similar to the vortex-driven modes considered above, meaning that they promote mass outflow in the disk, $\dot M<0$. However, the amplitude of the $\dot M$ driven by these modes in the disk is typically too low to make them a significant agent in the disk evolution.
\subsubsection{Transport by other modes}
\label{sect:transport-other}
Another interesting mode type that we see in our simulations is the low $m=2$ mode in the ${\ensuremath{\mathcal{M}}}\xspace=9$ runs, clearly visible in Figure \ref{fig:AM1}Eb. This mode has a characteristic radially elongated perturbation pattern (with a clear shift of phase around $r\approx 1.5$) and $k_r=0$. This implies that the mode propagates purely azimuthally and thus does not transport angular momentum in the radial direction. This conclusion is supported by the absence of $C_{{\rm S},2}$ among the different $C_{{\rm S},m}$ appearing in Figure \ref{fig:AM1}Ee.
\subsection{Effective transport coefficients}
\label{sect:alphas}
\begin{figure*}
\includegraphics[width=\linewidth]{figs/multi_st.png}
\caption{Space-time diagrams of $(\Sigma(t)-\Sigma(0))/\Sigma(0)$ (top), $\alpha_{\rm stress}$ (middle), and $\alpha_{\rm acc}$ (bottom), defined by equations (\ref{eq:alpha_stress})-(\ref{eq:alpha_acc}) for the four simulations described in \S\ref{sect:transport_AM}. Note that the two $\alpha$ parameters often have different signs, even though their amplitudes vary in similar fashion (with `bursts' over the same time intervals). Large values of $\alpha$ typically lead to significant depletion of material from the disk. See text for details.}
\label{fig:multi_st}
\end{figure*}
A standard way of quantifying transport processes in an accretion disk is through the use of the dimensionless $\alpha$ parameter, which effectively normalizes the stress by the thermal pressure $\Sigma c_s^2$ \citep{SS73}. If the stress is caused by some form of local shear viscosity (e.g. due to MRI turbulence), then the same $\alpha$ also characterizes mass accretion through the disk. However, the transport of mass and energy associated with the acoustic modes is intrinsically non-local, as sonic waves are dissipated far from their launching site.
Nevertheless, we can still formally define two dimensionless quantities, based on the Reynolds stress associated with the wave angular momentum flux $C_{\rm S}$ and on the accretion mass flux $\dot M$, as follows:
\begin{align}
\alpha_{\rm stress}&\equiv \dfrac{C_{\rm S}}{2\pi r^2 \Sigma c_s^2} = \dfrac{C_{\rm S}}{2\pi r^2 \Sigma}
\left[\dfrac{\mathcal{M}}{v_K(R_\star)}\right]^2,
\label{eq:alpha_stress}\\
\alpha_{\rm acc}&\equiv \dfrac{\dot{M}}{2\pi \Sigma c_s^2} \Omega = \dfrac{\dot{M}}{2\pi \Sigma} \left[\dfrac{\mathcal{M}}{v_K(R_\star)}\right]^2\Omega.
\label{eq:alpha_acc}
\end{align}
Using equation (\ref{eq:AM_terms1}) we can also write $\alpha_{\rm acc}$ as
\begin{align}
\alpha_{\rm acc} &= \dfrac{\Omega}{2\pi \Sigma} \left[\dfrac{\mathcal{M}}{v_K(R_\star)}\right]^2
\left(\frac{\partial l}{\partial r}\right)^{-1}\left[\frac{\partial C_{\rm S} }{\partial r}+2\pi r^3\langle\Sigma\rangle \frac{\partial \Omega_0}{\partial t}\right]
\label{eq:alpha_acc_gen}\\
&= \dfrac{\Omega}{2\pi \Sigma} \left[\dfrac{\mathcal{M}}{v_K(R_\star)}\right]^2
\left(\frac{\partial l}{\partial r}\right)^{-1}\frac{\partial C_{\rm S} }{\partial r}~~~~~~\mbox{in steady state.}
\label{eq:alpha_acc_steady}
\end{align}
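For reference, the hedged sketch below shows one way these diagnostics may be computed from radial profiles extracted from a snapshot; it assumes code units with $v_K(R_\star)=1$, writes the isothermal sound speed as $c_s=v_K(R_\star)/{\ensuremath{\mathcal{M}}}\xspace$, and uses finite differences in place of the exact derivatives.
\begin{verbatim}
import numpy as np

def alpha_stress(cs, r, sigma, mach, v_k_star=1.0):
    # Wave AMF normalized by 2*pi*r^2*Sigma*c_s^2.
    return cs / (2.0 * np.pi * r**2 * sigma) * (mach / v_k_star)**2

def alpha_acc_steady(cs, r, sigma, omega, l, mach, v_k_star=1.0):
    # Steady-state alpha_acc from the divergence of C_S,
    # using Mdot = (dl/dr)^(-1) * dC_S/dr.
    dcs_dr = np.gradient(cs, r)
    dl_dr = np.gradient(l, r)
    return (omega / (2.0 * np.pi * sigma)
            * (mach / v_k_star)**2 * dcs_dr / dl_dr)
\end{verbatim}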
Any local transport mechanism acting as a source of shear viscosity would naturally have $\alpha_{\rm stress}=\alpha_{\rm acc}$, but in our case of global transport this is no longer true. We illustrate this difference in Figure~\ref{fig:multi_st}, where we show $\alpha_{\rm stress}$ (middle row) and $\alpha_{\rm acc}$ (bottom row) as a function of time and radial location in the vicinity of the stellar surface for several values of ${\ensuremath{\mathcal{M}}}\xspace$. These data have also been box-car smoothed in the time dimension only, with a width of $\delta t/2\pi=5$.
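The box-car smoothing itself is a plain moving average along the time axis, e.g. as in the sketch below; the snapshot cadence, and hence the window length in samples, is an assumption of the example.
\begin{verbatim}
import numpy as np

def boxcar_time_smooth(data, n_window):
    # Moving average along axis 0 (time) only; mode='same' zero-pads
    # the ends of the series, slightly damping the edge bins.
    kernel = np.ones(n_window) / n_window
    return np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), 0, data)

# E.g., for snapshots every 0.5 orbits (a hypothetical cadence), a
# smoothing width dt/2pi = 5 corresponds to n_window = 10.
\end{verbatim}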
First, one can see that in many cases $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ are different not only in magnitude but also in sign, in drastic contrast to the \citet{SS73} model. In particular, for all ${\ensuremath{\mathcal{M}}}\xspace$ we find $\alpha_{\rm acc}$ to be positive over extended intervals of time and space, while $\alpha_{\rm stress}$ is negative (clearly visible for $t/2\pi\lesssim 200$, $r<1.4$ in the ${\ensuremath{\mathcal{M}}}\xspace=7$ case, almost always in the ${\ensuremath{\mathcal{M}}}\xspace=12$ case, and so on). Comparison with Figures \ref{fig:AM1},\ref{fig:AM2} shows that this situation is typical whenever the parts of the disk adjacent to the star are dominated by the modes trapped in the resonant cavity near the star --- lower and resonant modes --- which carry $C_{\rm S}<0$ and have large $\partial{\ensuremath{C_{\rm S}}}\xspace/\partial r>0$. This naturally results in $\alpha_{\rm stress}<0$ and $\alpha_{\rm acc}>0$, see equations (\ref{eq:alpha_stress})-(\ref{eq:alpha_acc_steady}).
Further from the star, the disk is typically dominated by the upper and vortex-driven modes carrying low-amplitude $C_{\rm S}>0$, which results in $\alpha_{\rm stress}>0$. As for $\alpha_{\rm acc}$, we find it to be negative in some cases, e.g. at $t/2\pi\lesssim 100$ in ${\ensuremath{\mathcal{M}}}\xspace=12$ case, or at $t/2\pi\sim 100$ in ${\ensuremath{\mathcal{M}}}\xspace=15$ case, when strong vortex-driven modes are present for $r\gtrsim 1.5$. This situation is typical when $C_{\rm S}>0$ carried by the upper or vortex-driven modes decays sufficiently rapidly with $r$ (as a result of nonlinear dissipation), leading to large $\partial{\ensuremath{C_{\rm S}}}\xspace/\partial r<0$ and mass outflow, see (\ref{eq:alpha_acc_gen})-(\ref{eq:alpha_acc_steady}).
But more often we find $\alpha_{\rm acc}>0$ even far from the star, despite ${\ensuremath{C_{\rm S}}}\xspace$ being positive there (e.g. for $t/2\pi\lesssim 200$ in ${\ensuremath{\mathcal{M}}}\xspace=7$ case). This may seem surprising, since $\partial C_{\rm S}/\partial r$ is typically small but negative during such episodes, see Figure \ref{fig:AM2}Be, which would result in mass outflow according to the equation (\ref{eq:alpha_acc_steady}). However, this steady-state equation does not apply in such situations, as the (second) time-dependent term in the brackets in equation (\ref{eq:alpha_acc_gen}) is often more important than the (negative) divergence of ${\ensuremath{C_{\rm S}}}\xspace$, driving mass inflow and $\alpha_{\rm acc}>0$ even far from the star.
Second, both $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ are clearly highly variable both in time and space. This has important implications for the mediation of accretion flow by the modes, which will be discussed in \S \ref{sect:disc_transport}. The peak values of $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ are reached during the relatively short-lived bursts of activity. Outside of these episodes both $|\alpha_{\rm stress}|$ and $|\alpha_{\rm acc}|$ can maintain much lower levels $\lesssim 10^{-3}$ for extended periods of time.
Third, with our data we can
compare the values of $\alpha$ typical for simulations with different ${\ensuremath{\mathcal{M}}}\xspace$. One can see that $\alpha_{\rm stress}$ varies roughly from $-10^{-2}$ to $10^{-3}$, with negative (positive) values marking the local dominance of either lower or resonant (upper or vortex-driven) modes. At the same time, $\alpha_{\rm acc}$ only rarely attains negative values (at the level of $-10^{-3}$) implying mass outflow. Its peak amplitude can be as high as $\sim 0.1$ when it is positive. {Results shown in Figure~\ref{fig:multi_st} suggest that $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ do not show any obvious dependence on ${\ensuremath{\mathcal{M}}}\xspace$. These transport parameters certainly show little variation with ${\ensuremath{\mathcal{M}}}\xspace$ at higher $\dot M$, e.g. during the bursts of activity confined to relatively short intervals in the beginning of each simulation. This observation is important since it suggests that similar values of $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ may be expected also for significantly higher ${\ensuremath{\mathcal{M}}}\xspace$, typical for many astrophysical objects, see \citetalias{Coleman2021}.}
High amplitudes of $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ during the initial phases of our simulations lead to rapid re-adjustment of the disk structure in response to the vigorous deposition of the angular momentum carried by the waves, as described in \S \ref{sect:disk_evol}. The top row of Figure~\ref{fig:multi_st} illustrates this process by showing the space-time diagram of the local surface density change in the disk. It is clear that injection of angular momentum by the acoustic modes into the inner disk leads to the depletion of gas in the disk out to $r\sim (1.2-1.6)$, which gets accreted onto the central object. The depletion usually takes place early on, following the large bursts of $\alpha_{\rm acc}$.
There is also an inverse effect: a significant depletion of mass in the inner disk (Figure~\ref{fig:multi_st}) acts to suppress the mode amplitudes and to reduce the efficiency of the angular momentum and mass transport. This is quite clear in the ${\ensuremath{\mathcal{M}}}\xspace=7,9$ simulations, where the amplitudes of $\alpha_{\rm stress}$ and $\alpha_{\rm acc}$ sharply drop right after a substantial gap has been carved out in the inner disk during the initial burst of accretion. In our purely hydrodynamic simulations, there is no mechanism (e.g. MRI) to replenish this accreted material. As a result, we do not achieve a true steady state. However, the situation would be different when other transport mechanisms operating in the bulk of the disk are taken into account, see \S \ref{sect:disc_transport}.
\section{Detailed angular momentum balance}
\label{sect:transport_AM_terms}
Next we examine the role played by the different contributions in the angular momentum balance equation (\ref{eq:AM_terms}). In Figure \ref{fig:AM-terms} we plot the time averages (see Appendix \ref{sect:averaging} for details of the averaging procedure) of the three terms featured in that equation, computed for $v_{\phi,0}=v_\Sigma$ (see equation (\ref{eq:v_0-weighted})), for three different simulations (${\ensuremath{\mathcal{M}}}\xspace=6,9,12$) listed in the figure. One can see that in all runs the sum of the two terms in the right-hand side of the equation (\ref{eq:AM_terms}), represented by the red dotted curve, falls right on top of the black $\dot M\partial_r l$ curve, as expected. This demonstrates that the relation (\ref{eq:AM_terms}) indeed holds with high accuracy in our simulations.
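A hedged sketch of this consistency check is given below: it evaluates both sides of equation (\ref{eq:AM_terms}), written in the form used in equation (\ref{eq:alpha_acc_gen}), on $(t,r)$ arrays and returns the relative residual. The field names and array layout are our own conventions, not those of the simulation code.
\begin{verbatim}
import numpy as np

def am_balance_residual(mdot, l, cs, sigma_avg, omega0, r, t):
    # Compare Mdot*dl/dr with dC_S/dr + 2*pi*r^3*<Sigma>*dOmega_0/dt.
    # All fields are (n_t, n_r) arrays; r and t are 1-D coordinate axes.
    lhs = mdot * np.gradient(l, r, axis=1)
    rhs = (np.gradient(cs, r, axis=1)
           + 2.0 * np.pi * r**3 * sigma_avg * np.gradient(omega0, t, axis=0))
    return np.max(np.abs(lhs - rhs)) / np.max(np.abs(lhs))
\end{verbatim}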
\begin{figure}
\includegraphics[width=\linewidth]{figs/am_terms.pdf}
\caption{Different terms in the angular momentum balance equation (\ref{eq:AM_terms}) extracted from our simulations. Blue ($\partial_r{\ensuremath{C_{\rm S}}}\xspace$), orange (denoted as $\partial_t v_\phi$) and black ($\dot M\partial_rl$) curves correspond to the second, third and first terms in this equation, respectively (see the legend). Red dotted curve gives the sum of the blue and orange curves, which should equal the black one in theory (as it does). Appropriately averaged (see Appendix \ref{sect:averaging} for details) data are shown for 3 runs: (a) ${\ensuremath{\mathcal{M}}}\xspace=6$ (run M06.HR.r.a) for $t/2\pi=200-300$; (b) ${\ensuremath{\mathcal{M}}}\xspace=9$ (run M09.FR.r.a) for $t/2\pi=400-500$; (c) ${\ensuremath{\mathcal{M}}}\xspace=12$ (run M12.FR.mix.a) for $t/2\pi=500-600$.
}
\label{fig:AM-terms}
\end{figure}
In most cases $\dot M$ (as well as $\dot M\partial_r l$, which is plotted) is positive (meaning inflow) and largest near the star, for $r\lesssim 1.5$. This behavior can often be directly related to that of $C_{\rm S}$, e.g. see Figures \ref{fig:AM1}Be, \ref{fig:AM2}Ae,Aj, in which the positive radial derivative of $C_{\rm S}$ provides the dominant contribution to the right hand side of the equation (\ref{eq:AM_terms}) near the star. Large $\partial_r C_{\rm S}>0$ leading to large $\dot M$ arises in the inner disk because of the deposition of the AMF carried by the lower and resonant modes, which are trapped in the resonant cavity near the star. On the other hand, at larger radii one can also find low-amplitude $\dot M<0$, meaning outflow, see Figure \ref{fig:AM-terms}b at $r\gtrsim 2.5$ and \S\ref{sect:alphas}.
Very importantly, one can see that in all cases shown in Figure \ref{fig:AM-terms} the last term in equation (\ref{eq:AM_terms}) accounting for the time-dependence of the mean rotational profile in the disk (marked $\partial_t v_\phi$ for brevity) plays a very significant role. In many cases its contribution (orange curve) is similar in amplitude (but opposite in sign) to that of $\partial_r C_{\rm S}$, see e.g. Figure \ref{fig:AM-terms}a,b. At first, this may seem rather surprising since the variation of $\Omega$ during the corresponding time intervals is quite small (see Figure \ref{fig:multi_omega}). However, it must also be kept in mind that all terms in equation (\ref{eq:AM_terms}) are {\it second-order} in perturbed variables and are thus small in magnitude. For that reason, even a weak variation of $\Omega$ in time can provide a substantial contribution to the angular momentum balance in equation (\ref{eq:AM_terms}). We will discuss the implications of this observation in \S \ref{sect:disc_transport}.
\section{\texorpdfstring{$C_{\rm S}$-$\dot M$}{CS-Mdot} correlation}
\label{sect:Mdot-CS}
\begin{figure}
\includegraphics[width=\linewidth]{figs/mdot_cs.pdf}
\caption{Correlation between the time-averaged values of $C_{\rm S}$ and $\dot M$ for two runs: (a) ${\ensuremath{\mathcal{M}}}\xspace=9$ run M09.FR.r.a and (b) ${\ensuremath{\mathcal{M}}}\xspace=12$ run M12.FR.mix.a. Dots show the time-averaged data taken at different moments of time, with color showing the duration of time-averaging interval: 5 (blue) or 10 (yellow) orbits. The dotted black lines are $C_{\rm S}=\pm 0.5\dot{M}$ to illustrate equation (\ref{eqn:cs_mdot}).
}
\label{fig:Mdot-CS}
\end{figure}
A classical steady-state accretion disk with no torque at the origin, mediated by a local shear viscosity, necessarily exhibits a linear scaling between the {\it instantaneous, local} values of the viscous angular momentum flux $C_{\rm S}$ and $\dot M$ in the form $C_{\rm S}=\dot M l_K$, where $l_K$ is the Keplerian angular momentum. The non-local wave-driven transport that we observe in our BL simulations does not need to obey the same relation, for several reasons. First, the angular momentum balance in equation (\ref{eq:AM_terms}) involves the time-dependent term, which is generally non-zero, see \S\ref{sect:transport_AM_terms}. Second, even in steady state equation (\ref{eq:AM_terms1}) provides only a differential relation between $C_{\rm S}$ and $\dot M$; solving for ${\ensuremath{C_{\rm S}}}\xspace$ would require the knowledge of its value at the boundary $r=R_\star$, which is not necessarily zero (as we show next).
Nevertheless, the overall amplitude of $\dot M$ should still correlate somehow with the amplitude of $C_{\rm S}$, an expectation based on the equation (\ref{eq:AM_terms}). To check whether this is the case we performed the measurement of $C_{\rm S}$ and $\dot M$ at the stellar surface, i.e. $C_{\rm S}(R_\star)$ and $\dot M(R_\star)$. {The radius $r=R_\star$ is chosen so as to minimize the impact of wave damping and non-uniformity of $\dot M$ in the disk on the measurement of angular momentum and mass fluxes.} This measurement is quasi-simultaneous, since we average these quantities over $5$ or $10$ inner orbits to reduce the stochastic noise, which is naturally present near the BL. The results are plotted in Figure \ref{fig:Mdot-CS} for the ${\ensuremath{\mathcal{M}}}\xspace=9$ and ${\ensuremath{\mathcal{M}}}\xspace=12$ runs previously discussed in \S\ref{sect:M=9},\ref{sect:M=12}.
There are several notable features in these plots. First, $C_{\rm S}(R_\star)$ is almost always negative, which is consistent with the $C_{\rm S}(r)$ behavior in Figures \ref{fig:AM1},\ref{fig:AM2}. Second, for moderate negative values of $C_{\rm S}(R_\star)$ one often observes values of $\dot M$ of comparable magnitude but either sign. This is most likely caused by the oscillatory nature of the wave-driven transport activity near the BL (note that Figure \ref{fig:Mdot-CS} uses a quasi-logarithmic scale).
Third, and most importantly for us, the largest negative values of $C_{\rm S}(R_\star)$ are very tightly correlated with the largest positive values of $\dot M(R_\star)$. We quantified this correlation by examining the dimensionless quantity $C_{\rm S}/\dot{M}$ (in simulation units, with $R_\star=1$, $v_K(R_\star)=1$) in every run of our simulation suite described in \citetalias{Coleman2021}. To evaluate this ratio, we averaged $C_{\rm S}(R_\star)$ and $\dot{M}(R_\star)$ separately in time over five orbits for each simulation\footnote{We also tried using 10 orbits as the averaging interval and found that the results were consistent to within $10\%$.}. Part of the reason for this averaging\footnote{We discard some bins to remove the initial data and noisy data in which both quantities frequently change sign.} is to smooth over possible time lags between $C_{\rm S}(R_\star)$ and $\dot{M}(R_\star)$ arising due to the nonlocal nature of the wave-driven transport. These data are plotted in Figure \ref{fig:CS_Mdot}, and result in (median $\pm$ one quartile)
\begin{align}
\label{eqn:cs_mdot}
\left.\dfrac{C_{\rm S}}{ \dot{M}\ell_K}\right|_{r=R_\star}=- 0.51^{+0.28}_{-0.17}.
\end{align}
This relation (shown with dotted lines in Figures \ref{fig:Mdot-CS},\ref{fig:CS_Mdot}) once again demonstrates that mass inflow into the BL requires {\it negative} $C_{\rm S}$ at the stellar interface. It also characterizes the efficiency with which mass accretion is enabled by the angular momentum injection into the disk by the wave modes excited in the BL. Finally, it provides an important boundary condition, which can be used in constructing semi-analytical models describing the structure of the inner regions of accretion disks fully or partly mediated by the acoustic waves.
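The statistic in equation (\ref{eqn:cs_mdot}) can be reproduced schematically as follows. The sketch averages $C_{\rm S}(R_\star)$ and $\dot M(R_\star)$ separately over fixed windows before taking the ratio; its handling of noisy windows is deliberately simplified compared to the procedure described above.
\begin{verbatim}
import numpy as np

def cs_over_mdot_quartiles(cs_star, mdot_star, t, window=5.0):
    # Median and quartiles of C_S/Mdot at r = R_star (l_K = 1 in code
    # units). Each quantity is window-averaged before taking the ratio.
    edges = np.arange(t[0], t[-1], window)
    idx = np.digitize(t, edges)
    ratios = []
    for b in np.unique(idx):
        m = idx == b
        mdot_avg = mdot_star[m].mean()
        if abs(mdot_avg) > 0:   # skip windows where Mdot averages to zero
            ratios.append(cs_star[m].mean() / mdot_avg)
    return np.percentile(ratios, [25, 50, 75])
\end{verbatim}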
\begin{figure}
\includegraphics[width=\linewidth]{figs/CS_Mdot_plot.pdf}
\caption{The ratio of $C_{\rm S}/\dot{M}$ at $r=R_\star=1$ (in simulation units, equal to $C_{\rm S}/(\dot{M}\ell_K)|_{r=R_\star}$ in physical units) as a function of $\dot{M}$, with the color denoting the value of $\mathcal{M}$ of the runs from which the data were taken. Each colored point represents a sample of five-orbit-period averages of data for the corresponding ${\ensuremath{\mathcal{M}}}\xspace$, such as the blue dots shown in Fig. \ref{fig:Mdot-CS}, and the statistical properties of that sample: in panel (a) we show means $\pm$ one standard-deviation and in (b) we show medians $\pm$ one quartile.
The data are binned in $\dot{M}$ prior to constructing the sample, the vertical dotted grey lines indicate the bins. Horizontal placement of points within a bin is not meaningful; the points are spread out so that they do not visually overlap. Points without error bars correspond to samples consisting of a single five orbit period average. The horizontal black dotted line and shaded grey band show the global mean/median $\pm$ one standard-deviation/quartile for (a)/(b) respectively.
}
\label{fig:CS_Mdot}
\end{figure}
\section{Wave-driven disk evolution}
\label{sect:disk_evol}
\begin{figure*}
\includegraphics[width=\linewidth]{figs/multi_omega.pdf}
\caption{Evolution of $\Omega(r)$ profiles in simulations with different values of ${\ensuremath{\mathcal{M}}}\xspace$ (found in the run label). Different curves correspond to different moments of time coded in the colorbar on the right. See text for details.}
\label{fig:multi_omega}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{figs/bl_properties.pdf}
\caption{(a) Width of the BL as a function of ${\ensuremath{\mathcal{M}}}\xspace$; dotted line corresponds to equation (\ref{eq:deltaBL}).
(b) Deviation of the plateau from $\Omega_K(R_\star)$ as a function of ${\ensuremath{\mathcal{M}}}\xspace$; dotted line corresponds to equation (\ref{eq:deltaOmega}).
(c) Radial extent of the $\Omega$ plateau as a function of ${\ensuremath{\mathcal{M}}}\xspace$; dotted line corresponds to equation (\ref{eq:deltaPlateau}). The dotted lines are by-eye fits to the relevant data.
The data shown correspond to the final output file of each simulation ($t/2\pi=600$).}
\label{fig:BLproperties}
\end{figure}
In the absence of wave damping $C_{\rm S}$ would be exactly conserved in our globally isothermal setup, implying no transfer of the wave angular momentum to the disk and $\dot M=0$. But in our simulations wave modes of different types can exchange their angular momentum with the disk fluid, causing its surface density to evolve. Since our runs have no viscous or radiative dissipation, this exchange is accomplished through the nonlinear dissipation of the waves \citep{GR01,R02}: nonlinear steepening of the acoustic wave profile eventually turns the wave into a shock \citep{LL}, transferring its angular momentum to the background flow \citep{RRR16}. This process inevitably occurs even if the shock is weak.
Nonlinear damping always {\it reduces} wave amplitude and $|C_{\rm S}|$, regardless of the sign of $C_{\rm S}$. As a result, in agreement with the equations (\ref{eq:AM_terms})-(\ref{eq:AM_terms1}), non-zero $\dot M$ arises in the disk changing its structure. But the direction of the mass flow also depends on a particular type of the wave and, more specifically, on the sign of AMF $C_{\rm S}$ that it carries.
As we saw in \S\ref{sect:alphas}, damping of the modes with $C_{\rm S}<0$, e.g. lower and resonant modes (see \S\ref{sect:transport_AM_modes}), gives rise to $\partial C_{\rm S}/\partial l\propto \partial C_{\rm S}/\partial r>0$, leading to $\dot M>0$ (see equations (\ref{eq:AM_terms}),(\ref{eq:AM_terms1})), i.e. mass {\it inflow} through the disk. On the other hand, dissipation of the modes with $C_{\rm S}>0$, i.e. the upper and vortex-driven modes, can sometimes lead to $\partial C_{\rm S}/\partial l>0$. As a result, the disk fluid gains angular momentum, resulting in $\dot M<0$, i.e. mass {\it outflow}.
In practice, lower and resonant modes carrying $C_{\rm S}<0$ generally affect disk evolution much more strongly than the other types of modes. Since these modes are trapped in the resonant cavity in the inner disk, gas accretion onto the star driven by them clears out a substantial depression, or gap, in the inner disk. The development of such a gap is illustrated in the top row of Figure~\ref{fig:multi_st}. One can see that the reduction of $\Sigma$ can be quite substantial, with the gap depth reaching $\sim 80\%$ of the original density at $r\lesssim 1.5$ in simulations with ${\ensuremath{\mathcal{M}}}\xspace\lesssim 9$. For higher-${\ensuremath{\mathcal{M}}}\xspace$ runs, which often exhibit more regular patterns of the acoustic modes, the decrement of $\Sigma$ is not so large, with the gap depths at the level of $\sim (20-30)\%$ being more typical, see Figure~\ref{fig:multi_st}. The radial extent of the gap also decreases with increasing ${\ensuremath{\mathcal{M}}}\xspace$, in agreement with the smaller radial width of the resonant cavity at higher ${\ensuremath{\mathcal{M}}}\xspace$, see \S\ref{sect:transport_AM_modes}.
In our simulations the deepening of the gap saturates once it reaches a sufficient depth that depends on ${\ensuremath{\mathcal{M}}}\xspace$. The gap does not get refilled by the matter arriving from larger radii, as would be expected in a real accretion disk, because our runs do not have an explicit mechanism of angular momentum transport in the outer disk (i.e. $\alpha$-viscosity or MRI). In real disks one should expect the gap to be shallower than what we find in our simulations.
\subsection{Evolution of the \texorpdfstring{$\Omega(r)$}{Omega} profile}
\label{sect:omega_evol}
Sharp gradients of $\Sigma$ that develop around the gap region cause a substantial modification of the $\Omega(r)$ profile away from the purely Keplerian $\Omega_K(r)$ in the bulk of the disk. In a steady state $\Omega(r)$ is given by
\begin{align}
\Omega^2(r)=\Omega_K^2(r)+\frac{1}{\Sigma r}\frac{\partial P}{\partial r},
\label{eq:Omega}
\end{align}
following from the radial momentum balance equation. Because of the initial non-uniform profile of $\Sigma$ and the resultant radial pressure gradient, some deviation of $\Omega$ from $\Omega_K$ is present even at the start of simulations.
But after the gap develops in the disk, these $\Omega$ deviations increase as the second term in the right hand side of the equation (\ref{eq:Omega}) becomes much larger. This is illustrated in Figure \ref{fig:multi_omega}, which shows the profiles of $\Omega(r)$ at different moments of time in simulations with different ${\ensuremath{\mathcal{M}}}\xspace$. Using the simulation data on $\Sigma(r)$ (e.g. the ones shown in the top row of Figure \ref{fig:multi_st}) we find that the $\Omega(r)$ profile agrees very well with the equation (\ref{eq:Omega}). Interestingly, equation (\ref{eq:Omega}) describes the $\Omega(r)$ behavior quite well even inside the BL, where $\Omega(r)$ substantially deviates from $\Omega_K$. This means that even within the BL the radial velocity of the accreting matter $v_r$ is still quite small, so that the inertial terms in the radial momentum balance equation can be neglected.
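For reference, the sketch below evaluates equation (\ref{eq:Omega}) for our globally isothermal setup, taking $P=\Sigma c_s^2$ with $c_s=v_K(R_\star)/{\ensuremath{\mathcal{M}}}\xspace$; code units with $R_\star=v_K(R_\star)=1$ and the neglect of the inertial terms are assumptions of the example.
\begin{verbatim}
import numpy as np

def omega_from_sigma(r, sigma, mach, v_k_star=1.0, r_star=1.0):
    # Omega^2 = Omega_K^2 + (1/(Sigma*r)) dP/dr with P = Sigma*c_s^2.
    omega_k2 = v_k_star**2 * r_star / r**3   # GM = v_K(R_star)^2 * R_star
    cs2 = (v_k_star / mach)**2
    dp_dr = cs2 * np.gradient(sigma, r)
    omega2 = omega_k2 + dp_dr / (sigma * r)
    # Clip guards against negative Omega^2 across very sharp gap edges.
    return np.sqrt(np.clip(omega2, 0.0, None))
\end{verbatim}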
In agreement with the equation (\ref{eq:Omega}), we find $\Omega<\Omega_K$ interior to the deepest part of the gap, where $\Sigma$ and $P$ decrease with radius. In fact, $\Omega$ tends to exhibit a relatively flat, {\it plateau}-like segment with $\Omega(r)\approx \Omega_{\rm max}$ just outside the BL.
Right outside the deepest part of the gap, $P$ increases with $r$ (very strongly for lower ${\ensuremath{\mathcal{M}}}\xspace$), driving $\Omega$ {\it above} $\Omega_K$. This can be most easily seen in Figure \ref{fig:multi_omega}c-f. As the gap width and depth become smaller for higher ${\ensuremath{\mathcal{M}}}\xspace$, so does the deviation of $\Omega$ from $\Omega_K$: the region near the BL where $\Omega(r)$ exhibits a plateau gets narrower as ${\ensuremath{\mathcal{M}}}\xspace$ grows, and the value of $\Omega$ in this region gets closer to $\Omega_K$.
To characterize these $\Omega(r)$ features developing near the BL, in Figure \ref{fig:BLproperties} we plot the width of the BL $\delta_{\rm BL}$, defined as the radial extent over which $\Omega$ transitions from 25\% to 75\% of its maximum value, i.e. $\delta_{\rm BL}\equiv R\left(\Omega=0.75\Omega_{\rm max}\right) - R\left(\Omega = 0.25\Omega_{\rm max}\right)$
(panel (a)); the deviation of $\Omega(r)$ in the plateau region $\delta\Omega$ from $\Omega_K(R_\star)=1$, i.e. $\delta\Omega\equiv 1-\Omega_{\rm max}$ (panel (b)); and the radial width $\delta_{\rm plateau}$ of the plateau in $\Omega$, defined as the radial extent over which $\Omega\ge 0.9\Omega_{\rm max}$. One can see that all three plotted variables generally {\it decrease} with increasing ${\ensuremath{\mathcal{M}}}\xspace$, as mentioned previously. Note that some values of ${\ensuremath{\mathcal{M}}}\xspace$ for which we have multiple simulations show a spread in these derived variables, see e.g. the variation of $\delta_{\rm BL}$ for ${\ensuremath{\mathcal{M}}}\xspace=9$ in panel (a).
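A minimal sketch of these measurements is given below; it assumes that $\Omega(r)$ rises monotonically through the BL, so that the 25\% and 75\% crossings are unique.
\begin{verbatim}
import numpy as np

def bl_diagnostics(r, omega):
    omega_max = omega.max()
    # delta_BL: radial extent between 25% and 75% of Omega_max
    # (np.argmax returns the first index where the condition holds).
    r25 = r[np.argmax(omega >= 0.25 * omega_max)]
    r75 = r[np.argmax(omega >= 0.75 * omega_max)]
    delta_bl = r75 - r25
    # delta_Omega: plateau deviation from Omega_K(R_star) = 1 (code units).
    delta_omega = 1.0 - omega_max
    # delta_plateau: radial extent over which Omega >= 0.9 * Omega_max.
    plateau = r[omega >= 0.9 * omega_max]
    delta_plateau = plateau.max() - plateau.min()
    return delta_bl, delta_omega, delta_plateau
\end{verbatim}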
Such spreads in $\delta_{\rm BL}$ are caused by the emergence of an inflection point in the $\Omega(r)$ profile inside the BL in some runs, which acts to increase the radial range over which $\Omega$ varies. This feature is clearly seen e.g. in Figure \ref{fig:multi_omega}d-f (see \citealt{BRS12} {and \citealt{Dittmann2021}} for similar observations). To circumvent the effect of this feature on the determination of $\delta_{\rm BL}$ we focus on the smallest values of $\delta_{\rm BL}$ for a given ${\ensuremath{\mathcal{M}}}\xspace$. To guide the eye, in Figure \ref{fig:BLproperties}a we run\footnote{This and other dotted lines in Figure \ref{fig:BLproperties} are not formal fits to the data.} a dotted line
\ba
\delta_{\rm BL}\approx 2~{\ensuremath{\mathcal{M}}}\xspace^{-2}
\label{eq:deltaBL}
\ea
through these points that appears to provide a decent fit to the lower envelope of the measured $\delta_{\rm BL}$ values. As previously noted in \citet{BRS12}, this scaling implies that $\delta_{\rm BL}$ is equal to a fixed (i.e. independent of ${\ensuremath{\mathcal{M}}}\xspace$) number of {\it stellar} scaleheights (equal to ${\ensuremath{\mathcal{M}}}\xspace^{-2}$ in our notation). In our case this number is $\approx 2$, which is significantly less than $7-8$ found in \citet{BRS12}, a difference naturally explained by the fact that \citet{BRS12} did not attempt to account for the effect of the inflection point on the $\Omega$ profile.
In Figure \ref{fig:BLproperties}b we plot $\delta\Omega$ --- the deviation of $\Omega$ from unity in the plateau region --- together with a by-eye fit
\ba
\delta\Omega\approx 1.5~{\ensuremath{\mathcal{M}}}\xspace^{-1}.
\label{eq:deltaOmega}
\ea
This rough scaling can be understood analytically using equation (\ref{eq:Omega}) if we also assume that the radial scale of $\Sigma$ variation in the gap near the stellar surface is of order of the {\it disk} scaleheight ${\ensuremath{\mathcal{M}}}\xspace^{-1}$, which appears to be the case in our runs.
The data for the radial extent of $\Omega$ plateau $\delta_\mathrm{plateau}$ is shown in Figure \ref{fig:BLproperties}c together with a simple fit (dotted curve)
\ba
\delta_\mathrm{plateau}\approx 0.9~{\ensuremath{\mathcal{M}}}\xspace^{-2/3}.
\label{eq:deltaPlateau}
\ea
Based on the Keplerian rotation law and equation (\ref{eq:deltaOmega}), one might have expected a steeper dependence on ${\ensuremath{\mathcal{M}}}\xspace$ for a flat $\Omega$ plateau, closer to $\propto{\ensuremath{\mathcal{M}}}\xspace^{-1}$. However, examination of Figure \ref{fig:multi_omega} shows that the $\Omega(r)$ plateau is only approximately flat and has a certain non-trivial radial dependence to it. Thus, the exponent in equation (\ref{eq:deltaPlateau}) different from $-1$ should not come as a surprise.
{Knowledge of $\delta\Omega$ and $\delta_\mathrm{plateau}$ is important since we often find the plateau region to contain near-surface vortices that drive prominent spiral arms in the disk, see \citetalias{Coleman2021}. The pattern speed of such features should be roughly equal to $1-\delta\Omega$ (as the vortices are comoving with the fluid), while $\delta_\mathrm{plateau}$ informs us about the distance out to which these near-surface vortices might be expected to be found.}
Given that the exact shape of the $\Omega$ plateau depends on the $\Sigma(r)$ profile in the gap, one may wonder if scalings (\ref{eq:deltaOmega}), (\ref{eq:deltaPlateau}) would still hold in the presence of mass inflow from larger radii, which is expected in real disks. It is very likely that $\delta\Omega$ and $\delta_{\rm plateau}$ behaviors shown in Figure \ref{fig:BLproperties} will change in that situation. However, the scaling (\ref{eq:deltaBL}) for the width of the BL ($\delta_{\rm BL}$) must be robust regardless of the presence or absence of the mass inflow, since the existence of the BL is independent of the presence or absence of a depression in $\Sigma$ outside the star.
\section{Discussion}
\label{sect:disc}
Results of the previous sections illustrate the important effect of the wave modes triggered by the supersonic shear inside the BL on the angular momentum and mass transport in the inner disk. We now discuss some additional aspects of the wave-driven transport processes in the vicinity of the BL.
\subsection{Modes dominating mass transport and disk evolution}
\label{sect:disc_transport}
The discussion in \S\ref{sect:transport-lower}-\ref{sect:transport-other} allows us to identify the modes most relevant for driving the accretion onto the star. Since accretion implies $\dot M>0$, we can immediately conclude that it is the lower acoustic and resonant modes that must be responsible for driving the mass inflow onto the stellar surface. This conclusion is supported by Figure \ref{fig:Mdot-CS} and the results presented in \S\ref{sect:Mdot-CS}, which show that the highest values of $\dot M$ require ${\ensuremath{C_{\rm S}}}\xspace<0$, which is a unique feature of these two types of modes.
At the same time, the vortex-driven (and, at lower amplitude, the upper acoustic) modes drive a weak mass outflow in the disk, which is not conducive to accretion. In realistic disks this outflow would be suppressed by the mass inflow caused by other sources of the angular momentum transport capable of operating in the bulk of the disk, e.g. the MRI.
It is important that the $\dot M$ driven by the lower acoustic and resonant modes is large only in radially narrow region of the disk near the stellar surface, where the mode is trapped. This means that these modes by themselves are not capable of maintaining constant $\dot M$ through the whole disk, which would be necessary to keep it in a steady state. Other transport mechanisms are clearly needed to ensure a steady delivery of mass towards the BL, e.g. the MRI.
It is also important to realize that in realistic disks, inside the resonant cavity where the $\dot M$-driving modes are trapped, the angular momentum transport would be provided {\it both} by the waves, globally, {\it and} by the MRI, locally (since $d\Omega/dr<0$ in that region, see Figure \ref{fig:multi_omega}). This enhanced transport is likely to result in a non-trivial structure of the surface density in this near-BL part of the disk.
\subsection{Global nature of the angular momentum transport}
\label{sect:global}
Our morphological study in \citetalias{Coleman2021} demonstrates that the waves excited in the vicinity of the BL can propagate over substantial distances in the disk. This is clear for the vortex-driven and upper acoustic modes that can propagate from their corotation region near the BL all the way into the outer disk where they get gradually damped. However, even the modes which are trapped near the BL --- the lower acoustic waves and the resonant modes --- still travel over a significant radial range in the disk, comparable to $R_\star$. This is especially true for the low-$m$ resonant modes, for which the corotation radius can lie quite far out in the disk, see \citetalias{Coleman2021}.
The long-range propagation of the waves makes the angular momentum transport effected by them truly global.
It gives rise to a non-local coupling between the BL, where the accreting matter uploads its angular momentum and energy to the waves, and the more distant parts of the disk, where the waves dissipate, driving $\dot M$ and injecting their energy into the disk fluid \citep{RRR16}. Such non-locality is typical also for the angular momentum transport by the planet-driven density waves in the protoplanetary disks \citep{Lunine1982,GR01,R02,RP2012,PR2012}.
Because of its non-locality, the wave-driven transport cannot be characterized by a single $\alpha$ parameter, which is often invoked to describe local transport \citep{SS73,BalPap1999}. As we demonstrate in \S\ref{sect:alphas}, the values of $\alpha$ near the BL computed using stress ${\ensuremath{C_{\rm S}}}\xspace$ and $\dot M$ (which are the same for local transport) are different not only in magnitude but also in sign, see Figure \ref{fig:multi_st}. For that reason, the standard methods for calculating the structure and viscous evolution of the disk based on the diffusion-type equation approach \citep{Lynden1974}, would fail to describe the disk in the vicinity of the BL. Instead, calculations of the wave-mediated structure of the inner disk need to explicitly account for the long-range propagation and dissipation of the waves excited in the BL.
\subsection{Observational implications}
\label{sect:observations}
Non-local wave-driven transport effected by the waves has important implications for the observational appearance of the objects accreting through the BL. Models of the BL structure using the local $\alpha$-viscosity inevitably predict that local energy dissipation in the layer should heat gas in the BL to high temperatures \citep{POP93,NAR93}. This would naturally give rise to a high-energy component in the spectra of objects accreting through the BL, which is often not observed \citep{Ferland1982}.
Energy transport by the waves changes the whole pattern of the energy dissipation: it occurs not in the narrow BL itself, but over a larger area of the disk, lowering the effective temperature of emission associated with the BL and naturally alleviating the ``missing boundary layer'' problem \citep{Ferland1982}. The rate $d\dot E/dr$ (per unit time and radius) at which thermal energy is deposited in the disk by a wave (with pattern speed $\Omega_P$) is closely related to the rate of dissipation of the wave angular momentum flux, as $d\dot E/dr=(\Omega-\Omega_P)d{\ensuremath{C_{\rm S}}}\xspace/dr$ \citep{Lynden1972,GR01,RRR16}. Thus, the results of our study on the radial profiles of ${\ensuremath{C_{\rm S}}}\xspace$ for different wave modes can be directly employed to analyze the spatial distribution of the disk heating induced by the BL-excited waves.
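In practice, the heating profile of each mode can be evaluated directly from its measured AMF, e.g. as in the following sketch, where a finite difference stands in for the exact radial derivative.
\begin{verbatim}
import numpy as np

def wave_heating_rate(r, cs, omega, omega_p):
    # dE/dr = (Omega - Omega_P) * dC_S/dr for one mode with pattern
    # speed Omega_P. For a damping lower mode inside the cavity
    # (C_S < 0, dC_S/dr > 0, Omega > Omega_P) this is positive,
    # i.e. the disk gas is heated, as expected.
    return (omega - omega_p) * np.gradient(cs, r)
\end{verbatim}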
\subsection{Comparison with the existing studies}
\label{sect:literature}
In the past, \citet{BRS12,BRS13a,BRS13b} analyzed the behavior of the wave angular momentum flux ${\ensuremath{C_{\rm S}}}\xspace$ in the vicinity of the BL, and related it to the $\dot M$ behavior and the wave amplitude in the disk. {More recently, \citet{Dittmann2021} looked at the ${\ensuremath{C_{\rm S}}}\xspace$ behavior in the disk (as well as the flow of mass and angular momentum into the star) for non-zero stellar spin rates.} Our present work goes beyond these studies in several ways.
First, we analyze ${\ensuremath{C_{\rm S}}}\xspace$ behavior for a variety of {\it individual} modes present in the disk. Coupling this with the results of our morphological study in \citetalias{Coleman2021} allows us to understand the ${\ensuremath{C_{\rm S}}}\xspace$ behavior for each mode type --- lower and upper, resonant, vortex-driven, and so on. Second, we follow how the behavior of ${\ensuremath{C_{\rm S}}}\xspace$ changes as our simulations progress and different types of wave modes come and go. Third, we systematically explore the differences in the ${\ensuremath{C_{\rm S}}}\xspace$ behavior as a function of the Mach number ${\ensuremath{\mathcal{M}}}\xspace$ of our runs.
\section{Summary}
\label{sect:sum}
In this work we studied the angular momentum and mass transport driven by the different types of the waves operating in the vicinity of the BL of an accretion disk. Our analysis is based on a large suite of global, 2D hydrodynamic simulations described in \citetalias{Coleman2021}, in which a large number of modes have been previously identified and characterized. Results of our present study can be summarized as follows.
\begin{itemize}
\item In the wave-mediated inner region of the disk, the mass accretion rate $\dot M$ has a significant contribution arising from the correlated radial velocity $v_r$ and surface density perturbation $\delta\Sigma$.
\item {The efficiency of angular momentum transport expressed through the effective $\alpha$-parameter appears to not depend strongly on the Mach number of the flow ${\ensuremath{\mathcal{M}}}\xspace$.}
\item By examining the angular momentum flux $C_{{\rm S},m}$ carried by each individual Fourier harmonic, we were able to determine the transport properties associated with each type of the mode for different values of the Mach number ${\ensuremath{\mathcal{M}}}\xspace$. In particular, we find that lower acoustic and resonant modes carry negative angular momentum flux, whereas the vortex-driven and upper acoustic modes carry positive angular momentum flux.
\item Nonlinear damping of the modes leads to mass inflow ($\dot M>0$) for the lower acoustic and resonant modes, and weak mass outflow for the vortex-driven and upper acoustic modes. This implies that accretion onto the central object must be mediated by a combination of the lower acoustic and resonant modes.
\item Wave-driven transport is intrinsically non-local, which leads to the values of effective $\alpha$-parameter determined through stress and $\dot M$ being different not only in amplitude but also in sign. This is very different from the conventional local transport for which these values are the same.
\item Despite the non-local nature of the wave-mediated transport we still find a strong correlation, given by the equation (\ref{eqn:cs_mdot}), between the angular momentum flux injected at the BL into the disk, and the mass accretion rate through the BL.
\item Despite the long duration of our simulations, we find the time-dependent contribution to the angular momentum balance caused by the variation of $\Omega$ (see equation (\ref{eq:AM_terms})) to play an important role (see \S\ref{sect:transport_AM_terms}).
\item We characterize the wave-driven evolution of the disk properties --- surface density $\Sigma$ and angular frequency $\Omega$ profile --- as a function of time and ${\ensuremath{\mathcal{M}}}\xspace$.
\end{itemize}
Our study, based on 2D hydro simulations, provides a natural foundation for understanding the properties of the wave-driven transport in future three-dimensional and fully MHD simulations of the BLs.
\section*{Acknowledgements}
We thank Jim Stone for making the \texttt{Athena++}\xspace code publicly available. We gratefully acknowledge financial support from NSF via grant AST-1515763, NASA via grant 14-ATP14-0059, and Institute for Advanced Study via the John N. Bahcall Fellowship to R.R.R. Research at the Flatiron Institute is supported by the Simons Foundation. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. Through allocation AST160008, this work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562 \citep{XSEDE}.
\section*{Data Availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{R}{ecently}, the amount of videos uploaded to the internet has increased substantially. According to Statista \cite{StatistaResearchDepartment}, by May 2019, more than 500 hours of video were uploaded to YouTube every minute, and the numbers have not slowed down since. Therefore, the need for robust algorithms to analyse this enormous amount of data has increased accordingly. \par
An action recognition system based upon human body motions is the most efficient way of interpreting videos' contents. Several solutions have been proposed in this regard, and they vary from the analysis of optical flow \cite{Bobick2001}, convolutional neural networks upon RGB images \cite{Tran2015} and, more recently, the skeleton movements \cite{yan2018spatial}. The skeleton movements approach offers multiple advantages over the other solutions. The skeleton information is robust to changes in the illumination of the environment where the action takes place. Also, it is robust to changes in the background \cite{Keskes2021}. Moreover, the computational cost for training is considerably reduced, since skeleton data consist only of sets of joint Cartesian coordinates. For these reasons, we have chosen this approach to define the premise of our proposed method.\par
There are multiple sources to obtain the skeleton information from videos. Among these, the OpenPose library \cite{Cao2021a} is the simplest yet effective tool to accomplish this. This system receives a video clip as an input and outputs the 2D or 3D coordinates of the 18 main skeleton joints. Each skeleton joint information consists of three values as (x, y, c), where x and y are the Cartesian coordinates in the horizontal and vertical axis, respectively, and c represents the confidence score of the detected joint. The keypoints indexes of the OpenPose output layout are shown in Fig. \ref{fig:keypoints}.
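For concreteness, the following Python sketch (ours, not part of the OpenPose library) reshapes one flattened OpenPose skeleton, $[x_0, y_0, c_0, x_1, y_1, c_1, \dots]$, into an $18\times3$ array with one $(x, y, c)$ row per joint of the layout in Fig. \ref{fig:keypoints}.
\begin{verbatim}
import numpy as np

def parse_pose(flat_keypoints):
    # Reshape [x0, y0, c0, x1, y1, c1, ...] into an (18, 3) array.
    joints = np.asarray(flat_keypoints, dtype=np.float32).reshape(-1, 3)
    assert joints.shape == (18, 3)
    return joints

# Example: row 0 corresponds to keypoint index 0 of the layout above.
# x, y, c = parse_pose(raw_keypoints)[0]
\end{verbatim}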
\begin{figure}[!t]
\centering
\includegraphics[width=0.4\textwidth]{Figures/Open_Pose_Components_1.png}
\caption{Skeleton components and the keypoints indexes of OpenPose layout.}
\label{fig:keypoints}
\end{figure}
Our study is based upon the proposal presented by Yan \textit{et al.} in \cite{yan2018spatial}. Instead of analysing the frames of a video by their pixel values (i.e., RGB images), the authors first represent the actors as a set of the main joints of the body using the OpenPose library \cite{Cao2021a}. Given the skeleton representation of the person performing the action, they model the skeleton joints as the set of vertices of a graph, while the bone-like connections are represented as its edges. Thus, the video clips are transformed from RGB image sequences into sequences of skeleton joints. To achieve the action recognition, the authors proposed the Spatial-Temporal Graph Convolutional Neural Network (ST-GCN) model. As the name indicates, this framework can analyse both the spatial and the temporal relations between the set of nodes (i.e., the skeleton joints) during the performance of the action (Fig. \ref{fig:skeleton_rep}). Subsequently, the model is trained in an end-to-end manner using a Graph Convolutional Neural Network (GCN) architecture \cite{Zhou2018}.
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/skeleton_video_clip.png}
\caption{Spatial-temporal skeleton.}
\label{fig:skeleton_rep}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{Figures/array_video_clip.png}
\caption{Tensor.}
\label{fig:tensor_rep}
\end{subfigure}
\caption{Video clip representations.}
\label{fig:video_rep}
\end{figure*}
Presently, there are multiple datasets available for research on human action recognition. Among these alternatives, the UCF-101 \cite{Soomro2012}, and the HMDB-51 \cite{Kuehne2011a} datasets are considered to be reference benchmarks.
\subsection{UCF-101}
The UCF-101 is the most commonly used benchmark human action dataset. Every video sample from this dataset is sourced from YouTube. Clip durations vary from 1.06 s to 71.04 s, with a fixed frame rate of 25 fps and a fixed resolution of $320\times240$ pixels. This dataset provides a total of 13,320 clips classified into 101 action classes. These classes can be broadly divided into five major subsets: \textit{Human-Object Interaction}, \textit{Body-Motion Only}, \textit{Human-Human Interaction}, \textit{Playing Musical Instruments} and \textit{Sports} \cite{Soomro2012}.
\subsection{HMDB-51}
Similar to the UCF-101, the Human Motion Database (HMDB) is considered one of the top 5 most popular datasets for action recognition \cite{Zhang2019b}. Aside from YouTube, the HMDB-51 dataset was collected from a wider range of sources (i.e., movies, Google videos, etc.). For that reason, the height of all the samples was scaled to 240 pixels, and the width was scaled to maintain the original video ratio. Furthermore, the frame rate was modified to have a fixed value of 30 fps. It provides a total of 6,766 video clips of 51 different classes. These classes can be broadly classified into 5 categories: \textit{General facial actions}, \textit{Facial actions with object manipulation}, \textit{General body movements}, \textit{Body movements with object interaction} and \textit{Body movements for human interaction} \cite{Kuehne2011a}.
\subsection{ST-GCN additional layer: the M-Mask}
Complex movements can be inferred from a small set of representative \emph{bright spots} on the joints of the human body \cite{Johansson1973}. However, not all the joints provide the same quality and quantity of information regarding the movement performed. Therefore, it is intuitive to assign a different level of importance to every joint in the skeleton. In the ST-GCN framework, the authors added a mask M (or M-mask) to each layer of the GCN to express the importance of each joint \cite{yan2018spatial}. This mask scales the contribution of each skeleton joint according to the learned weights of the spatial graph network. According to their results, the proposed M-mask considerably improves the architecture's performance; therefore, the authors consistently apply it to the ST-GCN network in their experiments.
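A minimal PyTorch-style sketch of such a masked spatial graph convolution is shown below; the class and variable names are ours, and the layer is a simplified stand-in for a full ST-GCN block. The key point is that the learnable mask is multiplied element-wise with the fixed adjacency tensor, so every skeleton edge acquires its own importance weight.
\begin{verbatim}
import torch
import torch.nn as nn

class MaskedGraphConv(nn.Module):
    # One spatial graph convolution with a learnable M-mask.
    def __init__(self, in_ch, out_ch, A):     # A: (K, V, V) adjacency
        super().__init__()
        self.register_buffer("A", A)
        self.mask = nn.Parameter(torch.ones_like(A))  # M-mask, init to 1
        self.conv = nn.Conv2d(in_ch, out_ch * A.size(0), kernel_size=1)

    def forward(self, x):                      # x: (N, C, T, V)
        k, v = self.A.size(0), self.A.size(2)
        y = self.conv(x)
        n, kc, t, _ = y.size()
        y = y.view(n, k, kc // k, t, v)
        # Aggregate neighbors with the importance-weighted adjacency.
        return torch.einsum("nkctv,kvw->nctw", y, self.A * self.mask)
\end{verbatim}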
\subsection{Our Contribution}
The convolution operation is not explicitly defined on graphs. For a graph with no fixed structure (i.e., the quantity and arrangement of the nodes may vary), label-mapping criteria need to be defined before the convolution can be performed. For instance, the work of Yan \textit{et al.} in \cite{yan2018spatial} proposed three skeleton partition strategies (Uni-labeling, distance, and spatial configuration partitioning) to perform action recognition. This strategy was applied using ST-GCN upon the UCF-101 dataset \cite{Zheng2019}. However, to the best of our knowledge, there is no previous evidence of using the ST-GCN model on the HMDB-51 dataset for action recognition. We summarise our contributions below:
\begin{itemize}
\item We present the first results of the ST-GCN model trained on the HMDB-51 dataset for action recognition. Moreover, we have used the previously extracted skeleton information of both the UCF-101 and the HMDB-51 datasets for the experiments.
\item We have implemented our proposed partitioning strategies on the ST-GCN model \cite{Alsawadi2021} on the benchmark datasets (UCF-101 and the HMDB-51 datasets).
\item We provide a deep analysis of the impact of different batch sizes during training upon the accuracy of the output models, using both benchmark datasets.
\item Additionally, we have provided the open-source skeleton information of the UCF-101 and HMDB-51 datasets for the research community\footnote{https://github.com/malswadi/skeleton\_ucf\_hmdb}.
\end{itemize}
The remainder of the paper is structured as follows: in \textbf{Section II}, we present the state-of-the-art skeleton-based systems that utilize the ST-GCN model for action recognition. In \textbf{Section III} we explain the constraints we have used in our experiments. The experimental results are described in depth in \textbf{Section IV}. Finally, \textbf{Section V} presents the summary and discussions.
\section{Action recognition using ST-GCN}
In order to perform the convolution operation, Yan \textit{et al.} \cite{yan2018spatial} first divided the skeleton into subsets of joints (\textit{i.e., neighbor sets}). Each of these sets is composed of a \textit{root node} and its adjacent nodes (Fig. \ref{fig:keypoints}), and each kernel has a size of $K \times K$.
In the same work \cite{yan2018spatial}, the authors used their architecture to recognize human actions upon the NTU-RGB+D \cite{Shahroudy2016} and the Kinetics \cite{Kay2017} datasets, using the skeleton information of both for training. The NTU-RGB+D \cite{Shahroudy2016} provides the skeleton modality for their data with the main joints of the actors. However, the skeleton data is not available for the Kinetics dataset \cite{Kay2017}. Therefore, they initially extracted the skeleton data with the use of the OpenPose library \cite{Cao2021a} and released this data as the Kinetics-skeleton dataset \cite{yan2018spatial}. Once the skeleton information has been obtained, each video clip is modeled as a tensor $(18, 3, T)$, where $T$ represents the length of the video, as shown in Fig. \ref{fig:tensor_rep}. The data are then ready for the convolution process.
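Assembling this tensor from per-frame skeletons is then straightforward, as in the hedged sketch below; padding or truncating clips to a common $T$, which a real pipeline would also require, is omitted.
\begin{verbatim}
import numpy as np

def clip_to_tensor(frames_joints):
    # frames_joints: list of T arrays, each of shape (18, 3) holding the
    # (x, y, c) values of one frame. Stacking along a new last axis
    # yields the (18, 3, T) clip tensor described above.
    return np.stack(frames_joints, axis=-1)
\end{verbatim}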
\subsection{Partitioning strategies}
For graphs with no specific order, priority criteria must be set in each neighbor set to map each joint to a label, so that the convolution can be performed and the network can be trained. In \cite{yan2018spatial}, three neighbor-set partitioning criteria were presented: Uni-labeling, Distance, and Spatial configuration partitioning. In the first approach, the kernel size is $K = 1$; therefore, all the joints in the neighbor set share the same label (label 0). In the second, the kernel size is $K = 2$: the root node has the top priority (label 0), and the adjacent nodes share the same label (label 1). The spatial configuration partitioning, on the other hand, is more complex.
\subsubsection{Spatial Configuration partitioning}
In this approach, the kernel size is K = 3, and the center of gravity of the skeleton (the average of the values on each joint axis across the training set) is considered. Mathematically, the mapping for this strategy is defined by the following equation
\begin{equation}
l_{ti}(v_{tj}) =
\left\{
\begin{array}{lcr}
0 & if & r_j=r_i\\
1 & if & r_j<r_i\\
2 & if & r_j>r_i
\end{array}
\right.
,
\label{eq:Strategy3}
\end{equation}
where $l_{ti}(v_{tj})$ represents the label mapping for the node $v_{tj}$, $r_j$ is the average distance from the node $v_{tj}$ to the center of gravity, and $r_i$ is the average distance from the root node to the center of gravity. Yan \textit{et al.} \cite{yan2018spatial} reported their highest accuracy on both the NTU-RGB+D \cite{Shahroudy2016} and the Kinetics-skeleton \cite{yan2018spatial} datasets using this partitioning strategy.\par
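For concreteness, the mapping of Eq. \ref{eq:Strategy3} can be sketched in Python as follows (illustrative names only, assuming the average distances $r$ to the center of gravity have been precomputed over the training set):
\begin{verbatim}
def spatial_configuration_labels(r, root, neighbors):
    # Illustrative sketch of the spatial configuration mapping.
    # r: average distance of each joint to the center of gravity;
    # root: index of the root node; neighbors: indices of the
    # nodes in the neighbor set (including the root).
    labels = {}
    for j in neighbors:
        if r[j] == r[root]:
            labels[j] = 0   # same distance as the root node
        elif r[j] < r[root]:
            labels[j] = 1   # closer to the center of gravity
        else:
            labels[j] = 2   # farther from the center of gravity
    return labels
\end{verbatim}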
There have been multiple action recognition systems using the ST-GCN architecture \cite{Keskes2021, liutwo, Jiang2020, Yang2020}. Zheng \textit{et al.} \cite{Zheng2019} extracted the skeleton from the UCF-101 dataset in a similar manner as Yan \textit{et al.} \cite{yan2018spatial} did for Kinetics \cite{Kay2017} and obtained 50.53\% top-1 accuracy using the spatial configuration partitioning for label mapping. Some additional hand-crafted filtering was needed: they selected only the video clips in which the skeleton was detected during the first 250 frames.\par
Recently, we were able to improve the ST-GCN performance on the NTU-RGB+D \cite{Shahroudy2016} and the Kinetics \cite{Kay2017} benchmarks in \cite{Alsawadi2021}. As the base model \cite{yan2018spatial} proposed, we defined each neighbor set to contain a root node with its adjacent nodes. Nevertheless, in our previous work \cite{Alsawadi2021}, we considered a kernel size of K = 4. Thus, each node in a neighbor set owns a separate label. The root node was set to have the highest priority in every split strategy (label 0). However, to define which of the adjacent nodes in the neighbor set has the highest priority in the label mapping, we introduced three novel partitioning strategies: the \textit{full distance}, \textit{connection}, and \textit{index splits}.
\subsubsection{Full distance split}
In this strategy, we took the spatial configuration partitioning of Yan \textit{et al.} \cite{yan2018spatial} one step further. We considered the distance from \textit{every joint} in the neighbor set to the center of gravity of the skeleton. As shown in Fig. \ref{fig:full_distance}, the nearer a node is to the center of gravity, the higher the priority assigned to it \cite{Alsawadi2021}. In the figure, the joint labeled \textit{B} has the highest priority among the adjacent nodes because of its closeness to the center of gravity. To describe this strategy mathematically, a set $\mathcal{F}$ is defined. It contains the Euclidean distances of the \textit{i}-th adjacent node $u_{ti}$ (of the root node $u_{tj}$) with respect to the center of gravity of the skeleton, sorted in ascending order as
\begin{equation}
\mathcal{F}=\{f_{m} \mid m=1,\cdots,N\},
\end{equation}
where $N$ is the number of nodes adjacent to the root node $u_{tj}$. With this auxiliary set in place, the label mapping can be defined using Eq. \ref{eq:full_distance}.
\begin{equation}
l_{ti}(u_{tj}) =
\left\{
\begin{array}{lcr}
0 &if& |u_{ti}-cg|_{2}=x_{r}\\
m &if& |u_{ti}-cg|_{2}=f_{m}
\end{array}
\right.
,
\label{eq:full_distance}
\end{equation}
where $l_{ti}$ represents the label map for each joint $u_{ti}$ in the neighbor set of the root node $u_{tj}$, and $x_{r}$ is the Euclidean distance from the root node $u_{tj}$ to the center of gravity of the skeleton.
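Under the same illustrative conventions as above, a minimal sketch of this split sorts the adjacent nodes by distance to the center of gravity (the set $\mathcal{F}$) and labels them accordingly:
\begin{verbatim}
def full_distance_labels(dist, root, neighbors):
    # Illustrative sketch. dist: Euclidean distance of each joint
    # to the center of gravity; neighbors: indices of the adjacent
    # nodes (root excluded).
    order = sorted(neighbors, key=lambda i: dist[i])  # set F
    labels = {root: 0}                # root keeps label 0
    for m, i in enumerate(order, start=1):
        labels[i] = m                 # m-th closest node -> label m
    return labels
\end{verbatim}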
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{Figures/full_distance.png}
\caption{Full distance split.}
\label{fig:full_distance}
\end{figure}
\subsubsection{Connection split}
For this partitioning criterion, the degree of each vertex (i.e., joint) of the skeleton graph is considered: the higher the degree, the higher the priority \cite{Alsawadi2021}. For instance, consider the skeleton graph shown in Fig. \ref{fig:connection}, which indicates the connections of each of the adjacent nodes in the neighbor set. For this example, the root node has the top priority (label 0), and the node labeled \textit{B} has the next priority (label 1). Given that nodes \textit{A} and \textit{C} have the same degree, we considered them to have the same priority; hence, their relative order is set randomly.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{Figures/connection.png}
\caption{Connection split.}
\label{fig:connection}
\end{figure}
To define the label mapping, we describe a set $\mathcal{C}$ containing the degree values of each of the $N$ adjacent nodes of the root node, sorted in descending order as follows
\begin{equation}
\mathcal{C}=\{c_{m} \mid m=1,\cdots,N\}.
\end{equation}
Given the set $\mathcal{C}$, the label mapping can be obtained using Eq. \ref{eq:connection_eq}.
\begin{equation}
l_{ti}(u_{tj}) =
\left\{
\begin{array}{lcr}
0 &if&d(u_{ti})=d_{r}\\
m &if&d(u_{ti})=c_{m}\\
\end{array}
\right.
,
\label{eq:connection_eq}
\end{equation}
where $l_{ti}$ represents the label map for each joint $u_{ti}$ in the neighbor set of the root node $u_{tj}$, and $d_{r}$ is the degree of the root node.
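The same pattern applies here with the vertex degree as the sorting key (descending) and, as described above, random tie-breaking between equal-degree nodes; an illustrative sketch:
\begin{verbatim}
import random

def connection_labels(degree, root, neighbors):
    # Illustrative sketch. degree: degree of each vertex in the
    # skeleton graph; ties between equal-degree nodes are broken
    # randomly, as in the connection split.
    order = sorted(neighbors,
                   key=lambda i: (-degree[i], random.random()))
    labels = {root: 0}
    for m, i in enumerate(order, start=1):
        labels[i] = m
    return labels
\end{verbatim}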
\subsubsection{Index split}
For this strategy, we considered the OpenPose \cite{Cao2021a} output keypoints shown in Fig. \ref{fig:keypoints}. The priority criterion is defined as follows: the smaller the keypoint index, the higher the priority \cite{Alsawadi2021}. For instance, consider the neighbor set shown in Fig. \ref{fig:index}. As in the other partition strategies, the highest priority is assigned to the root node (label 0). Among the adjacent nodes, the highest priority is given to the node labeled \textit{B} because its keypoint index is the smallest (index 1). Finally, the nodes labeled \textit{A} and \textit{C} have the second and third priority, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{Figures/index.png}
\caption{Index split.}
\label{fig:index}
\end{figure}
Similar to the other split strategies, we defined an auxiliary set $\mathcal{P}$ containing the keypoint index values of the adjacent nodes.
\begin{equation}
\mathcal{P}=\{p_{m} \mid m=1,\cdots,N\}.
\end{equation}
The values of $\mathcal{P}$ are sorted in ascending order. The label mapping is then obtained using Eq. \ref{eq:index_eq}.
\begin{equation}
l_{ti}(u_{tj}) =
\left\{
\begin{array}{lcr}
0 &if&ind(u_{ti})=in_{r}\\
m &if&ind(u_{ti})=p_m\\
\end{array}
\right.
,
\label{eq:index_eq}
\end{equation}
where $l_{ti}$ and $ind(u_{ti})$ represent the label map and the keypoint index of the $i$-th joint, respectively, and $in_{r}$ is the keypoint index of the root node $u_{tj}$.
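Unlike the connection split, this criterion is deterministic, since OpenPose keypoint indices are unique; an illustrative sketch:
\begin{verbatim}
def index_labels(root, neighbors):
    # Illustrative sketch. neighbors: OpenPose keypoint indices of
    # the adjacent nodes; a smaller index means a higher priority
    # (the set P in ascending order).
    labels = {root: 0}
    for m, i in enumerate(sorted(neighbors), start=1):
        labels[i] = m
    return labels
\end{verbatim}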
\section{Experimental settings}
Given that the skeleton representation of the actors is provided for neither the UCF-101 \cite{Soomro2012} nor the HMDB-51 \cite{Kuehne2011a} dataset, we first extract the skeleton representation from both datasets. Similarly to \cite{yan2018spatial}, we used the OpenPose library to extract the skeleton data for our evaluation. The library was installed in an Ubuntu 18.04 environment.\par
We followed the experiment guidelines provided in \cite{yan2018spatial}. First, each video sample is resized to a fixed resolution of $340 \times 256$ pixels. Second, the set of resized image frames of each video sample is input to the OpenPose algorithm. Third, due to the variable duration of the clips, a fixed duration of 300 frames is used: if a video clip has fewer than 300 frames, we repeat its initial frames until reaching the required amount; if it exceeds that number, we trim it. Consequently, the spatio-temporal skeleton information of each video sample can be represented as a tensor of shape (18, 3, 300). Setting the \textit{T} value to 300, the output tensor is illustrated in Fig. \ref{fig:tensor_rep}. Fourth, we considered the joint recognition score provided by the OpenPose algorithm (i.e., the \textit{C} value). After several experiments, we decided to train only on those videos with more than 50\% of the skeleton joints recognized.\par
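The fixed-length step can be sketched as follows (our reading of the procedure; the exact preprocessing code may differ):
\begin{verbatim}
def fix_length(frames, target=300):
    # Repeat the clip from its initial frames until `target`
    # frames are reached, or trim longer clips (illustrative).
    out = list(frames)
    while len(out) < target:
        out.extend(frames[:target - len(out)])
    return out[:target]
\end{verbatim}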
Additionally, we only considered samples with a maximum of two people performing the action. Finally, the spatio-temporal skeleton data of the UCF-101 and HMDB-51 videos that fulfilled these quality criteria was collected during the \textit{Data extraction} stage. We iterated through the video clips of the datasets and saved the skeleton information as JSON files, exporting an independent JSON file for each video sample. The outcome of this process is 13,320 and 6,766 JSON files with the skeleton information of the UCF-101 and the HMDB-51 datasets, respectively. These files are publicly available for the research community. \footnotemark[\value{footnote}]
\subsection{Training Details}
We utilized the PyTorch framework \cite{Paszke2017} to execute our deep learning experiments. The experimental process is composed of three stages: \textit{Data Splitting}, \textit{ST-GCN Model Setup}, and \textit{Model Training}. The first stage divides each of the datasets mentioned above into two subsets: a training set and a validation set. For our experiments, we considered a 3:1 ratio between training and validation samples. The second stage prepares the ST-GCN architecture to be trained with the spatial configuration partitioning strategy proposed by Yan \textit{et al.} \cite{yan2018spatial} and with our previously proposed split strategies presented in \cite{Alsawadi2021}. \par
Finally, in the \textit{Model Training} stage, we trained the ST-GCN model using the spatial configuration partitioning \cite{yan2018spatial}. During this stage, we also employed the enhanced split strategies proposed in \cite{Alsawadi2021} to find the partitioning approach offering the best accuracy. To provide a valid comparison, we included the M-mask layer in the architecture during experimentation; additionally, we performed a further analysis without the M-mask implementation.
Every model was trained using stochastic gradient descent (SGD) with learning rate decay as the optimization algorithm. All models started training with a learning rate of 0.1 and were trained for 80 epochs, with the learning rate decaying by a factor of 0.1 every \(10^{th}\) epoch starting from epoch 20. Additionally, to avoid overfitting on the datasets, a weight decay value of 0.0001 was used.\par
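In PyTorch \cite{Paszke2017}, this schedule can be reproduced as sketched below (the placeholder \texttt{model} stands for the ST-GCN network being trained, and the milestone list is our reading of the decay rule, not the released training script):
\begin{verbatim}
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the ST-GCN network
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            weight_decay=0.0001)
# Decay the learning rate by a factor of 0.1 every 10th epoch,
# starting from epoch 20, over the 80 training epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 30, 40, 50, 60, 70], gamma=0.1)
\end{verbatim}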
One experimental criterion was to find the optimal batch size. This hyperparameter allows the model to adjust its parameters during optimization with respect to small subsets of training samples called \textit{mini-batches} \cite{Goodfellow-et-al-2016}. Training in mini-batches lowers the computational cost of each weight update. If the batch size is too small, the parameters learned at each gradient descent step tend to be less robust, since the weights are updated from a set of samples with little variation; if the batch size is too big, the computational cost increases accordingly. Therefore, we made this hyperparameter one of the experiment's definition criteria and performed experiments with batch sizes of 8, 16, 32, 64, and 128.
\section{Results}
The outcomes of the experiments on each benchmark dataset are presented in separate sections. The results correspond to the models with the best performance in terms of accuracy; the accuracy values shown were obtained using the top-1 criterion.
\subsection{UCF-101}
As mentioned in the previous section, we vary the partition strategy implemented in the ST-GCN architecture. Additionally, we performed experiments with and without the M-mask. Table \ref{table:batch_ucf_performance} shows the results of these experiments on UCF-101. The ``Y'' (``Yes'') and ``N'' (``No'') values in the ``M-mask'' column indicate whether the M-mask layer was implemented in that experiment. It can be noticed that, in most experiments, the output model tends to be more robust as the batch size increases.
\begin{table}[ht]
\caption{Experiments performance upon UCF-101}
\centering
\begin{tabular}{c c c c}
\hline\hline
\centering Method & Batch size & M-mask & Accuracy \\ [0.5ex]
\hline
\centering Spatial C.P.
& 8 & Y & 46.42\% \\
& & N & 65.36\% \\
& 16 & Y & 68.71\% \\
& & N & 65.89\% \\
& 32 & Y & 68.96\% \\
& & N & 68.55\% \\
& 64 & Y & 70.47\% \\
& & N & 68.18\% \\
& \textbf{128} & \textbf{N} & \textbf{70.72\%}
\\[1ex]
\hline
\centering Full Distance Split
& 8 & Y & 48.51\% \\
& & N & 58.73\% \\
& 16 & Y & 61.02\% \\
& & N & 67.89\% \\
& 32 & Y & 69.16\% \\
& & N & 66.30\% \\
& \textbf{64} & \textbf{Y} & \textbf{70.43}\% \\
& & N & 68.59\% \\
& 128 & N & 66.91\%
\\[1ex]
\hline
\centering Connection Split
& 8 & Y & 63.03\% \\
& & N & 63.48\% \\
& 16 & Y & 64.46\% \\
& & N & 62.99\% \\
& 32 & Y & 70.88\% \\
& & N & 69.41\% \\
& \textbf{64} & \textbf{Y} & \textbf{70.96\%} \\
& & N & 68.18\% \\
& 128 & N & 70.35\%
\\[1ex]
\hline
\centering Index Split
& 8 & Y & 56.61\% \\
& & N & 58.24\% \\
& 16 & Y & 69.33\% \\
& & N & 62.70\% \\
& 32 & Y & 68.34\% \\
& & N & 68.34\% \\
& 64 & Y & 72.31\% \\
& & N & 72.19\% \\
& \textbf{128} & \textbf{N} & \textbf{73.25\%} \\
[1ex]
\hline
\end{tabular}
\label{table:batch_ucf_performance}
\end{table}
By analysing the experimental results shown in Table \ref{table:batch_ucf_performance}, we have created Table \ref{table:m_mask_ucf_performance} with the best results obtained with the M-mask layer proposed by Yan \textit{et al.} \cite{yan2018spatial}. To provide a comparison, we have also included the outcome of the previous ST-GCN implementation by Zheng \textit{et al.} \cite{Zheng2019} in this table.
\begin{table}[ht]
\caption{UCF-101 performance using M-mask}
\centering
\begin{tabular}{c p{2.7cm} c}
\hline\hline
& \centering Method & Accuracy \\ [0.5ex]
\hline
ST-GCN & \centering Spatial Configuration Partitioning & 70.47\% \\
Zheng \textit{et al.} \cite{Zheng2019} & \centering Spatial Configuration Partitioning & 50.53\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Full Distance Split & 70.43\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Connection Split & 70.96\% \\
\textbf{Alsawadi and Rio \cite{Alsawadi2021}} & \centering \textbf{Index Split} & \textbf{72.31}\% \\ [1ex]
\hline
\end{tabular}
\label{table:m_mask_ucf_performance}
\end{table}
The model with the M-mask implementation that achieved the best accuracy was trained with a batch size of 64 and the index split partitioning strategy. It achieved a 1.84\% accuracy improvement with respect to the spatial configuration partitioning approach proposed by Yan \textit{et al.} in \cite{yan2018spatial}. Moreover, this model improves on the previous state-of-the-art results by 21.78\%.
\begin{table}[ht]
\caption{UCF-101 performance without M-mask}
\centering
\begin{tabular}{c p{2.7cm} c}
\hline\hline
& \centering Method & Accuracy \\ [0.5ex]
\hline
ST-GCN & \centering Spatial Configuration Partitioning & 70.72\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Full Distance Split & 68.59\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Connection Split & 70.35\% \\
\textbf{Alsawadi and Rio \cite{Alsawadi2021}} & \centering \textbf{Index Split} & \textbf{73.25\%} \\ [1ex]
\hline
\end{tabular}
\label{table:no_m_mask_ucf_performance}
\end{table}
On the other hand, Table \ref{table:no_m_mask_ucf_performance}
shows that the accuracy increased when the M-mask is not included in the ST-GCN architecture. Again, the index split partitioning strategy allowed the ST-GCN model to achieve the best accuracy, in this case with a batch size of 128. This solution improves on the spatial configuration partitioning approach proposed by Yan \textit{et al.} in \cite{yan2018spatial} by 2.53\%.\par
Therefore, we can reach the highest accuracy by using the index split partitioning approach on the ST-GCN architecture without the M-mask implementation. We evaluated the model every five epochs; the outcome of this model during training is shown in Fig. \ref{fig:ucf_index}. The evaluations on the training and test sets are shown by the red and blue curves, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/acc_UCF_Cus2NoIm_128_128_20211020104844.png}
\caption{Best UCF-101 Model Training Process.}
\label{fig:ucf_index}
\end{figure}
\subsection{HMDB-51}
The results of the different experiments on the HMDB-51 dataset are shown in Table \ref{table:batch_hmdb_performance}. Similar to the outcome in Table \ref{table:batch_ucf_performance}, in most of the experiments the accuracy tends to improve as the batch size increases.
\begin{table}[ht]
\caption{Experiments performance upon HMDB-51}
\centering
\begin{tabular}{c c c c}
\hline\hline
\centering Method & Batch size & M-mask & Accuracy \\ [0.5ex]
\hline
\centering Spatial C.P.
& 8 & Y & 37.34\% \\
& & N & 40.77\% \\
& 16 & Y & 44.39\% \\
& & N & 41.08\% \\
& 32 & Y & 43.89\% \\
& & N & 45.45\% \\
& \textbf{64} & Y & 45.64\% \\
& & \textbf{N} & \textbf{46.82\%} \\
& 128 & N & 44.64\% \\[1ex]
\hline
\centering Full Distance Split
& 8 & Y & 41.77\% \\
& & N & 33.23\% \\
& 16 & Y & 38.97\% \\
& & N & 42.08\% \\
& 32 & Y & 33.23\% \\
& & N & 45.51\% \\
& \textbf{64} & Y & 42.02\% \\
& & \textbf{N} & \textbf{45.82\%} \\
& 128 & N & 45.26\% \\[1ex]
\hline
\centering Connection Split
& 8 & Y & 23.63\% \\
& & N & 39.34\% \\
& 16 & Y & 43.27\% \\
& & N & 40.84\% \\
& 32 & Y & 40.52\% \\
& & N & 41.52\% \\
& 64 & Y & 32.29\% \\
& & N & 47.19\% \\
& \textbf{128} & \textbf{N} & \textbf{48.88\%} \\[1ex]
\hline
\centering Index Split
& 8 & Y & 38.97\% \\
& & N & 34.91\% \\
& 16 & Y & 35.47\% \\
& & N & 46.57\% \\
& 32 & Y & 43.20\% \\
& & N & 45.51\% \\
& \textbf{64} & \textbf{Y} & \textbf{47.69\%} \\
& & N & 43.39\% \\
& 128 & N & 46.51\% \\
[1ex]
\hline
\end{tabular}
\label{table:batch_hmdb_performance}
\end{table}
To the authors' knowledge, there is no previous application of the ST-GCN model to the HMDB-51 dataset. Hence, Table \ref{table:m_mask_hmdb_performance} only contains the results of the present study using the different partitioning strategies.
\begin{table}[ht]
\caption{HMDB-51 performance using M-mask}
\centering
\begin{tabular}{c p{2.7cm} c}
\hline\hline
& \centering Method & Accuracy \\ [0.5ex]
\hline
ST-GCN & \centering Spatial Configuration Partitioning & 45.64\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Full Distance Split & 42.02\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Connection Split & 45.89\% \\
\textbf{Alsawadi and Rio \cite{Alsawadi2021}} & \centering \textbf{Index Split} & \textbf{47.69\%} \\ [1ex]
\hline
\end{tabular}
\label{table:m_mask_hmdb_performance}
\end{table}
Table \ref{table:m_mask_hmdb_performance} contains the highest performance achieved with each partitioning strategy with the M-mask implementation on the HMDB-51 dataset. As indicated in bold, the highest value was achieved using the index split partition strategy, with a training batch size of 64. It reached more than a 2\% accuracy improvement with respect to the spatial configuration partitioning proposed by Yan \textit{et al.} in \cite{yan2018spatial}. Additionally, the connection split also outperformed the spatial configuration partitioning.
\begin{table}[ht]
\caption{HMDB-51 performance without M-mask}
\centering
\begin{tabular}{c p{2.7cm} c}
\hline\hline
&\centering Method & Accuracy \\ [0.5ex]
\hline
ST-GCN & \centering Spatial Configuration Partitioning & 46.82\% \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Full Distance Split & 45.82\% \\
\textbf{Alsawadi and Rio \cite{Alsawadi2021}} & \centering \textbf{Connection Split} & \textbf{48.88\%} \\
Alsawadi and Rio \cite{Alsawadi2021} & \centering Index Split & 46.57\% \\ [1ex]
\hline
\end{tabular}
\label{table:no_m_mask_hmdb_performance}
\end{table}
In the experiments without the M-mask implementation, shown in Table \ref{table:no_m_mask_hmdb_performance}, the partitioning strategy that achieved the highest accuracy was the connection split, trained with a batch size of 128. This model outperformed the ST-GCN architecture without the M-mask using the spatial configuration partitioning by more than 2\%. The training process of this model is shown in Fig. \ref{fig:hmdb_connection}. As in Fig. \ref{fig:ucf_index}, the evaluations on the training and test sets are shown in red and blue, respectively; we tested the trained model every five epochs.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/acc_HMDB_Cus1NoIm.png}
\caption{Best HMDB-51 Model Training Process.}
\label{fig:hmdb_connection}
\end{figure}
\section{Conclusion}
In this paper, we have proposed a novel action recognition method using the ST-GCN model by exploiting partitioning strategies: \textit{spatial configuration partitioning}, \textit{full distance split}, \textit{connection split}, and \textit{index split}. We have presented the first implementation of the ST-GCN framework on the HMDB-51 \cite{Kuehne2011a} dataset, achieving 48.88\% top-1 accuracy using the connection split partitioning approach. Our proposal outperforms the state of the art using the ST-GCN framework on UCF-101. Our results further show performance superiority over the most recent related work proposed in \cite{Alsawadi2021}, with much lower training and inference costs and structural simplicity.\par
The difference in the amount of training data impacted the final performance considerably. UCF-101 provides more than twice the number of training samples of HMDB-51. Therefore, the model learned with the former dataset is more robust to changes in the input data than the model obtained with the latter, as clearly demonstrated by the accuracy values in Tables \ref{table:batch_ucf_performance} and \ref{table:batch_hmdb_performance}.
As future work, we propose increasing the number of nodes in the neighbor sets to capture the relationships between joints that are distant from each other. We believe that the more details of each movement we can capture, the better we can model the action and increase the overall accuracy.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
One of the main objectives in computer vision is to develop systems that can ``see'' the world \cite{marr1982vision,tarr2002visual}. Here we tackle the problem of single image holistic understanding and 3D reconstruction, which is deeply rooted in decades of development in computer vision and photogrammetry but became practically feasible only recently thanks to the exploding growth in modeling, computing \cite{krizhevsky2012imagenet,szegedy2015going,simonyan2015very,he2016deep,goodfellow2016deep,xie2017aggregated}, and large-scale datasets \cite{deng2009imagenet,Lin2014COCO,Cordts2016Cityscapes,chang2015shapenet,fu20203dfuture}.
\begin{table*}[!ht]
\vspace{-2mm}
\begin{center}
\caption{\small Comparison of different 3D reconstruction methods.
Mesh R-CNN \cite{gkioxari2019meshrcnn} only allows a single instance per image during training, but it can output multiple objects at inference time. Nevertheless, effort is still required to enable the multi-object module in an end-to-end pipeline for training and evaluation.
}
\label{tab:method_comparison}
\scalebox{0.90}{
\begin{tabular}{ccccccc}
\hline
Method &3D & Single & Layout & Panoptic & Outdoor &Multiple \\
& & image & 3D & segmentation & scenes & objects \\
\hline
\cite{zhang2018genre,kanazawa2018end,groueix2018atlas,xu2019disn} & \cmark & \cmark & & & &\\
\cite{gkioxari2019meshrcnn} & \cmark & \cmark & & & & $\dagger$\\
\cite{han2004automatic} & \cmark & \cmark & & & \cmark & \\
\cite{Tulsiani17factored3d,zou2018layoutnet,huang2018cooperative,nie2020total3d} & \cmark & \cmark & \cmark & & & \cmark\\
\cite{Li2018megadepth} & \cmark & \cmark & & & \cmark &\\
\cite{Alhashim2018densedepth}&\cmark&\cmark&\cmark& &\cmark&\\
\cite{pollefeys1999self,pollefeys2008detailed} & \cmark & &\cmark & & \cmark & \cmark\\
\cite{Kirillov2018panoptic,Kirillov2019fpn,xiong19upsnet,Lazarow2019ocfusion} & & \cmark & & \cmark & \cmark & \cmark\\
\hline
{\small Panoptic3DParsing (ours)} & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark\\
\hline
\end{tabular}
}
\end{center}
\vspace{-3mm}
\end{table*}
We name our system single image panoptic 3D parsing (Panoptic3D). From a single natural RGB image, it jointly performs dense semantic labeling \cite{shotton2006textonboost,tu2008auto,long2015fully}, object detection \cite{Ren2015FasterRCNN}, instance segmentation \cite{he2017mask}, depth estimation \cite{Li2018megadepth, Alhashim2018densedepth}, object 3D shape reconstruction \cite{zhang2018genre, gkioxari2019meshrcnn}, and 3D layout estimation \cite{Tulsiani17factored3d}.
Figure \ref{fig:teaser} gives an illustration of the pipeline, where a 3D scene is estimated from a single-view RGB image with the background layout (``stuff'') segmented and the individual foreground instances (``things'') detected, segmented, and fully reconstructed. A work closely related to our Panoptic3D is the recent Total3DUnderstanding method \cite{nie2020total3d}, where the 3D layout and instance 3D meshes are reconstructed from an indoor input image. However, Total3DUnderstanding \cite{nie2020total3d} does not perform panoptic segmentation \cite{Kirillov2018panoptic}/image parsing \cite{tu2005image} and is limited to indoor scenes.
We briefly discuss the literature from two main angles: 1). 3D reconstruction, particularly from a single-view RGB image; and 2). image understanding, particularly for panoptic segmentation \cite{Kirillov2018panoptic} and image parsing \cite{tu2005image}.
3D reconstruction is an important area in photogrammetry \cite{linder2009digital,colomina2014unmanned} and computer vision \cite{hartley2003multiple,ma2012invitation, ullman1979interpretation,brown2003advances,pollefeys1999self,kutulakos2000theory,pollefeys2008detailed,szeliski2010computer}.
We limit our scope to single RGB image input for 3D instance reconstruction \cite{wu2015shapenets,zhang2018genre,wang2018pixel2mesh,groueix2018atlas,kanazawa2018end,xu2019disn,chen2020topology} and 3D layout generation \cite{Tulsiani17factored3d,zou2018layoutnet}.
There is a renewed interest in performing holistic image and object segmentation (image parsing \cite{tu2005image}), called panoptic segmentation \cite{Kirillov2018panoptic,Kirillov2019fpn,xiong19upsnet,Lazarow2019ocfusion}, where the background regions (``stuff'') are labeled and the foreground objects (``things'') are detected. Our panoptic 3D parsing method is a system that provides holistic 3D scene reconstruction and understanding for an input image. It includes multiple tasks such as depth estimation, panoptic segmentation, and object instance reconstruction.
In Section \ref{sec:related}, we discuss the motivations for the individual modules. A comparison between the existing methods and ours is illustrated in Table \ref{tab:method_comparison}. The contributions of our work are summarized below.
\begin{itemize}
\setlength\itemsep{0mm}
\item We present a stage-wise system for panoptic 3D parsing, Panoptic3D (stage-wise), which combats the issue that full annotations for panoptic segmentation, depth, and 3D instances are absent. To the best of our knowledge, this is the first system of its kind to perform joint panoptic segmentation and holistic 3D reconstruction for generic indoor and outdoor scenes from a single RGB image.
\item In addition, we have developed an end-to-end pipeline for panoptic 3D parsing, Panoptic3D (end-to-end), where datasets have complete segmentation and 3D reconstruction ground-truth annotations.
\end{itemize}
Our experiments show encouraging results for indoor scenes \cite{fu20203dfuture} and natural outdoor scenes \cite{Lin2014COCO,Cordts2016Cityscapes}.
\section{Related Work \label{sec:related}}
Table \ref{tab:method_comparison} shows a comparison with related work. Our panoptic 3D parsing framework has the most complete set of properties and is more general than the competing methods. We discuss the related work in detail below.
\noindent{\bf Single-view 3D scene reconstruction}.
Single image 3D reconstruction has a long history \cite{roberts1963machine,han2004automatic,Tulsiani17factored3d,zou2018layoutnet,huang2018cooperative,nie2020total3d}. The work in \cite{huang2018cooperative} jointly predicts a 3D layout bounding box, 3D object bounding boxes, and camera intrinsics without any geometric reconstruction for indoor scenes. Factored3D \cite{Tulsiani17factored3d} is closely related to our work; it combines an indoor scene layout (amodal depth) with 3D instance reconstructions without much abstraction. Still, no label is predicted for the scene layout (``stuff'') \cite{Tulsiani17factored3d}, and the instance reconstruction tends to overfit the canonical shapes of known categories. Total3DUnderstanding \cite{nie2020total3d} infers a box layout and has produced 3D reconstruction results on natural indoor images. However, as discussed before, these methods do not perform holistic 3D reconstruction for natural outdoor scenes or panoptic segmentation in general.
\noindent{\bf Single image depth estimation}.
David Marr pioneered the 2.5D depth representation \cite{marr1982vision}. Depth estimation from a single image can be performed in a supervised way and has been extensively studied in the literature \cite{Saxena2009make3d,Eigen2014depth}.
Development in deep learning \cite{long2015fully} has expedited the progress for depth estimation \cite{Bansal2016normal,Li2018megadepth}.
In our work, we adopt a relatively lightweight inverse depth prediction module from Factored3D \cite{Tulsiani17factored3d} and regress the loss jointly with 3D reconstruction and panoptic segmentation.
\noindent {\bf Single-view single object reconstruction}.
Single image single object 3D reconstruction can typically be divided into volume-based \cite{wu2015shapenets,zhang2018genre,chen2020topology}, mesh-based \cite{wang2018pixel2mesh,groueix2018atlas,kanazawa2018end}, and implicit-function-based \cite{chen2019learning, xu2019disn} methods.
In this paper, we adopt the detection and shape reconstruction branch from Mesh R-CNN \cite{gkioxari2019meshrcnn} for multi-object prediction. Building on top of it, we can perform supervised end-to-end single image panoptic 3D parsing. We also adopt GenRe \cite{zhang2018genre}, an unseen class reconstruction method, for multi-object reconstruction on natural images when well-aligned ground truth 3D mesh models are not available.
\noindent{\bf Panoptic and instance segmentation}. Panoptic segmentation \cite{Kirillov2018panoptic} or image parsing \cite{tu2005image} combines semantic segmentation and instance detection/segmentation.
In our work, we build our panoptic head by referencing the end-to-end structure of UPSNet \cite{xiong19upsnet}. Additionally, we predict the 3D reconstruction of instances for each corresponding instance mask. However, instance masks in panoptic segmentation are modal, i.e., occluded; in comparison, amodal instance segmentation predicts un-occluded instance masks for ``things''.
In this work, we generate both amodal and panoptic segmentation annotations from the 3D-FRONT dataset. This enables the network to jointly perform 3D ``things'' reconstruction and panoptic segmentation. In the stage-wise pipeline, we utilize the work of Zhan \etal \cite{zhan2020self} to better assist 3D reconstruction on natural images.
\section{Method}
We design our networks with the following goals in mind: 1). The network should be generalizable to both indoor and outdoor environments; 2). Datasets with various levels of annotations should be able to utilize the framework with simple replacement; 3). The segmentation masks should align with the reconstruction from the input view.
We will first introduce our stage-wise system, Panoptic3D (stage-wise) and show that it can process natural images without corresponding 3D annotations in training. Then, we will present our end-to-end network, Panoptic3D (end-to-end).
\subsection{Stage-wise Panoptic3D}
\begin{figure}[!ht]
\vspace{-2mm}
\centering
\includegraphics[width=\linewidth]{hybrid_framework.png}
\caption{Our stage-wise system, Panoptic3D (stage-wise). We adopt DenseDepth \cite{Alhashim2018densedepth} for depth prediction, UPSNet \cite{xiong19upsnet} for panoptic segmentation, a de-occlusion network \cite{zhan2020self} for amodal mask completion, and GenRe \cite{zhang2018genre} to perform instance-based single image 3D reconstruction. The alignment module in Panoptic3D (stage-wise) outputs the image on the bottom.}
\label{fig:networkstructure2}
\end{figure}
We present our stage-wise system, Panoptic3D (stage-wise), in Figure~\ref{fig:networkstructure2}. We design this system for natural image datasets that contain well-annotated panoptic segmentation information but lack 3D information, such as COCO \cite{Lin2014COCO} and Cityscapes \cite{Cordts2016Cityscapes}. This stage-wise system contains four main parts: 1). Instance and panoptic segmentation network. 2). Instance amodal completion network. 3). Single object 3D reconstruction network for unseen classes. 4). Single-image depth prediction network.
We take advantage of the state-of-the-art panoptic segmentation system UPSNet \cite{xiong19upsnet}, the scene de-occlusion system of \cite{zhan2020self}, the depth prediction system DenseDepth \cite{Alhashim2018densedepth}, and the unseen class object reconstruction network of \cite{zhang2018genre}, and integrate them into a single pipeline.
The Panoptic3D (stage-wise) framework takes an RGB image and predicts the panoptic 3D parsing of the scene as a point cloud (for ``stuff'') and meshes (for ``things''). The implementation details are as follows: the network first takes the panoptic results from UPSNet and the depth estimation from DenseDepth. It then passes the modal masks to the de-occlusion network to acquire amodal masks, and GenRe reconstructs the instance meshes based on the amodal masks. The module then maps panoptic labels to depth pixels and uses empirically estimated camera intrinsics to inverse-project the depth into point clouds.
Since GenRe \cite{zhang2018genre} only predicts normalized meshes centered at the origin, the final module aligns individual shapes using the depth estimation in the z-direction and the mask in the x-y directions. The module takes the mean of the 98th and the 2nd percentiles of the filtered per-pixel depth predictions within the predicted mask region to estimate the z-center depth of each object. Finally, it places the meshes and the depth point cloud in the same coordinate system to render the panoptic 3D parsing results. The overall inference time is 2.4 seconds per image on one NVIDIA Titan X GPU.
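The percentile-based z-center estimate can be sketched as follows (illustrative; the filtering step is omitted):
\begin{verbatim}
import numpy as np

def estimate_z_center(depth, mask):
    # Illustrative sketch. depth: per-pixel depth prediction
    # (H x W); mask: boolean instance mask of the same size.
    vals = depth[mask]
    return 0.5 * (np.percentile(vals, 98) + np.percentile(vals, 2))
\end{verbatim}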
\subsection{End-to-end Panoptic3D}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.9\textwidth]{end-2-end.png}
\caption{Network architecture for our Panoptic3D (end-to-end) pipeline. dz: means depth extent. The red stop sign indicates that only predictions with centered ground truth shapes are used for regression during training. }
\label{fig:networkstructure1}
\vspace{-2mm}
\end{figure*}
The overview of the end-to-end network structure, Panoptic3D (end-to-end), is shown in Figure~\ref{fig:networkstructure1}. Similar to Panoptic3D (stage-wise), Panoptic3D (end-to-end) also has four main components: 1). instance segmentation head. 2). multi-object training enabled shape heads. 3). panoptic segmentation head. 4). ``stuff'' depth and relative object z center prediction branch. The entire network is trained end-to-end and can jointly predict amodal instance, semantic, and panoptic segmentation, ``stuff'' depth, and ``things'' reconstruction in 3D. Our design ideas are as follows.
For the panoptic 3D prediction, we predict ``stuff'' depth instead of a box representation because the latter does not generalize easily to scenes with other ``stuff'' categories, such as windows, doors, and rugs, and does not apply to outdoor environments. Taking advantage of the advanced development in 2D panoptic segmentation, we first predict the 2D panoptic segmentation and then align the ``stuff'' segmentation with the depth prediction.
We anticipate that amodal ``stuff'' prediction would significantly improve the panoptic 3D parsing task in future work.
For multi-object 3D reconstruction, we first enable multi-object training and evaluation for the baseline network. For joint training with panoptic segmentation, we mainly address the following two challenges: 1). With multiple objects in a scene, mesh shapes that are too close to the camera may have a negative z-center, which the end-to-end detection and reconstruction baseline model does not tolerate by design. 2). For objects that appear to be cut off by the camera view (non-centered/boundary objects) or are too close to the camera, the transformation to camera coordinates deforms the ground truth 3D voxels and meshes into shapes that contain infinitely far points, preventing the network from converging. One approach would be to cut the ground truth shapes to be within the camera frustum; however, this may result in unnatural edge connections, and the preprocessing step is time-consuming. Instead, we introduce a partial loss. We first detect and mark objects occluded by the image boundaries (non-centered/boundary objects) and exclude them from shape-related regressions. Specifically, we use an indicator function $\mathds{1}(\cdot)$ that returns 1 for centered objects and 0 for boundary objects. The final loss per batch is defined as $\mathcal{L} = \mathcal{L}_{mask}$\cite{he2017mask} $+ \mathcal{L}_{box}$ \cite{he2017mask} $+ \mathcal{L}_{class}$ \cite{he2017mask} $+ \mathcal{L}_{panoptic}$ \cite{xiong19upsnet}$ + \mathcal{L}_{semantic}$ \cite{Kirillov2019fpn} $+ \mathcal{L}_{depth}$ \cite{Tulsiani17factored3d} $+ \mathds{1} \cdot (\mathcal{L}_{dz}$ \cite{gkioxari2019meshrcnn} $+ \mathcal{L}_{zc} + \mathcal{L}_{voxel}$ \cite{gkioxari2019meshrcnn} $+ \mathcal{L}_{mesh}$ \cite{gkioxari2019meshrcnn}$)$.
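A minimal PyTorch-style sketch of the partial loss is given below (illustrative names; the per-instance shape losses are assumed to be stacked into a single tensor):
\begin{verbatim}
import torch

def partial_shape_loss(per_object_losses, centered):
    # Illustrative sketch. per_object_losses: tensor of
    # per-instance shape losses (dz + z-center + voxel + mesh);
    # centered: 0/1 indicator marking objects not cut off by the
    # image boundary.
    kept = centered.float()
    return (per_object_losses * kept).sum() / kept.sum().clamp(min=1)
\end{verbatim}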
For depth, we use a simple U-Net structure to predict the inverse ``stuff'' depth, adopted from Factored3D \cite{Tulsiani17factored3d} because it is relatively lightweight. In addition, to assist the positioning of objects relative to their environment, we add an inverse z-center prediction head to align the predicted objects with the predicted layout or depth map in 3D. The z-center is defined as $z_c$ in $\bar{dz} = \frac{d_z}{z_c} \cdot \frac{f}{h}$, where $\bar{dz}$ is the scale-normalized depth extent \cite{gkioxari2019meshrcnn}, $h$ is the height of the object's bounding box, $f$ is the focal length, and $d_z$ is the depth extent. Our z-center head predicts the inverse of $z_c$, the object's center along the z-axis of the camera coordinate system.
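Given these predictions, the metric depth extent follows by inverting the relation above (an illustrative sketch):
\begin{verbatim}
def depth_extent(dz_norm, z_center, box_height, focal):
    # Invert dz_norm = (d_z / z_c) * (f / h):
    #   d_z = dz_norm * z_c * h / f   (illustrative sketch)
    return dz_norm * z_center * box_height / focal
\end{verbatim}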
In summary, the Panoptic3D (end-to-end) network uses a ResNet and an FPN network as our backbone for detection, along with an FPN-based semantic head to assist the 2D panoptic prediction, an inverse z center head in predicting object centers relative to inverse depth prediction produced by the depth branch, and enables multi-object training and evaluation for the shape heads.
\begin{table*}[!ht]
\centering
\caption{Available datasets comparison. More comparisons are available in \cite{fu20203dfuture}. The last row shows the panoptic 3D 3D-FRONT dataset rendered and annotated by us.}
\label{tab:inhouse_dataset}
\scalebox{0.9}{
\begin{tabular}[width=\linewidth]{cccccccc}
\hline
Dataset & Instance & Semantic & Panoptic &Depth & 3D ``things'' & 3D ``stuff'' & Alignment\\
\hline
SUN-RGBD\cite{song2014sunrgbd} &\cmark&\cmark&-&\cmark&0&-&-\\
AI2Thor\cite{Kolve2017AI2THORAn}&\cmark&\cmark&\cmark&\cmark&100&\cmark&\cmark\\
ScanNet\cite{dai2017scannet}&\cmark&\cmark&-&\cmark&14225/1160\cite{Avetisyan2019scan2cad}&-&approx.\cite{Avetisyan2019scan2cad}\\
3D-FUTURE\cite{fu20203dfuture}&\cmark&-&-&-&9992&-&\cmark\\
3D-FRONT\cite{fu20203dfuture}&-&-&-&-&9992&\cmark&\cmark\\
Panoptic 3D 3D-FRONT&\cmark&\cmark&\cmark&\cmark&2717&\cmark&\cmark\\
\hline
\end{tabular}
}
\end{table*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.9\linewidth]{cityscapes.png}
\caption{Qualitative results of Panoptic3D (stage-wise) on Cityscapes images \cite{Cordts2016Cityscapes}. Results are shown from an off-angle view to highlight the difference between the depth and the 3D panoptic results. We sampled point clouds from the resulting object meshes for better visualization of the 3D effect. We show that our alignment module outputs a visually correct alignment between the ``things'' reconstruction and the ``stuff'' depth point cloud.}
\label{fig:cityscapes}
\vspace{-4mm}
\end{figure*}
\section{Datasets}
For Panoptic3D (stage-wise), we show qualitative results for natural datasets such as COCO and Cityscapes, where well-annotated panoptic segmentation labels are provided.
To the best of our knowledge, no available dataset is accurately annotated with amodal instance segmentation, panoptic segmentation, 2.5D information for ``stuff'', and 3D meshes for ``things''.
Most natural image datasets either do not provide panoptic segmentation annotations or suffer from a low diversity or quantity of corresponding 3D mesh annotations. ScanNet \cite{dai2017scannet} has a diverse environment, a large number of images annotated with instance/semantic segmentation, and annotations for corresponding 3D meshes. However, the mesh annotations in ScanNet do not align well with their masks. Additionally, our attempt to generate panoptic segmentation information for ScanNet suffered from significant human errors in the semantic and instance segmentation annotations. Therefore, we could not use ScanNet for the current end-to-end supervised system. We are also aware of other 3D datasets such as SUN-RGBD \cite{song2014sunrgbd}, AI2Thor \cite{Kolve2017AI2THORAn}, Scan2CAD \cite{Avetisyan2019scan2cad}, 3D-FUTURE \cite{fu20203dfuture} and OpenRooms \cite{Li2020OpenRoomsAE}. We show in Table~\ref{tab:inhouse_dataset} that the natural datasets, such as SUN-RGBD and ScanNet, do not precisely align 3D ``stuff'' or ``things''. For a virtual dataset, even though we can extract all the information from AI2Thor, the number of shapes was too limited for shape reconstruction training during the early stages of our project. OpenRooms has not yet released its 3D CAD models.
Thanks to the availability of the 3D-FRONT dataset \cite{fu20203dfuture}, we can generate a first version of the panoptic 3D parsing dataset with COCO-style annotations, including 2D amodal instance and panoptic segmentation, modal and amodal (layout) depth, and corresponding 3D mesh information for every image. Following the 3D-FUTURE dataset \cite{fu20203dfuture}, we adopt 34 instance categories representing all of the countable objects as ``things'' and add three categories representing walls, ceilings, and floors as ``stuff'', as no other ``stuff'' categories exist in the first release.
For rendering, the first release of the 3D-FRONT dataset does not provide textures and colors for ``stuff'' objects, so we adopt textures from the SceneNet RGB-D dataset \cite{McCormac2017scenenet}. For the lighting, we place a point light at the renderer's camera position to make sure the scene is fully lit. We use the official Blender \cite{blender} script with the officially released camera angles for this work.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.7\linewidth]{COCOv2.png}
\caption{Qualitative results for Panoptic3D (stage-wise) on COCO images \cite{Lin2014COCO}. Results are taken from an off-angle shot to show the difference between depth and 3D panoptic results. We sampled the point cloud from the predicted object meshes for better visualization of 3D structures. }
\label{fig:COCO}
\vspace{-4mm}
\end{figure*}
We use the first 1620 houses as the train set and the last 200 houses as the test set for the experiments. We first mark all objects that appear both in the train and test sets as invalid during training, ensuring that the 3D models are disjoint between the train and test sets. We only train and evaluate our mesh prediction on non-boundary (or relatively centered) objects. After filtering out images with no valid things, there are 7734 images in the train set and 1086 in the test set. There are 55216 instances in the train set for the panoptic segmentation task and 7548 instances in the test set. For the 3D reconstruction task, there are 1559 unique models in the train set and 1158 unique models in the test set. The final dataset covers 33 categories for ``things'' during training and 31 types of ``things'' during evaluation.
\section{Experiment Details and Evaluation}
\begin{table*}[!ht]
\centering
\caption{\small \textbf{Comparison of re-projected 2D panoptic qualities on a subset of COCO indoor images between Total3DUnderstanding and our Panoptic3D (stage-wise) system.} For Total3DUnderstanding, the re-projection uses the inferred camera extrinsics, and we convert the predicted layout box into meshes for the wall, ceiling, and floor. Our Panoptic3D (stage-wise) method outperforms Total3DUnderstanding on every metric.}
\label{tab:total3dpanoptic}
\vspace{-2mm}
\scalebox{0.85}{
\begin{tabular}{c|ccc|ccc|ccc}
\hline
& \multicolumn{3}{c}{PQ $\uparrow$} & \multicolumn{3}{c}{SQ $\uparrow$} & \multicolumn{3}{c}{RQ $\uparrow$} \\
Methods & IOU@.5 & IOU@.4 & IOU@.3 & IOU@.5 & IOU@.4 & IOU@.3 & IOU@.5 & IOU@.4 & IOU@.3 \\
\hline
Total3DUnderstanding \cite{nie2020total3d} & 0.043 & 0.06 & 0.077 & 0.046 & 0.063 & 0.081 & 0.065 & 0.101 & 0.15\\
Panoptic3D (stage-wise) (ours) & \textbf{0.168} & \textbf{0.176} & \textbf{0.181} & \textbf{0.177} & \textbf{0.184} & \textbf{0.181} & \textbf{0.21} & \textbf{0.220} & \textbf{0.226}\\
\bottomrule
\end{tabular}
}
\end{table*}
\subsection{Stage-wise Panoptic3D}
Datasets such as COCO and Cityscapes have well-annotated panoptic segmentation labels but lack annotations of 3D shapes and depth information. Figure~\ref{fig:networkstructure2} shows the stage-wise system pipeline. With UPSNet \cite{xiong19upsnet} as the backbone, we use a de-occlusion network \cite{zhan2020self} for amodal mask prediction, a depth network \cite{Alhashim2018densedepth}, and an alignment module for scene alignment. Additionally, we use the predicted amodal masks and the input RGB image for unseen class instance reconstruction \cite{zhang2018genre}. The outputs of these networks are then passed through an alignment module that produces the 3D panoptic parsing results.
For the Cityscapes dataset, we compute the camera intrinsics with FOV = 60, height = 1024, and width = 2048 \cite{Cordts2016Cityscapes}. Since the COCO dataset does not provide camera information, we estimate its FOV to be 60 based on heuristics and use an image size of $480 \times 640$, which is compatible with every sub-module of the stage-wise system.
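The intrinsics follow from the pinhole model; a sketch, assuming a horizontal FOV and the principal point at the image center:
\begin{verbatim}
import math

def intrinsics_from_fov(fov_deg, width, height):
    # Illustrative sketch: focal length in pixels from the
    # horizontal field of view, principal point at image center.
    f = 0.5 * width / math.tan(math.radians(fov_deg) / 2)
    return [[f, 0.0, width / 2.0],
            [0.0, f, height / 2.0],
            [0.0, 0.0, 1.0]]
\end{verbatim}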
In Figure~\ref{fig:cityscapes} and Figure~\ref{fig:COCO}, we show qualitative measures for Cityscapes and COCO, respectively. The pipeline has demonstrated qualitatively good results for both indoor and outdoor natural images.
Using the COCO dataset, we can project the panoptic 3D results back to the input view and evaluate them against the ground truth 2D panoptic annotations to show the system's image parsing capability. We acquired around 300 images from the COCO test set whose panoptic labels overlap with the categories handled by Total3DUnderstanding. In Table~\ref{tab:total3dpanoptic}, we show that our pipeline outperforms Total3DUnderstanding on the re-projected panoptic segmentation metrics.
\subsection{End-to-end Panoptic3D}
We train our networks with a learning rate of 0.005 for 30000 iterations. We use PyTorch for code development and 4 GeForce GTX TITAN X GPUs for the ablation studies; we switch to 8 GPUs for the larger architectures with depth/layout predictions. The experiments with the largest model take 16 hours on 8 GeForce GTX TITAN X GPUs. Our input size for the detection backbone is $1024 \times 1024$ instead of the original $800 \times 800$ used by Mesh R-CNN because the depth network requires the input to be divisible by 64. The input image is resized to $512 \times 512$ for the depth branch. The final network contains 13 losses: the semantic segmentation pixel-wise classification loss, panoptic segmentation loss, RPN box classification loss, RPN box regression loss, instance box classification loss, instance box regression and segmentation losses, depth extent loss, inverse depth center loss, voxel loss, mesh loss, depth loss, and ``stuff'' depth loss. The partial loss is used for the depth extent, object inverse depth, voxel, and mesh losses.
\subsection*{Shape Reconstruction}
For the baseline model, we enable multi-instance training and allow shape regression only on centered objects, on top of the detection and reconstruction network structures used in \cite{gkioxari2019meshrcnn}.
Ablations on partial-loss training and joint training with other heads are included in Table~\ref{tab:baselines} and Table~\ref{tab:shape}. We find that utilizing more samples per image for training the instance head can help improve mesh prediction, yielding a higher $AP^{mesh}$ in Table~\ref{tab:baselines}. In Table~\ref{tab:shape}, we show that adding the panoptic, z-center, depth, and layout heads significantly improves the average precision for boxes and masks, with only a slight improvement for meshes when used together. Notice that adding the z-center loss, starting from model (b), does not significantly boost the quantitative metrics; however, it provides considerably better qualitative visualization in Figure~\ref{fig:3dfront}. Compared to row 6 (without the z-center loss), row 7 (with the z-center loss) shows a layout more consistent with the input RGB image; in row 6, the furniture clusters around a similar depth.
\begin{table}[!ht]
\centering
\caption{Baseline model comparisons. Our baseline model (1) is a multi-object training and evaluation enabled detection and reconstruction network \cite{gkioxari2019meshrcnn} trained on centered objects in all three heads (instance, voxel, mesh). Model (2) is the baseline model trained on all things with a partial loss on the voxel and mesh heads. N indicates the number of annotations used to regress each corresponding head during training. }
\label{tab:baselines}
\scalebox{0.65}{
\begin{tabular}{c|ccc|ccc}
\hline
Baseline&N&N&N&$AP^{box}$ & $AP^{mask}$ & $AP^{mesh}$ \\
&instances&voxels&meshes&&&\\
\hline
(1)&16175&16175&16175& 37.8 $\pm$ 1.4& 34.2 $\pm$ 1.9& 5.9 $\pm$ 0.4\\
(2)&55216&16175&16175& 56.5 $\pm$ 0.9& 52.6 $\pm$ 1.1& 8.9 $\pm$ 1.5\\
\hline
\end{tabular}
}
\end{table}
\begin{table}[ht]
\centering
\caption{Ablation studies for the Panoptic3D (end-to-end) model on the panoptic 3D 3D-FRONT dataset. Our ablation study is compared with the baseline (2) in Table~\ref{tab:baselines}. The results show that during joint training, the network can maintain its $AP^{mesh}$ performance while improving on $AP^{box}$ and $AP^{mask}$.}
\label{tab:shape}
\scalebox{0.66}{
\begin{tabular}{c|cccc|ccc}
\hline
&panoptic &z-center & depth& layout &$AP^{box}$ & $AP^{mask}$ & $AP^{mesh}$\\
\hline
(2)& -&-&-&- & 56.5 $\pm$ 0.9& 52.6 $\pm$ 1.1& 8.9 $\pm$ 1.5 \\
\hline
(a) & \cmark&&& &56.7 $\pm$ 2.2& 55.8 $\pm$ 2.7 & 8.3 $\pm$ 0.6 \\
(b) & \cmark&\cmark&&& 57.0 $\pm$ 1.7& 55.5 $\pm$ 1.2 & 8.1 $\pm$ 1.5\\
(c) & \cmark&\cmark&\cmark& &59.4 $\pm$ 0.6&56.7 $\pm$ 2.4 & 9.0 $\pm$ 1.2\\
\hline
ours &\cmark &\cmark&\cmark&\cmark&\textbf{60.0 $\pm$ 1.4}&\textbf{56.0 $\pm$ 1.4}&\textbf{9.0 $\pm$ 1.3} \\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{3dfront.png}
\caption{Qualitative results for Panoptic3D (end-to-end) on the 3D-FRONT dataset \cite{fu20203dfuture}. We show our predicted panoptic results (row 3) compared to the panoptic ground truth (row 2), and our reconstruction results (rows 5 to 7) compared to the reconstruction ground truth (row 4). Row 8 shows the final panoptic 3D results, where we sample point clouds from the meshes for better visualization. Comparing rows 6 and 7, row 7 shows object placement more consistent with the input RGB image. Our final results show that the shape predictions align well with the predicted ``stuff'' depth.}
\label{fig:3dfront}
\end{figure*}
\subsection*{Panoptic Segmentation}
We compare our panoptic segmentation results with the original UPSNet panoptic segmentation results as one of our baselines. Although we use a panoptic feature pyramid network \cite{Kirillov2019fpn} instead of the FPN network with deformable CNNs from UPSNet, the results are comparable. We notice a slight decrease when we switch the masks for the instance head from modal to amodal, as amodal masks may pose challenges to the panoptic head. As for joint training, we show in Table~\ref{tab:panoptic} that the results are comparable with our baseline. We use the metrics PQ, SQ, and RQ following the panoptic segmentation paper \cite{Kirillov2018panoptic}.
We are aware that our ``stuff'' categories form an easy set for panoptic segmentation tasks. 3D-FRONT has published new releases since we began the project, so in future studies we will attempt to incorporate more categories, such as doors and windows, with better rendering effects. However, our dataset does provide the first version of any such dataset that enables end-to-end training on the task of panoptic 3D parsing. Based on our results, challenges remain even with the current ``stuff'' categories.
\begin{table}[ht]
\centering
\caption{Ablation studies for the Panoptic3D (end-to-end) model on the panoptic 3D 3D-FRONT dataset. The model numbers here correspond to models in Table~\ref{tab:shape}. Here we show that the panoptic performance is comparable to the baseline performance with joint training.}
\label{tab:panoptic}
\scalebox{0.8}{
\begin{tabular}{c|cccc|ccc}
\hline
&panoptic &z-center & depth& layout &PQ & SQ & RQ\\
\hline
(a) & \cmark&&& &46.4&76.8&54.0 \\
(b) & \cmark&\cmark&&& 46.0& 75.9 &53.4 \\
(c) & \cmark&\cmark&\cmark& &47.4&76.1 & 55.2\\
\hline
ours &\cmark &\cmark&\cmark&\cmark&46.9&75.7&54.4 \\
\hline
\end{tabular}
}
\end{table}
\subsection*{Depth and Layout predictions}
We adopt the U-Net structure from Factored3D and jointly train the depth head with the rest of our pipeline. We find that regressing the layout depth alone is an easier task for the network than joint training; joint training of the layout depth loss with the other losses appears to be a challenging problem. Adding a cross-consistency loss \cite{zamir2020consistency} between layout depth and normals does not easily improve the depth performance. Nonetheless, adding the layout depth loss appears to help the network perform better in general, in both the shape and panoptic metrics, as shown in Table~\ref{tab:shape} and Table~\ref{tab:panoptic}.
\vspace{-2mm}
\section{Conclusions}
\vspace{-2mm}
This paper presents a framework that aims to tackle panoptic 3D parsing for a single image in the wild. We demonstrate qualitative results for natural images from datasets such as Cityscapes \cite{Cordts2016Cityscapes} (shown in Figure~\ref{fig:cityscapes}) and COCO \cite{Lin2014COCO} (shown in Figure~\ref{fig:COCO}) under the stage-wise system. Additionally, an end-to-end pipeline that can be trained with full annotations is also proposed.
For societal impact, we are proposing a new task in computer vision, Panoptic 3D Parsing (Panoptic3D), that is concerned with a central goal of computer vision: joint single-image foreground object detection and segmentation, background semantic labeling, depth estimation, 3D reconstruction, and 3D layout estimation. A successful Panoptic3D system can find applications in a wide range of domains beyond computer vision, such as mapping, transportation, and computer graphics. However, Panoptic3D is a learning-based system and may have bias introduced at various training stages. Careful justification and adoption of the system in appropriate, regulated tasks are needed.\\
\noindent {\bf Limitations}: We use a fixed FOV of 60 degrees and fixed image sizes for Cityscapes, COCO, and 3D-FRONT. Hence, the estimated 3D scene can only provide a rough ordering of things and stuff in terms of distance to the camera plane. We use a single light source for the 3D-FRONT image rendering, which results in artificial lighting in the training images. Therefore, the generalization capability of models trained on this dataset might not be strong. We expect the model to generalize better under more natural lighting conditions, which is to be verified in the next step with a new dataset.
\section{Acknowledgments}
Part of the work was done during Sainan Liu's internship at Intel.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Miniaturization of energy-information processing devices is indisputably a cornerstone of today's technology. The growing ability to control and design systems ``at the bottom'' makes it possible to construct micro machines that transfer heat and work similarly to well-known macroscopic heat engines. However, thermodynamics at this scale faces two significant phenomena unobserved in macroscopic reality. Firstly, the law of large numbers does not hold for small systems, which results in fluctuations of thermodynamic quantities comparable to their average values. Secondly, in the micro-world, quantum effects may play a significant role. In particular, non-classical contributions coming from quantum interference can appear in the statistics of measured quantities.
Quantum thermodynamics is a program of incorporating these two features into one universal framework.
The most significant milestone in incorporating the intrinsic randomness was the formulation of \emph{fluctuation theorems} \cite{Bochkov1977, Jarzynski1997, Crooks1999, Esposito2009, Campisi2011}, which are the Laws of Thermodynamics expressed in terms of probability distributions. For quantum systems, the best-known concept of how to construct such a distribution is the \emph{two-point measurement scheme} (TPM) \cite{Talkner2007, Campisi2011, Esposito2009}, leading to fluctuation theorems for work or heat \cite{Jarzynski2004}. However, in this approach, quantum coherences are fundamentally rejected due to the invasive nature of the projective measurement. Hence, in order to incorporate quantum interference, other frameworks have been proposed, like the idea of the \emph{work operator} \cite{Yukava2000, Allahverdyan2004work}. These attempts can indeed include quantum coherences; nevertheless, the obtained probability distributions are inconsistent with the fluctuation theorems if applied to classical states. Later this incompatibility was rigorously formulated in the form of a no-go theorem \cite{Perarnau2017, Baumer2018}: any generalized measurement of two-point observables, leading to a properly defined probability distribution and incorporating coherences, is incompatible with fluctuation theorems if applied to classical states.
In the light of this result, another proposed solution (to quantitatively describe fluctuating quantities for coherent states) is to abandon the positivity of probability distributions and replace them with possibly negative-valued \emph{quasi-probabilities} \cite{Allahverdyan2014, Solinas2015, Solinas2016, Miller2017, Lostaglio2018, Levy2020}. In principle, these should recover the statistics enriched by interference terms, and for classical states, they should converge to distributions obeying the classical fluctuation theorems.
Despite the above obstacles, the problem of reconciling fluctuation theorems with quantum coherence also has another side of equally fundamental importance. In all of the above-mentioned frameworks attempting to formulate a (quasi) distribution of work, the system on which the work is done (e.g., a load) is implicit, assumed, in general, to be an external classical field. While this is a good approximation in many cases, it is not adequate in plenty of experimental situations, where the energy-storage device should be treated autonomously within a fully quantum-mechanical framework. We stress that this is unavoidable if one wants to incorporate interference effects since the processing of coherences of the system strongly depends on coherences of the work reservoir. In particular, treating the energy-storage device explicitly leads to the so-called \emph{work-locking} \cite{Korzekwa2016, Lobejko2021} (i.e., an inability to extract work from coherences), which non-autonomous frameworks cannot even capture.
The problem of a proper model of the work reservoir is as hard as the problem of a correct work probability distribution. Nevertheless, we have an essential hint on how to proceed since, in the first place, we need to make it consistent with (classical) fluctuation theorems. Here, the most crucial insight is realizing that the probability distribution constructed from the two-point measurements relies solely on the energy differences, rather than their absolute values \cite{Aberg2018, Alhambra2016}, which presupposes a kind of symmetry. Indeed, if an explicit work reservoir is present, this refers to the translational invariance within the space of its energy states. The concept of such a device, called the \emph{quantum weight}, was first proposed in \cite{Brunner2012, Skrzypczyk2014}.
Consequently, we possess a framework with an explicit work reservoir that recreates all of the fluctuation theorems just by this symmetry (cf. \cite{Bartosik2021}, where corrections for violated translational invariance are discussed). Moreover, it has also been shown that the optimal work extracted by the quantum weight is limited by the \emph{ergotropy} \cite{Lobejko2020, Lobejko2021}, which, with the notion of passivity, is another building block of quantum thermodynamics (see, e.g., \cite{Pusz1978, Allahverdyan2004}). Hence, taking the weight as a proper model for fluctuation theorems allows us to analyze the quantum coherences, from a thermodynamic perspective, within a fully quantum setup. So far, the work-locking for a quantum weight has been completely described as a dephasing process of a coherent contribution to the ergotropy, described by an effective state, the so-called \emph{control-marginal state} \cite{Lobejko2021}. Since this particular example of work-locking refers to a loss of ergotropy due to a damping of coherences, we call it the \emph{ergotropy decoherence}.
In this paper, we take a step further, and instead of analyzing just the averages (i.e., the extracted work), we also formulate general results for the work fluctuations. In particular, we derive a formula for changes of the weight's energy variance and bounds for its absolute values. Although we formulate our main results in a framework with an explicit work reservoir (i.e., the weight), we first propose a general quasi-distribution constructed as a convolution of one-point probability functions (avoiding the problem of measurement invasiveness). Then we adapt the concept to the quantum weight model. We prove the physical meaning of the cumulants of this quasi-distribution, showing that they are equal to the difference between the initial and final cumulants of the corresponding one-point distributions; e.g., the first cumulant is equal to the change of the mean value (e.g., work) and the second to the change of the variance, which are indeed the main subjects of our interest. We then compare this quasi-distribution with the TPM and work-operator approaches, showing in particular that in the classical (incoherent) limit they coincide with each other.
Our first main result is the aforementioned formula for the change of the energy-storage variance (during the work extraction process). As shown in previous papers \cite{Lobejko2020, Lobejko2021}, the first cumulant of the quasi-distribution (i.e., the change in the average energy of the weight) is equal to the first cumulant of the work operator, but calculated with respect to the control-marginal state. In the formula for the second cumulant (i.e., the change in the weight's variance), an analogous term appears, given by the second cumulant of the work operator; however, an additional term is also present, which is a fully quantum contribution that cannot be captured in non-autonomous frameworks. It is of significant importance since it may take negative values, in contrast to the first one, which is always non-negative. Consequently, it is possible to decrease the variance gain due to quantum interference between the system and the weight. Most surprisingly, the second term can even overcome the first one, which results in a net decrease of the energy dispersion within the state of the work reservoir. This entirely quantum feature, which leads to a qualitative phenomenon (i.e., squeezing of work fluctuations), shows the importance of analyzing the work distribution within a fully quantum framework.
The second main result refers to bounds for the absolute value of the weight's energy dispersion. In particular, we relate the bound for the final (or initial) dispersion to the damping function that characterizes the aforementioned ergotropy decoherence process. Due to this interplay between work-locking caused by a decoherence process and dispersion of the energy, we call the proven inequality the \emph{fluctuation-decoherence relation}. The main conclusion from this relation is that unlocking the system's total (coherent) ergotropy always results in a divergence of the weight's energy dispersion. We stress that this is again a solely quantum effect caused by the Heisenberg uncertainty principle.
Finally, we apply the introduced general formulas to a qubit, and we derive the entire phase space of possible work-variance pairs for any translationally-invariant and energy-conserving protocol and an arbitrary initial state. Then, we go through a few particular examples and compare the classical and quantum regimes explicitly. In particular, we present a non-classical process of reducing the energy fluctuations for a `cat state' of the weight (i.e., two energetically separated Gaussian wave packets), where both peaks collapse towards each other (hence reducing the variance). In contrast, the classical protocol can only broaden each of them.
The paper is organized as follows. In Section \ref{work_definitions_section} we briefly review the work distributions coming from the work operator and the TPM scheme, and then we propose a new quasi-distribution based on the convolution of one-point measurements. Section \ref{work_extraction_and_weight_section} introduces the weight model and the work extraction protocol, as well as the mathematical methods used later. Next, in Section \ref{work_quasi_distribution_section} we apply the definition of the introduced quasi-distribution to the weight model, and we analyze its statistical properties and classical limits. In Sections \ref{work_variance_gain_section} and \ref{bounds_section} we present our main results. First, we introduce the formulas for the average energy and variance changes, and then the bounds for their absolute values expressed as the fluctuation-decoherence relation. Finally, in Section \ref{qubit_section} we solve the problem of the work-variance trade-off for a qubit, and we discuss the quantum and classical regimes when coherent or incoherent ergotropy is extracted. We summarize all of the results with a discussion in Section \ref{summary_section}.
\section{Work definitions} \label{work_definitions_section}
We start with a quick review of the work operator concept and the TPM probability distribution, which shall be used as our reference points. After that, in the subsection \ref{double_subsection}, we introduce a new definition of a quasi-probability, based on the idea of the convolution of one-point distributions.
\subsection{Preliminaries}
We shall start with a discussion of the non-autonomous closed system. Accordingly, we assume that the initial state is described by the density matrix $\hat \rho_i$, and we consider a thermodynamic protocol given by the following unitary map:
\begin{equation}\label{unitary_evolution}
\hat \rho_i \to \hat V \hat \rho_i \hat V^\dag.
\end{equation}
In general, the unitary $\hat V$ comes from an integration of the time-dependent Hamiltonian describing the driving force, starting from the initial Hamiltonian $\hat H_i = \sum_k \epsilon_k^i \hat \Pi_k^i$ and resulting in the final operator $\hat H_f = \sum_k \epsilon_k^f \hat \Pi_k^f$.
Throughout the paper we use a following definition of the $n$-th moment:
\begin{eqnarray}
\langle w^n \rangle_{X} = \int dw \ w^n \ P_X(w),
\end{eqnarray}
and the $n$-th cumulant:
\begin{equation}
\langle \langle w^n \rangle \rangle_{X} = \frac{d^n}{d(it)^n} \log [\langle e^{i t w} \rangle_{X}] \Big{|}_{t=0}.
\end{equation}
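For concreteness, a minimal numerical sketch (ours, not taken from the references) of these definitions for a distribution tabulated on a uniform grid:
\begin{verbatim}
import numpy as np

# Minimal sketch: first two moments and cumulants of a (quasi-)
# distribution P(w) sampled on a uniform grid w.
def first_cumulants(w, P):
    dw = w[1] - w[0]
    m1 = np.sum(w * P) * dw          # first moment
    m2 = np.sum(w**2 * P) * dw       # second moment
    return m1, m2 - m1**2            # first and second cumulants
\end{verbatim}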
\subsection{Work operator}
Firstly, we review the idea of the work operator, which is defined as follows:
\begin{equation} \label{averaged_work}
\hat W = \hat H_i - \hat V^\dag \hat H_f \hat V.
\end{equation}
According to the introduced protocol \eqref{unitary_evolution}, since the system is isolated from the thermal environment, the mean change of its energy is identified with the average work, namely
\begin{equation}
\langle w \rangle_{\text{W}} = \Tr[\hat W \hat \rho_i].
\end{equation}
The important point is that the above definition includes the quantum contributions coming from the initial coherences in a state $\hat \rho_i$. Furthermore, via the spectral decomposition of the work operator, i.e.,
\begin{equation}
\hat W = \sum_i w_i \dyad{w_i},
\end{equation}
one can define the following probability distribution:
\begin{equation} \label{work_operator_dist}
P_{\text{W}}(w) = \sum_i \delta(w - w_i) \ \bra{w_i} \hat \rho_i \ket{w_i}.
\end{equation}
The formula suggests that the eigenvalues $w_i$ with associated probabilities $\bra{w_i} \hat \rho_i \ket{w_i}$ can be interpreted as the outcomes of the fluctuating work; however, according to the no-go theorem \cite{Perarnau2015}, they do not satisfy the fluctuation theorems for classical systems (see, e.g., \cite{Allahverdyan2014}).
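As an illustration of this construction, a minimal sketch for a qubit (the Hamiltonians, unitary and state below are arbitrary hypothetical choices, not taken from the text):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Hypothetical qubit example; Hi, Hf, V and rho are illustrative choices.
Hi = np.diag([0.0, 1.0])
Hf = np.diag([0.0, 0.8])
V = expm(-1j * 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]]))  # protocol unitary
rho = np.array([[0.7, 0.3], [0.3, 0.3]])                  # state with coherences

W = Hi - V.conj().T @ Hf @ V                 # work operator
w_vals, w_vecs = np.linalg.eigh(W)
probs = [np.real(v.conj() @ rho @ v) for v in w_vecs.T]
print(list(zip(w_vals, probs)))              # support and weights of P_W
print("mean work:", np.real(np.trace(W @ rho)))
\end{verbatim}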
\subsection{(single) Two-point measurement (TPM)}
In order to construct a probability distribution satisfying fluctuation theorems, the TPM scheme was introduced \cite{Talkner2007, Campisi2011, Esposito2009}. In this approach, we perform a measurement on the initial state $\hat \rho_i$ (in its initial energy basis), which yields the outcome $\epsilon_n^i$ observed with probability $p_n = \Tr[\hat \Pi_n^i \hat \rho_i]$, and the system is projected onto the state $\hat \rho_i \to \hat \Pi_n^i$. Next, we apply the protocol, i.e., $\hat \Pi_n^i \to \hat V \hat \Pi_n^i \hat V^\dag$, and then a second measurement is performed, giving the outcome $\epsilon_m^f$ with the (conditional) probability $p_{m|n} = \Tr[\hat \Pi_m^f \hat V \hat \Pi_n^i \hat V^\dag]$. Finally, the fluctuating work is defined as the difference of outcomes, $w = \epsilon_m^f - \epsilon_n^i$, with the associated (joint) probability $p_{m,n} = p_{m|n} p_n$.
According to the proposed two-point measurement scheme, the probability distribution of the fluctuating work is equal to:
\begin{equation} \label{TPM_distribution}
P_{\text{TPM}}(w) = \sum_{m,n} \delta(w - \epsilon_m^f + \epsilon_n^i) p_{m|n} p_n.
\end{equation}
The main problem with this approach is the invasive nature of the first measurement: all of the initial coherences are destroyed. As a consequence, in general, the first moment (i.e., the average work) is incompatible with the value calculated via the work operator, namely
\begin{equation}
\langle w \rangle_{\text{TPM}} \neq \langle w \rangle_{\text{W}}.
\end{equation}
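A corresponding sketch of the TPM statistics for the same hypothetical qubit as above (both Hamiltonians are diagonal in the computational basis, so the projectors are basis states):
\begin{verbatim}
# TPM joint statistics for the same Hi, Hf, V, rho as in the previous sketch.
p_n = np.real(np.diag(rho))                 # first-measurement probabilities
P_TPM = {}                                  # map: work value -> probability
for n, eps_i in enumerate(np.real(np.diag(Hi))):
    for m, eps_f in enumerate(np.real(np.diag(Hf))):
        p_joint = np.abs(V[m, n])**2 * p_n[n]   # p_{m|n} p_n
        w = eps_f - eps_i
        P_TPM[w] = P_TPM.get(w, 0.0) + p_joint
print(P_TPM)   # note: the initial coherences of rho do not enter
\end{verbatim}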
\subsection{(double) One-point measurement} \label{double_subsection}
To overcome the problem of the invasive nature of the first measurement, we introduce a quasi-probability distribution. In contrast to the TPM, here the basic idea is to independently measure the initial and final energy distributions and then to construct a quasi-probability via the convolution operation.
Accordingly, we first perform the initial measurement on the state $\hat \rho_i$, which gives us outcomes $\epsilon_n^i$ with associated probabilities $p_n^i = \Tr[\hat \Pi_n^i \hat \rho_i]$. This is precisely the same procedure as for the TPM; however, in this case, instead of evolving the resulting (projected) state, the initial (unperturbed) state $\hat \rho_i$ is evolved. This results in a new statistical ensemble described by the density matrix $\hat \rho_f = \hat V \hat \rho_i \hat V^\dag$, and once again the statistics of the final energy outcomes $\epsilon_n^f$ with probabilities $p_n^f = \Tr[\hat \Pi_n^f \hat V \hat \rho_i \hat V^\dag]$ are measured. Finally, we construct the initial and final energy distributions, i.e.,
\begin{equation}
P_{i,f} (\epsilon) = \sum_n \delta(\epsilon - \epsilon_n^{i,f}) p_n^{i,f},
\end{equation}
and we introduce a real function $P_{\text{QP}}$, such that
\begin{equation}
P_i (\epsilon_i) = \int d \epsilon_f P_{\text{QP}}(\epsilon_i - \epsilon_f) P_f (\epsilon_f).
\end{equation}
Finally, we interpret the kernel $P_{\text{QP}}(w)$ as the quasi-distribution of the fluctuating work $w$, which is formally given by:
\begin{equation} \label{quasi_probability}
P_{\text{QP}} (w) = \frac{1}{2\pi} \int \ dt \ e^{-i w t} \frac{\Tr[e^{i \hat H_i t} \hat \rho_i]}{\Tr[e^{i \hat H_f t} \hat \rho_f]}.
\end{equation}
As mentioned, the function $P_{\text{QP}}$ can take negative values; thus it cannot be interpreted as a proper probability distribution. Nevertheless, its $n$-th cumulant $\langle \langle w^n \rangle \rangle_{\text{QP}}$ has a very straightforward interpretation as a difference of cumulants of the initial $P_i(\epsilon)$ and final $P_f(\epsilon)$ energy distributions, namely
\begin{equation}
\log[\langle e^{i t w} \rangle_{\text{QP}}] = \log[\Tr[e^{i \hat H_i t} \hat \rho_i]] - \log[\Tr[e^{i \hat H_f t} \hat \rho_f]],
\end{equation}
such that
\begin{equation} \label{cumulant_difference}
\langle \langle w^n \rangle \rangle_{\text{QP}} = \langle \langle \epsilon^n \rangle \rangle_i - \langle \langle \epsilon^n \rangle \rangle_f.
\end{equation}
In particular, for the first order ($n=1$), which is equal to the first moment, we have:
\begin{equation}
\langle w \rangle_{\text{QP}} = \langle w \rangle_{\text{W}},
\end{equation}
i.e., we recover the averaged work value given by the work operator \eqref{averaged_work}.
The above results show that the quasi-distribution $P_{\text{QP}}$ encodes the proper work statistics enriched with quantum interference effects. Although we consider here the work distribution, we stress that definition \eqref{quasi_probability} is general and can be used to quantify other thermodynamic quantities as well (like the heat flow). In the following sections, we will apply this quasi-distribution within the framework with an explicit work reservoir given by the quantum weight, and in particular, we will prove that it reduces to the TPM scheme for incoherent states.
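As a numerical sanity check of the characteristic-function construction (continuing the hypothetical qubit snippets above), one can evaluate the ratio in Eq. \eqref{quasi_probability} on a small grid and obtain the cumulants by finite differences of its logarithm:
\begin{verbatim}
# Cumulants of P_QP from the characteristic-function ratio
# (same hypothetical Hi, Hf, V, rho as in the previous sketches).
rho_f = V @ rho @ V.conj().T
def log_chi(t):
    num = np.trace(expm(1j * Hi * t) @ rho)
    den = np.trace(expm(1j * Hf * t) @ rho_f)
    return np.log(num / den)

h = 1e-4
c1 = (log_chi(h) - log_chi(-h)) / (2j * h)                      # mean
c2 = (log_chi(h) - 2 * log_chi(0) + log_chi(-h)) / (1j * h)**2  # variance diff.
print("first cumulant :", c1.real)   # should equal Tr[W rho]
print("second cumulant:", c2.real)
\end{verbatim}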
\section{Work extraction \\ and quantum weight} \label{work_extraction_and_weight_section}
This section introduces a work extraction protocol according to which we later define a quasi-probability distribution. In particular, the model is based on the First Law of Thermodynamics, i.e., strict energy conservation between coupled subsystems, and an explicit model of the energy-storage device known as the quantum weight \cite{Skrzypczyk2014, Alhambra2016, Lobejko2021}.
Accordingly, we consider a composite Hilbert space of the system $\mathcal{S}$ and the quantum weight $\mathcal{W}$. The former is assumed to have a finite-dimensional Hilbert space with a discrete energy spectrum, whereas the quantum weight $\mathcal{W}$ has a continuous and unbounded spectrum. Namely, the free Hamiltonians are given by:
\begin{equation}
\hat H_S = \sum_i \epsilon_i \dyad{\epsilon_i}_S, \ \hat H_W = \int d E \ E \dyad{E}_W.
\end{equation}
The work extraction protocol is understood as energy transfer between the subsystems, where a gain of the weight's average energy is interpreted as positive work. Throughout the paper, we assume an initial state given by the product $\hat \rho = \hat \rho_S \otimes \hat \rho_W$, and the dynamics is governed by a unitary operator, $\hat \rho \to \hat U \hat \rho \hat U^\dag$, specified in the next section.
Throughout the paper, we denote by the subscript $S$ or $W$ those operators that act solely on the system or the weight Hilbert space, respectively, whereas operators without a subscript act on both.
\subsection{Energy-conserving \\ and translationally-invariant unitaries}
The crucial point of the whole model is to define a particular class of unitary operators such that the quantum weight can be interpreted as the work reservoir (i.e., the energy flowing from the system to the weight has no ``heat-like'' contribution). The weight model (defined below) overcomes this work-definition problem in the quantum regime. Essentially, it was first proven that the weight cannot decrease the entropy of the system \cite{Skrzypczyk2014, Alhambra2016}, and later, more precisely, that the optimal work is given by the system's ergotropy \cite{Lobejko2020, Lobejko2021}.
The class of unitaries is defined by two symmetries: (i) the energy-conservation and (ii) the translational-invariance, such that the following commutation relations are satisfied:
\begin{equation} \label{commutation_relations}
[\hat U, \hat H_S + \hat H_W ] = 0, \ [\hat U, \hat \Delta_W ] = 0,
\end{equation}
where $\hat \Delta_W$ is a generator of the displacement operator $\hat \Gamma_\epsilon = e^{-i \hat \Delta_W \epsilon}$, obeying the canonical commutation relation $[ \hat H_W, \hat \Delta_W] = i$, such that $\hat \Gamma_\epsilon^\dag \hat H_W \hat \Gamma_\epsilon = \hat H_W + \epsilon$. We call the conjugate observable $\hat \Delta_W$ the time operator, and its eigenstates $\ket{t}_W$ (i.e., $\hat \Delta_W \ket{t}_W = t \ \ket{t}_W$), the time states. Then, we introduce the following probability density functions:
\begin{equation} \label{energy_time_dist}
f(E) = \bra{E} \hat \rho_W \ket{E}, \ \ g(t) = \bra{t} \hat \rho_W \ket{t},
\end{equation}
with the variances:
\begin{equation} \label{variance_time_energy}
\begin{split}
\sigma_E^2 &= \int dx \ x^2 \ f(x) - \left(\int dx \ x \ f(x) \right)^2, \\
\sigma_t^2 &= \int dx \ x^2 \ g(x) - \left( \int dx \ x \ g(x) \right)^2.
\end{split}
\end{equation}
Finally, any unitary operator $\hat U$ obeying commutation relations \eqref{commutation_relations} is given in the form \cite{Alhambra2016, Lobejko2020}:
\begin{equation} \label{unitary_with_S}
\hat U = \hat S^\dag \hat V_S \hat S,
\end{equation}
with
\begin{equation} \label{S_operator}
\hat S = e^{-i \hat H_S \otimes \hat \Delta_W},
\end{equation}
and $\hat V_S$ is the unitary operator acting solely on the system Hilbert space $\mathcal{S}$.
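To make the structure of these unitaries more tangible, note that $\hat S \, \ket{\epsilon_i}_S \otimes \ket{E}_W = \ket{\epsilon_i}_S \otimes \ket{E + \epsilon_i}_W$, so that a direct calculation gives
\begin{equation*}
\hat U \, \ket{\epsilon_i}_S \otimes \ket{E}_W = \sum_j \bra{\epsilon_j} \hat V_S \ket{\epsilon_i} \, \ket{\epsilon_j}_S \otimes \ket{E + \epsilon_i - \epsilon_j}_W,
\end{equation*}
i.e., every transition $\epsilon_i \to \epsilon_j$ of the system is accompanied by a shift of the weight's energy by the difference $\epsilon_i - \epsilon_j$, which makes both symmetries \eqref{commutation_relations} explicit.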
\subsection{Coherent and incoherent work extraction}
In the next subsections, we will see that the unitary $\hat V_S$ corresponds to the evolution operator introduced in Section \ref{work_definitions_section} for the non-autonomous framework with an implicit work reservoir. From this we define the following work operator:
\begin{equation} \label{work_operator}
\hat W_S = \hat H_S - \hat V_S^\dag \hat H_S \hat V_S.
\end{equation}
In contrast to Eq. \eqref{averaged_work}, here the protocol is cyclic, such that the initial and final Hamiltonians are the same (i.e., given by $\hat H_S$), and as a consequence, the maximum change of the average energy over all unitaries $\hat V_S$ is given by the ergotropy of the state $\hat \rho_S$ \cite{Allahverdyan2004}:
\begin{eqnarray} \label{ergotropy_definition}
R(\hat \rho_S) = \max_{\hat V_S} \Tr[\hat W_S \hat \rho_S].
\end{eqnarray}
In comparison to Eq. \eqref{averaged_work}, one can get the impression that the autonomous framework presented here is less general since it is constrained to cyclic protocols. However, it can easily be generalized by adding an additional subsystem (i.e., the so-called `clock'), which controls changes of the system's Hamiltonian (see, e.g., \cite{Horodecki2013, Alhambra2016, Aberg2018}), or, the other way around, one can assume that $\mathcal{S}$ is a composite system (involving the proper one and the clock).
In this framework, $\hat V_S$ is an arbitrary unitary operator acting on the system's Hilbert space. However, one can consider a subset of so-called incoherent unitaries, denoted by $\hat V_S^I$, whose members correspond to operations that permute the energy states (up to irrelevant phase factors) and thus preserve coherences \cite{Streltsov2017}. Within the energetic context, the \emph{incoherent work operator}, i.e., $\hat W_S^I = \hat H_S - \hat{V}_S^I{}^\dag \hat H_S \hat V_S^I$, satisfies the following commutation relation:
\begin{equation} \label{incoherent_work}
[\hat W_S^I, \hat H_S] = 0,
\end{equation}
such that the average work is extracted solely from the diagonal part. Conversely, any operator \eqref{work_operator} that does not commute with the Hamiltonian $\hat H_S$ is the \emph{coherent work operator}, i.e., it affects coherences through the process of work extraction.
According to this, the ergotropy \eqref{ergotropy_definition} can be divided into incoherent and coherent contributions, i.e., $R(\hat \rho_S) = R_I(\hat \rho_S) + R_C(\hat \rho_S)$. The former, being the part of the energy extracted solely from the diagonal, is defined as:
\begin{eqnarray}
R_I(\hat \rho_S) = \max_{\hat V_S^I} \Tr[\hat W_S^I \hat \rho_S],
\end{eqnarray}
and then the coherent contribution is introduced by the formula:
\begin{eqnarray}
R_C(\hat \rho_S) = R(\hat \rho_S) - R_I(\hat \rho_S).
\end{eqnarray}
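As a minimal numerical sketch (ours, with $\hat H_S$ assumed diagonal in the computational basis), using the standard passive-state form of the ergotropy and the fact that the incoherent part coincides with the ergotropy of the dephased state $D[\hat \rho_S]$:
\begin{verbatim}
import numpy as np

def ergotropy(rho, H):
    """R(rho) = Tr[H rho] - Tr[H rho_passive]."""
    eps = np.sort(np.linalg.eigvalsh(H))            # energies, ascending
    p = np.sort(np.linalg.eigvalsh(rho))[::-1]      # populations, descending
    return np.real(np.trace(H @ rho)) - np.sum(p * eps)

def ergotropy_split(rho, H):
    """Total, incoherent and coherent parts, R = R_I + R_C."""
    D = np.diag(np.diag(rho))                       # dephased state D[rho]
    R, R_I = ergotropy(rho, H), ergotropy(D, H)
    return R, R_I, R - R_I
\end{verbatim}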
\subsection{Wigner function and control-marginal state}
To get a better insight into the later calculations, we describe the weight state in terms of the Wigner function:
\begin{equation} \label{wigner_function}
W(E,t) = \frac{1}{2\pi} \int d\omega \ e^{i \omega t} \Tr[\hat \rho_W \dyad{E + \frac{\omega}{2}}{E - \frac{\omega}{2}}_W],
\end{equation}
such that the probability density functions for energy and time states \eqref{energy_time_dist} are given by the marginals:
\begin{equation}
f(E) = \int dt \ W(E,t), \ \ g(t) = \int dE \ W(E,t).
\end{equation}
Next, the crucial object for our analysis is the effective density operator $\hat \sigma$, which we call the \emph{control state}, and the marginal operator $\hat \sigma_S$, i.e. the \emph{control-marginal state} \cite{Lobejko2020, Lobejko2021}, in the form:
\begin{equation} \label{control_state}
\hat \sigma = \hat S \hat \rho \hat S^\dag, \ \hat \sigma_S = \Tr_W[\hat S \hat \rho \hat S^\dag].
\end{equation}
Furthermore, for the product state $\hat \rho = \hat \rho_S \otimes \hat \rho_W$, the control-marginal state is given by the mixture of free-dynamics unitaries averaged over a distribution of the weight's time states, namely
\begin{equation} \label{control_marginal}
\hat \sigma_S = \int dt \ g(t) \ e^{-i \hat H_S t} \hat \rho_S e^{i \hat H_S t}.
\end{equation}
Since the channel is given as a convex combination of unitaries, it corresponds to pure decoherence.
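As a minimal numerical sketch (ours) of this channel for a Hamiltonian diagonal in the computational basis: each off-diagonal element of $\hat \rho_S$ is multiplied by the characteristic function of $g(t)$ evaluated at the corresponding Bohr frequency (anticipating Eq. \eqref{gamma_function} below); for a zero-mean Gaussian $g(t)$ with standard deviation $\sigma_t$ this factor is $e^{-\sigma_t^2 \omega^2/2}$.
\begin{verbatim}
import numpy as np

# Control-marginal state for a zero-mean Gaussian time distribution g(t).
# energies: 1-D numpy array of eigenvalues of H_S (assumed diagonal).
def control_marginal(rho, energies, sigma_t):
    w = energies[None, :] - energies[:, None]    # Bohr frequencies w_ij
    return rho * np.exp(-0.5 * sigma_t**2 * w**2)
\end{verbatim}
In the limit $\sigma_t \to \infty$ this reproduces the full dephasing $D[\hat \rho_S]$, while $\sigma_t \to 0$ leaves the state untouched.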
\section{Work quasi-distribution} \label{work_quasi_distribution_section}
\subsection{Quasi-probability density function}
Now, we are ready to apply the definition \eqref{quasi_probability} to the composite system with an explicit work reservoir, which formally brings us the expression:
\begin{equation} \label{weight_quasi_probability}
P_{\text{QP}}(w) = \frac{1}{2\pi} \int ds \ e^{-i w s} \frac{\Tr[\hat U^\dag e^{i \hat H_W s} \hat U \hat \rho_S \otimes \hat \rho_W]} {\Tr[ e^{i \hat H_W s} \hat \rho_W]}.
\end{equation}
Notice that in contrast to Eq. \eqref{quasi_probability}, the definition for the explicit energy storage has the numerator and the denominator switched, such that positive work refers to the energy gain of the weight. Next, since we assume that $\hat U$ obeys the commutation relations \eqref{commutation_relations}, we insert the expression \eqref{unitary_with_S} and get:
\begin{equation} \label{quasi_dist_weight}
P_{\text{QP}}(w) = \frac{1}{2\pi} \int ds \ e^{-i w s} \Tr[\hat{\mathcal{M}}_S(s) \hat \xi_S (s)],
\end{equation}
where
\begin{equation}
\hat{\mathcal{M}}_S(s) = e^{\frac{1}{2} i \hat H_S s} e^{-i \hat V_S^\dag \hat H_S \hat V_S s} e^{\frac{1}{2} i \hat H_S s},
\end{equation}
and
\begin{equation} \label{xi_state}
\hat \xi_S (s) = \frac{\int dt \int dE \ e^{i E s} \ W(E,t) \ e^{-i \hat H_S t} \hat \rho_S e^{i \hat H_S t}}{\Tr[ e^{i \hat H_W s} \hat \rho_W]}.
\end{equation}
\subsection{Semi-classical and incoherent states}
Let us consider the family of operators given by Eq. \eqref{xi_state} (for $s \in \mathbb{R}$). The structure of the trace average in Eq. \eqref{quasi_dist_weight} suggests that $\hat \xi_S(s)$ is an effective density matrix; however, even though $\Tr[\hat \xi_S (s)] = 1$, the operator $\hat \xi_S (s)$ is not necessarily positive semi-definite; thus, in general, it cannot be interpreted as a quantum state.
Based on the formula \eqref{xi_state}, we define the class of \emph{semi-classical states}. For reasons explained later, we characterize the initial density operator $\hat \rho_S \otimes \hat \rho_W$ as semi-classical if the operator $\hat \xi_S (s)$ does not depend on the variable $s$ (i.e., $\hat \xi_S'(s) = 0$). Moreover, since we have $\hat \xi_S(0) = \hat \sigma_S$, the semi-classical state can simply be defined by the condition:
\begin{equation} \label{semi_classical_def}
\hat \xi_S (s) = \hat \sigma_S,
\end{equation}
where $\hat \sigma_S$ is the introduced control-marginal state \eqref{control_state}.
Let us give an important example of the semi-classical states of the weight $\hat \rho_W$. First, notice that if the Wigner function of the weight factorizes into the product of the energy and time states distributions, i.e.,
\begin{equation} \label{product_E_t}
W(E,t) = f(E) g(t),
\end{equation}
then
\begin{multline}
\int dt \int dE \ e^{i E s} \ W(E,t) \ e^{-i \hat H_S t} \hat \rho_S e^{i \hat H_S t} = \\
\int dE \ e^{i E s} f(E) \int dt \ g(t) \ e^{-i \hat H_S t} \hat \rho_S e^{i \hat H_S t} \\
= \Tr[ e^{i \hat H_W s} \hat \rho_W] \ \hat \sigma_S,
\end{multline}
such that the condition \eqref{semi_classical_def} is satisfied. Moreover, according to Hudson's theorem for pure states \cite{Hudson1974}, the separability condition \eqref{product_E_t} is satisfied if and only if the wave function of the weight has the Gaussian form, namely
\begin{equation} \label{gaussian_states}
\psi_{\mu, \nu, \sigma} (E) = (2 \pi \sigma^2)^{-\frac{1}{4}} e^{-\frac{1}{4 \sigma^2} (E-\mu)^2 + i \nu E}
\end{equation}
with real $\mu$ and $\nu$. Thus, Gaussian wave packets belong to the class of semi-classical states. Another example is states with uniform wave functions.
Then, we consider the \emph{incoherent states} of the system, defined by the following commutation relation:
\begin{equation} \label{incoherent_states}
[\hat \rho_S, \hat H_S] = 0.
\end{equation}
Since these also commute with the unitary $e^{-i \hat H_S t}$, we get
\begin{multline}
\int dt \int dE \ e^{i E s} \ W(E,t) \ e^{-i \hat H_S t} \hat \rho_S e^{i \hat H_S t} = \\
\int dE \ e^{i E s} \ f(E) \ \hat \rho_S = \Tr[ e^{i \hat H_W s} \hat \rho_W] \ \hat \rho_S,
\end{multline}
and according to Eq. \eqref{control_marginal} we have $\hat \sigma_S = \hat \rho_S$, i.e., an arbitrary incoherent state of the system $\hat \rho_S$ is semi-classical in accordance with Eq. \eqref{semi_classical_def}.
The definition of the incoherent state of the weight is less evident since here we consider a system with a continuous energy spectrum, and therefore the incoherent state of the weight is non-normalizable. Nevertheless, one can still consider the limit of a Gaussian wave packet \eqref{gaussian_states} with vanishing variance, which, as proved above, is a semi-classical state. Notice that for a Gaussian wave packet, the dispersions of the time and energy states \eqref{variance_time_energy} obey the minimal uncertainty relation, i.e., $\sigma_t \sigma_E = \frac{1}{2}$, such that in the limit of vanishing energy dispersion, the dispersion of the time states diverges. As a consequence, the channel given by Eq. \eqref{control_marginal} becomes fully dephasing, i.e., $\hat \sigma_S \to D[\hat \rho_S]$, where $D[\cdot]$ is the dephasing in the system energy basis.
\subsection{Classical limits}
We are now ready to analyse the work statistics encoded in the quasi-distribution $P_{\text{QP}}$ \eqref{quasi_dist_weight} in the classical limit of incoherent states and/or unitary evolutions. In particular, in the following proposition we relate the quasi-distribution $P_{\text{QP}}$ to the TPM distribution $P_{\text{TPM}}$ \eqref{TPM_distribution} and the work-operator distribution $P_{\text{W}}$ \eqref{work_operator_dist}.
\begin{proposition} \label{TPM_limit}
Let us consider the following work distributions:
\begin{enumerate}
\item $P_{\text{QP}}$ for the unitary $\hat U = \hat S^\dag \hat V_S \hat S$ and state $\hat \rho_S \otimes \hat \rho_W$,
\item $P_{\text{W}}$ for the work operator $\hat W_S = \hat H_S - \hat V_S^\dag \hat H_S \hat V_S$ and state $\hat \rho_S$,
\item $P_{\text{TPM}}$ for the unitary $\hat V_S$ and state $\hat \rho_S$.
\end{enumerate}
Then,
\begin{equation}
P_{\text{QP}} = P_{\text{TPM}}
\end{equation}
if the initial state is incoherent, and
\begin{equation} \label{QP_TPM_W}
P_{\text{QP}} = P_{\text{TPM}} = P_{\text{W}}
\end{equation}
if the work operator is incoherent.
\end{proposition}
\begin{proof}
Firstly, let us recall that for an incoherent state of the system (i.e., obeying \eqref{incoherent_states}) we have $\hat \xi_S (s) = \hat \rho_S = D[\hat \rho_S]$. Similarly, in the limit of the incoherent weight state, we get $\hat \xi_S (s) = \hat \sigma_S \to D[\hat \rho_S]$. Taking this into account, for an arbitrary incoherent state, the quasi-distribution is equal to:
\begin{equation}
\begin{split}
P_{\text{QP}}(w) &= \frac{1}{2\pi} \int ds \ e^{-i w s} \Tr[\hat{\mathcal{M}}_S(s) D[\hat \rho_S]] \\
&= \frac{1}{2\pi} \int ds \ e^{-i w s} \Tr[e^{i \hat H_S s} e^{-i \hat V_S^\dag \hat H_S \hat V_S s} D[\hat \rho_S]] \\
&= \frac{1}{2\pi} \int ds \sum_n e^{-i (w - \epsilon_n) s} \bra{\epsilon_n} \hat V_S^\dag e^{-i \hat H_S s} \hat V_S \ket{\epsilon_n} p_n\\
&= \frac{1}{2\pi} \int ds \sum_{n,m} e^{-i (w - \epsilon_n + \epsilon_m) s} |\bra{\epsilon_m} \hat V_S \ket{\epsilon_n}|^2 p_n\\
&= \sum_{m,n} \delta (w + \epsilon_m - \epsilon_n) p_{m|n} p_n = P_{\text{TPM}} (w),
\end{split}
\end{equation}
where $p_{m|n} = |\bra{\epsilon_m} \hat V_S \ket{\epsilon_n}|^2$ and $p_n = \bra{\epsilon_n} D[\hat \rho_S] \ket{\epsilon_n}$.
Next, let us consider the incoherent work operator with a spectral decomposition: $\hat W_S^I = \sum_n w_n \ \dyad{w_n}_S$. We observe that:
\begin{equation} \label{incoherent_exponent}
\hat{\mathcal{M}}_S(s) = e^{i \hat W_S^I s},
\end{equation}
and then we have
\begin{equation}
\begin{split}
P_{\text{QP}}(w) &= \frac{1}{2\pi} \int ds \ e^{-i w s} \Tr[e^{i \hat W_S^I s} \hat \xi_S(s)] \\
&= \frac{1}{2\pi} \int ds \ e^{-i w s} \Tr[e^{i \hat W_S^I s} D[\hat \rho_S]] \\
&= \sum_n \frac{1}{2\pi} \int ds \ e^{-i (w-w_n) s} \bra{w_n} D[\hat \rho_S] \ket{w_n} \\
&= \sum_n \delta(w-w_n) P_n = P_{\text{W}}(w),
\end{split}
\end{equation}
where $P_n = \bra{w_n} D[\hat \rho_S] \ket{w_n}$.
\end{proof}
We stress that convergence to the TPM for incoherent states is one of the requirements imposed on quasi-distributions (see \cite{Perarnau2015, Baumer2018}) since it recovers the classical fluctuation theorems.
\subsection{Moments}
Next, we analyse the moments of the quasi-distribution $P_{\text{QP}}$. Its characteristic function is given by:
\begin{equation}
\langle e^{i w t} \rangle_{\text{QP}} = \int dw \ e^{iwt} P_{\text{QP}}(w) = \Tr[\hat{\mathcal{M}}_S(t) \hat \xi_S (t)],
\end{equation}
such that statistical moments can be calculated via the expression:
\begin{equation} \label{moment_def}
\langle w^n \rangle_{\text{QP}} = \frac{1}{i^n} \frac{d^n}{dt^n} \Tr[\hat{\mathcal{M}}_S(t) \hat \xi_S (t)] \Big{|}_{t=0}.
\end{equation}
In general, the derivative in the above expression is calculated for a product of the operator $\hat{\mathcal{M}}_S(t)$ and the effective operator $\hat \xi_S (t)$, which, as we will see later, has important implications for the work extraction from coherences.
In the light of Proposition \ref{TPM_limit}, for incoherent states or work operators, we have
\begin{equation}
\langle e^{i w t} \rangle_{\text{QP}} = \Tr[\hat{\mathcal{M}}_S(t) D[\hat \rho_S]]
\end{equation}
such that the derivative in Eq. \eqref{moment_def} is calculated only with respect to the operator $\hat{\mathcal{M}}_S(t)$. As expected, the initial coherences (even if present) do not contribute. Additionally, if the work operator is incoherent (i.e., commutes with the Hamiltonian $\hat H_S$), we get the following simplified expression:
\begin{equation}
\langle w^n \rangle_{\text{QP}} = \Tr[(\hat W_S^I)^n \hat \rho_S],
\end{equation}
which, in this purely classical regime, relates the $n$-th moment of the quasi-distribution $P_{\text{QP}}$ to the $n$-th power of the incoherent work operator (averaged over the initial density matrix).
In general, for the coherent work operator (i.e., with the non-vanishing commutator \eqref{incoherent_work}), the expansion of the operator $\hat{\mathcal{M}}_S(t)$ reads:
\begin{equation} \label{moments_expansion}
\begin{split}
\hat{\mathcal{M}}_S(t) &= \mathbb{1} + i t \hat W_S + \frac{(it)^2}{2} \hat W_S^2 +\frac{(it)^3}{3!} \hat W_S^3\\
&+\frac{(it)^3}{3!} \frac{1}{2} \left[[\hat H_S, \hat V_S^\dag \hat H_S \hat V_S], \hat V_S^\dag \hat H_S \hat V_S \right] \\
&+ \frac{(it)^3}{3!} \frac{1}{4} \left[\hat H_S, [\hat H_S, \hat V_S^\dag \hat H_S \hat V_S] \right] + \dots,
\end{split}
\end{equation}
In particular, it shows that the non-commuting contributions appear only in the third and higher orders. Consequently, we arrive at simple formulas for the first two moments, namely
\begin{equation} \label{moments}
\begin{split}
\langle w \rangle_{\text{QP}} &= \Tr[\hat W_S \hat \xi_S (0)], \\
\langle w^2 \rangle_{\text{QP}} &= \Tr[\hat W_S^2 \hat \xi_S (0)] - 2i \Tr[\hat W_S \hat \xi'_S (0)],
\end{split}
\end{equation}
where $\hat \xi_S (0) = \hat \sigma_S$ and $\hat \xi'_S (0)$ is the derivative evaluated at $s=0$.
As we will see in the next section, the second term in the expression for the second moment has very interesting implications; its non-vanishing value is a signature of a non-classical state of the work reservoir.
\section{Work vs variance gain} \label{work_variance_gain_section}
In the previous section, we discussed the moments of the distribution $P_{\text{QP}}$ and their relation to the work operator \eqref{work_operator}. However, in accordance with the definition of the double one-point distribution \eqref{quasi_probability}, the cumulants are more interesting due to their physical interpretation \eqref{cumulant_difference}. For the explicit work reservoir, these are given by the difference between the cumulants calculated for the initial and final state of the energy storage. We introduce the following notation:
\begin{equation}
\begin{split}
\mathrm{Cov}_{\hat \rho}[\hat A, \hat B] &= \frac{1}{2} \langle \hat A \hat B + \hat B \hat A \rangle_{\hat \rho} - \langle \hat A \rangle_{\hat \rho} \langle \hat B \rangle_{\hat \rho}, \\
\mathrm{Var}_{\hat \rho}[\hat A] &= \mathrm{Cov}_{\hat \rho}[\hat A, \hat A], \ \ \langle \hat A \rangle_{\hat \rho} = \Tr[\hat A \hat \rho].
\end{split}
\end{equation}
and concentrate on the first two cumulants.
The first is given as a change in average energy of the weight (i.e., the extracted work):
\begin{equation} \label{Ew}
\begin{split}
\langle\langle w \rangle\rangle_{\text{QP}} &= \langle w \rangle_{\text{QP}} \\
&= \langle \hat U^\dag \hat H_W \hat U \rangle_{\hat \rho} - \langle \hat H_W \rangle_{\hat \rho} \equiv \Delta E_W,
\end{split}
\end{equation}
whereas the second is equal to a change of the weight variance:
\begin{equation} \label{sigmaW}
\begin{split}
\langle\langle w^2 \rangle\rangle_{\text{QP}} &= \langle w^2 \rangle_{\text{QP}} - \langle w \rangle_{\text{QP}}^2 \\
&=\mathrm{Var}_{\hat \rho} [\hat U^\dag \hat H_W \hat U] - \mathrm{Var}_{\hat \rho} [\hat H_W ] \equiv \Delta \sigma_W^2.
\end{split}
\end{equation}
We are now ready to formulate our first main result, which characterizes changes in the weight's energy and variance through the work extraction protocol. In the following we put $\hat W_S(t) = e^{i \hat H_S t} \hat W_S e^{-i \hat H_S t}$.
\begin{theorem} \label{work_variance_theorem}
The change of the average energy $\Delta E_W$ and energy variance $\Delta \sigma_W^2$ of the quantum weight is equal to:
\begin{equation} \label{work_variance_equations}
\begin{split}
\Delta E_W &= \langle \hat W_S \rangle_{\hat \sigma_S}, \\
\Delta \sigma_W^2 &= \mathrm{Var}_{\hat \sigma_S}[\hat W_S] + 2 F,
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
F &= \mathrm{Cov}_{\hat \sigma} [\hat H_S - \hat H_W, \hat V_S^\dag \hat H_S \hat V_S] = -i \Tr[\hat W_S \hat \xi'_S (0)] \\
&= \int dt \Tr[\hat W_S(t) \hat \rho_S] \int dE \ [E - \langle \hat H_W \rangle_{\hat \rho}] \ W(E,t).
\end{split}
\end{equation}
\end{theorem}
Firstly, let us observe that the extracted work $\Delta E_W$ is equal to the average of the work operator $\hat W_S$ with respect to the control-marginal state $\hat \sigma_S$, such that the optimal value is precisely given by its ergotropy \eqref{ergotropy_definition}, i.e., $\Delta E_W \le R(\hat \sigma_S)$. In particular, the replacement of the proper marginal state by the control-marginal state has huge implications for work extraction from coherences (see \cite{Lobejko2020, Lobejko2021}), which is discussed in more detail in Section \ref{ergotropy_decoherence}.
Secondly, we see that there are two contributions to the change of the variance. The first one, similarly to the average energy, is given by the variance of the work operator, calculated with respect to the control-marginal state. One should notice that these two terms (i.e., $\langle \hat W_S \rangle_{\hat \sigma_S}$ and $\mathrm{Var}_{\hat \sigma_S}[\hat W_S]$) are calculated solely within the system's Hilbert space, and in this sense, they can be compared to non-autonomous protocols like the TPM measurements. However, both of them can be influenced by coherences (within the control-marginal state), which are affected by the weight via the decoherence process \eqref{control_marginal}. Thus, in general, they involve additional quantum effects and information from the work reservoir absent in non-autonomous frameworks.
Finally, the $F$-term, i.e., the second contribution to the variance gain, is the most complex one, primarily because it is evaluated in the total Hilbert space (e.g., via the control state $\hat \sigma$ \eqref{control_state}). For this reason, in principle, it cannot be captured by frameworks treating the energy-storage device implicitly, and as we will see in the next section, it has a solely quantum origin. The $F$-term is presented in three different forms. The first relates it to the covariance, calculated for the control state $\hat \sigma$, between the difference of the initial Hamiltonians, $\hat H_S - \hat H_W$, and the final one, $\hat V_S^\dag \hat H_S \hat V_S$. In fact, the whole right-hand side of Eq. \eqref{work_variance_equations} can be evaluated with respect to $\hat \sigma$, but the terms other than the $F$-term involve only local operators. The second formula involves the operator $\hat \xi_S(s)$ defined in Eq. \eqref{xi_state} and follows straightforwardly from the expression for the moments \eqref{moments} and the definition of the second cumulant \eqref{sigmaW}. Finally, the third expression is derived from the Wigner representation of the weight state \eqref{wigner_function}. It is especially interesting since the variable $t$ refers to a `time' of the system's free evolution. Indeed, the $F$-term is given by a product of the expected value of the work operator in the Heisenberg picture (i.e., $\Tr[\hat W_S(t) \hat \rho_S]$) and the weight's energy deviation (i.e., $E - \langle \hat H_W \rangle_{\hat \rho_W}$), averaged over the Wigner quasi-distribution $W(E,t)$.
\begin{remark}
Notice that Theorem \ref{work_variance_theorem} can be expressed in the following form:
\begin{eqnarray}
\langle \langle w \rangle \rangle_{\text{QP}, \hat \rho} &=& \langle \langle w \rangle \rangle_{\text{W}, \hat \sigma_S}, \\
\langle \langle w^2 \rangle \rangle_{\text{QP}, \hat \rho} &=& \langle \langle w^2 \rangle \rangle_{\text{W}, \hat \sigma_S} + 2F,
\end{eqnarray}
where on the left-hand side we have the cumulants of the quasi-distribution \eqref{quasi_dist_weight} (defined for the composite state $\hat \rho$), whereas on the right-hand side there are the cumulants of the work-operator distribution \eqref{work_operator_dist} (defined for the control-marginal state $\hat \sigma_S$). However, despite this nice correspondence, due to the presence of the $F$-term, it is apparent that the probability density function of the work operator $P_W$ alone cannot properly describe the energy fluctuations. Another subtlety is that the cumulants on the right-hand side are calculated for the control-marginal state, which is different from the initial state $\hat \rho_S$ (i.e., it involves a decoherence process affected by the initial state of the weight).
\end{remark}
\subsection{Variance changes}
Let us now discuss some of the consequences of Theorem \ref{work_variance_theorem}.
According to the definition \eqref{semi_classical_def}, for semi-classical states we have $\hat \xi_S' (0) = 0$, such that $F=0$, and we conclude:
\begin{corollary} \label{positive_variance}
For semi-classical states, the variance change is always non-negative and equal to:
\begin{equation}
\Delta \sigma_W^2 = \mathrm{Var}_{\hat \sigma_S}[\hat W_S] \ge 0.
\end{equation}
\end{corollary}
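This can also be verified directly from the third form of the $F$-term in Theorem \ref{work_variance_theorem}: for a product Wigner function $W(E,t) = f(E) g(t)$ the double integral factorizes,
\begin{equation*}
F = \int dt \ g(t) \Tr[\hat W_S(t) \hat \rho_S] \int dE \ [E - \langle \hat H_W \rangle_{\hat \rho}] f(E) = 0,
\end{equation*}
since the second factor is the centered first moment of $f(E)$.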
We interpret this as one of the main features of semi-classical states. On the contrary, for non-classical states we have:
\begin{corollary}
For non-classical states, the $F$-term can be negative and, in particular, can lead to negative variance changes (i.e., $\Delta \sigma_W^2 < 0$).
\end{corollary}
We will show this by analyzing a particular example in Section \ref{qubit_section}. Due to this feature, we see that non-classical states are qualitatively different from semi-classical ones. We stress that the squeezing of energy fluctuations is a purely quantum effect that involves interference between the coherent state of the system and the weight.
Going back to the semi-classical states, we further ask under what conditions the variance change vanishes.
\begin{corollary}
For semi-classical states, the change of the weight variance is zero (i.e., $\Delta \sigma_W^2 =0$) if and only if the control-marginal state is pure, $\hat \sigma_S = \dyad{w}_S$, where $\ket{w}_S$ is an eigenstate of the work operator $\hat W_S$.
\end{corollary}
According to the above statement, we can have the following scenarios. First, the condition always holds for the trivial identity process (since $\hat W_S = 0$), but then $\Delta E_W = \Delta \sigma_W^2 = 0$, i.e., there is no work extraction either. Next, for the incoherent work operator $\hat W_S^I$, a zero change of the variance is observed if the initial state is an energy eigenstate, i.e., $\hat \rho_S = \hat \sigma_S = \dyad{\epsilon}_S$. Here, the energy transfer refers to a shift of the weight's energy distribution, which is called \emph{deterministic work}. Finally, the most interesting case is work extraction from coherences, such that the control-marginal state $\hat \sigma_S = \dyad{w}_S$ with $\ket {w}_S \neq \ket{\epsilon}_S$. However, in general, this cannot be realized in practice since the channel \eqref{control_marginal} is a decoherence process (within the energy basis), such that $\hat \sigma_S$ is not a pure state. Only in the limit where the weight tends to a time state, i.e., $\hat \rho_W \to \dyad{t}_W$, do we get $\hat \sigma_S \to e^{-i\hat H_S t} \hat \rho_S e^{i\hat H_S t}$, and the initial purity is preserved.
Here, we observe another difference between incoherent and coherent work extraction. In principle, incoherent work extraction can be deterministic (with $\Delta \sigma_W^2 =0$), and it is entirely independent of the weight state. On the contrary, for work extraction from coherences, we can only consider the limit where $\Delta \sigma_W^2 \approx 0$ and, more importantly, the vanishing variance strongly constrains the initial state of the weight. In particular, to preserve the purity of the control-marginal state, one can consider the limit where the weight tends to a time state; however, this particular state also has an infinite variance! The point is that even if we can achieve a small gain of the variance, the final state of the weight would still have substantial energy fluctuations. In other words, work extraction from coherences depends significantly on the state of the work reservoir (either implicitly through the control-marginal state or explicitly via the $F$-term) and, in particular, one should consider not only the change of the variance but also the absolute (initial or final) dispersion. This is the main subject of the following section.
\section{Bounds on energy dispersion} \label{bounds_section}
In this section, we derive fundamental bounds on the energy dispersion of the energy-storage device within the context of the work extraction process. To this end, let us define the initial standard deviation $\sigma_E^{(i)}$ and the final one $\sigma_E^{(f)}$ (see Eq. \eqref{variance_time_energy}) of the weight states $\hat \rho_W$ and $\Tr_S[\hat U \hat \rho_S \otimes \hat \rho_W \hat U^\dag]$, respectively. Let us start with the incoherent work extraction process.
\subsection{Incoherent work extraction}
If we consider incoherent work extraction (either because of the particular incoherent form of the unitary or the presence of an incoherent state), the control-marginal state is given by $\hat \sigma_S = D[\hat \rho_S]$ and $F = 0$. Then, in accordance with Theorem \ref{work_variance_theorem}, we arrive at the following conclusion.
\begin{corollary} \label{incoherent_work_corollary}
For incoherent work extraction, i.e., if $\hat W_S = \hat W_S^I$ or $\hat \rho_S = D[\hat \rho_S]$, the extracted work $\Delta E_W$ and the change of the variance $\Delta \sigma_W^2$ are entirely independent of the weight state. Consequently, there is no fundamental constraint on the initial dispersion $\sigma_E^{(i)}$, and the final dispersion is bounded as follows:
\begin{equation}
\big(\sigma_E^{(f)}\big)^2 \ge \mathrm{Var}_{\hat \rho_S}[\hat W_S].
\end{equation}
\end{corollary}
The main conclusion from this corollary is that the extraction of incoherent ergotropy is independent of the initial state of the weight, and hence the energy can be stored in a state with a finite energy dispersion. As we will see next, this is not true for extracting the coherent part.
\subsection{Coherent work extraction}
\subsubsection{Ergotropy decoherence} \label{ergotropy_decoherence}
In order to define similar bounds for work extraction involving coherences, we first want to explain the idea of ergotropy decoherence. According to Theorem \ref{work_variance_theorem}, we see that work is defined as the ergotropy change of the control-marginal state $\hat \sigma_S$, instead of the proper marginal state $\hat \rho_S$. Essentially, the control-marginal state is the effective density matrix representing the full statistical knowledge regarding the (average) work extraction. The crucial point is that the map $\hat \rho_S \to \hat \sigma_S$ \eqref{control_marginal} is a decoherence process, such that it preserves the diagonal part and decays the off-diagonal elements as follows:
\begin{equation}
\dyad{\epsilon_i}{\epsilon_j} \to \gamma(\omega_{ij}) \dyad{\epsilon_i}{\epsilon_j}, \ \ |\gamma(\omega_{ij})| \le 1,
\end{equation}
where $\omega_{ij} = \epsilon_j - \epsilon_i$. As noted, the incoherent part of the ergotropy is unaffected by this process, i.e., $R_I(\hat \rho_S) = R_I(\hat \sigma_S)$, and thus work extraction from the diagonal is entirely independent of the weight state (see Corollary \ref{incoherent_work_corollary}). On the contrary, the coherent part is damped by the work reservoir, such that $R_C(\hat \rho_S) \ge R_C(\hat \sigma_S)$. That is why we call this phenomenon the \emph{ergotropy decoherence}, which leads to the so-called work-locking \cite{Korzekwa2016, Lobejko2021}, i.e., an inability to extract work from coherences.
Notice that the damping function $\gamma(\omega)$ fully characterizes the loss of coherences and, consequently, the loss of ergotropy. Although it is straightforward to quantitatively connect the damping function with a measure of coherence, it is not so easy to do the same with the ergotropy. Recently, some bounds relating the two measures were proposed \cite{Francica2020}.
\subsubsection{Fluctuation-decoherence relation}
Now, we are ready to formulate the fluctuation-decoherence relation, connecting the damping function $\gamma(\omega)$ with the energy dispersion of the weight $\sigma_E^{(i,f)}$ (initial or final). Firstly, let us notice that $\gamma(\omega)$ is the characteristic function of the time-states probability density function (see Eq. \eqref{energy_time_dist}), i.e., according to the definition \eqref{control_marginal} we have:
\begin{equation} \label{gamma_function}
\gamma(\omega) = \int dt \ g(t) \ e^{i\omega t}.
\end{equation}
On the other hand, time states and energy states are canonically conjugate, such that they satisfy the Heisenberg uncertainty relation (HUR):
\begin{equation} \label{HUR}
\sigma_t \sigma_E \ge \frac{1}{2},
\end{equation}
for arbitrary state $\hat \rho_W$, where $\sigma_t$ and $\sigma_E$ are square roots of the introduced variances \eqref{variance_time_energy}. Finally, according to properties of a characteristic function, we have:
\begin{equation} \label{gamma_derivative}
\sigma_t^2 = -\frac{d^2}{d\omega^2} |\gamma(\omega)| \Big{|}_{\omega=0}.
\end{equation}
This proves the relation between the damping function $\gamma(\omega)$, related to the ergotropy decoherence, and the initial energy dispersion in terms of the HUR \eqref{HUR}. However, this relation is implicit, and the formula \eqref{gamma_derivative} only involves the behavior of the characteristic function close to the origin.
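As a simple illustration, for the Gaussian wave packet \eqref{gaussian_states} with energy dispersion $\sigma_E = \sigma$, the distribution $g(t)$ is a Gaussian with $\sigma_t = 1/(2\sigma)$ (the minimal uncertainty case), and the damping function reads
\begin{equation*}
|\gamma(\omega)| = e^{-\frac{1}{2}\sigma_t^2 \omega^2} = e^{-\omega^2/(8\sigma^2)},
\end{equation*}
so that a broader energy distribution damps the coherences more weakly.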
In the following, we make it more practical, such that the damping function $\gamma(\omega)$ will define a lower bound for the initial or final energy dispersion. To achieve that goal, we need to bound the characteristic function from above by a function that is monotonically decreasing with the dispersion $\sigma_t$, i.e., $|\gamma(\omega)| \le h(\omega, \sigma_t)$ with $\frac{\partial}{\partial \sigma_t}h(\omega, \sigma_t) < 0$. Then, using the HUR we would obtain $|\gamma(\omega)| \le h(\omega, \sigma_t) \le h(\omega, \frac{1}{2 \sigma_E})$, and after optimization over $\omega$ the expected bound would be attained. In the mathematical literature, one can find results for lower and upper bounds of a characteristic function, but none of them is of the required form \cite{Ushakov1997}. Fortunately, an uncertainty relation for characteristic functions (ChUR) was recently derived in the form \cite{Rudnicki2016}:
\begin{equation}
\begin{split}
&|\gamma(\omega_t)|^2 + |\lambda(\omega_E)|^2 \le \beta(\omega_t \omega_E), \\
& \beta(x) = 2 \sqrt{2} \frac{\sqrt{2} - \sqrt{1 - \cos x}}{1+\cos x},
\end{split}
\end{equation}
where
\begin{equation}
\lambda(\omega) = \int dE \ e^{i \omega E} f(E)
\end{equation}
is the analogous characteristic function for the energy distribution $f(E)$. By using the ChUR instead of the HUR, we achieve the aforementioned goal in the form of the following lemma.
\begin{lemma} \label{sigmaE_inequality}
For an arbitrary $\omega>0$, the following inequality is satisfied:
\begin{equation}
\sigma_E \ge \frac{\omega |\gamma(\omega)|}{\pi}.
\end{equation}
\end{lemma}
\begin{proof}
Let us introduce two arbitrary real variables $\omega$ and $x$, such that from ChUR we have:
\begin{equation}
|\gamma(\omega)|^2 \le \beta(x) - |\lambda(x/\omega)|^2.
\end{equation}
Next, we incorporate a lower bound for the modulus of the characteristic function \cite{Ushakov1997, Rudnicki2016}:
\begin{equation}
|\lambda(\omega)|^2 \ge 1 - \sigma_E^2 \omega^2
\end{equation}
which yields the formula:
\begin{equation} \label{pre_lemma_bound}
\sigma_E^2 \ge \frac{\omega^2}{x^2} \left(1- \beta(x) + |\gamma(\omega)|^2 \right).
\end{equation}
Finally, taking the limit $x \to \pi$, we have $\beta(x) \to 1$, such that
\begin{equation} \label{lemma_bound}
\sigma_E^2 \ge \frac{\omega^2 |\gamma(\omega)|^2}{\pi^2}.
\end{equation}
\end{proof}
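A quick numerical check (again an illustration, not part of the proof) confirms the limit $\beta(x) \to 1$ as $x \to \pi$ used above:
\begin{verbatim}
import numpy as np

def beta(x):
    # beta(x) = 2*sqrt(2)*(sqrt(2) - sqrt(1 - cos x)) / (1 + cos x)
    return 2*np.sqrt(2)*(np.sqrt(2) - np.sqrt(1 - np.cos(x))) / (1 + np.cos(x))

for x in [3.0, 3.1, 3.14, 3.1415]:
    print(x, beta(x))  # values approach 1 as x -> pi
\end{verbatim}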
Now, let us come back to the work extraction process $\hat \rho_S \otimes \hat \rho_W \to \hat U \hat \rho_S \otimes \hat \rho_W \hat U^\dag$, for which we define the initial $\sigma_E^{(i)}$ and final $\sigma_E^{(f)}$ standard deviations of the energy distribution. Notice that despite the change of the energy distribution $f_i(E) \to f_f(E)$, the probability density function of the time states is conserved under the unitary \eqref{unitary_with_S}, namely
\begin{equation}
g(t) = \Tr[\hat \rho_W \dyad{t}_W] = \Tr[\hat U \hat \rho_W \hat U^\dag \dyad{t}_W],
\end{equation}
since $\hat S$ and $\hat V_S$ commute with $\dyad{t}_W$ (see Eq. \eqref{S_operator}). Consequently, the characteristic function $\gamma(\omega)$ is invariant under the work extraction process.
According to Lemma \ref{sigmaE_inequality}, this leads us to the second main theorem, which defines a lower bound for the initial and final energy fluctuations of the work reservoir in terms of the damping function of the ergotropy.
\begin{theorem} \label{fluctuation_decoherence_theorem}
The initial $\sigma_E^{(i)}$ and final $\sigma_E^{(f)}$ dispersions of the weight energy satisfy:
\begin{equation}
\sigma_E^{(i,f)} \ge \frac{1}{\pi} \max_{\omega > 0} \big[ \omega |\gamma(\omega)| \big],
\end{equation}
where $\gamma(\omega)$ is the damping function of the ergotropy decoherence process.
\end{theorem}
\begin{remark}
Notice that, in general, one could obtain a better bound if, instead of using inequality \eqref{lemma_bound}, one optimized the right-hand side of inequality \eqref{pre_lemma_bound} over $x$ and $\omega$. However, since our main goal is to present the general idea of the interplay between fluctuations and the damping of coherences, we use here the simplified formula.
\end{remark}
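To make the bound of Theorem \ref{fluctuation_decoherence_theorem} concrete, the sketch below evaluates it for an assumed Gaussian damping function $\gamma(\omega) = e^{-\sigma_t^2 \omega^2/2}$, for which the maximum is attained at $\omega = 1/\sigma_t$; the use of \texttt{scipy} and the value of \texttt{sigma\_t} are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

sigma_t = 0.7  # assumed time dispersion of the weight

res = minimize_scalar(lambda w: -w * np.exp(-sigma_t**2 * w**2 / 2),
                      bounds=(1e-6, 50.0), method='bounded')
print(-res.fun / np.pi)                  # Theorem bound on sigma_E
print(np.exp(-0.5) / (np.pi * sigma_t))  # closed form at omega = 1/sigma_t
print(1 / (2 * sigma_t))                 # HUR bound, stronger for a Gaussian
\end{verbatim}
For a Gaussian state the theorem gives $\sigma_E \gtrsim 0.19/\sigma_t$, weaker than the HUR value $1/(2\sigma_t)$, consistent with the simplification made in the proof.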
Theorem \ref{fluctuation_decoherence_theorem} reveals that the lower bound for the (initial or final) dispersion of the weight energy depends on how fast the characteristic function $\gamma(\omega)$ vanishes for large $\omega$, where a slower decay results in higher fluctuations. On the other hand, $\gamma(\omega)$ is precisely the damping function of the (ergotropy) coherences at frequency $\omega$. Hence, we observe a trade-off between maintaining the coherent ergotropy and the final fluctuations of the energy. In particular, to unlock more ergotropy, the damping function has to be close to one for all frequencies appearing in the spectrum of the Hamiltonian $\hat H_S$. But by a property of characteristic functions, if there exists $\omega_0 > 0$ such that $|\gamma(\omega_0)| = 1$, then the function is equal to one for all $\omega$, which implies $\max_{\omega}[ \omega |\gamma(\omega)|] = \infty$. Consequently, we have the following corollary.
\begin{corollary} \label{dispersion_divergence_corollary}
If the total ergotropy is unlocked, i.e., $R(\hat \rho_S) = R(\hat \sigma_S)$, then $\sigma_E^{(i,f)} = \infty$.
\end{corollary}
The basic idea of the fluctuation-decoherence relation presented here is that the function $\gamma(\omega)$ (which represents the initial state of the weight $\hat \rho_W$) encodes both the information about the locked (coherent) ergotropy of the system and about the initial and final dispersion of the energy. Specifically, it provides knowledge about the maximal possible work which can be extracted and the minimal value of its fluctuations.
\section{Exact Solution: Qubit} \label{qubit_section}
This section explicitly illustrates the presented results by analyzing a two-dimensional system (a qubit) interacting with the weight. In particular, we derive the phase space of possible combinations of the energy change $\Delta E_W$ and variance change $\Delta \sigma_W^2$ that can be observed for an arbitrary energy-preserving and translationally invariant work extraction protocol.
\subsection{Work-variance phase space}
Let us consider an arbitrary product state of the qubit and the weight $\hat \rho_S \otimes \hat \rho_W$, with the Hamiltonian $\hat H_S = \omega \dyad{1}_S$. The energy gap of the qubit defines a natural energy scale; thus, throughout this section we work with dimensionless quantities, with energy units given as multiples of the frequency $\omega$.
We start with a diagonal representation of the control-marginal state:
\begin{equation}
\hat \sigma_S = p \dyad{\psi_0}_S + (1-p) \dyad{\psi_1}_S.
\end{equation}
Without loss of generality we assume that $p\le \frac{1}{2}$ and $\langle \hat H_W \rangle_{\hat \rho_W} = 0$. Next, we introduce the (dimensionless) energies:
\begin{eqnarray}
\varepsilon_i = \frac{1}{\omega} \bra{\psi_i} \hat H_S \ket{\psi_i},
\end{eqnarray}
(for $i=0,1$) such that
\begin{eqnarray} \label{sum_of_epsilons}
\varepsilon_0 + \varepsilon_1 = \frac{1}{\omega} \Tr[\hat H_S (\dyad{\psi_0}_S + \dyad{\psi_1}_S)] = 1,
\end{eqnarray}
and the following integrals:
\begin{equation}
\begin{split}
\eta &= \frac{1}{\omega} \int dt \int dE \ E \ W(E,t) \ (e^{i \omega t} \frac{1}{\gamma} + e^{-i \omega t} \frac{1}{\gamma^*}), \\
\xi &= \frac{1}{\omega} \int dt \int dE \ E \ W(E,t) \ (e^{i \omega t} \frac{\varepsilon_1}{\gamma } - e^{-i \omega t} \frac{\varepsilon_0}{\gamma^*} ), \\
\gamma &= \int dt \int dE \ W(E,t) \ e^{i \omega t}.
\end{split}
\end{equation}
Then, we propose the following classification of the work extraction protocol:
\begin{proposition} \label{qubit_proposition}
For an arbitrary work extraction protocol $\hat \rho_S \otimes \hat \rho_W \to \hat U \hat \rho_S \otimes \hat \rho_W \hat U^\dag$ (where $\hat U = \hat S^\dag \hat V_S \hat S$), the corresponding change of the weight's energy and variance, i.e., $(\Delta E_W, \Delta \sigma_W^2)$, belongs to a set:
\begin{equation} \label{work_variance_set}
\begin{split}
\Delta E_W &= w, \ w \in [-\varepsilon_0 (1-2p), \varepsilon_1 (1-2p)], \\
\Delta \sigma_W^2 &\in \left[f(w) - h(w), f(w) + h(w) \right],
\end{split}
\end{equation}
where
\begin{multline}
f(w) = - w^2 + (\frac{\varepsilon_1 - \varepsilon_0}{1-2p} + 4 \varepsilon_0 \varepsilon_1 \eta) w \\
+ 2 \varepsilon_0\varepsilon_1 [1 - (1-2p) (\varepsilon_1 - \varepsilon_0) \eta]
\end{multline}
and
\begin{equation}
\begin{split}
h(w) &= 2 R \sqrt{\varepsilon_0\varepsilon_1 (\varepsilon_0 + \frac{w}{1-2p})(\varepsilon_1 - \frac{w}{1-2p})} \\
R &= \left|1- 2(1-2p) \xi\right|.
\end{split}
\end{equation}
Conversely, for an arbitrary point $(\Delta E_W, \Delta \sigma_W^2)$ within the set \eqref{work_variance_set}, there exists a protocol with unitary $\hat V_S$ realizing the corresponding changes of the weight's cumulants.
\end{proposition}
In the following, we compare the phase spaces of possible values $(\Delta E_W, \Delta \sigma_W^2)$ for semi-classical and non-classical states. A graphical illustration is presented in Fig. \ref{phase_space}.
\begin{figure*}[t]
\centering
\includegraphics[height = 0.35 \textwidth] {coherent_phase_space.eps}
\includegraphics[height = 0.35 \textwidth] {gaussian_phase_space.eps}
\includegraphics[height = 0.35 \textwidth] {cat_phase_space.eps}
\includegraphics[height = 0.35 \textwidth] {cat_distance_phase_space.eps}
\caption{\emph{Work-variance phase space.} Sets of possible points $(\Delta E_W, \Delta \sigma_W^2)$ for a pure state of the system $\hat \rho_S = \dyad{\psi}_S$, $\ket{\psi}_S \sim \ket{0}_S + 5 \ket{1}_S $. The top panel (\textbf{a}, \textbf{b}) corresponds to a Gaussian wave packet of the weight $\psi_{0, 0, \sigma}$ (semi-classical state) \eqref{gaussian_states}, and the bottom panel (\textbf{c}, \textbf{d}) corresponds to a cat state $\phi_{\mu, 1}$ (non-classical state) \eqref{cat_state}. In the left figures (\textbf{a}, \textbf{c}) (made for the particular values $\sigma=1/\sqrt{2}$ and $\mu=2$), we plot the boundaries of the set coming from Proposition \ref{qubit_proposition}, and we also numerically sample over different unitaries $\hat V_S$ (points). The extreme points of the set, corresponding to the minimal and maximal extracted work, are marked by black dots. For the semi-classical state the variance gain is always non-negative, i.e., the set is tangent to and lies above the line $\Delta \sigma_W^2= 0$, whereas for non-classical states there exists a subset with negative changes of the variance. In plot \textbf{b}, we present the boundaries of the phase space for different values of the dispersion $\sigma$. We observe that for $\sigma \to 0$, i.e., in the limit of an incoherent state of the weight, the set reduces to a line. Oppositely, for large values of $\sigma$, the control-marginal state converges to the initial one, i.e., $\hat \sigma_S \to \hat \rho_S$, and due to the initial purity, the set touches the line $\Delta \sigma_W^2 = 0$ at a non-zero work $w = \varepsilon_1 - \varepsilon_0$ (see Eq. \eqref{roots_of_work}). In subfigure \textbf{d}, we present the expansion of the phase space with a growing value of $\mu$ of the cat state, where $2\mu$ corresponds to the distance between its peaks. Results are presented in energy units given by a multiple of the qubit's gap $\omega$.}
\label{phase_space}
\end{figure*}
\subsubsection{Semi-classical states}
For semi-classical states, Proposition \ref{qubit_proposition} simplifies significantly since, from the condition $\hat \xi'(s) = 0$ (and the assumption $\Tr[\hat H_W \hat \rho_W] = 0$), one can easily show that:
\begin{equation}
\int dt \int dE \ E \ e^{i \omega t} \ W(E,t) = 0,
\end{equation}
which further implies $\eta = \xi = 0$. Hence, the semi-classical phase space is characterized by the functions:
\begin{equation} \label{semi_classical_phase_space}
\begin{split}
f(w) &= - w^2 + \frac{\varepsilon_1 - \varepsilon_0}{1-2p} w + 2 \varepsilon_0\varepsilon_1, \\
h(w) &= 2 \sqrt{\varepsilon_0 \varepsilon_1 (\varepsilon_0 + \frac{w}{1-2p})(\varepsilon_1 - \frac{w}{1-2p})}.
\end{split}
\end{equation}
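A short numerical sketch (with assumed example values of $p$ and $\varepsilon_0$) evaluates these boundary functions and confirms the non-negativity $f(w) \ge h(w)$ discussed next:
\begin{verbatim}
import numpy as np

p, eps0 = 0.1, 0.2          # assumed example parameters
eps1 = 1.0 - eps0

w = np.linspace(-eps0*(1 - 2*p), eps1*(1 - 2*p), 201)
f = -w**2 + (eps1 - eps0)/(1 - 2*p) * w + 2*eps0*eps1
h = 2*np.sqrt(eps0*eps1*(eps0 + w/(1 - 2*p))*(eps1 - w/(1 - 2*p)))

# Attainable variance changes lie between f - h and f + h;
# for semi-classical states f - h >= 0, vanishing only at w = 0 here.
print((f - h).min())
\end{verbatim}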
As follows from Corollary \ref{positive_variance}, due to $\mathrm{Var}_{\hat \sigma_S}[\hat W_S] \ge 0$ we have here $f(w) \ge h(w)$ (see Fig. \ref{phase_space}a). Next, we consider the solutions of the equation $f(w_0) = h(w_0)$, which correspond to an energy transfer $w_0$ with a null change of the variance (i.e., $\Delta \sigma_W^2 = 0$). We get the following roots:
\begin{equation} \label{roots_of_work}
w_0 =
\begin{cases}
0, & \frac{1}{2} \ge p > 0\\
0, \varepsilon_1 - \varepsilon_0, & p=0
\end{cases}
\end{equation}
As discussed in the previous section, the only positive solution $w_0 = \varepsilon_1 - \varepsilon_0$ occurs for a pure state ($p=0$) and corresponds to a protocol $\hat V_S$ such that the control-marginal state $\hat \sigma_S$ is an eigenstate of the work operator $\hat W_S$. In Fig. \ref{phase_space}b we present how the point $(\Delta E_W = \varepsilon_1 - \varepsilon_0, \Delta \sigma_W^2 = 0)$ is approached for a Gaussian wave packet of the weight \eqref{gaussian_states} with increasing width $\sigma$.
Notice that the extreme point $w_{max} = \varepsilon_1 (1-2p)$ corresponds to the maximal extracted work, i.e., the ergotropy of the state $\hat \sigma_S$. The variance gain when the maximal work is extracted is equal to:
\begin{equation}
\Delta \sigma_W^2 = f(w_{max}) = 4p (1 - p)\varepsilon_1^2+\varepsilon_0\varepsilon_1.
\end{equation}
\emph{Incoherent states.} Incoherent states form a particular subclass of semi-classical states, with $\hat \sigma_S = D[\hat \rho_S]$. Consequently, $\varepsilon_0 = 0$ and $\varepsilon_1 = 1$ (i.e., the eigenstates $\ket{\psi_i}$ are equal to the energy eigenstates $\ket{i}$), such that $h(w)=0$ and the set $(\Delta E_W, \Delta \sigma_W^2)$ reduces to the line:
\begin{equation} \label{parabola}
\Delta \sigma_W^2 = f(w) = \frac{w}{1-2p} - w^2,
\end{equation}
i.e., for a particular extracted work $\Delta E_W = w$ we have a unique change of the variance. This is also illustrated in Fig. \ref{phase_space} in the limit of vanishing dispersion of the wave packet $\sigma \to 0$ (such that the wave function tends to a Dirac delta).
\begin{figure*}[t]
\centering
\includegraphics[width = 0.3 \textwidth] {weight_coh_state1.eps}
\includegraphics[width = 0.3 \textwidth] {weight_coh_state3.eps}
\includegraphics[width = 0.3 \textwidth] {weight_coh_state5.eps}
\includegraphics[width = 0.3 \textwidth] {weight_inc_state1.eps}
\includegraphics[width = 0.3 \textwidth] {weight_inc_state3.eps}
\includegraphics[width = 0.3 \textwidth] {weight_inc_state5.eps}
\caption{\emph{Subsequent reduction of energy fluctuations.} The energy distribution $f_n(E) = \bra{E} \hat \rho_W^{(n)} \ket E$ of a weight state obtained through a subsequent coupling with a system, i.e., $\hat \rho_W^{(n)} = \Tr_S[\hat U_n \hat \rho_S \otimes \hat \rho_W^{(n-1)} \hat U_n^\dag]$, where the initial state $\hat \rho_W^{(0)} = \dyad{\phi}_W$ is a cat state with a wave function $\phi_{3,1}$ \eqref{cat_state}. The top panel (\textbf{a}) is for the coherent ``plus'' state $\hat \rho_S = \dyad{+}_S$ \eqref{plus_state}, and the bottom panel (\textbf{b}) for the incoherent state $\hat \rho_S = \frac{1}{2} (\dyad{0}_S + \dyad{1}_S)$. In both cases the evolution operator $\hat U_n = \hat S^\dag \hat V^{(n)}_S \hat S$ uses a unitary $\hat V_S^{(n)}$ which minimizes the variance gain given by Eq. \eqref{variance_minimum}. The solid line represents the probability density function $f_n(E)$ for $n=1, 3, 5$, whereas the dashed line is the reference distribution for $n=0$. It is seen that for the coherent state (\textbf{a}), quantum interference leads to a sequential reduction of the weight's energy variance by collapsing the two peaks of the cat state onto each other. On the contrary, if the system is incoherent (\textbf{b}), the process only leads to a broadening of each peak, and hence to an increase of the energy variance, as expected for an arbitrary semi-classical state. Results are presented in energy units given by a multiple of the qubit's gap $\omega$.}
\label{reducing_variance}
\end{figure*}
\subsubsection{Non-classical states with reflection symmetry}
Let us consider a wave function with reflection symmetry, i.e., $\psi(E) = \psi(-E)$, for which the Wigner function is symmetric: $W(E,t) = W(-E, -t)$. Due to this symmetry, we observe (by a simple change of variables) that the dephasing factor $\gamma$ is a real number, and hence $\eta = 0$.
Next, one should notice that, by applying the relation $\varepsilon_1 = 1 - \varepsilon_0$ (see Eq. \eqref{sum_of_epsilons}), one can rewrite the $\xi$ function in the form:
\begin{equation}
\begin{split}
\xi &= \frac{1}{\gamma \omega} \int dt \int dE \ E \ W(E,t) \ e^{i \omega t} - \varepsilon_0 \eta,
\end{split}
\end{equation}
which finally gives us the expression:
\begin{equation}
\xi = \frac{1}{\omega} \frac{\int dt \int dE \ E \ e^{i \omega t} \ W(E,t) }{\int dt \int dE \ e^{i \omega t} \ W(E,t)}.
\end{equation}
Finally, one should observe that $\xi$ is purely imaginary, i.e., $\xi^* = -\xi$, such that we have
\begin{equation}
R = \sqrt{1 + 4 (1-2p)^2 |\xi|^2}.
\end{equation}
We see that for states with reflection symmetry the phase space of the variables $(\Delta E_W, \Delta \sigma_W^2)$ has the same form as for the semi-classical states, given by Eq. \eqref{semi_classical_phase_space}, but with the radius $R\ge1$ (whereas for semi-classical states $R=1$).
\emph{Cat states.} As a particular example of space-symmetric states, we take the so-called ``cat state'', with the wave function:
\begin{equation} \label{cat_state}
\phi_{\mu,\nu} (E) = \frac{\psi_{\mu,\nu, \frac{1}{\sqrt{2}}} (E) + \psi_{-\mu,-\nu, \frac{1}{\sqrt{2}}} (E)}{\sqrt{2(1+e^{-\mu^2 - \nu^2})}},
\end{equation}
where $\psi_{\mu,\nu, 1/\sqrt{2}} $ is a Gaussian wave packet \eqref{gaussian_states} with $\sigma = 1/\sqrt{2}$. For this state we derive an exact analytical formula for the radius $R$, i.e.:
\begin{equation}
\begin{split}
&R = \sqrt{1+\frac{4(1-2p)^2\left( \nu (1 - e^{2 \mu})- 2 \mu \kappa_{ \mu, \nu} \sin( \nu)\right)^2}{ \left(1+ e^{2 \mu} + 2 \kappa_{ \mu, \nu} \cos( \nu)\right)^2}}, \\
&\kappa_{ \mu, \nu} = e^{ \nu^2 + \mu^2 + \mu}.
\end{split}
\end{equation}
One should notice that $R=1$ if either $\mu = 0$ or $\nu = 0$.
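A direct implementation of this formula (a sketch; the parameter values are arbitrary examples) makes both limiting cases easy to verify:
\begin{verbatim}
import numpy as np

def radius(mu, nu, p):
    # Radius R of the cat-state phase space, following the formula above.
    kappa = np.exp(nu**2 + mu**2 + mu)
    num = (nu*(1 - np.exp(2*mu)) - 2*mu*kappa*np.sin(nu))**2
    den = (1 + np.exp(2*mu) + 2*kappa*np.cos(nu))**2
    return np.sqrt(1 + 4*(1 - 2*p)**2 * num/den)

print(radius(0.0, 1.0, 0.1))  # = 1 for mu = 0
print(radius(2.0, 0.0, 0.1))  # = 1 for nu = 0
print(radius(2.0, 1.0, 0.1))  # > 1: variance reduction possible
\end{verbatim}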
\emph{Coherent ``plus'' state.} We would like to derive the phase space for a coherent initial state of the system $\hat \rho_S = \dyad{+}_S$, where
\begin{equation} \label{plus_state}
\ket{+}_S = \frac{1}{\sqrt{2}} (\ket{0}_S + \ket{1}_S).
\end{equation}
For this we need to show that $\varepsilon_0 = \varepsilon_1 = 1/2$. Indeed, the transition $\hat \rho_S \to \hat \sigma_S$ is a decoherence process, thus $\langle \hat H_S \rangle_{\hat \sigma_S} = \langle \hat H_S \rangle_{\hat \rho_S}$ for an arbitrary state of the weight. Then, we have the following set of equalities (see Eq. \eqref{sum_of_epsilons}):
\begin{align*}
&p \varepsilon_0 + (1-p) \varepsilon_1 = \frac{1}{2}, \\
&\varepsilon_0 + \varepsilon_1 = 1.
\end{align*}
However, this should be satisfied for any value of $p \le \frac{1}{2}$ (which depends on the state $\hat \rho_W$), thus the only possible solution is $\varepsilon_0 = \varepsilon_1 = 1/2$. Finally, substituting this into Proposition \ref{qubit_proposition}, we get:
\begin{equation} \label{coherent_phase_space}
\begin{split}
f(w) &= - w^2 + \eta w + \frac{1}{2}, \\
h(w) &= R \sqrt{\frac{1}{4} - \frac{w^2}{(1-2p)^2}}. \\
\end{split}
\end{equation}
\emph{Reducing the energy dispersion.} The most interesting feature of non-classical states is the possibility of reducing the variance in the final state of the weight. We analyze this squeezing process for the particular coherent initial state $\hat \rho_S = \dyad{+}_S$ (introduced above) and a pure state of the weight with reflection symmetry (with $\eta = 0$ and $R \ge 1$). From Eq. \eqref{coherent_phase_space} one can calculate the maximal possible drop of the variance, which is achieved at the point $w = 0$, i.e., when no work is extracted. The minimum is given by:
\begin{equation} \label{variance_minimum}
\min[\Delta \sigma_W^2] = f(0) - h(0) = \frac{1}{2} (1 - R).
\end{equation}
We observe that for $R > 1$, the change of the variance is negative.
Fig. \ref{reducing_variance} shows how the cat state of the weight changes during such a variance-reduction process. Moreover, we show that it is possible to decrease the variance several times if subsequent protocols with the same state $\hat \rho_S$ are applied. We compare it with the same evolution but with the fully dephased state $D[\hat \rho_S]$ (i.e., without coherences). It is seen that whereas the classical state only broadens both peaks of the cat state (which is a universal feature of all semi-classical states, see Corollary \ref{positive_variance}), quantum interference can collapse those peaks onto each other, such that the energy dispersion is reduced. However, we stress that such a subsequent reduction eventually saturates and cannot be repeated forever. Indeed, we have previously shown that a Gaussian wave packet also belongs to the semi-classical states, and hence no squeezing of its width is possible. This suggests that the presence of two peaks (i.e., $\mu > 0$) is essential for the process of variance reduction, but still, the reason for it is a quantum interference between the qubit and the weight. In particular, as discussed before, the cat state has $R > 1$, and hence $\Delta \sigma_W^2 < 0$, only if $\mu > 0$ and $\nu > 0$.
\subsection{Coherent vs. Incoherent work extraction: \\ Gaussian state and qubit}
In Section \ref{bounds_section} we discussed the bounds on the energy dispersion when the incoherent (or coherent) part of the ergotropy is extracted. Let us illustrate this behavior quantitatively with a particular example.
We consider an arbitrary Gaussian wave packet of the weight in the form \eqref{gaussian_states} and a system given by a qubit in a state:
\begin{eqnarray} \label{qubit_state}
\hat \rho_S = \frac{1}{2} (\mathbb{1} + \vec x \cdot \vec{\hat \sigma})
\end{eqnarray}
where $\vec x = (x, y, z)$ and $\vec {\hat \sigma}$ is the vector of Pauli matrices. We are interested in the final dispersion $\sigma_E^{(f)}$ of the weight after a protocol that extracts either the maximal incoherent work $R_I \equiv R_I(\hat \rho_S)$ or the coherent ergotropy $R_C \equiv R_C(\hat \rho_S)$.
For a qubit, we have the following expressions (in units of the energy gap $\omega$):
\begin{eqnarray}
R_I &=& \frac{1}{2} \left(|z| - z \right), \label{incoherent_qubit_ergotropy} \\
R_C &=& \frac{1}{2} \left(\sqrt{|\gamma(\omega)|^2 \alpha^2 + z^2} - |z|\right) \label{coherent_qubit_ergotropy}.
\end{eqnarray}
where we put $\alpha^2 = x^2 + y^2 $. As follows from Theorem \ref{fluctuation_decoherence_theorem}, the damping factor $\gamma(\omega)$ is the characteristic function of the weight's time states (given by Eq. \eqref{gamma_function}). Hence, we see that the maximal coherent work depends explicitly on the initial state of the energy-storage device. For a Gaussian state, the function $\gamma(\omega)$ takes a simple exponential form (see Appendix \ref{coherent_incoherent_appendix}), which together with the HUR \eqref{HUR} leads us to the following result: if the coherent ergotropy $R_C$ is extracted via the protocol, the final dispersion of energy is bounded by:
\begin{eqnarray} \label{coherent_bound_qubit}
\sigma_E^{(f)} &\ge& \frac{1}{2 \sqrt{\log \left[\frac{\alpha^2}{4 R_C(R_C + |z|)} \right]} }.
\end{eqnarray}
On the contrary, if the process is solely incoherent (either because $\alpha = 0$ or because $\hat W_S = \hat W_S^I$), then extracting the maximal work $R_I$ results in the bound (see Corollary \ref{incoherent_work_corollary}):
\begin{eqnarray} \label{incoherent_bound_qubit}
\sigma_E^{(f)} &\ge& \sqrt{1 - R_I^2}.
\end{eqnarray}
For a specific initial state $\hat \rho_S$, the incoherent ergotropy $R_I$, and therefore also the bound \eqref{incoherent_bound_qubit}, is fixed and finite. However, the coherent part lies in the set $R_C \in \left[0, \frac{1}{2} \left(|\vec x|- |z|\right) \right]$ (depending on the initial state of the weight). In Fig. \ref{coherent_bound_plot} we plot how the minimal final dispersion changes with the value of $R_C$ within this set. In particular, it is seen that the dispersion diverges if the maximal coherent work is extracted, in accordance with Corollary \ref{dispersion_divergence_corollary}.
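The two bounds can be compared directly; the sketch below evaluates Eqs. \eqref{coherent_bound_qubit} and \eqref{incoherent_bound_qubit} for an assumed example Bloch vector:
\begin{verbatim}
import numpy as np

x, y, z = 0.5, 0.3, -0.4      # assumed Bloch vector of the qubit
alpha2 = x**2 + y**2
R_I = 0.5*(abs(z) - z)        # incoherent ergotropy (units of omega)
R_C_max = 0.5*(np.sqrt(alpha2 + z**2) - abs(z))

print(np.sqrt(1 - R_I**2))    # incoherent bound on sigma_E^(f)

for R_C in [0.25*R_C_max, 0.5*R_C_max, 0.99*R_C_max]:
    arg = alpha2 / (4*R_C*(R_C + abs(z)))
    print(R_C, 1/(2*np.sqrt(np.log(arg))))  # diverges as R_C -> R_C_max
\end{verbatim}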
\begin{figure}[t]
\centering
\includegraphics[width = 0.5 \textwidth] {sigma_min.eps}
\caption{\emph{The lower bound for the energy dispersion.} The function (solid line) represents the minimal final dispersion of the weight for an arbitrary initial Gaussian state \eqref{gaussian_states} and an arbitrary state of the qubit \eqref{qubit_state}, if the coherent ergotropy $R_C$ \eqref{coherent_qubit_ergotropy} is extracted. The dashed line corresponds to the maximal value $R_C = \frac{1}{2} \left(|\vec x| - |z|\right)$, for which the minimal variance diverges. Results are presented in energy units given by a multiple of the qubit's gap $\omega$.}
\label{coherent_bound_plot}
\end{figure}
\section{Summary and Discussion} \label{summary_section}
In this paper, we contribute to the long-standing problem of micro-scale thermodynamics: what role do quantum coherences play in the process of work extraction? We have shown that to answer this question correctly, it is not enough to construct a quasi-distribution with a proper classical limit; one must also introduce an energy-storage device as an independent quantum system (as expected for fully autonomous thermal machines).
Within such a fully quantum setup, we present a comprehensive study of work fluctuations. Primarily, we have shown the significance of the autonomous approach in Theorem \ref{work_variance_theorem}, where the $F$-term, involving interference between the system and the weight, appears as an additional contribution to the change of the energy variance (i.e., representing fluctuations of the extracted work). This contribution is crucial since the $F$-term for quantum systems can lead to a qualitatively new phenomenon of the reduction of work fluctuations, in contrast to semi-classical states, for which an increase of the variance always accompanies the work extraction. However, we stress that not every state with coherences manifests these features; e.g., coherent Gaussian wave packets behave, as expected, like semi-classical systems.
Besides, we derive the fluctuation-decoherence relation, showing that the work-locking caused by decoherence is related to the energy dispersion of the work reservoir. In particular, we reveal that unlocking the total coherent ergotropy always results in a divergence of the work fluctuations. In general, this observation points out the main difference between the extraction of coherent and incoherent work: the former can decrease the variance, but its absolute value diverges as more and more energy is extracted, whereas for the latter the gain is always non-negative, but the total (incoherent) ergotropy can be extracted with finite work fluctuations.
The framework presented here opens up several new research directions. Firstly, one can ask what the formulas are for higher cumulants of the quasi-distribution. In particular, identifying quantum contributions in higher cumulants, similar to the $F$-term, should answer the question of the impact of quantum coherences on the work extraction process. Moreover, in this paper we only concentrate on product states, leaving open the question of what role quantum correlations like entanglement play. Furthermore, the introduced quasi-distribution is interesting on its own; due to its robustness to the invasiveness of the measurements and the physical interpretation of its cumulants, it can be successfully applied to the analysis of other fluctuation theorems (e.g., involving heat transfer).
Finally, we stress that our framework is a step toward formulating the necessary conditions for a transition of the diagonal part of the work reservoir. Indeed, with the derived theorems we are able to completely characterize the work-variance phase space of a qubit; hence a natural extension is to do the same for higher dimensions and higher cumulants. In this sense, a complete characterization of the evolution of the energy statistics can be understood as an ultimate formulation of the fluctuation theorems.
\section*{Acknowledgements}
The author thanks Anthony J. Short for helpful and inspiring discussions. This research was supported by the National Science Centre, Poland, through grant SONATINA 2 2018/28/C/ST2/00364.
\newpage
\section{Introduction}
In 1996, W. Dicks and E. Ventura introduced the concept of inertia for subgroups of the abstract free groups of finite rank $F_n$: a subgroup $H$ of $F_n$ is inert if $\operatorname{rk}(H\cap K) \leq \operatorname{rk}(K)$ for every subgroup $K$ of $F_n$ (\cite{dicksventura}). In that same monograph, they proved that the subgroup of fixed points of any family of injective endomorphisms of $F_n$ is inert, and conjectured that this holds for the subgroup of fixed points of any family of endomorphisms. It can be shown (\cite[Thm. 1]{turner}, \cite[Conj. 8.1]{venturasurvey}) that such subgroups are inert in some retract of $F_n$, and thus the Dicks-Ventura Inertia Conjecture, as it is now called, is equivalent to retracts of $F_n$ being inert. This conjecture and its version for surface groups were proven in \cite[Cor. 1.5]{antolin_zapirain_20} by Y. Antolín and A. Jaikin-Zapirain.
This paper concerns the problem of inertia for retracts of Demushkin groups, defined in Section~\ref{sec-preliminaries}. For general pro-$p$ groups, we define inertia and retracts as follows:
The rank of a finitely generated pro-$p$ group $G$ is the cardinality of a minimal set of topological generators of $G$, and this cardinal number is denoted by $d(G)$. A closed subgroup $H$ of a pro-$p$ group $G$ is inert if $d(H\cap K) \leq d(K)$ for every closed subgroup $K$ of $G$. Since this inequality holds trivially whenever $K$ is not finitely generated, it suffices to check inertia for the finitely generated subgroups $K$.
We say that a closed subgroup $H$ of a pro-$p$ group $G$ is a retract of $G$ if there exists a homomorphism $\tau\colon G \to H$ that extends the identity map on $H$. Equivalently, $H$ is a retract of $G$ if there exists a closed normal subgroup $N$ of $G$ such that $G \simeq N \rtimes H$. If $H$ is a retract of $G$, then $H$ is also a retract of every closed subgroup $K$ of $G$ containing $H$, in which case we have $d(H) \leq d(K)$. Moreover, equality is achieved if and only if $H = K$, since $H/\Phi(H)$ is a direct factor of $K/\Phi(K)$.
In the free pro-$p$ case, the inertia of retracts can be proven by more elementary means. If $H$ is a retract of a finitely generated free pro-$p$ group $F$, then $H$ is a free factor of $F$ (\cite[Lem. 3.1]{lubotzkyCombinatorialGroupTheory1982}). If $K$ is any finitely generated subgroup of $F$, then $H\cap K$ is a free factor of $K$ by the pro-$p$ analogue of Kurosh's subgroup theorem (\cite[Thm. 4.4]{herfortSubgroupsFreePropproducts1987}) and therefore $d(H\cap K) \leq d(K)$.
Our main result can then be stated as:
\begin{thm}\label{thm_retracts_are_inert} Every retract of a Demushkin group is inert.
\end{thm}
The paper is organized as follows: in Section~\ref{sec-preliminaries} we define Demushkin groups and state some properties of these groups and of their homological gradients, and the proof of Theorem~\ref{thm_retracts_are_inert} is given in Section~\ref{sec_inertia}. We also fix some notation: for any pro-$p$ group $G$, we denote by $[\![\mathbb{F}_p G]\!]$ the complete group algebra of $G$ over the field $\mathbb{F}_p$ of $p$ elements. In general, if $X$ is a profinite space, $[\![\mathbb{F}_p X]\!]$ denotes the free pro-$p$ $\mathbb{F}_p$-module (vector space) over $X$. The complete tensor product of two pro-$p$ $[\![\mathbb{F}_p G]\!]$-modules $A$ and $B$ is denoted by $A \hat{\otimes}_{[\![\mathbb{F}_p G]\!]} B$. For any closed subgroup $H$ of a pro-$p$ group $G$ and any $[\![\mathbb{F}_p H]\!]$-module $M$, $\operatorname{Ind}^G_H M$ denotes the induced $[\![\mathbb{F}_p G]\!]$-module $[\![\mathbb{F}_p G]\!] \hat{\otimes}_{[\![\mathbb{F}_p H]\!]} M$.
\subsection*{Acknowledgements} The author would like to thank Andrei Jaikin-Zapirain, Pavel Zalesskii and Theo Zapata for helpful comments. He also thanks Andrei Jaikin-Zapirain for presenting him to the problem of inertia of retracts. The contents of this paper form part of the author's M.Sc. dissertation at the University of Brasilia, presented under the advice of Theo Zapata.
\section{Preliminaries}\label{sec-preliminaries}
We say that a pro-$p$ group $G$ is a Demushkin group if $G$ is a pro-$p$ Poincaré duality group in dimension $2$, that is, if $G$ satisfies the three conditions below:
\begin{enumerate}
\item $H^i(G,\mathbb{F}_p)$ is finite for each $i \geq 0$;
\item $\dim_{\mathbb{F}_p} H^2(G, \mathbb{F}_p) = 1$;
\item The cup product $$\cup\colon H^i(G,\mathbb{F}_p) \times H^{2-i}(G,\mathbb{F}_p) \to H^2(G,\mathbb{F}_p) $$ is a non-degenerate bilinear form for every $i \geq 0$.
\end{enumerate}
Some examples of Demushkin groups are the cyclic group of order two $\mathbb{Z}/2\mathbb{Z}$, semidirect products of the form $\mathbb{Z}_p \rtimes \mathbb{Z}_p$ and the pro-$p$ completions of orientable surface groups. The first one is the only finite Demushkin group and the only one to have infinite cohomological dimension, and the first two comprise the class of all solvable Demushkin groups.
We recall the rank formula for open subgroups of Demushkin groups: if $U$ is an open subgroup of a Demushkin group $G$, then $U$ is also a Demushkin group and its rank satisfies
\begin{equation}\label{eq_demushkin_rank_formula}
d(U) = (G\colon U)(d(G) - 2) + 2\,.
\end{equation}
Moreover, if $H$ is a closed subgroup of $G$ with infinite index, then $H$ is a free pro-$p$ group (\cite[Section~4.5]{serre_97}).
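As a quick illustration of the rank formula~(\ref{eq_demushkin_rank_formula}): if $d(G) = 4$ and $U$ is an open subgroup of index $3$, then $d(U) = 3\cdot(4-2) + 2 = 8$.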
We now define some homological gradients of Demushkin groups that will be used on Section~\ref{sec_inertia}.
For a pro-$p$ group $G$ and a profinite $[\![\mathbb{F}_p G]\!]$-module $M$, we say that $M$ is finitely generated if there exists a finite collection of elements $m_1,\ldots,m_k \in M$ such that every element of $M$ can be written as a finite linear combination of the $m_i$ with coefficients in $[\![\mathbb{F}_p G]\!]$. We say that $M$ is finitely related as an $[\![\mathbb{F}_p G]\!]$-module if $H_1(G, M)$ is finite. If $M$ is both finitely generated and finitely related, we say that $M$ is a finitely presented $[\![\mathbb{F}_p G]\!]$-module.
By Nakayama's lemma, we know that $M$ is a finitely generated $[\![\mathbb{F}_p G]\!]$-module if and only if $$M/I_G M \simeq M_G \simeq H_0(G,M)$$ is finite, where $I_G$ denotes the augmentation ideal of $[\![\mathbb{F}_p G]\!]$ (see Section~\ref{sec_inertia}). If $M$ is finitely generated and $$0 \to N \to F \to M \to 0$$ is a presentation of $M$ with $F$ a free $[\![\mathbb{F}_p G]\!]$-module of minimal rank, the long exact sequence induced in homology gives us an isomorphism $$H_1(G,M) \simeq H_0(G,N)\,,$$ from which we deduce that $M$ is finitely related if and only if $N$ is a finitely generated $[\![\mathbb{F}_p G]\!]$-module.
\begin{defn} For a pro-$p$ group $G$ and a profinite $[\![\mathbb{F}_p G]\!]$-module $M$, the rank gradient $\beta_0^G(M)$ and the relation gradient $\beta_1^G(M)$ are defined as
the infimum $$\beta_i^G(M) = \inf_{U \unlhd_o G} \frac{\dim_{\mathbb{F}_p} H_i(U,M)}{(G\colon U)}\,.$$
\end{defn}
The gradients are always non-negative (and possibly infinite) real numbers. Since we have $$\frac{\dim_{\mathbb{F}_p} H_i(U,M)}{(G\colon U)} \leq \frac{\dim_{\mathbb{F}_p} H_i(V,M)}{(G\colon V)}$$ whenever $U \leq V$ by \cite[Lem. 4.2]{jaikin_shusterman_19}, we see that if $M$ is finitely generated (resp. presented), then $\beta_0^G(M)$ (resp. $\beta_1^G(M)$) is finite. We recall some properties of the rank and relation gradients:
\begin{prop}[{\cite[Section~4]{jaikin_shusterman_19}}]\label{prop_properties_of_gradients} Let $G$ be a finitely presented pro-$p$ group. Then, for any $i \in \{0,1\}$ and any $[\![\mathbb{F}_p G]\!]$-module $M$ such that $\beta_i^G(M)$ is finite, the following statements hold:
\begin{enumerate}[(i)]
\item For any open subgroup $U$ of $G$, we have $\beta_i^U(M) = (G\colon U)\beta_i^G(M)$;
\item For any closed subgroup $H$ of $G$ and any $[\![\mathbb{F}_p H]\!]$-module $N$ such that $\beta_i^H(N)$ is finite, we have $\beta_i^G(\operatorname{Ind}^G_H N) = \beta_i^H(N)$;
\item If $M'$ is a submodule of $M$ and $M'' = M/M'$, then $$\beta_i^G(M) \leq \beta_i^G(M') + \beta_i^G(M'')\,;$$
\end{enumerate}
\end{prop}
For a closed and finitely generated subgroup $H$ of an infinite Demushkin group $G$, the relation gradient of the induced module $\operatorname{Ind}^G_H \mathbb{F}_p \simeq [\![\mathbb{F}_p (G/H)]\!]$ satisfies
\begin{equation}\label{eq_relation_gradient_demushkin}
\beta_1^G([\![\mathbb{F}_p(G/H)]\!]) =
\begin{cases}
d(H) - 2\,,\ \text{ if }(G\colon H) < \infty\,,\\
d(H) - 1\,,\ \text{ otherwise.}
\end{cases}
\end{equation}
This is a consequence of the rank formula~(\ref{eq_demushkin_rank_formula}) when $(G\colon H)$ is finite and of Schreier's formula (\cite[Thm.~3.6.2]{ribes_zalesskii_10}) when $(G\colon H)$ is infinite. We also need the following inequality for the relation gradients of $[\![\mathbb{F}_pG]\!]$-modules when $G$ is an infinite Demushkin group:
\begin{lem}[{\cite[Prop. 4.6]{jaikin_shusterman_19}}]\label{lem_criteria_inequality_relation_gradient} Let $G$ be a non-solvable Demushkin group and $M$ be a finitely related $[\![\mathbb{F}_pG]\!]$-module. If $N$ is a submodule of $M$ such that $M/N$ is either finite or satisfies $H_2(G,M/N) = 0$, then $\beta_1^G(N) \leq \beta_1^G(M)$.
\end{lem}
\section{Inertia of retracts}\label{sec_inertia}
For any pro-$p$ group $G$ and any finitely generated closed subgroup $H$ of $G$, define the $[\![\mathbb{F}_pG]\!]$-module $I_{G/H}$ as the kernel of the map $[\![\mathbb{F}_p(G/H)]\!] \to \mathbb{F}_p$ that sends every coset $gH \in G/H$ to the multiplicative identity $1$ in $\mathbb{F}_p$. We write simply $I_G$ for $I_{G/\{1\}}$.
Suppose that $G$ is a Demushkin group. From the long exact sequence on cohomology associated with the inclusion $I_{G/H} \to [\![\mathbb{F}_p(G/H)]\!]$ we find that $$\dim_{\mathbb{F}_p} H_1(G,I_{G/H}) < \dim_{\mathbb{F}_p} H_1(G,[\![\mathbb{F}_p(G/H)]\!]) + \dim_{\mathbb{F}_p} H_2(G,\mathbb{F}_p) = d(H) + 1\,,$$ from which we conclude that $\beta_1^G(I_{G/H})$ is always finite.
\begin{lem}\label{lem_independent_rank} Let $G$ be an infinite Demushkin group and $H$ a closed subgroup of $G$ such that $\beta_1^G(I_{G/H}) = 0$. Then, $d(H) \leq d(G)$. Moreover, if $G$ is solvable, then every closed subgroup $H$ of $G$ is such that $\beta_1^G(I_{G/H}) = 0$. If $G$ is not solvable and the inclusion $H \subseteq G$ is proper, then the index $(G\colon H)$ is infinite, that is, $H$ is a free pro-$p$ group.
\end{lem}
\begin{proof} The first part is a straightforward computation:
\begin{align*}
d(H) - 2 &\leq \beta_1^G([\![\mathbb{F}_p(G/H)]\!])\,,\ \text{ by }(\ref{eq_relation_gradient_demushkin})\\
&\leq \beta_1^G(I_{G/H}) + \beta_1^G(\mathbb{F}_p) = d(G) - 2\,.
\end{align*}
If $G$ is solvable, then for every closed subgroup $H$ of $G$ we have $$\beta_1^G([\![\mathbb{F}_p(G/H)]\!]) = 0$$ by~(\ref{eq_relation_gradient_demushkin}), since $d(H)$ equals $2$ or $1$ according to whether the index $(G\colon H)$ is finite or infinite. Thus, $\beta_1^G(I_{G/H}) = 0$ by Lemma~\ref{lem_criteria_inequality_relation_gradient}. Suppose now that $G$ is not solvable, that is, $d(G) > 2$. If the index $(G\colon H)$ is finite, then after substituting $d(H)$ in the inequality $d(H) \leq d(G)$ by the expression from the rank formula~(\ref{eq_demushkin_rank_formula}), we find that $(G\colon H) = 1$.
\end{proof}
The following proposition is a small extension of \cite[Prop. 4.5]{antolin_zapirain_20}, and it provides us with our main source of subgroups $H$ of Demushkin groups $G$ with vanishing $\beta_1^G(I_{G/H})$.
\begin{prop}\label{prop_retracts_are_independent} Any proper retract $H$ of an infinite Demushkin group $G$ is a finitely generated free pro-$p$ subgroup of $G$ satisfying $\beta_1^G(I_{G/H}) = 0$.
\end{prop}
\begin{proof} Observe that the induced map on the Frattini quotients $H/\Phi(H) \to G/\Phi(G)$ is injective, and therefore the vanishing of $\beta_1^G(I_{G/H})$ follows from \cite[Prop. 4.5]{antolin_zapirain_20}. Let $\tau\colon G \to H$ be the retraction map from $G$ onto a proper retract $H$. Since the index $(G\colon H)$ equals the order of the kernel $\ker(\tau)$ and $G$ is torsion-free, we find that $(G\colon H) = \infty$ and therefore $H$ is a free pro-$p$ group which is finitely generated (because it is a quotient group of $G$).
\end{proof}
Finally, let $H$ be a proper retract of an infinite Demushkin group $G$ and take any closed and finitely generated subgroup $K$ of $G$. By Proposition~\ref{prop_retracts_are_independent}, we have $\beta_1^G(I_{G/H}) = 0$. Since all Demushkin groups possess Howson's property \cite[Thm.~3.1]{shusterman_zalesskii_20}, the intersection $H\cap K$ is a finitely generated subgroup, and therefore the relation gradient $\beta_1^K(I_{K/(H\cap K)})$ is also defined. Hence, Theorem~\ref{thm_retracts_are_inert} is implied by the vanishing of $\beta_1^K(I_{K/(H\cap K)})$, which we shall now prove.
\begin{thm}\label{thm_intersection_independent_subgroup} Let $H$ be a subgroup of a Demushkin group $G$ with $\beta_1^G(I_{G/H}) = 0$, and let $K$ be a closed and finitely generated subgroup of $G$. Then, $\beta_1^K(I_{K/(H\cap K)}) = 0$.
\end{thm}
\begin{proof} Without loss of generality, we may assume that $H$ is a proper subgroup of $G$. If $G$ is solvable, then the claim follows from Lemma~\ref{lem_independent_rank}, so henceforth we also assume that $d(G) > 2$. Thus, $H$ is a finitely generated free pro-$p$ group by Lemma~\ref{lem_independent_rank}.
Let $P$ be the kernel of the surjection $[\![\mathbb{F}_pK]\!] \to [\![\mathbb{F}_p(K/H\cap K)]\!]$. We have an isomorphism of $[\![\mathbb{F}_pK]\!]$-modules $I_{K/(H\cap K)} \simeq I_K/P$. If $O$ denotes the kernel of $[\![\mathbb{F}_p G]\!] \to [\![\mathbb{F}_p (G/H)]\!]$, then after identifying $[\![\mathbb{F}_p K]\!]$ with a submodule of $[\![\mathbb{F}_p G]\!]$ we also find that $P = I_K\cap O$. Hence, we also have an isomorphism of $K$-modules $I_{K/(H\cap K)} \simeq I_K/(I_K\cap O)$. This allows us to identify $I_{K/(H\cap K)}$ with an $[\![\mathbb{F}_p K]\!]$-submodule of $I_G/O \simeq I_{G/H}$, since $P$, $O$ and $I_K$ are $[\![\mathbb{F}_p K]\!]$-submodules of $I_G$. By choosing a complete set of representatives $X$ for the double cosets $H\backslash G/K$ with $1 \in X$, we obtain the following isomorphism for the quotient module:
\begin{equation} \label{eq_isomorphism_relative_augmentation_kernels}
\begin{aligned}
I_{G/H}/I_{K/(H\cap K)} &\simeq (I_G/O)/(I_{K}/I_K \cap O)\\
&\simeq [\![\mathbb{F}_p(G/H)]\!]/[\![\mathbb{F}_p(K/H\cap K)]\!]\\
&\simeq \left(\bigoplus_{x \in X} [\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!]\right)/[\![\mathbb{F}_p(K/H\cap K)]\!]\,.
\end{aligned}
\end{equation}
Since $K$ may have infinite index, the set $X$ may be infinite. Thus, the direct sum of pro-$p$ modules appearing in~(\ref{eq_isomorphism_relative_augmentation_kernels}) denotes the direct sum of modules indexed by a profinite space as in \cite[Section 9.1]{ribes_17}. We shall freely use the fact that this direct sum commutes with the homology operator (\cite[Thm. 9.1.3]{ribes_17}), that is, for all $i \in \mathbb{N}$ we have an isomorphism:
$$H_i\left(K,\bigoplus_{x \in X} [\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!]\right) \simeq \bigoplus_{x \in X} H_i(K, [\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!])\,.$$
Suppose first that $K$ is open in $G$, in which case the isomorphism~(\ref{eq_isomorphism_relative_augmentation_kernels}) becomes $$I_{G/H}/I_{K/(K\cap H)} \simeq \bigoplus_{x \in X - \{1\}} [\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!]\,.$$ Then, since $\beta_1^G(I_{G/H}) = 0$, we find that $$\beta_1^K(I_{G/H}) = (G\colon K)\beta_1^G(I_{G/H}) = 0\,.$$ Therefore, by Lemma~\ref{lem_criteria_inequality_relation_gradient}, it suffices to prove that $$H_2(K,I_{G/H}/I_{K/(H\cap K)}) \simeq \bigoplus_{x \in X-\{1\}} H_2(K,[\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!]) = 0\,.$$ However, $xHx^{-1} \cap K$ is a free pro-$p$ group, and thus we apply Shapiro's lemma to obtain $$H_2(K,[\![\mathbb{F}_p(K/xHx^{-1}\cap K)]\!]) \simeq H_2(xHx^{-1}\cap K, \mathbb{F}_p) = 0\,.$$ Hence, $\beta_1^K(I_{K/(H\cap K)}) = 0$.
Suppose now that the index $(G\colon K)$ is infinite; we shall prove again that $\beta_1^K(I_{G/H})$ vanishes. Since $$\beta_1^K(I_{G/H}) = \beta_1^G(\operatorname{Ind}^G_K I_{G/H}) = \beta_1^G([\![\mathbb{F}_p (K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} I_{G/H})\,,$$ it suffices to show the inequality
\begin{equation}\label{eq_submultiplicativity_independence}
\beta_1^G([\![\mathbb{F}_p (K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} I_{G/H}) \leq \beta_1^G([\![\mathbb{F}_p (K\backslash G)]\!]) \beta_1^G(I_{G/H}) = 0\,.
\end{equation}
We proceed as in \cite[Section~8]{jaikin_shusterman_19}.
By \cite[Cor. 7.4]{jaikin_shusterman_19}, there is an open subgroup $U$ of $G$ and an $[\![\mathbb{F}_p U]\!]$-submodule $A$ of $[\![\mathbb{F}_p(K\backslash G)]\!]$ such that $\beta_1^U(A) = 0$ and
\begin{equation}\label{eq_first_thm_dimension_submodule_A}
\dim_{\mathbb{F}_p} [\![\mathbb{F}_p(K\backslash G)]\!]/A \leq \beta_1^G([\![\mathbb{F}_p(K\backslash G)]\!])\,.
\end{equation}
Hence, we obtain the inequality
\begin{equation}\label{eq_first_thm_first_bound}
\beta_1^U([\![\mathbb{F}_p(K\backslash G)]\!]\hat{\otimes}_{\mathbb{F}_p} I_{G/H}) \leq \beta_1^U(A\hat{\otimes}_{\mathbb{F}_p} I_{G/H}) + \beta_1^U(([\![\mathbb{F}_p(K\backslash G)]\!]/A)\hat{\otimes}_{\mathbb{F}_p} I_{G/H})\,.
\end{equation}
By repeatedly applying the subadditivity of the relation gradient and the inequality~(\ref{eq_first_thm_dimension_submodule_A}), we find that the last term of~(\ref{eq_first_thm_first_bound}) satisfies the bound
\begin{equation}\label{eq_first_thm_last_term}
\beta_1^U(([\![\mathbb{F}_p(K\backslash G)]\!]/A)\hat{\otimes}_{\mathbb{F}_p} I_{G/H}) \leq \beta_1^G([\![\mathbb{F}_p(K\backslash G)]\!])\beta_1^U(I_{G/H}) = 0\,.
\end{equation}
The $[\![\mathbb{F}_p K]\!]$-module $I_{G/H}$ is finitely related since $K$ is finitely generated and there is an embedding $$H_1(K, I_{G/H}) \hookrightarrow H_1(K,[\![\mathbb{F}_p(G/H)]\!]) \simeq \bigoplus_{x \in X} H_1(xHx^{-1} \cap K,\mathbb{F}_p)$$ with the latter group being finite by \cite[Cor. 3.4]{jaikin_shusterman_19} and Howson's property. Thus, we can apply \cite[Lem. 7.1]{jaikin_shusterman_19} to find an open $[\![\mathbb{F}_p G]\!]$-submodule $B$ of $I_{G/H}$ such that $\beta_1^K(B) = 0$. Once again, we have the inequality
\begin{equation}\label{eq_first_thm_second_bound}
\beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} I_{G/H}) \leq \beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} B) + \beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} (I_{G/H}/B))\,.
\end{equation}
As was the case with~(\ref{eq_first_thm_last_term}), repeated applications of the subadditive property give us the bound
\begin{equation}\label{eq_first_thm_first_term}
\beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} (I_{G/H}/B)) \leq \beta_1^U(A)\cdot \dim_{\mathbb{F}_p} (I_{G/H}/B) = 0\,.
\end{equation}
Hence, by combining the inequalities~(\ref{eq_first_thm_first_bound}) through~(\ref{eq_first_thm_first_term}), we obtain
\begin{equation}\label{eq_first_thm_last_bound}
\beta_1^U([\![\mathbb{F}_p(K\backslash G)]\!]\hat{\otimes}_{\mathbb{F}_p} I_{G/H}) \leq \beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} B)\,.
\end{equation}
Since $U$ has cohomological dimension 2, we have the inequalities
\begin{align*}
\dim_{\mathbb{F}_p} H_2(U,B) &\leq \dim_{\mathbb{F}_p} H_2(U,I_{G/H})\\
&\leq \dim_{\mathbb{F}_p} H_2(U,[\![\mathbb{F}_p(G/H)]\!])\\
&= \dim_{\mathbb{F}_p} H_2(G, [\mathbb{F}_p(G/U)] \otimes_{\mathbb{F}_p} [\![\mathbb{F}_p (G/H)]\!])\text{, by Shapiro's lemma}\\
&= \dim_{\mathbb{F}_p} H_2(H, [\mathbb{F}_p(G/U)])\text{, again by Shapiro's lemma}\\
&= 0\,.
\end{align*}
Since $\dim_{\mathbb{F}_p} H_2(U,-)$ is also subadditive with respect to short exact sequences, we deduce that
\begin{equation}\label{eq_first_thm_H2_vanishes}
H_2(U, ([\![\mathbb{F}_p(K\backslash G)]\!]/A)\hat{\otimes}_{\mathbb{F}_p} B) = 0\,.
\end{equation}
We wrap up the argument with the observation that Lemma~\ref{lem_criteria_inequality_relation_gradient} applies to the short exact sequence of $[\![\mathbb{F}_pU]\!]$-modules $$0 \to A\hat{\otimes}_{\mathbb{F}_p} B \to [\![\mathbb{F}_p(K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} B \to ([\![\mathbb{F}_p(K\backslash G)]\!]/A)\hat{\otimes}_{\mathbb{F}_p} B \to 0$$ by~(\ref{eq_first_thm_H2_vanishes}). Thus:
\begin{align}\label{eq_first_thm_last_vanishing}
\beta_1^U(A\hat{\otimes}_{\mathbb{F}_p} B) &\leq \beta_1^U([\![\mathbb{F}_p(K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} B)\\
&= (G\colon U)\beta_1^G(\operatorname{Ind}^G_K B)\notag\\
&= (G\colon U)\beta_1^K(B) = 0\,.\notag
\end{align}
Multiplying by $(G\colon U)$ and using the index proportionality of the relation gradient (statement (i) of Proposition~\ref{prop_properties_of_gradients}), we see that the combination of~(\ref{eq_first_thm_first_bound}) through~(\ref{eq_first_thm_last_vanishing}) is readily equivalent to~(\ref{eq_submultiplicativity_independence}):
\begin{align*}
\beta_1^G([\![\mathbb{F}_p(K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} I_{G/H}) &\leq (G\colon U)\beta_1^U([\![\mathbb{F}_p(K\backslash G)]\!] \hat{\otimes}_{\mathbb{F}_p} I_{G/H})\\
&\leq (G\colon U)\beta_1^U(A\hat{\otimes}_{\mathbb{F}_p} I_{G/H})\,,\text{ by (\ref{eq_first_thm_first_bound}) and (\ref{eq_first_thm_last_term})}\\
&\leq (G\colon U)\beta_1^U(A \hat{\otimes}_{\mathbb{F}_p} B)\,,\text{ by (\ref{eq_first_thm_second_bound}) and (\ref{eq_first_thm_first_term})}\\
&= 0\,,\text{ by (\ref{eq_first_thm_last_vanishing}).}
\end{align*}
Since $K$ has cohomological dimension 1, there is the bound $$\beta_1^K(I_{K/(H\cap K)}) \leq \beta_1^K(I_{G/H}) = 0\,,$$ which shows that $\beta_1^K(I_{K/(H\cap K)}) = 0$.
\end{proof}
\begin{proof}[Proof of \ref{thm_retracts_are_inert}] If $G$ is solvable, then every closed subgroup $H$ of $G$ is inert; this follows from the fact that $d(K) \leq 2$ for every closed subgroup $K$ of $G$, together with the observation that every closed subgroup of a procyclic pro-$p$ group is procyclic, so that $d(H\cap K) \leq d(K)$ also holds when $d(K) \leq 1$. If $G$ is not solvable, then the statement follows from Lemma~\ref{lem_independent_rank}, Proposition~\ref{prop_retracts_are_independent} and Theorem~\ref{thm_intersection_independent_subgroup}.
\end{proof}
\bibliographystyle{amsplain}
\section{Evidence indicating transfer to real-world}
Due to the scope of the problem and the ongoing pandemic, we limit our experiments to simulation. However, in the paper we provide evidence indicating that the learned policies can be transferred to the real world in the future. We summarize this evidence as follows.
\paragraph{Convex decomposition}
The objects after the convex decomposition still have geometrically different and complex shapes, as shown in \figref{fig:convex_decomp}. The objects in the EGAD dataset are 3D printable. The YCB objects are available in the real world.
\paragraph{Action space}
We control the finger joints via relative position control, as explained in \secref{sec:learn_teacher}. This suffers from a smaller sim-to-real gap than applying torque control directly on the joints.
\paragraph{Student policies}
We designed two student policies, and both use observation data that can be readily acquired in the real world. The first student policy only requires the joint positions and the object pose. The object pose can be obtained using a pose estimation system or a motion capture system in the real world. Our second student policy only requires the point cloud of the scene and the joint positions. We can get the point cloud in the real world by using RGBD cameras such as the Realsense D415, Azure Kinect, etc.
\paragraph{Domain randomization}
We also trained and tested our policies with domain randomization. We randomized the object mass, friction, joint damping, tendon damping, tendon stiffness, etc. \tblref{tbl:dyn_ran_params} lists all the parameters we randomized in our experiments. We also add noise to the state observations and action commands, as shown in \tblref{tbl:dyn_ran_params}. For the vision experiments, we also added noise (various forms of data augmentation, including point position jittering, color jittering, dropout, etc.) to the point cloud observations in training and testing, as explained in \secref{app_subsec:vision_noise}.
The results in \tblref{tbl:up_success_rate}, \tblref{tbl:down_air_success_rate}, and \tblref{tbl:vision_pol_noise} show that even after adding randomization/noise, we can still obtain good success rates with the trained policies. Even though we cannot replicate the true real-world setup in simulation, our results with domain randomization indicate a high likelihood that our policies can be transferred to the real Shadow hand. Prior work~\cite{andrychowicz2020learning} has also shown that domain randomization can effectively reduce the sim-to-real gap.
\paragraph{Torque analysis}
We also conducted a torque analysis, as shown in \secref{app_subsec:torque_analysis}. We can see that the peak torque values remain in a reasonable and affordable range for the Shadow hand. This indicates that our learned policies are unlikely to cause motor overload on the real Shadow hand.
\newpage
\section{Environment Setup}
\begin{figure}[!h]
\centering
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/upward/egad_1.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/upward/egad_2.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/upward/ycb_3.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/upward/ycb_4.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/upward/ycb_2.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down_table/egad_5.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down_table/egad_6.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down_table/ycb_1.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down_table/ycb_2.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down_table/ycb_6.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down/egad_5.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down/egad_7.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down/ycb_4.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down/ycb_5.png}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/teaser/down/ycb_6.png}
\end{subfigure}
\caption{We learn policies that can reorient many objects in three scenarios respectively: (1) hand faces upward, (2) hand faces downward with a table below the hand, (3) hand faces downward without any table. The extra object in each figure shows the desired orientation.}
\label{fig:reorient_examples}
\end{figure}
\subsection{State definition}
\label{app_subsec:state_def}
The full state space $\mathbb{S}^\expsymbol$ includes joint, fingertip, object, and goal information, detailed in \tblref{tbl:state_def}. To compute the angle difference between two quaternion orientations $\alpha_1$ and $\alpha_2$, we first compute the difference rotation quaternion $\beta=\alpha_1\alpha_2^{-1}$. The angle difference (distance between the two rotations) $\Delta\theta$ is then computed as the rotation angle of $\beta$ obtained from its axis-angle representation.
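For concreteness, a small Python sketch of this computation (an illustration in the $(w,x,y,z)$ quaternion convention, not necessarily our exact implementation) is given below.
\begin{verbatim}
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions in (w, x, y, z) convention.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def angle_diff(a1, a2):
    # beta = a1 * a2^{-1}; for unit quaternions the inverse is the conjugate.
    beta = quat_mul(a1, np.array([a2[0], -a2[1], -a2[2], -a2[3]]))
    return 2*np.arccos(np.clip(abs(beta[0]), 0.0, 1.0))

# Example: 90-degree rotation about z vs. identity gives pi/2.
a1 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(angle_diff(a1, np.array([1.0, 0.0, 0.0, 0.0])))  # ~1.5708
\end{verbatim}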
\renewcommand{\arraystretch}{1.3}
\begin{table}[!htb]
\small{
\caption{Full state $s_t^\mathcal{E}\in\mathbb{R}^{134}$ information. Orientations are in the form of quaternions.}
\label{tbl:state_def}
\centering
\setlength\tabcolsep{3pt}
\begin{tabular}{ll|ll|ll}
\hline
Parameter & Description & Parameter & Description & Parameter & Description \\
\hline
$q_t\in\mathbb{R}^{24}$ & joint positions & $v^f_t\in\mathbb{R}^{15}$ & fingertip linear velocities & $\alpha^g\in\mathbb{R}^{4}$ & object goal orientation \\
$\dot{q}_t\in\mathbb{R}^{24}$ & joint velocities & $w^f_t\in\mathbb{R}^{15}$& fingertip angular velocities & $v^o_t\in\mathbb{R}^{3}$ & object linear velocity \\
$p^f_t\in\mathbb{R}^{15}$ & fingertip positions & $p^o_t\in\mathbb{R}^{3}$ & object position & $w^o_t\in\mathbb{R}^{3}$ & object angular velocity \\
$\alpha^f_t\in\mathbb{R}^{20}$ & fingertip orientations & $\alpha^o_t\in\mathbb{R}^{4}$ & object orientation & $\beta_t\in\mathbb{R}^{4}$ & orientation difference $\alpha^o_t(\alpha^g)^{-1}$ \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
\subsection{Dataset}
\label{app_subsec:dataset}
We use two object datasets (EGAD and YCB) in our paper. To increase the diversity of the datasets, we create $5$ variants of each object mesh by randomly scaling it. The scaling ratios are sampled such that the longest side of each object's bounding box, $l_{\max}$, lies in $[0.05, 0.08]$m for EGAD objects and in $[0.05, 0.12]$m for YCB objects. The mass of each object is randomly sampled from $[0.05, 0.15]$kg. When we randomly scale YCB objects, some become very small and/or thin, making the reorientation task even more challenging. In total, we use $11410$ EGAD object meshes and $390$ YCB object meshes for training.
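The variant generation can be sketched as follows (a minimal sketch assuming the \texttt{trimesh} library and that each file loads as a single mesh; the function is illustrative of the procedure, not our exact pipeline):
\begin{verbatim}
import numpy as np
import trimesh

def make_variants(mesh_path, n_variants=5, lmax_range=(0.05, 0.08),
                  mass_range=(0.05, 0.15)):
    # Create randomly rescaled copies of one object mesh.
    variants = []
    for _ in range(n_variants):
        mesh = trimesh.load(mesh_path)
        l_max = mesh.extents.max()   # longest bounding-box side
        target = np.random.uniform(*lmax_range)
        mesh.apply_scale(target / l_max)
        mass = np.random.uniform(*mass_range)
        variants.append((mesh, mass))
    return variants
\end{verbatim}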
\figref{fig:egad_ycb_dataset} shows examples from the EGAD and YCB datasets. We can see that these objects are geometrically diverse and have complex shapes. We also use V-HACD~\citep{mamou2016volumetric} to perform an approximate convex decomposition on the object meshes for fast collision detection in the simulator. \figref{fig:convex_decomp} shows the object shapes before and after the decomposition; the decomposed collision meshes preserve the geometric diversity of the originals.
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A2_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A4_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B5_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B6_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/F6_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/G2_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/C4_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/D4_v.jpeg}
\end{subfigure}
\\\vspace{0.02cm}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/003_cracker_box.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/025_mug.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/065-d_cups.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/011_banana.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/072-b_toy_airplane.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/073-c_lego_duplo.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/004_sugar_box.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.11\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/ycb/048_hammer.jpeg}
\end{subfigure}%
\caption{First row: examples of EGAD objects. Second row: examples of YCB objects.}
\label{fig:egad_ycb_dataset}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A2_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A4_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B5_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B6_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/C4_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/C6_v.jpeg}
\end{subfigure}%
\\\vspace{0.02cm}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A2_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/A4_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B5_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/B6_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/C4_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/C6_c.jpeg}
\end{subfigure}%
\\
\vspace{0.4cm}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/D4_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/D5_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/E3_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/F6_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/G2_v.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/G3_v.jpeg}
\end{subfigure}%
\\\vspace{0.02cm}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/D4_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/D5_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/E3_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/F6_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/G2_c.jpeg}
\end{subfigure}
\begin{subfigure}[t]{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/vhacd/egad/G3_c.jpeg}
\end{subfigure}%
\caption{Examples of EGAD objects. The first and third rows show the visual meshes of the objects. The second and fourth rows show the corresponding collision meshes (after V-HACD decomposition).}
\label{fig:convex_decomp}
\end{figure}
\subsection{Camera setup}
\label{subsec:camera_setup}
We placed two RGBD cameras above the hand, as shown in \figref{fig:camera_pos}. In Isaac Gym, we set a camera pose by specifying its position and its focus position. The two cameras' positions are offset from the Shadow hand's base origin by $[-0.6, -0.39, 0.8]$ and $[0.45, -0.39, 0.8]$ respectively, and their focus points are offset from the Shadow hand's base origin by $[-0.08, -0.39, 0.15]$ and $[0.045, -0.39, 0.15]$ respectively.
\begin{figure}[!htb]
\centering
\includegraphics[height=0.3\linewidth]{figures/camera/cam1.png}
\includegraphics[angle=90,height=0.3\linewidth]{figures/camera/cam2.png}
\caption{Camera positions}
\label{fig:camera_pos}
\end{figure}
\section{Experiment Setup}
\label{app_sec:exp_setup}
\subsection{Network architecture}
\label{app_subsec:network_arch}
For the non-vision policies, we experimented with two architectures: the MLP policy $\pi_M$ consists of $3$ hidden layers with $512$, $256$, and $256$ neurons respectively, while the RNN policy $\pi_R$ has $3$ hidden layers ($512 - 256 - 256$) followed by a $256$-dim GRU layer and one more $256$-dim hidden layer. We use the exponential linear unit (ELU)~\citep{clevert2015fast} as the activation function.
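A minimal PyTorch sketch of the two architectures (the action and value heads are simplified; the exact head structure is our assumption):
\begin{verbatim}
import torch.nn as nn

def mlp_trunk(obs_dim):
    # MLP policy trunk: 3 hidden layers (512-256-256) with ELU.
    return nn.Sequential(
        nn.Linear(obs_dim, 512), nn.ELU(),
        nn.Linear(512, 256), nn.ELU(),
        nn.Linear(256, 256), nn.ELU(),
    )

class RNNPolicy(nn.Module):
    # The same trunk, followed by a 256-dim GRU and one more
    # 256-dim hidden layer before the action head.
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.trunk = mlp_trunk(obs_dim)
        self.gru = nn.GRU(256, 256, batch_first=True)
        self.head = nn.Sequential(nn.Linear(256, 256), nn.ELU(),
                                  nn.Linear(256, act_dim))

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim)
        feat = self.trunk(obs_seq)
        out, hidden = self.gru(feat, hidden)
        return self.head(out), hidden
\end{verbatim}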
For our vision policies, we design a sparse convolutional network architecture (\textit{Sparse3D-IMPALA-Net}). As shown in \figref{fig:vision_policy_arch}, the point cloud $W_t$ is processed by a series of sparse CNN residual modules and projected into an embedding vector. $q_t$ and $a_{t-1}$ are concatenated together and projected into an embedding vector via an MLP. Both embedding vectors from $W_t$ and $(q_t, a_{t-1})$ are concatenated and passed through a recurrent network to output the action $a_t$.
\subsection{Training details}
\label{app_sec:training_details}
All the experiments in the paper were run on at most $2$ GPUs, each with $32$GB of memory. We use PPO~\citep{schulman2017proximal} to learn $\pi^\expsymbol$. \tblref{tbl:hyper_params} lists the hyperparameters for the experiments. We use 40K parallel environments for data collection. We update the policy with the rollout data for $8$ epochs after every $8$ rollout steps for the MLP policies and every $50$ rollout steps for the RNN policies. A rollout episode is terminated (reset) if the object is reoriented to the goal orientation successfully, the object falls, or the maximum episode length is reached. To learn the student policies $\pi^\stusymbol$, we use Dagger~\citep{ross2011reduction}. While Dagger typically keeps all the state-action pairs for training the policy, we run Dagger in an online fashion where $\pi^\stusymbol$ only learns from the latest rollout data.
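One online distillation step can be sketched as follows (a minimal PyTorch sketch assuming diagonal-Gaussian policy heads; the function and batch layout are illustrative, not our exact implementation):
\begin{verbatim}
import torch
from torch.distributions import Normal, kl_divergence

def dagger_update(teacher, student, optimizer, batch):
    # batch: the latest rollout collected by acting with the
    # student, holding teacher states s_E, student states s_S,
    # and previous actions a_prev.
    s_E, s_S, a_prev = batch
    with torch.no_grad():
        mu_t, std_t = teacher(s_E, a_prev)  # teacher action dist.
    mu_s, std_s = student(s_S, a_prev)      # student action dist.
    loss = kl_divergence(Normal(mu_t, std_t),
                         Normal(mu_s, std_s)).sum(-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}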
For the vision experiments, the number of parallel environments is 360 and we update policy after every $50$ rollout steps from all the parallel environments. The batch size is 30. We sample 15000 points from the reconstructed point cloud of the scene from 2 cameras for the scene point cloud $W_t^s$ and sample 5000 points from the object CAD mesh model for the goal point cloud $W^g$.
We use Horovod~\citep{sergeev2018horovod} for distributed training and Adam~\citep{kingma2014adam} optimizer for neural network optimization.
\textbf{Reward function for reorientation}: For training $\pi^\expsymbol$ for the reorientation task, we modified the reward function proposed in \citep{makoviychuk2021isaac} to be:
\begin{align}
\label{eqn:rot_reward}
r(s_t, a_t) = c_{\theta_1}\frac{1}{|\Delta \theta_t|+\epsilon_\theta}+c_{\theta_2}\mathds{1}(|\Delta \theta_t| < \Bar{\theta}) + c_3\left\Vert a_t\right\Vert_2^2
\end{align}
where $c_{\theta_1}>0$, $c_{\theta_2}>0$, and $c_3<0$ are coefficients, $\Delta\theta_t$ is the difference between the current object orientation and the target orientation, $\epsilon_\theta$ is a small constant, and $\mathds{1}$ is an indicator function that equals $1$ when the object is within $\Bar{\theta}$ of the target orientation. The first two reward terms encourage the policy to reorient the object to the desired orientation, while the last term penalizes large action commands.
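In code, the reward in Eq.~\ref{eqn:rot_reward} amounts to the following (a minimal sketch with the coefficient values from \tblref{tbl:hyper_params}; the sign of $c_3$ follows the constraint $c_3<0$ stated above):
\begin{verbatim}
import numpy as np

def reorient_reward(delta_theta, action,
                    c_theta1=1.0, c_theta2=800.0, c3=-0.1,
                    eps_theta=0.1, theta_bar=0.1):
    r = c_theta1 / (abs(delta_theta) + eps_theta)        # dense shaping
    r += c_theta2 * float(abs(delta_theta) < theta_bar)  # success bonus
    r += c3 * np.sum(np.square(action))                  # action penalty
    return r
\end{verbatim}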
\textbf{Reward function for object lifting}: To train the lifting policy, we use the following reward function:
\begin{align}
r(s_t, a_t) &= c_{h_1}\frac{1}{|\Delta h_t|+\epsilon_h}+c_{h_2}\mathds{1}(|\Delta h_t| < \Bar{h}) + c_3\left\Vert a_t\right\Vert_2^2
\end{align}
where $\Delta h_t=\max(p_t^{b,z} - p_t^{o,z}, 0)$, $p_t^{b,z}$ is the height ($z$ coordinate) of the Shadow Hand base frame, $p_t^{o,z}$ is the height of the object, $\Bar{h}$ is the threshold on the height difference, and $\epsilon_h$ is a small constant. The objects are initialized with random poses and dropped onto the table.
\textbf{Goal specification for vision policies}: We obtain $W^g$ by sampling $5000$ points from the object's CAD mesh using barycentric coordinates, rotating the points by the desired orientation, and translating them so that they lie next to the hand. Note that one can also place the object in the desired orientation right next to the hand in the simulator and render the entire scene to remove the need for CAD models. We use CAD models for $W^g$ only to save the computational cost of rendering another object, while we still use RGBD cameras to get $W_t^s$.
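Sampling $W^g$ can be sketched as follows (a minimal NumPy sketch; area-weighted face sampling with uniform barycentric coordinates is our assumption about the standard approach, and the final translation next to the hand is omitted):
\begin{verbatim}
import numpy as np

def sample_goal_cloud(vertices, faces, goal_rot, n_points=5000):
    # Area-weighted choice of triangles, then uniform barycentric
    # sampling inside each chosen triangle.
    tri = vertices[faces]                      # (F, 3, 3)
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    idx = np.random.choice(len(faces), n_points, p=areas / areas.sum())
    r1 = np.sqrt(np.random.rand(n_points, 1))
    r2 = np.random.rand(n_points, 1)
    a, b, c = tri[idx, 0], tri[idx, 1], tri[idx, 2]
    pts = (1 - r1) * a + r1 * (1 - r2) * b + r1 * r2 * c
    return pts @ goal_rot.T   # goal_rot: 3x3 rotation matrix
\end{verbatim}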
\renewcommand{\arraystretch}{1.2}
\begin{table}[!htb]
\centering
\caption{Hyperparameter Setup}
\label{tbl:hyper_params}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccc}
\hline
Hyperparameter & Value & Hyperparameter & Value & Hyperparameter & Value \\
\hline
Num. batches & 5 & Entropy coeff. & 0. & Num. pts sampled from $W_t^s$ & 15000 \\
Actor learning rate & 0.0003 & GAE parameter & 0.95 & Num. pts sampled from $W^g$ & 5000 \\
Critic learning rate & 0.001 & Discount factor & 0.99 & Num. envs & 40000 \\
Num. epochs & 8 & Episode length & 300 & \begin{tabular}[c]{@{}c@{}}Num. rollout steps per \\policy update (MLP/RNN)\end{tabular} & 8/50 \\
Value loss coeff. & 0.0005 & PPO clip range & 0.1 & $c_{\theta_1} $ & 1 \\
$c_{\theta_2}$ & 800 & $c_3$ & $-0.1$ & $\epsilon_\theta$ & 0.1\\
$\Bar{\theta}$ & 0.1\si{\radian} & $c_{h_1}$ & 0.05 & $\epsilon_h$ & 0.02 \\
$\Bar{h}$ & 0.04 & $c_{h_2}$ & 800 & &\\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1.}
\begin{table}
\centering
\caption{Mesh Parameters}
\label{tbl:mesh_params}
\begin{tabular}{ll}
\hline
Parameter & Range \\
\hline
longest side of the bounding box of EGAD objects & {[}0.05, 0.08]m \\
longest side of the bounding box of YCB objects & {[}0.05, 0.12]m \\
mass of each object & {[}0.05, 0.15]kg \\
No. of EGAD meshes & 2282 \\
No. of YCB meshes & 78 \\
No. of variants per mesh & 5 \\
Voxelization resolution & 0.003 m \\
\hline
\end{tabular}
\end{table}
\subsection{Dynamics randomization}
\tblref{tbl:dyn_ran_params} lists all the randomized parameters as well as the state observation noise and the action command noise.
\renewcommand{\arraystretch}{1.2}
\begin{table}[!htb]
\centering
\caption{Dynamics Randomization and Noise}
\label{tbl:dyn_ran_params}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccccc}
\hline
Parameter & Range & Parameter & Range & Parameter & Range \\
\hline
state observation & $+\mathcal{U}(-0.001, 0.001)$ & action & $+\mathcal{N}(0, 0.01)$ & joint stiffness & $\times\mathcal{E}(0.75, 1.5)$ \\
object mass & $\times\mathcal{U}(0.5, 1.5)$ & joint lower range & $+\mathcal{N}(0, 0.01)$ & tendon damping & $\times\mathcal{E}(0.3, 3.0)$ \\
object static friction & $\times\mathcal{U}(0.7, 1.3)$ & joint upper range & $+\mathcal{N}(0, 0.01)$ & tendon stiffness & $\times\mathcal{E}(0.75, 1.5)$ \\
finger static friction & $\times\mathcal{U}(0.7, 1.3)$ & joint damping & $\times\mathcal{E}(0.3, 3.0)$ & & \\
\hline
\multicolumn{6}{l}{\begin{tabular}[c]{@{}l@{}}$\mathcal{N}(\mu, \sigma)$: Gaussian distribution with mean $\mu$ and standard deviation $\sigma$.\\ $\mathcal{U}(a, b)$: uniform distribution between $a$ and $b$. $\mathcal{E}(a, b)=\exp\left(\mathcal{U}(\log(a), \log(b))\right)$.\\ $+$: the sampled value is added to the original value of the variable. $\times$: the original value is scaled by the sampled value.\end{tabular}} \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
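The scaling distribution $\mathcal{E}(a,b)$ is a log-uniform sampler; a minimal sketch (the nominal damping value is illustrative):
\begin{verbatim}
import numpy as np

def sample_E(a, b, size=None):
    # E(a, b) = exp(U(log a, log b)).
    return np.exp(np.random.uniform(np.log(a), np.log(b), size))

# Example: scale a nominal joint damping by a factor in [0.3, 3.0].
joint_damping = 0.5 * sample_E(0.3, 3.0, size=24)
\end{verbatim}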
Comparing Columns 1 and 2 in \tblref{tbl:up_success_rate}, we can see that directly deploying a policy trained without domain randomization in an environment with different dynamics degrades performance significantly. If we train policies with domain randomization (Column 3), the policies are more robust, and in most cases the performance only drops slightly compared to Column 1. The exceptions are C3 and H3: in these two cases, the $\pi_M^\stusymbol$ policies collapsed during policy distillation when the dynamics were randomized.
\subsection{Gravity curriculum}
\label{subsec:grav_curr}
We found that building a curriculum on gravity helps policy learning for YCB objects when the hand faces downward. \algoref{alg:grav} illustrates the process of building the gravity curriculum, where $M$ is the total number of training iterations. During training, we evaluate the policy on each training object once (one random initial and goal orientation) to get a surrogate average success rate $w$ over all the objects. We use $\Bar{w}=0.8$, $g_0=\SI{1}{\meter\per\second^2}$, $\Delta g = \SI{-0.5}{\meter\per\second^2}$, $K=3$, $L=20$, $\Delta T_{\min}=40$.
\begin{algorithm}
\caption{Gravity Curriculum}\label{alg:grav}
\begin{algorithmic}[1]
\State Initialize an empty FIFO queue $Q$ of size $K$, $\Delta T = 0, g=g_{0}$
\For{$i \gets 1$ to $M$}
\State $\tau$ = \texttt{rollout\_policy($\pi_\theta$)} \Comment{get rollout trajectory}
\State $\pi_\theta$ = \texttt{optimize\_policy($\pi_\theta$, $\tau$)} \Comment{update policy}
\State $\Delta T = \Delta T + 1$
\If{$ i \mod L = 0$}
\State $w$ = \texttt{evaluate\_policy($\pi_\theta$)} \Comment{evaluate the policy, get the success rate $w$}
\State append $w$ to the queue $Q$
\If{\texttt{avg}$(Q) > \Bar{w}$ and $\Delta T> \Delta T_{\min}$}
\State $g=\max(g+\Delta g, -9.81)$ \Comment{$\Delta g<0$, so gravity is gradually strengthened toward $\SI{-9.81}{\meter\per\second^2}$}
\State $\Delta T = 0$
\EndIf
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\section{Supplementary Results}
\subsection{Hand faces upward}
\label{app_subsec:hand_up}
\paragraph{Learning curves}
\figref{fig:egad_learn_upward} shows the learning curves of the RNN and MLP policies on the EGAD and YCB datasets. Both policies learn well on both datasets, though the YCB dataset requires many more environment interactions. We can also see that using the full-state information speeds up learning and yields higher final performance.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.32\linewidth]{figures/upward/egad.pdf}
\includegraphics[width=0.32\linewidth]{figures/upward/ycb_0_14.pdf}
\includegraphics[width=0.32\linewidth]{figures/upward/ycb_0_14_full_reduced_rnn.pdf}
\caption{Learning curves of the MLP and RNN policies on the EGAD (\textbf{Left}) and YCB datasets (\textbf{Middle}). The \textbf{Right} plot shows that using the full state information speeds up the policy learning compared to only using the reduced state information.}
\label{fig:egad_learn_upward}
\end{figure}
\paragraph{Testing performance - Teacher}
The testing results in \tblref{tbl:up_success_rate} show that both the MLP and RNN policies are able to achieve a success rate greater than 90\% on the EGAD dataset (entries A1, B1) and greater than 70\% on the YCB dataset (entries F1, G1) without any explicit knowledge of the object shape. This result is surprising because, intuitively, one would assume that information about the object shape is important for in-hand reorientation.
\paragraph{Testing performance - Student}
We experimented with the following three combinations: (1) distill $\pi_M^\expsymbol$ into $\pi_M^\stusymbol$, (2) distill $\pi_M^\expsymbol$ into $\pi_R^\stusymbol$, (3) distill $\pi_R^\expsymbol$ into $\pi_R^\stusymbol$. The student policy state is $s_t^\mathcal{S}\in\mathbb{R}^{31}$. In \tblref{tbl:up_success_rate} (entries C1-E1, H1-J1), we can see that $\pi_R^\expsymbol\rightarrow\pi_R^\stusymbol$ gives the highest success rate for $\pi^\stusymbol$, while $\pi_M^\expsymbol\rightarrow\pi_M^\stusymbol$ leads to much worse performance (a $36\%$ drop in success rate on EGAD and a $47\%$ drop on YCB). This shows that $\pi^\stusymbol$ requires temporal information because of the reduced state space. The last two columns of \tblref{tbl:up_success_rate} also show that the policy is more robust to dynamics variations and observation/action noise after being trained with domain randomization.
\renewcommand{\arraystretch}{1.1}
\begin{table}[!tb]
\vspace{-0.5cm}
\centering
\caption{Success rates (\%) of policies tested on different dynamics distribution. $\Bar{\theta}=0.1$\si{\radian}. DR stands for domain randomization and observation/action noise. X$\rightarrow$Y: distill policy X into policy Y.}
\label{tbl:up_success_rate}
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{cccc|ccc}
\hline
\multicolumn{4}{c}{} & 1 & 2 & 3 \\
\hline
\multirow{2}{*}{Exp. ID} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{State} & \multicolumn{1}{c}{\multirow{2}{*}{Policy}} & \multicolumn{2}{c}{Train without DR} & Train with DR \\
\cline{5-7}
& & & \multicolumn{1}{c}{} & \multicolumn{1}{l}{Test without DR} & Test with DR & Test with DR \\
\hline
A & \multirow{5}{*}{EGAD} & \multirow{2}{*}{Full state} & MLP & 92.55 $\pm$ 1.3 & 78.24 $\pm$ 2.4 & \textbf{91.92} $\pm$ \textbf{0.4} \\
B & & & RNN & \textbf{95.95} $\pm$ \textbf{0.8} & \textbf{84.27} $\pm$ \textbf{1.0} & 88.04 $\pm$ 0.6 \\
C & & \multirow{3}{*}{Reduced state} & MLP$\rightarrow$MLP & 55.55 $\pm$ 0.2 & 25.09 $\pm$ 3.0 & 23.77 $\pm$ 1.8 \\
D & & & MLP$\rightarrow$RNN & 85.32 $\pm$ 1.2 & 68.31 $\pm$ 2.6 & \textbf{81.05} $\pm$ \textbf{1.2} \\
E & & & RNN$\rightarrow$RNN & \textbf{91.96} $\pm$ \textbf{1.5} & \textbf{78.30} $\pm$ \textbf{1.2} & 80.29 $\pm$ 0.9 \\
\hline
F & \multirow{5}{*}{YCB} & \multirow{2}{*}{Full state} & MLP & 73.40 $\pm$ 0.2 & 54.57 $\pm$ 0.6 & 66.00 $\pm$ 2.7 \\
G & & & RNN & \textbf{80.40} $\pm$ \textbf{1.6} & \textbf{65.16} $\pm$ \textbf{1.0} & \textbf{72.34} $\pm$ \textbf{0.9} \\
H & & \multirow{3}{*}{Reduced state} & MLP$\rightarrow$MLP & 34.08 $\pm$ 0.9 & 12.08 $\pm$ 0.4 & \ \ 5.41 $\pm$ 0.3 \\
I & & & MLP$\rightarrow$RNN & 69.30 $\pm$ 0.8 & 47.38 $\pm$ 0.6 & 53.07 $\pm$ 0.9 \\
J & & & RNN$\rightarrow$RNN & \textbf{81.04} $\pm$ \textbf{0.5} & \textbf{64.93} $\pm$ \textbf{0.2} & \textbf{65.86} $\pm$ \textbf{0.7} \\
\hline
\end{tabular}
}
\vspace{-0.5cm}
\end{table}
\renewcommand{\arraystretch}{1}
\paragraph{Failure cases}
\figref{fig:failure_cases_up} shows some example failure cases. If the objects are too small, too thin, or too big, they are likely to fall. If objects are initialized close to the border of the hand, it is much harder for the hand to catch them. Another failure mode is that objects are reoriented close to the goal orientation, but not close enough to satisfy $\Delta \theta\leq \Bar{\theta}$.
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[trim=300 240 80 200, clip, width=\linewidth]{figures/failures/upv2/1.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[trim=300 240 80 200, clip,width=\linewidth]{figures/failures/upv2/2.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[trim=300 240 80 200, clip,width=\linewidth]{figures/failures/upv2/3.png}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[trim=300 240 80 200, clip,width=\linewidth]{figures/failures/upv2/4.png}
\caption{}
\end{subfigure}%
\caption{Examples of failure cases. (a) and (b): objects are too small. (c): the object is reoriented close to the target orientation, but not close enough. (d): the object is too big and initialized around the palm border.}
\label{fig:failure_cases_up}
\end{figure}
\subsection{Hand faces downward (in the air)}
\label{app_subsec:hand_down}
\paragraph{Testing performance} For the case of reorienting objects in the air with the hand facing downward, \tblref{tbl:down_air_success_rate} lists the success rates of different policies trained with/without domain randomization and tested with/without domain randomization.
\renewcommand{\arraystretch}{1.2}
\begin{table}[!htb]
\caption{Success rates (\%) of policies trained to reorient objects in the air with the hand facing downward. Due to the large number of environment steps required in this setup, we fine-tune the models trained without DR under randomized dynamics instead of training models with DR from scratch.}
\label{tbl:down_air_success_rate}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccc|ccc}
\hline
\multicolumn{4}{c}{} & 1 & 2 & 3 \\
\hline
\multirow{2}{*}{Exp. ID} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{State} & \multicolumn{1}{c}{\multirow{2}{*}{Policy}} & \multicolumn{2}{c}{Train without DR} & Finetune with DR \\
\cline{5-7}
& & & \multicolumn{1}{c}{} & Test without DR & Test with DR & Test with DR \\
\hline
K & \multirow{4}{*}{EGAD} & \multirow{2}{*}{Full state} & MLP & \textbf{84.29} $\pm$ \textbf{0.9} &\textbf{38.42} $\pm$ \textbf{1.5} & \textbf{71.44} $\pm$ \textbf{1.3} \\
L & & & RNN & 82.27 $\pm$ 3.3 & \begin{tabular}[c]{@{}c@{}}36.55 $\pm$ 1.4\\\end{tabular} & 67.44 $\pm$ 2.1 \\
M & & \multicolumn{1}{l}{\multirow{2}{*}{Reduced state}} & MLP$\rightarrow$RNN & \textbf{77.05} $\pm$ \textbf{1.6} & 29.22 $\pm$ 2.6 & 59.23 $\pm$ 2.3 \\
N & & \multicolumn{1}{l}{} & RNN$\rightarrow$RNN & 74.10 $\pm$ 2.3 & \textbf{37.01} $\pm$ \textbf{1.5} & \textbf{62.64} $\pm$ \textbf{2.9} \\
\hline
O & \multirow{6}{*}{YCB} & \multirow{3}{*}{Full state} & MLP & 58.95 $\pm$ 2.0 & 26.04 $\pm$ 1.9 & 44.84 $\pm$ 1.3 \\
P & & & RNN & 52.81 $\pm$ 1.7 &\textbf{26.22} $\pm$ \textbf{1.0} & 40.44 $\pm$ 1.5 \\
Q & & & RNN + $g$-curr & \textbf{74.74} $\pm$ \textbf{1.2} & 25.56 $\pm$ 2.9 & \textbf{54.24} $\pm$ \textbf{1.4} \\
R & & \multirow{3}{*}{Reduced state} & MLP$\rightarrow$RNN & 46.76 $\pm$ 2.5 & \textbf{25.49} $\pm$ \textbf{1.4} & 34.14 $\pm$ 1.3 \\
S & & & RNN$\rightarrow$RNN & 45.22 $\pm$ 2.1 & 24.45 $\pm$ 1.2 & 31.63 $\pm$ 1.6 \\
T & & & RNN + $g$-curr$\rightarrow$ RNN & \textbf{67.33} $\pm$ \textbf{1.9} & 19.77 $\pm$ 2.8 & \textbf{48.58} $\pm$ \textbf{2.3} \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
\paragraph{Example visualization}
We show an example of reorienting a cup in \figref{fig:down_reorient_cup} and an example of reorienting a sponge in \figref{fig:down_reorient_sponge}. More examples are available at \url{https://taochenshh.github.io/projects/in-hand-reorientation}.
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000000.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000004.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000008.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000012.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000016.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000020.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000024.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000028.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000032.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000036.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000040.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb1/000045.png}
\end{subfigure}%
\caption{An example of reorienting a cup with the hand facing downward. From left to right and top to bottom, we show key moments from the episode.}
\label{fig:down_reorient_cup}
\end{figure}
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000001.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000009.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000016.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000024.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000032.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000040.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000048.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000056.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000064.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000072.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000080.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000088.png}
\end{subfigure}%
\\
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000092.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000096.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000104.png}
\end{subfigure}
\begin{subfigure}[t]{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/episode/ycb2/000113.png}
\end{subfigure}%
\caption{An example of reorienting a sponge with the hand facing downward. From left to right and top to bottom, we show key moments from the episode.}
\label{fig:down_reorient_sponge}
\end{figure}
\subsection{Success rate on each type of YCB objects}
We also analyzed the success rates for each object type in the YCB dataset. Using the same evaluation procedure described in \secref{sec:env_setup}, we obtain the success rate of each object using $\pi_R^\expsymbol$. \figref{fig:success_rate_ycb_up} shows the distribution of success rates on YCB objects with the hand facing upward, while \figref{fig:success_rate_ycb_down} corresponds to reorienting objects in the air with the hand facing downward. We can see that sphere-like objects such as tennis balls and oranges are the easiest to reorient. Long or thin objects such as knives and forks are the hardest to manipulate.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\linewidth]{figures/analysis/ycb_up.pdf}
\caption{Reorientation success rates for each object in the YCB dataset when the hand faces upward.}
\label{fig:success_rate_ycb_up}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.95\linewidth]{figures/analysis/ycb_down.pdf}
\caption{Reorientation success rates for each object in the YCB dataset when the hand faces downward without a table.}
\label{fig:success_rate_ycb_down}
\end{figure}
\subsection{Torque analysis}
\label{app_subsec:torque_analysis}
We randomly sampled $100$ objects from the YCB dataset and used our RNN policy trained without domain randomization with the hand facing downward to reorient each of these objects $200$ times. We record the torque value of each finger joint at each time step. Let $J_{ij}^k$ be the torque of the $i^{\text{th}}$ joint at time step $j$ in the $k^{\text{th}}$ episode. We plot the distribution of $\{\max_{i\in[\![1,24]\!]}|J_{ij}^k| \mid j\in[\![ 1, T]\!], k\in[\![ 1,20000]\!]\}$, where $[\![a, b]\!]$ denotes $\{x\mid x\in[a,b], x\in\mathbb{Z}\}$. \figref{fig:torque} shows that the majority of the maximum torque magnitudes are around \SI{0.2}{\newton\meter}.
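This statistic can be computed as follows (a minimal NumPy sketch; the data layout is illustrative):
\begin{verbatim}
import numpy as np

def max_torque_samples(episodes):
    # episodes: list of (T_k, 24) arrays of joint torques J^k.
    # For each episode k and time step j, take the maximum
    # absolute torque over the 24 joints.
    return np.concatenate([np.abs(J).max(axis=1) for J in episodes])

# A histogram of these samples, clipped at the 99% quantile,
# gives the distribution plotted in the figure.
\end{verbatim}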
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{figures/analysis/max_torque.pdf}
\caption{Distribution of the \textit{maximum} absolute joint torque values on all joints for all the time steps. We exclude a few outliers in the plot, i.e., we only plot the data distributions up to $99\%$ quantile.}
\label{fig:torque}
\end{figure}
\subsection{Vision experiments with noise}
\label{app_subsec:vision_noise}
We also trained our vision policies with noise added to the point cloud input, applying the following four types of transformations:
\begin{itemize}[leftmargin=*]
\item \textit{RandomTrans}: Translate the point cloud by $[\Delta x, \Delta y, \Delta z]$ where $\Delta x, \Delta y, \Delta z$ are all uniformly sampled from $[-0.04, 0.04]$.
\item \textit{JitterPoints}: Randomly sample $10\%$ of the points. For each sampled point $i$, jitter its coordinate by $[\Delta x_i, \Delta y_i, \Delta z_i]$ where $\Delta x_i, \Delta y_i, \Delta z_i$ are all sampled from a Normal distribution $\mathcal{N}(0, 0.01)$ (truncated at $-0.015$m and $0.015$m).
\item \textit{RandomDropout}: Randomly dropout points with a dropout ratio uniformly sampled from $[0, 0.4]$.
\item \textit{JitterColor}: Jitter the color of the points with the following $3$ transformations: (1) jitter the brightness and RGB values, (2) convert the color of $30\%$ of the points to grayscale, (3) jitter the color contrast. When \textit{JitterColor} is applied, each of these transformations is applied independently with a probability of $30\%$.
\end{itemize}
Each of these four transformations is applied independently with a probability of $40\%$ to each point cloud at every time step; a minimal sketch of the geometric part of this pipeline is given below. \tblref{tbl:vision_pol} shows the success rates of the vision policies trained with the aforementioned data augmentations until policy convergence and tested with the same augmentations. We found that adding data augmentation during training improves the data efficiency of vision policy learning, even though the final performance may be slightly lower. As a reference, we also show the policy performance trained and tested without data augmentation in \tblref{tbl:vision_pol}. For the mug, adding data augmentation in training improves the final testing performance significantly. Without data augmentation, the learned policy reorients the mug to a pose where the body of the mug matches the goal orientation but the handle does not. Adding data augmentation helps the policy escape this local optimum.
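The geometric transformations can be sketched as follows (a minimal NumPy sketch; clipping approximates the truncated normal, and the color jitter is omitted):
\begin{verbatim}
import numpy as np

def augment_cloud(xyz, p_apply=0.4):
    # xyz: (N, 3) point coordinates.
    xyz = xyz.copy()
    if np.random.rand() < p_apply:                    # RandomTrans
        xyz += np.random.uniform(-0.04, 0.04, size=3)
    if np.random.rand() < p_apply:                    # JitterPoints
        sel = np.random.rand(len(xyz)) < 0.1
        xyz[sel] += np.clip(
            np.random.normal(0, 0.01, (sel.sum(), 3)), -0.015, 0.015)
    if np.random.rand() < p_apply:                    # RandomDropout
        keep = np.random.rand(len(xyz)) > np.random.uniform(0, 0.4)
        xyz = xyz[keep]
    return xyz
\end{verbatim}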
\renewcommand{\arraystretch}{1.2}
\begin{table}[!htb]
\centering
\caption{Vision policy success rates when the policy is trained and tested with/without data augmentation ($\Bar{\theta}=\SI{0.2}{\radian}, \Bar{d}_C=0.01$)}
\label{tbl:vision_pol}
\begin{tabular}{clcc}
\hline
\multicolumn{2}{c}{Object} & Without data augmentation (noise) & With data augmentation (noise) \\
\hline
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/025_mug.jpeg}
\end{minipage} & 025\_mug & $36.51 \pm 2.8$ & $89.67 \pm 1.2$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/065-d_cups.jpeg}
\end{minipage}& 065-d\_cups & $79.17\pm 2.3$ & $68.32\pm 1.9$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/072-b_toy_airplane.jpeg}
\end{minipage}&072-b\_toy\_airplane & $90.25 \pm 3.7$ & $84.52 \pm 1.4$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-a_lego_duplo.jpeg}
\end{minipage}& 073-a\_lego\_duplo & $62.16 \pm 3.7$ & $58.16 \pm 3.1$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-c_lego_duplo.jpeg}
\end{minipage} &073-c\_lego\_duplo & $58.21 \pm 3.9$ & $50.21 \pm 3.7$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-e_lego_duplo.jpeg}
\end{minipage}&073-e\_lego\_duplo & $76.57\pm 3.6$ & $66.57\pm 3.1$ \\
\hline
\end{tabular}
\end{table}
\renewcommand{\arraystretch}{1}
\end{appendices}
\section{Introduction}
\vspace{-0.15cm}
A common maneuver in many tasks of daily living is to pick an object, reorient it in hand and then either place it or use it as a tool. Consider three simplified variants of this maneuver shown in Figure~\ref{fig:teaser}.
The task in the top row requires an upward-facing multi-finger hand to manipulate an \textit{arbitrary} object in a random orientation to a goal configuration shown in the rightmost column. The next two rows show tasks where the hand is facing downward and is required to reorient the object either using the table as a support or without the aid of any support surface respectively. The last task is the hardest because the object is in an intrinsically unstable configuration owing to the downward gravitational force and lack of support from the palm. Additional challenges in performing such manipulation with a multi-finger robotic hand stem from the control space being high-dimensional and reasoning about the multiple transitions in the contact state between the finger and the object. Due to its practical utility and several unsolved issues, in-hand object reorientation remains an active area of research.
Past work has tackled the in-hand reorientation problem via several approaches: (i) the use of analytical models with powerful trajectory optimization methods~\cite{mordatch2012contact,bai2014dexterous,kumar2014real}. While these methods demonstrated remarkable performance, the results were largely in simulation with simple object geometries and required detailed knowledge of the object model and physical parameters. As such, it remains unclear how to scale these methods to the real world and generalize them to new objects. Another line of work has employed (ii) model-based reinforcement learning~\cite{kumar2016optimal,nagabandi2020deep}; or (iii) model-free reinforcement learning with~\cite{gupta2016learning,kumar2016learning,rajeswaran2017learning,radosavovic2020state} and without expert demonstrations~\cite{mudigonda2018investigating,andrychowicz2020learning,akkaya2019solving,zhu2019dexterous}. While some of these works demonstrated learned skills on real robots, they required additional sensory apparatus not readily available in the real world (e.g., a motion capture system) to infer the object state, and the learned policies did not generalize to diverse objects. Furthermore, most prior methods operate in the simplified setting of the hand facing upwards. The only exception is pick-and-place, which does not involve any in-hand reorientation. A detailed discussion of prior research is provided in Section~\ref{sec:related}.
In this paper, our goal is to study the object reorientation problem with a multi-finger hand in its general form. We desire (a) manipulation with the hand facing upward or downward; (b) the ability to use external surfaces to aid manipulation; (c) the ability to reorient objects of novel shapes to arbitrary orientations; (d) operation from sensory data that can be easily obtained in the real world, such as RGBD images and joint positions of the hand. While some of these aspects have been individually demonstrated in prior work, we are unaware of any published method that realizes all four. Our main contribution is building a system that achieves these desiderata. The core of our framework is model-free reinforcement learning with three key components: teacher-student learning, gravity curriculum, and stable initialization of objects. Our system requires no knowledge of object or manipulator models, contact dynamics, or any special pre-processing of sensory observations. We experimentally test our framework using a simulated Shadow hand. Due to the scope of the problem and the ongoing pandemic, we limit our experiments to simulation. However, we provide evidence indicating that the learned policies can be transferred to the real world in the future.
\noindent{\textbf{A Surprising Finding}:} While seemingly counterintuitive, we found that policies that have no access to shape information can manipulate a large number of previously unseen objects in all the three settings mentioned above. At the start of the project, we hypothesized that developing visual processing architecture for inferring shape while the robot manipulates the object would be the primary research challenge. On the contrary, our results show that it is possible to learn control strategies for general in-hand object re-orientation that are shape-agnostic. Our results, therefore, suggest that visual perception may be less important for in-hand manipulation than previously thought. Of course, we still believe that the performance of our system can be improved by incorporating shape information. However, our findings suggest a different framework of thinking: a lot can be achieved without vision, and that vision might be the icing on the cake instead of the cake itself.
\section{Method}
\label{sec:method}
We found that \textit{simple} extensions to existing techniques in robot learning can be used to construct a system for general object reorientation. First, to avoid explicit modeling of non-linear and frequent changes in the contact state between the object and the hand, we use model-free reinforcement learning (RL). An added advantage is that model-free RL is amenable to direct operation from raw point cloud observations, which is preferred for real-world deployment. We found that better policies can be trained faster using \textit{privileged} state information such as the velocities of the object/fingertips that is easily available in the simulator but not in the real world. To demonstrate the possibility of transferring learned policies to the real world in the future, we overcome the need for privileged information using the idea of \textit{teacher-student training}~\citep{lee2020learning, chen2020learning}. In this framework, first, an expert or \textit{teacher} policy ($\pi^\expsymbol$) is trained using privileged information. Next, the \textit{teacher} policy guides the learning of a \textit{student} policy ($\pi^\stusymbol$) that only uses sensory inputs available in the real world. Let the state space corresponding to $\pi^\expsymbol$ and $\pi^\stusymbol$ be $\mathbb{S}^\expsymbol$ and $\mathbb{S}^\stusymbol$ respectively. In general, $\mathbb{S}^\expsymbol\neq \mathbb{S}^\stusymbol$.
We first trained the teacher policy to reorient more than two thousand objects of diverse shapes (see \secref{sec:learn_teacher}). Next, we detail the method for distilling $\pi^\expsymbol$ to a student policy using a reduced state space consisting of only the joint positions of the hand, the object position, and the difference in orientation from the goal configuration (see \secref{subsec:student_nonvision}). However, in the real world, even the object position and relative orientation must be inferred from sensory observation. Not only does this process require substantial engineering effort (e.g., a motion capture or a pose estimation system), but also inferring the pose of a symmetric object is prone to errors. This is because a symmetric object at multiple orientations looks exactly the same in sensory space such as RGBD images.
To mitigate these issues, we further distill $\pi^\expsymbol$ to operate directly from the point cloud and position of all the hand joints (see \secref{subsec:student_vision_policy}). We propose a simple modification that generalizes an existing 2D CNN architecture~\citep{espeholt2018impala} to make this possible.
The procedure described above works well for manipulation with the hand facing upwards and downwards when a table is available as support. However, when the hand faces downward without an underlying support surface, we found it important to initialize the object in a stable configuration. Finally, because gravity presents the primary challenge in learning policies with a downward-facing hand, we found that training in a curriculum where gravity is slowly introduced (i.e., \textit{gravity curriculum}) substantially improves performance. These are discussed in Section~\ref{subsec: hand_down}.
\vspace{-0.2cm}
\subsection{Learning the teacher policy}
\label{sec:learn_teacher}
\vspace{-0.1cm}
We use model-free RL to learn the teacher policy ($\pi^\expsymbol$) for reorienting an object from a set $\{O_i \mid i=1, \ldots, N\}$ from an initial orientation $\alpha^o_0$ to a target orientation $\alpha^g$ ($\alpha^o_0\neq\alpha^g$). At every time step $t$, the agent observes the state $s_t$, executes the action $a_t$ sampled from the policy $\pi^\expsymbol$, and receives a reward $r_t$. $\pi^\expsymbol$ is optimized to maximize the expected discounted return: $\pi^\expsymbol=\arg\max_{\pi^\expsymbol}\mathbb{E} \big[ \sum^{T-1}_{t=0} \gamma^{t} r_t\big]$, where $\gamma$ is the discount factor. The task is successful if the angle difference $\Delta\theta$ between the object's current ($\alpha^o_t$) and the goal orientation ($\alpha^g$) is smaller than the threshold value $\Bar{\theta}$, \textit{i.e.}\xspace, $\Delta\theta\leq \Bar{\theta}$.
To encourage the policy to be smooth, the previous action is appended to the inputs to the policy (\textit{i.e.}\xspace, $a_t=\pi^\expsymbol(s_t, a_{t-1})$) and large actions are penalized in the reward function. We experiment with two architectures for $\pi^\expsymbol$: (1) an MLP policy $\pi_M$ and (2) an RNN policy $\pi_R$. We use PPO~\citep{schulman2017proximal} to optimize $\pi^\expsymbol$.
More details about the training are in \secref{app_subsec:network_arch} and \secref{app_sec:training_details} in the appendix.
\textbf{Observation and action space}: We define $\mathbb{S}^\expsymbol$ to include joint, fingertip, object, and goal information as detailed in \tblref{tbl:state_def} in the appendix. Note that $\mathbb{S}^\expsymbol$ does not include object shape or information about friction, damping, contact states between the fingers and the object, etc.
We control the joint movements by commanding the relative change of the target joint angle ($q_{t}^{target}$) for each actuated joint (action $a_t\in\mathbb{R}^{20}$): $q_{t+1}^{target} = q_{t}^{target} + a_t\times \Delta t$, where $\Delta t$ is the control time step. We clamp the action command when necessary to ensure $|\Delta q_t^{target}|\leq\SI{0.33}{\radian}$. The control frequency is \SI{60}{\hertz}.
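A minimal sketch of this target-update rule (the constants follow the text):
\begin{verbatim}
import numpy as np

DT = 1.0 / 60.0      # control time step (60 Hz)
MAX_DELTA = 0.33     # bound on |delta q_target| in radians

def update_joint_targets(q_target, action):
    # action: commanded relative change for the 20 actuated joints.
    delta = np.clip(action * DT, -MAX_DELTA, MAX_DELTA)
    return q_target + delta
\end{verbatim}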
\textbf{Dynamics randomization}: Even though we do not test our policies on a real robot, we train and evaluate policies with domain randomization~\citep{tobin2017domain} to provide evidence that our work has the potential to be transferred to a real robotic system in the future. We randomize the object mass, friction coefficient, joint damping and add noise to the state observation $s_t$ and the action $a_t$. More details about domain randomization are provided in \tblref{tbl:dyn_ran_params} in the appendix.
\subsection{Learning the student policy}
\label{subsec:student_pol}
We distill the teacher $\pi^\expsymbol$ into the student policy $\pi^\stusymbol$ using Dagger~\citep{ross2011reduction}, a learning-from-demonstration method that overcomes the covariate shift problem. We optimize $\pi^\stusymbol$ by minimizing the KL-divergence between the teacher's and the student's action distributions: $\pi^\stusymbol=\arg\min_{\pi^\stusymbol}D_{KL}\left(\pi^\expsymbol(s^\mathcal{E}_t, a_{t-1}) \,\Vert\, \pi^\stusymbol(s^\mathcal{S}_t, a_{t-1})\right)$. Based on observation data available in real-world settings, we investigate two different choices of $\mathbb{S}^\stusymbol$.
\subsubsection{Training student policy from low-dimensional state}
\label{subsec:student_nonvision}
In the first case, we consider a non-vision student policy $\pi^\stusymbol(s_t^\mathcal{S}, a_{t-1})$. $s_t^\mathcal{S}\in\mathbb{R}^{31}$ includes the joint positions $q_t\in\mathbb{R}^{24}$, object position $p^o_t\in\mathbb{R}^3$, quaternion difference between the object's current and target orientation $\beta_t\in\mathbb{R}^4$. In this case, $\mathbb{S}^\stusymbol\subset\mathbb{S}^\expsymbol$, and we assume the availability of object pose information, but do not require velocity information. We use the same MLP and RNN network architectures used for $\pi^\expsymbol$ on $\pi^\stusymbol$ except the input dimension changes as the state dimension is different.
\subsubsection{Training student policy from vision}
\label{subsec:student_vision_policy}
In the second case, $\mathbb{S}^\stusymbol$ only uses direct observations from RGBD cameras and the joint positions ($q_t$) of the robotic hand. We convert the RGB and depth data into a colored point cloud using a pinhole camera model~\cite{andrew2001multiple}. Our vision policy takes as input the voxelized point cloud of the scene $W_t$, $q_t$, and the previous action command $a_{t-1}$, and outputs the action $a_t$, \textit{i.e.}\xspace, $a_t=\pi^\stusymbol(W_t, q_t, a_{t-1})$.
\textbf{Goal specification}: To avoid manually defining a per-object coordinate frame for specifying the goal quaternion, we provide the goal to the policy as an object point cloud rotated to the desired orientation, $W^g$, \textit{i.e.}\xspace, we only show the policy how the object should look in the end (see the top left of \figref{fig:vision_policy_arch}). The input to $\pi^\stusymbol$ is the point cloud $W_t = W^s_t\cup W^g$, where $W_t^s$ is the actual point cloud of the current scene obtained from the cameras. Details of obtaining $W^g$ are in \secref{app_sec:training_details}.
\begin{figure}[!tb]
\centering
\includegraphics[width=\linewidth]{figures/architecture/architecture_v5.pdf}
\caption{Visual policy architecture. MK stands for Minkowski Engine. $q_t$ is the joint positions and $a_t$ is the action at time step $t$.}
\label{fig:vision_policy_arch}
\end{figure}
\textbf{Sparse3D-IMPALA-Net}: To convert a voxelized point cloud into a lower-dimensional feature representation, we use a sparse convolutional neural network. We extend the IMPALA policy architecture~\citep{espeholt2018impala} for processing RGB images to process colored point cloud data using 3D convolution. Since many voxels are unoccupied, the use of regular 3D convolution substantially increases computation time. Hence, we use Minkowski Engine~\citep{choy20194d}, a PyTorch library for sparse tensors, to design a 3D version of IMPALA-Net with sparse convolutions (\textit{Sparse3D-IMPALA-Net})\footnote{We also experimented with a 3D sparse convolutional network based on ResNet18, and found that 3D IMPALA-Net works better.}.
The Sparse3D-IMPALA network takes as input the point cloud $W_t$, and outputs an embedding vector which is concatenated with the embedding vector of $(q_t, a_{t-1})$. Afterward, a recurrent network is used and outputs the action $a_t$.
The detailed architecture is illustrated in \figref{fig:vision_policy_arch}.
\textbf{Mitigating the object symmetry issue}: $\pi^\expsymbol$ is trained with the ground-truth state information $s_t^\mathcal{E}$, including the object orientation $\alpha_t^o$ and the goal orientation $\alpha^g$.
The vision policy does not take any orientation information as input. If an object is symmetric, two different orientations of the object may correspond to the same point cloud observation. This makes it problematic to use the difference in orientation angles ($\Delta\theta\leq \Bar{\theta}$) as the stopping and success criterion.
To mitigate this issue, we use the Chamfer distance~\citep{fan2017point} between the object point cloud in orientation $\alpha^o_t$ and the goal point cloud (i.e., the object point cloud rotated by $\alpha^g$) as the evaluation criterion. The Chamfer distance is computed as $d_{C}=\sum_{a\in W_t^o}\min_{b\in W^g} \left\Vert a-b\right\Vert_2^2 + \sum_{b\in W^g}\min_{a\in W_t^o} \left\Vert a-b\right\Vert_2^2$, where $W_t^o$ is the object point cloud in its current orientation. Both $W_t^o$ and $W^g$ are scaled to fit in a unit sphere before computing $d_C$. We check the Chamfer distance at each rollout step. If $d_C\leq \Bar{d}_C$ ($\Bar{d}_C$ is a threshold value), we consider the episode successful. Hence, the success criterion is $(\Delta\theta\leq\Bar{\theta}) \lor (d_C\leq \Bar{d}_C)$. In training, if the success criterion is satisfied, the episode is terminated and used for updating $\pi^\stusymbol$.
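A minimal NumPy sketch of this evaluation criterion (the naive pairwise computation shown here is memory-hungry for large clouds; a chunked or KD-tree version would be used in practice):
\begin{verbatim}
import numpy as np

def unit_sphere(pts):
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()

def chamfer(W_o, W_g):
    # Symmetric Chamfer distance between the current object cloud
    # W_o and the goal cloud W_g (both (N, 3) arrays), each scaled
    # to fit in a unit sphere.
    a, b = unit_sphere(W_o), unit_sphere(W_g)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()
\end{verbatim}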
\section{Experimental Setup}
\label{sec:env_setup}
We use the simulated Shadow Hand\xspace~\citep{shadowhand} in NVIDIA Isaac Gym\xspace~\citep{makoviychuk2021isaac}. Shadow Hand\xspace is an anthropomorphic robotic hand with 24 degrees of freedom (DoF). We assume the base of the hand to be fixed. Twenty joints are actuated by agonist–antagonist tendons and the remaining four are under-actuated.
\textbf{Object datasets}:
We use the EGAD dataset~\citep{morrison2020egad} and YCB dataset~\citep{calli2015ycb} that contain objects with diverse shapes (see \figref{fig:egad_ycb_dataset}) for in-hand manipulation experiments. EGAD contains $2282$ geometrically diverse textureless object meshes, while the YCB dataset includes textured meshes for objects of daily life with different shapes and textures. We use the $78$ YCB object models collected with the Google scanner. Since most YCB objects are too big for in-hand manipulation, we proportionally scale down the YCB meshes. To further increase the diversity of the datasets, we create $5$ variants for each object mesh by randomly scaling the mesh. More details of the object datasets are in \secref{app_subsec:dataset}.
\begin{figure}[!tb]
\vspace{-0.5cm}
\centering
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/exp_scene/up_crop.png}
\caption{}
\label{fig:up}
\end{subfigure}%
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/exp_scene/down_random_crop.png}
\caption{}
\label{fig:down_random}
\end{subfigure}%
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/exp_scene/down_table_crop.png}
\caption{}
\label{fig:down_table}
\end{subfigure}%
\begin{subfigure}{0.24\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/exp_scene/down_pregrasp_crop.png}
\caption{}
\label{fig:down_pregrasp}
\end{subfigure}\\
\caption{Examples of initial poses of the hand and object. \textbf{(a)}: hand faces upward. \textbf{(b)}, \textbf{(c)}, \textbf{(d)}: hand faces downward. \textbf{(b)}: both the hand and the object are initialized with random poses. \textbf{(c)}: there is a table below the hand. \textbf{(d)}: the hand and the object are initialized from lifted poses.}
\label{fig:exp_setup}
\end{figure}
\textbf{Setup for visual observations}:
For the vision experiments, we trained policies for the scenario of hand facing upwards. We place two RGBD cameras above the hand (\figref{fig:camera_pos}). The data from these cameras is combined to create a point cloud observation of the scene~\citep{andrew2001multiple}. For downstream computation, the point cloud is voxelized with a resolution of \SI{0.003}{\meter}.
\textbf{Setup with the upward facing hand}: We first consider the case where the Shadow Hand\xspace faces upward and is required to reorient objects placed in the hand (see \figref{fig:up}). We use the coordinate system where the $z$-axis points vertically upward and the $xy$-plane denotes the horizontal plane. The object pose is initialized with the following procedure: $xy$ position of the object's center of mass (COM) $p_{0,xy}^o$ is randomly sampled from a square region of size $\SI{0.09}{\meter}\times\SI{0.09}{\meter}$. The center of this square is approximately located on the intersection of the middle finger and the palm so that the sampling region covers both the fingers and the palm. The $z$ position of the object is fixed to \SI{0.13}{\meter} above the base of the hand to ensure that the object does not collide with the hand at initialization. The initial and goal orientations are randomly sampled from the full $SO(3)$ space.
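Uniform sampling over $SO(3)$ can be done with Shoemake's method (a minimal sketch; quaternion components in $(x, y, z, w)$ order):
\begin{verbatim}
import numpy as np

def random_quat():
    # Shoemake's method: uniformly distributed random rotation.
    u1, u2, u3 = np.random.rand(3)
    return np.array([
        np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
        np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
        np.sqrt(u1) * np.sin(2 * np.pi * u3),
        np.sqrt(u1) * np.cos(2 * np.pi * u3),
    ])
\end{verbatim}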
\textbf{Setup with the downward facing hand}: Next, we consider the cases where the hand faces downward. We experiment with two scenarios: with and without a table below the hand. In the first case, we place a table with the tabletop being \SI{0.12}{\meter} below the hand base. We place objects in a random pose between the hand and the table so that the objects will fall onto the table. We will describe the setup for the second case (without a table) in \secref{subsec:reorient_air}.
\textbf{Evaluation criterion}: For non-vision experiments, a policy rollout is considered successful if $\Delta\theta\leq\Bar{\theta}$, with $\Bar{\theta}=\SI{0.1}{\radian}$. For vision experiments, we also check $d_C\leq \Bar{d}_C$ as an additional criterion, with $\Bar{\theta}=\SI{0.2}{\radian}$ and $\Bar{d}_C=0.01$.
The initial and goal orientation are randomly sampled from $SO(3)$ space in all the experiments. We report performance as the percentage of successful episodes when the agent is tasked to reorient each training object $100$ times from arbitrary start to goal orientation. We report the mean and standard deviation of success rate from $3$ seeds.
\section{Results}
\label{sec:results}
We evaluate the performance of reorientation policies with the hand facing upward and downward. Further, we analyze the generalization of the learned policies to unseen object shapes.
\subsection{Reorient objects with the hand facing upward}
\label{subsec: result_hand_up}
\paragraph{Train a teacher policy with full-state information}
We train our teacher MLP and RNN policies with full-state information on all objects in the EGAD and YCB datasets separately. The progression of the success rate during training is shown in \figref{fig:egad_learn_upward} in \appref{app_subsec:hand_up}. \figref{fig:egad_learn_upward} also shows that using privileged information substantially speeds up policy learning.
Results reported in \tblref{tbl:up_success_rate_short} indicate that the RNN policies achieve a success rate greater than 90\% on the EGAD dataset (entry B1) and greater than 80\% on the YCB dataset (entry G1) without any explicit knowledge of the object shape\footnote{More quantitative results on the MLP policies are available in \tblref{tbl:up_success_rate} in the appendix.}. This result is surprising because, a priori, one might believe that shape information is important for in-hand reorientation of diverse objects.
The visualization of policy rollouts reveals that the agent employs a clever strategy that is invariant to object geometry: it throws the object in the air with a spin and catches it at precisely the moment when the object's orientation matches the goal orientation. Throwing the object with a spin is a dexterous skill that emerges automatically! One might suspect that this skill emerges only because the objects are very light. This is not the case: we trained with objects in the range of 50--150\,g, which spans many hand-held objects used by humans (e.g., an egg weighs about 50\,g, a glass cup around 100\,g, and an iPhone 12 162\,g). To further probe this concern, we evaluated zero-shot performance on objects weighing up to 500\,g\footnote{We change the mass of the YCB objects to be in the range of $[0.3, 0.5]$\,kg and test $\pi_R^\expsymbol$ from the YCB dataset on these new objects. The success rate is around $75\%$.} and found that the learned policy can still reorient them successfully. We provide further analysis in the appendix showing that the forces applied by the hand during such manipulation are realistic. While there remains a possibility that the learned policy exploits the simulator to reorient objects by throwing them in the air, our analysis so far indicates otherwise.
Next, to understand the failure modes, we collected one hundred unsuccessful trajectories on the YCB dataset and manually analyzed them. The primary failure mode, accounting for $60\%$ of all errors, occurs with long, small, or thin objects: either the object slips through the fingers and falls, or it is difficult to manipulate once it lands on the palm. Another cause of failure ($19\%$) is that the object is reoriented close to the goal orientation but not close enough to satisfy $\theta\leq\Bar{\theta}$.
Finally, the performance on YCB is lower than on EGAD because objects in the YCB dataset are more diverse in their aspect ratios. Scaling these objects by constraining $l_{\max} \in[0.05, 0.12]$\,m (see \secref{sec:env_setup}) makes some of them either too small, too big, or too thin, and consequently results in failure (see \figref{fig:failure_cases_up}). A detailed object-wise quantitative analysis is reported in \figref{fig:success_rate_ycb_up} in the appendix. The results confirm that sphere-like objects such as a tennis ball or an orange are the easiest to reorient, while long or thin objects such as a knife or a fork are the hardest to manipulate.
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\caption{\small{Success rates (\%) of policies tested on different dynamics distributions. $\Bar{\theta}=0.1$\si{\radian}. DR: domain randomization and observation/action noise. X$\rightarrow$Y: distill policy X into policy Y. The full table is in \tblref{tbl:up_success_rate}.}}
\label{tbl:up_success_rate_short}
\resizebox{\columnwidth}{!}{
\begin{tabular}{cccc|ccc}
\hline
\multicolumn{4}{c}{} & 1 & 2 & 3 \\
\hline
\multirow{2}{*}{Exp. ID} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{State} & \multicolumn{1}{c}{\multirow{2}{*}{Policy}} & \multicolumn{2}{c}{Train without DR} & Train with DR \\
\cline{5-7}
& & & \multicolumn{1}{c}{} & \multicolumn{1}{l}{Test without DR} & Test with DR & Test with DR \\
\hline
B & \multirow{2}{*}{EGAD} & Full state & RNN & 95.95 $\pm$ 0.8 & 84.27 $\pm$ 1.0 & 88.04 $\pm$ 0.6 \\
E & & Reduced state & RNN$\rightarrow$RNN & 91.96 $\pm$ 1.5 & 78.30 $\pm$ 1.2 & 80.29 $\pm$ 0.9 \\
\hline
G & \multirow{2}{*}{YCB} & Full state & RNN & 80.40 $\pm$ 1.6 & 65.16 $\pm$ 1.0 & 72.34 $\pm$ 0.9 \\
J & & Reduced state & RNN$\rightarrow$RNN & 81.04 $\pm$ 0.5 & 64.93 $\pm$ 0.2 & 65.86 $\pm$ 0.7 \\
\hline
\end{tabular}
}
\end{table}
\renewcommand{\arraystretch}{1}
\vspace{-0.2cm}
\paragraph{Train a student policy with a reduced state space}
The student policy state is $s_t^\mathcal{S}\in\mathbb{R}^{31}$. In \tblref{tbl:up_success_rate_short} (entries E1 and J1), we can see that $\pi_R^\stusymbol$ achieves success rates similar to those of $\pi_R^\expsymbol$. The last two columns of \tblref{tbl:up_success_rate_short} also show that the policy is more robust to dynamics variations and observation/action noise after being trained with domain randomization.
\subsection{Reorient objects with the hand facing downward}
\label{subsec: hand_down}
The results above demonstrate that when the hand faces upward, RL can be used to train policies for reorienting a diverse set of geometrically different objects. A natural question to ask is: does this still hold true when the hand is flipped upside down? Intuitively, this task is much more challenging because the object immediately falls unless specific finger movements stabilize it; with the hand facing upward, by contrast, the object primarily rests on the palm, so such movements are not required. The hand-facing-downward scenario therefore presents a much harder exploration challenge. To verify this hypothesis, we trained a policy with the downward-facing hand, objects placed underneath the hand (see \figref{fig:down_random}), and the same reward function (\equaref{eqn:rot_reward}) as before. Unsurprisingly, the success rate was $0\%$: the policy must learn to stabilize the object under gravity and simultaneously reorient it. Deploying this policy simply results in the object falling, confirming the hard-exploration nature of the problem.
\subsubsection{Reorient objects on a table}
\label{subsec:reorient_table}
To tackle the hard problem of reorienting objects with the hand facing downward, we started with a simplified task setup that includes a table under the hand (see \figref{fig:down_table}). The table eases exploration by preventing objects from falling. We train $\pi_M^\expsymbol$ with the same reward function (\equaref{eqn:rot_reward}) on objects sampled from the EGAD and YCB datasets. The success rate of an MLP policy with full-state information is $95.31\%\pm 0.9\%$ on EGAD and $81.59\%\pm 0.7\%$ on YCB. Making use of external support for in-hand manipulation has long been a challenging problem in robotics. Prior work approaches it by building analytical models and constructing motion cones~\citep{chavan2018hand}, which is difficult for objects with complex geometry. Our experiments show that model-free RL provides an effective alternative for learning manipulation strategies that exploit external support surfaces.
\subsubsection{Reorient objects in air with hand facing downward}
\label{subsec:reorient_air}
To enable the agent to operate in more general scenarios, we tackle the reorientation problem with the hand facing downward and without any external support. In this setup, one might hypothesize that object shape information (e.g., from vision) is critical, because the strategy from \secref{subsec: result_hand_up} is not easy to find when the hand must overcome gravity and stabilize the object while reorienting it. We show experimentally that even in this case the policies achieve a reasonably high success rate without any knowledge of object shape.
\textbf{A good pose initialization is what you need}:
The difficulty of directly training RL policies with the hand facing downward stems mainly from the hard-exploration problem of learning to catch a falling object. However, catching is not necessary for reorientation: even humans typically reorient an object only after grasping it. We therefore first train an object-lifting policy that lifts objects from the table, collect the ending state (joint positions $q_T$, object position $p^o_T$, and orientation $\alpha^o_T$) of each successful lifting episode, and reset the hand and object to these states (with all velocities set to $0$) to initialize the poses when training the reorientation policy. For lifting, objects are initialized with random poses and dropped onto the table; we train a separate RNN lifting policy for each dataset using the reward function in \secref{app_sec:training_details}. The lifting success rate is $97.80\%$ on the EGAD dataset and $90.11\%$ on the YCB dataset. Note that an object must be grasped before it can be lifted. Our high success rates on object lifting indicate that an anthropomorphic hand makes grasping easy, whereas many prior works~\citep{mousavian2019graspnet, mahler2017dex} require far more involved training techniques to learn grasping with parallel-jaw grippers. After training the lifting policy, we collect about $250$ ending states per object from successful lifting episodes. At every reset during reorientation training, an ending state is randomly sampled and used as the initial pose of the fingers and the object. With this pose initialization, the policies learn to reorient objects with high success rates: $\pi_R^\expsymbol$ trained on the EGAD dataset achieves a success rate greater than $80\%$, while $\pi_R^\expsymbol$ trained on the YCB dataset achieves a success rate greater than $50\%$. More results for different policies with and without domain randomization are given in \tblref{tbl:down_air_success_rate} in the appendix. This setup is challenging because the object falls if the fingers take a bad action at any time step of an episode. A schematic sketch of the two-stage initialization follows.
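In the sketch below, the environment methods \texttt{get\_state} and \texttt{set\_state} are hypothetical stand-ins for the simulator's state save/restore interface, and the rollout loop assumes a Gym-style API:
\begin{verbatim}
import random

def collect_lift_states(env, lift_policy, states_per_object=250):
    """Gather ending states of successful lifting episodes."""
    states = []
    while len(states) < states_per_object:
        obs, done, info = env.reset(), False, {}
        while not done:
            obs, _, done, info = env.step(lift_policy(obs))
        if info.get("lift_success"):
            # Stores q_T, p_T, alpha_T; velocities are zeroed on restore.
            states.append(env.get_state(zero_velocities=True))
    return states

def reset_for_reorientation(env, lift_states):
    """Initialize a reorientation episode from a sampled lifted pose."""
    env.set_state(random.choice(lift_states))
\end{verbatim}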
\textbf{Improving performance using gravity curriculum}:
Since the difficulty of training the reorientation policy with the hand facing downward stems from gravity, we propose a gravity curriculum for learning the policy $\pi^\expsymbol$. Because $\pi^\expsymbol$ already performs very well on EGAD objects, we apply the gravity curriculum only when training $\pi^\expsymbol$ on YCB objects. The curriculum is constructed as follows: we start training with $g=\SI{1}{\meter\per\second^2}$ and gradually decrease $g$ in a step-wise fashion whenever the evaluation success rate $w$ exceeds a threshold $\Bar{w}$, until $g=-\SI{9.81}{\meter\per\second^2}$. More details about the gravity curriculum are in \secref{subsec:grav_curr}. In \tblref{tbl:down_air_success_rate} (Exp.\ Q and T) in the appendix, we can see that adding the gravity curriculum ($g$-curr) significantly boosts the success rates on the YCB dataset.
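A minimal sketch of the step-wise update is given below; the threshold $\Bar{w}$ and the step size are illustrative placeholders, with the actual schedule described in \secref{subsec:grav_curr}:
\begin{verbatim}
def update_gravity(g, success_rate, w_bar=0.8, g_step=1.0, g_final=-9.81):
    """Step-wise gravity curriculum: start at g = +1 m/s^2 and move
    toward real gravity whenever evaluation success clears w_bar."""
    if success_rate >= w_bar and g > g_final:
        g = max(g - g_step, g_final)
    return g
\end{verbatim}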
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\caption{Performance of the student policy when the hand faces upward and downward}
\label{tbl:summary_up_down}
\small{
\begin{tabular}{cccc}
\hline
Dataset & Upward & Downward (air) & Downward (air, $g$-curr) \\
\hline
EGAD & 91.96 $\pm$ 1.5 & 74.10 $\pm$ 2.3 & $\diagup$ \\
YCB & 81.04 $\pm$ 0.5 & 45.22 $\pm$ 2.1 & 67.33 $\pm$ 1.9 \\
\hline
\end{tabular}}
\end{table}
\renewcommand{\arraystretch}{1}
\subsection{Zero-shot policy transfer across datasets}
\label{subsec:zero_shot}
So far, we have reported testing performance on the same dataset used for training. How well do the policies work on a different dataset? To answer this, we test our policies across datasets: policies trained on EGAD objects are tested on YCB objects and vice versa. We take the RNN policies trained with full-state and reduced-state information respectively (without domain randomization) and test them on the other dataset with the hand facing upward and downward. In the downward-facing case, we test the RNN policy trained with the gravity curriculum. \tblref{tbl:zero_shot} shows that the policies still perform well on the untrained dataset.
\begin{table}[!tb]
\begin{minipage}[t]{.47\linewidth}
\vspace{-0.5cm}
\caption{\small{Zero-shot RNN policy transfer success rates (\%) across datasets. \textbf{U.} (\textbf{D.}) means hand faces upward (downward). \textbf{FS} (\textbf{RS}) means using full-state (reduced-state) information. }}
\label{tbl:zero_shot}
\centering
\renewcommand{\arraystretch}{1.1}
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|ccc}
\cline{1-3}
\multicolumn{1}{c}{} & EGAD~$\rightarrow$~YCB & YCB~$\rightarrow$~EGAD & \\
\cline{1-3}
\textbf{U.FS} & $68.82\pm1.7$ & $96.41\pm 1.2$ & \\
\textbf{U.RS} & $59.64\pm 1.8$ & $96.38\pm 1.3$ & \\
\textbf{D.FS} & $62.73\pm 2.2$ & $85.45\pm 2.9$ & \\
\textbf{D.RS} & $55.30\pm 1.3$ & $77.91\pm 2.1$ & \\
\cline{1-3}
\multicolumn{1}{c}{} & & & \\
\multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{}
\end{tabular}
}
\renewcommand{\arraystretch}{1}
\end{minipage}%
\hfill
\begin{minipage}[t]{.48\linewidth}
\vspace{-0.5cm}
\centering
\caption{\small{Vision policy success rate ($\Bar{\theta}=\SI{0.2}{\radian}, \Bar{d}_C=0.01$)}}
\label{tbl:vision_pol_noise}
\resizebox{0.95\columnwidth}{!}{
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{clc}
\hline
\multicolumn{2}{c}{Object} & Success rate (\%) \\
\hline
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/025_mug.jpeg}
\end{minipage} & 025\_mug & $89.67 \pm 1.2$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/065-d_cups.jpeg}
\end{minipage}& 065-d\_cups & $68.32\pm 1.9$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/072-b_toy_airplane.jpeg}
\end{minipage}&072-b\_toy\_airplane & $84.52 \pm 1.4$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-a_lego_duplo.jpeg}
\end{minipage}& 073-a\_lego\_duplo & $58.16 \pm 3.1$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-c_lego_duplo.jpeg}
\end{minipage} &073-c\_lego\_duplo & $50.21 \pm 3.7$ \\
\begin{minipage}{.06\textwidth}
\includegraphics[width=\linewidth]{figures/ycb/073-e_lego_duplo.jpeg}
\end{minipage}&073-e\_lego\_duplo & $66.57\pm 3.1$ \\
\hline
\end{tabular}
\renewcommand{\arraystretch}{1.}
}
\end{minipage}
\vspace{-1.cm}
\end{table}
\subsection{Object Reorientation with RGBD sensors}
\label{subsec:vision_exp}
In this section, we investigate whether we can train a vision policy to reorient objects with the hand facing upward. As vision-based experiments require substantially more compute, we train one vision policy per object on the six objects shown in \tblref{tbl:vision_pol_noise}, and leave training a single vision policy for all objects to future work. We use the expert MLP policy trained in \secref{subsec: result_hand_up} to supervise the vision policy. We also perform data augmentation on the point cloud input to the policy network at each time step during both training and testing. The augmentation includes random translation of the point cloud, random noise on the point positions, random dropout of points, and random variations of the point colors (a schematic sketch is given at the end of this section); more details are in \secref{app_subsec:vision_noise}. \tblref{tbl:vision_pol_noise} shows that reorienting non-symmetric objects, including the toy airplane and the mug, achieves high success rates (greater than $80\%$). Training policies for symmetric objects is much harder; nonetheless, \tblref{tbl:vision_pol_noise} shows that using $d_C$ as an episode termination criterion enables the policies to achieve success rates greater than $50\%$.
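The following sketch illustrates the four point cloud augmentations (all magnitudes are illustrative placeholders; the values actually used are listed in \secref{app_subsec:vision_noise}):
\begin{verbatim}
import numpy as np

def augment_point_cloud(xyz, rgb, rng):
    """xyz: (N, 3) point positions; rgb: (N, 3) colors in [0, 1]."""
    xyz = xyz + rng.uniform(-0.02, 0.02, size=3)        # random translation
    xyz = xyz + rng.normal(0.0, 0.002, size=xyz.shape)  # per-point jitter
    keep = rng.uniform(size=len(xyz)) > 0.1             # random dropout
    rgb = np.clip(rgb + rng.normal(0.0, 0.05, size=rgb.shape), 0.0, 1.0)
    return xyz[keep], rgb[keep]
\end{verbatim}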
\section{Related Work}
\label{sec:related}
Dexterous manipulation has been studied for decades, dating back to \citep{salisbury1982articulated, mason1989robot}. In contrast to parallel-jaw grasping, pushing, pivoting~\citep{karayiannidis2016adaptive}, or pick-and-place, dexterous manipulation typically involves continuously controlling force on the object through the fingertips of a robotic hand~\citep{dafle2014extrinsic}. Some prior works used analytical kinematics and dynamics models of the hand and object, together with trajectory optimization to output control policies~\citep{mordatch2012contact, bai2014dexterous, sundaralingam2019relaxed}, or employed kinodynamic planning to find a feasible motion plan~\citep{rus1999hand}. However, due to the large number of active contacts on the hand and the objects, model simplifications such as simple finger and object geometries are usually necessary to make the optimization or planning tractable. \citet{sundaralingam2019relaxed} moved objects in-hand but assumed that no contacts between the fingers and the object are made or broken. \citet{furukawa2006dynamic} achieved a high-speed dynamic regrasping motion on a cylinder using a high-speed robotic hand and a high-speed vision system. Prior works have also explored the use of a vision system for manipulating an object to track a planned path~\citep{calli2018path}, detecting manipulation modes~\citep{calli2018learning}, and precision manipulation~\citep{calli2017vision}, with a limited number of simply shaped objects and a two-fingered gripper. Recent works have explored the application of reinforcement learning to dexterous manipulation. Model-based RL approaches learn a linear~\citep{kumar2016optimal, kumar2016learning} or deep neural network~\citep{nagabandi2020deep} dynamics model from rollout data and use online optimal control to rotate a pen or Baoding balls on a Shadow hand. However, when the system is unstable, collecting informative trajectories for training a good dynamics model that generalizes to different objects remains challenging. Another line of work uses model-free RL algorithms to learn a dexterous manipulation policy. For example, \citet{andrychowicz2020learning} and \citet{akkaya2019solving} learned a controller to reorient a block or a Rubik's cube. \citet{van2015learning} learned a tactile-informed policy via RL for a three-finger manipulator to move an object on the table. To reduce the sample complexity of model-free learning, several works~\citep{radosavovic2020state, zhu2019dexterous, rajeswaran2017learning, jeong2020learning, gupta2016learning} combined imitation learning with RL to learn to reorient a pen, open a door, assemble LEGO blocks, etc. However, collecting expert demonstration data from humans is expensive, time-consuming, and sometimes extremely difficult for contact-rich tasks~\citep{rajeswaran2017learning}. Our method belongs to the category of model-free learning. We use the teacher-student learning paradigm to speed up deployment policy learning. Our learned policies also generalize to new shapes and show strong zero-shot transfer performance. To the best of our knowledge, our system is the first to demonstrate reorientation of a diverse set of geometrically complex objects with the hand facing both upward and downward. A recent work~\citep{huang2021geometry} (after our CoRL submission) learns a shape-conditioned policy to reorient objects around the $z$-axis with an upward-facing hand.
Our work tackles more general tasks (more diverse objects, arbitrary goal orientations in $SO(3)$, and the hand facing both upward and downward) and shows that even without any object shape information, the policies achieve surprisingly high success rates on these tasks.
\section{Discussion and Conclusion}
Our results show that model-free RL with simple deep learning architectures can be used to train policies that reorient a large set of geometrically diverse objects. Further, for learning with the hand facing downward, we found that a good pose initialization obtained from a lifting policy was necessary, and that a gravity curriculum substantially improved performance. The agent also learns to use an external surface (i.e., the table). The most surprising observation is that shape information is not required, despite the fact that we train a single policy to manipulate multiple objects. Perhaps in hindsight it is not so surprising: after all, humans can close their eyes and easily manipulate novel objects into a specific orientation. %
Our work can serve as a strong baseline for future in-hand object reorientation work that incorporates object shape in the observation space.%
While we only present results in simulation, we provide evidence that our policies can transfer to the real world. The experiments with domain randomization indicate that the learned policies can cope with noisy inputs. Analysis of peak torques during manipulation (see \figref{fig:torque} in the appendix) also reveals that the generated torque commands are feasible to actuate on an actual robotic hand.
Finally, \figref{fig:success_rate_ycb_up} and \figref{fig:success_rate_ycb_down} in the appendix show that the success rate varies substantially with object shape. This suggests that a training curriculum based on object shape could further improve performance. Another direction for future work is to directly train a single vision policy for a diverse set of objects. A major bottleneck in vision-based experiments is the demand for much larger GPU memory; learning visual representations of point cloud data that ease this computational bottleneck is therefore an important avenue for future research.
\acknowledgments{
We thank the anonymous reviewers for their helpful comments in revising the paper. We thank the members of Improbable AI lab for providing valuable feedback on research idea formulation and manuscript. This research is funded by Toyota Research Institute, Amazon Research Award, and DARPA Machine Common Sense Program. We also acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper.}