1 Introduction
In probability and statistics, the location (e.g., mean), spread (e.g., standard deviation), skewness, and kurtosis play an important role in the modeling of random processes. One often uses the mean and standard deviation to construct confidence intervals or conduct hypothesis tests, and significant skewness or kurtosis of a data set indicates deviations from normality. Moreover, moment matching algorithms are among the most widely used fitting procedures in practice. As a result, it is important to be able to find the moments of a given distribution. Winkelbauer, in his popular note [18], gave closed form formulae for the moments as well as the absolute moments of a normal distribution $N(\mu ,{\sigma ^{2}})$. The obtained results are elegant and have been well received. Recently, Ogasawara [13] has provided unified, nonrecursive formulae for the moments of a normal distribution with strip truncation; see also [12, 17] for the binomial family. Given the close relationship between the normal and Student’s t-distributions, a natural question arises: Can we derive similar formulae for the family of Student’s t-distributions? To the best of the authors’ knowledge, no such set of formulae exists for (generalized) Student’s t-distributions. The purpose of this note is to provide a complete set of closed form formulae for the raw moments, central moments, absolute moments, and central absolute moments of (generalized) Student’s t-distributions in both the one-dimensional and n-dimensional cases. In particular, the formulae given in (2.5)–(2.8) and Theorem 3.1 are new in the literature. In this sense, we unify existing results and provide extensions to higher dimensions within a common probabilistic framework.
Notation.
For later use, we denote the probability density function (pdf) of a Gamma distribution with shape parameter $\alpha \gt 0$ and rate parameter $\beta \gt 0$ by
\[ \text{Gamma}(x|\alpha ,\beta )=\frac{{\beta ^{\alpha }}}{\Gamma (\alpha )}{x^{\alpha -1}}{e^{-\beta x}},\hspace{1em}x\in (0,\infty ).\]
Similarly, the probability density function of a normal distribution $X\sim N(\mu ,{\sigma ^{2}})$ is denoted by
\[ N(x|\mu ,{\sigma ^{2}})=\frac{1}{\sqrt{2\pi {\sigma ^{2}}}}{e^{-\frac{{(x-\mu )^{2}}}{2{\sigma ^{2}}}}},\hspace{1em}x\in (-\infty ,\infty ).\]
This is extended naturally to higher dimensional cases. We will also require two common special functions. Kummer’s confluent hypergeometric function is defined by
\[ K(\alpha ,\gamma ;z)\equiv {_{1}}{F_{1}}(\alpha ,\gamma ;z)={\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{\overline{n}}}{z^{n}}}{{\gamma ^{\overline{n}}}n!}.\]
The Gauss hypergeometric function is defined by
\[ {_{2}}{F_{1}}(\alpha ,\beta ,\gamma ;z)={\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{\overline{n}}}{\beta ^{\overline{n}}}{z^{n}}}{{\gamma ^{\overline{n}}}n!},\]
where ${x^{\overline{n}}}=x(x+1)\cdots (x+n-1)$ denotes the rising factorial, with ${x^{\overline{0}}}=1$.
2 Student’s t-distribution: one dimensional case
Recall that the probability density function (pdf) of a standard Student’s t-distribution with $\nu \in \{1,2,3,\dots \}$ degrees of freedom, denoted by $\mathit{St}(t|0,1,\nu )$, is given by
(2.1)
\[ \mathit{St}(t|0,1,\nu )=\frac{\Gamma (\frac{\nu +1}{2})}{\Gamma (\frac{\nu }{2})}\frac{1}{\sqrt{\nu \pi }}{\left(1+\frac{{t^{2}}}{\nu }\right)^{-\frac{\nu +1}{2}}},\hspace{1em}-\infty \lt t\lt \infty ,\]
where the Gamma function is defined as
\[ \Gamma (z)={\int _{0}^{\infty }}{x^{z-1}}{e^{-x}}dx,\hspace{1em}z\gt 0.\]
More generally, the probability density function of a location-scale (or generalized) Student’s t-distribution with $\nu \gt 0$ degrees of freedom is denoted by
(2.2)
\[ St(t|\mu ,\sigma ,\nu )=\frac{\Gamma (\frac{\nu +1}{2})}{\Gamma (\frac{\nu }{2})}{\left(\frac{\sigma }{\nu \pi }\right)^{\frac{1}{2}}}{\left(1+\frac{\sigma }{\nu }{\left(t-\mu \right)^{2}}\right)^{-\frac{\nu +1}{2}}},\hspace{1em}-\infty \lt t\lt \infty ,\]
where $\mu \in (-\infty ,\infty )$ is the location and $\sigma \gt 0$ determines the scale. The thickness of the tails is determined by the degrees of freedom ν. When $\nu =1$, the pdf in (2.2) reduces to the pdf of $\text{Cauchy}(\mu ,\sigma )$, while the pdf in (2.2) converges to the pdf of the normal $N(t|\mu ,1/\sigma )$ as $\nu \to \infty $.
While the tails of the normal distribution decay at an exponential rate, the Student’s t-distribution is heavy-tailed, with a polynomial decay rate. Because of this, the Student’s t-distribution has been widely adopted in robust data analysis, including (non)linear regression [9], sample selection models [10], and linear mixed effects models [14]. It is also among the most widely applied distributions in financial risk modeling; see [8, 11, 16]. The reader is invited to refer to [7] for more.
The mean and variance of a Student’s t-distribution T are well known and can be found in closed form by using the properties of the Gamma function. Specifically, for $T\sim \mathit{St}(t|0,1,\nu )$ with $\nu \gt 2$, we have (see, for example, [5]):
\[ \mathbb{E}(T)=0,\hspace{1em}\text{Var}(T)=\frac{\nu }{\nu -2}.\]
However, for higher order raw or central moments, the calculation quickly becomes tedious.
We note that the Student’s t-distribution can be written as $T=X/\sqrt{Z/\nu }$, where $X\sim N(0,1)$, $Z\sim {\chi _{\nu }^{2}}$, and $X,Z$ are independent; from there, one can derive the probability density function of T. Instead, we adopt the mixture approach, which is surprisingly simple and will be very useful in later derivations. It provides a representation of a conditional Student’s t-distribution in terms of a normal distribution; see, for example, [3, page 103]. More specifically, we have the following lemma.
Lemma 2.1.
Assume that for $\nu \gt 0$, $\Lambda \sim \textit{Gamma}(\lambda |\nu /2,\nu /2)$, and that, given $\Lambda =\lambda $, $T|\lambda $ is normally distributed with mean μ and variance $1/(\sigma \lambda )$. Then T follows the $\mathit{St}(t|\mu ,\sigma ,\nu )$ Student’s t-distribution.
Proof.
As the proof is very concise, we reproduce it here for the reader’s convenience. Let ${f_{T}}(t)$ be the probability density function of T. We have
\[\begin{aligned}{}{f_{T}}(t)=& {\int _{0}^{\infty }}N(t|\mu ,\frac{1}{\sigma \lambda })\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} =& {\int _{0}^{\infty }}\frac{\sqrt{\sigma \lambda }}{\sqrt{2\pi }}{e^{-\frac{\sigma \lambda }{2}{(t-\mu )^{2}}}}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\lambda ^{\nu /2-1}}{e^{-\frac{\nu }{2}\lambda }}d\lambda \\ {} =& \frac{\sqrt{\sigma }}{\sqrt{2\pi }}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\frac{\Gamma (\frac{\nu +1}{2})}{{(\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})^{\frac{\nu +1}{2}}}}\\ {} \hspace{1em}& \times {\int _{0}^{\infty }}\text{Gamma}(\lambda |\frac{\nu +1}{2},\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})d\lambda \\ {} =& \frac{\sqrt{\sigma }}{\sqrt{2\pi }}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\frac{\Gamma (\frac{\nu +1}{2})}{{(\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})^{\frac{\nu +1}{2}}}}\\ {} =& \mathit{St}(t|\mu ,\sigma ,\nu ).\end{aligned}\]
This completes the proof of the lemma. □
The equalities in Theorem 2.1 below are well known.
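Lemma 2.1 is easy to check by simulation. The following sketch (an illustration with arbitrary parameter values, not part of this note) draws from the Gamma mixture and compares the sample with SciPy's t distribution, using the fact that under the parametrization in (2.2), $T=\mu +Z/\sqrt{\sigma }$ with Z a standard t variable:

```python
# Monte Carlo sketch of the Gamma-mixture representation in Lemma 2.1:
# draw lambda ~ Gamma(shape=nu/2, rate=nu/2), then T | lambda ~ N(mu, 1/(sigma*lambda)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, nu = 1.0, 2.0, 5.0     # arbitrary illustrative parameters
n = 200_000

lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # rate nu/2 -> scale 2/nu
t = rng.normal(mu, 1 / np.sqrt(sigma * lam))

# Under (2.2), T = mu + Z/sqrt(sigma) with Z standard t_nu, so the reference
# distribution is scipy's t with loc=mu and scale=1/sqrt(sigma).
ref = stats.t(df=nu, loc=mu, scale=1 / np.sqrt(sigma))
ks = stats.kstest(t, ref.cdf).statistic
assert ks < 0.01   # empirical and theoretical CDFs agree closely
```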
With this and Lemma 2.1 above, we are able to find moments of the Student’s t-distribution. More specifically, we have the following comprehensive theorem in one dimension.
Theorem 2.2.
For $k\in {\mathbb{N}_{+}}$ with $0\lt k\lt \nu $, the following results hold (in general, the moments are undefined when $k\ge \nu $):
1. For $T\sim \mathit{St}(t|0,1,\nu )$, the raw and absolute moments satisfy
(2.3)
\[ \mathbb{E}({T^{k}})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& k\hspace{2.5pt}\textit{odd},\\ {} {\nu ^{k/2}}\displaystyle \frac{\Gamma (\frac{k+1}{2})\Gamma (\frac{\nu -k}{2})}{\sqrt{\pi }\Gamma (\frac{\nu }{2})},& k\hspace{2.5pt}\textit{even};\end{array}\right.\]
(2.4)
\[ \mathbb{E}(|T{|^{k}})={\nu ^{k/2}}\frac{\Gamma (\frac{k+1}{2})\Gamma (\frac{\nu -k}{2})}{\sqrt{\pi }\Gamma (\frac{\nu }{2})}.\]
2. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, the raw moments satisfy
(2.5)
\[ \mathbb{E}({T^{k}})=\left\{\begin{array}{l}{(\nu /\sigma )^{k/2}}\displaystyle \frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\displaystyle \frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\displaystyle \frac{k}{2},\displaystyle \frac{\nu }{2}-\displaystyle \frac{k}{2},\displaystyle \frac{1}{2};-\displaystyle \frac{{\mu ^{2}}\sigma }{\nu }),\\ {} k\hspace{2.5pt}\textit{even},\\ {} 2\mu {(\nu /\sigma )^{(k-1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k-1}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(\frac{1-k}{2},\frac{\nu }{2}-\frac{k-1}{2},\frac{3}{2};-\frac{{\mu ^{2}}\sigma }{\nu }),\\ {} k\hspace{2.5pt}\textit{odd};\end{array}\right.\]
3. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, the absolute moments satisfy
(2.7)
\[ \mathbb{E}(|T{|^{k}})={(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\frac{k}{2},\frac{\nu }{2}-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}\sigma }{\nu }).\]
Proof.
First assume that $T\sim \mathit{St}(t|0,1,\nu )$; we will find $\mathbb{E}(|T{|^{k}})$. The proof for $\mathbb{E}({T^{k}})$ follows from similar ideas in combination with the result obtained in Theorem 2.1. Assume $\Lambda \sim \text{Gamma}(\lambda |\nu /2,\nu /2)$ and, given $\Lambda =\lambda $, that $T|\lambda $ is normally distributed with mean 0 and variance $1/\lambda $. From equation (17) in [18], we have
\[\begin{aligned}{}\mathbb{E}(|T{|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}|t{|^{k}}\hspace{2.5pt}\text{N}(t|0,\frac{1}{\lambda })\hspace{2.5pt}dt=\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right).\end{aligned}\]
Hence we have
\[\begin{aligned}{}& \mathbb{E}(|T{|^{k}})=\mathbb{E}(\mathbb{E}(|T{|^{k}}|\lambda ))\\ {} & ={\underset{0}{\overset{\infty }{\int }}}\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\lambda ^{\nu /2-1}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\underset{0}{\overset{\infty }{\int }}}{\lambda ^{\nu /2-1-k/2}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\\ {} & \hspace{1em}\cdot \frac{\Gamma (\frac{\nu -k}{2})}{{(\frac{\nu }{2})^{\frac{\nu -k}{2}}}}{\underset{0}{\overset{\infty }{\int }}}\frac{{(\frac{\nu }{2})^{\frac{\nu -k}{2}}}}{\Gamma (\frac{\nu -k}{2})}{\lambda ^{(\nu -k)/2-1}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\cdot \frac{\Gamma (\frac{\nu -k}{2})}{{(\nu /2)^{\frac{\nu -k}{2}}}}\\ {} & =\frac{{\nu ^{k/2}}\Gamma ((k+1)/2)\Gamma ((\nu -k)/2)}{\sqrt{\pi }\Gamma (\nu /2)},\end{aligned}\]
where we have used the fact that $K\left(-\frac{k}{2},\frac{1}{2};0\right)=1$.
Next, assume that $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$ and $\Lambda \sim \text{Gamma}(\lambda |\nu /2,\nu /2)$. Additionally, given $\Lambda =\lambda $, assume further that $T|\lambda $ is normally distributed with mean μ and variance $1/(\sigma \lambda )$. Using the following facts (obtained in [18])
\[\begin{aligned}{}\mathbb{E}({(T-\mu )^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}{(t-\mu )^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt\\ {} & =(1+{(-1)^{k}})\frac{1}{{\lambda ^{k/2}}}{2^{k/2-1}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\end{aligned}\]
and
\[\begin{aligned}{}\mathbb{E}(|T-\mu {|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}|t-\mu {|^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt=\frac{{\sigma ^{-k/2}}}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }},\end{aligned}\]
the derivations for $\mathbb{E}({(T-\mu )^{k}})$ and $\mathbb{E}(|T-\mu {|^{k}})$ follow similarly.
Next, assume that $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$; we would like to compute the absolute moment $\mathbb{E}(|T{|^{k}})$ of T. From equation (17) in [18], we have
\[\begin{aligned}{}\mathbb{E}(|T{|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}\hspace{-0.1667em}|t{|^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt=\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\hspace{-0.1667em}\left(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda \right).\end{aligned}\]
Hence, using Part 2) of Theorem 2.1, we have for $k\lt \nu $
\[\begin{aligned}{}& \mathbb{E}(|T{|^{k}})=\mathbb{E}(\mathbb{E}(|T{|^{k}}|\lambda ))\\ {} & =\int \frac{1}{{\lambda ^{k/2}}}{2^{k/2}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda \right)\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}\int {\lambda ^{n-k/2}}\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n+k/2}}\frac{\Gamma (n-k/2+\nu /2)}{\Gamma (\nu /2)}\\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n+k/2}}{(\frac{\nu }{2}-\frac{k}{2})^{\overline{n}}}\\ {} & ={\sigma ^{-k/2}}{2^{k/2}}{(\nu /2)^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}\\ {} & \hspace{1em}\times {\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n}}{(\frac{\nu }{2}-\frac{k}{2})^{\overline{n}}}\\ {} & ={(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\frac{k}{2},\frac{\nu }{2}-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}\sigma }{\nu }).\end{aligned}\]
Lastly, from equation (12) in [18], we have
\[\begin{aligned}{}\mathbb{E}({T^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}{t^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}{\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{1}{{\lambda ^{k/2}}}K(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda ),& k\hspace{2.5pt}\text{even},\\ {} \mu {\sigma ^{-(k-1)/2}}{2^{(k+1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{1}{{\lambda ^{(k-1)/2}}}K(\frac{1-k}{2},\frac{3}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda ),& k\hspace{2.5pt}\text{odd}.\end{array}\right.\end{aligned}\]
Similar to the calculations done for $\mathbb{E}(|T{|^{k}})$, we have
\[ \mathbb{E}({T^{k}})=\left\{\hspace{-0.1667em}\begin{array}{l@{\hskip10.0pt}l}{(\nu /\sigma )^{k/2}}\displaystyle \frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\displaystyle \frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\displaystyle \frac{k}{2},\displaystyle \frac{\nu }{2}-\displaystyle \frac{k}{2},\displaystyle \frac{1}{2};-\displaystyle \frac{{\mu ^{2}}\sigma }{\nu }),& k\hspace{2.5pt}\text{even},\\ {} 2\mu {(\nu /\sigma )^{(k-1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k-1}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(\frac{1-k}{2},\frac{\nu }{2}-\frac{k-1}{2},\frac{3}{2};-\frac{{\mu ^{2}}\sigma }{\nu }),& k\hspace{2.5pt}\text{odd}.\end{array}\right.\]
This completes the proof of the theorem. □
Remark 2.1.
1. The formulae given in (2.5)–(2.8) are new in the literature. When $T\sim \mathit{St}(t|0,1,\nu )$, $\mathbb{E}({T^{k}})$ is well known, and one can directly use the definition to find $\mathbb{E}(|T{|^{k}})$ through the class of beta functions defined in Section 6.2 of [1] and arrive at the same formula. However, this direct approach no longer works for expectations of the forms $\mathbb{E}(|T{|^{k}})$ and $\mathbb{E}({T^{k}})$ when $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, or for the higher dimensional moments considered in Section 3. Clearly, (2.5) reduces to (2.3), and (2.7) reduces to (2.4), when $\mu =0$ and $\sigma =1$.
2. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, once $\mathbb{E}({(T-\mu )^{i}})$, $0\le i\le k$, have been computed, we can use them to compute $\mathbb{E}({T^{k}})$ for $k\lt \nu $ using the binomial expansion
\[ \mathbb{E}({T^{k}})={\sum \limits_{i=0}^{k}}\binom{k}{i}{\mu ^{k-i}}\mathbb{E}({(T-\mu )^{i}}).\]
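The closed form raw-moment formula (2.5) is easy to verify numerically. The sketch below (illustrative only, with arbitrary parameter values) evaluates the hypergeometric expression via SciPy's `hyp2f1` and compares it against direct quadrature of ${t^{k}}$ times the density (2.2):

```python
# Numerical check of the raw-moment formula (2.5) for T ~ St(t | mu, sigma, nu).
import numpy as np
from math import gamma, sqrt, pi
from scipy.special import hyp2f1
from scipy import stats
from scipy.integrate import quad

def raw_moment(k, mu, sigma, nu):
    """E(T^k) via formula (2.5); requires k < nu."""
    z = -mu**2 * sigma / nu
    if k % 2 == 0:
        return ((nu / sigma) ** (k / 2) * gamma((k + 1) / 2) / sqrt(pi)
                * gamma((nu - k) / 2) / gamma(nu / 2)
                * hyp2f1(-k / 2, (nu - k) / 2, 1 / 2, z))
    return (2 * mu * (nu / sigma) ** ((k - 1) / 2) * gamma(k / 2 + 1) / sqrt(pi)
            * gamma((nu - k + 1) / 2) / gamma(nu / 2)
            * hyp2f1((1 - k) / 2, (nu - k + 1) / 2, 3 / 2, z))

mu, sigma, nu = 0.7, 2.0, 9.0
# Under (2.2), the density is that of scipy's t with loc=mu, scale=1/sqrt(sigma).
dist = stats.t(df=nu, loc=mu, scale=1 / np.sqrt(sigma))
for k in (1, 2, 3, 4):
    num, _ = quad(lambda t: t**k * dist.pdf(t), -np.inf, np.inf)
    assert abs(raw_moment(k, mu, sigma, nu) - num) < 1e-5
```

For instance, $k=2$ recovers the familiar $\mathbb{E}({T^{2}})=\nu /(\sigma (\nu -2))+{\mu ^{2}}$.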
3 Higher-dimensional case
Now we consider the case when $n\ge 2$. Denote $\boldsymbol{t}=({t_{1}},{t_{2}},\dots ,{t_{n}})\in {\mathbb{R}^{n}}$. Denote the pdf of the n-dimensional normal random variable as
\[ N(\boldsymbol{x}|\boldsymbol{\mu },\boldsymbol{\Sigma })=\frac{1}{{(2\pi )^{n/2}}|\boldsymbol{\Sigma }{|^{\frac{1}{2}}}}{e^{-\frac{1}{2}{(\boldsymbol{x}-\boldsymbol{\mu })^{T}}{\boldsymbol{\Sigma }^{-1}}(\boldsymbol{x}-\boldsymbol{\mu })}},\hspace{1em}\boldsymbol{x}\in {\mathbb{R}^{n}},\]
where $\boldsymbol{\mu }\in {\mathbb{R}^{n}}$ and $|\boldsymbol{\Sigma }|$ is the determinant of the $n\times n$ symmetric positive definite matrix Σ. Similarly to the 1-dimensional case, we define the probability density of the n-dimensional Student’s t-distribution as
(3.1)
\[ St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}|\boldsymbol{\mu },{(\lambda \boldsymbol{\Sigma })^{-1}})\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda ,\]
where $\boldsymbol{\mu }$ is called the location, Σ is the scale matrix, and ν is the degrees-of-freedom parameter. Similarly to Lemma 2.1, we can compute this integral explicitly:
\[\begin{aligned}{}& St(\boldsymbol{t}\mid \boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}\mid \boldsymbol{\mu },{(\lambda \boldsymbol{\Sigma })^{-1}})\text{Gamma}\left(\lambda \mid \frac{\nu }{2},\frac{\nu }{2}\right)d\lambda \\ {} & ={\int _{0}^{\infty }}\frac{|\lambda \boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}}\exp \left\{-\frac{1}{2}{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}(\lambda \boldsymbol{\Sigma })(\boldsymbol{t}-\boldsymbol{\mu })-\frac{\nu \lambda }{2}\right\}\frac{1}{\Gamma (\nu /2)}{\left(\frac{\nu }{2}\right)^{\nu /2}}{\lambda ^{\nu /2-1}}d\lambda \\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\int _{0}^{\infty }}\exp \left\{-\frac{1}{2}{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}(\lambda \boldsymbol{\Sigma })(\boldsymbol{t}-\boldsymbol{\mu })-\frac{\nu \lambda }{2}\right\}{\lambda ^{n/2+\nu /2-1}}d\lambda .\end{aligned}\]
Let’s define
\[\begin{aligned}{}{\Delta ^{2}}& ={(\boldsymbol{t}-\boldsymbol{\mu })^{T}}\boldsymbol{\Sigma }(\boldsymbol{t}-\boldsymbol{\mu }),\\ {} z& =\frac{\lambda }{2}({\Delta ^{2}}+\nu ),\end{aligned}\]
then we have
(3.2)
\[\begin{aligned}{}St(\boldsymbol{t}\mid \boldsymbol{\mu },\boldsymbol{\Sigma },\nu )& =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\int _{0}^{\infty }}\exp (-z){\left(\frac{2z}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2-1}}\cdot \frac{2}{{\Delta ^{2}}+\nu }dz\\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\left(\frac{2}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2}}{\int _{0}^{\infty }}\exp (-z){z^{n/2+\nu /2-1}}dz\\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\left(\frac{2}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2}}\Gamma (\frac{\nu +n}{2})\\ {} & =\frac{\Gamma (\frac{\nu +n}{2})}{\Gamma (\frac{\nu }{2})}\frac{|\boldsymbol{\Sigma }{|^{\frac{1}{2}}}}{{(\nu \pi )^{\frac{n}{2}}}}{\left(1+\frac{1}{\nu }{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}\boldsymbol{\Sigma }(\boldsymbol{t}-\boldsymbol{\mu })\right)^{-\frac{\nu +n}{2}}}.\end{aligned}\]
Note that in the standardized case of $\boldsymbol{\mu }=\mathbf{0}$ and $\boldsymbol{\Sigma }=\boldsymbol{I}$, the representation in (3.1) reduces to
(3.3)
\[ \mathit{St}(\boldsymbol{t}|\mathbf{0},\boldsymbol{I},\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}|\mathbf{0},\frac{1}{\lambda }\boldsymbol{I})\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda .\]
Let’s write $\boldsymbol{T}=({T_{1}},{T_{2}},\dots ,{T_{n}})$ and $\boldsymbol{k}=({k_{1}},{k_{2}},\dots ,{k_{n}})$ with $0\le {k_{i}}\in \mathbb{N}$. For $\nu \gt 2$, it is known that (see, for example, [3, page 105])
\[ \mathbb{E}(\boldsymbol{T})=\boldsymbol{\mu },\hspace{1em}\text{cov}(\boldsymbol{T})=\frac{\nu }{\nu -2}{\boldsymbol{\Sigma }^{-1}}.\]
For the rest of this section, we are interested in higher moments of $\boldsymbol{T}$. The $\boldsymbol{k}$-th moment of $\boldsymbol{T}$ is defined as
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})=\int {t_{1}^{{k_{1}}}}{t_{2}^{{k_{2}}}}\dots {t_{n}^{{k_{n}}}}\cdot \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\]
Similarly,
\[ \mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})=\int |{t_{1}}{|^{{k_{1}}}}|{t_{2}}{|^{{k_{2}}}}\dots |{t_{n}}{|^{{k_{n}}}}\cdot \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\]
To simplify the notation, in the following we use $\textstyle\sum {k_{i}}$, $\textstyle\prod {k_{i}}$ to denote ${\textstyle\sum _{i=1}^{n}}{k_{i}}$, ${\textstyle\prod _{i=1}^{n}}{k_{i}}$, respectively. To the best of the authors’ knowledge, the following results are new.
Theorem 3.1.
For $\textstyle\sum {k_{i}}\lt \nu $, we have:
1. If $\boldsymbol{T}\sim St(\boldsymbol{t}|\mathbf{0},\boldsymbol{I},\nu )$, then
• the raw moments satisfy\[ \hspace{-12.0pt}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \textit{if}\hspace{2.5pt}\textit{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\textit{is odd},\\ {} {\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\displaystyle \frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})}}\textstyle\prod ({k_{i}}/2)!},& \textit{if}\hspace{2.5pt}\textit{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\textit{are even};\end{array}\right.\]
• the absolute moments satisfy
\[ \mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})={\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }};\]
2. If $\boldsymbol{T}\sim \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )$ with $\nu \gt 2$, denote ${\boldsymbol{\Sigma }^{-1}}=({\overline{\sigma }_{ij}})$ and ${\boldsymbol{e}_{i}}=(0,\dots ,1,\dots ,0)$, the ith unit vector of ${\mathbb{R}^{n}}$. Then we have the following recursive formula to compute the moments of $\boldsymbol{T}$:
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{\nu -2}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\]
Proof.
For 1), first from (3.3), we have
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})={\int _{0}^{\infty }}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})$ is the $\boldsymbol{k}$ moment of a $N(\mathbf{0},\frac{1}{t}\boldsymbol{I})$. Using Theorem 2.1, we have
\[ \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})={\prod \limits_{i=1}^{n}}\mathbb{E}({X_{i}^{{k_{i}}}}|0,\frac{1}{t})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} \displaystyle \frac{{t^{-\textstyle\sum {k_{i}}/2}}\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})/2}}\textstyle\prod ({k_{i}}/2)!},& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even}.\end{array}\right.\]
As a result,
\[\begin{aligned}{}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})& =\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} \displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})/2}}\textstyle\prod ({k_{i}}/2)!}\\ {} \hspace{1em}\times {\displaystyle \int _{0}^{\infty }}{t^{-\textstyle\sum {k_{i}}/2}}\text{Gamma}(t|\displaystyle \frac{\nu }{2},\displaystyle \frac{\nu }{2})dt,& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even},\end{array}\right.\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} {\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\displaystyle \frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})}}\textstyle\prod ({k_{i}}/2)!},& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even}.\end{array}\right.\end{aligned}\]
Similarly, we have
\[ \mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})={\int _{0}^{\infty }}\mathbb{E}(|{X_{1}}{|^{{k_{1}}}}|{X_{2}}{|^{{k_{2}}}}\dots |{X_{n}}{|^{{k_{n}}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where
\[ \mathbb{E}(|{X_{1}}{|^{{k_{1}}}}|{X_{2}}{|^{{k_{2}}}}\dots |{X_{n}}{|^{{k_{n}}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})={\prod \limits_{i=1}^{n}}\mathbb{E}(|{X_{i}}{|^{{k_{i}}}}|0,\frac{1}{t})=\prod \frac{1}{{t^{{k_{i}}/2}}}{2^{{k_{i}}/2}}\frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}.\]
Therefore,
\[\begin{aligned}{}\mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})& ={2^{\textstyle\sum {k_{i}}/2}}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}{\int _{0}^{\infty }}{t^{-\textstyle\sum {k_{i}}/2}}\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt\\ {} & ={\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}\hspace{1em}\text{if}\hspace{2.5pt}\sum {k_{i}}\lt \nu .\end{aligned}\]
For 2), from (3.1),
(3.4)
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})={\int _{0}^{\infty }}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}})\equiv \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$ is the $\boldsymbol{k}$-th moment of $N(\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$. Recall that the pdf of $N(\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$ is given by
\[ N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})=\frac{1}{{(2\pi )^{n/2}}|\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}{|^{\frac{1}{2}}}}{e^{-\frac{1}{2}{(\boldsymbol{x}-\boldsymbol{\mu })^{T}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })}}.\]
Similar to Theorem 1 in [6], we have
\[ -\frac{\partial N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})}{\partial \boldsymbol{x}}=t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}).\]
Hence
\[ -\int {\boldsymbol{x}^{\boldsymbol{k}}}\frac{\partial N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})}{\partial \boldsymbol{x}}d\boldsymbol{x}=\int {\boldsymbol{x}^{\boldsymbol{k}}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
By integration by parts, we arrive at
\[ \int {k_{j}}{\boldsymbol{x}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}=\int {\boldsymbol{x}^{\boldsymbol{k}}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
Or equivalently,
\[ \int {\boldsymbol{x}^{\boldsymbol{k}}}(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}=\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\int {k_{j}}{\boldsymbol{x}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
This in turn implies that
\[ \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})={\mu _{i}}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}})+\frac{1}{t}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\]
Plugging this into the equation (3.4), we have the following recursive equation
\[\begin{aligned}{}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})& ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}){\int _{0}^{\infty }}\frac{1}{t}\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt\\ {} & ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{2}\frac{\Gamma (\frac{\nu }{2}-1)}{\Gamma (\frac{\nu }{2})}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}})\\ {} & ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{\nu -2}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\end{aligned}\]
This completes the proof of the theorem. □
Lastly, for $\boldsymbol{a}=({a_{1}},{a_{2}},\dots ,{a_{n}})$ and $\boldsymbol{b}=({b_{1}},{b_{2}},\dots ,{b_{n}})\in {\mathbb{R}^{n}}$, let ${\boldsymbol{a}_{(j)}}$ be the vector obtained from $\boldsymbol{a}$ by deleting the jth element of $\boldsymbol{a}$. For $\boldsymbol{\Sigma }=({\sigma _{ij}})$, let ${\sigma _{i}^{2}}={\sigma _{ii}}$ and let ${\boldsymbol{\Sigma }_{i,(j)}}$ stand for the ith row of Σ with its jth element removed. Analogously, let ${\boldsymbol{\Sigma }_{(i),(j)}}$ stand for the matrix Σ with the ith row and jth column removed.
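As an illustration (not from the paper; the parameter values below are arbitrary), the recursive formula in part 2 of Theorem 3.1 can be implemented with memoization and cross-checked against a Monte Carlo draw from the mixture representation (3.1):

```python
# Sketch of the recursion E(T^{k+e_i}) = mu_i E(T^k)
#   + nu/(nu-2) * sum_j sigma_bar_{ij} k_j E(T^{k-e_j}),  sigma_bar = Sigma^{-1},
# checked against Monte Carlo for an illustrative 2-dimensional example.
from functools import lru_cache
import numpy as np

mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])   # scale matrix (precision-like role)
Sigma_inv = np.linalg.inv(Sigma)
nu = 10.0

@lru_cache(maxsize=None)
def moment(k):
    """E(T^k) for a multi-index k, via the recursion of Theorem 3.1, part 2."""
    k = np.array(k)
    if (k == 0).all():
        return 1.0
    i = int(np.argmax(k > 0))          # reduce along the first positive index
    kp = k.copy(); kp[i] -= 1          # so that k = kp + e_i
    val = mu[i] * moment(tuple(kp))
    for j in range(len(k)):
        if kp[j] > 0:
            km = kp.copy(); km[j] -= 1
            val += nu / (nu - 2) * Sigma_inv[i, j] * kp[j] * moment(tuple(km))
    return val

# Monte Carlo check via (3.1): lambda ~ Gamma(nu/2, rate nu/2), then
# T | lambda ~ N(mu, (lambda * Sigma)^{-1}).
rng = np.random.default_rng(1)
n = 400_000
lam = rng.gamma(nu / 2, 2 / nu, size=n)
chol = np.linalg.cholesky(Sigma_inv)          # chol @ chol.T = Sigma^{-1}
z = rng.standard_normal((n, 2))
T = mu + (z @ chol.T) / np.sqrt(lam)[:, None]

mc = np.mean(T[:, 0] ** 2 * T[:, 1])          # estimate of E(T_1^2 T_2)
assert abs(moment((2, 1)) - mc) < 0.05
```

Note that `moment((1, 0))` returns ${\mu _{1}}$, consistent with $\mathbb{E}(\boldsymbol{T})=\boldsymbol{\mu }$.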
Consider the following truncated $\boldsymbol{k}$-th moment
\[\begin{aligned}{}{F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )& ={\int _{\boldsymbol{a}}^{\boldsymbol{b}}}{\boldsymbol{t}^{\boldsymbol{k}}}St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d\boldsymbol{t}\\ {} & \equiv {\int _{{a_{1}}}^{{b_{1}}}}\dots {\int _{{a_{n}}}^{{b_{n}}}}{t_{1}^{{k_{1}}}}\dots {t_{n}^{{k_{n}}}}St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\end{aligned}\]
We have
(3.5)
\[ \displaystyle {F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}^{\boldsymbol{k}}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\boldsymbol{X}\sim N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$. Using Theorem 1 in [6], we have for $n\gt 1$
(3.6)
\[\begin{aligned}{}\mathbb{E}({X_{\boldsymbol{k}+{\boldsymbol{e}_{i}}}^{n}};\boldsymbol{a},\boldsymbol{b},\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})& :=\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}_{\boldsymbol{k}+{\mathbf{e}_{i}}}^{n}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]\\ {} & ={\mu _{i}}\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}_{\boldsymbol{k}}^{n}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]+\frac{1}{t}{\mathbf{e}_{i}^{\top }}{\boldsymbol{\Sigma }^{-1}}{\mathbf{c}_{\mathbf{k}}},\end{aligned}\]
where ${\mathbf{c}_{\mathbf{k}}}$ satisfies
\[\begin{aligned}{}{\mathbf{c}_{\mathbf{k},j}}& ={k_{j}}\mathbb{E}({X_{\boldsymbol{k}-{\boldsymbol{e}_{j}}}^{n}};\boldsymbol{a},\boldsymbol{b},\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})\\ {} & \hspace{1em}+{a_{j}^{{k_{j}}}}N({a_{j}}|{\mu _{j}},\frac{1}{t}{\overline{\sigma }_{j}^{2}})\mathbb{E}({X_{{\boldsymbol{k}_{(j)}}}^{n-1}};{\boldsymbol{a}_{(j)}},{\boldsymbol{b}_{(j)}},{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{a}}},\frac{1}{t}{\widehat{\boldsymbol{\Sigma }}_{j}})\\ {} & \hspace{1em}-{b_{j}^{{k_{j}}}}N({b_{j}}|{\mu _{j}},\frac{1}{t}{\overline{\sigma }_{j}^{2}})\mathbb{E}({X_{{\boldsymbol{k}_{(j)}}}^{n-1}};{\boldsymbol{a}_{(j)}},{\boldsymbol{b}_{(j)}},{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{b}}},\frac{1}{t}{\widehat{\boldsymbol{\Sigma }}_{j}}),\hspace{1em}j=1,2,\dots ,n,\end{aligned}\]
with
(3.7)
\[ \left\{\begin{array}{l}{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{a}}}={\boldsymbol{\mu }_{(j)}}+{\boldsymbol{\Sigma }_{(j),j}^{-1}}\frac{{a_{j}}-{\mu _{j}}}{{\overline{\sigma }_{j}^{2}}},\\ {} {\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{b}}}={\boldsymbol{\mu }_{(j)}}+{\boldsymbol{\Sigma }_{(j),j}^{-1}}\frac{{b_{j}}-{\mu _{j}}}{{\overline{\sigma }_{j}^{2}}},\\ {} {\widehat{\boldsymbol{\Sigma }}_{j}}={\boldsymbol{\Sigma }_{(j),(j)}^{-1}}-\frac{1}{{\overline{\sigma }_{j}^{2}}}{\boldsymbol{\Sigma }_{(j),j}^{-1}}{\boldsymbol{\Sigma }_{j,(j)}^{-1}}.\end{array}\right.\]
Thus, we have the following recursive formula
\[ {F_{\boldsymbol{k}+{\mathbf{e}_{i}}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\mu _{i}}{F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )+\frac{\nu }{\nu -2}{\mathbf{e}_{i}^{\top }}\boldsymbol{\Sigma }{\mathbf{d}_{\mathbf{k}}},\]
where
\[\begin{aligned}{}{\mathbf{d}_{\mathbf{k},j}}=& {k_{j}}{F_{\boldsymbol{k}-{\mathbf{e}_{j}}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )+{a_{j}^{{k_{j}}}}St({a_{j}}|{\mu _{j}},{\overline{\sigma }_{j}^{2}},\nu ){F_{{\boldsymbol{k}_{(j)}}}^{n-1}}({\mathbf{a}_{(j)}},{\mathbf{b}_{(j)}};{\widehat{\boldsymbol{\mu }}_{j}^{\mathbf{a}}},{\widehat{\boldsymbol{\Sigma }}_{j}},\nu )\\ {} & -{b_{j}^{{k_{j}}}}St({b_{j}}|{\mu _{j}},{\overline{\sigma }_{j}^{2}},\nu ){F_{{\boldsymbol{k}_{(j)}}}^{n-1}}({\mathbf{a}_{(j)}},{\mathbf{b}_{(j)}};{\widehat{\boldsymbol{\mu }}_{j}^{\mathbf{b}}},{\widehat{\boldsymbol{\Sigma }}_{j}},\nu ),\hspace{1em}j=1,2,\dots ,n.\end{aligned}\]
Note that by convention the first term, second term, and third term in the expression of ${\mathbf{d}_{\mathbf{k},j}}$ equal 0 when ${k_{j}}=0$, ${a_{j}}=\infty $, ${b_{j}}=-\infty $, respectively.
4 Conclusion
We have derived closed form formulae for the raw moments, absolute moments, and central moments of the (generalized) Student’s t-distribution with arbitrary degrees of freedom. We provide results in one and n dimensions, which unify and extend the existing literature on the Student’s t-distribution. It would be interesting to investigate tail quantile approximations or asymptotic tail properties of higher-dimensional (generalized) Student’s t-distributions, as done in [15] and [4]. We leave this as an interesting project for future studies.