Modern Stochastics: Theory and Applications
Moments of Student’s t-distribution: a unified approach
Justin Lars Kirkby, Dang H. Nguyen, Duy Nguyen

https://doi.org/10.15559/25-VMSTA278
Pub. online: 25 April 2025. Type: Research Article. Open Access.

Received: 1 August 2024
Revised: 31 March 2025
Accepted: 31 March 2025
Published: 25 April 2025

Abstract

In this paper, new closed-form formulae for moments of the (generalized) Student’s t-distribution are derived, in the one-dimensional case as well as in higher dimensions, through a unified probability framework. Interestingly, the closed-form expressions for the moments of the Student’s t-distribution can be written in terms of the familiar Gamma function, Kummer’s confluent hypergeometric function, and the hypergeometric function. This work aims to provide a concise and unified treatment of the moments of this important distribution.

1 Introduction

In probability and statistics, the location (e.g., mean), spread (e.g., standard deviation), skewness, and kurtosis play an important role in the modeling of random processes. One often uses the mean and standard deviation to construct confidence intervals or conduct hypothesis testing, and significant skewness or kurtosis of a data set indicates deviations from normality. Moreover, moment matching algorithms are among the most widely used fitting procedures in practice. As a result, it is important to be able to find the moments of a given distribution. Winkelbauer, in his popular note [18], gave closed-form formulae for the moments as well as the absolute moments of a normal distribution $N(\mu ,{\sigma ^{2}})$. The obtained results are beautiful and have been well received. Recently, Ogasawara [13] provided unified, nonrecursive formulae for moments of the normal distribution with stripe truncation. See also [12, 17] for the binomial family. Given the close relationship between the normal and Student’s t-distributions, a natural question arises: Can we derive similar formulae for the family of Student’s t-distributions? To the best of the authors’ knowledge, no such set of formulae exists for (generalized) Student’s t-distributions. The purpose of this note is to provide a complete set of closed-form formulae for raw moments, central moments, absolute moments, and central absolute moments of (generalized) Student’s t-distributions in both the one-dimensional and n-dimensional cases. In particular, the formulae given in (2.5)–(2.8) and Theorem 3.1 are new in the literature. In this sense, we unify existing results and provide extensions to higher dimensions within a common probabilistic framework.
Notation.
For later use, we denote the probability density function (pdf) of a Gamma distribution with parameters $\alpha \gt 0$, $\beta \gt 0$ by
\[ \text{Gamma}(x|\alpha ,\beta )=\frac{{\beta ^{\alpha }}}{\Gamma (\alpha )}{x^{\alpha -1}}{e^{-\beta x}},\hspace{1em}x\in (0,\infty ).\]
Similarly, the probability density function of a normal distribution $X\sim N(\mu ,{\sigma ^{2}})$ is denoted by
\[ N(x|\mu ,{\sigma ^{2}})=\frac{1}{\sqrt{2\pi }\sigma }\exp \left(-\frac{{(x-\mu )^{2}}}{2{\sigma ^{2}}}\right),\hspace{1em}x\in (-\infty ,+\infty ).\]
This is extended naturally to higher dimensional cases.
We will also require two common special functions. Kummer’s confluent hypergeometric function is defined by
\[ K(\alpha ,\gamma ;z)\equiv {_{1}}{F_{1}}(\alpha ,\gamma ;z)={\sum \limits_{n=0}^{\infty }}\frac{{\alpha ^{\overline{n}}}{z^{n}}}{{\gamma ^{\overline{n}}}n!}.\]
The hypergeometric function is defined by
\[ {_{2}}{F_{1}}(a,b,c;z)={\sum \limits_{n=0}^{\infty }}\frac{{a^{\overline{n}}}{b^{\overline{n}}}}{{c^{\overline{n}}}}\cdot \frac{{z^{n}}}{n!},\]
where
\[ {a^{\overline{n}}}=\frac{\Gamma (a+n)}{\Gamma (a)}=\left\{\begin{array}{l@{\hskip10.0pt}l}1,& n=0,\\ {} a(a+1)\dots (a+n-1),& n\gt 0.\end{array}\right.\]
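As a quick numerical illustration (a minimal sketch assuming Python with SciPy; the helper names rising, kummer_series and gauss_series are ours), the truncated series above can be compared against SciPy's built-in implementations hyp1f1 and hyp2f1:

import math
from scipy.special import hyp1f1, hyp2f1

def rising(a, n):
    # rising factorial a^(n) = Gamma(a + n) / Gamma(a)
    out = 1.0
    for i in range(n):
        out *= a + i
    return out

def kummer_series(alpha, gam, z, terms=60):
    # partial sum of the 1F1 series defined above
    return sum(rising(alpha, n) * z**n / (rising(gam, n) * math.factorial(n)) for n in range(terms))

def gauss_series(a, b, c, z, terms=60):
    # partial sum of the 2F1 series defined above (converges for |z| < 1)
    return sum(rising(a, n) * rising(b, n) / rising(c, n) * z**n / math.factorial(n) for n in range(terms))

print(kummer_series(-1.5, 0.5, -0.7), hyp1f1(-1.5, 0.5, -0.7))
print(gauss_series(-1.5, 2.3, 0.5, -0.4), hyp2f1(-1.5, 2.3, 0.5, -0.4))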

2 Student’s t-distribution: one dimensional case

Recall that the probability density function (pdf) of a standard Student’s t-distribution with $\nu \in \{1,2,3,\dots \}$ degrees of freedom, denoted by $\mathit{St}(t|0,1,\nu )$, is given by
(2.1)
\[ \mathit{St}(t|0,1,\nu )=\frac{\Gamma (\frac{\nu +1}{2})}{\Gamma (\frac{\nu }{2})}\frac{1}{\sqrt{\nu \pi }}{\left(1+\frac{{t^{2}}}{\nu }\right)^{-\frac{\nu +1}{2}}},\hspace{1em}-\infty \lt t\lt \infty ,\]
where the Gamma function is defined as
\[ \Gamma (z)={\int _{0}^{\infty }}{t^{z-1}}{e^{-t}}dt.\]
More generally, the probability density function of a location-scale (or generalized) Student’s t-distribution with $\nu \gt 0$ degrees of freedom is given by
(2.2)
\[ St(t|\mu ,\sigma ,\nu )=\frac{\Gamma (\frac{\nu +1}{2})}{\Gamma (\frac{\nu }{2})}{\left(\frac{\sigma }{\nu \pi }\right)^{\frac{1}{2}}}{\left(1+\frac{\sigma }{\nu }{\left(t-\mu \right)^{2}}\right)^{-\frac{\nu +1}{2}}},\hspace{1em}-\infty \lt t\lt \infty ,\]
where $\mu \in (-\infty ,\infty )$ is the location, $\sigma \gt 0$ determines the scale, and $\nu \gt 0$ is the degrees of freedom. The thickness of its tails is determined by the degrees of freedom. When $\nu =1$, the pdf in (2.2) reduces to the pdf of $\text{Cauchy}(\mu ,\sigma )$, while the pdf in (2.2) converges to the pdf of the normal $N(t|\mu ,{(1/\sqrt{\sigma })^{2}})$ as $\nu \to \infty $.
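These two limiting cases can be checked numerically. The following sketch (assuming Python with NumPy/SciPy; note that in SciPy's parameterization both limits have scale $1/\sqrt{\sigma }$) compares the density (2.2) with the Cauchy and normal densities:

import numpy as np
from scipy.special import gammaln
from scipy.stats import cauchy, norm

def st_pdf(t, mu, sigma, nu):
    # generalized Student's t density (2.2), normalizing constant on the log scale
    logc = gammaln((nu + 1) / 2) - gammaln(nu / 2) + 0.5 * np.log(sigma / (nu * np.pi))
    return np.exp(logc) * (1 + sigma * (t - mu) ** 2 / nu) ** (-(nu + 1) / 2)

t = np.linspace(-4.0, 6.0, 11)
mu, sigma = 1.0, 2.0
# nu = 1: Cauchy with location mu and scale 1/sqrt(sigma)
print(np.max(np.abs(st_pdf(t, mu, sigma, 1) - cauchy.pdf(t, loc=mu, scale=1 / np.sqrt(sigma)))))
# nu -> infinity: N(mu, 1/sigma), i.e. standard deviation 1/sqrt(sigma)
print(np.max(np.abs(st_pdf(t, mu, sigma, 1e6) - norm.pdf(t, loc=mu, scale=1 / np.sqrt(sigma)))))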
While the tails of the normal distribution decay at an exponential rate, the Student’s t-distribution is heavy-tailed, with a polynomial decay rate. Because of this, the Student’s t-distribution has been widely adopted in robust data analysis, including (non)linear regression [9], sample selection models [10], and linear mixed effect models [14]. It is also among the most widely applied distributions for financial risk modeling; see [11, 16, 8]. The reader is invited to refer to [7] for more.
The mean and variance of a Student’s t-distribution T are well known and can be found in closed form by using the properties of the Gamma function. Specifically for $\nu \gt 2$, we have (see, for example, [5]):
\[ \mathbb{E}(T)=\mu ,\hspace{2em}\text{Var}(T)=\frac{1}{\sigma }\frac{\nu }{\nu -2}.\]
However, for higher order raw or central moments, the calculation quickly becomes tedious.
We note that one can use the fact that the standard Student’s t-distribution can be written as $T=X/\sqrt{Z/\nu }$, where $X\sim N(0,1)$ and $Z\sim {\chi _{\nu }^{2}}$ are independent, and from there derive the probability density function of T. We instead adopt the mixture approach, which is surprisingly simple and will be very useful in later derivations. It represents the Student’s t-distribution as a Gamma scale mixture of normal distributions; see, for example, [3, page 103]. More specifically, we have the following lemma.
Lemma 2.1.
Assume that for $\nu \gt 0$, $\Lambda \sim \textit{Gamma}(\lambda |\nu /2,\nu /2)$. Additionally, given $\Lambda =\lambda $, assume further that $T|\lambda $ is a normal distribution with mean μ and variance $1/(\sigma \lambda )$. Then T is a $\mathit{St}(t|\mu ,\sigma ,\nu )$ Student’s t-distribution.
Proof.
As the proof is very concise, we reproduce it here for the reader’s convenience. Let ${f_{T}}(t)$ be the probability density function of T. We have
\[\begin{aligned}{}{f_{T}}(t)=& {\int _{0}^{\infty }}N(t|\mu ,\frac{1}{\sigma \lambda })\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} =& {\int _{0}^{\infty }}\frac{\sqrt{\sigma \lambda }}{\sqrt{2\pi }}{e^{-\frac{\sigma \lambda }{2}{(t-\mu )^{2}}}}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\lambda ^{\nu /2-1}}{e^{-\frac{\nu }{2}\lambda }}d\lambda \\ {} =& \frac{\sqrt{\sigma }}{\sqrt{2\pi }}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\frac{\Gamma (\frac{\nu +1}{2})}{{(\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})^{\frac{\nu +1}{2}}}}\\ {} \hspace{1em}& \times {\int _{0}^{\infty }}\text{Gamma}(\lambda |\frac{\nu +1}{2},\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})d\lambda \\ {} =& \frac{\sqrt{\sigma }}{\sqrt{2\pi }}\frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\frac{\Gamma (\frac{\nu +1}{2})}{{(\frac{\nu }{2}+\frac{\sigma }{2}{(t-\mu )^{2}})^{\frac{\nu +1}{2}}}}\\ {} =& \mathit{St}(t|\mu ,\sigma ,\nu ).\end{aligned}\]
This completes the proof of the lemma.  □
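Lemma 2.1 also suggests a simple sampling scheme. A minimal Monte Carlo sketch (assuming Python with NumPy/SciPy; the parameter values are arbitrary) draws Λ from the Gamma mixing distribution, then T given Λ from the conditional normal, and compares the sample with SciPy's t distribution (whose scale is $1/\sqrt{\sigma }$ in this parameterization) via a Kolmogorov-Smirnov test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, nu, n = 0.5, 2.0, 5.0, 200_000

# Lambda ~ Gamma(nu/2, nu/2): shape nu/2 and rate nu/2, i.e. scale 2/nu
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
# T | Lambda = lam  ~  N(mu, 1/(sigma*lam))
t_samples = rng.normal(loc=mu, scale=1 / np.sqrt(sigma * lam))

# compare with SciPy's Student's t, which has scale 1/sqrt(sigma) here
ks = stats.kstest(t_samples, stats.t(df=nu, loc=mu, scale=1 / np.sqrt(sigma)).cdf)
print(ks.statistic, ks.pvalue)   # a small statistic and non-tiny p-value support the lemma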
The equalities in Theorem 2.1 below are well known.
Theorem 2.1.
We have:
  • 1. If $X\sim N(0,{\sigma ^{2}})$, then
    \[ \mathbb{E}({X^{m}})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \textit{if}\hspace{2.5pt}m=2k+1,k\in \mathbb{N}\\ {} \displaystyle \frac{{\sigma ^{m}}m!}{{2^{m/2}}(m/2)!},& \textit{if}\hspace{2.5pt}m=2k,k\in \mathbb{N}.\end{array}\right.\]
  • 2. If $X\sim \textit{Gamma}(\alpha ,\beta )$, then $\mathbb{E}({X^{\nu }})=\frac{{\beta ^{-\nu }}\Gamma (\nu +\alpha )}{\Gamma (\alpha )}$ for $-\alpha \lt \nu \in \mathbb{R}$.
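Both equalities are easy to confirm numerically; a short sketch (assuming Python with SciPy; the parameter choices are arbitrary) compares them with direct quadrature:

import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import gamma as G
from scipy.stats import norm, gamma as gamma_dist

sigma, m = 1.7, 6   # an even-order normal moment
numeric = quad(lambda x: x**m * norm.pdf(x, scale=sigma), -np.inf, np.inf)[0]
print(numeric, sigma**m * factorial(m) / (2**(m // 2) * factorial(m // 2)))

alpha, beta, p = 3.2, 1.5, 0.7   # a fractional Gamma moment, p > -alpha
numeric = quad(lambda x: x**p * gamma_dist.pdf(x, a=alpha, scale=1 / beta), 0, np.inf)[0]
print(numeric, beta**(-p) * G(p + alpha) / G(alpha))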
With this and Lemma 2.1 above, we are able to find moments of the Student’s t-distribution. More specifically, we have the following comprehensive theorem in one dimension.
Theorem 2.2.
For $k\in {\mathbb{N}_{+}}$, $0\lt k\lt \nu $, the following results hold:
  • 1. For $T\sim \mathit{St}(t|0,1,\nu )$, the raw and absolute moments satisfy
    (2.3)
    \[ \mathbb{E}({T^{k}})=\left\{\begin{array}{l@{\hskip10.0pt}l}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\cdot \frac{{\nu ^{k/2}}}{{\textstyle\textstyle\prod _{i=1}^{k/2}}(\frac{\nu }{2}-i)},& k\hspace{2.5pt}\textit{even},\\ {} 0,& k\hspace{2.5pt}\textit{odd};\end{array}\right.\]
    (2.4)
    \[ \mathbb{E}(|T{|^{k}})=\frac{{\nu ^{k/2}}\Gamma ((k+1)/2)\Gamma ((\nu -k)/2)}{\sqrt{\pi }\Gamma (\nu /2)}.\]
  • 2. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, the raw moments satisfy
    (2.5)
    \[ \mathbb{E}({T^{k}})=\left\{\begin{array}{l}{(\nu /\sigma )^{k/2}}\displaystyle \frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\displaystyle \frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\displaystyle \frac{k}{2},\displaystyle \frac{\nu }{2}-\displaystyle \frac{k}{2},\displaystyle \frac{1}{2};-\displaystyle \frac{{\mu ^{2}}\sigma }{\nu }),\\ {} k\hspace{2.5pt}\textit{even},\\ {} 2\mu {(\nu /\sigma )^{(k-1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k-1}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(\frac{1-k}{2},\frac{\nu }{2}-\frac{k-1}{2},\frac{3}{2};-\frac{{\mu ^{2}}\sigma }{\nu }),\\ {} k\hspace{2.5pt}\textit{odd};\end{array}\right.\]
    (2.6)
    \[ \mathbb{E}({(T-\mu )^{k}})=\frac{(1+{(-1)^{k}})}{2}{(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu -k}{2})}{\Gamma (\frac{\nu }{2})}.\]
  • 3. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, the absolute moments satisfy
    (2.7)
    \[ \mathbb{E}(|T{|^{k}})={(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\frac{k}{2},\frac{\nu }{2}-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}\sigma }{\nu }),\]
    (2.8)
    \[ \mathbb{E}(|T-\mu {|^{k}})={(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu -k}{2})}{\Gamma (\frac{\nu }{2})}.\]
In general, the moments are undefined when $k\ge \nu $.
Proof.
First assume that $T\sim \mathit{St}(t|0,1,\nu )$; we will find $\mathbb{E}(|T{|^{k}})$. The proof for $\mathbb{E}({T^{k}})$ follows from similar ideas in combination with Theorem 2.1. Assume $\Lambda \sim \text{Gamma}(\lambda |\nu /2,\nu /2)$. Additionally, given $\Lambda =\lambda $, assume further that $T|\lambda $ is normally distributed with mean 0 and variance $1/\lambda $. From equation (17) in [18], we have
\[\begin{aligned}{}\mathbb{E}(|T{|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}|t{|^{k}}\hspace{2.5pt}\text{N}(t|0,\frac{1}{\lambda })\hspace{2.5pt}dt=\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right).\end{aligned}\]
Hence we have
\[\begin{aligned}{}& \mathbb{E}(|T{|^{k}})=\mathbb{E}(\mathbb{E}(|T{|^{k}}|\lambda ))\\ {} & ={\underset{0}{\overset{\infty }{\int }}}\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\lambda ^{\nu /2-1}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}{\underset{0}{\overset{\infty }{\int }}}{\lambda ^{\nu /2-1-k/2}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\\ {} & \hspace{1em}\cdot \frac{\Gamma (\frac{\nu -k}{2})}{{(\frac{\nu }{2})^{\frac{\nu -k}{2}}}}{\underset{0}{\overset{\infty }{\int }}}\frac{{(\frac{\nu }{2})^{\frac{\nu -k}{2}}}}{\Gamma (\frac{\nu -k}{2})}{\lambda ^{(\nu -k)/2-1}}\exp \Big(-\frac{\nu }{2}\lambda \Big)\hspace{2.5pt}d\lambda \\ {} & ={2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};0\right)\cdot \frac{{\nu ^{\nu /2}}}{{2^{\nu /2}}\Gamma (\nu /2)}\cdot \frac{\Gamma (\frac{\nu -k}{2})}{{(\nu /2)^{\frac{\nu -k}{2}}}}\\ {} & =\frac{{\nu ^{k/2}}\Gamma ((k+1)/2)\Gamma ((\nu -k)/2)}{\sqrt{\pi }\Gamma (\nu /2)},\end{aligned}\]
where we have used the fact that $K\left(-\frac{k}{2},\frac{1}{2};0\right)=1$.
Next, assume that $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$ and $\Lambda \sim \text{Gamma}(\lambda |\nu /2,\nu /2)$. Additionally, given $\Lambda =\lambda $, assume further that $T|\lambda $ is a normal distribution with mean μ and variance $1/(\sigma \lambda )$. Using the following facts (obtained in [18])
\[\begin{aligned}{}\mathbb{E}({(T-\mu )^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}{(t-\mu )^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt\\ {} & =(1+{(-1)^{k}})\frac{1}{{\lambda ^{k/2}}}{2^{k/2-1}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\end{aligned}\]
and
\[\begin{aligned}{}\mathbb{E}(|T-\mu {|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}|t-\mu {|^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt=\frac{{\sigma ^{-k/2}}}{{\lambda ^{k/2}}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }},\end{aligned}\]
the derivations for $\mathbb{E}({(T-\mu )^{k}})$ and $\mathbb{E}(|T-\mu {|^{k}})$ follow similarly.
Next, assume that $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$; we would like to compute the absolute raw moment $\mathbb{E}(|T{|^{k}})$ of T. From equation (17) in [18], we have
\[\begin{aligned}{}\mathbb{E}(|T{|^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}\hspace{-0.1667em}|t{|^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt=\frac{1}{{\lambda ^{k/2}}}{2^{k/2}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\hspace{-0.1667em}\left(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda \right).\end{aligned}\]
Hence, using Part 2) of Theorem 2.1, we have for $k\lt \nu $
\[\begin{aligned}{}& \mathbb{E}(|T{|^{k}})=\mathbb{E}(\mathbb{E}(|T{|^{k}}|\lambda ))\\ {} & =\int \frac{1}{{\lambda ^{k/2}}}{2^{k/2}}{\sigma ^{-k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}K\left(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda \right)\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}\int {\lambda ^{n-k/2}}\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda \\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n+k/2}}\frac{\Gamma (n-k/2+\nu /2)}{\Gamma (\nu /2)}\\ {} & ={\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n+k/2}}{(\frac{\nu }{2}-\frac{k}{2})^{\overline{n}}}\\ {} & ={\sigma ^{-k/2}}{2^{k/2}}{(\nu /2)^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}\\ {} & \hspace{1em}\times {\sum \limits_{n=0}^{\infty }}\frac{{(-k/2)^{\overline{n}}}}{{(1/2)^{\overline{n}}}}\frac{{(-{\mu ^{2}}/2)^{n}}{\sigma ^{n}}}{n!}{(\nu /2)^{-n}}{(\frac{\nu }{2}-\frac{k}{2})^{\overline{n}}}\\ {} & ={(\nu /\sigma )^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\frac{k}{2},\frac{\nu }{2}-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}\sigma }{\nu }).\end{aligned}\]
Lastly, from equation (12) in [18], we have
\[\begin{aligned}{}\mathbb{E}({T^{k}}|\lambda )& ={\underset{-\infty }{\overset{\infty }{\int }}}{t^{k}}\hspace{2.5pt}\text{N}(t|\mu ,\frac{1}{\lambda \sigma })\hspace{2.5pt}dt\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}{\sigma ^{-k/2}}{2^{k/2}}\frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\frac{1}{{\lambda ^{k/2}}}K(-\frac{k}{2},\frac{1}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda ),& k\hspace{2.5pt}\text{even},\\ {} \mu {\sigma ^{-(k-1)/2}}{2^{(k+1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{1}{{\lambda ^{(k-1)/2}}}K(\frac{1-k}{2},\frac{3}{2};-\frac{{\mu ^{2}}}{2}\sigma \lambda ),& k\hspace{2.5pt}\text{odd}.\end{array}\right.\end{aligned}\]
Similar to the calculations done for $\mathbb{E}(|T{|^{k}})$, we have
\[ \mathbb{E}({T^{k}})=\left\{\hspace{-0.1667em}\begin{array}{l@{\hskip10.0pt}l}{(\nu /\sigma )^{k/2}}\displaystyle \frac{\Gamma (\frac{k+1}{2})}{\sqrt{\pi }}\displaystyle \frac{\Gamma (\frac{\nu }{2}-\frac{k}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(-\displaystyle \frac{k}{2},\displaystyle \frac{\nu }{2}-\displaystyle \frac{k}{2},\displaystyle \frac{1}{2};-\displaystyle \frac{{\mu ^{2}}\sigma }{\nu }),& k\hspace{2.5pt}\text{even},\\ {} 2\mu {(\nu /\sigma )^{(k-1)/2}}\frac{\Gamma (\frac{k}{2}+1)}{\sqrt{\pi }}\frac{\Gamma (\frac{\nu }{2}-\frac{k-1}{2})}{\Gamma (\frac{\nu }{2})}{_{2}}{F_{1}}(\frac{1-k}{2},\frac{\nu }{2}-\frac{k-1}{2},\frac{3}{2};-\frac{{\mu ^{2}}\sigma }{\nu }),& k\hspace{2.5pt}\text{odd}.\end{array}\right.\]
This completes the proof of the theorem.  □
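For completeness, the following sketch (assuming Python with NumPy/SciPy; the helper names are ours and the parameter choices arbitrary) implements the closed-form formulae (2.5)–(2.8) and checks them against direct quadrature of the density (2.2):

import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln, hyp2f1

def st_pdf(t, mu, sigma, nu):
    # density (2.2)
    logc = gammaln((nu + 1) / 2) - gammaln(nu / 2) + 0.5 * np.log(sigma / (nu * np.pi))
    return np.exp(logc) * (1 + sigma * (t - mu) ** 2 / nu) ** (-(nu + 1) / 2)

def lgr(a, b, nu):
    # log of Gamma(a) * Gamma(b) / (sqrt(pi) * Gamma(nu/2))
    return gammaln(a) + gammaln(b) - 0.5 * np.log(np.pi) - gammaln(nu / 2)

def raw_moment(k, mu, sigma, nu):        # formula (2.5)
    if k % 2 == 0:
        return (nu / sigma) ** (k / 2) * np.exp(lgr((k + 1) / 2, (nu - k) / 2, nu)) \
               * hyp2f1(-k / 2, (nu - k) / 2, 0.5, -mu**2 * sigma / nu)
    return 2 * mu * (nu / sigma) ** ((k - 1) / 2) * np.exp(lgr(k / 2 + 1, (nu - k + 1) / 2, nu)) \
           * hyp2f1((1 - k) / 2, (nu - k + 1) / 2, 1.5, -mu**2 * sigma / nu)

def central_moment(k, sigma, nu):        # formula (2.6)
    return (1 + (-1) ** k) / 2 * (nu / sigma) ** (k / 2) * np.exp(lgr((k + 1) / 2, (nu - k) / 2, nu))

def abs_moment(k, mu, sigma, nu):        # formula (2.7)
    return (nu / sigma) ** (k / 2) * np.exp(lgr((k + 1) / 2, (nu - k) / 2, nu)) \
           * hyp2f1(-k / 2, (nu - k) / 2, 0.5, -mu**2 * sigma / nu)

def central_abs_moment(k, sigma, nu):    # formula (2.8)
    return (nu / sigma) ** (k / 2) * np.exp(lgr((k + 1) / 2, (nu - k) / 2, nu))

mu, sigma, nu = 0.8, 2.0, 9.0
for k in range(1, 5):
    q = lambda f: quad(lambda t: f(t) * st_pdf(t, mu, sigma, nu), -np.inf, np.inf)[0]
    print(k,
          raw_moment(k, mu, sigma, nu), q(lambda t: t**k),
          central_moment(k, sigma, nu), q(lambda t: (t - mu) ** k),
          abs_moment(k, mu, sigma, nu), q(lambda t: abs(t) ** k),
          central_abs_moment(k, sigma, nu), q(lambda t: abs(t - mu) ** k))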
Remark 2.1.
  • 1. The formulae given in (2.5)–(2.8) are new in the literature. When $T\sim \mathit{St}(t|0,1,\nu )$, $\mathbb{E}({T^{k}})$ is well known. Moreover, one can directly use the definition to find $\mathbb{E}(|T{|^{k}})$ through the Beta function defined in Section 6.2 of [1] and arrive at the same formula. However, this direct approach no longer works for expectations of the forms $\mathbb{E}(|T{|^{k}})$ and $\mathbb{E}({T^{k}})$ when $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, or for the higher dimensional moments considered in Section 3. Also, (2.5) clearly reduces to (2.3), and (2.7) reduces to (2.4), when $\mu =0$ and $\sigma =1$.
  • 2. If $T\sim \mathit{St}(t|\mu ,\sigma ,\nu )$, once the central moments $\mathbb{E}({(T-\mu )^{i}}),0\le i\le k$, have been computed, we can use them to compute $\mathbb{E}({T^{k}})$ for $k\lt \nu $ using the expansion (see the numerical sketch after this remark)
    \[ \mathbb{E}({T^{k}})=\mathbb{E}({(T-\mu +\mu )^{k}})={\sum \limits_{i=0}^{k}}{\mu ^{k-i}}\left(\genfrac{}{}{0pt}{}{k}{i}\right)\mathbb{E}({(T-\mu )^{i}}).\]
  • 3. We also note that an alternative proof of the central moment formula (2.6) was later provided in [2] based on a recursive formula.
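The expansion in item 2 above can be checked numerically. A minimal sketch (assuming Python with SciPy; the helper names and parameter values are ours) recovers an even raw moment in (2.5) from the central moments (2.6):

import numpy as np
from math import comb
from scipy.special import gammaln, hyp2f1

def central_moment(i, sigma, nu):        # formula (2.6)
    if i % 2:
        return 0.0
    return (nu / sigma) ** (i / 2) * np.exp(gammaln((i + 1) / 2) + gammaln((nu - i) / 2) - gammaln(nu / 2)) / np.sqrt(np.pi)

def raw_moment_even(k, mu, sigma, nu):   # formula (2.5) for even k
    return (nu / sigma) ** (k / 2) * np.exp(gammaln((k + 1) / 2) + gammaln((nu - k) / 2) - gammaln(nu / 2)) / np.sqrt(np.pi) \
           * hyp2f1(-k / 2, (nu - k) / 2, 0.5, -mu**2 * sigma / nu)

mu, sigma, nu, k = 1.3, 0.5, 11.0, 4
via_expansion = sum(mu ** (k - i) * comb(k, i) * central_moment(i, sigma, nu) for i in range(k + 1))
print(via_expansion, raw_moment_even(k, mu, sigma, nu))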

3 Higher-dimensional case

Now we consider the case when $n\ge 2$. Denote $\boldsymbol{t}=({t_{1}},{t_{2}},\dots ,{t_{n}})\in {\mathbb{R}^{n}}$, and denote the pdf of an n-dimensional normal random variable by
\[ N(\boldsymbol{x}|\boldsymbol{\mu },\boldsymbol{\Sigma })=\frac{1}{{(2\pi )^{n/2}}|\boldsymbol{\Sigma }{|^{\frac{1}{2}}}}{e^{-\frac{1}{2}{(\boldsymbol{x}-\boldsymbol{\mu })^{T}}{\boldsymbol{\Sigma }^{-1}}(\boldsymbol{x}-\boldsymbol{\mu })}},\hspace{1em}\boldsymbol{x}\in {\mathbb{R}^{n}},\]
where $\boldsymbol{\mu }\in {\mathbb{R}^{n}}$ and $|\boldsymbol{\Sigma }|$ is the determinant of the $n\times n$ symmetric positive definite matrix Σ. Similarly to the one-dimensional case, the probability density of the n-dimensional Student’s t-distribution is defined as
(3.1)
\[ St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}|\boldsymbol{\mu },{(\lambda \boldsymbol{\Sigma })^{-1}})\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda ,\]
where $\boldsymbol{\mu }$ is the location, Σ is the scale matrix, and ν is the degrees-of-freedom parameter. Similarly to Lemma 2.1, we have
\[\begin{aligned}{}& St(\boldsymbol{t}\mid \boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}\mid \boldsymbol{\mu },{(\lambda \boldsymbol{\Sigma })^{-1}})\text{Gamma}\left(\lambda \mid \frac{\nu }{2},\frac{\nu }{2}\right)d\lambda \\ {} & ={\int _{0}^{\infty }}\frac{|\lambda \boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}}\exp \left\{-\frac{1}{2}{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}(\lambda \boldsymbol{\Sigma })(\boldsymbol{t}-\boldsymbol{\mu })-\frac{\nu \lambda }{2}\right\}\frac{1}{\Gamma (\nu /2)}{\left(\frac{\nu }{2}\right)^{\nu /2}}{\lambda ^{\nu /2-1}}d\lambda \\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\int _{0}^{\infty }}\exp \left\{-\frac{1}{2}{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}(\lambda \boldsymbol{\Sigma })(\boldsymbol{t}-\boldsymbol{\mu })-\frac{\nu \lambda }{2}\right\}{\lambda ^{n/2+\nu /2-1}}d\lambda .\end{aligned}\]
Let us define
\[\begin{aligned}{}{\Delta ^{2}}& ={(\boldsymbol{t}-\boldsymbol{\mu })^{T}}\boldsymbol{\Sigma }(\boldsymbol{t}-\boldsymbol{\mu }),\\ {} z& =\frac{\lambda }{2}({\Delta ^{2}}+\nu ),\end{aligned}\]
then we have
(3.2)
\[\begin{aligned}{}St(\boldsymbol{t}\mid \boldsymbol{\mu },\boldsymbol{\Sigma },\nu )& =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\int _{0}^{\infty }}\exp (-z){\left(\frac{2z}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2-1}}\cdot \frac{2}{{\Delta ^{2}}+\nu }dz\\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\left(\frac{2}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2}}{\int _{0}^{\infty }}\exp (-z){z^{n/2+\nu /2-1}}dz\\ {} & =\frac{{(\nu /2)^{\nu /2}}|\boldsymbol{\Sigma }{|^{1/2}}}{{(2\pi )^{n/2}}\Gamma (\nu /2)}{\left(\frac{2}{{\Delta ^{2}}+\nu }\right)^{n/2+\nu /2}}\Gamma (\frac{\nu +n}{2})\\ {} & =\frac{\Gamma (\frac{\nu +n}{2})}{\Gamma (\frac{\nu }{2})}\frac{|\boldsymbol{\Sigma }{|^{\frac{1}{2}}}}{{(\nu \pi )^{\frac{n}{2}}}}{\left(1+\frac{1}{\nu }{(\boldsymbol{t}-\boldsymbol{\mu })^{T}}\boldsymbol{\Sigma }(\boldsymbol{t}-\boldsymbol{\mu })\right)^{-\frac{\nu +n}{2}}}.\end{aligned}\]
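The density (3.2) can be checked against SciPy's multivariate_t (available in SciPy 1.6 and later); note that SciPy's shape matrix enters through its inverse, so it corresponds to ${\boldsymbol{\Sigma }^{-1}}$ in the present parameterization. A minimal sketch with arbitrary parameter values:

import numpy as np
from scipy.special import gammaln
from scipy.stats import multivariate_t

def st_pdf(t, mu, Sigma, nu):
    # multivariate density (3.2), with Sigma the scale matrix of this paper
    n = len(mu)
    d2 = (t - mu) @ Sigma @ (t - mu)
    logc = (gammaln((nu + n) / 2) - gammaln(nu / 2)
            + 0.5 * np.linalg.slogdet(Sigma)[1] - (n / 2) * np.log(nu * np.pi))
    return np.exp(logc) * (1 + d2 / nu) ** (-(nu + n) / 2)

mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])   # scale matrix in the sense of (3.1)
nu, t = 6.0, np.array([1.2, 0.4])
print(st_pdf(t, mu, Sigma, nu),
      multivariate_t.pdf(t, loc=mu, shape=np.linalg.inv(Sigma), df=nu))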
Note that in the standardized case of $\boldsymbol{\mu }=\mathbf{0}$ and $\boldsymbol{\Sigma }=\boldsymbol{I}$, the representation in (3.1) reduces to
(3.3)
\[ \mathit{St}(\boldsymbol{t}|\mathbf{0},\boldsymbol{I},\nu )={\int _{0}^{\infty }}N(\boldsymbol{t}|\mathbf{0},\frac{1}{\lambda }\boldsymbol{I})\text{Gamma}(\lambda |\frac{\nu }{2},\frac{\nu }{2})d\lambda .\]
Let us write $\boldsymbol{T}=({T_{1}},{T_{2}},\dots ,{T_{n}})$, and $\boldsymbol{k}=({k_{1}},{k_{2}},\dots ,{k_{n}})$ with $0\le {k_{i}}\in \mathbb{N}$. For $\nu \gt 2$, it is known that (see, for example, [3, page 105]),
\[ \mathbb{E}[\boldsymbol{T}]=\boldsymbol{\mu },\text{Cov}(\boldsymbol{T})=\frac{\nu }{\nu -2}{\boldsymbol{\Sigma }^{-1}}.\]
For the rest of this section, we are interested in higher moments of $\boldsymbol{T}$. The $\boldsymbol{k}$ moment of $\boldsymbol{T}$ is defined as
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})=\int {t_{1}^{{k_{1}}}}{t_{2}^{{k_{2}}}}\dots {t_{n}^{{k_{n}}}}\cdot \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\]
Similarly,
\[ \mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})=\int |{t_{1}}{|^{{k_{1}}}}|{t_{2}}{|^{{k_{2}}}}\dots |{t_{n}}{|^{{k_{n}}}}\cdot \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\]
To simplify the notation, in the following we use $\textstyle\sum {k_{i}}$, $\textstyle\prod {k_{i}}$ to denote ${\textstyle\sum _{i=1}^{n}}{k_{i}}$, ${\textstyle\prod _{i=1}^{n}}{k_{i}}$, respectively. To the best of the authors’ knowledge, the following results are new.
Theorem 3.1.
For $\textstyle\sum {k_{i}}\lt \nu $, we have:
  • 1. If $\boldsymbol{T}\sim St(\boldsymbol{t}|\mathbf{0},\boldsymbol{I},\nu )$, then
    • • the raw moments satisfy
      \[ \hspace{-12.0pt}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \textit{if}\hspace{2.5pt}\textit{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\textit{is odd},\\ {} {\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\displaystyle \frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})}}\textstyle\prod ({k_{i}}/2)!},& \textit{if}\hspace{2.5pt}\textit{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\textit{are even};\end{array}\right.\]
    • • the absolute moments satisfy
      \[ \mathbb{E}(|\boldsymbol{T}{|^{\boldsymbol{k}}})={\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}.\]
  • 2. If $\boldsymbol{T}\sim \mathit{St}(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )$, denote ${\boldsymbol{\Sigma }^{-1}}=({\overline{\sigma }_{ij}})$ and ${\boldsymbol{e}_{i}}=(0,\dots ,1,\dots ,0)$, the ith unit vector of ${\mathbb{R}^{n}}$. Then we have the following recursive formula to compute the moments of $\boldsymbol{T}$:
    \[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{\nu -2}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\]
Proof.
For 1), first from (3.3), we have
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})={\int _{0}^{\infty }}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})$ is the $\boldsymbol{k}$ moment of a $N(\mathbf{0},\frac{1}{t}\boldsymbol{I})$. Using Theorem 2.1, we have
\[ \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})={\prod \limits_{i=1}^{n}}\mathbb{E}({X_{i}^{{k_{i}}}}|0,\frac{1}{t})=\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} \displaystyle \frac{{t^{-\textstyle\sum {k_{i}}/2}}\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})/2}}\textstyle\prod ({k_{i}}/2)!},& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even}.\end{array}\right.\]
As a result,
\[\begin{aligned}{}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})& =\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} \displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})/2}}\textstyle\prod ({k_{i}}/2)!}\\ {} \hspace{1em}\times {\displaystyle \int _{0}^{\infty }}{t^{-\textstyle\sum {k_{i}}/2}}\text{Gamma}(t|\displaystyle \frac{\nu }{2},\displaystyle \frac{\nu }{2})dt,& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even},\end{array}\right.\\ {} & =\left\{\begin{array}{l@{\hskip10.0pt}l}0,& \text{if}\hspace{2.5pt}\text{at least one}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{is odd},\\ {} {\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\displaystyle \frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\displaystyle \frac{\textstyle\prod ({k_{i}})!}{{2^{(\textstyle\sum {k_{i}})}}\textstyle\prod ({k_{i}}/2)!},& \text{if}\hspace{2.5pt}\text{all}\hspace{2.5pt}{k_{i}}\hspace{2.5pt}\text{are even}.\end{array}\right.\end{aligned}\]
Similarly, we have
\[ \mathbb{E}(|{\boldsymbol{T}^{\boldsymbol{k}}}|)={\int _{0}^{\infty }}\mathbb{E}(|{X_{1}}{|^{{k_{1}}}}|{X_{2}}{|^{{k_{2}}}}\dots |{X_{n}}{|^{{k_{n}}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt\]
where
\[ \mathbb{E}(|{X_{1}}{|^{{k_{1}}}}|{X_{2}}{|^{{k_{2}}}}\dots |{X_{n}}{|^{{k_{n}}}}|\mathbf{0},\frac{1}{t}\boldsymbol{I})={\prod \limits_{i=1}^{n}}\mathbb{E}(|{X_{i}}{|^{{k_{i}}}}|0,\frac{1}{t})=\prod \frac{1}{{t^{{k_{i}}/2}}}{2^{{k_{i}}/2}}\frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}.\]
Therefore,
\[\begin{aligned}{}\mathbb{E}(|{\boldsymbol{T}^{\boldsymbol{k}}}|)& ={2^{\textstyle\sum {k_{i}}/2}}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}{\int _{0}^{\infty }}{t^{-\textstyle\sum {k_{i}}/2}}\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt\\ {} & ={\nu ^{\frac{\textstyle\sum {k_{i}}}{2}}}\frac{\Gamma (\frac{\nu -\textstyle\sum {k_{i}}}{2})}{\Gamma (\frac{\nu }{2})}\prod \frac{\Gamma (\frac{{k_{i}}+1}{2})}{\sqrt{\pi }}\hspace{1em}\text{if}\hspace{2.5pt}\sum {k_{i}}\lt \nu .\end{aligned}\]
For 2), from (3.1),
(3.4)
\[ \mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})={\int _{0}^{\infty }}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}})\equiv \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$ is the $\boldsymbol{k}$ moment of $N(\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$. Recall that the pdf of $N(\boldsymbol{\mu },\frac{1}{t}{\Sigma ^{-1}})$ is given by
\[ N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})=\frac{1}{{(2\pi )^{n/2}}|\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}{|^{\frac{1}{2}}}}{e^{-\frac{1}{2}{(\boldsymbol{x}-\boldsymbol{\mu })^{T}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })}}.\]
Similar to Theorem 1 in [6], we have
\[ -\frac{\partial N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})}{\partial \boldsymbol{x}}=t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}).\]
Hence
\[ -\int {\boldsymbol{x}^{\boldsymbol{k}}}\frac{\partial N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})}{\partial \boldsymbol{x}}d\boldsymbol{x}=\int {\boldsymbol{x}^{\boldsymbol{k}}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
By integration by parts, we arrive at
\[ \int {k_{j}}{\boldsymbol{x}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}=\int {\boldsymbol{x}^{\boldsymbol{k}}}t\boldsymbol{\Sigma }(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
Or equivalently,
\[ \int {\boldsymbol{x}^{\boldsymbol{k}}}(\boldsymbol{x}-\boldsymbol{\mu })N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}=\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\int {k_{j}}{\boldsymbol{x}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}N(\boldsymbol{x}|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})d\boldsymbol{x}.\]
This in turn implies that
\[ \mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})={\mu _{i}}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}}})+\frac{1}{t}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{X}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\]
Plugging this into equation (3.4), we obtain the following recursive equation
\[\begin{aligned}{}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}+{\boldsymbol{e}_{i}}}})& ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}){\int _{0}^{\infty }}\frac{1}{t}\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt\\ {} & ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{2}\frac{\Gamma (\frac{\nu }{2}-1)}{\Gamma (\frac{\nu }{2})}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}})\\ {} & ={\mu _{i}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}}})+\frac{\nu }{\nu -2}{\sum \limits_{j=1}^{n}}{\overline{\sigma }_{ij}}{k_{j}}\mathbb{E}({\boldsymbol{T}^{\boldsymbol{k}-{\boldsymbol{e}_{j}}}}).\end{aligned}\]
This completes the proof of the theorem.  □
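Part 1 of Theorem 3.1 can be illustrated by Monte Carlo, sampling through the mixture representation (3.3). The sketch below (assuming Python with NumPy/SciPy; the sample size and parameters are arbitrary, and agreement holds only up to Monte Carlo error) compares the closed-form raw moment with an empirical average:

import numpy as np
from scipy.special import gammaln

def std_mv_t_moment(k, nu):
    # closed-form raw moment from part 1 of Theorem 3.1 (0 if any k_i is odd)
    k = np.asarray(k)
    if np.any(k % 2 == 1):
        return 0.0
    s = k.sum()
    log_ratio = gammaln((nu - s) / 2) - gammaln(nu / 2)
    log_coef = sum(gammaln(ki + 1) - gammaln(ki / 2 + 1) for ki in k) - s * np.log(2)
    return nu ** (s / 2) * np.exp(log_ratio + log_coef)

rng = np.random.default_rng(1)
nu, n, k = 20.0, 1_000_000, (2, 4, 0)
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)            # Gamma(nu/2, nu/2) mixing variable
T = rng.standard_normal((n, len(k))) / np.sqrt(lam)[:, None]   # T | lam ~ N(0, (1/lam) I)
print(std_mv_t_moment(k, nu), np.mean(np.prod(T ** np.array(k), axis=1)))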
Lastly, for $\boldsymbol{a}=({a_{1}},{a_{2}},\dots ,{a_{n}})$ and $\boldsymbol{b}=({b_{1}},{b_{2}},\dots ,{b_{n}})\in {\mathbb{R}^{n}}$, let ${\boldsymbol{a}_{(j)}}$ be the vector obtained from $\boldsymbol{a}$ by deleting the jth element of $\boldsymbol{a}$. For $\boldsymbol{\Sigma }=({\sigma _{ij}})$, let ${\sigma _{i}^{2}}={\sigma _{ii}}$ and ${\boldsymbol{\Sigma }_{i,(j)}}$ stand for the ith row of Σ with its jth element removed. Analogously, let ${\boldsymbol{\Sigma }_{(i),(j)}}$ stand for the matrix Σ with ith row and jth column removed.
Consider the following truncated $\boldsymbol{k}$ moment
\[\begin{aligned}{}{F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )& ={\int _{\boldsymbol{a}}^{\boldsymbol{b}}}{\boldsymbol{t}^{\boldsymbol{k}}}St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d\boldsymbol{t}\\ {} & \equiv {\int _{{a_{1}}}^{{b_{1}}}}\dots {\int _{{a_{n}}}^{{b_{n}}}}{t_{1}^{{k_{1}}}}\dots {t_{n}^{{k_{n}}}}St(\boldsymbol{t}|\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )d{t_{1}}\dots d{t_{n}}.\end{aligned}\]
We have
(3.5)
\[ \displaystyle {F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\int _{0}^{\infty }}\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}^{\boldsymbol{k}}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]\text{Gamma}(t|\frac{\nu }{2},\frac{\nu }{2})dt,\]
where $\boldsymbol{X}\sim N(\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})$. Using Theorem 1 in [6], we have for $n\gt 1$
(3.6)
\[\begin{aligned}{}\mathbb{E}({X_{\boldsymbol{k}+{\boldsymbol{e}_{i}}}^{n}};\boldsymbol{a},\boldsymbol{b},\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})& :=\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}_{\boldsymbol{k}+{\mathbf{e}_{i}}}^{n}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]\\ {} & ={\mu _{i}}\mathbb{E}\left[{\mathbf{1}_{\{\mathbf{a}\le \mathbf{X}\le \mathbf{b}\}}}{\boldsymbol{X}_{\boldsymbol{k}}^{n}}\Big|\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}}\right]+\frac{1}{t}{\mathbf{e}_{i}^{\top }}{\boldsymbol{\Sigma }^{-1}}{\mathbf{c}_{\mathbf{k}}},\end{aligned}\]
where ${\mathbf{c}_{\mathbf{k}}}$ satisfies
\[\begin{aligned}{}{\mathbf{c}_{\mathbf{k},j}}& ={k_{j}}\mathbb{E}({X_{\boldsymbol{k}-{\boldsymbol{e}_{j}}}^{n}};\boldsymbol{a},\boldsymbol{b},\boldsymbol{\mu },\frac{1}{t}{\boldsymbol{\Sigma }^{-1}})\\ {} & \hspace{1em}+{a_{j}^{{k_{j}}}}N({a_{j}}|{\mu _{j}},\frac{1}{t}{\overline{\sigma }_{j}^{2}})\mathbb{E}({X_{{\boldsymbol{k}_{(j)}}}^{n-1}};{\boldsymbol{a}_{(j)}},{\boldsymbol{b}_{(j)}},{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{a}}},\frac{1}{t}{\widehat{\boldsymbol{\Sigma }}_{j}})\\ {} & \hspace{1em}-{b_{j}^{{k_{j}}}}N({b_{j}}|{\mu _{j}},\frac{1}{t}{\overline{\sigma }_{j}^{2}})\mathbb{E}({X_{{\boldsymbol{k}_{(j)}}}^{n-1}};{\boldsymbol{a}_{(j)}},{\boldsymbol{b}_{(j)}},{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{b}}},\frac{1}{t}{\widehat{\boldsymbol{\Sigma }}_{j}}),\hspace{1em}j=1,2,\dots ,n,\end{aligned}\]
with
(3.7)
\[ \left\{\begin{array}{l}{\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{a}}}={\boldsymbol{\mu }_{(j)}}+{\boldsymbol{\Sigma }_{(j),j}^{-1}}\frac{{a_{j}}-{\mu _{j}}}{{\overline{\sigma }_{j}^{2}}},\\ {} {\widehat{\boldsymbol{\mu }}_{j}^{\boldsymbol{b}}}={\boldsymbol{\mu }_{(j)}}+{\boldsymbol{\Sigma }_{(j),j}^{-1}}\frac{{b_{j}}-{\mu _{j}}}{{\overline{\sigma }_{j}^{2}}},\\ {} {\widehat{\boldsymbol{\Sigma }}_{j}}={\boldsymbol{\Sigma }_{(j),(j)}^{-1}}-\frac{1}{{\overline{\sigma }_{j}^{2}}}{\boldsymbol{\Sigma }_{(j),j}^{-1}}{\boldsymbol{\Sigma }_{j,(j)}^{-1}}.\end{array}\right.\]
Thus, we have the following recursive formula
\[ {F_{\boldsymbol{k}+{\mathbf{e}_{i}}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )={\mu _{i}}{F_{\boldsymbol{k}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )+\frac{\nu }{\nu -2}{\mathbf{e}_{i}^{\top }}{\boldsymbol{\Sigma }^{-1}}{\mathbf{d}_{\mathbf{k}}},\]
where
\[\begin{aligned}{}{\mathbf{d}_{\mathbf{k},j}}=& {k_{j}}{F_{\boldsymbol{k}-{\mathbf{e}_{j}}}^{n}}(\mathbf{a},\mathbf{b};\boldsymbol{\mu },\boldsymbol{\Sigma },\nu )+{a_{j}^{{k_{j}}}}St({a_{j}}|{\mu _{j}},{\overline{\sigma }_{j}^{2}},\nu ){F_{{\boldsymbol{k}_{(j)}}}^{n-1}}({\mathbf{a}_{(j)}},{\mathbf{b}_{(j)}};{\widehat{\boldsymbol{\mu }}_{j}^{\mathbf{a}}},\widehat{\boldsymbol{\Sigma }},\nu )\\ {} & -{b_{j}^{{k_{j}}}}St({b_{j}}|{\mu _{j}},{\overline{\sigma }_{j}^{2}},\nu ){F_{{\boldsymbol{k}_{(j)}}}^{n-1}}({\mathbf{a}_{(j)}},{\mathbf{b}_{(j)}};{\widehat{\boldsymbol{\mu }}_{j}^{\mathbf{b}}},\widehat{\boldsymbol{\Sigma }},\nu ),\hspace{1em}j=1,2,\dots ,n.\end{aligned}\]
Note that by convention the first, second, and third terms in the expression of ${\mathbf{d}_{\mathbf{k},j}}$ equal 0 when ${k_{j}}=0$, ${a_{j}}=-\infty $, ${b_{j}}=\infty $, respectively.
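As a sanity check of the representation (3.5) in the one-dimensional case, the following sketch (assuming Python with SciPy; the parameters and truncation limits are arbitrary) computes a truncated moment both directly from the density (2.2) and by integrating the truncated normal moment against the Gamma mixing density:

import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln
from scipy.stats import gamma as gamma_dist, norm

mu, sigma, nu, k, a, b = 0.3, 1.5, 7.0, 2, -1.0, 2.0

def st_pdf(t):
    # one-dimensional density (2.2)
    logc = gammaln((nu + 1) / 2) - gammaln(nu / 2) + 0.5 * np.log(sigma / (nu * np.pi))
    return np.exp(logc) * (1 + sigma * (t - mu) ** 2 / nu) ** (-(nu + 1) / 2)

direct = quad(lambda t: t**k * st_pdf(t), a, b)[0]

def trunc_normal_moment(lam):
    # E[1_{a<=X<=b} X^k] for X ~ N(mu, 1/(sigma*lam)), by quadrature
    s = 1 / np.sqrt(sigma * lam)
    return quad(lambda x: x**k * norm.pdf(x, loc=mu, scale=s), a, b)[0]

mixture = quad(lambda lam: trunc_normal_moment(lam) * gamma_dist.pdf(lam, a=nu / 2, scale=2 / nu), 0, np.inf)[0]
print(direct, mixture)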

4 Conclusion

We have derived closed-form formulae for the raw moments, absolute moments, and central moments of the Student’s t-distribution with arbitrary degrees of freedom. We provide results in one and n dimensions, which unify and extend the existing literature on the Student’s t-distribution. It would be interesting to investigate tail quantile approximations or asymptotic tail properties of the higher-dimensional (generalized) Student’s t-distribution, as done in [15] and [4]. We leave this as an interesting project for future studies.

References

[1] 
Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables vol. 55. US Government printing office (1948) MR0415956
[2] 
Bignozzi, V., Merlo, L., Petrella, L.: Inter-order relations between equivalence for ${L_{p}}$-quantiles of the Student’s t distribution. Insur. Math. Econ. 116, 44–50 (2024) MR4708514. https://doi.org/10.1016/j.insmatheco.2024.02.001
[3] 
Bishop, C.M.: Pattern Recognition and Machine Learning. Springer (2006) MR2247587. https://doi.org/10.1007/978-0-387-45528-0
[4] 
Finner, H., Dickhaus, T., Roters, M.: Asymptotic tail properties of Student’s t-distribution. Commun. Stat., Theory Methods 37(2), 175–179 (2008) MR2412618. https://doi.org/10.1080/03610920701649019
[5] 
Jackman, S.: Bayesian Analysis for the Social Sciences. John Wiley and Sons (2009) MR2584520. https://doi.org/10.1002/9780470686621
[6] 
Kan, R., Robotti, C.: On moments of folded and truncated multivariate normal distributions. J. Comput. Graph. Stat. 26(4), 930–934 (2017) MR3765356. https://doi.org/10.1080/10618600.2017.1322092
[7] 
Kotz, S., Nadarajah, S.: Multivariate T-distributions and Their Applications. Cambridge University Press (2004) MR2038227. https://doi.org/10.1017/CBO9780511550683
[8] 
Kwon, O.K., Satchell, S.: The distribution of cross sectional momentum returns when underlying asset returns are Student’s t-distributed. J. Financ. Risk Manag. 13(2), 27 (2020) MR3849785. https://doi.org/10.1016/j.jedc.2018.06.002
[9] 
Lange, K.L., Little, R.J., Taylor, J.M.: Robust statistical modeling using the t distribution. J. Am. Stat. Assoc. 84(408), 881–896 (1989) MR1134486
[10] 
Marchenko, Y.V., Genton, M.G.: A heckman selection-t model. J. Am. Stat. Assoc. 107(497), 304–317 (2012) MR2949361. https://doi.org/10.1080/01621459.2012.656011
[11] 
McNeil, A.J., Frey, R., Embrechts, P.: Quantitative Risk Management: Concepts, Techniques, and Tools, 2nd edn. Princeton Series in Finance (2015) MR3445371
[12] 
Nguyen, D.: A probabilistic approach to the moments of binomial random variables and application. Am. Stat. 75(1), 101–103 (2021) MR4203486. https://doi.org/10.1080/00031305.2019.1679257
[13] 
Ogasawara, H.: Unified and non-recursive formulas for moments of the normal distribution with stripe truncation. Commun. Stat., Theory Methods 51(19), 1–38 (2020) MR4471422. https://doi.org/10.1080/03610926.2020.1867742
[14] 
Pinheiro, J.C., Liu, C., Wu, Y.N.: Efficient algorithms for robust estimation in linear mixed-effects models using the multivariate t distribution. J. Comput. Graph. Stat. 10(2), 249–276 (2001) MR1939700. https://doi.org/10.1198/10618600152628059
[15] 
Schlüter, S., Fischer, M.: A tail quantile approximation for the Student’s t distribution. Commun. Stat., Theory Methods 41(15), 2617–2625 (2012) MR2946636. https://doi.org/10.1080/03610926.2010.513784
[16] 
Shaw, W.T.: Sampling Student’s t distribution—use of the inverse cumulative distribution function. J. Comput. Finance 9, 37–73 (2006)
[17] 
Skorski, M.: Handy formulas for binomial moments. Mod. Stoch. Theory Appl., 1–15 (2024) MR4852561
[18] 
Winkelbauer, A.: Moments and absolute moments of the normal distribution. arXiv preprint arXiv:1209.4340 (2014)
Copyright
© 2025 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Normal distribution, Student’s t-distribution, moment, raw moment, absolute moment, multivariate

MSC2020
92D25, 37H15, 60H10, 60J60
