Modern Stochastics: Theory and Applications

Taylor’s power law for the N-stars network evolution model
Volume 6, Issue 3 (2019), pp. 311–331
István Fazekas   Csaba Noszály   Noémi Uzonyi  

https://doi.org/10.15559/19-VMSTA137
Pub. online: 16 September 2019      Type: Research Article      Open Access

Received
13 March 2019
Revised
9 August 2019
Accepted
9 August 2019
Published
16 September 2019

Abstract

Taylor’s power law states that the variance function is a power function of the mean. It is observed for the population densities of species in ecology. For random networks, another power law, namely the power-law degree distribution, is widely studied. In this paper the original Taylor’s power law is considered for random networks. A precise mathematical proof is presented that Taylor’s power law holds asymptotically for the N-stars network evolution model.

1 Introduction

Taylor’s power law is a well-known empirical pattern in ecology. Its general form is
\[ V(\mu )\approx a{\mu ^{b}},\]
where μ is the mean and $V(\mu )$ is the variance of a non-negative random variable, and a and b are constants. $V(\mu )$ is also called the variance function (see [15]). Taylor’s power law is named after the British ecologist L. R. Taylor (see [18]). It is observed for the population densities of hundreds of species in ecology, and it also appears in the medical sciences, demography ([6]), physics, and finance (for an overview see [11]). Most papers on the topic present empirical studies, but some of them offer models as well (e.g. [6] for mortality data, [7] for population dynamics, [11] for complex systems). We mention that in the theory of complex systems Taylor’s power law is called ‘fluctuation scaling’; there $V(\mu )$ is called the fluctuation and μ the average. There are papers studying Taylor’s power law on networks (see, e.g. [9]), but in those papers Taylor’s law concerns some random variable produced by a certain process on the network.
However, there is another power law for networks. There are large networks satisfying ${p_{k}}\sim C{k^{-\gamma }}$ as $k\to \infty $, where ${p_{k}}$ is the probability that a node has degree k. This relation is often referred to as a power-law degree distribution, and such a network is called scale-free. Here and in what follows ${a_{k}}\sim {b_{k}}$ means that ${\lim \nolimits_{k\to \infty }}{a_{k}}/{b_{k}}=1$. In their seminal paper [3], Barabási and Albert list several large scale-free networks (actor collaboration, WWW, power grid, etc.), introduce the preferential attachment model, and give an argument and numerical evidence that the preferential attachment rule leads to a scale-free network. A short description of the preferential attachment network evolution model is the following. At every time step $t=2,3,\dots \hspace{0.1667em}$ a new vertex with N edges is added to the existing graph, so that the N edges link the new vertex to N old vertices. The probability ${\pi _{i}}$ that the new vertex will be connected to the old vertex i depends on the degree ${d_{i}}$ of vertex i, namely ${\pi _{i}}={d_{i}}/{\textstyle\sum _{j}}{d_{j}}$, where ${\textstyle\sum _{j}}{d_{j}}$ is the sum of all degrees. A rigorous definition of the preferential attachment model was given in [4], where a mathematical proof of the power-law degree distribution was also presented. The idea of preferential attachment and the scale-free property incited enormous research activity. The mathematical theory is described in the monograph [19] by van der Hofstad (see also [10] and [5]). The general aspects of network theory are covered in the comprehensive book [2] by A. L. Barabási.
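To make the selection rule concrete, one evolution step can be sketched in a few lines of Python. This is only an illustration of the rule ${\pi _{i}}={d_{i}}/{\textstyle\sum _{j}}{d_{j}}$ described above, not code from [3] or [4]; the helper name is ours.

```python
import random

def attach_new_vertex(degrees, N):
    """One preferential attachment step: the new vertex links to N
    distinct old vertices, each chosen with probability ~ its degree."""
    total = sum(degrees)
    targets = set()
    while len(targets) < N:            # resample until N distinct targets
        u = random.uniform(0, total)
        acc = 0.0
        for i, d in enumerate(degrees):
            acc += d
            if u <= acc:
                targets.add(i)
                break
    for i in targets:
        degrees[i] += 1                # old endpoints gain one edge each
    degrees.append(N)                  # the new vertex is born with N edges
    return targets
```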
There are many modifications of the preferential attachment model; here we can list only a few of them. The following general graph evolution model was introduced by Cooper and Frieze in [8]. At each time step either a new vertex or an old one generates new edges. In both cases the terminal vertices can be chosen either uniformly or according to the preferential attachment rule. In [1, 13] and [12] the ideas of Cooper and Frieze [8] were applied, but instead of the original preferential attachment rule, the terminal vertices were chosen according to the weights of certain cliques.
In several cases an edge between two vertices in a network can be interpreted as co-operation (collaboration). For example, in the movie actor network two actors are connected by an edge if they have appeared in a film together, and in the collaboration graph of scientists an edge connects two people if they have been co-authors of a paper (see, e.g. [10]). In social networks, besides connections of two members, other structures are also important. In [17] and [1] cliques are considered to describe co-operation. In a clique any two vertices are connected, that is, any two members of the clique co-operate. However, in real-life co-operation the members can play different roles: in a team usually one person plays a central role and the others play peripheral roles. Trying to handle this situation with a mathematically tractable model leads to the study of star-like structures, see [14].
In [14] the concept of [13] was applied, but instead of cliques, star-like structures were considered. A team has star structure if there is a head of the team and all other members are connected to him/her. We call a graph an N-star graph if it has N vertices, one of them called the central vertex, the remaining $N-1$ vertices called peripheral vertices, and it has $N-1$ edges. The edges are directed; they start from the $N-1$ peripheral vertices and their end point is the central vertex. In [14] the following N-stars network evolution model was presented. In this model, at each step either a new N-star is constructed or an old one is selected (activated) again. When N vertices form an N-star, we say that they are in interaction (in other words, they co-operate). During the evolution, a vertex can be in interaction several times. We define for any vertex its central weight and its peripheral weight. The central weight of a vertex is ${w_{1}}$ if the vertex has been a central vertex in interactions ${w_{1}}$ times. The peripheral weight of a vertex is ${w_{2}}$ if the vertex has been a peripheral vertex in interactions ${w_{2}}$ times. In [14] an asymptotic power-law distribution was proved both for ${w_{1}}$ and ${w_{2}}$.
We are interested in the following general question: is the original Taylor’s power law true for random networks? First we considered data sets of real-life networks. Our statistical analysis showed that there are cases when Taylor’s law holds and cases when it does not (our empirical results will be published elsewhere). So we encountered the following more specific problem: find network structures for which Taylor’s power law is true. To this end we analysed the above N-stars network evolution model.
In this paper we prove an asymptotic Taylor’s power law for the N-stars network evolution model. We shall calculate the mean and the variance of ${w_{2}}$ when ${w_{1}}$ is fixed, and we shall see that the variance function is asymptotically quadratic. In Section 2, the precise mathematical description of the model and the results are given. We recall from [14] the asymptotic joint distribution of ${w_{1}}$ and ${w_{2}}$ (Proposition 2.1). Then we calculate the marginal distribution (Proposition 2.2), the expectation (Proposition 2.3), and the second moment (Proposition 2.4). The main result is Theorem 2.1. The proofs are presented in Section 3. Besides the mathematical proofs, we also give numerical evidence: in Section 4 simulation results are presented that support our theoretical results.

2 The N-stars network evolution model and the main results

First we give a short mathematical description of our random graph model from [14].
Let $N\ge 3$ be a fixed number. We start at time 0 with an N-star graph. Throughout the paper we call a graph N-star graph if it has N vertices, one of them is the central vertex, the remaining $N-1$ vertices are peripheral ones, and the graph has $N-1$ directed edges. The edges start from the $N-1$ peripheral vertices and their end point is the central vertex. So the central vertex has in-degree $N-1$, and each of the $N-1$ peripheral vertices has out-degree 1. The evolution of our graph is governed by the weights of the N-stars and the $(N-1)$-stars. In our model, the initial weight of the N-star is 1, and the initial weights of its $(N-1)$-star sub-graphs are also 1. (An $(N-1)$-star sub-graph is obtained if a peripheral vertex is deleted from the N-star graph. The number of these $(N-1)$-star sub-graphs is $N-1$.)
We first explain the model on a high level, before giving a formal definition in the next paragraphs. The general rules of the evolution of our graph are the following. At each time step, N vertices interact, that is, they form an N-star. It means that we draw all edges from the peripheral vertices to the central vertex so that the vertices form an N-star graph. During the evolution we allow parallel edges. When N vertices interact, not only new edges are drawn, but the weights of the stars are also increased. At the first interaction of N vertices the newly created N-star gets weight 1, and its new $(N-1)$-star sub-graphs also get weight 1. If an $(N-1)$-star sub-graph is not newly created, then its weight is increased by 1. When an existing N-star is selected (activated) again, then its weight and the weights of its $(N-1)$-star sub-graphs are increased by 1. So the weight of an N-star is the number of its activations. We can see that the weight of an $(N-1)$-star is equal to the sum of the weights of the N-stars containing it. The weights play a crucial role in our model: the higher the weight of a star, the higher the chance that it will be selected (activated) again.
Now we describe the details of the evolution steps of our graph. We have two options in every step of the evolution. Option I has probability p. In this case we add a new vertex, and it interacts with $N-1$ old vertices. Option II has probability $1-p$. In this case we do not add any new vertex, but N old vertices interact. Here $0<p\le 1$ is fixed.
Option I. In this case, that is, when a new vertex is born, we have again two possibilities: I/1 and I/2.
I/1. The first possibility, which has probability r, is the following. (Here $0\le r\le 1$ is fixed.) We choose one of the existing $(N-1)$-star sub-graphs according to the preferential attachment rule, and its $N-1$ vertices and the new vertex will interact. Here the preferential attachment rule means that an $(N-1)$-star of weight ${v_{t}}$ is chosen with probability ${v_{t}}/{\textstyle\sum _{h}}{v_{h}}$, where ${\textstyle\sum _{h}}{v_{h}}$ is the cumulated weight of the $(N-1)$-stars. The interaction of the new vertex and the old $(N-1)$-star means that they establish a new N-star. In this newly created N-star the center will be the vertex which was the center in the old $(N-1)$-star, the former $N-2$ peripheral vertices remain peripheral, and the newly born vertex will also be peripheral. A new edge is drawn from each peripheral vertex to the central one, and then the weights are increased by 1. More precisely, the just created N-star gets weight 1; among its $(N-1)$-star sub-graphs there are $N-2$ new ones, so each of them gets weight 1; finally, the weight of the only old $(N-1)$-star sub-graph is increased by 1.
I/2. The second possibility has probability $1-r$. In this case we choose $N-1$ old vertices uniformly at random, and they will form an N-star graph with the new vertex, so that the new vertex will be the center. The edges are drawn from the peripheral vertices to the center. As here the newly created N-star graph and all of its $(N-1)$-star sub-graphs are new, so all of them get weight 1.
Option II. In this case, that is, when we do not add any new vertex, we have two ways again: II/1 and II/2.
II/1. The first way has probability q. (Here $0\le q\le 1$ is fixed.) We choose one of the existing N-star sub-graphs by the preferential attachment rule, then draw a new edge from each of its peripheral vertices to its center vertex. Then the weight of the N-star and the weights of its $(N-1)$-star sub-graphs are increased by 1. Here the preferential attachment rule means that an N-star of weight ${v_{t}}$ is chosen with probability ${v_{t}}/{\textstyle\sum _{h}}{v_{h}}$, where ${\textstyle\sum _{h}}{v_{h}}$ is the cumulated weight of the N-stars.
II/2. The second way has probability $1-q$. In this case we choose N old vertices uniformly at random, and they establish an N-star graph. Its center is chosen again uniformly at random out of the N vertices. Then, as before, new edges are drawn from the peripheral vertices to the central one, and the weights of the N-star and its $(N-1)$-star sub-graphs are increased by 1.
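To make these rules concrete, here is a compact, unoptimized simulation sketch in Python. It is our illustration, not the authors’ code from [14]; stars are keyed by their center and the set of their peripheral vertices, the initial star counts as one interaction, and the weight-proportional choice is linear in the number of stars (sufficient for short demonstration runs, while the $n={10^{8}}$ runs of Section 4 would need a more efficient sampler).

```python
import random
from collections import defaultdict

def simulate(N, p, q, r, steps, seed=0):
    """Sketch of the N-stars evolution; returns per-vertex weights (w1, w2)."""
    rng = random.Random(seed)
    w1 = [1] + [0] * (N - 1)       # central weights; vertex 0 is the initial center
    w2 = [0] + [1] * (N - 1)       # peripheral weights
    nstar = defaultdict(int)       # (center, frozenset(peripherals)) -> weight
    sub = defaultdict(int)         # same keying for (N-1)-stars
    periph0 = frozenset(range(1, N))
    nstar[(0, periph0)] = 1
    for v in periph0:              # the N-1 initial (N-1)-star sub-graphs
        sub[(0, periph0 - {v})] = 1

    def pick(stars):               # preferential attachment: weight-proportional
        keys = list(stars)
        return rng.choices(keys, weights=[stars[k] for k in keys])[0]

    def activate(c, periph):       # register one interaction of the N-star (c, periph)
        nstar[(c, periph)] += 1                 # a brand-new star starts from 0
        for v in periph:
            sub[(c, periph - {v})] += 1         # all N-1 sub-stars gain weight 1
        w1[c] += 1
        for v in periph:
            w2[v] += 1

    for _ in range(steps):
        if rng.random() < p:                    # Option I: a new vertex is born
            new = len(w1)
            w1.append(0)
            w2.append(0)
            if rng.random() < r:                # I/1: old (N-1)-star + new vertex
                c, pv = pick(sub)
                activate(c, pv | {new})
            else:                               # I/2: the new vertex is the center
                activate(new, frozenset(rng.sample(range(new), N - 1)))
        else:                                   # Option II: no new vertex
            if rng.random() < q:                # II/1: reactivate an N-star
                c, pv = pick(nstar)
                activate(c, pv)
            else:                               # II/2: N uniform old vertices
                g = rng.sample(range(len(w1)), N)   # random order, so g[0] is a
                activate(g[0], frozenset(g[1:]))    # uniformly chosen center
    return w1, w2
```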
Remark 2.1.
For every vertex we shall use its central weight and its peripheral weight. The central weight of a vertex is ${w_{1}}$, if the vertex was a central vertex in interactions ${w_{1}}$ times. The peripheral weight of a vertex is ${w_{2}}$, if the vertex was a peripheral vertex in interactions ${w_{2}}$ times. We can see that the central weight of a vertex is equal to ${w_{1}}=\frac{{d_{1}}}{N-1}$ and the peripheral weight of a vertex is equal to ${w_{2}}={d_{2}}$, where ${d_{1}}$ denotes the in-degree of the vertex and ${d_{2}}$ denotes its out-degree. The weights ${w_{1}}$ and ${w_{2}}$ describe well the role of a vertex in the network. Moreover, we use ${w_{1}}$ and ${w_{2}}$ instead of degrees to obtain symmetric formulae that allow us to translate the result from ${w_{1}}$ to ${w_{2}}$ and vice versa without having to change the proofs.
Throughout the paper $0<p\le 1$, $0\le r\le 1$, $0\le q\le 1$ are fixed numbers. In our formulae the following parameters are used.
(2.1)
\[\begin{array}{r@{\hskip0pt}l@{\hskip0pt}r@{\hskip0pt}l}\displaystyle {\alpha _{11}}& \displaystyle =pr,\hspace{2em}& \displaystyle {\alpha _{12}}& \displaystyle =(1-p)q,\\ {} \displaystyle {\alpha _{1}}& \displaystyle ={\alpha _{11}}+{\alpha _{12}},\hspace{2em}& \displaystyle {\alpha _{2}}& \displaystyle =pr\frac{N-2}{N-1}+(1-p)q,\\ {} \displaystyle {\beta _{1}}& \displaystyle =\frac{(1-p)(1-q)}{p},\hspace{2em}& \displaystyle {\beta _{2}}& \displaystyle =(N-1)\bigg[(1-r)+\frac{(1-p)(1-q)}{p}\bigg],\\ {} \displaystyle \alpha & \displaystyle ={\alpha _{1}}+{\alpha _{2}},\hspace{2em}& \displaystyle \beta & \displaystyle ={\beta _{1}}+{\beta _{2}}.\end{array}\]
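For concreteness, these parameters are easy to compute; the following small helper (our naming, not code from [14]) also evaluates the condition ${\beta _{1}}+1>2{\alpha _{2}}$ that appears in Proposition 2.4 and Theorem 2.1 below.

```python
def params(N, p, q, r):
    """The parameters defined in (2.1)."""
    alpha1 = p * r + (1 - p) * q                     # alpha_11 + alpha_12
    alpha2 = p * r * (N - 2) / (N - 1) + (1 - p) * q
    beta1 = (1 - p) * (1 - q) / p
    beta2 = (N - 1) * ((1 - r) + (1 - p) * (1 - q) / p)
    return dict(alpha1=alpha1, alpha2=alpha2, beta1=beta1, beta2=beta2,
                alpha=alpha1 + alpha2, beta=beta1 + beta2)

prm = params(4, 0.4, 0.4, 0.4)                       # Experiment 4.1 below
print(prm['beta1'] + 1 > 2 * prm['alpha2'])          # True: Theorem 2.1 applies
```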
In [14] it was shown that the above evolution leads to a scale-free graph. To describe the result, let ${V_{n}}$ denote the number of all vertices and let $X(n,{w_{1}},{w_{2}})$ denote the number of vertices with central weight ${w_{1}}$ and peripheral weight ${w_{2}}$ after the nth step.
Proposition 2.1 (Theorem 2.1 of [14]).
Let $0<p<1$, $0<q<1$, $0<r<1$. Then for any fixed ${w_{1}}$ and ${w_{2}}$ with either ${w_{1}}=0$ and $1\le {w_{2}}$ or $1\le {w_{1}}$ and ${w_{2}}\ge 0$ we have
(2.2)
\[ \frac{X(n,{w_{1}},{w_{2}})}{{V_{n}}}\to {x_{{w_{1}},{w_{2}}}}\]
almost surely as $n\to \infty $, where ${x_{{w_{1}},{w_{2}}}}$ are fixed non-negative numbers.
Let ${w_{2}}$ be fixed, then as ${w_{1}}\to \infty $
(2.3)
\[ {x_{{w_{1}},{w_{2}}}}\sim A({w_{2}}){w_{1}^{-(1+\frac{{\beta _{2}}+1}{{\alpha _{1}}})}},\]
where
(2.4)
\[ A({w_{2}})=\frac{1-r}{{\alpha _{1}}}\frac{1}{{w_{2}}!}\frac{\varGamma ({w_{2}}+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}.\]
Let ${w_{1}}$ be fixed. Then, as ${w_{2}}\to \infty $,
(2.5)
\[ {x_{{w_{1}},{w_{2}}}}\sim C({w_{1}}){w_{2}^{-(1+\frac{{\beta _{1}}+1}{{\alpha _{2}}})}},\]
where
(2.6)
\[ C({w_{1}})=\frac{r}{{\alpha _{2}}}\frac{1}{{w_{1}}!}\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}.\]
Here Γ denotes the Gamma function.
Remark 2.2.
Using ${w_{1}}$ and ${w_{2}}$, we obtained symmetric formulae in the following sense. If we interchange subscripts 1 and 2 of α and β (and use r instead of $1-r$), then we obtain formulae (2.5)–(2.6) from formulae (2.3)–(2.4). Therefore we do not need new proofs when we interchange the roles of ${w_{1}}$ and ${w_{2}}$. (Of course the basic relations (2.3)–(2.4) and (2.5)–(2.6) were proved separately. To do it we applied the properties of our model and introduced the appropriate parametrization given in (2.1), see [14].)
Remark 2.3.
We see that ${x_{{w_{1}},{w_{2}}}}$ is the asymptotic joint distribution of the central weight and the peripheral weight. To obtain Taylor’s power law, we have to find the conditional expectation ${E_{{w_{1}}}}$ and the conditional second moment ${M_{{w_{1}}}}$ given that ${w_{1}}$ is fixed. Then the asymptotic behaviour of ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$ will imply that Taylor’s power law is satisfied asymptotically. We underline that the asymptotic relations (2.3) and (2.5) alone do not provide enough information to find the asymptotics of ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$; we need a deeper analysis of the joint distribution ${x_{{w_{1}},{w_{2}}}}$ to obtain Taylor’s power law.
Now we turn to the new results of this paper. First we consider the marginals of the asymptotic joint distribution ${x_{{w_{1}},{w_{2}}}}$. Let
(2.7)
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\]
be the first marginal distribution.
Proposition 2.2.
Let $0<p<1$, $0<q<1$, $0<r<1$. Then
(2.8)
\[ {x_{0,\cdot }}=\frac{r}{{\beta _{1}}+1},\]
and for ${w_{1}}>0$
(2.9)
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}\bigg(\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+1\bigg).\]
Moreover
(2.10)
\[ {x_{1,\cdot }}=\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg),\]
and
(2.11)
\[ {x_{{w_{1}},\cdot }}=\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}\]
for ${w_{1}}>1$. We have a proper distribution, that is, ${\textstyle\sum _{{w_{1}}=0}^{\infty }}{x_{{w_{1}},\cdot }}=1$.
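As a numerical sanity check of Proposition 2.2, the closed forms (2.8), (2.10) and (2.11) can be summed directly. The sketch below is ours (it assumes SciPy and the params() helper given after (2.1)); it evaluates the Gamma ratios via gammaln to avoid overflow, and the partial sum over the first ${10^{5}}$ values comes out very close to 1.

```python
import math
from scipy.special import gammaln

def marginal(k, prm, r):
    """x_{k,.} from (2.8), (2.10) and (2.11)."""
    a1, b1 = prm['alpha1'], prm['beta1']
    if k == 0:
        return r / (b1 + 1)
    if k == 1:
        return b1 / (a1 + b1 + 1) * (r / (b1 + 1) + (1 - r) / b1)
    lg = (gammaln(k + b1 / a1) - gammaln(b1 / a1)
          + gammaln(1 + (b1 + 1) / a1) - gammaln(k + 1 + (b1 + 1) / a1))
    return math.exp(lg) * (1 - r + b1) / (b1 * (b1 + 1))

prm = params(4, 0.4, 0.4, 0.4)
print(sum(marginal(k, prm, 0.4) for k in range(100000)))   # ~ 1.0
```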
Now we turn to the conditional expectations of the asymptotic distribution. Let
(2.12)
\[ {E_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}l/{x_{{w_{1}},\cdot }}\]
be the expectation when the central weight ${w_{1}}$ is fixed.
Proposition 2.3.
Let $0<p<1$, $0<q<1$, $0<r<1$. Then for ${w_{1}}>1$ we have
(2.13)
\[ {E_{{w_{1}}}}=\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}\frac{{A_{1}}}{{x_{1,\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}},\]
where
(2.14)
\[\begin{aligned}{}{A_{1}}& =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+(1-r)\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}.\end{aligned}\]
Moreover
(2.15)
\[ {E_{{w_{1}}}}\sim \frac{{A_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}.\]
That is, ${E_{{w_{1}}}}$ is of order ${w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ as ${w_{1}}\to \infty $.
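Formula (2.13) is explicit enough to evaluate directly, so the approach to the asymptote (2.15) can be checked numerically. A sketch with our naming (it assumes the params() helper given after (2.1)):

```python
import math
from scipy.special import gammaln

def E_cond(k, prm, r):
    """The exact conditional mean (2.13) for k > 1."""
    a1, a2, b1, b2 = prm['alpha1'], prm['alpha2'], prm['beta1'], prm['beta2']
    A1 = (r / (b1 + 1 - a2) * (1 + b2 / a2) * b1 / (a1 + b1 + 1 - a2)
          + (1 - r) * (b2 / a2) / (a1 + b1 + 1 - a2))           # (2.14)
    x1 = b1 / (a1 + b1 + 1) * (r / (b1 + 1) + (1 - r) / b1)     # (2.10)
    lg = (gammaln(2 + (b1 + 1 - a2) / a1) - gammaln(2 + (b1 + 1) / a1)
          + gammaln(k + 1 + (b1 + 1) / a1) - gammaln(k + 1 + (b1 + 1 - a2) / a1))
    return math.exp(lg) * A1 / x1 - b2 / a2

prm = params(4, 0.4, 0.4, 0.4)
for k in (10, 100, 1000, 10000):     # the ratio flattens to the constant in (2.15)
    print(k, E_cond(k, prm, 0.4) / k ** (prm['alpha2'] / prm['alpha1']))
```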
Now we turn to the conditional second moments of the asymptotic distribution. Let
(2.16)
\[ {M_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}{l^{2}}/{x_{{w_{1}},\cdot }}\]
be the second moment when the central weight ${w_{1}}$ is fixed.
Proposition 2.4.
Let $0<p<1$, $0<q<1$, $0<r<1$. Assume that ${\beta _{1}}+1>2{\alpha _{2}}$. Then for ${w_{1}}>1$ we have
(2.17)
\[\begin{aligned}{}{M_{{w_{1}}}}& =\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}\frac{{B_{1}}}{{x_{1,\cdot }}}\\ {} & \hspace{1em}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\end{aligned}\]
where
\[\begin{aligned}{}{B_{1}}& =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{1-r}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}.\end{aligned}\]
Moreover
(2.18)
\[ {M_{{w_{1}}}}\sim \frac{{B_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}},\]
that is, ${M_{{w_{1}}}}$ is of order ${w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ as ${w_{1}}\to \infty $.
Propositions 2.3 and 2.4 now imply the main result of this paper: Taylor’s law is satisfied asymptotically.
Theorem 2.1.
Let $0<p<1$, $0<q<1$, $0<r<1$. Assume that ${\beta _{1}}+1>2{\alpha _{2}}$. Then
(2.19)
\[ {M_{{w_{1}}}}\sim C{E_{{w_{1}}}^{2}}\]
as ${w_{1}}\to \infty $, where C is an appropriate constant. So Taylor’s law is satisfied asymptotically with exponent 2.
Remark 2.4.
How can we observe the above Taylor’s law in practice? As ${x_{{w_{1}},{w_{2}}}}$ is the asymptotic joint distribution of the central weight and the peripheral weight, we should consider a network large enough for the asymptotic properties to show up. Fix the central weight at ${w_{1}}$, and calculate the expectation ${E_{{w_{1}}}}$ and the second moment ${M_{{w_{1}}}}$ of the peripheral weight. Then we shall find that ${M_{{w_{1}}}}$ is approximately equal to $C{E_{{w_{1}}}^{2}}$ for large ${w_{1}}$. Our simulation results in Section 4 show a bit more: the relation appears for small ${w_{1}}$, too.
Remark 2.5.
If ${\beta _{1}}+1\le 2{\alpha _{2}}$, then ${M_{{w_{1}}}}$ is not finite, so Taylor’s law is not satisfied.
Remark 2.6.
Now we consider the case when we interchange the roles of ${w_{1}}$ and ${w_{2}}$. Let ${w_{2}}$ be fixed and let
\[ {x_{\cdot ,{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}},\hspace{1em}{E_{{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}}l/{x_{\cdot ,{w_{2}}}},\hspace{1em}{M_{{w_{2}}}}={\sum \limits_{l=0}^{\infty }}{x_{l,{w_{2}}}}{l^{2}}/{x_{\cdot ,{w_{2}}}}\]
be the other marginal distribution, conditional expectation and conditional second moment. By Remark 2.2, if we interchange subscripts 1 and 2 of α and β (and use r instead of $1-r$), then from Proposition 2.2 we obtain the description of the behaviour of ${x_{\cdot ,{w_{2}}}}$. Similarly, from Proposition 2.3 (resp. Proposition 2.4) we get the appropriate results for ${E_{{w_{2}}}}$ (resp. ${M_{{w_{2}}}}$). Finally, Theorem 2.1 implies the following. If ${\beta _{2}}+1>2{\alpha _{1}}$, then Taylor’s law is satisfied asymptotically with exponent 2 for ${E_{{w_{2}}}}$ and ${M_{{w_{2}}}}$, too.
Remark 2.7.
For the in-degree ${d_{1}}$ of a vertex we have ${d_{1}}=(N-1){w_{1}}$ and for the out-degree we have ${d_{2}}={w_{2}}$. Let ${E_{{d_{1}}}}$ be the conditional expectation of the out-degree if ${d_{1}}$ is fixed and let ${M_{{d_{1}}}}$ be the conditional second moment of the out-degree if ${d_{1}}$ is fixed. Then Theorem 2.1 implies that ${M_{{d_{1}}}}\sim \mathrm{const}.{E_{{d_{1}}}^{2}}$ as ${d_{1}}\to \infty $. Similarly, Remark 2.6 implies that ${M_{{d_{2}}}}\sim \mathrm{const}.{E_{{d_{2}}}^{2}}$ as ${d_{2}}\to \infty $. So Taylor’s power law is true for the in-degrees and the out-degrees.

3 Proofs and auxiliary results

For the joint limiting distribution we have the following result.
Lemma 3.1 (Lemma 3.2 of [14]).
Let $p>0$ and let ${w_{1}}\ge 0$, ${w_{2}}\ge 0$ with ${w_{1}}+{w_{2}}\ge 1$. Then ${x_{{w_{1}},{w_{2}}}}$ are positive numbers satisfying the following recurrence relation
(3.1)
\[\begin{aligned}{}{x_{1,0}}& =\frac{1-r}{{\alpha _{1}}+\beta +1},\hspace{1em}{x_{0,1}}=\frac{r}{{\alpha _{2}}+\beta +1},\\ {} {x_{{w_{1}},{w_{2}}}}& =\frac{({\alpha _{1}}({w_{1}}-1)+{\beta _{1}}){x_{{w_{1}}-1,{w_{2}}}}+({\alpha _{2}}({w_{2}}-1)+{\beta _{2}}){x_{{w_{1}},{w_{2}}-1}}}{{\alpha _{1}}{w_{1}}+{\alpha _{2}}{w_{2}}+\beta +1}\end{aligned}\]
if $1<{w_{1}}+{w_{2}}$.
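Relation (3.1) also makes the limiting distribution easy to tabulate numerically. The sketch below uses our helper names (and the params() helper given after (2.1)); it fills a finite table by increasing total weight ${w_{1}}+{w_{2}}$, and its row sums approximate the marginals of Proposition 2.2 up to a small truncation error.

```python
def x_table(prm, r, W):
    """Tabulate x_{w1,w2} for 0 <= w1, w2 <= W from the recurrence (3.1)."""
    a1, a2, b1, b2 = prm['alpha1'], prm['alpha2'], prm['beta1'], prm['beta2']
    beta = b1 + b2
    x = [[0.0] * (W + 1) for _ in range(W + 1)]
    x[1][0] = (1 - r) / (a1 + beta + 1)              # initial values in (3.1)
    x[0][1] = r / (a2 + beta + 1)
    for s in range(2, 2 * W + 1):                    # fill by total weight s = w1 + w2
        for i in range(max(0, s - W), min(s, W) + 1):
            j = s - i
            num = (a1 * (i - 1) + b1) * x[i - 1][j] if i >= 1 else 0.0
            if j >= 1:
                num += (a2 * (j - 1) + b2) * x[i][j - 1]
            x[i][j] = num / (a1 * i + a2 * j + beta + 1)
    return x
```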
Throughout the proof we shall use the following facts on the Γ-function.
(3.2)
\[ {\sum \limits_{i=0}^{n}}\frac{\varGamma (i+a)}{\varGamma (i+b)}=\frac{1}{a-b+1}\bigg[\frac{\varGamma (n+a+1)}{\varGamma (n+b)}-\frac{\varGamma (a)}{\varGamma (b-1)}\bigg],\]
see [16]. Stirling’s formula implies that
(3.3)
\[ \frac{\varGamma (n+a)}{\varGamma (n+b)}\sim {n^{-(b-a)}}.\]
The above two formulae imply that
(3.4)
\[ {\sum \limits_{i=0}^{\infty }}\frac{\varGamma (i+a)}{\varGamma (i+b)}=\frac{1}{b-a-1}\frac{\varGamma (a)}{\varGamma (b-1)}\]
if $b>a+1$.
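These identities are easy to spot-check numerically; the following sketch (ours, not part of the paper) uses gammaln from SciPy so that the Gamma ratios stay finite for large arguments.

```python
import numpy as np
from scipy.special import gammaln

def gratio(x, a, b):                      # Gamma(x + a) / Gamma(x + b)
    return np.exp(gammaln(x + a) - gammaln(x + b))

a, b, n = 1.3, 3.7, 50
lhs = sum(gratio(i, a, b) for i in range(n + 1))
rhs = (gratio(n, a + 1, b) - gratio(0, a, b - 1)) / (a - b + 1)
print(lhs - rhs)                          # ~ 0: identity (3.2)

partial = sum(gratio(i, a, b) for i in range(10 ** 5))
print(partial, gratio(0, a, b - 1) / (b - a - 1))   # nearly equal: (3.4), b > a + 1
```

The following facts on ${x_{{w_{1}},{w_{2}}}}$ will be useful.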
Lemma 3.2 (See the proof of Theorem 3.2 of [14]).
Let ${w_{1}}=0$, then
(3.5)
\[ {x_{0,1}}=\frac{r}{{\alpha _{2}}+\beta +1}>0,\]
(3.6)
\[ {x_{0,l}}=\frac{1}{l{\alpha _{2}}+\beta +1}\big((l-1){\alpha _{2}}+{\beta _{2}}\big){x_{0,l-1}},\hspace{1em}l>1,\]
and
(3.7)
\[ {x_{0,l}}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}=C(0)\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}.\]
When ${w_{2}}=0$, then we have
(3.8)
\[ {x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}>0,\]
(3.9)
\[ {x_{k,0}}=\frac{1}{k{\alpha _{1}}+\beta +1}\big((k-1){\alpha _{1}}+{\beta _{1}}\big){x_{k-1,0}},\hspace{1em}k>1,\]
and
(3.10)
\[ {x_{k,0}}=\frac{(1-r)\varGamma (1+\frac{\beta +1}{{\alpha _{1}}})}{{\alpha _{1}}\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (k+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (k+\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}})}=A(0)\frac{\varGamma (k+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (k+\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}})}.\]
If ${w_{1}}>0$ and $l>0$, then
(3.11)
\[ {x_{{w_{1}},l}}={\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}+{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}},\]
where
(3.12)
\[ {b_{{w_{1}}-1,i}^{(l)}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})},\]
for $1\le i\le l$, and
(3.13)
\[ {b_{{w_{1}},0}^{(l)}}=\frac{\varGamma (1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}.\]
Now we turn to the proofs of the new results. First we deal with the marginal distribution.
Proof of Proposition 2.2.
To calculate the marginal distribution ${x_{{w_{1}},\cdot }}={\textstyle\sum _{l=0}^{\infty }}{x_{{w_{1}},l}}$ we shall use mathematical induction. So first consider ${x_{0,\cdot }}$. Because ${x_{0,0}}=0$, by equation (3.7) we have
\[\begin{aligned}{}{x_{0,\cdot }}& ={\sum \limits_{l=1}^{\infty }}{x_{0,l}}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}\\ {} & =\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\end{aligned}\]
By (3.4), the sum in the above formula is always finite, and we have
(3.14)
\[ {x_{0,\cdot }}=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{{\alpha _{2}}}{{\beta _{1}}+1}\frac{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}=\frac{r}{{\beta _{1}}+1}.\]
For ${w_{1}}>0$, by (3.11), we have
(3.15)
\[\begin{aligned}{}{x_{{w_{1}},\cdot }}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}+{x_{{w_{1}},0}}\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}+{x_{{w_{1}},0}}.\end{aligned}\]
The coefficients ${b_{{w_{1}}-1,i}^{(l)}}$ and ${b_{{w_{1}},0}^{(l)}}$ satisfy formulae (3.12) and (3.13). Therefore we shall use (3.4) for the two sums in the above expression. We can see that both sums are always finite and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}\frac{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}.\end{aligned}\]
For the other sum, a similar calculation shows that
\[ {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}=\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}.\]
Insert these expressions into (3.15) to obtain
(3.16)
\[ {x_{{w_{1}},\cdot }}={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}+{x_{{w_{1}},0}}.\]
From this point we should proceed carefully, as we should distinguish the case of ${x_{1,0}}$ and the case of ${x_{{w_{1}},0}}$ for ${w_{1}}>1$. From equation (3.14) ${x_{0,\cdot }}=\frac{r}{{\beta _{1}}+1}$, from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.16) gives that
(3.17)
\[\begin{aligned}{}{x_{1,\cdot }}& =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}{x_{0,\cdot }}+{x_{1,0}}\bigg(\frac{{\beta _{2}}}{{\alpha _{1}}+{\beta _{1}}+1}+1\bigg)\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\alpha _{1}}+\beta +1}\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}+{\beta _{1}}+1}\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg).\end{aligned}\]
For ${w_{1}}>1$ equation (3.9) gives us ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$, so equation (3.16) implies
\[ {x_{{w_{1}},\cdot }}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{x_{{w_{1}}-1,\cdot }}.\]
Therefore, using (3.17) and recursion, for ${w_{1}}>1$ we obtain that
(3.18)
\[\begin{aligned}{}{x_{{w_{1}},\cdot }}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1}{x_{{w_{1}}-1,\cdot }}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1}{{\alpha _{1}}}}{x_{1,\cdot }}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg)\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}.\end{aligned}\]
Now we check that the sum of the values of ${x_{{w_{1}},\cdot }}$ is indeed equal to 1:
\[\begin{aligned}{}{x_{0,\cdot }}+{x_{1,\cdot }}+{\sum \limits_{{w_{1}}=2}^{\infty }}{x_{{w_{1}},\cdot }}& =\frac{r}{{\beta _{1}}+1}+\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1}\bigg(\frac{r}{{\beta _{1}}+1}+\frac{1-r}{{\beta _{1}}}\bigg)\\ {} & \hspace{1em}+{\sum \limits_{{w_{1}}=2}^{\infty }}\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{1-r+{\beta _{1}}}{{\beta _{1}}({\beta _{1}}+1)}.\end{aligned}\]
Here
\[ {\sum \limits_{{w_{1}}=2}^{\infty }}\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}={\alpha _{1}}\frac{\varGamma (2+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}.\]
Therefore, after some calculation, we get
(3.19)
\[ {\sum \limits_{{w_{1}}=0}^{\infty }}{x_{{w_{1}},\cdot }}=1,\]
so we have a proper distribution.  □
Now we consider the expectation.
Proof of Proposition 2.3.
We calculate
\[ {E_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}l/{x_{{w_{1}},\cdot }}\]
that is, the expectation when the central weight ${w_{1}}$ is fixed. We shall calculate the value of
(3.20)
\[ {A_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
As
(3.21)
\[ {A_{{w_{1}}}}={x_{{w_{1}},\cdot }}\bigg({E_{{w_{1}}}}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\]
using ${A_{{w_{1}}}}$, we shall find the value of ${E_{{w_{1}}}}$.
Let us start with ${A_{0}}$. Because ${x_{0,0}}=0$, and using equation (3.7), we have
\[\begin{aligned}{}{A_{0}}& ={\sum \limits_{l=1}^{\infty }}{x_{0,l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+\frac{{\beta _{2}}}{{\alpha _{2}}})(l+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+\frac{{\alpha _{2}}+\beta +1}{{\alpha _{2}}})}\\ {} & =\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\end{aligned}\]
By (3.4), the sum in the above formula is always finite, moreover
(3.22)
\[ {A_{0}}=\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\frac{\varGamma (2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}=\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
If ${w_{1}}>0$, then by (3.11) we have
(3.23)
\[\begin{aligned}{}{A_{{w_{1}}}}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
We know that the coefficients ${b_{{w_{1}}-1,i}^{(l)}}$ and ${b_{{w_{1}},0}^{(l)}}$ satisfy formulae (3.12) and (3.13). Therefore we can apply (3.4) for the first two terms in the above expression. So we can see that both sums are always finite and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\\ {} & \hspace{2em}\times \frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{\varGamma (i+1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
Similarly
\[ {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{\varGamma (2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}.\]
Using these expressions, from (3.23) we get
(3.24)
\[\begin{aligned}{}{A_{{w_{1}}}}& ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+{x_{{w_{1}},0}}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
Now we should distinguish the case of ${w_{1}}=1$ and the case of ${w_{1}}>1$. For ${w_{1}}=1$ we use that from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.24) implies that
(3.25)
\[\begin{aligned}{}{A_{1}}& ={A_{0}}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}+{x_{1,0}}\frac{{\alpha _{2}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{1,0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & \hspace{1em}+\frac{1-r}{{\alpha _{1}}+\beta +1}\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{{\alpha _{1}}+\beta +1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}\\ {} & =\frac{r}{{\beta _{1}}+1-{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}+(1-r)\frac{{\beta _{2}}}{{\alpha _{2}}}\frac{1}{{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}.\end{aligned}\]
For ${w_{1}}>1$ we know that ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$, therefore equation (3.24) implies that
\[ {A_{{w_{1}}}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
From this equation we obtain that
(3.26)
\[\begin{aligned}{}{A_{{w_{1}}}}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-{\alpha _{2}}}{A_{{w_{1}}-1}}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}}{A_{1}}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{A_{1}}.\end{aligned}\]
Therefore, by equation (3.21), we have
(3.27)
\[\begin{aligned}{}{E_{{w_{1}}}}& =\frac{{A_{{w_{1}}}}}{{x_{{w_{1}},\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\\ {} & =\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}\frac{{A_{1}}}{{x_{1,\cdot }}}-\frac{{\beta _{2}}}{{\alpha _{2}}}.\end{aligned}\]
Therefore, by (3.3), ${E_{{w_{1}}}}$ is of order ${w_{1}^{\frac{{\beta _{1}}+1}{{\alpha _{1}}}-\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}}}={w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ as ${w_{1}}\to \infty $. More precisely,
(3.28)
\[ {E_{{w_{1}}}}\sim \frac{{A_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{\frac{{\alpha _{2}}}{{\alpha _{1}}}}}.\]
 □
Now we turn to the second moment.
Proof of Proposition 2.4.
To find the second moment
\[ {M_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}{l^{2}}/{x_{{w_{1}},\cdot }}\]
when the central weight ${w_{1}}$ is fixed, we shall calculate the value of
(3.29)
\[ {B_{{w_{1}}}}={\sum \limits_{l=0}^{\infty }}{x_{{w_{1}},l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\]
We can see that
(3.30)
\[ {B_{{w_{1}}}}={x_{{w_{1}},\cdot }}\bigg({M_{{w_{1}}}}+\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg).\]
Therefore, using ${B_{{w_{1}}}}$, we shall find the value of ${M_{{w_{1}}}}$.
We start with ${B_{0}}$. As ${x_{0,0}}=0$, applying equation (3.7), we obtain
\[ {B_{0}}={\sum \limits_{l=1}^{\infty }}{x_{0,l}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)=\frac{r}{{\alpha _{2}}}\frac{\varGamma (1+\frac{\beta +1}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+3+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+2+\frac{\beta +1}{{\alpha _{2}}})}.\]
By (3.4), the sum in the above formula is finite if ${\beta _{1}}+1>2{\alpha _{2}}$, and in this case
(3.31)
\[ {B_{0}}=\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (3+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (1+\frac{{\beta _{2}}}{{\alpha _{2}}})}.\]
Now turn to ${B_{{w_{1}}}}$ when ${w_{1}}>0$. By (3.11),
(3.32)
\[\begin{aligned}{}{B_{{w_{1}}}}& ={\sum \limits_{l=1}^{\infty }}{\sum \limits_{i=1}^{l}}{b_{{w_{1}}-1,i}^{(l)}}{x_{{w_{1}}-1,i}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}{x_{{w_{1}},0}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}{\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+{x_{{w_{1}},0}}{\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
We use formulae (3.12) and (3.13), then apply (3.4) for the first two terms in the above expression. So we obtain that both sums are finite if ${w_{1}}{\alpha _{1}}+{\beta _{1}}+1>2{\alpha _{2}}$, and
\[\begin{aligned}{}& {\sum \limits_{l=i}^{\infty }}{b_{{w_{1}}-1,i}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=0}^{\infty }}\frac{\varGamma (l+i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+i+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{\alpha _{2}}}\frac{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\\ {} & \hspace{2em}\times \frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (i+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (i+\frac{{\beta _{2}}}{{\alpha _{2}}})}\end{aligned}\]
and
\[\begin{aligned}{}& {\sum \limits_{l=1}^{\infty }}{b_{{w_{1}},0}^{(l)}}\bigg(l+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(l+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}=\frac{\varGamma (1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}{\sum \limits_{l=1}^{\infty }}\frac{\varGamma (l+2+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (l+1+\frac{{w_{1}}{\alpha _{1}}+\beta +1}{{\alpha _{2}}})}\\ {} & \hspace{1em}=\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{\varGamma (3+\frac{{\beta _{2}}}{{\alpha _{2}}})}{\varGamma (\frac{{\beta _{2}}}{{\alpha _{2}}})}.\end{aligned}\]
Inserting these expressions into (3.32), we get
(3.33)
\[\begin{aligned}{}{B_{{w_{1}}}}& ={\sum \limits_{i=1}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(i+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\\ {} & \hspace{1em}+{x_{{w_{1}},0}}\frac{{\alpha _{2}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)+{x_{{w_{1}},0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1\hspace{0.1667em}+\hspace{0.1667em}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
When ${w_{1}}=1$, we use that from equation (3.8) ${x_{1,0}}=\frac{1-r}{{\alpha _{1}}+\beta +1}$ and ${x_{0,0}}=0$, so equation (3.33) implies that
(3.34)
\[\begin{aligned}{}{B_{1}}& ={B_{0}}\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}+{x_{1,0}}\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg[1+\frac{2{\alpha _{2}}+{\beta _{2}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\bigg]\\ {} & =\frac{{\beta _{1}}}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}\frac{r}{{\beta _{1}}+1-2{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(2+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & \hspace{1em}+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\frac{1-r}{{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}.\end{aligned}\]
When ${w_{1}}>1$, equation (3.33) implies that
(3.35)
\[ {B_{{w_{1}}}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}{\sum \limits_{i=0}^{\infty }}{x_{{w_{1}}-1,i}}\bigg(i+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\bigg(i+1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg),\]
where we applied that ${x_{{w_{1}},0}}=\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+\beta +1}{x_{{w_{1}}-1,0}}$. From equation (3.35) we obtain that
(3.36)
\[\begin{aligned}{}{B_{{w_{1}}}}& =\frac{({w_{1}}-1){\alpha _{1}}+{\beta _{1}}}{{w_{1}}{\alpha _{1}}+{\beta _{1}}+1-2{\alpha _{2}}}{B_{{w_{1}}-1}}={\prod \limits_{k=2}^{{w_{1}}}}\frac{k-1+\frac{{\beta _{1}}}{{\alpha _{1}}}}{k+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}}}{B_{1}}\\ {} & =\frac{\varGamma ({w_{1}}+\frac{{\beta _{1}}}{{\alpha _{1}}})}{\varGamma (1+\frac{{\beta _{1}}}{{\alpha _{1}}})}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{B_{1}}.\end{aligned}\]
So from equation (3.30) we obtain that
(3.37)
\[\begin{aligned}{}{M_{{w_{1}}}}& =\frac{{B_{{w_{1}}}}}{{x_{{w_{1}},\cdot }}}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg)\\ {} & =\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}\frac{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{\varGamma ({w_{1}}+1+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}\frac{{B_{1}}}{{x_{1,\cdot }}}\\ {} & \hspace{1em}-\bigg(1+2\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg){E_{{w_{1}}}}-\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg(1+\frac{{\beta _{2}}}{{\alpha _{2}}}\bigg).\end{aligned}\]
Therefore (3.3) implies that ${M_{{w_{1}}}}$ is of order ${w_{1}^{\frac{{\beta _{1}}+1}{{\alpha _{1}}}-\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}}}}={w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}}$ as ${w_{1}}\to \infty $. More precisely,
(3.38)
\[ {M_{{w_{1}}}}\sim \frac{{B_{1}}}{{x_{1,\cdot }}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})}{\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{w_{1}^{2\frac{{\alpha _{2}}}{{\alpha _{1}}}}}.\]
 □
Proof of Theorem 2.1.
Propositions 2.3 and 2.4 imply
(3.39)
\[ \frac{{M_{{w_{1}}}}}{{E_{{w_{1}}}^{2}}}\sim \frac{{B_{1}}{x_{1,\cdot }}}{{A_{1}^{2}}}\frac{\varGamma (2+\frac{{\beta _{1}}+1-2{\alpha _{2}}}{{\alpha _{1}}})\varGamma (2+\frac{{\beta _{1}}+1}{{\alpha _{1}}})}{{(\varGamma (2+\frac{{\beta _{1}}+1-{\alpha _{2}}}{{\alpha _{1}}}))^{2}}}\]
as ${w_{1}}\to \infty $. So Taylor’s law is satisfied asymptotically.  □

4 Numerical results

Here we present some numerical evidence supporting our result. The scheme of our computer experiment is the following. We fixed the size N of the stars and the values of the probabilities p, q and r, and generated the graph as described in Section 2 up to a fixed step n. Then we calculated ${E_{{w_{1}}}}$ and ${M_{{w_{1}}}}$, that is, the expectation and the second moment of the peripheral weight ${w_{2}}$ over the vertices with a fixed central weight ${w_{1}}$. We visualized the function ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ using the logarithmic scale on both axes. According to Theorem 2.1, the result should be approximately a straight line with slope 2. We also calculated ${E_{{w_{2}}}}$ and ${M_{{w_{2}}}}$, that is, the expectation and the second moment of the central weight ${w_{1}}$ over the vertices with a fixed peripheral weight ${w_{2}}$, and visualized the function ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ using the logarithmic scale on both axes. By Remark 2.6, this curve should also be approximately a straight line with slope 2.
In the following five experiments we used various parameter sets. The number of steps was always $n={10^{8}}$. One can check that in these five examples the conditions ${\beta _{1}}+1>2{\alpha _{2}}$ and ${\beta _{2}}+1>2{\alpha _{1}}$ are satisfied. In each case we see that both ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ are approximately straight lines on the log-log scale.
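The core of this computation can be sketched as follows (assuming NumPy and the simulate() sketch from Section 2; for brevity we run only ${10^{4}}$ steps here, far fewer than the ${10^{8}}$ used for the figures, so the fitted slope is rough).

```python
import numpy as np

w1, w2 = simulate(N=4, p=0.4, q=0.4, r=0.4, steps=10 ** 4, seed=1)
w1, w2 = np.array(w1), np.array(w2, dtype=float)
E, M = [], []
for k in range(1, w1.max() + 1):
    grp = w2[w1 == k]                        # peripheral weights in class w1 = k
    if len(grp) >= 20 and grp.mean() > 0:    # skip sparse classes
        E.append(grp.mean())
        M.append((grp ** 2).mean())
slope, _ = np.polyfit(np.log(E), np.log(M), 1)
print(slope)                                 # Theorem 2.1 predicts a slope near 2
```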
Experiment 4.1.
Here $N=4$, $p=0.4$, $q=0.4$, $r=0.4$. In Figure 1 we see that both ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) are approximately straight lines on the log-log scale.
Fig. 1.
${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=4$, $p=0.4$, $q=0.4$, $r=0.4$, and $n={10^{8}}$
Experiment 4.2.
$N=5$, $p=0.4$, $q=0.4$ and $r=0.4$. The results can be seen in Figure 2.
Fig. 2.
${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=5$, $p=0.4$, $q=0.4$, $r=0.4$, and $n={10^{8}}$
Experiment 4.3.
$N=5$, $p=0.5$, $q=0.5$ and $r=0.5$. The results can be seen in Figure 3.
Fig. 3.
${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=5$, $p=0.5$, $q=0.5$, $r=0.5$, and $n={10^{8}}$
Experiment 4.4.
$N=6$, $p=0.3$, $q=0.6$ and $r=0.3$. The results can be seen in Figure 4.
Fig. 4.
${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=6$, $p=0.3$, $q=0.6$, $r=0.3$, and $n={10^{8}}$
Experiment 4.5.
$N=10$, $p=0.4$, $q=0.2$ and $r=0.7$. The results can be seen in Figure 5.
Fig. 5.
${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=10$, $p=0.4$, $q=0.2$, $r=0.7$, and $n={10^{8}}$
Finally, we show a numerical result when the conditions of Theorem 2.1 are not satisfied.
Experiment 4.6.
Let $N=5$, $p=0.9$, $q=0.5$ and $r=0.9$, $n={10^{8}}$. In Figure 6 one can see that Taylor’s power law is not satisfied.
Fig. 6.
A case when Taylor’s power law is not satisfied. ${E_{{w_{1}}}}\to {M_{{w_{1}}}}$ (left) and ${E_{{w_{2}}}}\to {M_{{w_{2}}}}$ (right) on the log-log scale, when $N=5$, $p=0.9$, $q=0.5$, $r=0.9$, and $n={10^{8}}$

Acknowledgments

The authors are grateful to the referees and to the editor for the careful reading of the paper and for the valuable suggestions.

References

[1] 
Backhausz, A., Móri, T.F.: Weights and degrees in a random graph model based on 3-interactions. Acta Math. Hung. 143/1, 23–43 (2014) MR3215601. https://doi.org/10.1007/s10474-014-0390-8
[2] 
Barabási, A.L.: Network Science. Cambridge University Press, Cambridge (2016)
[3] 
Barabási, A.L., Albert, R.: Emergence of scaling in random networks. Science 286, 509–512 (1999) MR2091634. https://doi.org/10.1126/science.286.5439.509
[4] 
Bollobás, B., Riordan, O.M., Spencer, J., Tusnády, G.: The degree sequence of a scale-free random graph process. Random Struct. Algorithms 18, 279–290 (2001) MR1824277. https://doi.org/10.1002/rsa.1009
[5] 
Chung, F., Lu, L.: Complex Graphs and Networks. AMS and CBMS, Providence (2006) MR2248695. https://doi.org/10.1090/cbms/107
[6] 
Cohen, J.E., Bohk-Ewald, C., Rau, R.: Gompertz, Makeham, and Siler models explain Taylor’s law in human mortality data. Demogr. Res. 38, 773–842 (2018)
[7] 
Cohen, J.E., Xu, M., Schuster, W.S.F.: Stochastic multiplicative population growth predicts and interprets Taylor’s power law of fluctuation scaling. Proc. R. Soc. B 280, 20122955 (2013)
[8] 
Cooper, C., Frieze, A.: A general model of web graphs. Random Struct. Algorithms 22, 311–335 (2003) MR1966545. https://doi.org/10.1002/rsa.10084
[9] 
de Menezes, M.A., Barabási, A.L.: Fluctuations in network dynamics. Phys. Rev. Lett. 92, 028701 (2004)
[10] 
Durrett, R.: Random Graph Dynamics. Cambridge University Press, Cambridge (2007) MR2271734
[11] 
Eisler, Z., Bartos, I., Kertész, J.: Fluctuation scaling in complex systems: Taylor’s law and beyond. Adv. Phys. 57/1, 89–142 (2008)
[12] 
Fazekas, I., Porvázsnyik, B.: Limit theorems for the weights and the degrees in an n-interactions random graph model. Open Math. 14/1, 414–424 (2016) MR3514903. https://doi.org/10.1515/math-2016-0039
[13] 
Fazekas, I., Porvázsnyik, B.: Scale-free property for degrees and weights in an n-interactions random graph model. J. Math. Sci. 214/1, 69–82 (2016) MR3476251. https://doi.org/10.1007/s10958-016-2758-5
[14] 
Fazekas, I., Noszály, C., Perecsényi, A.: The n-star network evolution model. J. Appl. Probab. 56/2, 416–440 (2019) MR3986944. https://doi.org/10.1017/jpr.2019.21
[15] 
Morris, C.N.: Natural exponential families with quadratic variance functions. Ann. Stat. 10/1, 65–80 (1982) MR0642719
[16] 
Prudnikov, A.P., Brychkov, Y.A., Marichev, O.I.: Integrals and Series. Gordon & Breach Science Publishers, New York (1986) MR0888165
[17] 
Sridharan, A., Gao, Y., Wu, K., Nastos, J.: Statistical behavior of embeddedness and communities of overlapping cliques in online social networks. In: Proceedings IEEE INFOCOM. IEEE (2011) MR2723225. https://doi.org/10.1109/TCSI.2009.2025803
[18] 
Taylor, L.R.: Aggregation, variance and the mean. Nature 189, 732–735 (1961)
[19] 
van der Hofstad, R.: Random Graphs and Complex Networks. Cambridge University Press, Cambridge (2017)

Copyright
© 2019 The Author(s). Published by VTeX
Open access article under the CC BY license.

Keywords
Taylor’s power law; random graph; preferential attachment; scale free; gamma function

MSC2010
05C80 62E10

Funding
The research was supported by the construction EFOP-3.6.3-VEKOP-16-2017-00002; the project was supported by the European Union, co-financed by the European Social Fund.
