Second order elliptic partial differential equations driven by Lévy white noise

This paper deals with linear stochastic partial differential equations with variable coefficients driven by Lévy white noise. We first derive an existence theorem for integral transforms of Lévy white noise and prove the existence of generalized and mild solutions of second order elliptic partial differential equations. Furthermore, we discuss the generalized electric Schrödinger operator for different potential functions $V$.


Introduction
From the very beginning of the study of partial differential equations, alongside the Laplacian ∆, divergence form operators u ↦ ∂_i(a_ij(x)∂_j u) were introduced, where the matrix function A = (a_ij) satisfies some ellipticity condition. Operators of this kind appear, for example, in the Maxwell equations in general media (see [18]). The fundamental solution of the Laplace equation is well known, but there is no explicit form for a fundamental solution of a general divergence form operator; there exist, however, upper and lower bounds, see for example [11]. The goal of this paper is to obtain generalized solutions of the equation p(x, D)s = L, where L is a so-called Lévy white noise and p(x, D) is a partial differential operator of the form

p(x, D)u = −div(A(x)∇u) + b(x)⋅∇u + V(x)u, u ∈ C^∞(R^d), (1.1)

for a uniformly elliptic R^{d×d}-valued matrix function A and functions b : R^d → R^d, V : R^d → R. In particular, we obtain generalized and mild solutions for the generalized electric Schrödinger operator driven by a Lévy white noise, i.e. we look for a solution u of the stochastic partial differential equation

−div(A(x)∇u) + V(x)u = L, (1.2)

where A is a uniformly elliptic d × d matrix, the potential V > 0 belongs to the reverse Hölder class and L is a Lévy white noise. Since the fundamental solution of the Schrödinger operator has exponential decay, we can impose weaker assumptions on the Lévy white noise than in the general case (1.1) to show the existence of generalized and mild solutions. This can be seen as an extension of the theory founded in [2] by D. Berger, but the results there are not directly applicable. To overcome this shortcoming we derive existence results for generalized random processes constructed as integral transforms of the underlying Lévy white noise. Furthermore, we study distributional properties of these solutions and show that we can construct periodically stationary generalized random processes.
We solve the stochastic partial differential equations in the distributional sense, i.e. a solution s is a distribution-valued random variable such that ⟨s, p(x, D)*ϕ⟩ = ⟨L, ϕ⟩ for every ϕ in our function space. For a good introduction to distributional solutions of partial differential equations see for example [10]. Until now there is no good understanding of stochastic partial differential equations driven by Lévy white noise under general moment conditions, but there exists literature for the case of Gaussian white noise and for Lévy white noise under stricter moment conditions. In [19] SPDEs driven by Gaussian white noise were studied. Moreover, a similar approach for Lévy white noise can be found in [9] and [12]. For stochastic partial differential equations with constant coefficients see also [4] and [2]. Our method is inspired by the paper [6] and the results of [14].
In Section 3 we provide the general framework needed to discuss stochastic partial differential equations driven by Lévy white noise, whose solutions are defined as generalized random processes. We introduce Lévy white noise as a generalized random process in the sense of I.M. Gelfand and N.Y. Vilenkin (see [7]). Theorem 3.4 implies that a large class of linear stochastic partial differential equations driven by a Lévy white noise has a generalized solution, where we use a more general kernel G : R^m × R^d → R compared to Theorem 3.4 of D. Berger in [2]. Furthermore, we study the moment properties of generalized random processes s driven by a Lévy white noise L. For a well-defined random process s(ϕ) = ⟨L, G(ϕ)⟩, ϕ ∈ D(R^d), we show in Theorem 3.8 that if L has finite β-moment for some β > 0, then s also has finite β-moment under further conditions on the kernel G. Moreover, we show that if s has finite β-moment, then L also has finite β-moment. In Section 4 we discuss our first example, partial differential operators of the form (1.1), and give existence results for generalized solutions. Furthermore, we discuss periodically stationary solutions s for this example. Afterwards we consider the generalized electric Schrödinger operator driven by Lévy white noise and show, under weaker conditions than in the example above, the existence of generalized solutions. We also study the concept of mild solutions of (1.2), i.e. a solution u which is a random field given by the convolution of the Lévy white noise with the fundamental solution of (1.2). In Proposition 4.11 we state when such a solution u exists and is stochastically continuous.

Notation and Preliminaries
Let us recall a few key concepts and fix some notation which will be needed later on. Most of our notation is standard or self-explanatory; λ^d denotes the Lebesgue measure on R^d.

Integral transforms and generalized stochastic processes driven by Lévy white noise
We provide the general framework needed to discuss stochastic partial differential equations driven by Lévy white noise and introduce Lévy white noise as a generalized random process in the sense of I.M. Gelfand and N.Y. Vilenkin (see [7]). In [2] it was shown that a convolution operator with certain integrability properties defines a generalized random process under low moment conditions on the Lévy white noise. Similar to [2], we use the characterization of the extended domain (see [6, Proposition 3.7]) and obtain new results for a more general kernel G : R^m × R^d → R, which allows us in Section 4 to model different kinds of stationarity assumptions and also to obtain generalized solutions of Lévy driven stochastic partial differential equations. Let (Ω, F, P) be a probability space. A generalized random process is a linear and continuous function s : D(R^d) → L^0(Ω). The linearity means that, for every ϕ_1, ϕ_2 ∈ D(R^d) and µ ∈ R, s(ϕ_1 + µϕ_2) = s(ϕ_1) + µs(ϕ_2) almost surely.
The continuity means that if ϕ n → ϕ in D(R d ), then s(ϕ n ) converges to s(ϕ) in probability.
Due to the nuclear structure of D(R^d) it follows from [19, Corollary 4.2] that a generalized random process has a version which is a measurable function from (Ω, F) to (D′(R^d), C) with respect to the cylindrical σ-field C generated by the sets {u ∈ D′(R^d) : (⟨u, ϕ_1⟩, . . . , ⟨u, ϕ_N⟩) ∈ B} for N ∈ N, ϕ_1, . . . , ϕ_N ∈ D(R^d) and Borel sets B ⊆ R^N. From now on we always work with such a version. The probability law of a generalized random process s is the probability measure on D′(R^d) given by P_s(B) := P(s ∈ B), B ∈ C. The characteristic functional of a generalized random process s is the functional

P̂_s(ϕ) := ∫_{D′(R^d)} exp(i⟨u, ϕ⟩) dP_s(u), ϕ ∈ D(R^d).
The characteristic functional characterizes the law of s in the sense that two generalized random processes are equal in law if and only if they have the same characteristic functional. Now we define the Lévy white noise, which is closely connected to a Lévy process. In general, a Lévy process is a stochastically continuous process with independent and stationary increments starting in 0. A Lévy process (L_t)_{t≥0} is characterized by its characteristic function: it holds that

E[exp(izL_t)] = exp(tψ(z)) for every z ∈ R and t ≥ 0.

We call ψ the Lévy exponent, which can be characterized by an a ≥ 0, γ ∈ R and a Lévy measure ν, i.e. a measure such that ν({0}) = 0 and ∫_R min(1, r²) ν(dr) < ∞. For all z ∈ R it holds that

ψ(z) = iγz − az²/2 + ∫_R (e^{izr} − 1 − izr 1_{|r|≤1}) ν(dr).

Definition 3.2. A Lévy white noise L on R^d is a generalized random process with characteristic functional of the form

P̂_L(ϕ) = exp( ∫_{R^d} ψ(ϕ(x)) λ^d(dx) ), ϕ ∈ D(R^d),

where ψ : R → C is a Lévy exponent, i.e. there exist a ∈ R_+, γ ∈ R and a Lévy measure ν such that ψ has the form above. The function ψ is uniquely characterized by the triplet (a, γ, ν), known as the characteristic triplet.
The existence of the Lévy white noise was shown in [7]. Another possible way to construct Lévy white noise is as an independently scattered random measure, i.e. a random process whose test functions are indicator functions and which is independently scattered in the sense that two indicator functions with disjoint supports define independent random variables (see B.S. Rajput and J. Rosinski [14]). In [6] J. Fageot and T. Humeau unified these two approaches by extending Lévy white noise, defined as a generalized random process, to an independently scattered random measure. This connection led to results in [6] which make it possible to extend the domain of definition of Lévy white noise to some Borel-measurable functions f : R^d → R. We say that the function f is in the domain of L if there exists a sequence of elementary functions f_n converging almost everywhere to f such that ⟨L, f_n 1_A⟩ converges in probability as n → ∞ for every Borel set A, and we set ⟨L, f⟩ to be the limit in probability of ⟨L, f_n⟩ as n → ∞, where ⟨L, f_n⟩ is defined by ∑_{j=1}^m a_j ⟨L, 1_{A_j}⟩ for an elementary function f_n := ∑_{j=1}^m a_j 1_{A_j}, see also [6, Definition 3.6]. For the maximal domain of the Lévy white noise L we write D(L). By setting L(A) := ⟨L, 1_A⟩ for bounded Borel sets A, the extension of a Lévy white noise L can be identified with a Lévy basis L in the sense of Rajput and Rosinski [14], see [6, Theorem 3.5 and Theorem 3.7]. As a Lévy basis can be identified with a Lévy white noise in a canonical way, i.e. ⟨L, ϕ⟩ := ∫_{R^d} ϕ(x) dL(x) for ϕ ∈ D(R^d), we make no distinction between a Lévy white noise and a Lévy basis.
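The identification ⟨L, ϕ⟩ = ∫_{R^d} ϕ(x) dL(x) can be illustrated numerically. The following sketch is an illustration only and not part of the construction above: it approximates a purely Gaussian white noise (characteristic triplet (1, 0, 0)) by independent grid increments and checks that the variance of ⟨L, ϕ⟩ is close to ‖ϕ‖²_{L²}.

```python
import numpy as np

# Monte Carlo sketch for the Gaussian case (a, gamma, nu) = (1, 0, 0).
# Then <L, phi> ~ N(0, ||phi||_{L^2}^2).  We approximate <L, phi> by
# sum_i phi(x_i) * dL_i with iid increments dL_i ~ N(0, h) on a grid.
rng = np.random.default_rng(0)
h = 0.01                                  # grid spacing
x = np.arange(-5, 5, h)                   # grid covering supp(phi) effectively
phi = np.exp(-x**2)                       # smooth test function
n_samples = 5000
dL = rng.normal(0.0, np.sqrt(h), size=(n_samples, x.size))  # white-noise increments
samples = dL @ phi                        # realizations of <L, phi>

target = np.sqrt(np.pi / 2)               # ||phi||_{L^2}^2 = int exp(-2 x^2) dx
print(abs(samples.mean()))                # close to 0
print(samples.var(), target)              # empirical vs. exact variance
```

For a non-Gaussian triplet one would replace the normal increments by increments of the corresponding Lévy process, e.g. compound Poisson jumps.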
In particular, a Borel-measurable function f : R^d → R is in D(L) if and only if f is integrable with respect to the Lévy basis L in the sense of Rajput and Rosinski [14], see [6, Proposition 3.7]. With the aid of the distribution function we can now obtain a sufficient condition for the existence of the generalized random process s defined by s(ϕ) := ⟨L, G(ϕ)⟩. This will be crucial in Section 4 for proving the existence of generalized processes as solutions of stochastic partial differential equations as in (1.1).
Theorem 3.4. Let L be a Lévy white noise on R^m with characteristic triplet (a, γ, ν) and let G : R^m × R^d → R be a measurable function. Define for every x ∈ R^m and R > 0

G(ϕ)(x) := ∫_{R^d} G(x, y)ϕ(y) λ^d(dy), ϕ ∈ D(R^d), and G_R(x) := sup_{y ∈ B_R(0)} |G(x, y)|,

and let h_R denote the distribution function of G_R. If G_R ∈ L¹(R^m) ∩ L²(R^m) for every R > 0 and (3.1) holds, then s(ϕ) := ⟨L, G(ϕ)⟩, ϕ ∈ D(R^d), defines a generalized random process.
Proof. The proof is similar to that of [2, Theorem 3.4], hence we only mention the necessary modifications. We need to show that G(ϕ) ∈ D(L) and that ⟨L, G(ϕ_n)⟩ → ⟨L, G(ϕ)⟩ in probability as n → ∞ for every sequence (ϕ_n)_{n∈N} converging to ϕ in D(R^d).
As ⟨L, G(⋅)⟩ is linear, this is equivalent to checking that ⟨L, G(ϕ_n − ϕ)⟩ → 0 in probability as n → ∞ (see [6, Theorem 3.10]). By Theorem 2.7 in [14], we then have to show (3.2) and (3.4). In the following we give a pointwise upper bound for G(ϕ). Let R > 0 be such that supp(ϕ_n) ⊂ B_r(0) for some r < R and all n. Then it holds for every x ∈ R^m that |G(ϕ_n)(x)| ≤ ‖ϕ_n‖_{L¹(R^d)} G_R(x). Now we obtain (3.2) with similar arguments as in the proof of Theorem 3.4 of [2], where we use (3.7) instead of the Young inequality. Since G_R ∈ L¹(R^m) ∩ L²(R^m), (3.4) follows again with the same arguments as in the proof of Theorem 3.4 in [2]. Hence G(ϕ_n) → G(ϕ) in D(L) as n → ∞.
In Theorem 3.4 we assumed that G_R ∈ L¹(R^m) ∩ L²(R^m). In the following proposition we show that if the Lévy white noise has no Gaussian part and ∫_R |r|^β 1_{|r|≤1} ν(dr) < ∞ holds for some β ∈ (1, 2), then the assumption G_R ∈ L²(R^m) can be weakened to G_R ∈ L^β(R^m). So let G : R^m × R^d → R be a measurable function and for R > 0 let G_R and h_R be defined as in Theorem 3.4. Furthermore, let L be a Lévy white noise on R^m with characteristic triplet (0, γ, ν) such that (3.1) holds. If further G_R ∈ L¹(R^m) ∩ L^β(R^m) for every R > 0, then s(ϕ) := ⟨L, G(ϕ)⟩ defines a generalized random process, where G(ϕ) is defined as in Theorem 3.4.
Proof. Again, the proof is similar to that of [2, Theorem 3.4] and hence we only mention the necessary modifications. As G_R ∈ L¹(R^m), we only have to consider the terms which were estimated with ‖G_R‖_{L²(R^m)}, as can be seen from the proof of [2, Theorem 3.4]. These are the terms (3.8) and (3.9), and we have to show that they converge to 0 as ϕ_n → 0 in D(R^d). A first estimate shows that the term (3.8) converges to 0 as ϕ_n → 0 in D(R^d), and a second estimate shows the same for the term (3.9); the rest of the proof follows with similar arguments as mentioned in the proof of Theorem 3.4.
When G_R ∉ L¹(R^m) we can still obtain a generalized random process s under some extra conditions. Similar to Theorem 3.5 in [2] we have the following result, where G_R and G(ϕ), ϕ ∈ D(R^d), are defined as in Theorem 3.4, and where we assume that the first moment of the Lévy white noise L on R^m with characteristic triplet (a, γ, ν) vanishes, i.e. ∫_{|r|>1} |r| ν(dr) < ∞ and γ + ∫_{|r|>1} r ν(dr) = 0.
Proof. Let (ϕ_n)_{n∈N} be a sequence converging to 0 in D(R^d) such that supp ϕ_n ⊂ B_R(0) for some R > 0 and all n ∈ N. The proof then follows with the same arguments as in the proof of [2, Theorem 3.5], and we obtain for a Lévy white noise L with characteristic triplet (a, γ, ν) that s(ϕ) := ⟨L, G(ϕ)⟩ defines a well-defined generalized random process.
3.1. Moment properties. Next we show that if the Lévy white noise L has finite β-moment for some β > 0, then so does the generalized random process s.
Theorem 3.8. i) If L has finite β-moment, then so does s. If β ≥ 2 it is sufficient to assume that G_R ∈ L^β(R^d). ii) If s has finite β-moment, then L also has finite β-moment.
Proof. From [14, Theorem 2.7] we know that the Lévy measure of the random variable ⟨s, ϕ⟩ is given by

ν_{⟨s,ϕ⟩}(B) = ∫_{R^m} ∫_R 1_{B∖{0}}(rG(ϕ)(x)) ν(dr) λ^m(dx), B ∈ B(R).

Then ⟨s, ϕ⟩ has finite β-moment if and only if ∫_{|u|>1} |u|^β ν_{⟨s,ϕ⟩}(du) < ∞.
i) Let L have finite β-moment and assume first that 0 < β < 2. We calculate with the above representation that ∫_{|u|>1} |u|^β ν_{⟨s,ϕ⟩}(du) < ∞, where R > 0 is such that supp ϕ ⊂ B_R(0). If β ≥ 2 we obtain by similar arguments as above that the corresponding integral is again finite.
ii) Assume that s has finite β-moment and that G is not identically 0. Then there exists a function ϕ ∈ D(R^d) such that ⟨s, ϕ⟩ has finite β-moment; hence there exists an r_0 > 1 with ∫_{|r|>r_0} |r|^β ν(dr) < ∞, so that L has finite β-moment.

4. Second order elliptic partial differential equations driven by Lévy white noise

4.1. Second order elliptic partial differential equations in divergence form driven by Lévy white noise. In this section we discuss elliptic partial differential operators of second order with variable coefficients in divergence form, i.e. partial differential operators p(x, D) of the form

p(x, D)u = −div(A(x)∇u) + b(x)⋅∇u + V(x)u, (4.1)

where A = (a_ij)_{i,j=1}^d is a uniformly elliptic matrix, i.e. there exists a C > 0 such that

⟨A(x)ξ, ξ⟩ ≥ C|ξ|² for all x, ξ ∈ R^d.

Now let L be a Lévy white noise on R^d with characteristic triplet (a, γ, ν) and let p(x, D) be a partial differential operator (PDO) of the form (4.1). We say that a generalized stochastic process s : D(R^d) → L^0(Ω) is a generalized solution of the equation p(x, D)s = L if ⟨s, p(x, D)*ϕ⟩ = ⟨L, ϕ⟩ for every ϕ ∈ D(R^d), where p(x, D)* is the adjoint of p(x, D), i.e.

∫_{R^d} p(x, D)u(x) ϕ(x) λ^d(dx) = ∫_{R^d} u(x) p(x, D)*ϕ(x) λ^d(dx) for all u, ϕ ∈ D(R^d).
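The defining identity of the adjoint can be sanity-checked numerically. The sketch below is a one-dimensional illustration with made-up smooth coefficients a, b, V: there p u = −(a u′)′ + b u′ + V u and p*ϕ = −(a ϕ′)′ − (bϕ)′ + V ϕ, and integration by parts gives ∫ (p u)ϕ = ∫ u (p*ϕ) for rapidly decaying u, ϕ.

```python
import numpy as np

# 1-d sanity check of the adjoint identity  int (p u) phi = int u (p* phi)
# for p u = -(a u')' + b u' + V u, with hypothetical smooth coefficients.
x = np.linspace(-8.0, 8.0, 40001)
h = x[1] - x[0]

a = 2.0 + np.sin(x);  a1 = np.cos(x)           # a and a'
b = np.cos(x);        b1 = -np.sin(x)          # b and b'
V = 1.0 + x**2

u   = np.exp(-x**2)                            # rapidly decaying functions
u1  = -2*x*u                                   # analytic derivatives avoid FD error
u2  = (4*x**2 - 2)*u
phi  = np.exp(-(x - 0.5)**2)
phi1 = -2*(x - 0.5)*phi
phi2 = (4*(x - 0.5)**2 - 2)*phi

pu    = -(a1*u1 + a*u2) + b*u1 + V*u                        # p(x,D)u, product rule
pstar = -(a1*phi1 + a*phi2) - (b1*phi + b*phi1) + V*phi     # adjoint applied to phi

I1 = float(np.sum(pu * phi) * h)               # Riemann sums; integrands vanish
I2 = float(np.sum(u * pstar) * h)              # at the boundary, so this is accurate
print(I1, I2)                                  # the two integrals agree
```

The agreement holds up to quadrature error and exponentially small boundary terms, reflecting that the adjoint of b⋅∇ is −div(b ⋅) and that −(a u′)′ is formally self-adjoint in one dimension.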
In the first theorem we derive sufficient conditions for the existence of such a solution in terms of the characteristic triplet (a, γ, ν); this is a simple extension of the Laplacian case. Afterwards we discuss stationarity of these generalized processes, e.g. if the coefficients are y-periodic for some y ∈ R^d, then s is y-periodically stationary. We assume for the whole section that the coefficients of p(x, D) are in C^∞(R^d).
Proof. By [11, Chapter 10] there exists a locally integrable left inverse E : R^d × R^d → R of the operator p(x, D)* such that for all ϕ ∈ D(R^d) it holds that ∫_{R^d} E(x, y)p(y, D)*ϕ(y) λ^d(dy) = ϕ(x). Moreover, there exists an N ∈ N such that E(ϕ) satisfies a continuity estimate in terms of the first N derivatives of ϕ.
The solution s : D(R^d) → L^0(Ω) is not unique, which is quite clear. For example, let p(x, D) = −∆ and define ⟨s′, ϕ⟩ := ⟨s, ϕ⟩ + Y ∫_{R^d} ϕ(x) λ^d(dx) for an arbitrary random variable Y, where s is the solution constructed in Theorem 4.1 for the equation −∆s = L. Since constants are harmonic, it is easy to see that s′ is also a solution of (4.3).

Remark 4.2.
We assumed that the coefficients of the partial differential operator p(x, D) are infinitely often differentiable, but this is not necessary; it would be sufficient if a_ij ∈ C¹(R^d) for all i, j ∈ {1, . . . , d}.
Remark 4.3. We can also find generalized solutions of equations of the form (1.1) under suitable assumptions on the functions A, b and V, as the fundamental solution E of the elliptic operator above can be bounded from above by a constant times |x − y|^{2−d} for all x ≠ y. For a very general result see [3]. Observe that in the most general case the fundamental solution solves the equation only in the weak sense. We will discuss in the next section what we understand by a weak solution.
As a next step we discuss stationarity properties, which depend heavily on the matrix (a_ij(x))_{i,j=1}^d. For example, if the a_ij : R^d → R are constant, it is easily seen that E(x, y) = E(x − y) for all x ≠ y, and hence the solution s : D(R^d) → L^0(Ω) constructed in Theorem 4.1 is stationary.

Definition 4.4.
A generalized process s on D(R d ) is called periodic with period l ∈ R d , if s(⋅ + l) has the same law as s, and stationary if s is periodic for every period l ∈ R d . Here, s(⋅ + l) is defined by ⟨s(⋅ + l), ϕ⟩ ∶= ⟨s, ϕ(⋅ − l)⟩ for every ϕ ∈ D(R d ).
Remark 4.5. Let G : R^m × R^d → R be a measurable function which fulfills the assumptions of Theorem 3.4 with m = d. Assume that G(x, y + l) = G(x + l, y) for all x, y ∈ R^d and some l ∈ R^d. Then it is easily seen that for ϕ ∈ D(R^d)

G(ϕ(⋅ + l))(x) = ∫_{R^d} G(x, y)ϕ(y + l) λ^d(dy) = ∫_{R^d} G(x − l, y)ϕ(y) λ^d(dy) = G(ϕ)(x − l),

and hence, by the stationarity of the Lévy white noise, s(ϕ(⋅ + l)) has the same law as s(ϕ). Observe that (s(ϕ(⋅ + yl)))_{y∈Z} is then a stationary process for every ϕ ∈ D(R^d). Therefore, these models seem useful in statistics for modelling periodic processes or random fields. In the case that G(x, y + l) = G(x + l, y) for all l, x, y ∈ R^d, we see that s is stationary.
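The intertwining property behind this remark, namely that G(x, y + l) = G(x + l, y) forces G(ϕ(⋅ + l))(x) = G(ϕ)(x − l) by substitution, can be checked numerically. The sketch below uses the hypothetical kernel G(x, y) = F(x + y), which satisfies the identity for every shift l, and compares both sides on a grid.

```python
import numpy as np

# Numerical check of the shift identity for a kernel with G(x, y+l) = G(x+l, y).
# Hypothetical example: G(x, y) = F(x + y) satisfies the identity for every l.
# Then G(phi(.+l))(x) should equal G(phi)(x - l).
F = lambda t: np.sin(t) + 0.5 * np.cos(2 * t)
phi = lambda y: np.exp(-y**2)              # rapidly decaying test function

h = 0.01
y = np.arange(-10.0, 10.0, h)              # quadrature grid (phi ~ 0 at the ends)
l = 0.5                                    # shift, chosen as a multiple of h

def G_phi(x):
    # G(phi)(x) = int G(x, y) phi(y) dy, approximated by a Riemann sum
    return h * np.sum(F(x + y) * phi(y))

xs = np.linspace(-2.0, 2.0, 41)
lhs = np.array([h * np.sum(F(x + y) * phi(y + l)) for x in xs])  # G(phi(.+l))(x)
rhs = np.array([G_phi(x - l) for x in xs])                       # G(phi)(x - l)
print(np.max(np.abs(lhs - rhs)))           # agreement up to boundary truncation
```

Since l is an integer multiple of the grid spacing and ϕ is negligible at the grid boundary, the two discrete sums match almost exactly.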
Proposition 4.6. Let p(x, D) : D(R^d) → C(R^d) be an elliptic partial differential operator of the form (4.1), d ≥ 5, and assume that the matrix-valued function A : R^d → R^{d×d} is periodic with period y ∈ R^d, i.e. A(x + y) = A(x) for all x ∈ R^d. Let L be a Lévy white noise satisfying the assumptions of Theorem 4.1. Then there exists a solution s : D(R^d) → L^0(Ω) of p(x, D)s = L which is periodically stationary with period y.
Proof. It is enough to show that E(ϕ)(⋅ + y) = E(ϕ(⋅ + y)) for all ϕ ∈ D(R^d), where E is again the fundamental solution of the operator p(x, D)*. The assertion then follows from the stationarity of the Lévy white noise L. We see that (4.4) follows from (3.5) and (3.12). By the maximum principle for uniformly elliptic equations we obtain E(ϕ)(x + y) − E(ϕ(⋅ + y))(x) = 0 for all x ∈ R^d, hence s : D(R^d) → L^0(Ω) is periodically stationary.
From this result we can construct a stationary process on a certain group as long as the coefficients of the partial differential operator satisfy some periodicity condition.
Corollary 4.7. Let (G, +) be a subgroup of (R^d, +), let p(x, D) : D(R^d) → C(R^d) be an elliptic partial differential operator of the form (4.1) and assume that the matrix-valued function A : R^d → R^{d×d} is periodic with period y for every y ∈ G. Let L be a Lévy white noise satisfying the assumptions of Theorem 4.1 and let s be the generalized solution of p(x, D)s = L constructed in Theorem 4.1. Then for every ϕ ∈ D(R^d) the process (s_ϕ(y))_{y∈G} := (⟨s, ϕ(⋅ + y)⟩)_{y∈G} is a stationary process in G.
Proof. This is a direct consequence of Proposition 4.6.

4.2.
The generalized and mild solutions of the electric Schrödinger equation driven by Lévy white noise. We saw in Remark 4.3 that we can find generalized solutions of stochastic partial differential equations of the form

−div(A(x)∇u) + V(x)u = L (4.5)

for suitable A and V by assuming that the dimension d ≥ 5, that the first moment of the Lévy white noise vanishes and that a suitable moment condition on ν holds. In the case that V lies in a reverse Hölder class these assumptions turn out not to be necessary: we show that we find generalized and mild solutions in dimension d = 3 under much weaker conditions. First we introduce the reverse Hölder class RH_p(R^d); if V is in this class, the moment assumption reduces to a kind of logarithmic moment condition (dependent on V), which is very similar to the case where V is a positive constant. We first define what is meant by a mild solution of (4.5). We denote by E the weak fundamental solution of the operator in (4.5), i.e. ∫_{R^d} E(x, y) p(x, D)*ϕ(x) λ^d(dx) = ϕ(y) holds in the weak sense for all ϕ ∈ D(R^d). We set u(x) := ⟨L, E(x, ⋅)⟩ to be the mild solution of (4.5) if u(x) exists for all x ∈ R^d, i.e. if E(x, ⋅) ∈ D(L) for all x ∈ R^d; Theorem 4.9 i) gives a sufficient condition for this to hold. In the following we define the maximum function m and the Agmon distance γ of the potential V, in order to apply the estimates of the fundamental solution of the generalized electric Schrödinger operator shown in [17] and [13].
Definition 4.8. A locally integrable function ω with ω > 0 a.e. belongs to the reverse Hölder class RH_p(R^d) if there exists a constant C so that for any ball B ⊂ R^d,

( 1/λ^d(B) ∫_B ω(y)^p λ^d(dy) )^{1/p} ≤ C ( 1/λ^d(B) ∫_B ω(y) λ^d(dy) ).

Furthermore, we define for ω ∈ RH_p(R^d) the maximum function m(x, ω) by

1/m(x, ω) := sup{ r > 0 : r^{2−d} ∫_{B_r(x)} ω(y) λ^d(dy) ≤ 1 }

and the distance function

γ(x, y, ω) := inf_Γ ∫_0^1 m(Γ(t), ω) |Γ′(t)| dt,

where the infimum is taken over all absolutely continuous Γ : [0, 1] → R^d with Γ(0) = x and Γ(1) = y. Moreover, we define for R > 0 the ball B_R(x) := {y ∈ R^d : |x − y| < R}. The set RH_p(R^d) is closely connected to the space of Muckenhoupt weights A_s, s ≥ 1, where a measurable and non-negative ω is in A_s if

sup_B ( 1/λ^d(B) ∫_B ω(y) λ^d(dy) ) ( 1/λ^d(B) ∫_B ω(y)^{−s′/s} λ^d(dy) )^{s/s′} < ∞,

where s′ ∈ R is such that 1/s + 1/s′ = 1. For further information see for example [16]. In particular, ω ∈ A_s for some s ≥ 1 if and only if there exists a p > 1 such that ω ∈ RH_p(R^d). Consequently, the set of all positive and measurable functions bounded from above and bounded away from zero is contained in RH_p(R^d) for every p > 1. We now state an existence theorem for a mild and a generalized solution of the equation (4.5), where V lies in RH_{d/2}(R^d), and show that under much weaker moment conditions there exists a generalized solution. We use that the weak fundamental solution E of the operator p(x, D) can be bounded as follows:

E(x, y) ≤ C e^{−k γ(x, y, V)} |x − y|^{2−d}, x ≠ y, (4.6)

where k, C > 0, see [13, Corollary 6.16, page 40]. From now on the constant k > 0 is fixed and such that (4.6) is satisfied.
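For intuition, the auxiliary function m(x, ω) can be computed in closed form for a constant potential. Assuming the definition 1/m(x, ω) = sup{r > 0 : r^{2−d} ∫_{B_r(x)} ω ≤ 1} (the form used by Shen), a constant V ≡ c in d = 3 gives r^{−1}(4π/3)c r³ = (4π/3)c r², hence m = sqrt(4πc/3). A short numerical sketch:

```python
import math

# Shen-type auxiliary function for a constant potential V = c in d = 3:
# 1/m(x, V) = sup{ r > 0 : r^(2-d) * int_{B_r(x)} V <= 1 }.
# For V = c this gives (4*pi/3) * c * r^2 <= 1, so m = sqrt(4*pi*c/3).
def m_constant_potential(c, d=3, r_hi=100.0, iters=200):
    """Numerically invert the defining supremum by bisection (constant V = c)."""
    ball_vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1)   # volume of the unit ball
    g = lambda r: r ** (2 - d) * ball_vol * c * r ** d      # r^(2-d) * int_{B_r} c
    lo, hi = 1e-12, r_hi                                    # g(lo) <= 1 < g(hi)
    for _ in range(iters):                                  # bisect g(r) = 1
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= 1 else (lo, mid)
    return 1.0 / lo

c = 1.0
approx = m_constant_potential(c)
exact = math.sqrt(4 * math.pi * c / 3)
print(approx, exact)
```

For constant potentials m(x, V) is constant in x, which is why the logarithmic moment condition below resembles the constant-coefficient case.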
Theorem 4.9. Let A : R^d → R^{d×d} be a real, uniformly bounded and elliptic matrix and V ∈ RH_{d/2}(R^d). Let L be a Lévy white noise on R^d with characteristic triplet (a, γ, ν) such that the moment conditions of parts i) and ii) hold.
iii) Under the assumption that the first moment of the Lévy white noise exists, the mild solution u from i) gives rise to a generalized solution s of the stochastic partial differential equation (4.5).
We will prove Theorem 4.9 in Section 4.4. Here we calculate the moment condition for L for potentials which are bounded from below by a positive constant.
where C, C_1 > 0. Since the resulting integral can be expressed in terms of Γ(d + 1, log(r)), the upper incomplete gamma function, this leads to a bound with constants C_2, C_3 > 0. So if we assume that the Lévy white noise L with characteristic triplet (a, γ, ν) satisfies

∫_{|r|>1} log(|r|)^d ν(dr) < ∞,

then the assumptions of Theorem 4.9 are satisfied and we obtain generalized and mild solutions if d ≥ 3 or d = 3, respectively.
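The gain over the earlier moment assumptions can be made concrete with a hypothetical example: the Lévy measure ν(dr) = 1_{r>1} r^{−2} dr has infinite first moment (∫_{r>1} r ν(dr) = ∞), yet for d = 3 the logarithmic moment ∫_{r>1} log(r)³ ν(dr) = ∫_0^∞ u³e^{−u} du = Γ(4) = 6 is finite. A quick numerical confirmation:

```python
import numpy as np

# nu(dr) = 1_{r>1} r^(-2) dr: infinite mean, but finite log-moment for d = 3.
# Substituting u = log r turns  int_1^inf (log r)^3 r^(-2) dr  into
# int_0^inf u^3 e^(-u) du = Gamma(4) = 6.
u = np.linspace(0.0, 60.0, 600001)          # e^(-60) makes the tail negligible
h = u[1] - u[0]
integrand = u**3 * np.exp(-u)
log_moment = h * (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))  # trapezoid
print(log_moment)                            # close to Gamma(4) = 6

# The first moment diverges: int_1^R r * r^(-2) dr = log R grows without bound.
R = np.exp(np.arange(1, 6))
print(np.log(R))                             # partial integrals log R
```

So the logarithmic condition admits noises whose jumps are far too heavy-tailed for any polynomial moment condition.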

4.3.
Existence and continuity of mild solutions. In the following we give sufficient conditions for the existence and continuity of a random field u = (u(x))_{x∈R^m} := (⟨L, E(x, ⋅)⟩)_{x∈R^m}, where E : R^m × R^d → R is a kernel. This will be used in the proof of Theorem 4.9, where E is the weak fundamental solution of the generalized electric Schrödinger operator.
Proposition 4.11. Let L be a Lévy basis on R^d with characteristic triplet (a, γ, ν) and let E : R^m × R^d → R be a measurable function. We define for every x ∈ R^m the function h_x : R_+ → R_+ as the distribution function of E(x, ⋅), i.e. h_x(α) := λ^d({y ∈ R^d : |E(x, y)| > α}).
i) If E(x, ⋅) ∈ L¹(R^d) ∩ L²(R^d) and the corresponding integrability condition in terms of h_x and ν holds for every x ∈ R^m, then E(x, ⋅) ∈ D(L) for every x ∈ R^m and hence the random field u = (u(x))_{x∈R^m} given by u(x) := ⟨L, E(x, ⋅)⟩ is well-defined.
ii) If moreover x ↦ E(x, ⋅) is continuous as a map into L¹(R^d) and L²(R^d) and for every x ∈ R^m there exists an ε > 0 such that the integrability condition holds uniformly on B_ε(x), then the process u = (u(x))_{x∈R^m} is stochastically continuous.
As the integrands admit an integrable majorant, it follows from Lebesgue's dominated convergence theorem that the first term in (4.7) converges to 0 as n → ∞. For the last term in (4.7) we observe by [8, Prop. 1.13 and 1.14] that it can be bounded accordingly, and by Lebesgue's dominated convergence theorem we obtain its convergence to 0. This shows (4.7). In order to see (4.8), observe that similar bounds apply; by similar arguments as in the proof of [2, Theorem 3.4] we see that (4.8) holds true. Furthermore, it is clear that (4.9) holds, since T_E is continuous. We now state under which conditions a mild solution of a stochastic partial differential equation gives rise to a generalized solution.
Theorem 4.12. Let L be a Lévy white noise on R^d with characteristic triplet (a, γ, ν) with existing first moment and let p(x, D) be a partial differential operator of the form (4.1) whose weak fundamental solution E satisfies E_K(y) := sup_{x∈K} |E(x, y)| ∈ L^p(R^d) for all compact sets K ⊂ R^d and p = 1, 2. Then the mild solution of p(x, D)u = L gives rise to a generalized solution s of the stochastic partial differential equation p(x, D)s = L.
Proof. We want to apply a stochastic Fubini theorem. Therefore we have to show (4.10). With similar calculations as in the proof of [2, Proposition 5.6] we get for every ϕ ∈ D(R^d) that

min(|rE(x, y)ϕ(x)|, |rE(x, y)ϕ(x)|²) ≤ 1_{|r|>1} |rE(x, y)ϕ(x)| + 1_{|r|≤1} |rE(x, y)ϕ(x)|².
Let ϕ ∈ D(R^d) be such that supp ϕ ⊂ B_R(0), R > 0. The estimates above show (4.10). Since ϕ ∈ D(R^d) has compact support, λ^d is finite on the support of ϕ, and with [1, Theorem 3.1, p. 926] we get that

∫_{R^d} u(x)ϕ(x) λ^d(dx) = ∫_{R^d} ∫_{R^d} E(x, y)ϕ(x) λ^d(dx) dL(y) a.s.,

and further a version of u can be chosen such that u(t)ϕ(t) is integrable with respect to λ^d. The linearity of s : D(R^d) → L^0(Ω) is clear, and the estimates above show that it is also continuous; hence s is a generalized random process. In order to see that p(x, D)s = L, we observe for arbitrary f ∈ D(R^d) that ⟨s, p(x, D)*f⟩ = ⟨L, f⟩. As f ∈ D(R^d) was arbitrary, it follows from the fundamental lemma of the calculus of variations that p(x, D)s = L, so s is a generalized solution.
Proof. i) Similar to [17, Remark 3.21] we can estimate the distance function γ in (4.6) from below and obtain for the weak fundamental solution E of the generalized electric Schrödinger operator p(x, D) that

E(x, y) ≤ C_1 e^{−C_2(1+m(y,V)|x−y|)^θ} |x − y|^{2−d}

for some constants C_1, C_2 > 0 and 0 < θ < 1. Hence, we obtain a bound on the distribution function d_{E(x,⋅)} with a constant C_3 > 0. For α > 0 and x, y ∈ R^d, x ≠ y, it follows with the triangle inequality that the relevant sets can be compared (observe that d = 3 and hence the Lebesgue measure of a ball with radius r is (4π/3)r³). It follows with (4.6) that

∫_R 1_{|r|>1} |r| ∫_0^1 d_{E(x,⋅)}(α) λ^1(dα) ν(dr) ≤ C_4(x) ∫_{|r|>1} log(|r|)³ ν(dr) < ∞

by assumption, where 0 < C_4(x) < ∞. Proposition 4.11 i) now gives the existence of a mild solution.
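The role of the exponential decay can be illustrated with a model bound in d = 3: a kernel dominated by C e^{−c|x−y|} |x − y|^{−1} (illustrative constants C = c = 1, not the constants of the proof) lies in L¹ ∩ L² in the y-variable, with exact radial integrals 4π ∫_0^∞ e^{−r} r dr = 4π and 4π ∫_0^∞ e^{−2r} dr = 2π. A numerical sketch:

```python
import numpy as np

# Model bound in d = 3:  E(x, y) <= C * exp(-c|x-y|) / |x-y|.  For C = c = 1,
# polar coordinates (surface measure 4*pi*r^2) give
#   ||E(x,.)||_{L^1}     = 4*pi * int_0^inf exp(-r) * r dr = 4*pi,
#   ||E(x,.)||_{L^2}^2   = 4*pi * int_0^inf exp(-2r) dr    = 2*pi.
r = np.linspace(1e-8, 60.0, 600001)
h = r[1] - r[0]

f1 = 4 * np.pi * np.exp(-r) * r          # integrand of the L^1 norm
f2 = 4 * np.pi * np.exp(-2 * r)          # integrand of the squared L^2 norm
trap = lambda f: h * (np.sum(f) - 0.5 * (f[0] + f[-1]))   # trapezoid rule

print(trap(f1), 4 * np.pi)               # ~ 12.566
print(trap(f2), 2 * np.pi)               # ~ 6.283
```

Without the exponential factor the L² integral near the origin and the L¹ integral at infinity would both diverge, which is exactly why the Schrödinger case allows weaker moment conditions than the general divergence form case.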
To show the continuity of the mild solution, by the previous estimates and Proposition 4.11 ii) it is sufficient to prove that x ↦ E(x, ⋅) is continuous in L¹(R^d) and L²(R^d). Let x_0 ∈ R^d, let (x_n)_{n∈N} be a sequence such that x_n → x_0 as n → ∞, and let 0 < 2|x_0 − x_n| < r_0 for all n ≥ M, M ∈ N. We calculate that

‖E(x_0, ⋅) − E(x_n, ⋅)‖_{L¹(R^d)} ≤ ‖E(x_0, ⋅) − E(x_n, ⋅)‖_{L¹(B_{r_0}(x_0))} + ‖E(x_0, ⋅) − E(x_n, ⋅)‖_{L¹(R^d∖B_{r_0}(x_0))}.
As (x_n)_{n≥M} is bounded we can find an integrable majorant on R^d ∖ B_{r_0}(x_0). We know from [13, Chapter 7] that E is continuous, and by Lebesgue's dominated convergence theorem we obtain

lim_{n→∞} ‖E(x_0, ⋅) − E(x_n, ⋅)‖_{L¹(R^d∖B_{r_0}(x_0))} = 0.
We see that the remaining term over B_{r_0}(x_0) is small for small r_0; by letting r_0 go to 0 we obtain that lim_{n→∞} ‖E(x_0, ⋅) − E(x_n, ⋅)‖_{L¹(R^d)} = 0. The same proof works for the L²-norm.
ii) Let Ẽ be the left inverse of p(x, D)*, i.e. it holds that ∫_{R^d} Ẽ(x, y)p(y, D)*ϕ(y) λ^d(dy) = ϕ(x) for ϕ ∈ D(R^d). We have to show that Ẽ_R ∈ L¹(R^d) ∩ L²(R^d) in order to satisfy the assumptions of Theorem 3.4. As Ẽ(x, y) = E(y, x), we can show by a similar argument as in i) that for R > 0

∫_{B_R(0)} |Ẽ(x, y)| λ^d(dy) ≤ ∫_{B_R(0)} C_1 e^{−C_2(1+m(y,V)|x−y|)^θ} |x − y|^{2−d} λ^d(dy).
By using (4.12) we obtain bounds with constants depending on R, and therefore ‖Ẽ_R‖_{L¹(R^d)} + ‖Ẽ_R‖_{L²(R^d)} < ∞. We observe from (4.6) and [17, Remark 3.21], by applying the triangle inequality, a further estimate with constants C′_R, C″_R > 0 dependent on R. With similar arguments as in i) this leads to the required moment bound with a constant C_R > 0 dependent on R > 0. The existence of a generalized solution s : D(R^d) → L^0(Ω) then follows from Theorem 3.4.
iii) Given the mild solution from i), we obtain with (4.12) for R > 0 the required integrability, and hence the assertion follows by Theorem 4.12.