Drilling holes in the Brownian disk: The Brownian annulus
Jean-François Le Gall, Alexis Metz-Donnadieu
arXiv: http://arxiv.org/abs/2407.13544v1 (18 July 2024). Primary category: math.PR; MSC classes: 60D05, 60F17.
§ ABSTRACT

We give a new construction of the Brownian annulus based on removing a hull centered at the distinguished point in the free Brownian disk. We use this construction to prove that the Brownian annulus is the scaling limit of Boltzmann triangulations with two boundaries. We also prove that the space obtained by removing hulls centered at the two distinguished points of the Brownian sphere is a Brownian annulus. Our proofs rely on a detailed analysis of the peeling by layers algorithm for Boltzmann triangulations with a boundary.

§ INTRODUCTION

Brownian surfaces are basic models of random geometry that have been the subject of intensive research in recent years. They arise as scaling limits of large classes of random planar maps viewed as random metric spaces, for the Gromov-Hausdorff topology. The first result in this direction was the convergence to the Brownian sphere <cit.>, which is a Brownian surface in genus 0 with no boundary. This convergence has been extended to many different classes of random planar maps by several authors. The recent paper of Bettinelli and Miermont <cit.> constructs general Brownian surfaces in arbitrary genus g and with a finite number of boundaries of given sizes, as the scaling limit of large random quadrangulations with boundaries (boundaries of quadrangulations are distinguished faces with arbitrary degrees, whereas the other faces have degree 4). The construction of <cit.> applies to the case where the volume of the surface is fixed as well as the boundary sizes, and it is also of interest to consider “free” models where this volume is not fixed, which appear as scaling limits of planar maps distributed according to Boltzmann weights. The special case where there is only one boundary in genus 0 corresponds to the so-called Brownian disk, which has been studied extensively (see in particular <cit.>). Our object of interest in this work is the free Brownian annulus, which is a Brownian surface in genus 0 with two boundaries.
As noted in <cit.>, the free Brownian annulus is one of the very few Brownian surfaces (together with the Brownian disk and the pointed Brownian disk) for which the free model makes sense under a probability measure — for instance, the free Brownian sphere is defined under an infinite measure. One motivation for the present work came from the recent paper of Ang, Rémy and Sun <cit.>, which studies the modulus of Brownian annuli in random conformal geometry. The definition of the Brownian annulus in <cit.> is based on considering the complement of a hull in the free pointed Brownian disk conditionally on the event that the hull boundary has a fixed size. As the authors of <cit.> observe, this definition leads to certain technical difficulties due to conditioning on an event of probability zero. In this work, we give a slightly different construction of the Brownian annulus which involves only conditioning on an event of positive probability. We show that this definition is equivalent to the one in <cit.>, and we also relate our construction to the scaling limit approach of Bettinelli and Miermont <cit.> by showing that the Brownian annulus is the scaling limit of large random triangulations with two boundaries — this was asserted without proof in <cit.>.

Let us give a more precise description of our main results. We start from a free pointed Brownian disk (𝔻,D) with boundary size a>0. As usual, ∂𝔻 denotes the boundary of 𝔻. Then, 𝔻 has a distinguished interior point denoted by x_*. For every r∈ (0,D(x_*,∂𝔻)), we denote the closed ball of radius r centered at x_* by B_r(x_*), and the hull H_r is obtained by “filling in the holes” of B_r(x_*). In more precise terms, 𝔻∖ H_r is the connected component of 𝔻∖ B_r(x_*) that contains the boundary ∂𝔻. The perimeter or boundary size of H_r may then be defined as 𝒫_r=lim_ε→ 0 ε^-2 𝐕({x∈𝔻∖ H_r:D(x,H_r)<ε}), where 𝐕 is the volume measure of 𝔻. The process (𝒫_r)_0<r<D(x_*,∂𝔻) has a modification with càdlàg sample paths and no positive jumps, and, for every b>0, we set r_b=inf{r∈ (0,D(x_*,∂𝔻)):𝒫_r=b}, where inf∅=∞. Then ℙ(r_b<∞)=a/(a+b) (Lemma <ref>), and, on the event {r_b<∞}, r_b is the radius of the first hull of boundary size b. Under the conditional probability ℙ(·| r_b<∞), we define the Brownian annulus of boundary sizes a and b, denoted by ℂ_(a,b), as the closure of 𝔻∖ H_r_b, which is equipped with the continuous extension d^∘ of the intrinsic metric on 𝔻∖ H_r_b and with the restriction of the volume measure of 𝔻 (Theorem <ref>). It is convenient to view ℂ_(a,b) as a measure metric space marked with two compact subsets (the “boundaries”) which are here ∂𝔻 and ∂ H_r_b.

Much of the present work is devoted to proving that the space ℂ_(a,b) is the Gromov-Hausdorff limit of rescaled triangulations with two boundaries. More precisely, for every sufficiently large integer L, let 𝒞^L be a random planar triangulation with two simple boundaries of respective sizes ⌊ a L⌋ and ⌊ b L⌋ (see <cit.> for precise definitions of triangulations with boundaries). Assume that 𝒞^L is distributed according to Boltzmann weights, meaning that the probability of a given triangulation τ is proportional to (12√(3))^-k(τ), where k(τ) is the number of internal vertices of τ. We equip the vertex set V(𝒞^L) with the graph distance rescaled by the factor √(3/2) L^-1/2, which we denote by d^∘_L. Then, Theorem <ref> states that (V(𝒞^L),d^∘_L) ⟶ (ℂ_(a,b),d^∘) as L→∞, in distribution in the Gromov-Hausdorff sense.
Theorem <ref> gives a stronger version of this convergence by considering the Gromov-Hausdorff-Prokhorov distance on measure metric spaces marked with two boundaries, in the spirit of <cit.> (see Section <ref> below). The proof of the convergence (<ref>) relies on two main ingredients. The first one is a result of Albenque, Holden and Sun <cit.> showing that the free Brownian disk is the scaling limit of Boltzmann triangulations with a simple boundary, when the boundary size tends to ∞. The second ingredient is the peeling by layers algorithm for Boltzmann triangulations with a simple boundary, which was already investigated in the recent paper <cit.> in view of studying the spatial Markov property of Brownian disks. Roughly speaking, a peeling algorithm “explores” a Boltzmann triangulation 𝒟^L with boundary size ⌊ a L⌋ step by step, starting from a distinguished interior vertex, and, in the special case of the peeling by layers, the explored region at every step is close to a (discrete) hull centered at the distinguished vertex. At the first time when the boundary size of the explored region becomes equal to ⌊ b L⌋ (conditionally on the event that this time exists), the unexplored region is a Boltzmann triangulation with two simple boundaries of sizes ⌊ a L⌋ and ⌊ b L⌋, and is therefore distributed as 𝒞^L. One can then use the main result of <cit.> giving the scaling limit of 𝒟^L to derive the convergence (<ref>). Making this argument precise requires a number of preliminary results, and in particular a detailed study of asymptotics for the peeling process of Boltzmann triangulations with a boundary, which is of independent interest (see Section <ref> below). These asymptotics are closely related to the similar results for the peeling process of the UIPT obtained in <cit.>. As a by-product of our construction, we obtain several other results relating the Brownian annulus to the Brownian disk or the Brownian sphere. Consider again the free pointed Brownian disk (𝔻,D) of perimeter a, but now fix r>0. Conditionally on {D(x_*,∂𝔻)>r, 𝒫_r=b} the closure of 𝔻∖ H_r equipped with an (extended) intrinsic metric has the same distribution as ℂ_(a,b) (Proposition <ref>) and furthermore is independent of the hull H_r also viewed a random metric space for the appropriate intrinsic metric. This result in fact corresponds to the definition of the Brownian annulus in <cit.>. Another related result involves removing two disjoint hulls in the free Brownian sphere. Write (_∞, 𝐃) for the free Brownian sphere, which has two distinguished points denoted by _* and _0 that play symmetric roles. For every r>0 and x∈_∞, let B^∞_r(x) be the closed ball of radius r centered at x in _∞. Then, for r∈(0,𝐃(_*,_0)), let the hull B^∙_r(_*) be the complement of the connected component of _∞∖ B^∞_r(_*) that contains _0, and define B^∙_r(_0) by interchanging the roles of _* and _0. Let r,r'>0. Then, conditionally on the event {𝐃(_*,_0)>r+r'}, the three spaces B^∙_r(_*), B^∙_r'(_0) and _∞∖ (B^∙_r(_*)∪ B^∙_r'(_0)) are independent conditionally on the perimeters |∂ B^∙_r(_*)| and |∂ B^∙_r'(_0)| (these perimeters are defined by a formula analogous to (<ref>)), and _∞∖ (B^∙_r(_*)∪ B^∙_r'(_0)) is a Brownian annulus with boundary sizes |∂ B^∙_r(_*)| and |∂ B^∙_r'(_0)| (see Theorem <ref> below for a more precise statement, and <cit.> for a closely related result). In addition to our main results, we obtain certain explicit formulas, which are of independent interest. 
In particular, Proposition <ref> gives the distribution of 𝒫_r under (·∩{ r<D(x_*,∂𝔻)}) (note that the distribution of D(x_*,∂𝔻) was computed in <cit.>). We also consider the “length” ℒ_(a,b) of ℂ_(a,b), which is the minimal distance between the two boundaries. By combining our definition of ℂ_(a,b) with the Bettinelli-Miermont construction of the Brownian disk, one gets that ℒ_(a,b) is distributed as the last passage time at level b for a continuous-state branching process with branching mechanism ψ(λ):=√(8/3) λ^3/2 started with initial density 3/2 a^3/2(a+z)^-5/2, and conditioned to hit b (this conditioning event has probability a/(a+b)). Unfortunately, we have not been able to use this description to derive an explicit formula for the distribution of ℒ_(a,b), but Proposition <ref> gives a remarkably simple formula for its first moment: [ℒ_(a,b)]=√(3π/2)(a+b)(√( a^-1)+√(b^-1)-√(a^-1+b^-1)). The paper is organized as follows. Section <ref> gathers a number of preliminaries, concerning in particular the peeling algorithm for random triangulations, the Bettinelli-Miermont construction of the free pointed Brownian disk, and a useful embedding of the Brownian disk in the Brownian sphere. Then, Section <ref> presents our construction of the Brownian annulus, and also proves a technical lemma that will be used in the proof of the convergence of rescaled triangulations to the Brownian annulus. In Section <ref>, we recall the key convergence of rescaled triangulations with a boundary to the Brownian disk, and we use this result to investigate the convergence of certain explored regions in the peeling algorithm of Boltzmann triangulations towards hulls in the Brownian disk. Section <ref> is devoted to asymptotics for the perimeter process in the peeling by layers algorithm of Boltzmann triangulations: the ultimate goal of these asymptotics is to verify that the (suitably rescaled) first radius at which the perimeter of the explored region hits the value ⌊ bL⌋ converges to r_b, and that this convergence takes place jointly with the convergence to the Brownian disk (Corollary <ref>). Section <ref> gives the proof of the scaling limit (<ref>). If 𝒞^L is constructed via the peeling algorithm as explained above, a technical difficulty comes from the fact that it is not easy to control distances near the boundary of the unexplored region, and, to overcome this problem, we use approximating spaces obtained by removing a tubular neighborhood of the latter boundary. In Section <ref>, we explain how the convergence (<ref>) can be sharpened to hold in the sense of the Gromov-Hausdorff-Prokhorov topology on measure metric spaces marked with two boundaries. Section <ref> explains the relation between our construction of the Brownian annulus and the definition of <cit.>, and also proves Theorem <ref> showing that the complement of the union of two hulls centered at the distinguished points of the Brownian sphere is a Brownian annulus. Finally, Section <ref> discusses the distribution of the length ℒ_(a,b) of the Brownian annulus. § PRELIMINARIES In this section, we recall the basic definitions and the theoretical framework that we will use in this paper. Section <ref> introduces Boltzmann triangulations as well as the peeling by layers algorithm, which plays an important role in this work. In Section <ref>, we recall the definition of the Gromov-Hausdorff-Prohorov topology for measure metric spaces, using the formalism of <cit.>. 
Section <ref> gives a construction of the free Brownian disk, which is the compact metric space arising as the scaling limit of Boltzmann triangulations with a boundary.

§.§ Boltzmann triangulations of the disk and the annulus

For two integers L≥ 1 and k≥ 0, we let 𝕋^1(L, k) be the set of all pairs (τ, e), where τ is a type I planar triangulation with a simple boundary ∂τ of length L and k internal vertices, and where e is a distinguished edge on ∂τ. Here, type I means that we allow the presence of multiple edges and loops, but the boundary has to remain simple. Each edge e of ∂τ is oriented so that the outer face lies to the left of e (see Figure 1), and we write |∂τ|=L for the boundary size of τ. By convention, we will consider the map consisting of a single oriented (simple) edge e as an element of 𝕋^1(2, 0), and in that special case it is convenient to consider that ∂τ consists of two oriented edges, namely e and e with the reverse orientation. For integers L,p≥ 1 and k≥ 0, we let 𝕋^2(L, p, k) be the set of triplets (τ, e_0, e_1), where τ is a planar triangulation of type I having two vertex-disjoint simple boundaries — namely an outer boundary ∂_0τ of length L and an inner boundary ∂_1τ of length p — and k internal vertices, and where e_0 and e_1 are distinguished edges on ∂_0τ and ∂_1τ respectively. The edges on the boundaries are again oriented so that the boundary faces lie on their left. See <cit.> for more precise definitions. We have the following explicit enumeration formulas (cf. <cit.>): ∀ (L, k)≠ (1, 0), Card 𝕋^1(L, k)= (4^{k-1} (2L+3k-5)!!)/(k! (2L+k-1)!!) · L\binom{2L}{L}, and ∀ L, p≥ 1, k≥ 0, Card 𝕋^2(L, p, k)= (4^{k} (2(L+p)+3k-2)!!)/(k! (2(L+p)+k)!!) · L\binom{2L}{L} · p\binom{2p}{p}, with the convention (-1)!!=1. Note that, in the case (L,k)=(2, 0), formula (<ref>) remains valid thanks to the previous convention making the map composed of a single edge an element of 𝕋^1(2,0). In the following, we are interested in triangulations for which the number of internal vertices is random, and we set 𝕋^1(L)=⋃_k≥ 0𝕋^1(L, k) and 𝕋^2(L, p)= ⋃_k≥ 0𝕋^2(L, p, k). A random triangulation 𝒯 in 𝕋^1(L) (resp. in 𝕋^2(L, p)) is said to be Boltzmann distributed if, for every k≥ 0 and every θ∈𝕋^1(L, k) (resp. θ∈𝕋^2(L, p, k)), the probability that 𝒯=θ is proportional to (12√(3))^-k. More precisely, asymptotics of (<ref>) and (<ref>) show that the quantities Z^1(L):=∑_k≥ 0 (12√(3))^-k Card 𝕋^1(L, k), and Z^2(L, p):=∑_k≥ 0 (12√(3))^-k Card 𝕋^2(L,p,k), are finite. The Boltzmann measure on 𝕋^1(L) gives probability Z^1(L)^-1(12√(3))^-k to any triangulation θ∈𝕋^1(L, k), where k≥ 0. Similarly, the Boltzmann measure on 𝕋^2(L,p) gives probability Z^2(L, p)^-1(12√(3))^-k to any triangulation θ∈𝕋^2(L,p, k). By <cit.>, Section 2.2, we have the explicit expression: ∀ L≥ 1, Z^1(L)=6^L (2L-5)!!/(8√(3) L!), where again (-1)!!=1. In the following, it will also be useful to define Z^1(0):=(24√(3))^-1. Finally, we let 𝕋^1, ∙(L, k) be the set of all triangulations in 𝕋^1(L, k) that have (in addition to the distinguished edge on the boundary) another distinguished oriented edge chosen among all edges of the triangulation. This second distinguished edge may or may not be part of the boundary, but we will call it the distinguished interior edge with some abuse of terminology. The Boltzmann measure on 𝕋^1,∙(L)=⋃_k≥ 0𝕋^1,∙(L, k) is again the probability measure that gives probability proportional to (12√(3))^-k to any τ∈𝕋^1,∙(L, k). This makes sense because a triangulation τ∈𝕋^1(L, k) has 3k+2L-3 edges, by Euler's formula, so that the number of ways of choosing an oriented edge in τ is 6k+4L-6, and we have: Z^1, ∙(L):= ∑_k≥ 0(6k+4L-6)(12√(3))^-k Card 𝕋^1(L, k)<∞, since Card 𝕋^1(L, k)= O((12√(3))^k k^-5/2) when k→∞. Note that Card 𝕋^1, ∙(2, 0)=2.
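The two displayed formulas can be checked against each other numerically. The following short Python script (ours, not part of the paper) evaluates the counting formula for Card 𝕋^1(L, k), with the binomial factor L\binom{2L}{L} read as reconstructed above, via log-double-factorials, and compares the truncated series ∑_k (12√(3))^-k Card 𝕋^1(L, k) with the closed form Z^1(L); since the terms decay like k^-5/2, the truncation error is small after a few hundred thousand terms.

```python
import math

def log_dfact(n):
    """log(n!!), with the convention (-1)!! = 1 (and 0!! = 1)."""
    if n <= 0:
        return 0.0
    if n % 2 == 0:
        m = n // 2                      # n!! = 2^m * m!
        return m * math.log(2.0) + math.lgamma(m + 1)
    m = (n + 1) // 2                    # n = 2m-1, so n!! = (2m)! / (2^m m!)
    return math.lgamma(2 * m + 1) - m * math.log(2.0) - math.lgamma(m + 1)

def log_card_T1(L, k):
    """log Card T^1(L,k) = log[ 4^{k-1}(2L+3k-5)!! / (k!(2L+k-1)!!) * L*binom(2L,L) ]."""
    return ((k - 1) * math.log(4.0) + log_dfact(2 * L + 3 * k - 5)
            - math.lgamma(k + 1) - log_dfact(2 * L + k - 1)
            + math.log(L) + math.lgamma(2 * L + 1) - 2.0 * math.lgamma(L + 1))

def Z1_closed_form(L):
    """Z^1(L) = 6^L (2L-5)!! / (8 sqrt(3) L!)."""
    return math.exp(L * math.log(6.0) + log_dfact(2 * L - 5)
                    - math.lgamma(L + 1)) / (8.0 * math.sqrt(3.0))

def Z1_partial_sum(L, K=100_000):
    """Truncated series sum_{k<K} (12 sqrt(3))^{-k} Card T^1(L,k)."""
    log_w = -math.log(12.0 * math.sqrt(3.0))
    return sum(math.exp(log_card_T1(L, k) + k * log_w) for k in range(K))

for L in (2, 3, 4, 5):
    print(L, Z1_partial_sum(L), Z1_closed_form(L))
# The truncated sums should approach the closed form; the remainder decays like
# K^{-3/2} because the terms behave like a constant times k^{-5/2}.
```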
Peeling and the discrete spatial Markov property

We now recall the main properties of the so-called peeling algorithm. We refer to <cit.> for a more detailed introduction to this algorithm. In the following, it will be convenient to add an isolated point † to the different state spaces that we will consider. The point † will play the role of a cemetery point when the exploration given by the peeling algorithm hits the boundary. Fix p≥ 1, γ∈𝕋^2(L, p) and let e be an edge of ∂_1γ (this edge will be called the peeled edge). Let u be the vertex opposite e in the unique internal face f of γ incident to e. Three configurations may occur:
* u is an internal vertex of γ; in this case we call peeling of γ along the edge e the sub-triangulation of γ consisting of the internal faces of γ distinct from f. We see this triangulation as an element of 𝕋^2(L, p+1).
* u is an element of the inner boundary ∂_1γ. In this case f splits γ into two components, only one of which is incident to the outer boundary ∂_0γ. We call peeling of γ along the edge e the sub-triangulation consisting of the faces of this component, that we see as an element of 𝕋^2(L, p') for some 1 ≤ p'≤ p.
* Finally, if u belongs to the outer boundary of γ, we say by convention that the “triangulation” obtained by peeling γ along e is †.
Note that this description is slightly incomplete since it would be necessary to specify (in the first two cases) how the new distinguished edge on the inner boundary is chosen. In what follows, we will iterate the peeling algorithm, and it will be sufficient to say that this new distinguished edge is chosen at every step as a deterministic function of the rooted planar map that is made of the initial inner boundary and of the faces that have been “removed” by the peeling algorithm up to this step. Let us fix an algorithm 𝒜 that chooses for any triangulation τ∈⋃_p≥ 1𝕋^1, ∙(p) an edge e of ∂τ. The peeling of a triangulation according to the algorithm 𝒜 consists in recursively applying the peeling procedure described above, choosing the peeled edge at each step as prescribed by 𝒜. Let us give a more precise description. We start with a triangulation γ∈𝕋^1,∙(L) and we let e_0 be its distinguished interior edge. If e_0 is incident to the boundary ∂γ of γ, we set by convention γ_0=τ_0=†. Otherwise, if e_0 is a loop, we let τ_0 be the triangulation induced by the faces of γ inside the loop and we let γ_0 be the triangulation that consists of the faces of γ outside this loop. We view τ_0 as an element of 𝕋^1, ∙(1, k) for some k≥ 0 (we let both distinguished edges be the loop e_0 oriented clockwise) and we view γ_0 as an element of 𝕋^2(L, 1) by seeing the loop as bounding an internal face of degree one. Finally, if e_0 is a simple edge (not incident to ∂γ), we let τ_0 be the unique element of 𝕋^1, ∙(2, 0) with both distinguished edges oriented in the same direction, and γ_0 is the element of 𝕋^2(L, 2) obtained from γ by splitting the edge e_0 so as to create an inner boundary face of degree 2 (cf. Figure 2) – note that our special convention for ∂τ_0 explained at the beginning of Section <ref> allows us to identify ∂_1γ_0 with ∂τ_0 in that case.
We then build recursively two sequences (τ_i)_i≥ 0 (the explored part) and (γ_i)_i≥ 0 (the unexplored part), in such a way that, for every i≥ 0 such that τ_i≠†, we have τ_i∈𝕋^1, ∙(p) and γ_i∈𝕋^2(L, p), for some p≥ 1, and the inner boundary ∂_1γ_i is identified with ∂τ_i. Assume that we have constructed τ_i and γ_i for some i≥ 0. If τ_i=†, we set τ_i+1=γ_i+1=†. Otherwise the algorithm 𝒜 applied to τ_i yields an edge e of ∂τ_i=∂_1γ_i. The triangulation γ_i+1 is obtained by peeling γ_i along this edge. If γ_i+1≠†, we let τ_i+1 be the triangulation obtained by adding to τ_i the faces of γ_i that we removed by the peeling of e. The distinguished edge on the boundary of τ_i+1 is the one that is identified to the distinguished edge of γ_i+1 on its second boundary, and the other distinguished edge of τ_i+1 is taken to be the same as the one of τ_i. Finally, if γ_i+1=†, we simply take τ_i+1=†. In the case of Boltzmann triangulations, the peeling is a “Markovian exploration”. More precisely, we apply the peeling procedure described above to a random triangulation 𝒟^L distributed according to the Boltzmann measure on 𝕋^1, ∙(L). This gives rise to two sequences of random triangulations (T_i^L)_i≥ 0 (explored parts) and (U_i^L)_i≥ 0 (unexplored parts). Then, conditionally on the event {T_i^L≠†} and on the value |∂ T_i^L|, the triangulation U_i^L is distributed according to the Boltzmann measure on 𝕋^2(L, |∂ T_i^L|) and is independent of T_i^L. We will call this property the spatial Markov property for the peeling of Boltzmann triangulations.

Peeling by layers and perimeter process

Let x_*^L be the root of the distinguished interior edge of 𝒟^L and let Δ^L be the graph distance in 𝒟^L. In the following, we will use a particular peeling algorithm — that is, a particular choice of 𝒜 — which we call the peeling by layers. This algorithm is designed to satisfy the following additional property: for every i such that T_i^L≠†, if we set h_i^L:=Δ^L(x_*^L, ∂ T_i^L), then for every vertex u of ∂ T^L_i, we have h_i^L≤Δ^L(u, x_*^L)≤ h_i^L+1. In other words, the distances from boundary vertices of T_i^L to x_*^L in 𝒟^L take at most two consecutive values at any given step. It is easy to choose the peeling algorithm so that this property holds, and we will assume that (T_i^L)_i≥ 0 and (U_i^L)_i≥ 0 are obtained by such a peeling algorithm. We refer to <cit.> for a more precise description of the peeling by layers algorithm. An important object for us is the random sequence (|∂ T_i^L|)_i≥ 0 taking values in ℕ∪{†} and recording the evolution of the perimeter of the part explored by the peeling by layers algorithm, where by convention |∂ T_i^L|=† if T_i^L=†. By the arguments of <cit.>, Section 3, conditionally on the value of |∂ T_0^L|∈{1, 2, †}, this perimeter process is a Markov chain on ℕ∪{†} starting from |∂ T_0^L|∈{1,2,†} whose transition kernel q_L is given for every k≥ 1 and m∈{-1, 0, … , k-1} by: q_L(k, k-m)=2Z^1(m+1)Z^2(L, k-m)/Z^2(L, k), and q_L(k, †)=1-∑_m=-1^k-1 q_L(k, k-m) for all k≥ 1, q_L(†, †)=1. This kernel is closely related to the transition kernel q_∞ of the perimeter process of the UIPT of type I (cf. <cit.>, Section 6.1), which is defined for every k≥ 1 and m∈{-1,0, …, k-1} by: q_∞(k, k-m)= 2Z^1(m+1)C^(1)(k-m)/C^(1)(k), where we wrote C^(1)(k):= (3^{k-2}/(4√(2π))) · k\binom{2k}{k}. As noted in <cit.>, the Markov chain associated with the kernel q_L is a Doob h-transform of the chain associated with q_∞, for the harmonic function 𝐡_L(j):=L/(L+j), j≥ 1.
More precisely, for every p≥ 1, m∈{-1,0, …, p-1}: q_L(p, p-m)=𝐡_L(p-m)/𝐡_L(p) q_∞(p, p-m).

§.§ Convergence of metric spaces

In order to state the convergence of (rescaled) Boltzmann triangulations with two boundaries towards the Brownian annulus, we will consider the space 𝕄 of all isometry classes of compact metric spaces, and we will write d_𝙶𝙷 for the usual Gromov-Hausdorff distance on 𝕄. Then (𝕄, d_𝙶𝙷) is a Polish space. We will use analogs of the Gromov-Hausdorff distance for spaces marked with subspaces and measures, which we present along the lines of <cit.>. Here and in what follows, if (E,Δ) is a compact metric space, we will write Δ_𝙷 and Δ_𝙿 for the Hausdorff and Prohorov metrics associated with Δ, which are defined respectively on the set of all nonempty compact subsets of E and on the set of all finite Borel measures on E. For l∈ℕ, we let 𝕄^l, 1 be the set of all isomorphism classes (for an obvious notion of isomorphism) of compact metric spaces marked with l compact subspaces and a finite measure. More precisely, we consider marked spaces of the form ((𝒳, d_𝒳), 𝐀, μ) where:
• (𝒳, d_𝒳) is a compact metric space,
• 𝐀=(𝐀_1,…,𝐀_l) is an l-tuple of compact subsets of 𝒳,
• μ is a finite Borel measure on 𝒳.
The set 𝕄^l,1 is endowed with a metric d^l,1_𝙶𝙷𝙿, which is defined for any two spaces 𝕏=((𝒳, d_𝒳), 𝐀, μ) and 𝕐=((𝒴, d_𝒴), 𝐁, ρ) in 𝕄^l,1 by: d^l,1_𝙶𝙷𝙿(𝕏, 𝕐)=inf max{Δ_𝙷(ι_𝒳(𝒳), ι_𝒴(𝒴)), max_1≤ i≤ l Δ_𝙷(ι_𝒳(𝐀_i), ι_𝒴(𝐁_i)), Δ_𝙿((ι_𝒳)_*μ, (ι_𝒴)_*ρ)}, where the infimum is taken over all compact metric spaces (𝒵, Δ) and isometric embeddings ι_𝒳:(𝒳, d_𝒳)→ (𝒵, Δ) and ι_𝒴:(𝒴, d_𝒴)→ (𝒵, Δ). Then d_𝙶𝙷𝙿^l,1 is a metric on 𝕄^l, 1. Furthermore, (𝕄^l,1,d^l,1_𝙶𝙷𝙿) is a Polish space. In what follows, we will be interested in the case l=2: the Brownian annulus comes with a volume measure and with two distinguished subsets which are its boundaries.

§.§ The Bettinelli-Miermont construction of the Brownian disk

This section presents a variant of the Bettinelli-Miermont construction of the free Brownian disk, which is based on a quotient space defined from a Poisson family of Brownian trees. We borrow the formalism of <cit.>.

The Brownian snake

We start with a brief presentation of the Brownian snake, referring to <cit.> for more details. Let 𝒲 be the set of continuous paths w:[0, ζ(w)]→ℝ, where ζ(w)≥ 0 is a nonnegative real number called the lifetime of w. We endow this set with the distance: d_𝒲(w, w')=|ζ(w)-ζ(w')|+sup_t≥ 0 |w(t∧ζ(w))-w'(t∧ζ(w'))|. For every x∈ℝ, let 𝒲_x be the set of all w∈𝒲 such that w(0)=x. We identify the unique element of 𝒲_x having lifetime 0 with the real number x. A snake trajectory starting at x is a continuous function ω:ℝ_+→𝒲_x satisfying:
• ω_0=x and σ(ω):=sup{s≥ 0, ω_s≠ x}<∞;
• for all 0≤ s≤ s', ω_s(t)=ω_s'(t) whenever t≤min_u∈ [s, s']ζ(ω_u).
Let 𝒮_x be the set of snake trajectories starting from x, that we endow with the distance: d_𝒮_x(ω, ω')=|σ(ω)-σ(ω')|+sup_s≥ 0 d_𝒲(ω_s,ω'_s). If ω∈𝒮_x, we let ζ_ω:ℝ_+→ℝ_+ be the function defined by setting ζ_ω(s):=ζ(ω_s) and we also write ω̂ for the function called the head of the snake trajectory ω defined by ω̂(s):= ω_s(ζ_ω(s)). One easily verifies that ω is entirely determined by the two functions ζ_ω and ω̂. We will also use the notation W_*(ω)=min{ω̂_s:s∈ [0,σ(ω)]}. Given a snake trajectory ω, we can define a (labelled compact) ℝ-tree T_ω, which is called the genealogical tree of ω.
To construct this tree, we introduce the pseudo-distance d_ω on [0, σ(ω)] given by: ∀ s, t∈ [0, σ(ω)], d_ω(s, t)= ζ_ω(s)+ζ_ω(t)-2min_u∈ [s, t]ζ_ω(u), and we define T_ω as the quotient space of [0, σ(ω)] for the equivalence relation s∼ t iff d_ω(s, t)=0, which is equipped with the metric induced by d_ω. We let p_ω:[0, σ(ω)]→ T_ω be the canonical projection and we write ρ_ω:=p_ω(0) for the “root” of T_ω. The volume measure on T_ω is just the pushforward of Lebesgue measure on [0,σ(ω)] under the projection p_ω. By the definition of snake trajectories, the property p_ω(s)=p_ω(t) implies that ω̂(s)=ω̂(t). Thus we can define a natural labelling ℓ_ω:T_ω→ℝ by requiring that ω̂=ℓ_ω∘ p_ω. Let x∈ℝ. The Brownian snake excursion measure with initial point x is the σ-finite measure ℕ_x on 𝒮_x such that the pushforward of ℕ_x under the function ω↦ζ_ω is the Itô measure of positive Brownian excursions, normalized so that ℕ_x(sup_s≥ 0ζ_ω(s)≥ε)=1/(2ε), and such that, under ℕ_x and conditionally on ζ_ω, the process (ω̂_s)_s≥ 0 is a Gaussian process centered at x with covariance kernel K(s, s')=min_u∈[s, s']ζ_ω(u) when s≤ s'. We will use some properties of exit measures of the Brownian snake. If w∈𝒲 and y∈ℝ, we write τ_y(w)=inf{t≤ζ(w): w(t)=y}, with the convention inf∅ =+∞. If x∈ℝ and y∈(-∞,x), the quantity: 𝒵_y(ω):=lim_ε→ 0 (1/ε^2)∫_0^σ(ω)1_{τ_y(ω_s)= ∞, ω̂(s)<y+ε} ds, exists ℕ_x(dω) almost everywhere and is called the exit measure at y. The process (𝒵_y(ω))_y∈(-∞,x) has a càdlàg modification with no positive jumps, which we consider from now on.

The free Brownian sphere

Let us now recall the construction of the free Brownian sphere under the measure ℕ_0(dω). We start by recalling the definition of “intervals” on the genealogical tree T_ω of a snake trajectory ω. We use the convention that, if s,t∈[0,σ(ω)] and s>t, the interval [s,t] is defined by [s,t]=[s,σ(ω)]∪[0,t]. Then, if u,v∈ T_ω, there is a smallest interval [s,t], with s,t∈[0,σ(ω)], such that p_ω(s)=u and p_ω(t)=v, and we define u,v=p_ω([s,t]). We set, for every u,v∈ T_ω, 𝐃^∘(u, v):=ℓ_ω(u)+ℓ_ω(v)-2max(min_w∈ u, vℓ_ω(w), min_w∈ v, uℓ_ω(w)), and 𝐃(u, v):= inf_u=u_0,u_1, …, u_p=v∑_i=1^p𝐃^∘(u_i-1, u_i), where the infimum is taken over all choices of the integer p≥ 1 and the points u_0,…, u_p∈ T_ω such that u_0=u and u_p=v. Then, 𝐃 is a pseudo-metric on T_ω, and the free Brownian sphere is the associated quotient space _∞=T_ω/{𝐃=0}, which is equipped with the metric induced by 𝐃, for which we keep the same notation. We note that the free Brownian sphere is a geodesic space (any two points are linked by at least one geodesic). We emphasize that the free Brownian sphere is defined under the infinite measure _0, but later we will consider specific conditionings of _0 giving rise to finite measures. We write Π for the canonical projection from T_ω onto _∞. The volume measure Vol(·) on _∞ is the pushforward of the volume measure on T_ω under Π. For u,v∈ T_ω, the property 𝐃(u,v)=0 implies ℓ_ω(u)=ℓ_ω(v), and so we can define ℓ(x) for every x∈_∞, in such a way that ℓ(x)=ℓ_ω(u) whenever x=Π(u). There is a unique point _* of _∞ such that ℓ(_*)=min_x∈_∞ℓ(x), and we have 𝐃(_*,x)= ℓ(x)-ℓ(_*) for every x∈_∞. We will write _*:=-ℓ(_*). We also observe that the free Brownian sphere has another distinguished point, namely _0:=Π(ρ_ω). Note that 𝐃(_*,_0)=-ℓ(_*)=_*.

Let us now turn to hulls. For every r>0 and x∈_∞, we write B^∞_r(x) for the closed ball of radius r centered at x in _∞.
Then, for every r∈(0,_*), the hull B^∙_r(_*) is the complement of the connected component of _∞∖ B^∞_r(_*) that contains _0 (this makes sense because _0∉ B^∞_r(_*) when r<_*). Note that all points of ∂ B^∙_r(_*) are at distance r from _*. By definition, the perimeter of the hull B^∙_r(_*) is the exit measure 𝐏_r:=𝒵_r-r_*. This definition is justified by the property 𝐏_r=lim_ε→ 0 (1/ε^2) Vol({x∈_∞∖ B^∙_r(_*):𝐃(x,B^∙_r(_*))<ε}), which can be deduced from (<ref>). The process (𝐏_r)_r∈(0,_*) has càdlàg sample paths and no positive jumps.

The Bettinelli-Miermont construction of the Brownian disk

We now present a construction of the free pointed Brownian disk, which is the compact metric space that appears as the scaling limit of Boltzmann triangulations in 𝕋^1, ∙(L). We fix a>0 and let (𝚎(t))_t∈ [0, a] be a positive Brownian excursion of duration a. Conditionally on (𝚎(t))_t∈[0,a], let 𝒩=∑_i∈ Iδ_(t_i, ω^i) be a Poisson point measure on [0, a]×𝒮 with intensity 2 dt ℕ_√(3)𝚎(t)(dω). We let ℐ be the quotient space of [0, a]∪⋃_i∈ I T_ω^i, for the equivalence relation that identifies ρ_ω^i and t_i for every i∈ I (and no other pair of points is identified). We endow ℐ with the maximal distance d_ℐ whose restriction to each tree T_ω^i coincides with d_ω^i, and whose restriction to [0,a] is the usual distance. More explicitly, the distance between two points x∈ T_ω^i and y∈ T_ω^j, i≠ j, is given by d_ω^i(x, ρ_ω^i)+|t_i-t_j|+d_ω^j(y, ρ_ω^j). Then ℐ is a compact metric space (in fact, a compact ℝ-tree), and we can consider the labelling ℓ:ℐ→ℝ defined by ℓ(x)=ℓ_ω^i(x) if x∈ T_ω^i for some i∈ I, and ℓ(x)=√(3) 𝚎(x) if x∈ [0, a]. By standard properties of the Itô measure, one verifies that the quantity Σ:=∑_i∈ Iσ(ω^i) is almost surely finite and it is possible to concatenate the functions p_ω^i to obtain a “contour exploration” π:[0, Σ]→ℐ. Formally, to define π, let μ=∑_i∈ Iσ(ω^i)δ_t_i be the point measure on [0, a] giving weight σ(ω^i) to t_i, for every i∈ I, and consider the left-continuous inverse μ^-1 of its cumulative distribution function, μ^-1(s):=inf{t∈[0,a]: μ([0, t])≥ s} for every s∈[0,Σ]. For every s∈ [0, Σ], we set π(s)=p_ω^i(s-μ([0, μ^-1(s)))) if μ^-1(s)=t_i for some i∈ I and π(s)=μ^-1(s) otherwise. This contour exploration π allows us to define intervals on ℐ, in a way similar to what we did on T_ω. For every u, v∈ℐ, there exists a smallest interval [s, t] in [0, Σ] such that π(s)=u and π(t)=v, where by convention [s, t]=[s, Σ]∪[0, t] if s> t, and we write u, v for the subset of ℐ defined by u, v={π(b), b∈ [s, t]}. We then set ∀ u, v∈ℐ, D^∘(u, v):=ℓ(u)+ℓ(v)-2max(min_w∈ u, vℓ(w), min_w∈ v, uℓ(w)), and we consider the pseudo-metric D on ℐ defined for u, v∈ℐ by: D(u, v):= inf_u=u_0,u_1, …, u_p=v∑_i=1^pD^∘(u_i-1, u_i), where the infimum is taken over all choices of the integer p≥ 1 and the points u_0,…, u_p∈ℐ such that u_0=u and u_p=v. The space 𝔻_(a) is defined as the quotient space ℐ/{D=0}, which we equip with the distance induced by D, for which we keep the notation D. Then 𝔻_(a) is a compact metric space. Let Π:ℐ→𝔻_(a) be the canonical projection. It is easy to verify that Π(u)=Π(v) implies ℓ(u)=ℓ(v), and so 𝔻_(a) inherits a labelling function, still denoted by ℓ(·), from the labelling of ℐ. We can then define:
• 𝐕=(Π∘π)_* λ_[0, Σ], where λ_[0, Σ] denotes Lebesgue measure on [0, Σ]. This is a finite Borel measure on 𝔻_(a) called the volume measure.
• ∂𝔻_(a):=Π([0, a]), which is the “boundary” of 𝔻_(a).
• x_* is the point of minimal label in 𝔻_(a).
We then view ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) as a random variable in 𝕄^2, 1. This is the free pointed Brownian disk of perimeter a.
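Before moving on, it may help to see the two-step definition of the metric (first D^∘ from labels and contour intervals, then the chained infimum D) in a discrete toy example. The Python sketch below is ours, purely illustrative and not part of the paper's construction or its scaling limit: it applies the formulas for D^∘ and D to the contour of a random plane tree carrying uniform {-1,0,+1} label increments on its edges, computes the chained infimum as a shortest-path problem, and checks the two inequalities D ≤ D^∘ and D(s,t) ≥ |ℓ(s)-ℓ(t)| that follow directly from the definitions.

```python
import random

def random_labelled_contour(n_edges, seed=0):
    """Contour (Dyck path) of a random plane tree with n_edges edges, together
    with head labels obtained from i.i.d. uniform {-1,0,+1} edge increments."""
    rng = random.Random(seed)
    steps = [+1] * n_edges + [-1] * n_edges
    rng.shuffle(steps)
    # cycle trick: rotate just after a position where the walk attains its
    # minimum, so that the rotated path stays nonnegative (a Dyck path)
    walk, best, best_pos = 0, 1, 0
    for i, s in enumerate(steps):
        walk += s
        if walk < best:
            best, best_pos = walk, i + 1
    steps = steps[best_pos:] + steps[:best_pos]
    zeta, labels, stack = [0], [0], [0]
    for s in steps:
        if s == +1:
            stack.append(stack[-1] + rng.choice((-1, 0, 1)))  # enter a new edge
        else:
            stack.pop()                                       # go back to the parent
        zeta.append(zeta[-1] + s)
        labels.append(stack[-1])
    return zeta, labels            # contour height and head label at times 0..2n

def D_circ(labels, s, t):
    """D^o(s,t) = l(s)+l(t) - 2 max( min over [s->t], min over [t->s] ),
    with cyclic contour intervals (time 2n identified with time 0)."""
    N = len(labels) - 1
    def cyc_min(u, v):
        if u <= v:
            return min(labels[u:v + 1])
        return min(min(labels[u:N + 1]), min(labels[0:v + 1]))
    return labels[s] + labels[t] - 2 * max(cyc_min(s, t), cyc_min(t, s))

def D_chained(labels):
    """Chained infimum over finite sequences of points: shortest paths for the
    complete graph with edge weights D^o (Floyd-Warshall relaxation)."""
    N = len(labels) - 1
    d = [[D_circ(labels, s, t) for t in range(N)] for s in range(N)]
    for k in range(N):
        dk = d[k]
        for i in range(N):
            dik, row = d[i][k], d[i]
            for j in range(N):
                if dik + dk[j] < row[j]:
                    row[j] = dik + dk[j]
    return d

zeta, labels = random_labelled_contour(60)
d = D_chained(labels)
N = len(labels) - 1
assert all(d[s][t] <= D_circ(labels, s, t) for s in range(N) for t in range(N))
assert all(d[s][t] >= abs(labels[s] - labels[t]) for s in range(N) for t in range(N))
print("diameter of the toy quotient space:", max(max(row) for row in d))
```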
Like the (free) Brownian sphere, the (free pointed) Brownian disk is a geodesic space. In a way similar to the Brownian sphere, we have D(x,x_*)=ℓ(x)-ℓ(x_*) for every x∈_(a). In particular, if we set r_*:= -ℓ(x_*) =-min_x∈𝔻_(a)ℓ(x), we have r_*=D(x_*, ∂𝔻_(a)) (note that ℓ(u)=√(3) 𝚎(u)≥ 0 for every u∈ [0,a]⊂ℐ). Occasionally (in particular in Proposition <ref> below), we will also say that the space ((𝔻_(a),D),x_*,𝐕) — which is a random element of 𝕄^1,1 — is a free pointed Brownian disk of perimeter a: this makes no real difference, as the boundary ∂𝔻_(a) can be recovered as the closed subset of 𝔻_(a) consisting of points that have no neighborhood homeomorphic to the open unit disk.

Hulls in the Brownian disk

Consider the Brownian disk 𝔻_(a) as defined above. For every r>0, let B_r(x_*) stand for the closed ball of radius r centered at x_* in 𝔻_(a). For every r∈(0,r_*], we define the hull H_r as the complement in 𝔻_(a) of the connected component of 𝔻_(a)∖ B_r(x_*) that intersects the boundary ∂𝔻_(a) (in fact, for r<r_*, this connected component must contain the whole boundary). Points of ∂ H_r are at distance r from x_*. In a way analogous to the definition of 𝐏_r for the Brownian sphere, we define the perimeter of H_r by 𝒫_r=∑_i∈ I𝒵_r-r_*(ω^i). Then 𝒫_r satisfies a formula analogous to (<ref>) (if r<r_*, there are only finitely many indices i such that 𝒵_r-r_*(ω^i)>0). We also take 𝒫_0=0. The process (𝒫_r)_r∈[0,r_*] has càdlàg sample paths and no positive jumps. Let r>0. Then the law of 𝒫_r under ℙ(·∩{r<r_*}) has density y↦ 3√(3/(2π)) r^-3 (a/(a+y)) √(y) e^-3y/(2r^2) with respect to Lebesgue measure on (0,∞). We postpone the proof to the Appendix, as this result is not really needed in what follows. It will be useful to describe the hull H_r in terms of the labelled tree ℐ of the Bettinelli-Miermont construction. Let x∈ℐ and suppose first that x∈ T_ω^i for some i∈ I. Since T_ω^i is an ℝ-tree, there is a unique continuous injective path linking x to the root ρ_ω^i of T_ω^i, which is called the ancestral line of x. We let m_x be the minimum label along this path. If x∈[0,a], we take m_x=ℓ(x). Then we have m_x=m_y if Π(x)=Π(y), and thus the mapping ℐ∋ x↦ m_x induces a continuous function from 𝔻_(a) to ℝ which we denote again by 𝔻_(a)∋ u ↦ m_u. Using the cactus bound (see <cit.> for this bound in the setting of the Brownian sphere, which is easily extended), one gets that: H_r= {u∈𝔻_(a): m_u≤ -r_*+r}. Similarly, the boundary ∂ H_r of H_r in 𝔻_(a) is the image under Π of the set of all points x∈ℐ such that we have both ℓ(x)=r-r_* and all points of the ancestral line of x (with the exception of x) have a label greater than r-r_*.

Brownian disks in the Brownian sphere

We now explain how the free pointed Brownian disk of the previous section can be obtained as a subset of the free Brownian sphere under a particular conditioning of the measure _0. We first recall a result from <cit.>. Let r>0, and argue under the conditional probability measure _0(·|_*>r). We can then consider the hull B^∙_r(_*), and we write B̌^∘_r(_*)=_∞∖ B^∙_r(_*), and B̌^∙_r(_*) for the closure of B̌^∘_r(_*). We equip the open set B̌^∘_r(_*) with the intrinsic metric 𝐝^∘: for every x,y∈B̌^∘_r(_*), 𝐝^∘(x,y) is the infimum of lengths of continuous paths connecting x to y that stay in B̌^∘_r(_*).
Then, according to <cit.>, under the probability measure _0(·|_*>r), the intrinsic metric on the set B̌^∘_r(_*) has a continuous extension to its closure B̌^∙_r(_*), which is a metric on B̌^∙_r(_*), and the random metric space (B̌^∙_r(_*),𝐝^∘) equipped with the restriction of the volume measure on _∞ and with the distinguished point _0 is a free pointed Brownian disk of (random) perimeter _r. For our purposes, it will be useful to have a version of the preceding result when r is replaced by a random radius. For every a>0, we define, under _0, _a:=inf{r∈(0,_*): 𝒵_r-_*=a}, with the usual convention inf∅=∞. By <cit.>, we have _0(_a<∞)=(2a)^-1. For future use, we record the following simple fact. If (a_n)_n∈ℕ is a sequence decreasing to a, we have _a_n↓_a as n→∞, _0 a.e. on the event {𝐫_a<∞}. This follows from the description of the law of the process (_r)_r<0 under _0, as a time change of the excursion of a stable Lévy process, see <cit.>. Let a>0. Almost surely under the probability measure _0(·|_a<∞), the intrinsic metric on the set B̌^∘__a(_*) has a continuous extension to its closure B̌^∙__a(_*), which is a metric on B̌^∙__a(_*), and the resulting random metric space equipped with the restriction of the volume measure on _∞ and with the distinguished point _0 is a free pointed Brownian disk of perimeter a. The shortest way to prove this proposition is to use Proposition 10 in <cit.>, which determines the distribution under _0(dω|_a<∞) of the snake trajectory ω truncated at level _a-_*, which is denoted by tr__a-_*(ω) (we refer e.g. to <cit.> for a definition of this truncation operation). On one hand, the space _∞∖ B^∙__a(_*) equipped with its intrinsic metric can be obtained as a function of tr__a-_*(ω), as explained in the proof of <cit.>. On the other hand, Proposition 10 in <cit.> shows that this snake trajectory has exactly the distribution of the random snake trajectory that codes the free pointed Brownian disk in the construction of <cit.> — which is known to be equivalent to the Bettinelli-Miermont construction presented above. We omit the details, since Proposition <ref> is clearly a variant of Theorem 8 in <cit.>. Proposition <ref> allows us to couple Brownian disks with different perimeters. Consider a decreasing sequence (a_n)_n∈ℕ that converges to a. On the event {_a_n<∞}, B̌^∙__a_n(_*) and B̌^∙__a(_*) are both well defined, and we have trivially B̌^∙__a_n(_*)⊂B̌^∙__a(_*). Furthermore, a.e. on the event {_a<∞}, we have _a_n<∞ for all n large enough, _a_n↓_a as n→∞, and sup{𝐃(x,∂B̌^∙__a(_*)):x∈B̌^∙__a(_*)∖B̌^∙__a_n(_*)} ⟶ 0 as n→∞. Let us justify (<ref>). First note that, for every x∈B̌^∘__a(_*), there is a path from x to _* that does not hit B^∙__a(_*), and thus stays at positive distance from ∂B̌^∙__a(_*). Since _a_n↓_a, it follows that x∈B̌^∘__a_n(_*) for n large enough, and we have proved that, a.e. on the event {_a<∞}, B̌^∘__a(_*)=⋃_n∈ℕ, _a_n<∞B̌^∘__a_n(_*), from which (<ref>) easily follows via a compactness argument.

§ THE BROWNIAN ANNULUS

§.§ The definition of the Brownian annulus

We again fix a>0 and write (𝔻_(a),D) for the free pointed Brownian disk of perimeter a in the Bettinelli-Miermont construction described above. Recall the notation x_* for the distinguished point of 𝔻_(a) and r_*=D(x_*,∂𝔻_(a)). Also recall that 𝒫_r stands for the perimeter of the hull H_r of radius r. We fix b>0, and set r_b=inf{r∈[0,r_*): 𝒫_r=b}, with again the convention inf∅=∞. Note that r_b<∞ if and only if b<𝒫^*, where 𝒫^*=sup{𝒫_r:r∈[0,r_*)}.
The next theorem is then an analog of Proposition <ref>. Almost surely under the probability measure ℙ(·| r_b<∞), the intrinsic metric on 𝔻_(a)∖ H_r_b has a continuous extension to the closure of 𝔻_(a)∖ H_r_b, which is a metric on this set. The resulting random metric space, which we denote by (_(a,b),d^∘), is called the Brownian annulus with perimeters a and b. The terminology will be justified by forthcoming results showing that the Brownian annulus is the Gromov-Hausdorff limit of triangulations with two boundaries. We note that the Brownian annulus _(a,b) has two “boundaries”, namely ∂_0_(a,b)=∂_(a), and ∂_1_(a,b)=∂ H_r_b. Furthermore, distances in _(a,b) from the second boundary ∂_1_(a,b) correspond to labels in the Bettinelli-Miermont construction. More precisely, for every z∈_(a,b), D(z,∂_1_(a,b))=D(z,x_*)-r_b=ℓ(z)-(r_b-r_*). This follows from the interpretation of labels in terms of distances from x_*, recalling that all points of ∂_1_(a,b)=∂ H_r_b are at distance r_b from x_*. We may and will assume that the Brownian disk 𝔻_(a) is constructed as the subset B̌^∙__a(_*) of the free Brownian sphere _∞ under the probability measure _0(·|_a<∞), as in Proposition <ref>, and, in particular, the distinguished point of 𝔻_(a) is the point _0 of the Brownian sphere. Furthermore, for every r∈ (0,D(_0,∂𝔻_(a))), the hull H_r in the Brownian disk 𝔻_(a) coincides with the hull B^∙_r(_0) in _∞ (defined as the complement of the connected component of _∞∖ B^∞_r(_0) that contains _*). In particular, on the event {r_b<∞}, we have r_b=_b, where _b is the hitting time of b by the process of perimeters of the hulls B^∙_r(_0), r∈ (0,𝐫_*). Furthermore, conditioning 𝔻_(a) on the event that r_b<∞ is equivalent to arguing under the conditional probability _0(·|𝐃(_0,_*)>_a+_b). Now note that _* and _0 play symmetric roles in the Brownian sphere _∞ (cf. <cit.>), and that proving that the intrinsic metric on 𝔻_(a)∖ H_r_b has a continuous extension, which is a metric, to its closure is equivalent to proving that the intrinsic metric on _∞∖ B^∙__b(_0) has a continuous extension, which is a metric, to its closure. By symmetry, this is equivalent to proving that the intrinsic metric on _∞∖ B^∙__b(_*) has a continuous extension, which is a metric, to its closure. But we know from Proposition <ref> that this is true. It turns out that the probability of the conditioning event {r_b<∞} has a very simple expression, which will be useful in forthcoming calculations. We have ℙ(r_b<∞)=a/(a+b). Let us set 𝒫̌_r=𝒫_r_*-r for r∈[0,r_*], so that 𝒫̌_r=∑_i∈ I𝒵_-r(ω^i), in the notation of (<ref>). From the identification of the law of the exit measure process under _0 (see e.g. Section 2.4 in <cit.>), it is not hard to verify that (𝒫̌_r)_r∈[0,r_*] is a continuous-state branching process with branching mechanism ψ(λ):=√(8/3) λ^3/2. Furthermore, Remark (ii) at the end of <cit.> shows that the initial value 𝒫̌_0=𝒫_r_* of this continuous-state branching process has density (3/2) a^3/2 (a+z)^-5/2. The classical Lamperti transformation allows us to write (𝒫̌_r)_r∈[0,r_*] as a time change of a (centered) spectrally positive Lévy process with Laplace exponent ψ and the same initial distribution, which is stopped upon hitting 0. For this Lévy process started at z, the probability that it never hits b is equal to √((b-z)^+/b) (cf. <cit.>). From the preceding considerations, we get ℙ(r_b=∞)= (3/2) a^3/2 ∫_0^b dz/(a+z)^5/2 √((b-z)/b) = b/(a+b). This completes the proof.
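As a quick numerical sanity check of the last computation (ours, not part of the paper), the short script below evaluates the integral with a composite Simpson rule and compares it with b/(a+b); the function name is our own.

```python
import math

def prob_r_b_infinite(a, b, n=20000):
    """Evaluate (3/2) a^{3/2} \int_0^b (a+z)^{-5/2} sqrt((b-z)/b) dz numerically,
    to be compared with the claimed closed form b/(a+b)."""
    f = lambda z: (a + z) ** (-2.5) * math.sqrt(max(b - z, 0.0) / b)
    h = b / n                                   # composite Simpson rule, n even
    s = f(0.0) + f(b)
    s += 4.0 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(2 * i * h) for i in range(1, n // 2))
    return 1.5 * a ** 1.5 * (s * h / 3.0)

for a, b in [(1.0, 1.0), (2.0, 0.5), (0.3, 4.0)]:
    print(a, b, prob_r_b_infinite(a, b), b / (a + b))
# the two columns should agree to several decimal places
```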
§.§ A technical lemma

We keep the notation of the preceding section. In the following lemma, lengths of paths refer to the metric on the Brownian disk _(a). Let η>0. Then, almost surely, for every x,y∈_(a,b)∖∂_1_(a,b), for every continuous path γ in _(a,b) connecting x to y and with finite length L(γ), we can find a path γ' staying in _(a,b)∖∂_1_(a,b) and connecting x to y, whose length is bounded by L(γ)+η. Let us set ^∘_(a,b)=_(a,b)∖(∂_0_(a,b)∪∂_1_(a,b)), which can be viewed as the “interior” of _(a,b). In order to prove Lemma <ref>, it is enough to consider the case where x,y∈^∘_(a,b) and the path γ stays in _(a,b)∖∂_0_(a,b). If this is not the case, we can cover the set of times t at which γ(t) belongs to ∂_1_(a,b) by finitely many disjoint closed intervals I=[s_I,t_I] such that γ(t)∈_(a,b)∖∂_0_(a,b) for every t∈ I and γ(s_I),γ(t_I)∉∂_1_(a,b), and we consider the restriction of γ to each of these intervals. Fix ε>0 and, for every u>0, let E_(a,u) denote the event where u<𝒫^* and there exist x,y∈^∘_(a,u) and a path γ_0 with finite length L(γ_0) connecting x to y and staying in _(a,u)∖∂_0_(a,u), such that any path γ' connecting x to y and staying in ^∘_(a,u) has length at least L(γ_0)+ε. Also set, for every u∈(0,𝒫^*) and x,y∈^∘_(a,u), F(x,y,u)=inf{L(γ):γ is a path connecting x to y in ^∘_(a,u)}. If E_(a,u) holds, then clearly there exist x,y∈^∘_(a,u) such that the function v↦ F(x,y,v) has a (positive) jump at v=u (take γ_0 as above and note that F(x,y,v)≤ L(γ_0) if 0<v<u). The same then holds for every x',y'∈^∘_(a,u) sufficiently close to x,y: to see this, consider the path obtained by concatenating γ_0 with geodesics from x to x' and from y to y'. Hence, if for n≥ 1, we consider the monotone nonincreasing function (0,𝒫^*)∋ v ↦ G_n(a,v)=∫∫_^∘_(a,v)×^∘_(a,v) (n-F(x,y,v))^+ 𝐕(dx) 𝐕(dy), we obtain that this function has a jump at u when E_(a,u) holds, at least when n is large enough. It follows that 1_E_(a,u)≤lim inf_n→∞1_{G_n(a,u+)<G_n(a,u-)}, with an obvious notation for the right and left limits of v ↦ G_n(a,v) at u. Hence, ℙ(E_(a,u))≤lim inf_n→∞ℙ({u<𝒫^*}∩{G_n(a,u+)<G_n(a,u-)}). Since the function (0,𝒫^*)∋ v ↦ G_n(a,v) has at most countably many discontinuities, it follows that ∫_0^∞ℙ(E_(a,u)) du=0 and therefore ℙ(E_(a,u))=0 for Lebesgue almost all u. To obtain the statement of the lemma, we need to prove that ℙ(E_(a,u))=0 for every u>0. Fix u>0, and let (a_n)_n≥ 0 be a sequence of reals decreasing to a. We will verify that lim inf_n→∞ℙ(E_(a_n,u))≥ℙ(E_(a,u)). Thanks to Proposition <ref>, we may assume that _(a)=B̌^∙__a(_*), resp. _(a_n)=B̌^∙__a_n(_*), which is a Brownian disk of perimeter a, resp. of perimeter a_n, under _0(·|_a<∞), resp. under _0(·|_a_n<∞). If E_(a,u) holds, we can find a path γ_0 staying in _(a,u)∖∂_(a) that satisfies the properties stated at the beginning of the proof, and this path stays at positive distance from ∂_(a). On the other hand, by (<ref>), we have sup{𝐃(x,∂_(a)):x∈_(a)∖_(a_n)} ⟶ 0, _0 a.e. on {_a<∞}. It follows that the path γ_0 stays in _(a_n,u)∖∂_(a_n) when n is large, so that E_(a_n,u) also holds when n is large. Hence, we get lim inf_n→∞_0(E_(a_n,u)∩{_a<∞})≥_0(E_(a,u)∩{_a<∞}), and using also the fact that _0(_a_n<∞)⟶_0(_a<∞) as n→∞ we get (<ref>). From (<ref>) and a scaling argument, we have then lim inf_u'↑ u ℙ(E_(a,u'))≥ℙ(E_(a,u)). Clearly, this implies that we have ℙ(E_(a,u))=0. Since ε>0 was arbitrary, this completes the proof.

§ PRELIMINARY CONVERGENCE RESULTS

§.§ Convergence towards the Brownian disk

Let a>0. For every integer L≥ 1/a, let 𝒟^L_(a) be a Boltzmann triangulation in 𝕋^1, ∙(⌊ aL⌋).
Let Δ^L be the graph distance on 𝒟^L_(a) and consider the rescaled distance d_L=√(3/2) L^-1/2Δ^L. Let ν^L be the counting measure, rescaled by the factor (3/4)L^-2, on the vertex set of 𝒟^L_(a). Then, ((𝒟^L_(a), d_L), (x_*^L, ∂𝒟^L_(a)), ν^L) ⟶ ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) in distribution as L→∞, where ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) is the free pointed Brownian disk with perimeter a as constructed in Section <ref>, and the convergence holds in 𝕄^2,1 endowed with the metric d_𝙶𝙷𝙿^2,1 introduced in Section <ref>. In the last display, we abusively identify 𝒟^L_(a) with its vertex set (we will often make this abuse of notation in what follows). The convergence (<ref>) follows from <cit.>. Note that Theorem 1.1 in <cit.> deals with the so-called GHPU convergence including the uniform convergence of the “boundary curves”, but it is straightforward to verify that this also implies the convergence (<ref>) in 𝕄^2,1. Also, <cit.> considers Boltzmann triangulations in 𝕋^1(⌊ aL⌋) instead of 𝕋^1, ∙(⌊ aL⌋), and the limit is therefore the free (unpointed) Brownian disk. However, as explained in <cit.>, the convergence for pointed objects easily follows from that for unpointed ones (since we are here pointing at an edge and not at a point, we also need Lemma 5.1 in <cit.>, stated for quadrangulations but easily extended, to verify that the degree-biased measure on the vertex set is close to the uniform measure — we omit the details).

§.§ The processes of perimeters and volumes of hulls

We consider the free pointed Brownian disk ((𝔻_(a), D), (x_*, ∂𝔻_(a)), 𝐕) as given in the Bettinelli-Miermont construction. Recall that r_*=D(x_*, ∂𝔻_(a)). For r∈(0,r_*], the perimeter 𝒫_r of the hull H_r was defined in formula (<ref>), and we set 𝒱_r=𝐕(H_r). We also define 𝒱_0=0. It is not hard to verify that the process (𝒫_r,𝒱_r)_r∈[0,r_*] has càdlàg sample paths. Let r>0 and let us argue conditionally on the event {r_*> r}. Recall that _(a) is obtained as a quotient space of the labelled tree ℐ, and that, for x∈ℐ, m_x denotes the minimal label along the ancestral line of x. We can use the restriction of the contour exploration π:[0,Σ]⟶ℐ to every connected component of the open set {s∈[0,Σ] : m_π(s)< r-r_*}, in order to define a snake trajectory with initial point r-r_*, which we call an excursion away from r-r_*. More precisely, if (α,β) is such a connected component, there is an index i∈ I such that (α,β)⊂ (a_i,b_i), where [a_i,b_i]={s∈[0,Σ]:π(s)∈ T_ω^i}. Then, setting α'=α-a_i and β'=β-a_i, we have ω^i_α'=ω^i_β', ω̂^i_α'=ω̂^i_β'=r-r_* and ζ(ω^i_s)>ζ(ω^i_α') for every s∈(α',β'), and we define a snake trajectory ω by taking ω_s(t)=ω^i_(α'+s)∧β'(ζ(ω^i_α')+t) for every 0≤ t≤ζ(ω^i_(α'+s)∧β')-ζ(ω^i_α') (in the language of <cit.>, ω is an excursion of ω^i away from r-r_*). As a straightforward consequence of Proposition 12 in <cit.>, the snake trajectories obtained in this way and shifted so that their initial point is 0 correspond to the atoms of a point measure 𝒩_r which, conditionally on 𝒫_r, is Poisson with intensity 𝒫_rℕ_0(·∩{W_* >-r}) and to which we add an extra atom ω_* distributed according to ℕ_0(·| W_*=-r) (the law of the latter atom is described in <cit.> in terms of a Bessel process of dimension 9). Using formula (<ref>), it is not hard to verify that the process (𝒫_s,𝒱_s)_s∈[0,r] is determined as a function of the point measure 𝒩_r+δ_ω_* (in particular, 𝒫_s=𝒵_s-r(ω_*)+∫𝒩_r(dω) 𝒵_s-r(ω) for 0<s<r). Let us now consider the Brownian plane of <cit.>.
For the Brownian plane, we can also define the processes of perimeter and volume of hulls (𝒫_s^∞, 𝒱_s^ ∞)_s≥ 0 and the law of this pair of processes is described in <cit.>. It follows from the preceding observations and the construction of <cit.> that, for every u>0, the conditional distribution of (𝒫^∞_s,𝒱^∞_s)_s∈[0,r] knowing 𝒫^∞_r=u is the same as the conditional distribution of (𝒫_s,𝒱_s)_s∈[0,r] knowing 𝒫_r=u. Since 𝒫_r and 𝒫_r^∞ both have a positive density on (0,∞) (by Proposition <ref> and <cit.>), we arrive at the following lemma. The law of (𝒫_s, 𝒱_s)_s≤ r conditionally on the event {r_*> r} is absolutely continuous with respect to the law of the pair (𝒫_s^∞, 𝒱_s^∞)_s≤ r. We end this section by stating a technical property showing that the perimeter process can be recovered as a deterministic function of the volume process. For every r>0, we have almost surely on the event {r<r_*}: 𝒫_r =lim_α→ 0^+(1/αlim_ϵ→ 0^+ϕ(ϵ)^-1Card{s∈ [r-α, r]:Δ𝒱_s≥ϵ}), where ϕ(ϵ)=c_0ϵ^-3/4, with c_0=2^1/4Γ(4/3), and Δ𝒱_s=𝒱_s-𝒱_s-. It is explained in the proof of <cit.> that (<ref>) holds if 𝒫_r and 𝒱_s are replaced by 𝒫^∞_r and 𝒱^∞_s respectively. It then suffices to use the absolute continuity property of Lemma <ref>. §.§ Joint convergence of hulls One expects that the explored sets T_i^L in the peeling by layers will correspond in the limit (<ref>) to the hulls H_r. This section aims to give a precise result in this direction. Let us start with a technical proposition giving some information about the geometry of 𝔻_(a). For every δ>0 and s∈(0,r_*), let 𝒰_δ^s be the set of all points x∈𝔻_(a) such that there is a continuous path from x to ∂𝔻_(a) that stays at distance at least s-δ from x_*. Almost surely, for every s which is not a jump of the perimeter process (𝒫_r)_r∈(0,r_*) and every ε>0, there exists δ>0 such that: 𝒰_δ^s ⊂{x∈𝔻_(a) : D(x, 𝔻_(a)∖ H_s)<ε}. We argue by contradiction. If the statement of the proposition fails, we can find ε>0 and s∈(0,r_*) which is not a jump of the perimeter process, and then a sequence δ_n↓ 0 and points x_n∈𝔻_(a) such that D( x_n, 𝔻_(a)∖ H_s)≥ε and there is a path linking x_n to ∂𝔻_(a) and remaining at distance at least s-δ_n from x_*. By compactness, we may assume that the sequence (x_n) converges to a point x_∞, which therefore satisfies D(x_∞, 𝔻_(a)∖ H_s)≥ε. We have m_x_n≤ -r_*+s since x_n∈ H_s, and, on the other hand, an application of the cactus bound <cit.> gives m_x_n≥ -r_*+s-δ_n. Letting n→∞ we get m_x_∞= -r_*+s. On the ancestral line of x_∞, we can find a point x close to x_∞ whose label is strictly greater than -r_*+s and is still such that m_x=-r_*+s (if no such x existed, this would mean that x_∞∈∂ H_s, contradicting D(x_∞, 𝔻_(a)∖ H_s)≥ε). Then all points in a sufficiently small neighbourhood of x are in H_s but not in H_s-δ for any δ>0. In other words the process (𝒱_r)_r∈(0,r_*) has a jump at s. Since the jumps of (𝒱_r) and (𝒫_r) almost surely coincide (this holds for 𝒱^∞ and 𝒫^∞ by <cit.> and therefore also for 𝒱 and 𝒫 using Lemma <ref>), we end up with a contradiction. As in Section <ref>, we consider the sequences of random triangulations (T_i^L) and (U_i^L) obtained by applying the peeling by layers algorithm to the Boltzmann triangulation 𝒟^L_(a). It will be convenient to view the triangulations that we consider as geodesic spaces. To this end we just need to identify each edge with a copy of the interval [0,1] in the way explained in <cit.>. 
If the vertex set of 𝒟^L_(a) is replaced by the union of all edges equipped with the obvious extension of the (rescaled) graph distance, the convergence (<ref>) remains valid, and this has the advantage of making 𝒟^L_(a) a geodesic space. From now on, we will always view triangulations as geodesic metric spaces as we just explained. In particular, we can consider continuous paths in 𝒟^L_(a) as in Lemma <ref> below, and, similarly, in the next proposition, we interpret ∂ T^L_k as the union of the edges on the boundary of T^L_k. By Skorokhod's representation theorem, we may assume that (<ref>) holds almost surely. From now on until the end of this section, we fix ω∈Ω for which the (almost sure) convergence (<ref>) does take place. By a straightforward extension of <cit.>, we may assume that the metric spaces (𝒟^L_(a), d_L) and (𝔻_(a), D) are embedded isometrically in the same compact metric space (E, Δ) in such a way that 𝒟^L_(a) and ∂𝒟^L_(a) converge to 𝔻_(a) and ∂𝔻_(a) respectively, for the Hausdorff metric Δ_𝙷, x_*^L converges to x_* and ν^L converges weakly to 𝐕. In particular, we will consider the triangulations T_i^L and U_i^L as subsets of E so that we can speak about the Δ_𝙷-convergence of these objects in the following proposition. If γ:[0, σ]→ E and γ':[0, σ']→ E are two continuous paths in E, we will say that γ' is ε-close to γ if Δ(γ(0),γ'(0))≤ε, Δ(γ(σ), γ'(σ'))≤ε and if sup_t∈ [0, σ']Δ(γ'(t), γ)≤ε, where we identify γ and the compact subset γ([0, σ])⊂ E. Note that this definition is not symmetric in γ and γ'. We also write ℓ_Δ(γ) for the length of the path γ in (E,Δ). Let ω be fixed as above and let s∈(0,r_*) be such that the perimeter process (𝒫_r) is continuous at s. Recall the notation h_k^L:=Δ^L(x_*^L, ∂ T_k^L). For every sequence of integers (N_L)_L≥ 1 such that (√(3/(2L)) h_N_L^L)_L≥ 1 converges to s, we have the convergences: T_N_L^L ⟶ H_s, ∂ T_N_L^L ⟶ ∂ H_s, U_N_L^L ⟶ C̅_s, where C̅_s denotes the closure of C_s:=𝔻_(a)∖ H_s. To simplify notation, we set c_L=√(3/2)L^-1/2 and recall that d_L=c_L Δ^L. The convergences of T_N_L^L and U_N_L^L are proved in a way very similar to Lemma 12 in <cit.> (which deals with the case where N_L is replaced by the hitting time of ∂𝒟^L_(a) by the peeling algorithm). We only give here the main steps of the proof. We start with a simple lemma. For every η>0 and A>0, there exist δ>0 and L_0≥ 0 such that, for every L≥ L_0 and any choice of points x,y∈𝔻_(a) and x^L, y^L∈𝒟^L_(a) satisfying Δ(x, x^L)≤δ and Δ(y, y^L)≤δ, we have:
* For any continuous path γ from x to y in 𝔻_(a), there exists a continuous path γ^L from x^L to y^L in 𝒟_(a)^L which is η-close to γ. If γ has length at most A, one can choose γ^L such that ℓ_Δ(γ^L)≤ℓ_Δ(γ)+η.
* For any continuous path γ^L from x^L to y^L in 𝒟_(a)^L, there is a continuous path γ from x to y in 𝔻_(a) which is η-close to γ^L. If γ^L has length at most A, one can choose γ such that ℓ_Δ(γ)≤ℓ_Δ(γ^L)+η.
We omit the proof of this lemma (see <cit.>), and proceed to the proof of Proposition <ref>. We first consider U_N_L^L. If ε>0 and K⊂ E, we write K^ε={x∈ E, Δ(x, K)≤ε} (only in this proof and the next one). If ε>0 is fixed, we need to verify that, for L large, U_N_L^L⊂ (C_s)^ε and C̅_s ⊂ (U^L_N_L)^ε. Let x∈C̅_s∖∂ H_s=C_s. Then there is a path γ connecting x to a point y of ∂𝔻_(a) that stays in C_s. By compactness, this path stays at distance at least α>0 from ∂ H_s, hence at distance at least s+α from x_*. We can assume that α≤ε.
By part 1 of Lemma <ref>, and using the fact that ∂𝒟_(a)^L converges towards ∂𝔻_(a), we can find, for L large enough, points x^L∈𝒟_(a)^L and y^L∈∂𝒟_(a)^L and a path γ^L in 𝒟_(a)^L from x^L to y^L that is (α/2)-close to γ. Since x^L_* converges to x_* and c_L h_N_L^L converges to s, we get (taking L even larger if necessary) that all points of γ^L lie at distance greater than c_L(h^L_N_L+1) from x^L_*. However, by the construction of the peeling by layers, points of ∂ T_N_L^L are at a distance at most c_L(h_N_L^L+1) from x_*^L . Therefore we found a path connecting x^L to a point of ∂𝒟_(a)^L that does not visit ∂ T_N_L^L, and it follows that x^L is a point of U_N_L^L. Since Δ(x^L, x)≤α/2<ε we then have x∈ (U_N_L^L)^ε for large L. If x∈∂ H_s, this is also true because we can approximate x by a point of C_s. A compactness argument finally allows us to conclude that C̅_s⊂ (U_N_L^L)^ε for any L large enough. Let us show conversely that U_N_L^L⊂ (C_s)^ε when L is large. We choose δ∈(0,ε) such that the conclusion of Proposition <ref> holds with ε replaced by ε/2. Let v^L∈ U_N_L^L, which implies in particular that v^L is at Δ-distance at least c_L h_N_L^L from x_*^L. Then there is a path γ^L in U_N_L^L connecting v^L to ∂𝒟_(a)^L. Using part 2 of Lemma <ref> and the convergence of ∂𝒟_(a)^L to ∂𝔻_(a), if L is large enough (independently of the choice of v^L), we can approximate γ^L by a path γ in 𝔻_(a) that is (δ/2)-close to γ^L and connects a point v∈𝔻_(a) to a point of ∂𝔻_(a). Notice that Δ(v,v^L)≤δ/2<ε/2. Provided that L has been chosen even larger if necessary (again independently of the choice of v^L), it follows that the path γ contains only points at distance at least s-δ from x_*. By our choice of δ, this implies that Δ(v,C_s)<ε/2 and thus Δ(v^L,C_s)<ε. We therefore have v^L∈ (C_s)^ε and we have obtained that U^L_N_L⊂ ( C_s)^ε, thus completing the proof of the convergence of U_N_L^L to C̅_s. Let us now discuss the convergence of ∂ T_N_L^L. We let ℬ^L(c_Lh_N_L^L) and ℬ^L(c_L(h_N_L^L+ 2)) denote the closed balls of respective radii c_Lh_N_L^L and c_L(h_N_L^L+2) centered at x_*^L in (𝒟_(a)^ L, Δ). We also write B_s=B_s(x_*) for the closed ball of radius s centered at x_* in (𝔻_(a), Δ). Since 𝒟^L_(a) and _(a) are both length spaces and 𝒟^L_(a) converges to _(a) for the Hausdorff distance on E, we get that ℬ^L(c_Lh_N_L^L) and ℬ^L(c_L(h_N_L^L+2)) both converge to B_s for the Hausdorff distance. However, ℬ^L(c_L h_N_L^L)⊂ℬ^L(c_L h_N_L^L)∪∂ T^L_N_L⊂ℬ^L(c_L(h_N_L^L+2)). It follows that ℬ'_L:=ℬ^L(c_L h_N_L^L)∪∂ T^L_N_L also converges towards B_s when L→∞. Observe that ∂ T_N_L^L=ℬ'_L∩ U_N_L^L and ∂ H_s= B_s∩C̅_s. Let ε>0. Using the convergence of ℬ'_L towards B_s and the convergence of U_N_L^L towards C̅_s, we get that for L sufficiently large and for every x∈∂ H_s, we have Δ(x, ℬ'_L)< ε and Δ(x,U_N_L^L)< ε. Fix x∈∂ H_s and let u_1∈ U_N_L^L and u_2∈ℬ'_L such that Δ(u_1, x)≤ε and Δ(u_2, x)≤ε. In particular, Δ(u_1, u_2)≤ 2ε and since a geodesic path between u_1 and u_2 in 𝒟_(a)^L must intersect ∂ T_N_L^L, it follows that one can find v∈∂ T_N_L^L with Δ(u_1, v)≤ 2ε. This implies Δ(x, v)≤ 3ε, but since this is true for any x∈∂ H_s, we conclude that ∂ H_s is contained in the 3ε-neighbourhood of ∂ T_N_L^L as soon as L is large enough. A similar argument shows that ∂ T_N_L^L is contained in the 3-neighbourhood of ∂ H_s when L is large enough. This proves the convergence of ∂ T_N_L^L towards ∂ H_s. 
Once we have obtained the convergence of U^L_N_L to C̅_s and the convergence of ∂ T^L_N_L to ∂ H_s, the convergence of T_N_L^L towards H_s follows from straightforward arguments, and we leave the details to the reader. For every integer k≥ 1, we set σ_k^L:=inf{ n∈ℕ : h_n^L≥ k}. On the event {σ^L_k<∞}, the (discrete) hull of radius k in 𝒟_(a)^L is defined by ℋ^L_k:=T_σ_k^L^L. Recall that ω is fixed as explained before Proposition <ref>. Let s∈(0,r_*) such that the perimeter process (𝒫_r) has no jump at s. Then the hull ℋ_⌊ s/c_L⌋^L converges towards H_s for the Hausdorff metric, and its volume ν^L(ℋ_⌊ s/c_L⌋^L) converges towards 𝒱_s. The convergence of ℋ_⌊ s/c_L⌋^L towards H_s is an immediate corollary of the previous proposition, since by construction c_L h^L_σ^L_⌊ s/c_L⌋⟶ s as L→∞. It remains to show that ν^L(ℋ_⌊ s/c_L⌋^L) converges to 𝒱_s. We keep the notation K^ε={x∈ E, Δ(x, K)≤ε} introduced in the previous proof. It is easy to verify that 𝐕(∂ H_s)=0. Then, if ε>0 is fixed, we can find δ>0 such that 𝐕((∂ H_s)^δ)<ε. Since ν_L converges weakly to 𝐕, we get, for L large enough, ν_L(ℋ^L_⌊ s/c_L⌋)≤ν_L((H_s)^δ/2)≤𝐕((H_s)^δ)+ε≤𝐕(H_s)+𝐕((∂ H_s)^δ)+ε≤𝐕(H_s)+2ε. On the other hand, ∂ℋ_⌊ s/c_L⌋^L→∂ H_s when L→∞ (by Proposition <ref>), so that we have also ∂ℋ_⌊ s/c_L⌋^L⊂ (∂ H_s)^δ/2 for every large enough L. It follows that, for large enough L, we have ν_L((∂ℋ_⌊ s/c_L ⌋^L)^δ/2) ≤𝐕((∂ H_s)^δ)+ε≤ 2ε. Hence we get for L large, 𝐕(H_s)≤ν_L((ℋ^L_⌊ s/c_L⌋)^δ/2)+ε≤ν_L(ℋ^L_⌊ s/c_L⌋)+ν_L((∂ℋ^L_⌊ s/c_L⌋)^δ/2)+ε≤ν_L(ℋ^L_⌊ s/c_L⌋)+3ε. The desired convergence of ν_L(ℋ^L_⌊ s/c_L⌋) towards 𝐕(H_s)=𝒱_s follows from the last two displays. § LIMIT THEOREMS FOR THE PERIMETER AND THE VOLUME OF THE EXPLORED REGION In this section, we take a=1 for simplicity, and (as in Section <ref>) we write 𝒟^L instead of 𝒟^L_(1) for a Boltzmann triangulation in 𝕋^1,∙(L). Recall that (T_i^L)_i≥ 0 is the sequence of (explored) triangulations we get when we apply the peeling by layers algorithm of Section <ref> to 𝒟^L. We also set S_L:=inf{i≥ 0 : T_i^L=†}, which corresponds to the hitting time of ∂𝒟^L. To simplify notation, we let P^L_k=|∂ T^L_k| be the boundary size of T^L_k, for every 0≤ k<S_L. Still for 0≤ k<S_L, we also write V^L_k for the number of vertices of T^L_k, and we recall that h^L_k is the graph distance from the distinguished vertex x^L_* to the boundary ∂ T^L_k. Properties of the peeling by layers ensure that the graph distance from x^L_* to any point of ∂ T^L_k is equal to h^L_k or h^L_k+1. Let (T_i^∞)_i≥ 0 be the sequence of triangulations with a boundary obtained by applying the same peeling algorithm to the UIPT (we refer to <cit.> for a discussion of the peeling by layers algorithm for the UIPT). We define P^∞_k, V^∞_k and h^∞_k, now for every integer k≥ 0, by replacing T^L_k with T^∞_k in the respective definitions of P^L_k, V^L_k and h^L_k. Finally, we set S_L=L^-3/2(S_L-1) if S_L>0, and by convention we also take S_L=0 when S_L=0. Recall the notation c_L=√(3/2) L^-1/2. We introduce the rescaled processes P^L_t=1/L P^L_⌊ L^3/2t⌋, V^L_t=3/4L^2 V^L_⌊ L^3/2t⌋, h^L_t=c_L h^L_⌊ L^3/2t⌋. for 0≤ t≤ S_L (by convention, P^L_0= V^L_0= h^L_0=0 when S_L=0). We similarly define, for every t≥ 0, P^∞,L_t=1/L P^∞_⌊ L^3/2t⌋, V^∞,L_t=3/4L^2 V^∞_⌊ L^3/2t⌋, h^∞,L_t=c_L h^∞_⌊ L^3/2t⌋. 
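As an editorial illustration (not part of the original argument), the scaling in the last display can be made concrete by a short sketch showing how one would evaluate the rescaled processes from discrete peeling data; the array names and the helper function below are hypothetical and only mirror the formulas P^L_t=L^-1 P^L_⌊ L^3/2t⌋, V^L_t=(3/4)L^-2 V^L_⌊ L^3/2t⌋ and h^L_t=c_L h^L_⌊ L^3/2t⌋.

```python
import numpy as np

def rescaled_peeling_processes(P, V, h, L, t):
    """Illustrative only: evaluate the rescaled perimeter, volume and height
    processes at time t, given discrete peeling data P[k], V[k], h[k] for
    0 <= k < S_L (hypothetical arrays, not from the paper)."""
    c_L = np.sqrt(1.5 / L)           # c_L = sqrt(3/2) * L^{-1/2}
    k = int(np.floor(L ** 1.5 * t))  # discrete peeling step index floor(L^{3/2} t)
    k = min(k, len(P) - 1)           # stay within the explored range (before S_L)
    return P[k] / L, 0.75 * V[k] / L ** 2, c_L * h[k]
```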
From <cit.> (more precisely, from the version of this result for type I triangulations, as explained in Section 6.1 of <cit.>), we have ( P^∞,L_t, V^∞,L_t, h^∞,L_t)_ t≥ 0(𝒮^+_t, 𝕍_t, 2^-3/2∫_0^t u/𝒮^+_u)_t≥ 0, where the convergence holds in distribution in the sense of the Skorokhod topology. Here the limiting process (𝒮^+_t,t≥ 0) is a stable Lévy process with no positive jumps and Laplace exponent ψ̃(λ)=3^-1/2λ^3/2 started at 0 and conditioned to stay positive (see <cit.> for the definition of this process), and we refer to <cit.> for the description of the conditional law of the process 𝕍 knowing 𝒮^+. The next proposition gives an analog of (<ref>) where P^∞,L, V^∞,L, and h^∞,L are replaced by P^L, V^L, and h^L respectively. We have (( P^L_t∧ S_L, V^L_t∧ S_L, h^L_t∧ S_L)_t≥ 0, S_L)((𝒫_t, 𝒱_t,𝒜_t)_t≥ 0, Σ_∞)_t≥ 0, where 𝒜_t=2^-3/2∫_0^t∧Σ_∞ u/𝒫_u and the distribution of ((𝒫_t, 𝒱_t)_t≥ 0,Σ_∞) is determined by [ G((𝒫_t, 𝒱_t)_t≥0) f(Σ_∞)] = √(3π)/4 ∫_0^∞ u f(u) [ G((𝒮^+_t∧ u)_t≥ 0,(𝕍_t∧ u)_t≥0) 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2] for any measurable functions f:_+⟶_+, and G:(_+,_+^2)⟶_+. We first derive the convergence in distribution of ( P^L_t∧ S_L)_t≥ 0 to (𝒫_t)_t≥ 0. The h-transform relation between the Markov chains P^L and P^∞, which was discussed in Section <ref>, shows that, for every integer k≥ 0 and every bounded function F on ^k+1, [F(P^L_0,…,P^L_k) 1_{k<S_L}| S_L>0]= [F(P^∞_0,…,P^∞_k) 𝐡_L(P^∞_k)/𝐡_L(P^∞_0)], where we recall that 𝐡_L(j)=L/L+j, and we note that P^∞_0=1 if the root edge of the UIPT is a loop, and P^∞_0=2 otherwise. By the Markov property, we have (S_L=k+1| S_L>k,P_0,P_1,…,P_k)=q_L(P_k,†). It then follows that [F(P^L_0,…,P^L_k) 1_{S_L=k+1}| S_L>0]= [F(P^∞_0,…,P^∞_k) 𝐡_L(P^∞_k)/𝐡_L(P^∞_0) q_L(P^∞_k,†)]. Let G be a bounded continous function on (_+,_+)×_+, such that 0≤ G≤ 1. Using (<ref>), we have [G(( P^L_t∧ S_L)_t≥ 0, S_L)| S_L>0] =∑_k=0^∞[1_{S_L=k+1} G(( P^L_t∧ (L^-3/2k))_t≥ 0,L^-3/2k)| S_L>0] = ∑_k=0^∞[G(( P^∞_t∧ (L^-3/2k))_t≥ 0,L^-3/2k)𝐡_L(P^∞_k)/𝐡_L(P^∞_0) q_L(P^∞_k,†)] = L^3/2∫_0^∞ u [G(( P^∞,L_t∧ (L^-3/2⌊ L^3/2 u⌋))_t≥ 0,L^-3/2⌊ L^3/2 u⌋) 𝐡_L(P^∞_⌊ L^3/2 u⌋)/𝐡_L(P^∞_0) q_L(P^∞_⌊ L^3/2 u⌋,†)] Note that (S_L>0) tends to 1 as L→∞. Furthermore, 𝐡_L(P^∞_⌊ L^3/2 u⌋)=1/1+ P^∞,L_u and we also know from <cit.> that L^3/2q_L(P^∞_⌊ L^3/2 u⌋,†) ∼√(3π)/4 1/√( P^∞,L_u) (1+ P^∞,L_u)^-3/2 when L and P^∞_⌊ L^3/2 u⌋ are large (see the last display before Section 3.2 in <cit.>). Using the convergence (<ref>) (which we may assume to hold a.s. by the Skorokhod representation theorem) and the preceding observations, we get from an application of Fatou's lemma that lim inf_L→∞[G(( P^L_t∧ S_L)_t≥ 0, S_L)] ≥∫_0^∞ u [ G((𝒮^+_t∧ u)_t≥ 0,u) √(3π)/4 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]. At this stage, we observe that ∫_0^∞ u [√(3π)/4 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]=1. Indeed, by the identification of the potential kernel of 𝒮^+ in <cit.>, we know that, for any measurable function f:_+⟶_+, [∫_0^∞ u f(𝒮^+_u)]=∫_0^∞ x W̃(x) f(x) where the function W̃ is determined by its Laplace transform ∫_0^∞ e^-λ xW̃(x) x=1/ψ̃(λ)= 3^1/2 λ^-3/2. It follows that W̃(x)=2√(3)/√(π) √(x), and the left-hand side of (<ref>) is equal to √(3π)/4×2√(3)/√(π)∫_0^∞ x (1+x)^-5/2= 1 as desired. Thanks to (<ref>), we can replace G by 1-G in (<ref>) to get the analog of (<ref>) for the limsup instead of the liminf, and we conclude that lim_L→∞[G(( P^L_t∧ S_L)_t≥ 0, S_L)] = √(3π)/4 ∫_0^∞ u [ G((𝒮^+_t∧ u)_t≥ 0,u) 1/√(𝒮^+_u) (1+𝒮^+_u)^-5/2]. 
This gives the convergence of (( P^L_t∧ S_L)_t≥ 0, S_L) to the pair ((𝒫_t)_t≥ 0,Σ_∞) introduced in the proposition. We can deduce the more general statement of the proposition from the convergence (<ref>) by exactly the same arguments. The point is the fact that the perimeter process (P^L_k)_k≥ 0 is Markov with respect to the discrete filtration generated by the sequence (T^L_k). We leave the details to the reader. Let R_L:=h^L_S_L-1 (we argue on the event where S_L>0). By previous observations, the graph distance between x^L_* and ∂𝒟^L is either R_L or R_L+1. Recall the notation σ^L_k=inf{n∈:h^L_n≥ k}, so that σ^L_k is finite for 1≤ k≤ R_L. For 1≤ k≤ R_L, we write 𝒫^L_k:=P^L_σ^L_k and 𝒱^L_k:=V^L_σ^L_k, which are respectively the perimeter and the volume of the discrete hull ℋ^L_k=T^L_σ^L_k. We also set r^L_*=c_LR_L, which essentially corresponds to the rescaled graph distance between x^L_* and ∂𝒟^L. Finally, we introduce rescaled versions of the processes 𝒫^L_k and 𝒱^L_k by setting 𝒫^L_t:=1/L𝒫^L_⌊ t/c_L⌋ and 𝒱^L_t:=3/4L^2𝒱^L_⌊ t/c_L⌋ for 0≤ t≤ r^L_*. Recall the processes 𝒫_t, 𝒱_t,𝒜_t in Proposition <ref>. We have ( (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*, L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0, r^L_*) ( (𝒫^∞_t, 𝒱^∞_t,η_t)_t≥ 0, r^∞_*) where r^∞_*=𝒜_∞ and, for every t≥0, 𝒫^∞_t=𝒫_η_t, 𝒱^∞_t=𝒱_η_t with η_t=inf{s≥ 0: 𝒜_s≥ t∧𝒜_∞}. Moreover the convergence in distribution (<ref>) holds jointly with (<ref>). Since c_LR_L=c_L h^L_S_L-1= h^L_ S_L, Proposition <ref> implies the convergence in distribution of r_*^L=c_LR_L towards the variable 𝒜_∞, and this convergence holds jointly with the one stated in Proposition <ref>. Then, for 0≤ t≤ c_LR_L, L^-3/2σ^L_⌊ t/c_L⌋∧ R_L=L^-3/2min{j:h^L_j≥⌊ t/c_L⌋}=inf{s≥ 0: h^L_s≥ c_L⌊ t/c_L⌋} Since we know from Proposition <ref> that ( h^L_t∧ S_L)_t≥ 0 converges in distribution to (𝒜_t)_t≥ 0, it is now easy to obtain that (L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 converges in distribution to (η_t)_t≥ 0, and this convergence holds jointly with that of Proposition <ref> (very similar arguments are used in Section 4.4 of <cit.>). Then, by our definitions, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0 =( P^L_L^-3/2σ^L_⌊ t/c_L⌋∧ R_L, V^L_L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 and we just have to use (<ref>) together with the convergence of (L^-3/2σ^L_⌊ t/c_L⌋∧ R_L)_t≥ 0 towards (η_t)_t≥ 0 to get the desired result. Recall the processes (𝒫^∞_t,𝒱^∞_t)_t≥ 0 giving the perimeters and volumes of hulls in the Brownian plane. Then, for every r>0, the distribution of the pair (𝒫^∞_t, 𝒱^∞_t)_0≤ t≤ r in Corollary <ref> under (·| r^∞_*>r) is absolutely continuous with respect to the distribution of (𝒫^∞_t,𝒱^∞_t)_0≤ t≤ r. This follows by observing that (𝒫^∞_t,𝒱^∞_t)_t≥ 0 is obtained from the pair (S^+_t,𝕍_t) in (<ref>) by the same time change as the one giving (𝒫^∞_t, 𝒱^∞_t)_t≥ 0 from (𝒫_t,𝒱_t)_t≥ 0 (combine formula (56) in <cit.> with the description of the pair (𝒫^∞_t,𝒱^∞_t)_t≥ 0 in <cit.> — some care is needed here because the scaling constants in <cit.> are not the same as in the present work). The preceding absolute continuity property implies that the approximation (<ref>) holds when 𝒫_r and 𝒱_s are replaced by 𝒫^∞_r and 𝒱^∞_s respectively, a.s. for every r<r^∞_*. In other words, we can recover 𝒫^∞_r as a deterministic function of ( 𝒱^∞_s)_s∈[0,r] which is the same as the one giving 𝒫_r from (𝒱_s)_s∈ [0,r]. We have (𝒟^L, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0, r^L_*) (_(1), (𝒫_t∧ r_*,𝒱_t∧ r_*)_t≥ 0,r_*), where the convergence holds in distribution in 𝕄^2,1×(_+,_+^2)×_+. 
Moreover, this convergence holds jointly with (<ref>) and (<ref>). By a tightness argument using Corollary <ref>, we may assume that, along a sequence of values of L, the triplet (𝒟^L, (𝒫^L_t∧ r^L_*, 𝒱^L_t∧ r^L_*)_t≥ 0, r^L_*) converges in distribution to a limit which we may denote as (𝔻_(1), (𝒫^∞_t, 𝒱^∞_t)_t≥ 0,r^∞_*). By the Skorokhod representation theorem, we may assume that this convergence holds a.s. Since r^L_* is the rescaled graph distance between x^L_* and ∂𝒟^L (up to an error which is O(L^-1/2)), it is immediate that r^∞_*=r_*. On the other hand, for t<r^L_*, we have 𝒱^L_t= 3/4L^2𝒱^L_⌊ t/c_L⌋= ν^L(ℋ^L_⌊ t/c_L⌋), and Corollary <ref> then allows us to identify (𝒱^∞_t)_t≥ 0 with (𝒱_t∧ r_*)_t≥ 0. Finally, we saw that, for r<r^∞_*=r_*, 𝒫^∞_r must be given by the same deterministic function of ( 𝒱^∞_s)_s∈[0,r] as the one giving 𝒫_r from (𝒱_s)_s∈ [0,r], and we conclude that we have also (𝒫^∞_t)_t≥ 0=(𝒫_t∧ r_*)_t≥ 0, which completes the proof. We now fix b>0 and recall the notation r_b=inf{r∈[0,r_*): 𝒫_r=b}. For every L≥ 1, we also set k^L_b=inf{k∈{1,…, S_L-1}: P^L_k=⌊ bL⌋}, and r^L_b=c_Lh^L_k^L_b on the event where k^L_b<∞. In other words, r^L_b corresponds to the (rescaled) distance between the distinguished vertex and the boundary of the first explored region with perimeter ⌊ bL⌋. If k^L_b=∞, we take r^L_b=∞. We let 𝔻_(1)^(b) be distributed as 𝔻_(1) conditioned on the event {r_b<∞} and similarly, for every L≥ 1, we let 𝒟^L,(b) be distributed as 𝒟^L conditioned on {k^L_b<∞}. The convergence in distribution (𝒟^L,(b), r_b^L) ⟶ (𝔻_(1)^(b), r_b) holds in 𝕄^2, 1×ℝ. By the Skorokhod representation theorem, we may assume that the convergence of Theorem <ref> holds almost surely, as well as the convergences (<ref>) and (<ref>). Proposition <ref> will follow if we can verify that r^L_b ⟶ r_b a.s. as L→∞ (in particular 1_{r^L_b<∞}⟶1_{r_b<∞}). Set ξ^L_b:=inf{j∈{0,1,…,R_L}: 𝒫^L_j≥⌊ bL⌋}. From the (a.s.) convergence of (𝒫^L_t∧ r^L_*)_t≥ 0 to (𝒫_t∧ r_*)_t≥ 0, one infers that c_Lξ^L_b converges a.s. to r_b as L→∞, on the event {r_b<∞}. To be precise, we need to know that immediately after time r_b, the process 𝒫_t takes values greater than b, but this follows (via a time change argument) from the analogous property satisfied by the process 𝒫_t in Proposition <ref>. Argue on the event {r_b<∞}. Then, for L large we have ξ^L_b<∞ and σ^L_ξ^L_b≥ k^L_b (because P^L_σ^L_ξ^L_b=𝒫^L_ξ^L_b≥⌊ bL⌋). Hence, c_Lξ^L_b=c_Lh^L_σ^L_ξ^L_b≥ c_Lh^L_k^L_b=r^L_b and, since c_Lξ^L_b converges to r_b, lim sup_L→∞ r^L_b ≤ r_b. To get the analogous result for the liminf, fix ε∈(0,b) and argue on the event where r_b-ε<∞. Since 𝒫_r=𝒫_η_r for 0<r<r_*, we have sup_s≤η_r_b-ε𝒫_s=sup_t≤ r_b-ε𝒫_η_t=sup_r≤ r_b-ε𝒫_r≤ b-ε. Using the (a.s.) convergence (<ref>), we thus get that for L large, sup_s≤η_r_b-ε P^L_s∧ S_L<b-ε/2, or equivalently 1/Lsup_j≤ L^3/2η_r_b-ε P^L_j∧ S_L<b-ε/2, which implies k^L_b≥ L^3/2η_r_b-ε. Finally, r^L_b=c_L h^L_k^L_b≥ c_L h^L_⌊ L^3/2η_r_b-ε⌋ and the right-hand side converges as L→∞ to 𝒜_η_r_b-ε=r_b-ε. We conclude that lim inf_L→∞ r^L_b ≥ r_b-ε on the event where r_b-ε<∞. Since this holds for any ε>0, the proof is complete. § CONVERGENCE TO THE BROWNIAN ANNULUS §.§ Statement of the result We no longer assume that a=1. The definitions of 𝒟^L,(b) and 𝔻_(1)^(b) given before Proposition <ref> can then be extended. In particular, we write 𝒟^L,(b)_(a) for a Boltzmann triangulation in 𝕋^1,∙(⌊ aL⌋) conditioned on the event {k^L_b<∞}, where k^L_b is the first time at which the perimeter of the explored region in the peeling algorithm is equal to ⌊ bL⌋.
We keep the notation d_L for the (rescaled) distance on 𝒟^L,(b)_(a) and r_b^L for the d_L-distance between the distinguished vertex and the boundary of the explored region at time k^L_b. Similarly, 𝔻^(b)_(a) is distributed as 𝔻_(a) conditioned on the event that the process of hull perimeters hits b, and r_b is the corresponding hitting radius. We keep the notation D for the distance on 𝔻^(b)_(a). The convergence (<ref>) is then immediately extended to give (𝒟^L,(b)_(a), r_b^L) ⟶ (𝔻^(b)_(a), r_b) in distribution in 𝕄^2,1×ℝ. Recall that the metric space ℂ_(a,b) is defined as the complement of the (interior of the) hull H_r_b in 𝔻^(b)_(a), and is equipped with the (extension of the) intrinsic metric d^∘. The two boundaries of ℂ_(a,b) are ∂_0ℂ_(a,b)=∂𝔻^(b)_(a), and ∂_1ℂ_(a,b)=∂ H_r_b. To simplify notation, we write ℂ instead of ℂ_(a,b) in this section and the next one. We also let 𝒞^L be the unexplored triangulation at time k^L_b in the peeling algorithm applied to 𝒟^L,(b)_(a). We equip 𝒞^L with the graph distance scaled by the factor √(3/2) L^-1/2, which we denote by d_L^∘. Recall from Section <ref> the definition of the outer boundary ∂_0𝒞^L=∂𝒟^L,(b)_(a) and the inner boundary ∂_1𝒞^L. Our goal in this section is to prove the following theorem. Recall the Gromov-Hausdorff space (𝕄,d_𝙶𝙷) introduced in Section <ref>. The random metric spaces (𝒞^L, d_L^∘) converge in distribution towards (ℂ, d^∘) in (𝕄,d_𝙶𝙷). Before we proceed to the proof of Theorem <ref>, we start with some preliminaries. By the Skorokhod representation theorem, we may assume that the convergence (<ref>) holds almost surely, (𝒟^L,(b)_(a), r_b^L) ⟶ (𝔻^(b)_(a), r_b) a.s. as L→∞. In the following, it will be useful to argue on a fixed value of ω for which (<ref>) holds. In fact, we will need more. We observe that the triangulation 𝒞^L is Boltzmann distributed on the set 𝕋^2(⌊ aL⌋,⌊ bL⌋) of all triangulations with two boundaries of sizes ⌊ aL⌋ and ⌊ bL⌋, and therefore a and b play a symmetric role in the distribution of 𝒞^L. For any L≥ 0, we may introduce a random triangulation H_0^L, with a boundary, which is independent of 𝒞^L and distributed as the triangulation discovered by the peeling algorithm applied to a Boltzmann triangulation in 𝕋^1,∙(⌊ bL ⌋) at the first time when the perimeter of the explored region hits the value ⌊ aL⌋ (conditionally on the event that this hitting time is finite). Let 𝒟̃^L,(a)_(b) be the triangulation obtained by gluing H^L_0 onto 𝒞^L along the boundary (thus identifying ∂ H^L_0 and ∂_1𝒞^L and their distinguished boundary edges). See Fig. 3 for an illustration. By construction, 𝒟̃^L,(a)_(b) is a Boltzmann triangulation in 𝕋^1,∙(⌊ bL ⌋) conditioned on the event that the perimeter process (associated with the peeling algorithm) hits ⌊ aL⌋. Hence, by Proposition <ref>, 𝒟̃^L,(a)_(b) ⟶ 𝔻̃^(a)_(b), where it is implicit that distances on the spaces 𝒟̃^L,(a)_(b) are scaled by √(3/2)L^-1/2 and (𝔻̃^(a)_(b),D̃) is a (free pointed) Brownian disk with perimeter b conditioned on the event that the process of hull perimeters hits a. By a tightness argument, we may assume that (<ref>) holds jointly with (<ref>) along a subsequence of values of L, and, from now on, we restrict our attention to this subsequence. By the Skorokhod representation theorem, we may assume that we have both the almost sure convergences (<ref>) and 𝒟̃^L,(a)_(b)⟶𝔻̃^(a)_(b). From now on until the end of Section <ref>, we fix ω such that both these convergences hold. For this value of ω, we will prove that (𝒞^L, d_L^∘) converges to (ℂ, d^∘) in 𝕄.
§.§ Reduction to approximating spaces Since (<ref>) holds for the value ω that we have fixed, we may and will assume that _(a) and the spaces _(a) are embedded isometrically in the same compact metric space (E, Δ), in such a way that we have _(a)L→∞_(a), x_*^LL→∞ x_*, _(a)_(a). Note that the restriction of Δ to _(a) is the distance d_L and the restriction of Δ to _(a) is the distance D. As a first important remark, we observe that (since r_b is not a jump point of the perimeter process (𝒫_r)), Proposition <ref> and the fact that r^L_b⟶ r_b give , , A difficulty in the proof of Theorem <ref> comes from the fact that the behaviour of the spaces near the boundaries is not easily controlled. We will deal with this problem by first considering approximating subspaces which are obtained from , resp. from , by removing a neighborhood of the boundary ∂_1𝒞^L, resp. of ∂_1, and then showing that the convergence in Theorem <ref> can be reduced to that of the approximating subspaces. For δ>0, we introduce the space ={x∈ : d_L(x, )≥δ}={x∈ : Δ(x, )≥δ}, which is equipped with the restriction of the distance d_L^∘, and its continuous counterpart ={x∈ : D(x,∂_1)≥δ}={x∈ :Δ(x, )≥δ}, which is equipped with the restriction of the distance d^∘. In what follows, we always assume that δ is small enough so that is not empty and even contains points x such that d^∘(x,∂_1)>δ. Then, (from (<ref>)) it follows that is not empty at least when L is large. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ, then: Δ_𝙷(, ) 0. Let us fix ε>0 and η>0. By (<ref>), for every large enough L, we have Δ_𝙷(, )< η/2. If x∈ is such that Δ(x, )≥δ+η, then (by (<ref>) again), we can find a point x_L∈ such that Δ(x_L, x)≤ε∧η/2, and it follows that: Δ(x_L, )≥Δ(x, )-Δ(x, x_L)-Δ_𝙷(, )≥δ, so that x_L∈ and Δ(x, )≤Δ(x, x_L)≤ε. If x is at distance exactly δ from , we can approximate x by a point x' such that Δ(x', )>δ (we use our assumption that δ is not a local maximum of y↦Δ(y, ∂_1ℂ)), and, for L large enough, the same argument allows us to find a point x_L∈ such that Δ(x, x_L)≤ε. A compactness argument then gives sup_x∈Δ(x, )≤ε when L is large. Since ε was arbitrary, we have proved that sup_x∈Δ(x, )→0 as L→∞. A similar argument yields sup_x∈Δ(x, )→ 0, which completes the proof. Set 𝔻^(b)_(a),δ={x∈𝔻^(b)_(a): Δ(x,∂𝔻^(b)_(a))≥δ}, 𝒟^L,(b)_(a),δ={x∈𝒟^L,(b)_(a): Δ(x,∂𝒟^L,(b)_(a)) ≥δ}. Then, for every δ>0 that is not a local maximum of the function x↦Δ(x, ∂𝔻^(b)_(a)) on 𝔻^(b)_(a), we have lim_L→∞Δ_𝙷(𝒟^L,(b)_(a),δ,𝔻^(b)_(a),δ)=0, and consequently lim sup_L→∞( sup_x∈𝒟^L,(b)_(a)Δ(x,𝒟^L,(b)_(a),δ)) ≤sup_x∈𝔻^(b)_(a)Δ(x,𝔻^(b)_(a),δ). The first assertion of the lemma is proved by arguments similar to the proof of Lemma <ref>, and we omit the details. The second assertion is an easy consequence of the first one and the fact that Δ_𝙷(𝒟^L,(b)_(a),𝔻^(b)_(a)) tends to 0 (cf. (<ref>)). Remark. The first assertion of Lemma <ref> obviously requires our particular embedding of the spaces 𝒟^L,(b)_(a) and 𝔻^(b)_(a) in (E,Δ), but the second one holds independently of this embedding provided we replace Δ by d_L in the left-hand side and by D in the right-hand side. Let us turn to the proof of Theorem <ref>. For every δ>0, we have d_𝙶𝙷(𝒞^L, ℂ)≤ d_𝙶𝙷(𝒞^L, 𝒞^L_δ)_(A_L,δ) + d_𝙶𝙷(𝒞^L_δ, )_(A'_L,δ) + d_𝙶𝙷(, )_(A”_δ). where we recall that and are equipped with the distances d_L^∘ and d^∘ respectively, and and are equipped with the restrictions of these distances. Our goal is to prove that d_𝙶𝙷(𝒞^L, ℂ) tends to 0 as L tends to infinity. 
To this end, we will deal separately with each of the terms A_L,δ, A'_L,δ and A”_δ. Let us start with A”_δ. We have lim_δ→ 0d_𝙶𝙷(, )=0. It is enough to verify that sup_x∈ℂ d^∘(x,ℂ_δ) 0. If this does not hold, we can find α>0 and sequences x_n∈ℂ, δ_n⟶ 0, such that d^∘(x_n,ℂ_δ_n)≥α. By compactness we can assume that x_n⟶ x_∞∈ℂ, and we get that d^∘(x_∞,ℂ_δ)≥α/2 for every δ>0, which is absurd because we know that x_∞ must be the limit (with respect to d^∘) of a sequence of points in ∖∂_1=∪_δ>0_δ. Let us now discuss A_L,δ. We have lim_δ→ 0( lim sup_L→∞ d_𝙶𝙷(𝒞^L, 𝒞^L_δ))=0. We need to verify that lim_δ→ 0( lim sup_L→∞(sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ))) = 0. Here it is convenient to view 𝒞^L as a subset of the triangulation _(b) introduced in Section <ref>. We denote the rescaled distance on _(b) by d̃_L, and, for every δ>0, we set _(b),δ={x∈_(b): d̃_L(x,∂_(b)) ≥δ}. We then claim that, for δ>0 small enough, for every sufficiently large L, we have sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) =sup_x∈_(b)d̃_L(x,_(b),δ). Indeed, the properties ⟶ and =∂𝒟^L,(b)_(a)⟶ ensure that for δ>0 small, for every sufficiently large L, all points of ∂_0𝒞^L are at distance greater than δ from ∂_1𝒞^L, and it follows that 𝒞^L ∖𝒞^L_δ is identified with _(b)∖_(b),δ. Our claim easily follows. At this stage, we can use Lemma <ref> (with the roles of a and b interchanged) and the subsequent remark : except possibly for countably many values of δ, we have lim sup_L→∞(sup_x∈_(b)d̃_L(x,_(b),δ) )≤sup_x∈𝔻̃^(a)_(b)D̃(x,𝔻̃^(a)_(b),δ) where 𝔻̃^(a)_(b),δ={x∈𝔻̃^(a)_(b) : D̃(x,∂𝔻̃^(a)_(b))≥δ}. It follows from the preceding considerations that, except possibly for countably many values of δ, lim sup_L→∞(sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) )≤sup_x∈𝔻̃^(a)_(b)D̃(x,𝔻̃^(a)_(b),δ). The right-hand side tends to 0 as δ→ 0, which completes the proof. It remains to study the terms A'_L,δ. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ, we have lim_L→∞ d_𝙶𝙷(𝒞^L_δ, )=0. Let us postpone the proof of Lemma <ref> to the next section, and recall the bound (<ref>). By letting first L tend to infinity and then δ tend to 0, using Lemmas <ref>, <ref> and <ref>, we get lim sup_L→∞ d_𝙶𝙷(𝒞^L, ℂ)=0 which completes the proof of Theorem <ref>. Therefore, it only remains to prove Lemma <ref>. §.§ Proof of the key lemma In this section, we prove Lemma <ref>. We let δ>0 such that δ is not a local maximum of the function x↦Δ(x, ∂_1ℂ). Recalling Lemma <ref>, we define a correspondence between and by setting ℛ_L={(x_L,x')∈×ℂ_δ:Δ(x_L,x')≤Δ_𝙷(, )}. By the classical result expressing the Gromov-Hausdorff distance in terms of distortions of correspondences <cit.>, the statement of Lemma <ref> will follow if we can prove that the distorsion of ℛ_L tends to 0 as L→∞, or equivalently sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L|d^∘_L(x_L,y_L)-d^∘(x',y')| 0. We first verify that sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L(d^∘_L(x_L,y_L)-d^∘(x',y')) 0. To this end, we argue by contradiction. If (<ref>) does not hold, we can find η>0 and sequences L_k↑∞, and (x_L_k,x'_k), (y_L_k,y'_k) in ℛ_L_k such that d^∘_L_k(x_L_k,y_L_k)>d^∘(x'_k,y'_k) + η. We may assume that x'_k⟶ x'_∞ and y'_k⟶ y'_∞ where x'_∞,y'_∞∈ℂ_δ, and for k large we have also d^∘_L_k(x_L_k,y_L_k)>d^∘(x'_∞,y'_∞) + η/2. From (<ref>) and the definition of the correspondence ℛ_L, we also get that Δ(x_L_k,x'_∞)⟶ 0 and Δ(y_L_k,y'_∞)⟶ 0. Since d^∘ coincides with the (extension of the) intrinsic distance on ℂ∖∂_1, we can find a path γ from x'_∞ to y'_∞ in ℂ that does not hit the boundary ∂_1ℂ and whose length is bounded above by d^∘(x'_∞,y'_∞)+η/4. 
From part 1 of Lemma <ref>, if k is large, we can approximate γ by a path γ_L_k going from x_L_k to y_L_k in 𝒞^L_k, whose length is bounded above by d^∘(x'_∞,y'_∞)+3η/8, such that γ_L_k will not hit ∂_1𝒞^L_k (we use the convergence of ∂_1𝒞^L_k to ∂_1ℂ) and therefore stays in 𝒞^L_k. It follows that d^∘_L_k(x_L_k,y_L_k) is bounded above by the length of γ_L_k giving a contradiction with (<ref>). This completes the proof of (<ref>). In order to complete the proof of (<ref>), we still need to verify that sup_(x_L,x')∈ℛ_L,(y_L,y')∈ℛ_L(d^∘(x',y') -d^∘_L(x_L,y_L)) 0. This is slightly more delicate than the proof of (<ref>), and we will need the following lemma. Let η>0. There exist ε>0 and L_0≥ 0 such that, for any choice of x^L, y^L∈ with L≥ L_0, there is a path between x^ L and y^L in which stays at distance at least ε from and whose length is bounded by d_L^∘(x^L, y^ L)+η. [Proof of Lemma <ref>] Let us argue by contradiction. If the desired property does not hold, we can find sequences ε_n⟶ 0, L_n⟶∞, x_n,y_n∈𝒞^L_n_δ such that any path from x_n to y_n that stays at distance at least ε_n from ∂_1𝒞^L has length greater than d_L_n^∘(x_n, y_n)+η. By compactness, we may assume that x_n⟶ x_∞ and y_n⟶ y_∞ in (E,Δ) and, by (<ref>), we have x_∞,y_∞∈_δ. Additionally, since the diameters of (𝒞^L_n_δ,d^∘_L_n) are bounded (this follows from (<ref>) since the diameter of ℂ is finite), we can assume that ℓ_n:=d^∘_L_n(x_n,y_n) converges to some real ℓ_∞≥ 0. For every n, let γ_n be a geodesic from x_n to y_n. By a standard argument, we can extract from the sequence (γ_n(t∧ℓ_n),t∈[0,ℓ_∞+η/3]) a subsequence that converges uniformly (for the metric Δ) to a path γ_∞ =(γ_∞(t),t∈[0,ℓ_∞+η/3]) that connects x_∞ to y_∞ in 𝔻_(a)^(b). By (<ref>), γ_∞ takes values in ℂ. Moreover, from the analogous property for the discrete paths γ_n, we get that γ_∞ is 1-Lipschitz, meaning that Δ(γ_∞(s),γ_∞(t))≤ |t-s| for every s,t. It follows in particular that the length of γ_∞ is at most ℓ_∞+η/3. The path γ_∞ may hit ∂_1ℂ. Using Lemma <ref>, we can however find another path γ'_∞ connecting x_∞ to y_∞ in ℂ, which does not hit ∂_1ℂ and has length at most ℓ_∞+2η/3. The path γ'_∞ stays at positive distance α from ∂_1ℂ. Using part 2 of Lemma <ref> (and the fact that ∂_1𝒞^L converges to ∂_1ℂ for the Δ-Hausdorff measure, by (<ref>)), we can then, for n large enough, find a path γ'_n connecting x_n to y_n in 𝒞^L_n, with length smaller that d^∘_L_n(x_n,y_n)+η, that will stay at distance at least α/2 from ∂_1𝒞^L_n. This is a contradiction as soon as ε_n<α/2. Let us complete the proof of (<ref>). We again argue by contradiction. If (<ref>) does not hold, we can find η>0 and sequences L_k↑∞, and (x_L_k,x'_k), (y_L_k,y'_k) in ℛ_L_k such that d^∘(x'_k,y'_k) >d^∘_L_k(x_L_k,y_L_k)+ η. We may assume that x'_k⟶ x'_∞ and y'_k⟶ y'_∞ where x'_∞,y'_∞∈ℂ_δ. By Lemma <ref>, we can find ε>0 such that, for every large enough k, there is a path γ_L_k from x_L_k to y_L_k in 𝒞^L_k that stays at distance at least ε from ∂_1𝒞^L_k and whose length is bounded by d^∘_L_k(x_L_k,y_L_k)+η/2. We have Δ(x_L_k,x'_∞)⟶ 0 and Δ(y_L_k,y'_∞)⟶ 0, and, by part 2 of Lemma <ref>, we can (for k large) find a path γ'_k from x'_∞ to y'_∞ in 𝔻_(a)^(b) that stays at distance at least ε/2 from ∂_1ℂ (we again use the convergence of ∂_1𝒞^L_k to ∂_1ℂ) and has length smaller than d^∘_L_k(x_L_k,y_L_k)+ 3η/4. Hence d^∘(x'_∞,y'_∞)<d^∘_L_k(x_L_k,y_L_k)+ 3η/4, and also, for k large, d^∘(x'_k,y'_k)<d^∘_L_k(x_L_k,y_L_k)+ 7η/8. 
We get a contradiction, which completes the proof of (<ref>) and of Theorem <ref>. □ § CONVERGENCE OF BOUNDARIES AND VOLUME MEASURES In the last section, we showed that the sequence of metric spaces (, d^∘_L) converges in law towards (, d^∘) for the Gromov-Hausdorff topology. We will now explain how to extend this result to the setting of marked measure metric spaces. We write μ_L for the restriction to of the (scaled) counting measure ν^L, and μ for the restriction to of the volume measure 𝐕. The random marked measure metric spaces 𝒳^L:=((, d^∘_L), (∂_0, ∂_1), μ_L), converge towards 𝒴:=((, d^∘), (, ),μ) in distribution in the space 𝕄^2, 1. As in the previous section, we may restrict our attention to a sequence of values of L such that the convergences (<ref>) and (<ref>) hold almost surely. Fixing ω in the underlying probability space, we can assume that _(a) and the spaces _(a) are embedded isometrically in the same compact metric space (E, Δ), in such a way that the convergences (<ref>) hold for the Hausdorff distance associated with Δ, and moreover the measures ν_L converge weakly to 𝐕. As explained at the beginning of Section <ref>, we can also assume that (<ref>) holds. We recall the definition of and in (<ref>) and (<ref>), and we also set ∂_1 = {x∈ :Δ(x, )=δ} and ∂_1 = {x∈ : Δ(x, )=δ}. In what follows, we always assume that δ>0 is small enough so that Δ(∂_0ℂ,∂_1ℂ)>δ, and in particular is not empty. If δ>0 is not a local maximum of the function x↦Δ(x, ∂_1ℂ) on ℂ we have Δ_𝙷(∂_1 𝒞^L_δ, ∂_1 ℂ_δ) 0. Moreover, lim_δ→ 0( lim sup_L→∞μ_L(𝒞^L∖)) =0. The first part of the lemma is derived by arguments similar to the proof of Lemma <ref>. The second part follows from the weak convergence of ν_L to 𝐕 and the fact that 𝐕 puts no mass on ∂_1ℂ. We leave the details to the reader. We then set θ_L(δ)= max(Δ_𝙷(,),Δ_𝙷(∂_1 𝒞^L_δ, ∂_1 ℂ_δ),Δ_𝙷(∂_0 𝒞^L, ∂_0 ℂ)). By (<ref>), (<ref>) and Lemma <ref>, we have lim_L→∞θ_L(δ) = 0 except possibly for countably many values of δ. We then slightly modify the definition of the correspondence ℛ_L by setting ℛ'_L={(x_L,x')∈×ℂ_δ:Δ(x_L,x')≤θ_L(δ)}. The very same arguments as in Section <ref> show that the distortion of ℛ'_L tends to 0 as L→∞ (again except possibly for countably many values of δ). We will now extend ℛ'_L to a correspondence between and . We start by fixing η>0, and we set α_L(δ):=sup_x∈𝒞^L d^∘_L(x,𝒞^L_δ) , α(δ):= sup_x∈ℂ d^∘(x,ℂ_δ). By (<ref>) and (<ref>), we can choose δ∈(0,η) small enough so that we have both α(δ)≤η and α_L(δ)≤η for every sufficiently large L. Additionally, the second assertion of the lemma allows us to assume that μ_L(∖𝒞^L_2δ)<η for L large. In what follows, we fix δ∈(0,η) so that the preceding properties hold (and both δ and 2δ do not belong to the countable set that was excluded above). To simplify notation, we write α_L=α_L(δ) and α=α(δ). We define a correspondence between and by setting ℛ^*_L:={(x_L,x)∈×: ∃x̃_L∈, x̃∈ s.t. (x̃_L,x̃)∈ℛ'_L, d^∘_L(x_L,x̃_L)≤α_L, d^∘(x,x̃)≤α}. Then, we can easily bound the distortion dis(ℛ^*_L) of ℛ^*_L in terms of the the distortion dis(ℛ'_L) of ℛ'_L: if (x_L,x),(y_L,y)∈ℛ^*_L, we can find (x̃_L,x̃),(ỹ_L,ỹ)∈ℛ'_L such that |d^∘_L(x_L,y_L)-d^∘(x,y)|≤ 2α_L+|d^∘_L(x̃_L,ỹ_L)-d^∘(x̃,ỹ)| + 2α, and it follows that dis(ℛ^*_L)≤ 2(α+α_L)+ dis(ℛ'_L)≤ 4η + dis(ℛ'_L). To prove the desired convergence of 𝒳^L towards 𝒴 in 𝕄^2,1, we will use the definition of the Gromov-Hausdorff-Prokhorov distance d^2,1_𝙶𝙷𝙿. By a classical argument (cf. 
<cit.>), we can define a distance Δ^L,* on the disjoint union 𝒞^L⊔ℂ, such that the restriction of Δ^L,* to is d^∘_L, the restriction of Δ^L,* to is d^∘, and, for every x_L∈𝒞^L and x∈ℂ: Δ^L,*(x_L,x)= 1/2dis(ℛ^*_L)+(y_L,y)∈ℛ_L^*inf(d_L^∘(x_L,y_L)+d^∘(x,y)). Since (𝒞^L, d^∘_L) and (ℂ, d^∘) are embedded isometrically in (𝒞^L⊔ℂ, Δ^L,*), we can then use the definition of the Gromov-Hausdorff-Prokhorov distance to bound d^2,1_𝙶𝙷𝙿(𝒳^L,𝒴). We need to bound each of the four terms appearing in the infimum of the definition. We again use the notation Δ^L,*_𝙷, resp. Δ^L,*_𝙿, for the Hausdorff distance, resp. the Prokhorov distance, associated with Δ^L,*. First step. We verify that max(Δ^L,*_𝙷(,),Δ^L,*_𝙷(∂_1,∂_1),Δ^L,*_𝙷(∂_0,∂_0))≤1/2dis(ℛ^*_L) +max(α,α_L)+δ. First, it is immediate from the definition of Δ^L,* that Δ^L,*_𝙷(,)≤1/2dis(ℛ^*_L). Similarly, the fact that Δ_𝙷(∂_0 𝒞^L, ∂_0 ℂ)≤θ_L(δ) and the definition of ℛ'_L give Δ^L,*_𝙷(∂_0,∂_0)≤1/2dis(ℛ^*_L). Let us bound Δ^L,*_𝙷(∂_1,∂_1). Let x_L∈∂_1. From the definition of α_L, we can find x'_L∈ such that d^∘_L(x_L,x'_L)≤α_L. By considering a geodesic from x'_L to x_L, we can even assume that x'_L∈∂_1. It follows that there exists x'∈∂_1 such that Δ(x',x'_L)≤Δ_𝙷(∂_1,∂_1), hence (x'_L,x')∈ℛ'_L⊂ℛ^*_L. From the definition of Δ^L,*, we get Δ^L,*(x_L,x')≤1/2dis(ℛ^*_L) + d^∘_L(x_L,x'_L)≤1/2dis(ℛ^*_L)+α_L. Finally, since x'∈∂_1, we can find x”∈∂_1 such that Δ^L,*(x',x”)=δ, and we get Δ^L,*(x_L,x”)≤1/2dis(ℛ^*_L)+α_L+δ. In a symmetric manner, we can verify that, for any y∈∂_1, we can find y_L∈∂_1𝒞^L such that Δ^L,*(y_L,y)≤1/2dis(ℛ^*_L)+α+δ. This gives the desired bound for Δ^L,*_𝙷(∂_1,∂_1), thus completing the proof of (<ref>). As an immediate consequence, using also our estimate for dis(ℛ^*_L), we get lim sup_L→∞(max(Δ^L,*_𝙷(,),Δ^L,*_𝙷(∂_1,∂_1),Δ^L,*_𝙷(∂_0,∂_0)))≤ 4η. Second step. We now want to bound Δ^L,*_𝙿(μ_L,μ). We start by observing that, if L is large enough, if x∈ and x_L∈ are such that Δ(x_L,x)<δ/2, we have Δ^L,*(x_L,x)≤Δ(x_L,x)+ 1/2dis(ℛ^*_L)+θ_L(δ). Indeed, we can find x'∈ such that Δ(x_L,x')≤θ_L(δ) (and in particular (x_L,x')∈ℛ'_L), then Δ(x,x')≤Δ(x,x_L)+θ_L(δ)<δ provided that L is large enough so that θ_L(δ)<δ/2. Since x and x' both belong to and Δ(x,x')<δ, we must have Δ(x,x')=d^∘(x,x'), and Δ^L,*(x_L,x)≤1/2dis(ℛ^*_L)+d^∘(x,x')=1/2dis(ℛ^*_L)+Δ(x,x')≤1/2dis(ℛ^*_L)+θ_L(δ)+Δ(x_L,x), which gives our claim (<ref>). Let A be a measurable subset of . We have μ_L(A)≤μ_L(A∩𝒞^L_2δ)+ μ_L(∖𝒞^L_2δ) and we know that μ_L(∖𝒞^L_2δ)<η when L is large. On the other hand, by the weak convergence of ν_L to 𝐕, we have also for L large, μ_L(A∩𝒞^L_2δ)=ν_L(A∩𝒞^L_2δ)≤𝐕({x∈𝔻^(b)_(a):Δ(x,A∩𝒞^L_2δ)<δ/2})+δ/2. Since Δ_𝙷(𝒞^L_2δ,ℂ_2δ) tends to 0 as L→∞, the properties x∈𝔻^(b)_(a) and Δ(x,𝒞^L_2δ)<δ/2 imply (for L large) that x∈, and in particular we can replace 𝔻^(b)_(a) by and 𝐕 by μ in the last display. But then we can use (<ref>) to get that, for x∈_δ, Δ(x,A∩𝒞^L_2δ)<δ/2 ⇒ Δ^L,*(x,A∩𝒞^L_2δ)<δ/2 + 1/2dis(ℛ^*_L)+θ_L(δ). Finally, we have, for L large, μ_L(A)≤μ_L(A∩𝒞^L_2δ) +η ≤μ({x∈:Δ(x,A∩𝒞^L_2δ)<δ/2})+δ/2+η ≤μ({x∈:Δ^L,*(x,A)<δ/2+1/2dis(ℛ^*_L)+θ_L(δ)}) +δ/2+η ≤μ({x∈:Δ^L,*(x,A)<3η}) + 2η. A symmetric argument (left to the reader) shows that for L large, for any measurable subset A of , we have μ(A)≤μ_L({x∈:Δ^L,*(x_L,A)<3η}) + 2η. This proves that Δ_𝙿(μ_L,μ)≤ 3η when L is large. Since η was arbitrary, we can combine this with (<ref>) to get the desired convergence of d^2,1_𝙶𝙷𝙿(𝒳^L,𝒴) to 0. § THE COMPLEMENT OF TWO HULLS IN THE BROWNIAN SPHERE In this section, we fix r, r'>0. 
Recall that B_r^∙(_*) is the hull of radius r centered at _* in the free Brownian sphere _∞ (this hull is defined on the event {𝐃(_*,_0)>r}). It is shown in <cit.> that the intrinsic metric on B^∘_r(_*)= B^∙_r(_*)∖∂ B^∙_r(_*) has a.s. a continuous extension to its closure B^∙_r(_*). In the following, we implicitly endow B^∙_r(_*) with this extended intrinsic metric and we equip it with the restriction of the volume measure on _∞, the distinguished point _* and the boundary ∂ B^∙_r(_*), so that we can consider B_r^∙(_*) as a random variable in 𝕄^2, 1. Since _* and _0 play symmetric roles in the Brownian sphere <cit.>, we can similarly consider, on the event {𝐃(_*,_0)>r'}, the hull of radius r' centered at _0 in _∞, which we denote by B_r'^∙(_0) (this is defined as the complement of the connected component of _∞∖ B^∞_r'(_0) that contains _*). We can endow this space with its (extended) intrinsic metric as we did for B_r^∙(_*) and consider B_r'^∙(_0) as a random variable in 𝕄^2, 1 by equipping it with the restriction of the volume measure on _∞, the distinguished point _0 and the boundary ∂ B^∙_r'(_0). We also consider the perimeter of these hulls. The perimeter of B_r^∙(_*) is 𝒵_r^_*:=𝐏_r as given by formula (<ref>) and symmetrically the perimeter 𝒵_r'^_0 of B_r'^∙(_0) may be defined by the analog of (<ref>) where _* is replaced by _0: 𝒵_r'^_0:=lim_ε→ 01/ε^2Vol({x∈_∞∖ B^∙_r'(_0):𝐃(x,B^∙_r'(_0))<ε}). On the event where 𝐃(_*, _0)>r+r', the hulls B^∙_r(_*) and B^∙_r'(_0) are disjoint, and we consider the subspace 𝒞^_*, _0_r,r':= Closure(_∞∖(B_r^∙(_*)∪ B_r'^∙(_0))). It is shown in <cit.> that, a.s. on the event {𝐃(_*, _0)>r+r'}, the intrinsic metric on _∞∖(B_r^∙(_*)∪ B_r'^∙(_0)) has a continuous extension on 𝒞^_*, _0_r,r', which is a metric on this space (to be precise, <cit.> considers only the case r=r', but the argument is the same without this condition). So we can view 𝒞^_*, _0_r,r' as a random variable in 𝕄^2,1 by equipping this space with the restriction of the volume measure of _∞ and with the “boundaries” ∂ B^∙_r(_*) and ∂ B^∙_r'(_0). We finally recall the notion of a standard hull with radius r and perimeter z>0, as defined in <cit.>. Under the probability measure _0(· | 𝐃(_*, _0)>r+r'), the three spaces B_r^∙(_*), B_r'^∙(_0) and 𝒞^_*, _0_r, r' are conditionally independent given the pair (𝒵_r^_*,𝒵_r'^_0), and their conditional distribution can be described as follows. The spaces B_r^∙(_*) and B_r'^∙(_0) are standard hulls of respective radii r and r' and of respective perimeters 𝒵_r^_* and 𝒵_r'^_0. The space 𝒞^_*, _0_r,r' is a Brownian annulus with perimeters 𝒵_r^_* and 𝒵_r'^_0. This theorem is closely related to <cit.> (see also <cit.>). In fact, <cit.> (stated for r=r' but easily extended) already gives the conditional independence of B_r^∙(_*), B_r'^∙(_0) and 𝒞^_*, _0_r,r' given (𝒵_r^_*,𝒵_r'^_0), and identifies the conditional distribution of the hulls B_r^∙(_*) and B_r'^∙(_0). In order to complete the proof of Theorem <ref>, it only remains to identify the conditional distribution of 𝒞^_*, _0_r,r'. To do so, we will first state and prove a proposition, which may be viewed as a variant of our definition of the Brownian annulus. This proposition also corresponds to Definition 1.1 in <cit.>). We consider now the (free pointed) Brownian disk 𝔻_(a). Recall the notation H_r for the hull of radius r centered at the distinguished point x_* of 𝔻_a, which is defined on the event {r<r_*}. We also let C_r be the closure of 𝔻_(a)∖ H_r. 
In a way similar to the results recalled at the beginning of this section, one proves that the intrinsic metric on H_r∖∂ H_r (resp. on 𝔻_(a)∖ H_r) has a continuous extension to H_r (resp. to C_r) which is a metric on this space. The shortest way to verify these properties is to view the Brownian disk 𝔻_(a) as embedded in the Brownian sphere, as in Proposition <ref> above, and then to use the analogous properties in the Brownian sphere recalled at the beginning of this section (we omit the details). In the next proposition, we thus view H_r (resp. C_r) equipped with the extended intrinsic metric, with the marked subsets {x_*} and ∂ H_r (resp. with the boundaries ∂𝔻_(a) and ∂ H_r) and with the restriction of the volume measure on 𝔻_(a), as a random variable in 𝕄^2,1. Recall the notation 𝒫_r for the boundary size of H_r. Under (·| r<r_*), C_r and H_r are conditionally independent given 𝒫_r, H_r is distributed as a standard hull with radius r and perimeter 𝒫_r and C_r is distributed as a free Brownian annulus with perimeters a and 𝒫_r. By Proposition <ref>, we may and will assume that the Brownian disk 𝔻_(a) is constructed as the subspace B̌^∙__a(_*) of the free Brownian sphere _∞ under _0(·|_a<∞), where _a=inf{r∈(0,_*):_r-_*=a}, and we recall that B̌^∙__a(_*) is the closure of _∞∖ B^∙__a(_*). The distance between the distinguished point x_*=_0 and the boundary of _(a) is then r_*=_*-_a, where _*= 𝐃(_0, _*). Furthermore, conditioning 𝔻_(a) on the event {r<r_*} is then equivalent to arguing under _0(·| r+_a<_*). On the event {r+_a<_*}, the hull H_r is identified to the hull B^∙_r(_0) and C_r is identified to B̌^∙__a(_*)∖ B^∘_r(_0) where B^∘_r(_0) denotes the interior of B^∙_r(_0). In particular, the perimeter 𝒫_r of H_r is identified with the boundary size 𝒵^_0_r of B^∙_r(_0). As explained at the beginning of this section, we view B^∙_r(_0) as a random variable in 𝕄^2,1. Similarly <cit.> (with the roles of _* and _0 interchanged) allows us to view B̌^∙_r(_0):=_∞∖ B^∘_r(_0), equipped with the extended intrinsic metric, as a random variable in 𝕄^2,1 (the marked subsets are {_*} and ∂ B^∙_r(_0)). Fact. Under _0(· | _*>r), B^∙_r(_0) and B̌^∙_r(x_0) are independent conditionally given the perimeter 𝒵^_0_r, B^∙_r(_0) is distributed as a standard hull of radius r and perimeter 𝒵^_0_r, and B̌^∙_r(x_0) is distributed as a free Brownian disk of perimeter 𝒵^_0_r. This follows from <cit.>, up to the interchange of _* and _0. We then note that the event {r+_a<_*} is measurable with respect to B̌^∙_r(_0), and that, on this event, B̌^∙__a(_*)∖ B^∘_r(_0) is a function of B̌^∙_r(_0) (indeed B̌^∙__a(_*)∖ B^∘_r(_0) is obtained from B̌^∙_r(_0) by “removing” the hull of radius _a centered at _*). It follows from these observations and the preceding Fact that, under _0(·| r+_a<_*), B^∙_r(_0) and B̌^∙__a(_*)∖ B^∘_r(_0) are independent conditionally on 𝒵^_0_r. To get the statement of the proposition, it only remains to determine the conditional distribution of B̌^∙__a(_*)∖ B^∘_r(_0) knowing 𝒵^_0_r, under _0(·| r+_a<_*). To this end, we observe that, by construction, on the event {r+_a<_*}, B̌^∙__a(_*)∖ B^∘_r(_0)=B̌^∙_r(_0)∖ B^∘__a(_*), By the preceding Fact, B̌^∙_r(_0) is distributed under _0(· | _*>r) as a free pointed Brownian disk with perimeter 𝒵^_0_r (whose distinguished point is _*). Under _0(·| r+_a<_*), this Brownian disk is further conditioned on the event that there is a hull of perimeter a centered at the distinguished point _*, and _a is the first radius at which this occurs. 
By our definition of the Brownian annulus, this means that, under _0(·| r+_a<_*) and conditionally on 𝒵^_0_r, B̌^∙_r(_0)∖ B^∘__a(_*) is a Brownian annulus with perimeters 𝒵^_0_r and a. This completes the proof. [Proof of Theorem <ref>] As in the preceding proof (interchanging again the roles of _* and _0), we know that, under _0(· | _*>r) and conditionally on ^(_*)_r, B̌^∙_r(_*) is a (free pointed) Brownian disk with perimeter ^(_*)_r, whose distinguished point is _0. Under _0(· | _*>r+r'), this Brownian disk is conditioned on the event that the distinguished point is at distance greater than r' from the boundary. We can thus apply Proposition <ref>, with r replaced by r', to this Brownian disk, and it follows that, under _0(· | _*>r+r'), conditionally on the pair (^(_*)_r,^(_0)_r'), the space B̌^∙_r(_*)∖ B^∘_r'(_0) is a Brownian annulus with perimeters ^(_*)_r and ^(_0)_r'. This completes the proof since 𝒞^_*, _0_r,r'= B̌^∙_r(_*)∖ B^∘_r'(_0) by construction. § EXPLICIT COMPUTATIONS FOR THE LENGTH OF THE ANNULUS Recall the setting of Section <ref>. We define the length ℒ_(a, b) of the annulus _(a,b) as the distance between the two boundaries ∂_1ℂ_(a,b) and ∂_0ℂ_(a,b). Our goal in this section is to discuss the distribution of ℒ_(a, b). From formula (<ref>), we get that ℒ_(a, b) is given under the probability measure (·| r_b<∞) by the formula ℒ_(a, b)= r_*-r_b. From the discussion in the proof of Lemma <ref>, we see that the distribution of ℒ_(a, b) is the law of the last hitting time of b for a continuous-state branching process with branching mechanism ψ(λ)=√(8/3) λ^3/2 with initial distribution 3/2a^3/2 (a+z)^-5/2, conditionally on the fact that this process visits b. Unfortunately, we were not able to use this interpretation to derive an explicit analytic expression for the law of ℒ_(a, b), but the following proposition still gives some useful information. The first moment of ℒ_(a, b) is √(3π/2)(a+b)(√( a^-1)+√(b^-1)-√(a^-1+b^-1)). Furthermore, the probability of the event {ℒ_a, b>u} is asymptotic to 3(a+b)u^-2 when u→∞. To simplify notation, we consider first the case a=1 and we write ℒ_b=ℒ_1, b. For every x≥ 0, we write (Z_t)_t≥ 0 for a continuous-state branching process with branching mechanism ψ that starts from x under the probability measure _x. Similarly, we write (X_t)_t≥ 0 for a spectrally positive Lévy process with Laplace exponent ψ starting from x under _x, and we also set T_0=inf{t≥ 0:X_t=0}. By the Lamperti transformation, we have for every measurable function f:ℝ_+→ℝ_+ such that f(0)=0, 𝔼_x[∫_0^∞ f(Z_t)t]= 𝔼_x[∫_0^T_0 f(X_t)dt/X_t] On the other hand, the potential kernel of the Lévy process X killed upon hitting 0 is computed in the proof of Theorem VII.18 in <cit.>: for every measurable function g:ℝ_+→ℝ_+, 𝔼_x[∫_0^T_0g(X_t)dt]= ∫_0^∞ g(y)( W(y)-1_{x<y}W(y-x))dy, where W(u) is the scale function of the Lévy process -X, which is given here by W(u)=√((3/2π)u). Suppose then that Z starts with initial density 3/2 (1+x)^-5/2 under the probability measure . It follows from the preceding two displays that 𝔼[∫_0^∞ f(Z_t)dt] = 3/2√(3/2π)∫_0^∞dx/(1+x)^5/2∫_0^∞f(y)/y(√(y)-1_{x<y}√(y-x))dy = √(3/2π)∫_0^∞f(y)/y(√(y)- 3/2∫_0^y√(y-x)/(1+x)^5/2 dx)dy =√(3/2π)∫_0^∞f(y)/y(√(y)-y^3/2/1+y)dy = √(3/2π)∫_0^∞f(y)/√(y) (1+y)dy. Next let L_b:=sup{t≥ 0: Z_t=b}, with the convention sup∅=0. 
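Editorial aside (not part of the original text): the step above producing the factor y^3/2/(1+y) rests on the elementary identity 3/2∫_0^y √(y-x)(1+x)^-5/2 dx = y^3/2/(1+y), which can be checked as follows (written here in LaTeX; the same antiderivative, with a+x in place of 1+x, also yields the integral evaluated in the Appendix):

\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\sqrt{y-x}}{\sqrt{1+x}}\right)
  = -\,\frac{1+y}{2(1+x)^{3/2}\sqrt{y-x}}
  \quad\Longrightarrow\quad
  \int_0^y \frac{\mathrm{d}x}{(1+x)^{3/2}\sqrt{y-x}} = \frac{2\sqrt{y}}{1+y},
\end{align*}
and an integration by parts (with $v=-\tfrac{2}{3}(1+x)^{-3/2}$) gives
\begin{align*}
\frac{3}{2}\int_0^y \frac{\sqrt{y-x}}{(1+x)^{5/2}}\,\mathrm{d}x
  = \sqrt{y} - \frac{1}{2}\int_0^y \frac{\mathrm{d}x}{(1+x)^{3/2}\sqrt{y-x}}
  = \sqrt{y} - \frac{\sqrt{y}}{1+y}
  = \frac{y^{3/2}}{1+y}.
\end{align*}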
For u>0, the conditional probability that L_b>u given Z_u is the probability that Z started from Z_u visits b, and it was already noticed in the proof of Lemma <ref> that this probability is equal to 1-√((b-Z_u)^+/b). Hence, we get ℙ(L_b>u)=𝔼[1-√((b-Z_u)_+/b)], and we integrate with respect to u, using (<ref>), to get 𝔼[L_b]= √(3/2π)∫_0^∞(1-√((b-y)^+/b)) dy/√(y)(1+y). After some straightforward changes of variables, we arrive at 𝔼[L_b]= √(3π/2)(1-√(b)/π∫_ℝx^2/(1+b+x^2)(1+x^2)dx). The integral in the right-hand side is computed via a standard application of the residue theorem, and we get 𝔼[L_b]=√(3π/2)(1-√(1+b)-1/√(b)). As discussed at the beginning of the section, the first moment of ℒ_1,b is equal to [L_b| L_b>0], and we know from Lemma <ref> that (L_b>0)=(r_b<∞)=(1+b)^-1. Hence the first moment of ℒ_1,b is (1+b) [L_b], and we get the first assertion of the proposition when a=1. In the general case, we just have to use a scaling argument, noting that ℒ_(a, b) has the same law as √(a)ℒ_1, b/a. Let us turn to the second assertion. Again, by scaling, it suffices to consider the case a=1. We use the fact that _x(Z_u=0)=exp(-3x/2u^2), which follows from the explicit form of the Laplace transform of Z_u (see e.g. formula (1) in <cit.>). Then (Z_u>0)= 3/2∫_0^∞ (1-exp(-3x/2u^2)) dx/(1+x)^5/2= 9/4u^2∫_0^∞x dx/(1+x)^5/2 + O(u^-3) =3u^-2+O(u^-3), as u→∞. Again using the Laplace transform of Z_u, it is straightforward to verify that (Z_u∈(0,b])=O(u^-3) as u→∞. Since ℙ(Z_u>b)≤ℙ(L_b>u)≤ℙ(Z_u>0), we get that ℙ(L_b>u)=3u^-2+O(u^-3) as u→∞. Finally, the probability that ℒ_(a,b)>u is equal to (1+b)ℙ(L_b>u), which gives the desired asymptotics. § APPENDIX In this appendix, we prove Proposition <ref>. Recall the notation in formula (<ref>), and also set 𝒴_s=∑_i∈ I𝒵_s(ω^i), for every s≤ 0, in such a way that 𝒫_r=𝒴_r-r_* for r∈ (0,r_*]. It is easy to adapt the arguments of <cit.> (see in particular formula (34) in this reference) to get the formula [1_{r_*>r}exp(-λ𝒴_r-r_*)]=3r^-3 ∫_-∞^0 ds [𝒴_s exp(-(λ+3/2r^2) 𝒴_s)]. We already noticed in the proof of Lemma <ref> that 𝒴_0 has density 3/2a^3/2 (a+z)^-5/2. Using this and the special Markov property of the Brownian snake (see e.g. <cit.>), we get, for every μ>0 and s<0, [exp(-μ𝒴_s)] =∫_0^∞ z 3/2 a^3/2(a+z)^-5/2exp(-z_0(1-exp(-μ_s))). According to formula (6) in <cit.>, _0(1-exp(-μ_s))=(μ^-1/2 +√(2/3)|s|)^-2. If we substitute this in the previous display, and then differentiate with respect to μ, we arrive at [𝒴_s exp(-μ𝒴_s)]=3/2 a^3/2∫_0^∞ z z/(a+z)^-5/2(1+|s|√(2μ/3))^-3exp(-z(μ^-1/2 +√(2/3)|s|)^-2). We take μ=λ +3/2r^2 and use formula (<ref>) to obtain [1_{r_*>r}exp(-λ𝒴_r-r_*)] =9/2 r^-3a^3/2∫_0^∞ z z/(a+z)^5/2∫_0^∞ s (1+s√(2μ/3))^-3exp(-z(μ^-1/2 +√(2/3)s)^-2) =9/4√(3/2) r^-3a^3/2 μ^-3/2∫_0^∞ z/(a+z)^5/2 (1-e^-μ z) =3/2√(3/2) r^-3a^3/2 μ^-1/2∫_0^∞ z/(a+z)^3/2 e^-μ z Writing μ^-1/2 = 1/√(π) ∫_0^∞ x/√(x) e^-μ x, we arrive at [1_{r_*>r}exp(-λ𝒴_r-r_*)] =3/2 √(3/2π) r^-3 a^3/2∫_0^∞ y e^-μ y∫_0^y z/(a+z)^3/2 (y-z)^1/2. Finally, a straightforward calculation gives for y>0, ∫_0^y z/(a+z)^3/2 (y-z)^1/2= 2√(y)/√(a) (a+y), so that recalling μ=λ +3/2r^2, we have [1_{r_*>r}exp(-λ𝒴_r-r_*)] =3 √(3/2π) r^-3∫_0^∞ y e^-λ y √(y) a/a+y e^-3y/(2r^2). This completes the proof. 99 ABS L. Addario-Berry, Y. Wen, Joint convergence of random quadrangulations and their cores. Ann. Inst. H. Poincaré Probab. Stat. 53, 1890–1920 (2017) albenque2020scaling M. Albenque, N. Holden, X. Sun, Scaling limit of large triangulations of polygons. Electron. J. Probab. 25, Paper No. 135, 43 pp. 
(2020) ang2022moduli M. Ang, G. Rémy, X. Sun, The moduli of annuli in random conformal geometry. Preprint, arXiv:2203.12398 PercOnRandMapsI O. Angel, N. Curien, Percolations on random maps I: Half-plane models. Ann. Inst. H. Poincaré Probab. Statist. 51, 405–431 (2015) bernardiFusy O. Bernardi, É. Fusy, Bijections for planar maps with boundaries. J. Combin. Theory Ser. A 158, 176–227 (2018) Bet0 J. Bettinelli, Scaling limit of random planar quadrangulations with a boundary. Ann. Inst. H. Poincaré Probab. Stat. 51, 432–477 (2015) BrownianDiskBettineli J. Bettinelli, G. Miermont, Compact Brownian surfaces I. Brownian disks. Probab. Theory Related Fields 167, 555-614 (2017) BrownianSurfacesII J. Bettinelli, G. Miermont, Compact Brownian surfaces II. Orientable surfaces. Preprint arXiv:2212.12511 Bertoin J. Bertoin, Lévy Processes. Cambridge University Press, 1996. BBY D. Burago, Y. Burago, S. Ivanov, A Course in Metric Geometry. Graduate Studies in Mathematics, vol. 33. Amer. Math. Soc., Boston, 2001. peeling N. Curien, Peeling Random Planar Maps. Lecture notes from the 2019 Saint-Flour Probability Summer School. Lecture Notes in Mathematics 2335. Springer, Berlin, 2023. Hull N. Curien, J.-F. Le Gall, The hull process of the Brownian plane. Probab. Theory Related Fields 166, 187–231 (2016) ScalingUIPT N. Curien, J.-F. Le Gall, Scaling limits for the peeling process on random maps. Ann. Inst. H. Poincaré Probab. Stat. 53, 322–357 (2017) GM1 E. Gwynne, J. Miller, Scaling limit of the uniform infinite half-plane quadrangulation in the Gromov-Hausdorff-Prokhorov-uniform topology. Electron. J. Probab. 22, Paper No. 84, 47 pp. (2017) GM0 E. Gwynne, J. Miller, Convergence of the free Boltzmann quadrangulation with simple boundary to the Brownian disk. Ann. Inst. Henri Poincaré Probab. Stat. 55, 551–589 (2019) CSBPRandomSnakes J.-F. Le Gall, Spatial Branching Processes, Random Snakes and Partial Differential Equations. Lectures in Mathematics ETH Zürich. Birkhäuser, Boston, 1999. CactusBound J.-F. Le Gall, Geodesics in large planar maps and in the Brownian map. Acta Mathematica 205, 287–360 (2010) Le_Gall_2013 J.-F. Le Gall, Uniqueness and universality of the Brownian map. Ann. Probab. 41, 2880–2960 (2013) BesselProc J.-F. Le Gall, Bessel processes, the Brownian snake and super-Brownian motion. In: Séminaire de Probabilités XLVII. Lecture Notes Math. 2137. Springer 2015. BrowDiskandtheBrowSnake J.-F. Le Gall, Brownian disks and the Brownian snake. Ann. Inst. H. Poincaré Probab. Stat. 55, 237–313 (2019) Stars J.-F. Le Gall, Geodesic stars in random geometry. Ann. Probab. 50, 1013–1058 (2022) GrowthFrag J.-F. Le Gall, A. Riera, Growth-fragmentation processes in Brownian motion indexed by the Brownian tree. Ann. Probab. 48, 1742–1784 (2020) spine J.-F. Le Gall, A. Riera, Spine representations for non-compact models of random geometry. Probab. Theory Related Fields 181, 571–645 (2021) MarkovSpatial J.-F. Le Gall, A. Riera, Spatial Markov property in Brownian disks. To appear in Ann. Inst. H. Poincaré Probab. Stat., arXiv:2302.01138 miermont2011brownian G. Miermont, The Brownian map is the scaling limit of uniform random plane quadrangulations. Acta Math., 210, 319–401 (2013)
http://arxiv.org/abs/2407.12346v1
20240717064214
Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval
[ "Naoya Sogi", "Takashi Shibata", "Makoto Terao" ]
cs.CV
[ "cs.CV", "cs.IR", "cs.LG" ]
N. Sogi et al. Visual Intelligence Research Laboratories, NEC Corporation, Kanagawa, Japan naoya-sogi@nec.com, t.shibata@ieee.org, m-terao@nec.com Object-Aware Query Perturbation for Cross-Modal Image-Text Retrieval Naoya Sogi Takashi Shibata Makoto Terao July 22, 2024 ====================================================================== § ABSTRACT The pre-trained vision and language (V&L) models have substantially improved the performance of cross-modal image-text retrieval. In general, however, V&L models have limited retrieval performance for small objects because of the rough alignment between words and the small objects in the image. In contrast, it is known that human cognition is object-centric, and we pay more attention to important objects, even if they are small. To bridge this gap between the human cognition and the V&L model's capability, we propose a cross-modal image-text retrieval framework based on “object-aware query perturbation.” The proposed method generates a key feature subspace of the detected objects and perturbs the corresponding queries using this subspace to improve the object awareness in the image. In our proposed method, object-aware cross-modal image-text retrieval is possible while keeping the rich expressive power and retrieval performance of existing V&L models without additional fine-tuning. Comprehensive experiments on four public datasets show that our method outperforms conventional algorithms. § INTRODUCTION Cross-modal image-text retrieval is one of the mainstream tasks in pattern recognition <cit.> and has various applications including e-commerce <cit.> and video surveillance <cit.>. Recent pre-trained vision-and-language (V&L) models <cit.> have caused a paradigm shift. Those pre-trained models substantially outperform legacy cross-modal image-text retrieval by leveraging massive amounts of training data while equipping advantages such as zero-shotness and generalizability. Nevertheless, those V&L models are not any panacea; those V&L models have limited performance for small objects due to the rough alignment between text and the fine-grained localization of these small targets in the image. An example of retrieval results obtained by a sophisticated V&L model, BLIP2 <cit.>, on the Flickr 30K <cit.> dataset is shown in Fig. <ref>(a). The matching between the target objects and the input query text is weak because the objects, e.g., the person and the rollerblades, are small, resulting in incorrect retrieval results. Although the drawback of the retrieval performance degradation related to those small objects is critical in actual applications, it has been hidden behind the overwhelming performance gains of the recent pre-trained V&L models on the public benchmark datasets. In contrast, humans can effectively understand visual scenes by an ability that lies in their object-centered (or compositional) perception <cit.>. Owing to this human object-centered perception, the human visual function is highly robust to the size of the target object. For example, objects critical to understanding a scene, e.g., a small rescue caller in an image of a disaster scene, will be gazed at regardless of the target object's size. The lack of such object-awareness in V&L models is a major issue, especially for human-centered vision tasks, e.g., image retrieval. Although legacy image retrieval algorithms using object detection have also been proposed <cit.>, these methods cannot inherit the strengths of recent pre-trained V&L models. 
There is a strong demand for a general framework that bridges the gap between human perception and V&L models while inheriting the potential capabilities of pre-trained V&L models, including zero-shot capability and high absolute performance.

This paper proposes an object-aware query perturbation for cross-modal retrieval as a solution to this demand. Our Query-Perturbation (Q-Perturbation) increases the object awareness of V&L models by focusing on object information of interest even when the objects in an image are relatively small. An example of retrieval results by the proposed method is shown in Fig. <ref>(b). In contrast to the existing methods, our retrieval framework, i.e., a V&L model with Q-Perturbation, can perform accurate retrieval even for images that capture small objects.

The core mechanism of Q-Perturbation is to enhance queries with keys corresponding to object regions at the cross-attention modules in a V&L model. Naively enhancing queries in an existing V&L model is inconsistent with the learned weights and results in poor performance. Our query perturbation avoids this problem by enhancing queries only within a subspace constructed from keys corresponding to the target objects. That is, queries are first decomposed along subspaces representing object information and then enhanced using the decomposed components, i.e., the object information already retained in the original queries. This process naturally selects which queries to enhance, since it uses only the object information present in the original queries; a query that carries no object information is left unchanged. As a result, our Q-Perturbation improves the object awareness of a V&L model while inheriting its impressive performance. The proposed method is applicable to a variety of V&L models and, because it is training-free and easy to implement, avoids both the increased computational cost of data updates and the catastrophic forgetting caused by re-training. Comprehensive experiments on public datasets demonstrate the effectiveness of the proposed method.

The contributions of this paper are as follows: 1) We propose an object-aware query perturbation (Q-Perturbation) for cross-modal image-text retrieval. 2) We construct an object-aware retrieval framework by plugging Q-Perturbation into state-of-the-art V&L models, e.g., BLIP2 <cit.>, COCA <cit.>, and InternVL <cit.>. 3) Comprehensive experiments on public data demonstrate the effectiveness of the proposed method. In addition, we propose a new metric that mitigates the dataset bias regarding object size.

§ RELATED WORKS

Cross-Modal Image-Text Retrieval. Cross-modal image-text retrieval is a fundamental task in vision, and many methods have been proposed <cit.>. A standard approach learns a common image-text space from image and text datasets prepared in advance as training data <cit.>. To acquire an accurate image-text common space, several approaches have been introduced to improve the loss function and distance space, such as metric learning <cit.> and probabilistic distribution representations <cit.>. Various extensions have been proposed for fine-grained retrieval <cit.> by introducing object detection <cit.>, graph-based relationships between objects <cit.>, re-weighting strategies <cit.>, and attention mechanisms <cit.>. These existing approaches for fine-grained retrieval suggest that object awareness is an essential cue for locally detailed cross-modal retrieval. This paper focuses on object awareness for pre-trained V&L models <cit.>.
We propose a simple yet effective framework that efficiently improves the performance of image-text retrieval for images containing small but semantically important objects.

Pre-trained Vision & Language Model. In recent years, cross-modal image-text retrieval using pre-trained V&L models has emerged as a new paradigm <cit.>. Vision-language pre-training methods such as CLIP <cit.> learn vision-language alignment from large numbers of image-text pairs through a self-supervised task. Before this paradigm, image-text retrieval methods mainly focused on training algorithms using medium-sized datasets such as Flickr 30K and COCO. In contrast, recent cross-modal image-text retrieval using pre-trained V&L models outperforms those legacy methods, achieving high zero-shot performance on diverse datasets and enabling open-vocabulary retrieval. In particular, the recently proposed BLIP2 <cit.> achieves overwhelming performance in cross-modal image-text retrieval. However, it has recently been pointed out that V&L models such as CLIP have a weakness in localization, and several simple improvements have been proposed <cit.>. As described later, this weakness also affects cross-modal image-text retrieval. The proposed method is a novel framework that overcomes this weakness in cross-modal image-text retrieval while taking advantage of the potential capabilities of existing V&L models.

§ PERFORMANCE DEGRADATION INDUCED BY SMALL OBJECTS

We discuss the performance degradation induced by small objects in a target image. We compared the overall performance of text-image retrieval (called the overall category) on Flickr-30K <cit.> and Flickr-FG <cit.> with the performance on a subset consisting of images with only small detected objects (called the small object category). Specifically, we selected images for the small object category where the ratio of the largest detected object rectangle's area to the entire image's area is less than 10%. Figure <ref> shows the comparison results. Here, we used Recall@1 as the evaluation metric. As the relative area of the detected object rectangles becomes smaller, retrieval becomes more difficult and the retrieval performance degrades. The degradation is observed not only in Flickr-30K but also in Flickr-FG, where more detailed captions are annotated. Interestingly, we also find that this is a common drawback of recent pre-trained V&L models <cit.>. This drawback is underestimated by the standard evaluation metric, Recall@K, because the number of images belonging to the small object category is small. For example, the small object category accounts for about 1.5% of all images in Flickr-FG and Flickr-30K. We introduce an object-aware query perturbation to improve the poor performance on images with such small objects. Furthermore, we also discuss the effectiveness of the proposed method using an evaluation metric that accounts for this data bias regarding object size in Sec. <ref>.

§ METHOD

We first give an overview of our framework and the key idea of the query perturbation (Q-Perturbation), and then describe the details of Q-Perturbation for the Q-Former module in BLIP2 <cit.>. Finally, we explain how Q-Perturbation extends to other V&L models <cit.>.
§.§ Overview

Our proposed method aims to improve retrieval performance for images containing small objects by extending existing V&L models while inheriting their high expressiveness. In general, V&L models contain a cross-modal projection module that aligns language features with image features. For example, BLIP2 introduces the Q-Former architecture, a transformer-based cross-modal projector that combines image and text features.

An overview of our proposed framework is shown in Fig. <ref>. The proposed framework likewise builds on cross-modal projectors such as Q-Former <cit.> and QLLaMA <cit.>. As in standard cross-modal retrieval, the input text (the retrieval query) is encoded by a text encoder to obtain text feature representations. The proposed framework constructs an object-aware cross-modal projector by feeding localization cues obtained from object detection into the existing cross-modal projector, in addition to the image features obtained from existing image encoders. The key question is how to incorporate the localization cues into existing cross-modal projection modules. To do this, the proposed method introduces an object-aware query perturbation, called Q-Perturbation, that adaptively adjusts the queries according to the size and bounding boxes of the detected objects in the image.

§.§ Basic Idea: Object-Aware Query Perturbation

A standard approach to constructing a cross-modal projector in a transformer-based module is to introduce cross-attention. For example, in BLIP2, cross-attention is introduced in the Q-Former to integrate image features with queries, including learned queries and text tokens. Our goal is to incorporate object localization cues from the bounding boxes into cross-modal projection modules with minimal modification while taking advantage of the highly expressive power of existing V&L models. To this end, the following must be satisfied:

- Inheritability: The proposed method must achieve object-aware cross-modal projection without significantly destroying the already-learned weights and structures, in order to maximize the potential of existing V&L models.

- Flexibility: The proposed method must be scalable and flexible with respect to the size and number of detected objects.

In the proposed method, as shown in Fig. <ref>, Q-Perturbation perturbs the already-obtained queries to emphasize the object-region features using object localization, i.e., the bounding boxes. An object-aware cross-modal projection module can be implemented with minimal modifications by plugging the proposed Q-Perturbation module just before the cross-attention module. In the following, we first describe our Q-Perturbation module for the single-object case and then extend it to multiple objects.

§.§ Q-Perturbation Module for Single Objects

The proposed Q-Perturbation consists of three components: 1) Object Key Pooling, 2) K-Subspace Construction, and 3) Query Enhancement, as shown in Fig. <ref>. Let Q = {q_i}, K = {k_l}, and V = {v_l} be the set of queries before the cross-attention and the sets of keys and values from the image encoder, respectively. Here, i and l are indices that distinguish the tokens.

1) Object Key Pooling. First, we select the image tokens that overlap the detected bounding box obtained by object detection, as shown in Fig. <ref>. Our pooling step works in the same manner as ROI pooling in two-stage object detection <cit.>. In the following, the selected image tokens are called object image tokens, and the pooled set is denoted as K^obj = {k^obj_j}, where j indexes the tokens. Note that, if there are multiple objects in an image, we perform object key pooling for each detected object.
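
To make this pooling step concrete, here is a minimal Python sketch (not the authors' implementation; the function name, the row-major token layout, and the 16-pixel patch size are assumptions) that selects the image-encoder key tokens whose patches overlap a detected bounding box:

import torch

def pool_object_keys(keys, bbox, image_size, patch_size=16):
    # keys: (num_patches, dim) image-encoder keys in row-major patch order,
    #       with any [CLS]/class token assumed to be already removed.
    # bbox: (x_min, y_min, x_max, y_max) in pixels; image_size: (height, width).
    h, w = image_size
    grid_h, grid_w = h // patch_size, w // patch_size
    x_min, y_min, x_max, y_max = bbox
    col_min, col_max = int(x_min // patch_size), int(x_max // patch_size)
    row_min, row_max = int(y_min // patch_size), int(y_max // patch_size)
    selected = []
    for row in range(max(row_min, 0), min(row_max, grid_h - 1) + 1):
        for col in range(max(col_min, 0), min(col_max, grid_w - 1) + 1):
            selected.append(row * grid_w + col)  # row-major token index
    return keys[torch.tensor(selected, dtype=torch.long)]  # pooled set K^obj

# Toy usage: a 224x224 image with 16x16 patches gives a 14x14 grid (196 tokens).
keys = torch.randn(196, 768)
k_obj = pool_object_keys(keys, bbox=(30, 40, 90, 120), image_size=(224, 224))
print(k_obj.shape)  # torch.Size([30, 768]): 5 columns x 6 rows of patches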
2) K-Subspace Construction. Next, an object-aware key subspace, called the K-subspace, is generated from the pooled object image tokens k^obj_j for each object using Principal Component Analysis (for more detail, see supplementary material A.2). The K-subspace for each object is denoted by Φ=[ϕ_1, ϕ_2, ⋯, ϕ_p, ⋯ ], where ϕ_p is the p-th basis vector of the K-subspace. The K-subspace represents the essential information about the corresponding object.

3) Query Enhancement. Finally, each query in the set Q = {q_i} is enhanced by decomposing it into the K-subspace Φ and its complementary subspace using the basis vectors ϕ_p of the K-subspace:

q_i = q_i^∥ + q_i^⊥,   q_i^∥ = ΦΦ^Tq_i,   q_i^⊥ = ( I - ΦΦ^T) q_i,

where q_i^∥ and q_i^⊥ are the components of the query in the K-subspace Φ and in its complementary subspace, respectively. Our Q-Perturbation generates a perturbed query q̂_i that enhances the component belonging to the K-subspace, i.e., the object information retained in the original query. This enhancement is given by

q̂_i = q_i + αq_i^∥,

where α is the parameter that controls the perturbation magnitude. This enhancement can be seen as automatically selecting the queries relevant to an object and enriching them with object information, since the decomposition with the K-subspace extracts only the object information retained in the original queries. Thus, Q-Perturbation enhances the object awareness of V&L models without destroying their weights and structures, resulting in high inheritability.

§.§ Extension to Multiple Objects

So far, we have discussed Q-Perturbation for a single object. Our proposed Q-Perturbation can easily be extended to the case of multiple objects, i.e., it has high flexibility. For Q-Perturbation with multiple objects, the proposed method generates an object-aware K-subspace for each object, and then the decomposition and enhancement of each query are performed based on these K-subspaces. Let Φ_b, q_i,b^∥, and q_i,b^⊥ be the K-subspace corresponding to the b-th object, the query component belonging to the b-th K-subspace, and the query component orthogonal to the b-th K-subspace, respectively. Here, b is a subscript that distinguishes the detected objects in an image. Formally, Q-Perturbation with multiple K-subspaces can be expressed as

q̂_i = q_i + α∑_b w(S_b) q_i,b^∥ = q_i + α∑_b w(S_b) Φ_bΦ_b^Tq_i,

where S_b and w(S_b) are the area of the detected bounding box and the weight function for the b-th detected object. In this paper, for simplicity, the weight function is given by w(S̅_b)=β+γS̅_b, where β and γ are adjustment parameters and S̅_b is the normalized area. The normalized area S̅_b=S_b/S_I is calculated by dividing the area of each bounding box by the area S_I of the corresponding image. Note that, in this paper, no exhaustive search was performed for β and γ; their values were taken from { 0, ±0.5, ±1 }.
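
As an illustration of the equations above and of the multi-object extension, the following sketch (a simplified reading of the method, not the released code; the PCA-via-SVD routine, the default parameter values, and the tensor shapes are assumptions) builds a K-subspace per object and applies the weighted perturbation:

import torch

def k_subspace(k_obj, var_threshold=0.95):
    # PCA on the pooled object keys; rows of vh are the principal directions.
    # Assumes several tokens per object so the covariance is non-degenerate.
    centered = k_obj - k_obj.mean(dim=0, keepdim=True)
    _, s, vh = torch.linalg.svd(centered, full_matrices=False)
    var_ratio = (s ** 2) / (s ** 2).sum()
    p = int((var_ratio.cumsum(0) < var_threshold).sum().item()) + 1
    return vh[:p].T  # (dim, p) orthonormal basis Phi of the K-subspace

def q_perturbation(queries, k_objs, areas, image_area, alpha=1.0, beta=0.0, gamma=1.0):
    # queries: (num_queries, dim); k_objs: list of (n_b, dim) pooled key sets;
    # areas: bounding-box areas S_b; image_area: image area S_I.
    # alpha, beta, gamma are placeholder values, not the paper's tuned settings.
    perturbed = queries.clone()
    for k_obj, area in zip(k_objs, areas):
        phi = k_subspace(k_obj)                      # Phi_b
        weight = beta + gamma * (area / image_area)  # w(S_b) = beta + gamma * S_b / S_I
        perturbed = perturbed + alpha * weight * (queries @ phi @ phi.T)
    return perturbed

# Toy usage with two detected objects in a 224x224 image.
queries = torch.randn(32, 768)
k_objs = [torch.randn(30, 768), torch.randn(12, 768)]
q_hat = q_perturbation(queries, k_objs, areas=[4800.0, 1500.0], image_area=224 * 224)
print(q_hat.shape)  # torch.Size([32, 768])

With a single object and w(S_b) fixed to 1, the loop reduces to the single-object enhancement q̂_i = q_i + αq_i^∥.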
§.§ Beyond the Q-Perturbation Module for Q-Former

In the previous sections, our Q-Perturbation has been described for the Q-Former-based model <cit.>. Finally, we discuss its extension to other V&L models and its potential for tasks other than cross-modal image-text retrieval.

§.§.§ Extension to other pre-trained V&L models. Many pre-trained V&L models for cross-modal image-text retrieval have been proposed, and many more are expected in the future. Our proposed Q-Perturbation module is a general and versatile approach that perturbs the queries in cross-attention using the localization cues from object detection and the obtained key features. In this sense, the proposed Q-Perturbation is applicable to other existing V&L models such as COCA <cit.> and InternVL <cit.>, as shown in Fig. <ref>. As discussed later, combining the proposed method with these existing V&L models allows cross-modal image-text retrieval that is aware of smaller objects.

§.§.§ Other Tasks with Our Q-Perturbation. In general, the Q-Former proposed in BLIP2 is also used for other tasks, such as image captioning, by attaching LLMs in a later stage. In this sense, our proposed Q-Perturbation can be used to regenerate existing captions in a more object-aware manner. This extension suggests new possibilities for using pre-trained V&L models in a way that is closer to human perception.

§ EXPERIMENTS

§.§ Settings

§.§.§ Datasets and Experimental Protocols. We use two widely used benchmark datasets, i.e., Flickr-30K <cit.> and MSCOCO <cit.>, and fine-grained extensions of the two datasets, i.e., Flickr-FG and COCO-FG <cit.>. We adopt the commonly used Karpathy split <cit.> for all the datasets. The Flickr 30K dataset has 1,014 validation images and 1,000 test images. The MSCOCO dataset has 5,000 images each for validation and test. Each image has five description texts. Therefore, there are 5,000 (=1,000 images × 5 texts) and 25,000 (=5,000 images × 5 texts) test image-text pairs for Flickr-30K and MSCOCO, respectively. Flickr-FG and COCO-FG are extensions of the above two datasets in which the original description texts are replaced with fine-grained descriptions. These datasets also provide five fine-grained texts per image, as with Flickr-30K and MSCOCO, so there are again 5,000 and 25,000 test image-text pairs. We conducted text-to-image (T2I) and image-to-text (I2T) tasks; the T2I task is to find the paired image of an input text, and the I2T task is the reverse.

§.§.§ Evaluation Metrics. We use the standard evaluation metric Recall@K (R@K). R@K is the ratio of correct retrievals to all retrievals, where a retrieval is counted as correct if the paired image or texts are in the top K retrieval results. Following previous studies, K is set to 1, 5, and 10. We also use mean Recall@K (mR@K), which takes the size of objects in each image into account. mR@K is a harmonic mean of multiple R@Ks. As outlined in Fig. <ref> and elaborated on later, the retrieval difficulty depends on the object size in the image. However, traditional R@K does not consider object size and cannot correctly assess this challenge. To alleviate this problem, we propose an object-size-aware evaluation metric, mean R@K, which is the harmonic mean of the R@Ks on subsets split by object size. Each R@K is calculated on a subset of all text-image pairs, where each subset is determined by the largest normalized area S̅ (please see Sec. <ref>) of the detected objects in each image. In this paper, we generate ten subsets by splitting the largest areas into 10% bins and calculate the harmonic mean (mR@K) of the ten R@Ks computed on the subsets.
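
For concreteness, the sketch below computes mR@K as we read the definition above: R@K per 10%-wide bin of the largest normalized box area, combined by a harmonic mean. The input format (one ground-truth rank per test pair) and the decision to skip empty bins are assumptions, not part of the original definition:

from statistics import harmonic_mean

def recall_at_k(ranks, k):
    # Fraction of pairs whose ground-truth item is ranked within the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_recall_at_k(ranks, largest_norm_areas, k, num_bins=10):
    # ranks: rank of the ground-truth item for each test pair (1 = best).
    # largest_norm_areas: largest detected-box area / image area, per pair.
    per_bin_recalls = []
    for b in range(num_bins):
        lo, hi = b / num_bins, (b + 1) / num_bins
        subset = [r for r, a in zip(ranks, largest_norm_areas)
                  if lo <= a < hi or (b == num_bins - 1 and a == 1.0)]
        if subset:  # skip empty bins rather than divide by zero (assumption)
            per_bin_recalls.append(recall_at_k(subset, k))
    return harmonic_mean(per_bin_recalls)

# Toy usage: 6 test pairs with their ranks and largest normalized box areas.
ranks = [1, 3, 1, 12, 1, 1]
areas = [0.05, 0.08, 0.35, 0.07, 0.62, 0.91]
print(mean_recall_at_k(ranks, areas, k=1))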
§.§.§ Implementation Details. We use bounding boxes given by Flickr-Entities <cit.> and COCO-Entities <cit.>. Flickr-Entities and COCO-Entities provide bounding boxes corresponding to each text. Flickr-Entities boxes are drawn manually by annotators, while COCO-Entities boxes are generated semi-automatically, i.e., boxes are detected by Faster R-CNN <cit.> and matched to nouns in each text by manually defined rules. We also use boxes that are detected automatically by CO-DINO <cit.>. Note that object detection and image feature extraction can be carried out in advance for the T2I task, as they do not depend on the retrieval input text. To split the datasets and calculate mR@K, we use Flickr-Entities' bounding boxes for Flickr-30K and Flickr-FG. For COCO and COCO-FG, we use bounding boxes detected by CO-DINO instead of COCO-Entities, as the COCO-Entities boxes are produced by an older detector, Faster R-CNN.

We applied the proposed Q-Perturbation to all cross-attentions in the Q-Former of BLIP2. We used the EVA-CLIP-based <cit.> BLIP2 fine-tuned on COCO validation data <cit.>. Following the previous study <cit.>, we apply a re-ranking technique: we first select 64 candidates by image-text contrastive (ITC) scoring and then re-rank them by image-text matching (ITM). We also evaluated our method combined with the COCA and InternVL-G models <cit.>. We used the ViT-L/14-based <cit.> COCA model published in the OpenCLIP repository <cit.> and the InternVL-14B-224px model <cit.>. Our Q-Perturbation is applied to the last cross-attention for COCA and to all cross-attentions in QLLaMA for InternVL-G (see the supplementary material A for more detail).

Q-Perturbation has three hyperparameters: 1) the perturbation intensity α, 2) the weight function w(S_b), and 3) the dimension of the object-aware subspaces. These parameters were tuned by grid search on validation data using mR@1. The intensity α was selected from 2, 4, 6, 8, and 10 for BLIP2 and from 0.2, 0.4, 0.6, 0.8, and 1 for COCA and InternVL. The weight function was selected from five functions: a constant value (=1), S̅_b, S̅_b-0.5, 1-S̅_b, and 0.5-S̅_b. The dimensions of the subspaces were determined using the contribution ratio; we tuned the threshold of the contribution ratio over 0.85, 0.9, 0.95, and 0.99.
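
The ITC-then-ITM re-ranking protocol mentioned above can be sketched as follows; the scoring functions are placeholders standing in for a V&L model's contrastive and matching heads rather than an actual BLIP2 API:

import torch

def retrieve_with_reranking(text_feat, image_feats, itm_score_fn, shortlist_size=64):
    # text_feat: (dim,) L2-normalized text embedding.
    # image_feats: (num_images, dim) L2-normalized image embeddings.
    # itm_score_fn: callable(text_feat, image_index) -> matching score (placeholder).
    # Stage 1: image-text contrastive (ITC) scores, i.e., cosine similarities.
    itc_scores = image_feats @ text_feat
    shortlist = torch.topk(itc_scores, k=min(shortlist_size, len(itc_scores))).indices
    # Stage 2: re-rank only the shortlist with the more expensive ITM head.
    itm_scores = torch.tensor([itm_score_fn(text_feat, int(i)) for i in shortlist])
    order = torch.argsort(itm_scores, descending=True)
    return shortlist[order]  # image indices, best match first

# Toy usage with random features and a dummy ITM head.
torch.manual_seed(0)
images = torch.nn.functional.normalize(torch.randn(1000, 256), dim=-1)
text = torch.nn.functional.normalize(torch.randn(256), dim=-1)
dummy_itm = lambda t, idx: float(images[idx] @ t)  # placeholder for a real ITM head
ranking = retrieve_with_reranking(text, images, dummy_itm)
print(ranking[:5])

Restricting the cross-attention-based ITM scoring to the 64-candidate shortlist is what keeps this two-stage protocol tractable at retrieval time.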
§.§ Comparative results

§.§.§ Results for small objects. Table <ref> shows the evaluation results for small objects and for the overall category of each dataset. The proposed Q-Perturbation improves the retrieval performance for small objects, which in turn improves the overall performance. Conventional methods have difficulty taking small objects in the image into account; the results suggest that our method mitigates this difficulty by enhancing their object awareness. Figure <ref> shows examples of image retrieval results by BLIP2-ITC and BLIP2-ITC with Q-Perturbation. Our method ranks the correct image higher than the original BLIP2. This is mainly due to the advantage of our object-aware mechanism: the proposed method efficiently utilizes information from small objects, such as “an older man” and “a performer” in Fig. <ref> (a) and (b), respectively. Table <ref> shows comparative results with additional evaluation metrics. We can confirm the effectiveness of our method again, as it shows competitive results. Furthermore, our method stably improves performance even with noisy bounding boxes obtained automatically by a detector. This property is helpful when applying our method in real-world applications.

As we discussed in Sec. <ref>, our Q-Perturbation is applicable to other tasks, such as image captioning, since our method is plugged into a pre-trained V&L model. We carried out image captioning to visualize the effect of Q-Perturbation, as shown in Fig. <ref>. The output captions become more object-aware than the original BLIP2's results by emphasizing the corresponding objects, such as “glass”, with our Q-Perturbation. This object-aware property helps improve the retrieval performance of a V&L model.

§.§.§ Overall results. We then discuss performance comparisons between our method and various cross-modal image-text retrieval methods.

- Baselines: We compared our method with 1) object-aware models: SCAN <cit.>, IMRAM <cit.>, SHAN <cit.>, NAAF <cit.>, and 2) V&L pre-trained models: CLIP <cit.>, ALBEF <cit.>, UNITER <cit.>, BEIT-3 <cit.>, COCA <cit.>, InternVL <cit.>, BLIP2 <cit.>.

- Results: Table <ref> shows comparative results with various conventional methods. Our method achieves competitive results compared with state-of-the-art methods. These results suggest that our Q-Perturbation enhances the object awareness of the pre-trained V&L model while inheriting its impressive performance (for more comparisons with simple baselines, see supplementary material B).

§.§ Results with Other State-of-The-Art V&L Models

Our proposed Q-Perturbation can be plugged into any model that includes cross-attention layers. To confirm the versatility of our method, we apply Q-Perturbation to two V&L models, COCA and InternVL. Table <ref> shows the comparative results. Q-Perturbation is highly versatile, as it improves the retrieval performance of both models. COCA w/ Q-pert. shows only a slight improvement; this is because a single cross-attention layer is placed only at the end of the vision encoder, which limits the impact of the proposed method. For InternVL, the input features to QLLaMA, the cross-modal projector of InternVL, or its cross-attentions may focus on the global context, as QLLaMA is placed after a large vision encoder (a 6B-parameter ViT model). Even in such a difficult situation, the proposed method could enhance object awareness and improve performance.

§.§ Sensitivity on hyperparameters

We analyze the sensitivity of the performance to the hyperparameters, including the weight function, the scale factor, and the dimension of the subspaces, i.e., the threshold of the contribution ratio for PCA. In this experiment, we use the bounding boxes from Flickr-Entities. Table <ref> shows the evaluation results with varying hyperparameters. The proposed method achieves high performance when the parameters are selected appropriately, while its sensitivity to the hyperparameters is low. In this paper, we selected the hyperparameters from manually defined candidate values using validation data. Learning the hyperparameters would be an excellent future direction if training data were available.

§ LIMITATIONS

The key idea behind our method is to enhance object information in a V&L model's cross-attention features following an image encoder. It would, therefore, be problematic if object information had disappeared entirely in the image encoder. This paper uses all bounding boxes in each image. Filtering the bounding boxes or adjusting the scale α according to the input text is an excellent future direction to further improve retrieval performance. Note that this approach increases the computational cost at the retrieval stage, as we would need to extract bounding boxes and run a neural network, including the cross-attentions, after receiving the input.
This direction would involve a trade-off between scalability and retrieval performance.

§ CONCLUSIONS

In this paper, we proposed an object-aware query perturbation for cross-modal image-text retrieval. The key idea is to use query perturbation to focus on small target objects by enhancing the queries with the keys corresponding to object regions in the cross-attention module. The proposed method is applicable to various V&L models based on cross-modal projection, including COCA, BLIP2, and InternVL. Comprehensive experiments on four public datasets demonstrate the effectiveness of the proposed method.

Supplementary Material for Object-Aware Query Perturbation
http://arxiv.org/abs/2407.13622v1
20240718155804
"Misspecified $Q$-Learning with Sparse Linear Function Approximation: Tight Bounds on Approximation (...TRUNCATED)
[ "Ally Yalei Du", "Lin F. Yang", "Ruosong Wang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
"\nBeyond Dropout: Robust Convolutional Neural Networks Based on Local Feature Masking\n\n\n\n\n\n (...TRUNCATED)
http://arxiv.org/abs/2407.13485v1
20240718130412
Slope-semistability and moduli of coherent sheaves: a survey
[ "Mihai Pavel", "Matei Toma" ]
math.AG
[ "math.AG", "math.CV", "14D20, 32G13" ]
"\n\nSlope-semistability]Slope-semistability and moduli of coherent sheaves: a survey\n\n\nInstitute(...TRUNCATED)
http://arxiv.org/abs/2407.13525v1
20240718135859
"Short-period Post-Common Envelope Binaries with Balmer Emission from SDSS and LAMOST Based on ZTF P(...TRUNCATED)
[ "Lifang Li", "Fenghui Zhang" ]
astro-ph.SR
[ "astro-ph.SR" ]
"\n\nfirstpage–lastpage\nDiscussion: Effective and Interpretable Outcome Prediction by Training Sp(...TRUNCATED)
http://arxiv.org/abs/2407.13024v1
20240717212643
Length-preserving biconnection gravity and its cosmological implications
[ "Lehel Csillag", "Rattanasak Hama", "Mate Jozsa", "Tiberiu Harko", "Sorin V. Sabau" ]
gr-qc
[ "gr-qc" ]
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ndefinitionDefinition[section]\nremarkRemark[section]\ntheoremTheore(...TRUNCATED)
http://arxiv.org/abs/2407.12696v1
20240717161950
Three-loop evolution kernel for transversity operator
[ "A. N. Manashov", "S. Moch", "L. A. Shumilov" ]
hep-th
[ "hep-th", "hep-ph" ]
"\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \nC>c<#1#11-.25eml\(...TRUNCATED)
http://arxiv.org/abs/2407.12322v1
20240717054727
Frequency Guidance Matters: Skeletal Action Recognition by Frequency-Aware Mixed Transformer
[ "Wenhan Wu", "Ce Zheng", "Zihao Yang", "Chen Chen", "Srijan Das", "Aidong Lu" ]
cs.CV
[ "cs.CV" ]
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n University of North Carolina at C(...TRUNCATED)
http://arxiv.org/abs/2407.12216v1
20240716235007
Mindful-RAG: A Study of Points of Failure in Retrieval Augmented Generation
[ "Garima Agrawal", "Tharindu Kumarage", "Zeyad Alghamdi", "Huan Liu" ]
cs.IR
[ "cs.IR" ]
"\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n(...TRUNCATED)
http://arxiv.org/abs/2407.13472v1
20240718124622
"On the origin of univalent Mg$^+$ ions in solution and their role in anomalous anodic hydrogen evol(...TRUNCATED)
["Florian Deißenbeck","Sudarsan Surendralal","Mira Todorova","Stefan Wippermann","Jörg Neugebauer"(...TRUNCATED)
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
"\n\n\n\n§ ABSTRACT \nAqueous metal corrosion is a major economic concern in modern society. A phen(...TRUNCATED)

Latest arXiv

You can always access the latest arXiv papers via this dataset.

We update the dataset weekly, every Sunday, so it always provides the arXiv papers created in the past week.

The current dataset on the main branch contains the arXiv papers submitted from 2024-07-15 to 2024-07-22.

The data collection was conducted on 2024-07-22.

Use the dataset via:

ds = datasets.load_dataset('RealTimeData/arxiv_latest')
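
If you then want to inspect or subset the records, here is a minimal sketch (it assumes the default split is named 'train' and that the field names 'title' and 'primary_category' used below match the dataset schema):

import datasets

ds = datasets.load_dataset('RealTimeData/arxiv_latest', split='train')
print(ds[0]['title'])
cs_papers = ds.filter(lambda row: row['primary_category'].startswith('cs.'))
print(len(cs_papers))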

Previous versions

You can access previous versions by requesting different branches.

For example, you can load the 2023-08-20 version via:

ds = datasets.load_dataset('RealTimeData/arxiv_latest', revision = '2023-08-20')

Check all available versions by clicking the "Files and versions" button on the top bar.
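
You can also list the available branches programmatically; this sketch assumes the huggingface_hub client and its list_repo_refs helper:

from huggingface_hub import list_repo_refs

refs = list_repo_refs('RealTimeData/arxiv_latest', repo_type='dataset')
print([branch.name for branch in refs.branches])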
