Sparsity-certifying Graph Decompositions
Ileana Streinu1∗, Louis Theran2
1 Department of Computer Science, Smith College, Northampton, MA. e-mail: streinu@cs.smith.edu
2 Department of Computer Science, University of Massachusetts Amherst. e-mail: theran@cs.umass.edu
Abstract. We describe a new algorithm, the (k, `)-pebble game with colors, and use it to obtain a charac-
terization of the family of (k, `)-sparse graphs and algorithmic solutions to a family of problems concern-
ing tree decompositions of graphs. Special instances of sparse graphs appear in rigidity theory and have
received increased attention in recent years. In particular, our colored pebbles generalize and strengthen
the previous results of Lee and Streinu [12] and give a new proof of the Tutte-Nash-Williams characteri-
zation of arboricity. We also present a new decomposition that certifies sparsity based on the (k, `)-pebble
game with colors. Our work also exposes connections between pebble game algorithms and previous
sparse graph algorithms by Gabow [5], Gabow and Westermann [6] and Hendrickson [9].
1. Introduction and preliminaries
The focus of this paper is decompositions of (k, `)-sparse graphs into edge-disjoint subgraphs
that certify sparsity. We use graph to mean a multigraph, possibly with loops. We say that a
graph is (k, `)-sparse if no subset of n′ vertices spans more than kn′− ` edges in the graph; a
(k, `)-sparse graph with kn′− ` edges is (k, `)-tight. We call the range k ≤ `≤ 2k−1 the upper
range of sparse graphs and 0≤ `≤ k the lower range.
In this paper, we present efficient algorithms for finding decompositions that certify sparsity
in the upper range of `. Our algorithms also apply in the lower range, which was already ad-
dressed by [3, 4, 5, 6, 19]. A decomposition certifies the sparsity of a graph if the sparse graphs
and graphs admitting the decomposition coincide.
Our algorithms are based on a new characterization of sparse graphs, which we call the
pebble game with colors. The pebble game with colors is a simple graph construction rule that
produces a sparse graph along with a sparsity-certifying decomposition.
We define and study a canonical class of pebble game constructions, which correspond to
previously studied decompositions of sparse graphs into edge-disjoint trees. Our results provide
a unifying framework for all the previously known special cases, including Nash-Williams-
Tutte and [7, 24]. Indeed, in the lower range, canonical pebble game constructions capture the
properties of the augmenting paths used in matroid union and intersection algorithms [5, 6].
Since the sparse graphs in the upper range are not known to be unions or intersections of the
matroids for which there are efficient augmenting path algorithms, these do not easily apply in
∗ Research of both authors funded by the NSF under grants NSF CCF-0430990 and NSF-DARPA CARGO
CCR-0310661 to the first author.
2 Ileana Streinu, Louis Theran
Term Meaning
Sparse graph G Every non-empty subgraph on n′ vertices has ≤ kn′− ` edges
Tight graph G G = (V,E) is sparse and |V |= n, |E|= kn− `
Block H in G G is sparse, and H is a tight subgraph
Component H of G G is sparse and H is a maximal block
Map-graph Graph that admits an out-degree-exactly-one orientation
(k, `)-maps-and-trees Edge-disjoint union of ` trees and (k− `) map-graphs
`Tk Union of ` trees, each vertex is in exactly k of them
Set of tree-pieces of an `Tk induced on V ′ ⊂V Pieces of trees in the `Tk spanned by E(V ′)
Proper `Tk Every V ′ ⊂V contains ≥ ` pieces of trees from the `Tk
Table 1. Sparse graph and decomposition terminology used in this paper.
the upper range. Pebble game with colors constructions may thus be considered a strengthening
of augmenting paths to the upper range of matroidal sparse graphs.
1.1. Sparse graphs
A graph is (k, `)-sparse if for any non-empty subgraph with m′ edges and n′ vertices, m′ ≤
kn′− `. We observe that this condition implies that 0 ≤ ` ≤ 2k− 1, and from now on in this
paper we will make this assumption. A sparse graph that has n vertices and exactly kn−` edges
is called tight.
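For small graphs the definition can be checked directly from the counts. The brute-force sketch below (the function name `is_sparse` is ours, not the paper's) tests every vertex subset that spans at least one edge; it is exponential in n and meant only to make the counting condition concrete.

```python
from itertools import combinations

def is_sparse(n, edges, k, l):
    """Brute-force (k,l)-sparsity test on vertices 0..n-1.

    Checks that every subset of n' vertices spanning at least one edge
    spans at most k*n' - l edges. Exponential; illustration only."""
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            # Subsets spanning no edges are skipped: in the upper range
            # k*1 - l can be negative, and the count only constrains
            # subgraphs that actually contain edges.
            if spanned > 0 and spanned > k * size - l:
                return False
    return True
```

For example, the triangle is (2,3)-tight (3 = 2·3 − 3 edges), while K4, with 6 > 2·4 − 3 edges, fails the count on its full vertex set.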
For a graph G = (V,E), and V ′ ⊂ V , we use the notation span(V ′) for the number of edges
in the subgraph induced by V ′. In a directed graph, out(V ′) is the number of edges with the tail
in V ′ and the head in V −V ′; for a subgraph induced by V ′, we call such an edge an out-edge.
There are two important types of subgraphs of sparse graphs. A block is a tight subgraph of
a sparse graph. A component is a maximal block.
Table 1 summarizes the sparse graph terminology used in this paper.
1.2. Sparsity-certifying decompositions
A k-arborescence is a graph that admits a decomposition into k edge-disjoint spanning trees.
Figure 1(a) shows an example of a 3-arborescence. The k-arborescent graphs are described
by the well-known theorems of Tutte [23] and Nash-Williams [17] as exactly the (k,k)-tight
graphs.
A map-graph is a graph that admits an orientation such that the out-degree of each vertex is
exactly one. A k-map-graph is a graph that admits a decomposition into k edge-disjoint map-
graphs. Figure 1(b) shows an example of a 2-map-graph; the edges are oriented in one possible
configuration certifying that each color forms a map-graph. Map-graphs may be equivalently
defined (see, e.g., [18]) as having exactly one cycle per connected component.1
A (k, `)-maps-and-trees is a graph that admits a decomposition into k− ` edge-disjoint
map-graphs and ` spanning trees.
Another characterization of map-graphs, which we will use extensively in this paper, is as
the (1,0)-tight graphs [8, 24]. The k-map-graphs are evidently (k,0)-tight, and [8, 24] show that
the converse holds as well.
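The equivalent characterizations give a simple test: a multigraph admits an out-degree-exactly-one orientation exactly when every connected component has as many edges as vertices (one cycle per component). A sketch using union-find (the helper name `is_map_graph` is ours):

```python
from collections import Counter

def is_map_graph(n, edges):
    """Check whether a multigraph on vertices 0..n-1 is a map-graph,
    i.e. every connected component satisfies |E| = |V|."""
    parent = list(range(n))

    def find(x):
        # path-halving union-find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv

    verts = Counter(find(v) for v in range(n))       # vertices per component
    edge_count = Counter(find(u) for u, _ in edges)  # edges per component
    return all(edge_count[r] == verts[r] for r in verts)
```

Note that an isolated vertex makes the test fail, as it should: a vertex with no incident edge cannot have out-degree one.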
1 Our terminology follows Lovász in [16]. In the matroid literature map-graphs are sometimes known as bases
of the bicycle matroid or spanning pseudoforests.
Sparsity-certifying Graph Decompositions 3
Fig. 1. Examples of sparsity-certifying decompositions: (a) a 3-arborescence; (b) a 2-map-graph; (c) a
(2,1)-maps-and-trees. Edges with the same line style belong to the same subgraph. The 2-map-graph is
shown with a certifying orientation.
An `Tk is a decomposition into ` edge-disjoint (not necessarily spanning) trees such that each
vertex is in exactly k of them. Figure 2(a) shows an example of a 3T2.
Given a subgraph G′ of a `Tk graph G, the set of tree-pieces in G′ is the collection of the
components of the trees in G induced by G′ (since G′ is a subgraph each tree may contribute
multiple pieces to the set of tree-pieces in G′). We observe that these tree-pieces may come
from the same tree or be single-vertex “empty trees.” It is also helpful to note that the definition
of a tree-piece is relative to a specific subgraph. An `Tk decomposition is proper if the set of
tree-pieces in any subgraph G′ has size at least `.
Figure 2(a) shows a graph with a 3T2 decomposition; we note that one of the trees is an
isolated vertex in the bottom-right corner. The subgraph in Figure 2(b) has three black tree-
pieces and one gray tree-piece: an isolated vertex at the top-right corner, and two single edges.
These count as three tree-pieces, even though they come from the same black tree when the whole graph is considered. Figure 2(c) shows another subgraph; in this case there are three
gray tree-pieces and one black one.
Table 1 contains the decomposition terminology used in this paper.
The decomposition problem. We define the decomposition problem for sparse graphs as taking a graph as its input and producing as output a decomposition that can be used to certify sparsity. In this paper, we will study three kinds of outputs: maps-and-trees; proper `Tk decompositions;
and the pebble-game-with-colors decomposition, which is defined in the next section.
2. Historical background
The well-known theorems of Tutte [23] and Nash-Williams [17] relate the (k,k)-tight graphs to
the existence of decompositions into edge-disjoint spanning trees. Taking a matroidal viewpoint,
Fig. 2. (a) A graph with a 3T2 decomposition; one of the three trees is a single vertex in the bottom right
corner. (b) The highlighted subgraph inside the dashed contour has three black tree-pieces and one gray tree-piece. (c) The highlighted subgraph inside the dashed contour has three gray tree-pieces (one is a
single vertex) and one black tree-piece.
Edmonds [3, 4] gave another proof of this result using matroid unions. The equivalence of maps-
and-trees graphs and tight graphs in the lower range is shown using matroid unions in [24], and
matroid augmenting paths are the basis of the algorithms for the lower range of [5, 6, 19].
In rigidity theory a foundational theorem of Laman [11] shows that (2,3)-tight (Laman)
graphs correspond to generically minimally rigid bar-and-joint frameworks in the plane. Tay
[21] proved an analogous result for body-bar frameworks in any dimension using (k,k)-tight
graphs. Rigidity by counts motivated interest in the upper range, and Crapo [2] proved the
equivalence of Laman graphs and proper 3T2 graphs. Tay [22] used this condition to give a
direct proof of Laman’s theorem and generalized the 3T2 condition to all `Tk for k≤ `≤ 2k−1.
Haas [7] studied `Tk decompositions in detail and proved the equivalence of tight graphs and
proper `Tk graphs for the general upper range. We observe that aside from our new pebble-
game-with-colors decomposition, all the combinatorial characterizations of the upper range of
sparse graphs, including the counts, have a geometric interpretation [11, 21, 22, 24].
A pebble game algorithm was first proposed in [10] as an elegant alternative to Hendrickson's Laman graph algorithms [9]. Berg and Jordan [1] provided the formal analysis of the
pebble game of [10] and introduced the idea of playing the game on a directed graph. Lee and
Streinu [12] generalized the pebble game to the entire range of parameters 0≤ `≤ 2k−1, and
left as an open problem the use of the pebble game to find sparsity-certifying decompositions.
3. The pebble game with colors
Our pebble game with colors is a set of rules for constructing graphs indexed by nonnegative
integers k and `. We will use the pebble game with colors as the basis of an efficient algorithm
for the decomposition problem later in this paper. Since the phrase “with colors” is necessary
only for comparison to [12], we will omit it in the rest of the paper when the context is clear.
We now present the pebble game with colors. The game is played by a single player on a
fixed finite set of vertices. The player makes a finite sequence of moves; a move consists of the
addition and/or orientation of an edge. At any moment of time, the state of the game is captured
by a directed graph H, with colored pebbles on vertices and edges. The edges of H are colored
by the pebbles on them. While playing the pebble game all edges are directed, and we use the
notation vw to indicate a directed edge from v to w.
We describe the pebble game with colors in terms of its initial configuration and the allowed
moves.
Fig. 3. Examples of pebble game with colors moves: (a) add-edge. (b) pebble-slide. Pebbles on vertices
are shown as black or gray dots. Edges are colored with the color of the pebble on them.
Initialization: In the beginning of the pebble game, H has n vertices and no edges. We start
by placing k pebbles on each vertex of H, one of each color ci, for i = 1,2, . . . ,k.
Add-edge-with-colors: Let v and w be vertices with at least `+1 pebbles on them. Assume
(w.l.o.g.) that v has at least one pebble on it. Pick up a pebble from v, add the oriented edge vw
to E(H) and put the pebble picked up from v on the new edge.
Figure 3(a) shows examples of the add-edge move.
Pebble-slide: Let w be a vertex with a pebble p on it, and let vw be an edge in H. Replace
vw with wv in E(H); put the pebble that was on vw on v; and put p on wv.
Note that the color of an edge can change with a pebble-slide move. Figure 3(b) shows
examples. The convention in these figures, and throughout this paper, is that pebbles on vertices
are represented as colored dots, and that edges are shown in the color of the pebble on them.
From the definition of the pebble-slide move, it is easy to see that a particular pebble is
always either on the vertex where it started or on an edge that has this vertex as the tail. However,
when making a sequence of pebble-slide moves that reverse the orientation of a path in H, it is
sometimes convenient to think of this path reversal sequence as bringing a pebble from the end
of the path to the beginning.
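The two moves can be recorded directly as state transitions. The sketch below (the class name `PebbleGame` is ours) stores the pebbles on a vertex as a set of colors, which reflects invariant (I4) of Section 5 (a vertex never holds two pebbles of the same color); it performs no search for pebbles and assumes at most one directed edge per ordered vertex pair.

```python
class PebbleGame:
    """Minimal state of the (k, l)-pebble game with colors on n vertices:
    a sketch of the two moves only, with no legality search."""

    def __init__(self, n, k, l):
        self.k, self.l = k, l
        # one pebble of each color 0..k-1 on every vertex
        self.pebbles = [set(range(k)) for _ in range(n)]
        self.edges = {}   # directed edge (v, w) -> color of the pebble on it

    def add_edge(self, v, w):
        """Add-edge-with-colors: needs >= l+1 pebbles on {v, w} and at least
        one on v; the new edge vw is covered by a pebble taken from v."""
        assert len(self.pebbles[v]) + len(self.pebbles[w]) >= self.l + 1
        assert self.pebbles[v]
        color = max(self.pebbles[v])          # any pebble on v will do
        self.pebbles[v].remove(color)
        self.edges[(v, w)] = color

    def pebble_slide(self, v, w):
        """Pebble-slide: w holds a pebble and vw is an edge; reverse vw to wv,
        move the edge's old pebble to v, and cover wv with w's pebble."""
        p = min(self.pebbles[w])              # deterministic choice for the demo
        self.pebbles[w].remove(p)
        self.pebbles[v].add(self.edges.pop((v, w)))
        self.edges[(w, v)] = p                # note: the edge may change color
```

As in the text, a pebble-slide move can change the color of the edge it reverses, since the reversed edge takes w's pebble while the old pebble returns to v.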
The output of playing the pebble game is its complete configuration.
Output: At the end of the game, we obtain the directed graph H, along with the location
and colors of the pebbles. Observe that since each edge has exactly one pebble on it, the pebble
game configuration colors the edges.
We say that the underlying undirected graph G of H is constructed by the (k, `)-pebble game
or that H is a pebble-game graph.
Since each edge of H has exactly one pebble on it, the pebble game’s configuration partitions
the edges of H, and thus G, into k different colors. We call this decomposition of H a pebble-
game-with-colors decomposition. Figure 4(a) shows an example of a (2,2)-tight graph with a
pebble-game decomposition.
Let G = (V,E) be a pebble-game graph with the coloring induced by the pebbles on the edges,
and let G′ be a subgraph of G. Then the coloring of G induces a set of monochromatic con-
Fig. 4. A (2,2)-tight graph with one possible pebble-game decomposition. The edges are oriented to
show (1,0)-sparsity for each color. (a) The graph K4 with a pebble-game decomposition. There is an
empty black tree at the center vertex and a gray spanning tree. (b) The highlighted subgraph has two
black trees and a gray tree; the black edges are part of a larger cycle but contribute a tree to the subgraph.
(c) The highlighted subgraph (with a light gray background) has three empty gray trees; the black edges
contain a cycle and do not contribute a piece of tree to the subgraph.
Notation Meaning
span(V ′) Number of edges spanned in H by V ′ ⊂V ; i.e. |EH(V ′)|
peb(V ′) Number of pebbles on V ′ ⊂V
out(V ′) Number of edges vw in H with v ∈V ′ and w ∈V −V ′
pebi(v) Number of pebbles of color ci on v ∈V
outi(v) Number of edges vw colored ci for v ∈V
Table 2. Pebble game notation used in this paper.
nected subgraphs of G′ (there may be more than one of the same color). Such a monochromatic
subgraph is called a map-graph-piece of G′ if it contains a cycle (in G′) and a tree-piece of G′
otherwise. The set of tree-pieces of G′ is the collection of tree-pieces induced by G′. As with
the corresponding definition for `Tk s, the set of tree-pieces is defined relative to a specific sub-
graph; in particular a tree-piece may be part of a larger cycle that includes edges not spanned
by G′.
The properties of pebble-game decompositions are studied in Section 6, and Theorem 2
shows that each color must be (1,0)-sparse. The orientation of the edges in Figure 4(a) shows
this.
For example Figure 4(a) shows a (2,2)-tight graph with one possible pebble-game decom-
position. The whole graph contains a gray tree-piece and a black tree-piece that is an isolated
vertex. The subgraph in Figure 4(b) has a black tree and a gray tree, with the edges of the black
tree coming from a cycle in the larger graph. In Figure 4(c), however, the black cycle does not
contribute a tree-piece. All three tree-pieces in this subgraph are single-vertex gray trees.
In the following discussion, we use the notation peb(v) for the number of pebbles on v and
pebi(v) to indicate the number of pebbles of color ci on v.
Table 2 lists the pebble game notation used in this paper.
4. Our Results
We describe our results in this section. The rest of the paper provides the proofs.
Our first result is a strengthening of the pebble games of [12] to include colors. It says
that sparse graphs are exactly pebble game graphs. Recall that from now on, all pebble games
discussed in this paper are our pebble game with colors unless noted explicitly.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse
with 0≤ `≤ 2k−1 if and only if G is a pebble-game graph.
Next we consider pebble-game decompositions, showing that they are a generalization of
proper `Tk decompositions that extend to the entire matroidal range of sparse graphs.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game
graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each
is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse
graphs in the decomposition.
The (1,0)-sparse subgraphs in the statement of Theorem 2 are the colors of the pebbles; thus
Theorem 2 gives a characterization of the pebble-game-with-colors decompositions obtained
by playing the pebble game defined in the previous section. Notice the similarity between the
requirement that the set of tree-pieces have size at least ` in Theorem 2 and the definition of a
proper `Tk .
Our next results show that for any pebble-game graph, we can specialize its pebble game
construction to generate a decomposition that is a maps-and-trees or proper `Tk . We call these
specialized pebble game constructions canonical, and using canonical pebble game construc-
tions, we obtain new direct proofs of existing arboricity results.
We observe from Theorem 2 that maps-and-trees are special cases of the pebble-game decomposition: both spanning trees and spanning map-graphs are (1,0)-sparse, and each of the spanning
trees contributes at least one piece of tree to every subgraph.
The case of proper `Tk graphs is more subtle; if each color in a pebble-game decomposition
is a forest, then we have found a proper `Tk , but this class is a subset of all possible proper
`Tk decompositions of a tight graph. We show that this class of proper `Tk decompositions is
sufficient to certify sparsity.
We now state the main theorem for the upper and lower range.
Theorem 3 (Main Theorem (Lower Range): Maps-and-trees coincide with pebble-game
graphs). Let 0 ≤ ` ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, `)-
maps-and-trees.
Theorem 4 (Main Theorem (Upper Range): Proper `Tk graphs coincide with pebble-game
graphs). Let k≤ `≤ 2k−1. A graph G is a tight pebble-game graph if and only if it is a proper
`Tk with kn− ` edges.
As corollaries, we obtain the existing decomposition results for sparse graphs.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let `≤ k. A graph
G is tight if and only if it has a (k, `)-maps-and-trees decomposition.
Corollary 6 (Crapo [2], Haas [7]). Let k ≤ `≤ 2k−1. A graph G is tight if and only if it is a
proper `Tk .
Efficiently finding canonical pebble game constructions. The proofs of Theorem 3 and Theo-
rem 4 lead to an obvious algorithm with O(n3) running time for the decomposition problem.
Our last result improves on this, showing that a canonical pebble game construction, and thus
a maps-and-trees or proper `Tk decomposition can be found using a pebble game algorithm in
O(n2) time and space.
These time and space bounds mean that our algorithm can be combined with those of [12]
without any change in complexity.
5. Pebble game graphs
In this section we prove Theorem 1, a strengthening of results from [12] to the pebble game
with colors. Since many of the relevant properties of the pebble game with colors carry over
directly from the pebble games of [12], we refer the reader there for the proofs.
We begin by establishing some invariants that hold during the execution of the pebble game.
Lemma 7 (Pebble game invariants). During the execution of the pebble game, the following
invariants are maintained in H:
(I1) There are at least ` pebbles on V . [12]
(I2) For each vertex v, span(v)+out(v)+peb(v) = k. [12]
(I3) For each V ′ ⊂V , span(V ′)+out(V ′)+peb(V ′) = kn′. [12]
(I4) For every vertex v ∈V , outi(v)+pebi(v) = 1.
(I5) Every maximal path consisting only of edges with color ci ends in either the first vertex with
a pebble of color ci or a cycle.
Proof. (I1), (I2), and (I3) come directly from [12].
(I4) This invariant clearly holds at the initialization phase of the pebble game with colors.
That add-edge and pebble-slide moves preserve (I4) is clear from inspection.
(I5) By (I4), a monochromatic path of edges is forced to end only at a vertex with a pebble of
the same color on it. If there is no pebble of that color reachable, then the path must eventually
visit some vertex twice.
From these invariants, we can show that the pebble game constructible graphs are sparse.
Lemma 8 (Pebble-game graphs are sparse [12]). Let H be a graph constructed with the
pebble game. Then H is sparse. If there are exactly ` pebbles on V (H), then H is tight.
The main step in proving that every sparse graph is a pebble-game graph is the following.
Recall that by bringing a pebble to v we mean reorienting H with pebble-slide moves to reduce
the out degree of v by one.
Lemma 9 (The `+1 pebble condition [12]). Let vw be an edge such that H + vw is sparse. If
peb({v,w}) < `+1, then a pebble not on {v,w} can be brought to either v or w.
It follows that any sparse graph has a pebble game construction.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse
with 0≤ `≤ 2k−1 if and only if G is a pebble-game graph.
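Theorem 1 underlies the standard algorithmic use of the pebble game: insert the edges one at a time, gathering pebbles with pebble-slide moves, and reject an edge when `+1 pebbles cannot be collected. The sketch below implements this decision procedure without the colors, so it follows the uncolored games of [12] rather than this paper's construction; the helper name `bring_pebble` is ours. It searches along directed edges for a free pebble and reverses the path it finds, which is exactly a sequence of pebble-slide moves.

```python
from collections import defaultdict

def pebble_game_sparse(n, edges, k, l):
    """Decide (k, l)-sparsity of a multigraph by playing the (k, l)-pebble
    game (uncolored sketch). Vertices are 0..n-1; edges is a list of
    (u, v) pairs with u != v."""
    peb = [k] * n                 # free pebbles on each vertex
    out = defaultdict(list)       # out[v] = heads of directed edges leaving v

    def bring_pebble(root, other):
        # DFS from `root` along directed edges for a pebble not on
        # {root, other}; if found, reverse the path to move it to `root`.
        seen = {root, other}
        parent = {}
        stack = [root]
        while stack:
            v = stack.pop()
            for w in out[v]:
                if w in seen:
                    continue
                seen.add(w)
                parent[w] = v
                if peb[w] > 0:
                    peb[w] -= 1
                    x = w
                    while x != root:          # reverse the path root -> w
                        p = parent[x]
                        out[p].remove(x)
                        out[x].append(p)
                        x = p
                    peb[root] += 1
                    return True
                stack.append(w)
        return False

    for u, v in edges:
        while peb[u] + peb[v] < l + 1:
            if not ((peb[u] < k and bring_pebble(u, v)) or
                    (peb[v] < k and bring_pebble(v, u))):
                return False      # cannot collect l+1 pebbles: not sparse
        if peb[u] == 0:           # orient the new edge away from a pebble
            u, v = v, u
        peb[u] -= 1
        out[u].append(v)
    return True
```

Each successful search strictly increases the number of pebbles on the endpoints, so the gathering loop terminates; the (2,3) instance of this game is the classical test for Laman graphs.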
6. The pebble-game-with-colors decomposition
In this section we prove Theorem 2, which characterizes all pebble-game decompositions. We
start with the following lemmas about the structure of monochromatic connected components
in H, the directed graph maintained during the pebble game.
Lemma 10 (Monochromatic pebble game subgraphs are (1,0)-sparse). Let Hi be the sub-
graph of H induced by edges with pebbles of color ci on them. Then Hi is (1,0)-sparse, for
i = 1, . . . ,k.
Proof. By (I4), every vertex has out-degree at most one in Hi.
Lemma 11 (Tree-pieces in a pebble-game graph). Every subgraph of the directed graph H
in a pebble game construction contains at least ` monochromatic tree-pieces, and each of these
is rooted at either a vertex with a pebble on it or a vertex that is the tail of an out-edge.
Recall that an out-edge from a subgraph H ′ = (V ′,E ′) is an edge vw with v∈V ′ and vw /∈ E ′.
Proof. Let H ′ = (V ′,E ′) be a non-empty subgraph of H, and assume without loss of generality
that H ′ is induced by V ′. By (I3), out(V ′)+ peb(V ′) ≥ `. We will show that each pebble and
out-edge tail is the root of a tree-piece.
Consider a vertex v ∈ V ′ and a color ci. By (I4) there is a unique monochromatic directed
path of color ci starting at v. By (I5), if this path ends at a pebble, it does not have a cycle.
Similarly, if this path reaches a vertex that is the tail of an out-edge also in color ci (i.e., if the
monochromatic path from v leaves V ′), then the path cannot have a cycle in H ′.
Since this argument works for any vertex in any color, for each color there is a partitioning
of the vertices into those that can reach each pebble, out-edge tail, or cycle. It follows that each
pebble and out-edge tail is the root of a monochromatic tree, as desired.
Applied to the whole graph Lemma 11 gives us the following.
Lemma 12 (Pebbles are the roots of trees). In any pebble game configuration, each pebble of
color ci is the root of a (possibly empty) monochromatic tree-piece of color ci.
Remark: Haas showed in [7] that in an `Tk , a subgraph induced by n′ ≥ 2 vertices with m′
edges has exactly kn′−m′ tree-pieces in it. Lemma 11 strengthens Haas’ result by extending it
to the lower range and giving a construction that finds the tree-pieces, showing the connection
between the `+1 pebble condition and the hereditary condition on proper `Tk .
We conclude our investigation of arbitrary pebble game constructions with a description of
the decomposition induced by the pebble game with colors.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game
graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each
is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse
graphs in the decomposition.
Proof. Let G be a pebble-game graph. The existence of the k edge-disjoint (1,0)-sparse sub-
graphs was shown in Lemma 10, and Lemma 11 proves the condition on subgraphs.
For the other direction, we observe that a color ci with ti tree-pieces in a given subgraph on n′ vertices can span at most n′− ti edges; summing over all the colors shows that a graph with a pebble-game
decomposition must be sparse. Apply Theorem 1 to complete the proof.
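The counting in this direction can be written in one line. If color ci spans m′i edges and contributes ti tree-pieces in a subgraph on n′ vertices, then the (1,0)-sparsity of each color bounds m′i by n′− ti, and the hypothesis that every subgraph contains at least ` tree-pieces gives ∑ti ≥ `, so

```latex
m' \;=\; \sum_{i=1}^{k} m'_i \;\le\; \sum_{i=1}^{k} \bigl(n' - t_i\bigr)
   \;=\; kn' - \sum_{i=1}^{k} t_i \;\le\; kn' - \ell ,
```

which is exactly the (k, `)-sparsity count for the subgraph.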
Remark: We observe that a pebble-game decomposition for a Laman graph may be read out
of the bipartite matching used in Hendrickson’s Laman graph extraction algorithm [9]. Indeed,
pebble game orientations have a natural correspondence with the bipartite matchings used in
Maps-and-trees are a special case of pebble-game decompositions for tight graphs: if there
are no cycles in ` of the colors, then the trees rooted at the corresponding ` pebbles must be
spanning, since they have n− 1 edges. Also, if each color forms a forest in an upper range
pebble-game decomposition, then the tree-pieces condition ensures that the pebble-game de-
composition is a proper `Tk .
In the next section, we show that the pebble game can be specialized to correspond to maps-
and-trees and proper `Tk decompositions.
7. Canonical Pebble Game Constructions
In this section we prove the main theorems (Theorem 3 and Theorem 4), continuing the inves-
tigation of decompositions induced by pebble game constructions by studying the case where a
minimum number of monochromatic cycles are created. The main idea, captured in Lemma 15
and illustrated in Figure 6, is to avoid creating cycles while collecting pebbles. We show that
this is always possible, implying that monochromatic map-graphs are created only when we
add more than k(n′−1) edges to some set of n′ vertices. For the lower range, this implies that
every color is a forest. Every decomposition characterization of tight graphs discussed above
follows immediately from the main theorem, giving new proofs of the previous results in a
unified framework.
In the proof, we will use two specializations of the pebble game moves. The first is a modi-
fication of the add-edge move.
Canonical add-edge: When performing an add-edge move, cover the new edge with a color
that is on both vertices if possible. If not, then take the highest numbered color present.
The second is a restriction on which pebble-slide moves we allow.
Canonical pebble-slide: A pebble-slide move is allowed only when it does not create a
monochromatic cycle.
We call a pebble game construction that uses only these moves canonical. In this section
we will show that every pebble-game graph has a canonical pebble game construction (Lemma
14 and Lemma 15) and that canonical pebble game constructions correspond to proper `Tk and
maps-and-trees decompositions (Theorem 3 and Theorem 4).
We begin with a technical lemma that motivates the definition of canonical pebble game
constructions. It shows that the situations disallowed by the canonical moves are all the ways
for cycles to form in the lowest ` colors.
Lemma 13 (Monochromatic cycle creation). Let v ∈ V have a pebble p of color ci on it and
let w be a vertex in the same tree of color ci as v. A monochromatic cycle colored ci is created
in exactly one of the following ways:
(M1) The edge vw is added with an add-edge move.
(M2) The edge wv is reversed by a pebble-slide move and the pebble p is used to cover the reverse
edge vw.
Proof. Observe that the preconditions in the statement of the lemma are implied by Lemma 7.
By Lemma 12 monochromatic cycles form when the last pebble of color ci is removed from a
connected monochromatic subgraph. (M1) and (M2) are the only ways to do this in a pebble
game construction, since the color of an edge only changes when it is inserted the first time or
a new pebble is put on it by a pebble-slide move.
Fig. 5. Creating monochromatic cycles in a (2,0)-pebble game. (a) A type (M1) move creates a cycle by
adding a black edge. (b) A type (M2) move creates a cycle with a pebble-slide move. The vertices are
labeled according to their role in the definition of the moves.
Figure 5(a) and Figure 5(b) show examples of (M1) and (M2) map-graph creation moves,
respectively, in a (2,0)-pebble game construction.
We next show that if a graph has a pebble game construction, then it has a canonical peb-
ble game construction. This is done in two steps, considering the cases (M1) and (M2) sepa-
rately. The proof gives two constructions that implement the canonical add-edge and canonical
pebble-slide moves.
Lemma 14 (The canonical add-edge move). Let G be a graph with a pebble game construc-
tion. Cycle creation steps of type (M1) can be eliminated in colors ci for 1 ≤ i ≤ `′, where
`′ = min{k, `}.
Proof. For add-edge moves, cover the edge with a color present on both v and w if possible. If
this is not possible, then there are `+1 distinct colors present. Use the highest numbered color
to cover the new edge.
Remark: We note that in the upper range, there is always a repeated color, so no canonical
add-edge moves create cycles in the upper range.
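The color choice in the canonical add-edge move is purely local. A small sketch (the helper name `canonical_edge_color` is ours; colors are integers, and the inputs are the sets of pebble colors currently on the two endpoints):

```python
def canonical_edge_color(pebbles_v, pebbles_w):
    """Color for a canonical add-edge move: a color present on both
    endpoints if one exists, otherwise the highest-numbered color present."""
    shared = pebbles_v & pebbles_w
    if shared:
        return max(shared)   # any shared color works; pick deterministically
    return max(pebbles_v | pebbles_w)
```

When no color is shared, the `+1 pebbles on the endpoints carry `+1 distinct colors, so in the upper range (where ` ≥ k) some color is always repeated and the shared branch applies, matching the remark above.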
The canonical pebble-slide move is defined by a global condition. To prove that we obtain
the same class of graphs using only canonical pebble-slide moves, we need to extend Lemma
9 to only canonical moves. The main step is to show that if there is any sequence of moves that
reorients a path from v to w, then there is a sequence of canonical moves that does the same
thing.
Lemma 15 (The canonical pebble-slide move). Any sequence of pebble-slide moves leading
to an add-edge move can be replaced with one that has no (M2) steps and allows the same
add-edge move.
In other words, if it is possible to collect `+ 1 pebbles on the ends of an edge to be added,
then it is possible to do this without creating any monochromatic cycles.
Figure 7 and Figure 8 illustrate the construction used in the proof of Lemma 15. We call this
the shortcut construction by analogy to matroid union and intersection augmenting paths used
in previous work on the lower range.
Figure 6 shows the structure of the proof. The shortcut construction removes an (M2) step
at the beginning of a sequence that reorients a path from v to w with pebble-slides. Since one
application of the shortcut construction reorients a simple path from a vertex w′ to w, and a
path from v to w′ is preserved, the shortcut construction can be applied inductively to find the
sequence of moves we want.
Fig. 6. Outline of the shortcut construction: (a) An arbitrary simple path from v to w with curved lines
indicating simple paths. (b) An (M2) step. The black edge, about to be flipped, would create a cycle,
shown in dashed and solid gray, of the (unique) gray tree rooted at w. The solid gray edges were part
of the original path from (a). (c) The shortened path to the gray pebble; the new path follows the gray
tree all the way from the first time the original path touched the gray tree at w′. The path from v to w′ is
simple, and the shortcut construction can be applied inductively to it.
Proof. Without loss of generality, we can assume that our sequence of moves reorients a simple
path in H, and that the first move (the end of the path) is (M2). The (M2) step moves a pebble
of color ci from a vertex w onto the edge vw, which is reversed. Because the move is (M2), v
and w are contained in a maximal monochromatic tree of color ci. Call this tree H ′i , and observe
that it is rooted at w.
Now consider the edges reversed in our sequence of moves. As noted above, before we make
any of the moves, these sketch out a simple path in H ending at w. Let z be the first vertex on
this path in H ′i . We modify our sequence of moves as follows: delete, from the beginning, every
move before the one that reverses some edge yz; prepend onto what is left a sequence of moves
that moves the pebble on w to z in H ′i .
Sparsity-certifying Graph Decompositions 13
Fig. 7. Eliminating (M2) moves: (a) an (M2) move; (b) avoiding the (M2) by moving along another path.
The path where the pebbles move is indicated by doubled lines.
Fig. 8. Eliminating (M2) moves: (a) the first step to move the black pebble along the doubled path is
(M2); (b) avoiding the (M2) and simplifying the path.
Since no edges change color in the beginning of the new sequence, we have eliminated
the (M2) move. Because our construction does not change any of the edges involved in the
remaining tail of the original sequence, the part of the original path that is left in the new
sequence will still be a simple path in H, meeting our initial hypothesis.
The rest of the lemma follows by induction.
Together Lemma 14 and Lemma 15 prove the following.
Lemma 16. If G is a pebble-game graph, then G has a canonical pebble game construction.
Using canonical pebble game constructions, we can identify the tight pebble-game graphs
with maps-and-trees and `Tk graphs.
Theorem 3 (Main Theorem (Lower Range): Maps-and-trees coincide with pebble-game
graphs). Let 0 ≤ ` ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, `)-
maps-and-trees.
Proof. As observed above, a maps-and-trees decomposition is a special case of the pebble game
decomposition. Applying Theorem 2, we see that any maps-and-trees must be a pebble-game
graph.
For the reverse direction, consider a canonical pebble game construction of a tight graph.
From Lemma 8, we see that there are ` pebbles left on G at the end of the construction. The
definition of the canonical add-edge move implies that there must be at least one pebble of
each ci for i = 1,2, . . . , `. It follows that there is exactly one of each of these colors. By Lemma
12, each of these pebbles is the root of a monochromatic tree-piece with n− 1 edges, yielding
the required ` edge-disjoint spanning trees.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let `≤ k. A graph
G is tight if and only if it has a (k, `)-maps-and-trees decomposition.
We next consider the decompositions induced by canonical pebble game constructions when
`≥ k +1.
Theorem 4 (Main Theorem (Upper Range): Proper Trees-and-trees coincide with peb-
ble-game graphs). Let k≤ `≤ 2k−1. A graph G is a tight pebble-game graph if and only if it
is a proper `Tk with kn− ` edges.
Proof. As observed above, a proper `Tk decomposition must be sparse. What we need to show
is that a canonical pebble game construction of a tight graph produces a proper `Tk .
By Theorem 2 and Lemma 16, we already have the condition on tree-pieces and the decomposition into ` edge-disjoint trees. Finally, an application of (I4) shows that every vertex must be in exactly k of the trees, as required.
Corollary 6 (Crapo [2], Haas [7]). Let k ≤ `≤ 2k−1. A graph G is tight if and only if it is a
proper `Tk .
8. Pebble game algorithms for finding decompositions
A naïve implementation of the constructions in the previous section leads to an algorithm requiring Θ(n^2) time to collect each pebble in a canonical construction: in the worst case Θ(n) applications of the construction in Lemma 15 requiring Θ(n) time each, giving a total running time of Θ(n^3) for the decomposition problem.
In this section, we describe algorithms for the decomposition problem that run in time
O(n^2). We begin with the overall structure of the algorithm.
Algorithm 17 (The canonical pebble game with colors).
Input: A graph G.
Output: A pebble-game graph H.
Method:
– Set V (H) = V (G) and place one pebble of each color on the vertices of H.
– For each edge vw ∈ E(G) try to collect at least `+1 pebbles on v and w using pebble-slide
moves as described by Lemma 15.
– If at least `+1 pebbles can be collected, add vw to H using an add-edge move as in Lemma
14, otherwise discard vw.
– Finally, return H, and the locations of the pebbles.
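To make the control flow of this outer loop concrete, here is a minimal runnable sketch for the plain (k, `)-pebble game, with the color bookkeeping omitted and loops not handled; all function and variable names are ours, not the paper's.

```python
from collections import defaultdict

def pebble_game(n, edges, k, l):
    """Sketch of the Algorithm 17 loop without color bookkeeping: accept
    each edge for which l+1 pebbles can be collected on its endpoints."""
    peb = [k] * n               # pebbles currently on each vertex
    out = defaultdict(list)     # out[v]: heads of edges oriented out of v

    def find_pebble(src, blocked):
        # DFS from src for a vertex holding a pebble (pebbles on `blocked`
        # vertices are off-limits); reversing the path found carries one
        # pebble back to src via pebble-slide moves.
        parent = {src: None}
        stack = [src]
        while stack:
            u = stack.pop()
            for x in out[u]:
                if x in parent:
                    continue
                parent[x] = u
                if peb[x] > 0 and x not in blocked:
                    peb[x] -= 1
                    while parent[x] is not None:   # reverse the path
                        p = parent[x]
                        out[x].append(p)
                        out[p].remove(x)
                        x = p
                    peb[src] += 1
                    return True
                stack.append(x)
        return False

    accepted = []
    for v, w in edges:
        while peb[v] + peb[w] < l + 1:             # collect pebbles
            if not (find_pebble(v, {w}) or find_pebble(w, {v})):
                break
        if peb[v] + peb[w] >= l + 1:               # add-edge move
            if peb[v] == 0:
                v, w = w, v                        # orient from a pebbled endpoint
            peb[v] -= 1
            out[v].append(w)
            accepted.append((v, w))
    return accepted
```

For example, on K4 with (k, l) = (2, 3) this accepts 5 of the 6 edges, matching the count kn − l = 5 for tight graphs.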
Correctness. Theorem 1 and the result from [24] that the sparse graphs are the independent
sets of a matroid show that H is a maximum sized sparse subgraph of G. Since the construction
found is canonical, the main theorem shows that the coloring of the edges in H gives a maps-
and-trees or proper `Tk decomposition.
Complexity. We start by observing that the running time of Algorithm 17 is the time taken to
process O(n) edges added to H and O(m) edges not added to H. We first consider the cost of an
edge of G that is added to H.
Each of the pebble game moves can be implemented in constant time. What remains is to
describe an efficient way to find and move the pebbles. We use the following algorithm as a
subroutine of Algorithm 17 to do this.
Algorithm 18 (Finding a canonical path to a pebble).
Input: Vertices v and w, and a pebble game configuration on a directed graph H.
Output: ‘yes’ if a pebble was found, and ‘no’ otherwise. The configuration of H is updated.
Method:
– Start by doing a depth-first search from v in H. If no pebble is found on a vertex other than
w, stop and return ‘no.’
– Otherwise a pebble was found. We now have a path v = v1,e1, . . . ,ep−1,vp = u, where the vi
are vertices and ei is the edge vivi+1. Let c[ei] be the color of the pebble on ei. We will use
the array c[] to keep track of the colors of pebbles on vertices and edges after we move them
and the array s[] to sketch out a canonical path from v to u by finding a successor for each
edge.
– Set s[u] = ‘end′ and set c[u] to the color of an arbitrary pebble on u. We walk on the path in
reverse order: vp,ep−1,ep−2, . . . ,e1,v1. For each i, check to see if c[vi] is set; if so, go on to
the next i. Otherwise, check to see if c[vi+1] = c[ei].
– If it is, set s[vi] = ei and set c[vi] = c[ei], and go on to the next edge.
– Otherwise, c[vi+1] ≠ c[ei]; try to find a monochromatic path in color c[vi+1] from vi to vi+1. If
a vertex x is encountered for which c[x] is set, we have a path vi = x1, f1,x2, . . . , fq−1,xq = x
that is monochromatic in the color of the edges; set c[xi] = c[ fi] and s[xi] = fi for i =
1,2, . . . ,q−1. If c[x] = c[ fq−1], stop. Otherwise, recursively check that there is not a monochro-
matic c[x] path from xq−1 to x using this same procedure.
– Finally, slide pebbles along the path from the original endpoints v to u specified by the
successor array s[v], s[s[v]], . . .
The correctness of Algorithm 18 comes from the fact that it is implementing the shortcut
construction. Efficiency comes from the fact that instead of potentially moving the pebble back
and forth, Algorithm 18 pre-computes a canonical path crossing each edge of H at most three
times: once in the initial depth-first search, and twice while converting the initial path to a
canonical one. It follows that each accepted edge takes O(n) time, for a total of O(n^2) time
spent processing edges in H.
Although we have not discussed this explicitly, for the algorithm to be efficient we need to
maintain components as in [12]. After each accepted edge, the components of H can be updated
in time O(n). Finally, the results of [12, 13] show that the rejected edges take an amortized O(1)
time each.
Summarizing, we have shown that the canonical pebble game with colors solves the decomposition problem in time O(n^2).
9. An important special case: Rigidity in dimension 2 and slider-pinning
In this short section we present a new application for the special case of practical importance,
k = 2, ` = 3. As discussed in the introduction, Laman’s theorem [11] characterizes minimally
rigid graphs as the (2,3)-tight graphs. In recent work on slider pinning, developed after the
current paper was submitted, we introduced the slider-pinning model of rigidity [15, 20]. Com-
binatorially, we model the bar-slider frameworks as simple graphs together with some loops
placed on their vertices in such a way that there are no more than 2 loops per vertex, one of each
color.
We characterize the minimally rigid bar-slider graphs [20] as graphs that are:
1. (2,3)-sparse for subgraphs containing no loops.
2. (2,0)-tight when loops are included.
We call these graphs (2,0,3)-graded-tight, and they are a special case of the graded-sparse
graphs studied in our paper [14].
The connection with the pebble games in this paper is the following.
Corollary 19 (Pebble games and slider-pinning). In any (2,3)-pebble game graph, if we
replace pebbles by loops, we obtain a (2,0,3)-graded-tight graph.
Proof. Follows from invariant (I3) of Lemma 7.
In [15], we study a special case of slider pinning where every slider is either vertical or
horizontal. We model the sliders as pre-colored loops, with the color indicating x or y direction.
For this axis-parallel slider case, the minimally rigid graphs are characterized by:
1. (2,3)-sparse for subgraphs containing no loops.
2. Admit a 2-coloring of the edges so that each color is a forest (i.e., has no cycles), and each
monochromatic tree spans exactly one loop of its color.
This also has an interpretation in terms of colored pebble games.
Corollary 20 (The pebble game with colors and slider-pinning). In any canonical (2,3)-
pebble-game-with-colors graph, if we replace pebbles by loops of the same color, we obtain the
graph of a minimally pinned axis-parallel bar-slider framework.
Proof. Follows from Theorem 4 and Lemma 12.
10. Conclusions and open problems
We presented a new characterization of (k, `)-sparse graphs, the pebble game with colors, and
used it to give an efficient algorithm for finding decompositions of sparse graphs into edge-
disjoint trees. Our algorithm finds such sparsity-certifying decompositions in the upper range
and runs in time O(n^2), which is as fast as the algorithms for recognizing sparse graphs in the
upper range from [12].
We also used the pebble game with colors to describe a new sparsity-certifying decomposi-
tion that applies to the entire matroidal range of sparse graphs.
We defined and studied a class of canonical pebble game constructions that correspond to
either a maps-and-trees or proper `Tk decomposition. This gives a new proof of the Tutte-Nash-
Williams arboricity theorem and a unified proof of the previously studied decomposition certificates of sparsity. Canonical pebble game constructions also show the relationship between
the `+1 pebble condition, which applies to the upper range of `, and matroid union augmenting
paths, which do not apply in the upper range.
Algorithmic consequences and open problems. In [6], Gabow and Westermann give an O(n^{3/2})
algorithm for recognizing sparse graphs in the lower range and extracting sparse subgraphs from
dense ones. Their technique is based on efficiently finding matroid union augmenting paths,
which extend a maps-and-trees decomposition. The O(n^{3/2}) algorithm uses two subroutines to
find augmenting paths: cyclic scanning, which finds augmenting paths one at a time, and batch
scanning, which finds groups of disjoint augmenting paths.
We observe that Algorithm 17 can be used to replace cyclic scanning in Gabow and Westermann’s algorithm without changing the running time. The data structures used in the implementation of the pebble game, detailed in [12, 13], are simpler and easier to implement than those
used to support cyclic scanning.
The two major open algorithmic problems related to the pebble game are then:
Problem 1. Develop a pebble game algorithm with the properties of batch scanning and obtain
an implementable O(n^{3/2}) algorithm for the lower range.
Problem 2. Extend batch scanning to the `+1 pebble condition and derive an O(n^{3/2}) pebble
game algorithm for the upper range.
In particular, it would be of practical importance to find an implementable O(n^{3/2}) algorithm
for decompositions into edge-disjoint spanning trees.
References
1. Berg, A.R., Jordán, T.: Algorithms for graph rigidity and scene analysis. In: Proc. 11th
European Symposium on Algorithms (ESA ’03), LNCS, vol. 2832, pp. 78–89. (2003)
2. Crapo, H.: On the generic rigidity of plane frameworks. Tech. Rep. 1278, Institut de
recherche d’informatique et d’automatique (1988)
3. Edmonds, J.: Minimum partition of a matroid into independent sets. J. Res. Nat. Bur.
Standards Sect. B 69B, 67–72 (1965)
4. Edmonds, J.: Submodular functions, matroids, and certain polyhedra. In: Combinatorial
Optimization—Eureka, You Shrink!, no. 2570 in LNCS, pp. 11–26. Springer (2003)
5. Gabow, H.N.: A matroid approach to finding edge connectivity and packing arborescences.
Journal of Computer and System Sciences 50, 259–273 (1995)
6. Gabow, H.N., Westermann, H.H.: Forests, frames, and games: Algorithms for matroid sums
and applications. Algorithmica 7(1), 465–497 (1992)
7. Haas, R.: Characterizations of arboricity of graphs. Ars Combinatorica 63, 129–137 (2002)
8. Haas, R., Lee, A., Streinu, I., Theran, L.: Characterizing sparse graphs by map decompo-
sitions. Journal of Combinatorial Mathematics and Combinatorial Computing 62, 3–11
(2007)
9. Hendrickson, B.: Conditions for unique graph realizations. SIAM Journal on Computing
21(1), 65–84 (1992)
10. Jacobs, D.J., Hendrickson, B.: An algorithm for two-dimensional rigidity percolation: the
pebble game. Journal of Computational Physics 137, 346–365 (1997)
11. Laman, G.: On graphs and rigidity of plane skeletal structures. Journal of Engineering
Mathematics 4, 331–340 (1970)
12. Lee, A., Streinu, I.: Pebble game algorithms and sparse graphs. Discrete Mathematics
308(8), 1425–1437 (2008)
13. Lee, A., Streinu, I., Theran, L.: Finding and maintaining rigid components. In: Proc. Cana-
dian Conference of Computational Geometry. Windsor, Ontario (2005). http://cccg.
cs.uwindsor.ca/papers/72.pdf
14. Lee, A., Streinu, I., Theran, L.: Graded sparse graphs and matroids. Journal of Universal
Computer Science 13(10) (2007)
15. Lee, A., Streinu, I., Theran, L.: The slider-pinning problem. In: Proceedings of the 19th
Canadian Conference on Computational Geometry (CCCG’07) (2007)
16. Lovász, L.: Combinatorial Problems and Exercises. Akademiai Kiado and North-Holland,
Amsterdam (1979)
17. Nash-Williams, C.S.A.: Decomposition of finite graphs into forests. Journal of the London
Mathematical Society 39, 12 (1964)
18. Oxley, J.G.: Matroid theory. The Clarendon Press, Oxford University Press, New York
(1992)
19. Roskind, J., Tarjan, R.E.: A note on finding minimum cost edge disjoint spanning trees.
Mathematics of Operations Research 10(4), 701–708 (1985)
20. Streinu, I., Theran, L.: Combinatorial genericity and minimal rigidity. In: SCG ’08: Pro-
ceedings of the twenty-fourth annual Symposium on Computational Geometry, pp. 365–
374. ACM, New York, NY, USA (2008).
21. Tay, T.S.: Rigidity of multigraphs I: linking rigid bodies in n-space. Journal of Combinato-
rial Theory, Series B 26, 95–112 (1984)
22. Tay, T.S.: A new proof of Laman’s theorem. Graphs and Combinatorics 9, 365–370 (1993)
23. Tutte, W.T.: On the problem of decomposing a graph into n connected factors. Journal of
the London Mathematical Society 142, 221–230 (1961)
24. Whiteley, W.: The union of matroids and the rigidity of frameworks. SIAM Journal on
Discrete Mathematics 1(2), 237–255 (1988)
1. Introduction and preliminaries
The focus of this paper is decompositions of (k, `)-sparse graphs into edge-disjoint subgraphs
that certify sparsity. We use graph to mean a multigraph, possibly with loops. We say that a
graph is (k, `)-sparse if no subset of n′ vertices spans more than kn′− ` edges in the graph; a
(k, `)-sparse graph with kn′− ` edges is (k, `)-tight. We call the range k ≤ `≤ 2k−1 the upper
range of sparse graphs and 0≤ `≤ k the lower range.
In this paper, we present efficient algorithms for finding decompositions that certify sparsity
in the upper range of `. Our algorithms also apply in the lower range, which was already ad-
dressed by [3, 4, 5, 6, 19]. A decomposition certifies the sparsity of a graph if the sparse graphs
and graphs admitting the decomposition coincide.
Our algorithms are based on a new characterization of sparse graphs, which we call the
pebble game with colors. The pebble game with colors is a simple graph construction rule that
produces a sparse graph along with a sparsity-certifying decomposition.
We define and study a canonical class of pebble game constructions, which correspond to
previously studied decompositions of sparse graphs into edge disjoint trees. Our results provide
a unifying framework for all the previously known special cases, including Nash-Williams-
Tutte and [7, 24]. Indeed, in the lower range, canonical pebble game constructions capture the
properties of the augmenting paths used in matroid union and intersection algorithms [5, 6].
Since the sparse graphs in the upper range are not known to be unions or intersections of the
matroids for which there are efficient augmenting path algorithms, these do not easily apply in
∗ Research of both authors funded by the NSF under grants NSF CCF-0430990 and NSF-DARPA CARGO
CCR-0310661 to the first author.
– Sparse graph G: Every non-empty subgraph on n′ vertices has ≤ kn′− ` edges
– Tight graph G: G = (V,E) is sparse and |V |= n, |E|= kn− `
– Block H in G: G is sparse, and H is a tight subgraph
– Component H of G: G is sparse and H is a maximal block
– Map-graph: Graph that admits an out-degree-exactly-one orientation
– (k, `)-maps-and-trees: Edge-disjoint union of ` trees and (k− `) map-graphs
– `Tk: Union of ` trees, each vertex is in exactly k of them
– Set of tree-pieces of an `Tk induced on V ′ ⊂V: Pieces of trees in the `Tk spanned by E(V ′)
– Proper `Tk: Every V ′ ⊂V contains ≥ ` pieces of trees from the `Tk
Table 1. Sparse graph and decomposition terminology used in this paper.
the upper range. Pebble game with colors constructions may thus be considered a strengthening
of augmenting paths to the upper range of matroidal sparse graphs.
1.1. Sparse graphs
A graph is (k, `)-sparse if for any non-empty subgraph with m′ edges and n′ vertices, m′ ≤
kn′− `. We observe that this condition implies that 0 ≤ ` ≤ 2k− 1, and from now on in this
paper we will make this assumption. A sparse graph that has n vertices and exactly kn−` edges
is called tight.
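A brute-force check of this definition, feasible only for small examples, can be written directly from the counts. The code below is our own illustration; note that it applies the count only to vertex subsets spanning at least one edge, the usual convention when ` > k.

```python
from itertools import combinations

def is_sparse(n, edges, k, l):
    """Brute-force (k, l)-sparsity test over vertices 0..n-1 (exponential;
    illustration only). The count m' <= k*n' - l is enforced for every
    vertex subset spanning at least one edge."""
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            m = sum(1 for v, w in edges if v in s and w in s)
            if m > 0 and m > k * size - l:
                return False
    return True

def is_tight(n, edges, k, l):
    # tight = sparse with exactly kn - l edges
    return is_sparse(n, edges, k, l) and len(edges) == k * n - l
```

For instance, the triangle is (2,3)-tight (minimally rigid in the plane), while K4 has 6 > 2·4−3 edges and so is not (2,3)-sparse, though it is (2,2)-tight.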
For a graph G = (V,E), and V ′ ⊂ V , we use the notation span(V ′) for the number of edges
in the subgraph induced by V ′. In a directed graph, out(V ′) is the number of edges with the tail
in V ′ and the head in V −V ′; for a subgraph induced by V ′, we call such an edge an out-edge.
There are two important types of subgraphs of sparse graphs. A block is a tight subgraph of
a sparse graph. A component is a maximal block.
Table 1 summarizes the sparse graph terminology used in this paper.
1.2. Sparsity-certifying decompositions
A k-arborescence is a graph that admits a decomposition into k edge-disjoint spanning trees.
Figure 1(a) shows an example of a 3-arborescence. The k-arborescent graphs are described
by the well-known theorems of Tutte [23] and Nash-Williams [17] as exactly the (k,k)-tight
graphs.
A map-graph is a graph that admits an orientation such that the out-degree of each vertex is
exactly one. A k-map-graph is a graph that admits a decomposition into k edge-disjoint map-graphs. Figure 1(b) shows an example of a 2-map-graph; the edges are oriented in one possible
configuration certifying that each color forms a map-graph. Map-graphs may be equivalently
defined (see, e.g., [18]) as having exactly one cycle per connected component.1
A (k, `)-maps-and-trees is a graph that admits a decomposition into k− ` edge-disjoint
map-graphs and ` spanning trees.
Another characterization of map-graphs, which we will use extensively in this paper, is as
the (1,0)-tight graphs [8, 24]. The k-map-graphs are evidently (k,0)-tight, and [8, 24] show that
the converse holds as well.
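These characterizations are easy to test: since a connected graph with exactly one cycle has as many edges as vertices, it suffices to compare the two counts per connected component. The following sketch (our own code, using union-find) does exactly that.

```python
def is_map_graph(n, edges):
    """Test the map-graph property via the equivalent condition that every
    connected component spans exactly as many edges as vertices (i.e. one
    cycle per component). Loops and multiple edges are allowed."""
    parent = list(range(n))

    def find(v):                      # union-find with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for v, w in edges:
        parent[find(v)] = find(w)

    n_vertices, n_edges = {}, {}
    for v in range(n):
        r = find(v)
        n_vertices[r] = n_vertices.get(r, 0) + 1
        n_edges.setdefault(r, 0)
    for v, w in edges:
        n_edges[find(v)] += 1
    return all(n_edges[r] == n_vertices[r] for r in n_vertices)
```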
1 Our terminology follows Lovász in [16]. In the matroid literature map-graphs are sometimes known as bases
of the bicycle matroid or spanning pseudoforests.
Fig. 1. Examples of sparsity-certifying decompositions: (a) a 3-arborescence; (b) a 2-map-graph; (c) a
(2,1)-maps-and-trees. Edges with the same line style belong to the same subgraph. The 2-map-graph is
shown with a certifying orientation.
An `Tk is a decomposition into ` edge-disjoint (not necessarily spanning) trees such that each
vertex is in exactly k of them. Figure 2(a) shows an example of a 3T2.
Given a subgraph G′ of a `Tk graph G, the set of tree-pieces in G′ is the collection of the
components of the trees in G induced by G′ (since G′ is a subgraph each tree may contribute
multiple pieces to the set of tree-pieces in G′). We observe that these tree-pieces may come
from the same tree or be single-vertex “empty trees.” It is also helpful to note that the definition
of a tree-piece is relative to a specific subgraph. An `Tk decomposition is proper if the set of
tree-pieces in any subgraph G′ has size at least `.
Figure 2(a) shows a graph with a 3T2 decomposition; we note that one of the trees is an
isolated vertex in the bottom-right corner. The subgraph in Figure 2(b) has three black tree-
pieces and one gray tree-piece: an isolated vertex at the top-right corner, and two single edges.
These count as three tree-pieces, even though they come from the same black tree when the
whole graph is considered. Figure 2(c) shows another subgraph; in this case there are three
gray tree-pieces and one black one.
Table 1 contains the decomposition terminology used in this paper.
The decomposition problem. We define the decomposition problem for sparse graphs as taking a graph as its input and producing as output a decomposition that can be used to certify sparsity. In this paper, we will study three kinds of outputs: maps-and-trees; proper `Tk decompositions;
and the pebble-game-with-colors decomposition, which is defined in the next section.
2. Historical background
The well-known theorems of Tutte [23] and Nash-Williams [17] relate the (k,k)-tight graphs to
the existence of decompositions into edge-disjoint spanning trees. Taking a matroidal viewpoint,
Fig. 2. (a) A graph with a 3T2 decomposition; one of the three trees is a single vertex in the bottom right
corner. (b) The highlighted subgraph inside the dashed contour has three black tree-pieces and one gray
tree-piece. (c) The highlighted subgraph inside the dashed contour has three gray tree-pieces (one is a
single vertex) and one black tree-piece.
Edmonds [3, 4] gave another proof of this result using matroid unions. The equivalence of maps-
and-trees graphs and tight graphs in the lower range is shown using matroid unions in [24], and
matroid augmenting paths are the basis of the algorithms for the lower range of [5, 6, 19].
In rigidity theory a foundational theorem of Laman [11] shows that (2,3)-tight (Laman)
graphs correspond to generically minimally rigid bar-and-joint frameworks in the plane. Tay
[21] proved an analogous result for body-bar frameworks in any dimension using (k,k)-tight
graphs. Rigidity by counts motivated interest in the upper range, and Crapo [2] proved the
equivalence of Laman graphs and proper 3T2 graphs. Tay [22] used this condition to give a
direct proof of Laman’s theorem and generalized the 3T2 condition to all `Tk for k≤ `≤ 2k−1.
Haas [7] studied `Tk decompositions in detail and proved the equivalence of tight graphs and
proper `Tk graphs for the general upper range. We observe that aside from our new pebble-
game-with-colors decomposition, all the combinatorial characterizations of the upper range of
sparse graphs, including the counts, have a geometric interpretation [11, 21, 22, 24].
A pebble game algorithm was first proposed in [10] as an elegant alternative to Hendrickson’s Laman graph algorithms [9]. Berg and Jordán [1] provided the formal analysis of the
pebble game of [10] and introduced the idea of playing the game on a directed graph. Lee and
Streinu [12] generalized the pebble game to the entire range of parameters 0≤ `≤ 2k−1, and
left as an open problem the use of the pebble game to find sparsity-certifying decompositions.
3. The pebble game with colors
Our pebble game with colors is a set of rules for constructing graphs indexed by nonnegative
integers k and `. We will use the pebble game with colors as the basis of an efficient algorithm
for the decomposition problem later in this paper. Since the phrase “with colors” is necessary
only for comparison to [12], we will omit it in the rest of the paper when the context is clear.
We now present the pebble game with colors. The game is played by a single player on a
fixed finite set of vertices. The player makes a finite sequence of moves; a move consists in the
addition and/or orientation of an edge. At any moment of time, the state of the game is captured
by a directed graph H, with colored pebbles on vertices and edges. The edges of H are colored
by the pebbles on them. While playing the pebble game all edges are directed, and we use the
notation vw to indicate a directed edge from v to w.
We describe the pebble game with colors in terms of its initial configuration and the allowed
moves.
Fig. 3. Examples of pebble game with colors moves: (a) add-edge. (b) pebble-slide. Pebbles on vertices
are shown as black or gray dots. Edges are colored with the color of the pebble on them.
Initialization: In the beginning of the pebble game, H has n vertices and no edges. We start
by placing k pebbles on each vertex of H, one of each color ci, for i = 1,2, . . . ,k.
Add-edge-with-colors: Let v and w be vertices with at least `+1 pebbles on them. Assume
(w.l.o.g.) that v has at least one pebble on it. Pick up a pebble from v, add the oriented edge vw
to E(H) and put the pebble picked up from v on the new edge.
Figure 3(a) shows examples of the add-edge move.
Pebble-slide: Let w be a vertex with a pebble p on it, and let vw be an edge in H. Replace
vw with wv in E(H); put the pebble that was on vw on v; and put p on wv.
Note that the color of an edge can change with a pebble-slide move. Figure 3(b) shows
examples. The convention in these figures, and throughout this paper, is that pebbles on vertices
are represented as colored dots, and that edges are shown in the color of the pebble on them.
From the definition of the pebble-slide move, it is easy to see that a particular pebble is
always either on the vertex where it started or on an edge that has this vertex as the tail. However,
when making a sequence of pebble-slide moves that reverse the orientation of a path in H, it is
sometimes convenient to think of this path reversal sequence as bringing a pebble from the end
of the path to the beginning.
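The two moves require only a small amount of state. The class below is our own illustrative encoding (vertex pebbles as per-color counts, edge colors in a dictionary), not the data structure used in the paper's implementation.

```python
class ColoredPebbleGame:
    """Minimal state for the (k, l)-pebble game with colors (a sketch)."""

    def __init__(self, n, k, l):
        self.k, self.l = k, l
        # initialization: one pebble of each color 1..k on every vertex
        self.pebbles = [{c: 1 for c in range(1, k + 1)} for _ in range(n)]
        self.out = {v: set() for v in range(n)}   # directed edges of H
        self.edge_color = {}                      # pebble covering each edge

    def peb(self, v):
        return sum(self.pebbles[v].values())

    def add_edge(self, v, w, color):
        # add-edge-with-colors: needs l+1 pebbles on {v, w}; a pebble of
        # `color` is picked up from v and placed on the new edge vw
        assert self.peb(v) + self.peb(w) >= self.l + 1
        assert self.pebbles[v][color] > 0
        self.pebbles[v][color] -= 1
        self.out[v].add(w)
        self.edge_color[(v, w)] = color

    def pebble_slide(self, v, w, color):
        # pebble-slide: a pebble of `color` on w moves onto the reversed
        # edge wv, and the pebble that covered vw returns to v (so the
        # edge's color can change, as noted above)
        assert w in self.out[v] and self.pebbles[w][color] > 0
        self.pebbles[w][color] -= 1
        old = self.edge_color.pop((v, w))
        self.pebbles[v][old] += 1
        self.out[v].remove(w)
        self.out[w].add(v)
        self.edge_color[(w, v)] = color
```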
The output of playing the pebble game is its complete configuration.
Output: At the end of the game, we obtain the directed graph H, along with the location
and colors of the pebbles. Observe that since each edge has exactly one pebble on it, the pebble
game configuration colors the edges.
We say that the underlying undirected graph G of H is constructed by the (k, `)-pebble game
or that H is a pebble-game graph.
Since each edge of H has exactly one pebble on it, the pebble game’s configuration partitions
the edges of H, and thus G, into k different colors. We call this decomposition of H a pebble-
game-with-colors decomposition. Figure 4(a) shows an example of a (2,2)-tight graph with a
pebble-game decomposition.
Let G = (V,E) be a pebble-game graph with the coloring induced by the pebbles on the edges,
and let G′ be a subgraph of G. Then the coloring of G induces a set of monochromatic con-
Fig. 4. A (2,2)-tight graph with one possible pebble-game decomposition. The edges are oriented to
show (1,0)-sparsity for each color. (a) The graph K4 with a pebble-game decomposition. There is an
empty black tree at the center vertex and a gray spanning tree. (b) The highlighted subgraph has two
black trees and a gray tree; the black edges are part of a larger cycle but contribute a tree to the subgraph.
(c) The highlighted subgraph (with a light gray background) has three empty gray trees; the black edges
contain a cycle and do not contribute a piece of tree to the subgraph.
– span(V ′): Number of edges spanned in H by V ′ ⊂V ; i.e. |EH(V ′)|
– peb(V ′): Number of pebbles on V ′ ⊂V
– out(V ′): Number of edges vw in H with v ∈V ′ and w ∈V −V ′
– pebi(v): Number of pebbles of color ci on v ∈V
– outi(v): Number of edges vw colored ci for v ∈V
Table 2. Pebble game notation used in this paper.
nected subgraphs of G′ (there may be more than one of the same color). Such a monochromatic
subgraph is called a map-graph-piece of G′ if it contains a cycle (in G′) and a tree-piece of G′
otherwise. The set of tree-pieces of G′ is the collection of tree-pieces induced by G′. As with
the corresponding definition for `Tk s, the set of tree-pieces is defined relative to a specific sub-
graph; in particular a tree-piece may be part of a larger cycle that includes edges not spanned
by G′.
The properties of pebble-game decompositions are studied in Section 6, and Theorem 2
shows that each color must be (1,0)-sparse. The orientation of the edges in Figure 4(a) shows
this.
For example, Figure 4(a) shows a (2,2)-tight graph with one possible pebble-game decomposition. The whole graph contains a gray tree-piece and a black tree-piece that is an isolated
vertex. The subgraph in Figure 4(b) has a black tree and a gray tree, with the edges of the black
tree coming from a cycle in the larger graph. In Figure 4(c), however, the black cycle does not
contribute a tree-piece. All three tree-pieces in this subgraph are single-vertex gray trees.
In the following discussion, we use the notation peb(v) for the number of pebbles on v and
pebi(v) to indicate the number of pebbles of color ci on v.
Table 2 lists the pebble game notation used in this paper.
4. Our Results
We describe our results in this section. The rest of the paper provides the proofs.
Sparsity-certifying Graph Decompositions 7
Our first result is a strengthening of the pebble games of [12] to include colors. It says
that sparse graphs are exactly pebble game graphs. Recall that from now on, all pebble games
discussed in this paper are our pebble game with colors unless noted explicitly.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse
with 0≤ `≤ 2k−1 if and only if G is a pebble-game graph.
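The counting condition in Theorem 1 can be made concrete with a brute-force subset check straight from the definition. The sketch below is ours and is exponential in the number of vertices (the pebble game itself is the efficient test); following the usual convention, the count is applied only to subsets spanning at least one edge, so that single vertices do not trivially fail in the upper range.

```python
from itertools import combinations

def is_sparse(n, edges, k, ell):
    """Brute-force (k, ell)-sparsity: no subset of n' vertices spanning at
    least one edge spans more than k*n' - ell edges.  Exponential in n."""
    for size in range(1, n + 1):
        for sub in combinations(range(n), size):
            s = set(sub)
            spanned = sum(1 for v, w in edges if v in s and w in s)
            if spanned > 0 and spanned > k * size - ell:
                return False
    return True

def is_tight(n, edges, k, ell):
    """(k, ell)-tight: sparse with exactly k*n - ell edges."""
    return is_sparse(n, edges, k, ell) and len(edges) == k * n - ell
```

For example, the triangle is (2,3)-tight, while K4 with its six edges fails the count on the full vertex set; removing any one edge of K4 leaves a (2,3)-tight graph.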
Next we consider pebble-game decompositions, showing that they are a generalization of
proper `Tk decompositions that extend to the entire matroidal range of sparse graphs.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game
graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each
is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse
graphs in the decomposition.
The (1,0)-sparse subgraphs in the statement of Theorem 2 are the colors of the pebbles; thus
Theorem 2 gives a characterization of the pebble-game-with-colors decompositions obtained
by playing the pebble game defined in the previous section. Notice the similarity between the
requirement that the set of tree-pieces have size at least ` in Theorem 2 and the definition of a
proper `Tk .
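The (1,0)-sparsity condition on each color is easy to verify: a graph is (1,0)-sparse exactly when every connected component contains at most one cycle, i.e., spans at most as many edges as vertices. A union-find pass (our sketch, not the paper's data structure) checks this in near-linear time.

```python
def is_one_zero_sparse(n, edges):
    """(1,0)-sparsity check for a color class on vertices 0..n-1: every
    connected component spans at most one cycle.  Union-find with path
    halving; `extra` counts edges beyond a spanning tree per component."""
    parent = list(range(n))
    extra = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v, w in edges:
        rv, rw = find(v), find(w)
        if rv == rw:
            extra[rv] += 1            # this edge closes a cycle
        else:
            parent[rw] = rv
            extra[rv] += extra[rw]
        if extra[find(v)] > 1:        # a second cycle: not (1,0)-sparse
            return False
    return True
```

A single cycle per component passes; attaching a second cycle to the same component fails.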
Our next results show that for any pebble-game graph, we can specialize its pebble game
construction to generate a decomposition that is a maps-and-trees or proper `Tk . We call these
specialized pebble game constructions canonical, and using canonical pebble game construc-
tions, we obtain new direct proofs of existing arboricity results.
We observe from Theorem 2 that maps-and-trees are a special case of the pebble-game
decomposition: both spanning trees and spanning map-graphs are (1,0)-sparse, and each of the spanning
trees contributes at least one piece of tree to every subgraph.
The case of proper `Tk graphs is more subtle; if each color in a pebble-game decomposition
is a forest, then we have found a proper `Tk , but this class is a subset of all possible proper
`Tk decompositions of a tight graph. We show that this class of proper `Tk decompositions is
sufficient to certify sparsity.
We now state the main theorem for the upper and lower range.
Theorem 3 (Main Theorem (Lower Range): Maps-and-trees coincide with pebble-game
graphs). Let 0 ≤ ` ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, `)-
maps-and-trees.
Theorem 4 (Main Theorem (Upper Range): Proper `Tk graphs coincide with pebble-game
graphs). Let k≤ `≤ 2k−1. A graph G is a tight pebble-game graph if and only if it is a proper
`Tk with kn− ` edges.
As corollaries, we obtain the existing decomposition results for sparse graphs.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let `≤ k. A graph
G is tight if and only if it has a (k, `)-maps-and-trees decomposition.
Corollary 6 (Crapo [2], Haas [7]). Let k ≤ `≤ 2k−1. A graph G is tight if and only if it is a
proper `Tk .
Efficiently finding canonical pebble game constructions. The proofs of Theorem 3 and Theo-
rem 4 lead to an obvious algorithm with O(n^3) running time for the decomposition problem.
Our last result improves on this, showing that a canonical pebble game construction, and thus
a maps-and-trees or proper `Tk decomposition, can be found using a pebble game algorithm in
O(n^2) time and space.
These time and space bounds mean that our algorithm can be combined with those of [12]
without any change in complexity.
5. Pebble game graphs
In this section we prove Theorem 1, a strengthening of results from [12] to the pebble game
with colors. Since many of the relevant properties of the pebble game with colors carry over
directly from the pebble games of [12], we refer the reader there for the proofs.
We begin by establishing some invariants that hold during the execution of the pebble game.
Lemma 7 (Pebble game invariants). During the execution of the pebble game, the following
invariants are maintained in H:
(I1) There are at least ` pebbles on V . [12]
(I2) For each vertex v, span(v)+out(v)+peb(v) = k. [12]
(I3) For each V ′ ⊂V , span(V ′)+out(V ′)+peb(V ′) = kn′. [12]
(I4) For every vertex v ∈V and every color ci, outi(v)+pebi(v) = 1.
(I5) Every maximal path consisting only of edges with color ci ends in either the first vertex with
a pebble of color ci or a cycle.
Proof. (I1), (I2), and (I3) come directly from [12].
(I4) This invariant clearly holds at the initialization phase of the pebble game with colors.
That add-edge and pebble-slide moves preserve (I4) is clear from inspection.
(I5) By (I4), a monochromatic path of edges is forced to end only at a vertex with a pebble of
the same color on it. If there is no pebble of that color reachable, then the path must eventually
visit some vertex twice.
From these invariants, we can show that the pebble game constructible graphs are sparse.
Lemma 8 (Pebble-game graphs are sparse [12]). Let H be a graph constructed with the
pebble game. Then H is sparse. If there are exactly ` pebbles on V (H), then H is tight.
The main step in proving that every sparse graph is a pebble-game graph is the following.
Recall that by bringing a pebble to v we mean reorienting H with pebble-slide moves to reduce
the out degree of v by one.
Lemma 9 (The `+1 pebble condition [12]). Let vw be an edge such that H + vw is sparse. If
peb({v,w}) < `+1, then a pebble not on {v,w} can be brought to either v or w.
It follows that any sparse graph has a pebble game construction.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse
with 0≤ `≤ 2k−1 if and only if G is a pebble-game graph.
6. The pebble-game-with-colors decomposition
In this section we prove Theorem 2, which characterizes all pebble-game decompositions. We
start with the following lemmas about the structure of monochromatic connected components
in H, the directed graph maintained during the pebble game.
Lemma 10 (Monochromatic pebble game subgraphs are (1,0)-sparse). Let Hi be the sub-
graph of H induced by edges with pebbles of color ci on them. Then Hi is (1,0)-sparse, for
i = 1, . . . ,k.
Proof. By (I4), every vertex of Hi has out degree at most one; hence any V ′ spans at most |V ′| edges in Hi, and Hi is (1,0)-sparse.
Lemma 11 (Tree-pieces in a pebble-game graph). Every subgraph of the directed graph H
in a pebble game construction contains at least ` monochromatic tree-pieces, and each of these
is rooted at either a vertex with a pebble on it or a vertex that is the tail of an out-edge.
Recall that an out-edge from a subgraph H ′ = (V ′,E ′) is an edge vw with v∈V ′ and vw /∈ E ′.
Proof. Let H ′ = (V ′,E ′) be a non-empty subgraph of H, and assume without loss of generality
that H ′ is induced by V ′. By (I3), out(V ′)+ peb(V ′) ≥ `. We will show that each pebble and
out-edge tail is the root of a tree-piece.
Consider a vertex v ∈ V ′ and a color ci. By (I4) there is a unique monochromatic directed
path of color ci starting at v. By (I5), if this path ends at a pebble, it does not have a cycle.
Similarly, if this path reaches a vertex that is the tail of an out-edge also in color ci (i.e., if the
monochromatic path from v leaves V ′), then the path cannot have a cycle in H ′.
Since this argument works for any vertex in any color, for each color there is a partitioning
of the vertices into those that can reach each pebble, out-edge tail, or cycle. It follows that each
pebble and out-edge tail is the root of a monochromatic tree, as desired.
Applied to the whole graph Lemma 11 gives us the following.
Lemma 12 (Pebbles are the roots of trees). In any pebble game configuration, each pebble of
color ci is the root of a (possibly empty) monochromatic tree-piece of color ci.
Remark: Haas showed in [7] that in an `Tk, a subgraph induced by n′ ≥ 2 vertices with m′
edges has exactly kn′−m′ tree-pieces in it. Lemma 11 strengthens Haas’ result by extending it
to the lower range and giving a construction that finds the tree-pieces, showing the connection
between the `+1 pebble condition and the hereditary condition on proper `Tk .
We conclude our investigation of arbitrary pebble game constructions with a description of
the decomposition induced by the pebble game with colors.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game
graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each
is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse
graphs in the decomposition.
Proof. Let G be a pebble-game graph. The existence of the k edge-disjoint (1,0)-sparse sub-
graphs was shown in Lemma 10, and Lemma 11 proves the condition on subgraphs.
For the other direction, we observe that a color ci with ti tree-pieces in a given subgraph on
n′ vertices can span at most n′− ti of its edges; summing over all the colors shows that a graph with a pebble-game
decomposition must be sparse. Apply Theorem 1 to complete the proof.
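Written out (with m′i denoting the number of edges of color ci inside a subgraph on n′ vertices with m′ edges; this notation is ours), the count is:

```latex
m' = \sum_{i=1}^{k} m'_i
   \le \sum_{i=1}^{k} \left( n' - t_i \right)
   = kn' - \sum_{i=1}^{k} t_i
   \le kn' - \ell
```

since each color with ti tree-pieces spans at most n′ − ti edges and the decomposition guarantees at least ` tree-pieces in every subgraph; this is exactly the (k, `)-sparsity count.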
Remark: We observe that a pebble-game decomposition for a Laman graph may be read out
of the bipartite matching used in Hendrickson’s Laman graph extraction algorithm [9]. Indeed,
pebble game orientations have a natural correspondence with the bipartite matchings used in [9].
Maps-and-trees are a special case of pebble-game decompositions for tight graphs: if there
are no cycles in ` of the colors, then the trees rooted at the corresponding ` pebbles must be
spanning, since they have n− 1 edges. Also, if each color forms a forest in an upper range
pebble-game decomposition, then the tree-pieces condition ensures that the pebble-game de-
composition is a proper `Tk .
In the next section, we show that the pebble game can be specialized to correspond to maps-
and-trees and proper `Tk decompositions.
7. Canonical Pebble Game Constructions
In this section we prove the main theorems (Theorem 3 and Theorem 4), continuing the inves-
tigation of decompositions induced by pebble game constructions by studying the case where a
minimum number of monochromatic cycles are created. The main idea, captured in Lemma 15
and illustrated in Figure 6, is to avoid creating cycles while collecting pebbles. We show that
this is always possible, implying that monochromatic map-graphs are created only when we
add more than k(n′−1) edges to some set of n′ vertices. For the lower range, this implies that
every color is a forest. Every decomposition characterization of tight graphs discussed above
follows immediately from the main theorem, giving new proofs of the previous results in a
unified framework.
In the proof, we will use two specializations of the pebble game moves. The first is a modi-
fication of the add-edge move.
Canonical add-edge: When performing an add-edge move, cover the new edge with a color
that is on both vertices if possible. If not, then take the highest numbered color present.
The second is a restriction on which pebble-slide moves we allow.
Canonical pebble-slide: A pebble-slide move is allowed only when it does not create a
monochromatic cycle.
We call a pebble game construction that uses only these moves canonical. In this section
we will show that every pebble-game graph has a canonical pebble game construction (Lemma
14 and Lemma 15) and that canonical pebble game constructions correspond to proper `Tk and
maps-and-trees decompositions (Theorem 3 and Theorem 4).
We begin with a technical lemma that motivates the definition of canonical pebble game
constructions. It shows that the situations disallowed by the canonical moves are all the ways
for cycles to form in the lowest ` colors.
Lemma 13 (Monochromatic cycle creation). Let v ∈ V have a pebble p of color ci on it and
let w be a vertex in the same tree of color ci as v. A monochromatic cycle colored ci is created
in exactly one of the following ways:
(M1) The edge vw is added with an add-edge move.
(M2) The edge wv is reversed by a pebble-slide move and the pebble p is used to cover the reverse
edge vw.
Proof. Observe that the preconditions in the statement of the lemma are implied by Lemma 7.
By Lemma 12 monochromatic cycles form when the last pebble of color ci is removed from a
connected monochromatic subgraph. (M1) and (M2) are the only ways to do this in a pebble
game construction, since the color of an edge only changes when it is inserted the first time or
a new pebble is put on it by a pebble-slide move.
Fig. 5. Creating monochromatic cycles in a (2,0)-pebble game. (a) A type (M1) move creates a cycle by
adding a black edge. (b) A type (M2) move creates a cycle with a pebble-slide move. The vertices are
labeled according to their role in the definition of the moves.
Figure 5(a) and Figure 5(b) show examples of (M1) and (M2) map-graph creation moves,
respectively, in a (2,0)-pebble game construction.
We next show that if a graph has a pebble game construction, then it has a canonical peb-
ble game construction. This is done in two steps, considering the cases (M1) and (M2) sepa-
rately. The proof gives two constructions that implement the canonical add-edge and canonical
pebble-slide moves.
Lemma 14 (The canonical add-edge move). Let G be a graph with a pebble game construc-
tion. Cycle creation steps of type (M1) can be eliminated in colors ci for 1 ≤ i ≤ `′, where
`′ = min{k, `}.
Proof. For add-edge moves, cover the edge with a color present on both v and w if possible. If
this is not possible, then there are `+1 distinct colors present. Use the highest numbered color
to cover the new edge.
Remark: We note that in the upper range, there is always a repeated color, so no canonical
add-edge moves create cycles in the upper range.
The canonical pebble-slide move is defined by a global condition. To prove that we obtain
the same class of graphs using only canonical pebble-slide moves, we need to extend Lemma
9 to only canonical moves. The main step is to show that if there is any sequence of moves that
reorients a path from v to w, then there is a sequence of canonical moves that does the same
thing.
Lemma 15 (The canonical pebble-slide move). Any sequence of pebble-slide moves leading
to an add-edge move can be replaced with one that has no (M2) steps and allows the same
add-edge move.
In other words, if it is possible to collect `+ 1 pebbles on the ends of an edge to be added,
then it is possible to do this without creating any monochromatic cycles.
Figure 7 and Figure 8 illustrate the construction used in the proof of Lemma 15. We call this
the shortcut construction by analogy to matroid union and intersection augmenting paths used
in previous work on the lower range.
Figure 6 shows the structure of the proof. The shortcut construction removes an (M2) step
at the beginning of a sequence that reorients a path from v to w with pebble-slides. Since one
application of the shortcut construction reorients a simple path from a vertex w′ to w, and a
path from v to w′ is preserved, the shortcut construction can be applied inductively to find the
sequence of moves we want.
Fig. 6. Outline of the shortcut construction: (a) An arbitrary simple path from v to w with curved lines
indicating simple paths. (b) An (M2) step. The black edge, about to be flipped, would create a cycle,
shown in dashed and solid gray, of the (unique) gray tree rooted at w. The solid gray edges were part
of the original path from (a). (c) The shortened path to the gray pebble; the new path follows the gray
tree all the way from the first time the original path touched the gray tree at w′. The path from v to w′ is
simple, and the shortcut construction can be applied inductively to it.
Proof. Without loss of generality, we can assume that our sequence of moves reorients a simple
path in H, and that the first move (the end of the path) is (M2). The (M2) step moves a pebble
of color ci from a vertex w onto the edge vw, which is reversed. Because the move is (M2), v
and w are contained in a maximal monochromatic tree of color ci. Call this tree H ′i , and observe
that it is rooted at w.
Now consider the edges reversed in our sequence of moves. As noted above, before we make
any of the moves, these sketch out a simple path in H ending at w. Let z be the first vertex on
this path in H ′i . We modify our sequence of moves as follows: delete, from the beginning, every
move before the one that reverses some edge yz; prepend onto what is left a sequence of moves
that moves the pebble on w to z in H ′i .
Fig. 7. Eliminating (M2) moves: (a) an (M2) move; (b) avoiding the (M2) by moving along another path.
The path where the pebbles move is indicated by doubled lines.
Fig. 8. Eliminating (M2) moves: (a) the first step to move the black pebble along the doubled path is
(M2); (b) avoiding the (M2) and simplifying the path.
Since no edges change color in the beginning of the new sequence, we have eliminated
the (M2) move. Because our construction does not change any of the edges involved in the
remaining tail of the original sequence, the part of the original path that is left in the new
sequence will still be a simple path in H, meeting our initial hypothesis.
The rest of the lemma follows by induction.
Together Lemma 14 and Lemma 15 prove the following.
Lemma 16. If G is a pebble-game graph, then G has a canonical pebble game construction.
Using canonical pebble game constructions, we can identify the tight pebble-game graphs
with maps-and-trees and `Tk graphs.
Theorem 3 (Main Theorem (Lower Range): Maps-and-trees coincide with pebble-game
graphs). Let 0 ≤ ` ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, `)-
maps-and-trees.
Proof. As observed above, a maps-and-trees decomposition is a special case of the pebble game
decomposition. Applying Theorem 2, we see that any maps-and-trees must be a pebble-game
graph.
For the reverse direction, consider a canonical pebble game construction of a tight graph.
From Lemma 8, we see that there are ` pebbles left on G at the end of the construction. The
definition of the canonical add-edge move implies that there must be at least one pebble of
each ci for i = 1,2, . . . , `. It follows that there is exactly one of each of these colors. By Lemma
12, each of these pebbles is the root of a monochromatic tree-piece with n− 1 edges, yielding
the required ` edge-disjoint spanning trees.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let `≤ k. A graph
G is tight if and only if it has a (k, `)-maps-and-trees decomposition.
We next consider the decompositions induced by canonical pebble game constructions when
`≥ k +1.
Theorem 4 (Main Theorem (Upper Range): Proper `Tk graphs coincide with pebble-game
graphs). Let k≤ `≤ 2k−1. A graph G is a tight pebble-game graph if and only if it
is a proper `Tk with kn− ` edges.
Proof. As observed above, a proper `Tk decomposition must be sparse. What we need to show
is that a canonical pebble game construction of a tight graph produces a proper `Tk .
By Theorem 2 and Lemma 16, we already have the condition on tree-pieces and the decom-
position into ` edge-disjoint trees. Finally, an application of (I4) shows that every vertex must
be in exactly k of the trees, as required.
Corollary 6 (Crapo [2], Haas [7]). Let k ≤ `≤ 2k−1. A graph G is tight if and only if it is a
proper `Tk .
8. Pebble game algorithms for finding decompositions
A naïve implementation of the constructions in the previous section leads to an algorithm
requiring Θ(n^2) time to collect each pebble in a canonical construction: in the worst case, Θ(n)
applications of the construction in Lemma 15, each requiring Θ(n) time, give a total running
time of Θ(n^3) for the decomposition problem.
In this section, we describe algorithms for the decomposition problem that run in time
O(n^2). We begin with the overall structure of the algorithm.
Algorithm 17 (The canonical pebble game with colors).
Input: A graph G.
Output: A pebble-game graph H.
Method:
– Set V (H) = V (G) and place one pebble of each color on the vertices of H.
– For each edge vw ∈ E(G) try to collect at least `+1 pebbles on v and w using pebble-slide
moves as described by Lemma 15.
– If at least `+1 pebbles can be collected, add vw to H using an add-edge move as in Lemma
14, otherwise discard vw.
– Finally, return H, and the locations of the pebbles.
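A compact executable sketch of Algorithm 17 follows. It is our illustration, not the paper's implementation: it handles simple graphs only (at most one edge per vertex pair, no loops), uses a plain DFS for the pebble search, and implements the canonical add-edge color choice but not the canonical pebble-slide restriction, so it certifies sparsity as in Theorem 1 without guaranteeing a canonical decomposition.

```python
def pebble_game(n, edge_list, k, ell):
    """(k, ell)-pebble game with colors on vertices 0..n-1 (simple graphs
    only).  Returns (accepted, pebbles): accepted edges as (v, w, color)
    triples, and the colors of the pebbles remaining on each vertex."""
    pebbles = {v: set(range(k)) for v in range(n)}   # pebble colors on v
    out = {v: {} for v in range(n)}                  # out[v][w] = edge color

    def bring_pebble(v, forbidden):
        # DFS from v along out-edges for a pebble off `forbidden`, then
        # reverse the path with pebble-slide moves so the pebble reaches v.
        parent, stack = {v: None}, [v]
        while stack:
            x = stack.pop()
            if x not in forbidden and pebbles[x]:
                while parent[x] is not None:         # slide back toward v
                    p = parent[x]
                    c_edge = out[p].pop(x)           # color covering p -> x
                    c_peb = pebbles[x].pop()         # pebble taken from x
                    out[x][p] = c_peb                # reversed edge x -> p
                    pebbles[p].add(c_edge)           # freed pebble lands on p
                    x = p
                return True
            for y in out[x]:
                if y not in parent:
                    parent[y] = x
                    stack.append(y)
        return False

    accepted = []
    for v, w in edge_list:
        # try to collect at least ell + 1 pebbles on {v, w} (Lemma 9)
        while len(pebbles[v]) + len(pebbles[w]) < ell + 1:
            if not (bring_pebble(v, {v, w}) or bring_pebble(w, {v, w})):
                break
        if len(pebbles[v]) + len(pebbles[w]) >= ell + 1:
            t, h = (v, w) if pebbles[v] else (w, v)  # tail must hold a pebble
            both = pebbles[v] & pebbles[w]
            c = max(both) if both else max(pebbles[t])   # canonical add-edge
            pebbles[t].remove(c)
            out[t][h] = c
            accepted.append((v, w, c))
    return accepted, pebbles
```

On K4 with (k, `) = (2,3) the game accepts five of the six edges and leaves ` = 3 pebbles, so the accepted edges form a maximum sized (2,3)-tight subgraph; on the triangle all three edges are accepted.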
Correctness. Theorem 1 and the result from [24] that the sparse graphs are the independent
sets of a matroid show that H is a maximum sized sparse subgraph of G. Since the construction
found is canonical, the main theorem shows that the coloring of the edges in H gives a maps-
and-trees or proper `Tk decomposition.
Complexity. We start by observing that the running time of Algorithm 17 is the time taken to
process O(n) edges added to H and O(m) edges not added to H. We first consider the cost of an
edge of G that is added to H.
Each of the pebble game moves can be implemented in constant time. What remains is to
describe an efficient way to find and move the pebbles. We use the following algorithm as a
subroutine of Algorithm 17 to do this.
Algorithm 18 (Finding a canonical path to a pebble).
Input: Vertices v and w, and a pebble game configuration on a directed graph H.
Output: If a pebble was found, ‘yes’, and ‘no’ otherwise. The configuration of H is updated.
Method:
– Start by doing a depth-first search from v in H. If no pebble is found on a vertex other than
w, stop and return ‘no.’
– Otherwise a pebble was found. We now have a path v = v1,e1, . . . ,ep−1,vp = u, where the vi
are vertices and ei is the edge vivi+1. Let c[ei] be the color of the pebble on ei. We will use
the array c[] to keep track of the colors of pebbles on vertices and edges after we move them
and the array s[] to sketch out a canonical path from v to u by finding a successor for each
edge.
– Set s[u] = ‘end’ and set c[u] to the color of an arbitrary pebble on u. We walk on the path in
reverse order: vp,ep−1,ep−2, . . . ,e1,v1. For each i, check to see if c[vi] is set; if so, go on to
the next i. Otherwise, check to see if c[vi+1] = c[ei].
– If it is, set s[vi] = ei and set c[vi] = c[ei], and go on to the next edge.
– Otherwise c[vi+1] ≠ c[ei]; try to find a monochromatic path in color c[vi+1] from vi to vi+1. If
a vertex x is encountered for which c[x] is set, we have a path vi = x1, f1,x2, . . . , fq−1,xq = x
that is monochromatic in the color of the edges; set c[xi] = c[ fi] and s[xi] = fi for i =
1,2, . . . ,q−1. If c[x] = c[ fq−1], stop. Otherwise, recursively check that there is not a monochro-
matic c[x] path from xq−1 to x using this same procedure.
– Finally, slide pebbles along the path from the original endpoints v to u specified by the
successor array s[v], s[s[v]], . . .
The correctness of Algorithm 18 comes from the fact that it is implementing the shortcut
construction. Efficiency comes from the fact that instead of potentially moving the pebble back
and forth, Algorithm 18 pre-computes a canonical path crossing each edge of H at most three
times: once in the initial depth-first search, and twice while converting the initial path to a
canonical one. It follows that each accepted edge takes O(n) time, for a total of O(n^2) time
spent processing edges in H.
Although we have not discussed this explicitly, for the algorithm to be efficient we need to
maintain components as in [12]. After each accepted edge, the components of H can be updated
in time O(n). Finally, the results of [12, 13] show that the rejected edges take an amortized O(1)
time each.
Summarizing, we have shown that the canonical pebble game with colors solves the
decomposition problem in time O(n^2).
9. An important special case: Rigidity in dimension 2 and slider-pinning
In this short section we present a new application for the special case of practical importance,
k = 2, ` = 3. As discussed in the introduction, Laman’s theorem [11] characterizes minimally
rigid graphs as the (2,3)-tight graphs. In recent work on slider pinning, developed after the
current paper was submitted, we introduced the slider-pinning model of rigidity [15, 20]. Com-
binatorially, we model the bar-slider frameworks as simple graphs together with some loops
placed on their vertices in such a way that there are no more than 2 loops per vertex, one of each
color.
We characterize the minimally rigid bar-slider graphs [20] as graphs that are:
1. (2,3)-sparse for subgraphs containing no loops.
2. (2,0)-tight when loops are included.
We call these graphs (2,0,3)-graded-tight, and they are a special case of the graded-sparse
graphs studied in our paper [14].
The connection with the pebble games in this paper is the following.
Corollary 19 (Pebble games and slider-pinning). In any (2,3)-pebble game graph, if we
replace pebbles by loops, we obtain a (2,0,3)-graded-tight graph.
Proof. Follows from invariant (I3) of Lemma 7.
In [15], we study a special case of slider pinning where every slider is either vertical or
horizontal. We model the sliders as pre-colored loops, with the color indicating x or y direction.
For this axis parallel slider case, the minimally rigid graphs are characterized by:
1. (2,3)-sparse for subgraphs containing no loops.
2. Admit a 2-coloring of the edges so that each color is a forest (i.e., has no cycles), and each
monochromatic tree spans exactly one loop of its color.
This also has an interpretation in terms of colored pebble games.
Corollary 20 (The pebble game with colors and slider-pinning). In any canonical (2,3)-
pebble-game-with-colors graph, if we replace pebbles by loops of the same color, we obtain the
graph of a minimally pinned axis-parallel bar-slider framework.
Proof. Follows from Theorem 4 and Lemma 12.
10. Conclusions and open problems
We presented a new characterization of (k, `)-sparse graphs, the pebble game with colors, and
used it to give an efficient algorithm for finding decompositions of sparse graphs into edge-
disjoint trees. Our algorithm finds such sparsity-certifying decompositions in the upper range
and runs in time O(n^2), which is as fast as the algorithms for recognizing sparse graphs in the
upper range from [12].
We also used the pebble game with colors to describe a new sparsity-certifying decomposi-
tion that applies to the entire matroidal range of sparse graphs.
We defined and studied a class of canonical pebble game constructions that correspond to
either a maps-and-trees or proper `Tk decomposition. This gives a new proof of the Tutte-Nash-
Williams arboricity theorem and a unified proof of the previously studied decomposition cer-
tificates of sparsity. Canonical pebble game constructions also show the relationship between
the `+1 pebble condition, which applies to the upper range of `, and matroid union augmenting
paths, which do not apply in the upper range.
Algorithmic consequences and open problems. In [6], Gabow and Westermann give an O(n^{3/2})
algorithm for recognizing sparse graphs in the lower range and extracting sparse subgraphs from
dense ones. Their technique is based on efficiently finding matroid union augmenting paths,
which extend a maps-and-trees decomposition. The O(n^{3/2}) algorithm uses two subroutines to
find augmenting paths: cyclic scanning, which finds augmenting paths one at a time, and batch
scanning, which finds groups of disjoint augmenting paths.
We observe that Algorithm 17 can be used to replace cyclic scanning in Gabow and Wester-
mann’s algorithm without changing the running time. The data structures used in the
implementation of the pebble game, detailed in [12, 13], are simpler and easier to implement
than those used to support cyclic scanning.
The two major open algorithmic problems related to the pebble game are then:
Problem 1. Develop a pebble game algorithm with the properties of batch scanning and obtain
an implementable O(n^{3/2}) algorithm for the lower range.
Problem 2. Extend batch scanning to the `+1 pebble condition and derive an O(n^{3/2}) pebble
game algorithm for the upper range.
In particular, it would be of practical importance to find an implementable O(n^{3/2}) algorithm
for decompositions into edge-disjoint spanning trees.
References
1. Berg, A.R., Jordán, T.: Algorithms for graph rigidity and scene analysis. In: Proc. 11th
European Symposium on Algorithms (ESA ’03), LNCS, vol. 2832, pp. 78–89. (2003)
2. Crapo, H.: On the generic rigidity of plane frameworks. Tech. Rep. 1278, Institut de
recherche d’informatique et d’automatique (1988)
3. Edmonds, J.: Minimum partition of a matroid into independent sets. J. Res. Nat. Bur.
Standards Sect. B 69B, 67–72 (1965)
4. Edmonds, J.: Submodular functions, matroids, and certain polyhedra. In: Combinatorial
Optimization—Eureka, You Shrink!, no. 2570 in LNCS, pp. 11–26. Springer (2003)
5. Gabow, H.N.: A matroid approach to finding edge connectivity and packing arborescences.
Journal of Computer and System Sciences 50, 259–273 (1995)
6. Gabow, H.N., Westermann, H.H.: Forests, frames, and games: Algorithms for matroid sums
and applications. Algorithmica 7(1), 465–497 (1992)
7. Haas, R.: Characterizations of arboricity of graphs. Ars Combinatorica 63, 129–137 (2002)
8. Haas, R., Lee, A., Streinu, I., Theran, L.: Characterizing sparse graphs by map decompo-
sitions. Journal of Combinatorial Mathematics and Combinatorial Computing 62, 3–11
(2007)
9. Hendrickson, B.: Conditions for unique graph realizations. SIAM Journal on Computing
21(1), 65–84 (1992)
10. Jacobs, D.J., Hendrickson, B.: An algorithm for two-dimensional rigidity percolation: the
pebble game. Journal of Computational Physics 137, 346–365 (1997)
11. Laman, G.: On graphs and rigidity of plane skeletal structures. Journal of Engineering
Mathematics 4, 331–340 (1970)
12. Lee, A., Streinu, I.: Pebble game algorithms and sparse graphs. Discrete Mathematics
308(8), 1425–1437 (2008)
13. Lee, A., Streinu, I., Theran, L.: Finding and maintaining rigid components. In: Proc. Cana-
dian Conference of Computational Geometry. Windsor, Ontario (2005). http://cccg.
cs.uwindsor.ca/papers/72.pdf
14. Lee, A., Streinu, I., Theran, L.: Graded sparse graphs and matroids. Journal of Universal
Computer Science 13(10) (2007)
15. Lee, A., Streinu, I., Theran, L.: The slider-pinning problem. In: Proceedings of the 19th
Canadian Conference on Computational Geometry (CCCG’07) (2007)
16. Lovász, L.: Combinatorial Problems and Exercises. Akademiai Kiado and North-Holland,
Amsterdam (1979)
17. Nash-Williams, C.S.A.: Decomposition of finite graphs into forests. Journal of the London
Mathematical Society 39, 12 (1964)
18. Oxley, J.G.: Matroid theory. The Clarendon Press, Oxford University Press, New York
(1992)
19. Roskind, J., Tarjan, R.E.: A note on finding minimum cost edge disjoint spanning trees.
Mathematics of Operations Research 10(4), 701–708 (1985)
20. Streinu, I., Theran, L.: Combinatorial genericity and minimal rigidity. In: SCG ’08: Pro-
ceedings of the twenty-fourth annual Symposium on Computational Geometry, pp. 365–
374. ACM, New York, NY, USA (2008).
21. Tay, T.S.: Rigidity of multigraphs I: linking rigid bodies in n-space. Journal of Combinato-
rial Theory, Series B 26, 95–112 (1984)
22. Tay, T.S.: A new proof of Laman’s theorem. Graphs and Combinatorics 9, 365–370 (1993)
23. Tutte, W.T.: On the problem of decomposing a graph into n connected factors. Journal of
the London Mathematical Society 142, 221–230 (1961)
24. Whiteley, W.: The union of matroids and the rigidity of frameworks. SIAM Journal on
Discrete Mathematics 1(2), 237–255 (1988)
Sparsity-certifying Graph Decompositions
Ileana Streinu1*, Louis Theran2
1 Department of Computer Science, Smith College, Northampton, MA. e-mail: streinu@cs.smith.edu
2 Department of Computer Science, University of Massachusetts Amherst. e-mail: theran@cs.umass.edu
Abstract. We describe a new algorithm, the (k, `)-pebble game with colors, and use it to obtain a characterization of the family of (k, `)-sparse graphs and algorithmic solutions to a family of problems concerning tree decompositions of graphs. Special instances of sparse graphs appear in rigidity theory and have received increased attention in recent years. In particular, our colored pebbles generalize and strengthen the previous results of Lee and Streinu [12] and give a new proof of the Tutte-Nash-Williams characterization of arboricity. We also present a new decomposition that certifies sparsity based on the (k, `)-pebble game with colors. Our work also exposes connections between pebble game algorithms and previous sparse graph algorithms by Gabow [5], Gabow and Westermann [6] and Hendrickson [9].
1. Introduction and preliminaries
The focus of this paper is decompositions of (k, `)-sparse graphs into edge-disjoint subgraphs that certify sparsity. We use graph to mean a multigraph, possibly with loops. We say that a graph is (k, `)-sparse if no subset of n′ vertices spans more than kn′ − ` edges in the graph; a (k, `)-sparse graph with kn − ` edges is (k, `)-tight. We call the range k + 1 ≤ ` ≤ 2k − 1 the upper range of sparse graphs and 0 ≤ ` ≤ k the lower range.
In this paper, we present efficient algorithms for finding decompositions that certify sparsity in the upper range of `. Our algorithms also apply in the lower range, which was already addressed by [3, 4, 5, 6, 19]. A decomposition certifies the sparsity of a graph if the sparse graphs and the graphs admitting the decomposition coincide.
Our algorithms are based on a new characterization of sparse graphs, which we call the pebble game with colors. The pebble game with colors is a simple graph construction rule that produces a sparse graph along with a sparsity-certifying decomposition.
We define and study a canonical class of pebble game constructions, which correspond to previously studied decompositions of sparse graphs into edge-disjoint trees. Our results provide a unifying framework for all the previously known special cases, including Nash-Williams-Tutte and [7, 24]. Indeed, in the lower range, canonical pebble game constructions capture the properties of the augmenting paths used in matroid union and intersection algorithms [5, 6]. Since sparse graphs in the upper range are not known to be unions or intersections of matroids for which there are efficient augmenting path algorithms, these do not easily apply in the upper range. Pebble game with colors constructions may thus be considered a strengthening of augmenting paths to the upper range of matroidal sparse graphs.

* Research of both authors funded by the NSF under grants NSF CCF-0430990 and NSF-DARPA CARGO CCR-0310661 to the first author.

Term: Meaning
Sparse graph G: Every non-empty subgraph on n′ vertices has ≤ kn′ − ` edges
Tight graph G: G = (V,E) is sparse and |V| = n, |E| = kn − `
Block H in G: G is sparse, and H is a tight subgraph
Component H of G: G is sparse and H is a maximal block
Map-graph: Graph that admits an out-degree-exactly-one orientation
(k, `)-maps-and-trees: Edge-disjoint union of ` spanning trees and (k − `) map-graphs
`Tk: Union of ` trees, each vertex is in exactly k of them
Set of tree-pieces of an `Tk induced on V′ ⊆ V: Pieces of trees in the `Tk spanned by E(V′)
Proper `Tk: Every V′ ⊆ V contains ≥ ` tree-pieces of the `Tk
Table 1. Sparse graph and decomposition terminology used in this paper.
1.1. Sparse graphs
A graph is (k, `)-sparse if for any non-empty subgraph with m′ edges and n′ vertices, m′ ≤ kn′ − `. We observe that this condition implies that 0 ≤ ` ≤ 2k − 1, and from now on in this paper we will make this assumption. A sparse graph that has n vertices and exactly kn − ` edges is called tight.
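As a quick sanity check of the counting condition, the definition can be tested directly by enumerating vertex subsets. The following is a brute-force sketch (exponential in n, for illustration only); the function names and the edge-list representation are our own assumptions, not from the paper, and subsets of fewer than two vertices are skipped since the count applies to non-empty subgraphs.

```python
from itertools import combinations

def is_sparse(n, edges, k, ell):
    """Brute-force check that the multigraph on vertices 0..n-1 is
    (k, ell)-sparse: no subset of n' >= 2 vertices spans more than
    k*n' - ell edges."""
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            s = set(subset)
            spanned = sum(1 for (u, v) in edges if u in s and v in s)
            if spanned > k * size - ell:
                return False
    return True

def is_tight(n, edges, k, ell):
    """A sparse graph on n vertices with exactly k*n - ell edges is tight."""
    return is_sparse(n, edges, k, ell) and len(edges) == k * n - ell

# K4 is (2,2)-tight but not (2,3)-sparse: its 6 edges exceed 2*4 - 3 = 5.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
```

For example, K4 (the graph of Figure 4) passes the (2,2) count on every subset but fails the (2,3) count on the whole vertex set.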
For a graph G = (V,E), and V′ ⊆ V, we use the notation span(V′) for the number of edges in the subgraph induced by V′. In a directed graph, out(V′) is the number of edges with the tail in V′ and the head in V − V′; for a subgraph induced by V′, we call such an edge an out-edge.
There are two important types of subgraphs of sparse graphs. A block is a tight subgraph of a sparse graph. A component is a maximal block.
Table 1 summarizes the sparse graph terminology used in this paper.
1.2. Sparsity-certifying decompositions
A k-arborescence is a graph that admits a decomposition into k edge-disjoint spanning trees. Figure 1(a) shows an example of a 3-arborescence. The k-arborescent graphs are described by the well-known theorems of Tutte [23] and Nash-Williams [17] as exactly the (k,k)-tight graphs.
A map-graph is a graph that admits an orientation such that the out-degree of each vertex is exactly one. A k-map-graph is a graph that admits a decomposition into k edge-disjoint map-graphs. Figure 1(b) shows an example of a 2-map-graph; the edges are oriented in one possible configuration certifying that each color forms a map-graph. Map-graphs may be equivalently defined (see, e.g., [18]) as having exactly one cycle per connected component.1
A (k, `)-maps-and-trees is a graph that admits a decomposition into k − ` edge-disjoint map-graphs and ` spanning trees.
Another characterization of map-graphs, which we will use extensively in this paper, is as the (1,0)-tight graphs [8, 24]. The k-map-graphs are evidently (k,0)-tight, and [8, 24] show that the converse holds as well.
1 Our terminology follows Lovász [16]. In the matroid literature map-graphs are sometimes known as bases of the bicycle matroid or spanning pseudoforests.
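The one-cycle-per-component characterization gives a simple way to recognize map-graphs: a connected component admits an out-degree-exactly-one orientation precisely when it has as many edges as vertices. The sketch below checks this with a union-find pass; the function name and representation are illustrative assumptions, not from the paper.

```python
from collections import Counter

def is_map_graph(n, edges):
    """Check that every connected component of the multigraph on vertices
    0..n-1 has exactly as many edges as vertices, i.e. exactly one cycle
    per component, which is equivalent to being a map-graph."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    verts = Counter(find(v) for v in range(n))          # vertices per component
    span = Counter(find(u) for (u, _) in edges)         # edges per component
    return all(span[root] == count for root, count in verts.items())
```

A triangle is a map-graph (orient the cycle), and so is a single loop, while a tree has one edge too few in its component.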
Fig. 1. Examples of sparsity-certifying decompositions: (a) a 3-arborescence; (b) a 2-map-graph; (c) a (2,1)-maps-and-trees. Edges with the same line style belong to the same subgraph. The 2-map-graph is shown with a certifying orientation.
An `Tk is a decomposition into ` edge-disjoint (not necessarily spanning) trees such that each vertex is in exactly k of them. Figure 2(a) shows an example of a 3T2.
Given a subgraph G′ of an `Tk graph G, the set of tree-pieces in G′ is the collection of the components of the trees in G induced by G′ (since G′ is a subgraph, each tree may contribute multiple pieces to the set of tree-pieces in G′). We observe that these tree-pieces may come from the same tree or be single-vertex "empty trees." It is also helpful to note that the definition of a tree-piece is relative to a specific subgraph. An `Tk decomposition is proper if the set of tree-pieces of any subgraph G′ has size at least `.
Figure 2(a) shows a graph with a 3T2 decomposition; we note that one of the trees is an isolated vertex in the bottom-right corner. The subgraph of Figure 2(b) has three black tree-pieces and one gray tree-piece: an isolated vertex in the top-right corner, and two single edges. These count as three tree-pieces, even though they come from the same black tree when the whole graph is considered. Figure 2(c) shows another subgraph; in this case there are three gray tree-pieces and one black one.
Table 1 contains the decomposition terminology used in this paper.
The decomposition problem. We define the decomposition problem for sparse graphs as taking a graph as its input and producing as output a decomposition that can be used to certify sparsity. In this paper, we will study three kinds of outputs: maps-and-trees; proper `Tk decompositions; and the pebble-game-with-colors decomposition, which is defined in the next section.
2. Historical background
The well-known theorems of Tutte [23] and Nash-Williams [17] relate the (k,k)-tight graphs to the existence of decompositions into edge-disjoint spanning trees. Taking a matroidal viewpoint,
Fig. 2. (a) A graph with a 3T2 decomposition; one of the three trees is a single vertex in the bottom-right corner. (b) The highlighted subgraph inside the dashed contour has three black tree-pieces and one gray tree-piece. (c) The highlighted subgraph inside the dashed contour has three gray tree-pieces (one is a single vertex) and one black tree-piece.
Edmonds [3, 4] gave another proof of this result using matroid unions. The equivalence of maps-and-trees graphs and tight graphs in the lower range is shown using matroid unions in [24], and matroid augmenting paths are the basis of the algorithms for the lower range of [5, 6, 19].
In rigidity theory a foundational theorem of Laman [11] shows that (2,3)-tight (Laman) graphs correspond to generically minimally rigid bar-and-joint frameworks in the plane. Tay [21] proved an analogous result for body-bar frameworks in any dimension using (k,k)-tight graphs. Rigidity by counts motivated interest in the upper range, and Crapo [2] proved the equivalence of Laman graphs and proper 3T2 graphs. Tay [22] used this condition to give a direct proof of Laman's theorem and generalized the 3T2 condition to all `Tk for k + 1 ≤ ` ≤ 2k − 1.
Haas [7] studied `Tk decompositions in detail and proved the equivalence of tight graphs and proper `Tk graphs for the general upper range. We note that aside from our new pebble-game-with-colors decomposition, all the combinatorial characterizations of the upper range of sparse graphs, including the counts, have a geometric interpretation [11, 21, 22, 24].
A pebble game algorithm was first proposed in [10] as an elegant alternative to Hendrickson's Laman graph algorithms [9]. Berg and Jordán [1] gave the formal analysis of the pebble game of [10] and introduced the idea of playing the game on a directed graph. Lee and Streinu [12] generalized the pebble game to the entire range of parameters 0 ≤ ` ≤ 2k − 1, and left as an open problem using the pebble game to find sparsity-certifying decompositions.
3. The pebble game with colors
Our pebble game with colors is a set of rules for constructing graphs indexed by nonnegative integers k and `. We will use the pebble game with colors as the basis of an efficient algorithm for the decomposition problem later in this paper. Since the phrase "with colors" is necessary only for comparison with [12], we will omit it in the rest of the paper when the context is clear.
We now present the pebble game with colors. The game is played by a single player on a fixed finite set of vertices. The player makes a finite sequence of moves; a move consists in the addition and/or orientation of an edge. At any moment, the state of the game is captured by a directed graph H, with colored pebbles on vertices and edges. The edges of H are colored by the pebbles on them. While playing the pebble game all edges are directed, and we use the notation vw to indicate an edge directed from v to w.
We describe the pebble game with colors in terms of its initial configuration and the allowed moves.
Fig. 3. Examples of pebble game with colors moves: (a) add-edge. (b) pebble-slide. Pebbles on vertices are shown as black or gray dots. Edges are colored with the color of the pebble on them.
Initialization: At the beginning of the pebble game, H has n vertices and no edges. We start by placing k pebbles on each vertex of H, one of each color ci, for i = 1, 2, ..., k.
Add-edge-with-colors: Let v and w be vertices with at least ` + 1 pebbles on them. Assume (w.l.o.g.) that v has at least one pebble on it. Pick up a pebble from v, add the oriented edge vw to E(H) and put the pebble picked up from v on the new edge.
Figure 3(a) shows examples of the add-edge move.
Pebble-slide: Let w be a vertex with a pebble p on it, and let vw be an edge in H. Replace vw with wv in E(H); put the pebble that was on vw on v; and put p on wv.
Note that the color of an edge can change with a pebble-slide move. Figure 3(b) shows examples. The convention in these figures, and throughout this paper, is that pebbles on vertices are represented as colored dots, and that edges are shown in the color of the pebble on them.
From the definition of the pebble-slide move, it is easy to see that a particular pebble is always either on the vertex where it started or on an edge that has this vertex as its tail. However, when making a sequence of pebble-slide moves that reverses the orientation of a path in H, it is sometimes convenient to think of this path-reversal sequence as bringing a pebble from the end of the path to the beginning.
The output of playing the pebble game is its complete configuration.
Output: At the end of the game, we obtain the directed graph H, along with the location and colors of the pebbles. Observe that since every edge has exactly one pebble on it, the pebble game configuration colors the edges.
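The state and the two moves are small enough to capture in a few lines. The sketch below is our own minimal model of the rules above (the class and field names are illustrative assumptions, not from the paper); it tracks the pebbles on each vertex and the colored pebble on each directed edge, and asserts the precondition of each move.

```python
class PebbleGame:
    """State of the (k, ell)-pebble game with colors: k pebbles of distinct
    colors 0..k-1 start on each vertex; each inserted edge carries exactly
    one pebble, whose color colors the edge."""
    def __init__(self, n, k, ell):
        self.k, self.ell = k, ell
        self.pebbles = [set(range(k)) for _ in range(n)]  # colors on each vertex
        self.edges = {}  # directed edge (tail, head) -> color of its pebble

    def add_edge(self, v, w):
        # precondition: at least ell + 1 pebbles on {v, w}
        assert len(self.pebbles[v]) + len(self.pebbles[w]) >= self.ell + 1
        tail, head = (v, w) if self.pebbles[v] else (w, v)
        color = min(self.pebbles[tail])       # any pebble on the tail works
        self.pebbles[tail].remove(color)
        self.edges[(tail, head)] = color      # the pebble covers the new edge

    def pebble_slide(self, v, w):
        # reverse edge vw using a pebble p on w: vw's pebble returns to v,
        # and p covers the reversed edge wv (the edge's color may change)
        assert (v, w) in self.edges and self.pebbles[w]
        self.pebbles[v].add(self.edges.pop((v, w)))
        p = min(self.pebbles[w])
        self.pebbles[w].remove(p)
        self.edges[(w, v)] = p
```

For instance, on two vertices with k = 2 and ` = 3, an add-edge move followed by a pebble-slide reverses the edge and may recolor it, exactly as in Figure 3.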
We say that the underlying undirected graph G of H is constructed by the (k, `)-pebble game, or that H is a pebble-game graph.
Since each edge of H has exactly one pebble on it, the pebble game configuration partitions the edges of H, and thus of G, into k different colors. We call this decomposition of H a pebble-game-with-colors decomposition. Figure 4(a) shows an example of a (2,2)-tight graph with a pebble-game decomposition.
Let G = (V,E) be a pebble-game graph with the coloring induced by the pebbles on the edges, and let G′ be a subgraph of G. Then the coloring of G induces a set of monochromatic connected subgraphs of G′ (there may be more than one of the same color). Such a monochromatic subgraph is called a map-graph-piece of G′ if it contains a cycle (in G′) and a tree-piece of G′ otherwise. The set of tree-pieces of G′ is the collection of tree-pieces induced by G′. As with the corresponding definition for `Tk s, the set of tree-pieces is defined relative to a specific subgraph; in particular, a tree-piece may be part of a larger cycle that includes edges not spanned by G′.
The properties of pebble-game decompositions are studied in Section 6, and Theorem 2 shows that each color must be (1,0)-sparse. The orientation of the edges in Figure 4(a) shows this.
For example, Figure 4(a) shows a (2,2)-tight graph with one possible pebble-game decomposition. The whole graph contains a gray tree-piece and a black tree-piece that is an isolated vertex. The subgraph of Figure 4(b) has one black tree and one gray tree, with the edges of the black tree coming from a cycle in the larger graph. In Figure 4(c), however, the black cycle does not contribute a tree-piece. The three tree-pieces in this subgraph are single-vertex gray trees.
In the following discussion, we use the notation peb(v) for the number of pebbles on v and pebi(v) for the number of pebbles of color ci on v.
Table 2 lists the pebble game notation used in this paper.

Fig. 4. A (2,2)-tight graph with one possible pebble-game decomposition. The edges are oriented to show (1,0)-sparsity for each color. (a) The graph K4 with a pebble-game decomposition. There is an empty black tree at the center vertex and a gray spanning tree. (b) The highlighted subgraph consists of two black trees and one gray tree; the black edges are part of a larger cycle but contribute a tree to the subgraph. (c) The highlighted subgraph (with light gray background) has three empty gray trees; the black edges contain a cycle and do not contribute a tree-piece to the subgraph.

Notation: Meaning
span(V′): Number of edges spanned in H by V′ ⊆ V; i.e., |EH(V′)|
peb(V′): Number of pebbles on V′ ⊆ V
out(V′): Number of edges vw in H with v ∈ V′ and w ∈ V − V′
pebi(v): Number of pebbles of color ci on v ∈ V
outi(v): Number of edges vw colored ci for v ∈ V
Table 2. Pebble game notation used in this paper.
4. Our Results
We describe our results in this section. The rest of the paper provides the proofs.
Our first result is a strengthening of the pebble games of [12] to include colors. It says that the sparse graphs are exactly the pebble-game graphs. Recall that from now on, all pebble games discussed in this paper are our pebble game with colors unless explicitly noted otherwise.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse with 0 ≤ ` ≤ 2k − 1 if and only if G is a pebble-game graph.
Next we consider pebble-game decompositions, showing that they are a generalization of proper `Tk decompositions that extends to the entire matroidal range of sparse graphs.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse graphs in the decomposition.
The (1,0)-sparse subgraphs in the statement of Theorem 2 are the colors of the pebbles; thus Theorem 2 gives a characterization of the pebble-game-with-colors decompositions obtained by playing the pebble game defined in the previous section. Note the similarity between the requirement that the set of tree-pieces have size at least ` in Theorem 2 and the definition of a proper `Tk.
Our next results show that for any pebble-game graph, we can specialize its pebble game construction to generate a decomposition that is a maps-and-trees or a proper `Tk. We call these specialized constructions canonical pebble game constructions, and using canonical pebble game constructions, we obtain new direct proofs of existing arboricity results.
We observe from Theorem 2 that maps-and-trees are special cases of the pebble-game decomposition: both the spanning trees and the spanning map-graphs are (1,0)-sparse, and each of the spanning trees contributes at least one tree-piece to every subgraph.
The case of proper `Tk graphs is more subtle; if each color in a pebble-game decomposition is a forest, then we have found a proper `Tk, but this class is a subset of all the possible proper `Tk decompositions of a tight graph. We prove that this class of proper `Tk decompositions is sufficient to certify sparsity.
We now state the main theorem for the upper and lower range.
Theorem 3 (Main Theorem (Lower Range): Maps-and-trees coincide with pebble-game graphs). Let 0 ≤ ` ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, `)-maps-and-trees.
Theorem 4 (Main Theorem (Upper Range): Proper `Tk graphs coincide with pebble-game graphs). Let k + 1 ≤ ` ≤ 2k − 1. A graph G is a tight pebble-game graph if and only if it is a proper `Tk with kn − ` edges.
As corollaries, we obtain the existing decomposition results for sparse graphs.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let 0 ≤ ` ≤ k. A graph G is tight if and only if it has a (k, `)-maps-and-trees decomposition.
Corollary 6 (Crapo [2], Haas [7]). Let k + 1 ≤ ` ≤ 2k − 1. A graph G is tight if and only if it is a proper `Tk.
Finding canonical pebble game constructions efficiently. The proofs of Theorem 3 and Theorem 4 lead to an obvious algorithm with O(n³) running time for the decomposition problem. Our last result improves on this, showing that a canonical pebble game construction, and thus a maps-and-trees or proper `Tk decomposition, can be found using a pebble game algorithm in O(n²) time and space.
These time and space bounds mean that our algorithm can be combined with those of [12] without any change in complexity.
5. Pebble game graphs
In this section we prove Theorem 1, a strengthening of the results of [12] to the pebble game with colors. Since many of the relevant properties of the pebble game with colors carry over directly from the pebble games of [12], we refer the reader there for the proofs.
We begin by establishing some invariants that hold during the execution of the pebble game.
Lemma 7 (Pebble game invariants). During the execution of the pebble game, the following invariants are maintained in H:
(I1) There are at least ` pebbles on V. [12]
(I2) For each vertex v, span(v) + out(v) + peb(v) = k. [12]
(I3) For each V′ ⊆ V, span(V′) + out(V′) + peb(V′) = kn′. [12]
(I4) For every vertex v ∈ V, outi(v) + pebi(v) = 1.
(I5) Every maximal path consisting only of edges colored ci ends in either the first vertex with a pebble of color ci on it or a cycle.
Proof. (I1), (I2), and (I3) come directly from [12].
(I4) This invariant clearly holds in the initialization phase of the pebble game with colors. That add-edge and pebble-slide moves preserve (I4) is clear from inspection.
(I5) By (I4), a monochromatic path of edges is forced to end only at a vertex with a pebble of the same color on it. If no pebble of that color is reachable, then the path must eventually visit some vertex twice.
From these invariants, we can show that the graphs constructible by the pebble game are sparse.
Lemma 8 (Pebble-game graphs are sparse [12]). Let H be a graph constructed with the pebble game. Then H is sparse. If there are exactly ` pebbles on V(H), then H is tight.
The main step in proving that every sparse graph is a pebble-game graph is the following. Recall that by bringing a pebble to v we mean reorienting H with pebble-slide moves to reduce the out-degree of v by one.
Lemma 9 (The ` + 1 pebble condition [12]). Let vw be an edge such that H + vw is sparse. If peb({v,w}) < ` + 1, then a pebble not on {v,w} can be brought to v or w.
It follows that any sparse graph has a pebble game construction.
Theorem 1 (Sparse graphs and pebble-game graphs coincide). A graph G is (k, `)-sparse with 0 ≤ ` ≤ 2k − 1 if and only if G is a pebble-game graph.
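Theorem 1 together with Lemma 9 yields the standard algorithmic sparsity test: before inserting each candidate edge, try to gather ` + 1 pebbles on its endpoints by reversing directed paths, and reject the edge if this fails. The sketch below plays an uncolored (k, `)-pebble game in this way; it is our own simplification for simple graphs (the paper's colored game additionally maintains the decomposition), and the function names are illustrative.

```python
def pebble_game_sparse(n, edges, k, ell):
    """Decide whether the simple graph on vertices 0..n-1 is (k, ell)-sparse
    by inserting edges one at a time, gathering ell + 1 pebbles on each
    edge's endpoints first (Lemma 9)."""
    peb = [k] * n                    # free pebbles on each vertex
    out = [set() for _ in range(n)]  # out[v] = heads of directed edges leaving v

    def bring_pebble(root, other):
        # DFS along directed edges for a free pebble not on {root, other};
        # reversing the path to root is a sequence of pebble-slide moves
        seen, prev, stack = {root, other}, {}, [root]
        while stack:
            x = stack.pop()
            for y in out[x]:
                if y in seen:
                    continue
                seen.add(y)
                prev[y] = x
                if peb[y] > 0:
                    peb[y] -= 1
                    while y != root:           # reverse root -> ... -> y
                        out[prev[y]].remove(y)
                        out[y].add(prev[y])
                        y = prev[y]
                    peb[root] += 1
                    return True
                stack.append(y)
        return False

    for v, w in edges:
        while peb[v] + peb[w] < ell + 1:
            if not (bring_pebble(v, w) or bring_pebble(w, v)):
                return False         # adding vw would violate sparsity
        tail, head = (v, w) if peb[v] > 0 else (w, v)
        peb[tail] -= 1               # the consumed pebble covers the new edge
        out[tail].add(head)
    return True
```

On K4 this accepts the (2,2) counts but rejects the sixth edge under (2,3), matching the brute-force check; a triangle is likewise rejected as a (1,1)-sparse candidate while a path is accepted.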
6. The pebble-game-with-colors decomposition
In this section we prove Theorem 2, which characterizes all pebble-game decompositions. We start with the following lemmas about the structure of monochromatic connected components in H, the directed graph maintained during the pebble game.
Lemma 10 (Monochromatic pebble-game subgraphs are (1,0)-sparse). Let Hi be the subgraph of H induced by the edges with pebbles of color ci on them. Then Hi is (1,0)-sparse, for i = 1, ..., k.
Proof. By (I4), Hi is a set of edges with out-degree at most one at every vertex.
Lemma 11 (Tree-pieces in a pebble-game graph). Every subgraph of the directed graph H in a pebble game construction contains at least ` monochromatic tree-pieces, each of which is rooted either at a vertex with a pebble on it or at a vertex that is the tail of an out-edge.
Recall that an out-edge of a subgraph H′ = (V′,E′) is an edge vw with v ∈ V′ and vw ∉ E′.
Proof. Let H′ = (V′,E′) be a non-empty subgraph of H, and assume without loss of generality that H′ is induced by V′. By (I3), out(V′) + peb(V′) ≥ `. We will show that every pebble and out-edge tail is the root of a tree-piece.
Consider a vertex v ∈ V′ and a color ci. By (I4) there is a unique monochromatic directed path of color ci starting at v. By (I5), if this path ends at a pebble, it does not have a cycle. Similarly, if this path reaches a vertex that is the tail of an out-edge also colored ci (i.e., if the monochromatic path from v leaves V′), then the path cannot have a cycle in H′.
Since this argument works for every vertex in every color, for each color there is a partition of the vertices into those that can reach each pebble, out-edge tail, or cycle. It follows that each pebble and out-edge tail is the root of a monochromatic tree, as desired.
Applied to the whole graph, Lemma 11 gives us the following.
Lemma 12 (Pebbles are the roots of trees). In any pebble game configuration, every pebble of color ci is the root of a (possibly empty) monochromatic tree-piece of color ci.
Remark: Haas showed in [7] that in an `Tk, a subgraph induced by n′ ≥ 2 vertices with m′ edges has exactly kn′ − m′ tree-pieces in it. Lemma 11 strengthens Haas's result by extending it to the lower range and giving a construction that finds the tree-pieces, showing the connection between the ` + 1 pebble condition and the hereditary condition on proper `Tk s.
We conclude our investigation of arbitrary pebble game constructions with a description of the decomposition induced by the pebble game with colors.
Theorem 2 (The pebble-game-with-colors decomposition). A graph G is a pebble-game graph if and only if it admits a decomposition into k edge-disjoint subgraphs such that each is (1,0)-sparse and every subgraph of G contains at least ` tree-pieces of the (1,0)-sparse graphs in the decomposition.
Proof. Let G be a pebble-game graph. The existence of the k edge-disjoint (1,0)-sparse subgraphs was shown in Lemma 10, and Lemma 11 proves the condition on subgraphs.
For the other direction, we observe that a color ci with ti tree-pieces in a given subgraph on n′ vertices can span at most n′ − ti edges; summing over all the colors shows that a graph with a pebble-game decomposition must be sparse. Apply Theorem 1 to complete the proof.
Remark: We observe that a pebble-game decomposition for a Laman graph may be read off from the bipartite matching used in Hendrickson's Laman graph extraction algorithm [9]. Indeed, pebble-game orientations have a natural correspondence with the bipartite matchings used in
Maps-and-trees are a special case of pebble-game decompositions for tight graphs: if there are no cycles in ` of the colors, then the trees rooted at the corresponding ` pebbles must be spanning, since they have n − 1 edges. Additionally, if every color forms a forest in an upper range pebble-game decomposition, then the tree-pieces condition ensures that the pebble-game decomposition is a proper `Tk.
In the next section, we show that the pebble game can be specialized to correspond to maps-and-trees and proper `Tk decompositions.
7. Canonical Pebble Game Constructions
In this section we prove the main theorems (Theorem 3 and Theorem 4), continuing the investigation of the decompositions induced by pebble game constructions by studying the case in which a minimum number of monochromatic cycles is created. The main idea, captured in Lemma 15 and illustrated in Figure 6, is to avoid creating cycles while collecting pebbles. We show that this is always possible, which implies that monochromatic map-graphs are created only when more than k(n′ − 1) edges are added to some set of n′ vertices. For the lower range, this implies that every color is a forest. Every decomposition characterization of tight graphs discussed above follows immediately from the main theorem, giving new proofs of the previous results in a unified framework.
In the proof, we will use two specializations of the pebble game moves. The first is a modification of the add-edge move.
Canonical add-edge: When performing an add-edge move, cover the new edge with a color that is present on both vertices if possible. If not, then take the highest-numbered color present.
The second is a restriction on which pebble-slide moves we allow.
Canonical pebble-slide: A pebble-slide move is allowed only when it does not create a monochromatic cycle.
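The color choice in the canonical add-edge move can be stated as a one-line rule. The helper below is our own illustration (the name and the tie-breaking choice among shared colors are assumptions; the rule only requires some shared color when one exists):

```python
def canonical_edge_color(pebbles_v, pebbles_w):
    """Given the sets of pebble colors on the two endpoints, return the color
    a canonical add-edge move uses: a color present on both vertices if
    possible, otherwise the highest-numbered color present."""
    shared = pebbles_v & pebbles_w
    if shared:
        return min(shared)  # any shared color is allowed; pick deterministically
    return max(pebbles_v | pebbles_w)
```

For example, with colors {0, 2} on one endpoint and {2, 3} on the other, the shared color 2 is chosen; with disjoint sets {0} and {1, 3}, the highest-numbered color 3 is used.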
We call a pebble game construction that uses only these moves canonical. In this section we will show that every pebble-game graph has a canonical pebble game construction (Lemma 14 and Lemma 15) and that canonical pebble game constructions correspond to `Tk and maps-and-trees decompositions (Theorem 3 and Theorem 4).
We begin with a technical lemma that motivates the definition of canonical pebble game constructions. It shows that the situations disallowed by the canonical moves are all the ways in which cycles can form in the lowest-numbered colors.
Lemma 13 (Monochromatic cycle creation). Let v ∈ V have a pebble p of color ci on it, and let w be a vertex in the same tree of color ci as v. A monochromatic cycle of color ci is created in exactly one of the following ways:
(M1) The edge vw is added with an add-edge move.
(M2) The edge wv is reversed by a pebble-slide move, and the pebble p is used to cover the reversed edge vw.
Proof. Observe that the preconditions in the statement of the lemma are implied by Lemma 7. By Lemma 12, monochromatic cycles form when the last pebble of color ci is removed from a connected monochromatic subgraph. (M1) and (M2) are the only ways to do this in a pebble game construction, since the color of an edge changes only when it is first inserted or when a new pebble is put on it by a pebble-slide move.
Fig. 5. Creating monochromatic cycles in a (2,0)-pebble game. (a) A move of type (M1) creates a cycle by
adding a black edge. (b) A move of type (M2) creates a cycle with a pebble slide move. The vertices are
labeled according to their role in the definition of the moves.
Figure 5(a) and Figure 5(b) show examples of the map-creating moves (M1) and (M2),
respectively, in a (2,0)-pebble game construction.
We next show that if a graph has a pebble game construction, then it has a canonical peb-
ble game construction. This is done in two steps, considering the cases (M1) and (M2) sepa-
rately. The proof gives two constructions that implement the canonical add-edge move and the
canonical pebble slide move.
Lemma 14 (The canonical add-edge move). Let G be a graph with a pebble game construc-
tion. Cycle-creation steps of type (M1) can be eliminated in colors ci for 1 ≤ i ≤ ℓ′, where
ℓ′ = min{k, ℓ}.
Proof. For add-edge moves, cover the edge with a color present on both v and w if possible. If
this is not possible, then there are ℓ + 1 distinct colors present. Use the highest-numbered color
to cover the new edge.
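The color-selection rule of the canonical add-edge move is easy to state as code. The following Python helper is a hypothetical illustration (the sets of pebble colors at the two endpoints are assumed to be maintained elsewhere); it picks a color common to both endpoints when one exists and otherwise falls back to the highest-numbered color present. The tie-break among common colors is arbitrary.

```python
def canonical_cover_color(pebble_colors_v, pebble_colors_w):
    """Choose the color for a canonical add-edge move.  Inputs are the
    sets of colors (numbered 1..k) of pebbles currently on the two
    endpoints; at least one pebble must be present overall."""
    common = pebble_colors_v & pebble_colors_w
    if common:
        return min(common)      # any common color works; pick one arbitrarily
    # no repeated color: take the highest-numbered color present
    return max(pebble_colors_v | pebble_colors_w)
```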
Remark: We note that in the upper range there is always a repeated color, so no canonical
add-edge moves create cycles in the upper range.
The canonical pebble slide move is defined by a global condition. To show that we obtain
the same class of graphs using only canonical moves, we need to extend Lemma
9 to canonical moves only. The main step is to show that if there is any sequence of moves that
reorients a path from v to w, then there is a sequence of canonical moves that does the same
thing.
Lemma 15 (The canonical pebble slide move). Any sequence of pebble slide moves leading
to an add-edge move can be replaced with one that has no (M2) steps and allows the same
add-edge move.
In other words, if it is possible to collect ℓ + 1 pebbles on the ends of an edge to be added,
then it is possible to do so without creating any monochromatic cycles.
Figure 7 and Figure 8 illustrate the construction used in the proof of Lemma 15. We call this
the shortcut construction, by analogy to the matroid union and intersection augmenting paths used
in previous work on the lower range.
Figure 6 shows the structure of the proof. The shortcut construction removes an (M2) step
at the beginning of a sequence that reorients a path from v to w with pebble slides. Since one
application of the shortcut construction reorients a simple path from a vertex w′ to w, and a
path from v to w′ is preserved, the shortcut construction can be applied inductively to find the
sequence of moves we want.
Fig. 6. Outline of the shortcut construction: (a) An arbitrary simple path from v to w, with curved lines
indicating simple paths. (b) An (M2) step. The black edge, about to be flipped, would create a cycle,
shown in dashed and solid gray, of the (unique) gray tree rooted at w. The solid gray edges were part
of the original path from (a). (c) The shortened path to the gray pebble; the new path follows the gray
tree all the way from the first time the original path touched the gray tree at w′. The path from v to w′ is
simple, and the shortcut construction can be applied inductively to it.
Proof. Without loss of generality, we may assume that our sequence of moves reorients a simple
path in H, and that the first move (the end of the path) is (M2). The (M2) step moves a pebble
of color ci from a vertex w onto the edge vw, which is reversed. Because the move is (M2), v
and w are contained in a maximal monochromatic tree of color ci. Call this tree H′i, and observe
that it is rooted at w.
Now consider the edges reversed in our sequence of moves. As noted above, before we make
any of the moves, these sketch out a simple path in H ending at w. Let z be the first vertex of
this path in H′i. We modify our sequence of moves as follows: delete, from the beginning, every
move before the one that reverses some edge yz; prepend onto what is left a sequence of moves
that moves the pebble on w to z in H′i.
Fig. 7. Eliminating (M2) moves: (a) an (M2) move; (b) avoiding the (M2) by moving along another path.
The path along which the pebbles move is indicated by doubled lines.
Fig. 8. Eliminating (M2) moves: (a) the first step to move the black pebble along the doubled path is
(M2); (b) avoiding the (M2) and simplifying the path.
Since no edges change color at the beginning of the new sequence, we have eliminated
the (M2) move. Because our construction does not change any of the edges involved in the
remaining tail of the original sequence, the part of the original path that remains in the new
sequence is still a simple path in H, meeting our initial hypothesis.
The rest of the lemma follows by induction.
Together, Lemma 14 and Lemma 15 prove the following.
Lemma 16. If G is a pebble-game graph, then G has a canonical pebble game construction.
Using canonical pebble game constructions, we can identify the tight pebble-game graphs
with maps-and-trees and ℓTk graphs.
Theorem 3 (Main Theorem (Maps-and-trees coincide with pebble-game
graphs)). Let 0 ≤ ℓ ≤ k. A graph G is a tight pebble-game graph if and only if G is a (k, ℓ)-
maps-and-trees.
Proof. As observed above, a maps-and-trees decomposition is a special case of the pebble game
decomposition. Applying Theorem 2, we see that any maps-and-trees must be a pebble-game
graph.
For the reverse direction, consider a canonical pebble game construction of a tight graph.
From Lemma 8, we see that ℓ pebbles remain on G at the end of the construction. The
definition of the canonical add-edge move implies that there must be at least one pebble of
each color ci for i = 1, 2, ..., ℓ. It follows that there is exactly one of each of these colors. By Lemma
12, each of these pebbles is the root of a monochromatic tree-piece with n − 1 edges, giving
the required edge-disjoint spanning trees.
Corollary 5 (Nash-Williams [17], Tutte [23], White and Whiteley [24]). Let ℓ = k. A graph
G is tight if and only if it has a (k, k)-maps-and-trees decomposition.
Next we consider the decompositions induced by canonical pebble game constructions when
ℓ ≥ k + 1.
Theorem 4 (Main Theorem (Proper ℓTk coincide with pebble-game
graphs)). Let k + 1 ≤ ℓ ≤ 2k − 1. A graph G is a tight pebble-game graph if and only if
it is a proper ℓTk with kn − ℓ edges.
Proof. As noted above, a proper ℓTk decomposition must be sparse. What we need to show
is that a canonical pebble game construction of a tight graph produces a proper ℓTk.
By Theorem 2 and Lemma 16, we already have the condition on the tree-pieces and the decom-
position into ℓ edge-disjoint trees. Finally, an application of (I4) shows that every vertex is
in exactly k of the trees, as required.
Corollary 6 (Crapo [2], Haas [7]). Let k + 1 ≤ ℓ ≤ 2k − 1. A graph G is tight if and only if it is a
proper ℓTk.
8. Pebble game algorithms for finding decompositions
A naïve implementation of the constructions in the previous section leads to an algorithm re-
quiring Θ(n^2) time to collect each pebble in a canonical construction: in the worst case, Θ(n)
applications of the construction in Lemma 15, each requiring Θ(n) time, give a total running
time of Θ(n^3) for the decomposition problem.
In this section, we describe algorithms for the decomposition problem that run in time
O(n^2). We begin with the overall structure of the algorithm.
Algorithm 17 (The canonical pebble game with colors).
Input: A graph G.
Output: A pebble-game graph H.
Method:
– Set V(H) = V(G) and place one pebble of each color on the vertices of H.
– For each edge vw ∈ E(G), try to collect at least ℓ + 1 pebbles on v and w using pebble slide
moves as described by Lemma 15.
– If at least ℓ + 1 pebbles can be collected, add vw to H using an add-edge move as in Lemma
14; otherwise discard vw.
– Finally, return H and the locations of the pebbles.
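To make the accept/reject structure of Algorithm 17 concrete, here is a simplified Python sketch of the underlying (k, ℓ)-pebble game. It is not the paper's algorithm verbatim: it tracks only pebble counts and edge orientations, omitting the colors and the canonical bookkeeping, so it certifies sparsity but does not output the decomposition, and the path-reversal searches are plain DFS rather than the canonical paths of Lemma 15.

```python
from collections import defaultdict

def pebble_game(n, edge_list, k, l):
    """Simplified (k, l)-pebble game on vertices 0..n-1: returns the
    accepted edges, i.e. the edges of a maximum (k, l)-sparse subgraph.
    Self-loops are not handled."""
    pebbles = [k] * n                    # free pebbles per vertex
    out = defaultdict(set)               # accepted edges, directed

    def collect(v, w):
        # Raise pebbles[v] + pebbles[w] to l + 1 by reversing directed
        # paths that end at a vertex holding a free pebble.
        while pebbles[v] + pebbles[w] < l + 1:
            found = None
            for root in (v, w):
                seen, stack = {v, w}, [(root, [])]
                while stack and found is None:
                    x, path = stack.pop()
                    for y in out[x]:
                        if y in seen:
                            continue
                        seen.add(y)
                        if pebbles[y] > 0:
                            found = (root, path + [(x, y)])
                            break
                        stack.append((y, path + [(x, y)]))
                if found:
                    break
            if found is None:
                return False             # fewer than l + 1 reachable pebbles
            root, path = found
            for a, b in path:            # reverse the path, which moves the
                out[a].discard(b)        # free pebble back to `root`
                out[b].add(a)
            pebbles[path[-1][1]] -= 1
            pebbles[root] += 1
        return True

    accepted = []
    for v, w in edge_list:
        if collect(v, w):
            src, dst = (v, w) if pebbles[v] > 0 else (w, v)
            pebbles[src] -= 1            # cover the new edge with a pebble
            out[src].add(dst)
            accepted.append((v, w))
    return accepted
```

For example, with k = 2, ℓ = 3 the triangle K3 (tight: 3 = 2·3 − 3 edges) is accepted in full, while K4 has one of its six edges rejected, since 2·4 − 3 = 5.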
Correctness. Theorem 1 and the result of [24] that the sparse graphs are the independent
sets of a matroid show that H is a maximum-size sparse subgraph of G. Since the construction
found is canonical, the main theorem shows that the coloring of the edges of H gives a maps-
and-trees or proper ℓTk decomposition.
Complexity. We start by observing that the running time of Algorithm 17 is the time taken to
process O(n) edges added to H and O(m) edges not added to H. We first consider the cost of an
edge of G that is added to H.
Each of the pebble game moves can be implemented in constant time. What remains is to
describe an efficient way to find and move the pebbles. We use the following algorithm as a
subroutine of Algorithm 17 to do this.
Algorithm 18 (Finding a canonical path to a pebble).
Input: Vertices v and w, and a pebble game configuration on a directed graph H.
Output: ‘yes’ if a pebble was found, and ‘no’ otherwise. The configuration of H is updated.
Method:
– Start by doing a depth-first search from v in H. If no pebble is found, stop and
return ‘no.’
– Otherwise a pebble was found. We now have a path v = v1, e1, ..., ep−1, vp = u, where the vi
are vertices and ei is the edge vivi+1. Let c[ei] be the color of the pebble on ei. We will use
the array c[·] to keep track of the colors of pebbles on vertices and edges after moving them,
and the array s[·] to sketch out a canonical path from v to u by finding a successor for each
edge.
– Set s[u] = ‘end’ and set c[u] to the color of an arbitrary pebble on u. We walk on the path in
reverse order: vp, ep−1, ep−2, ..., e1, v1. For each i, check whether c[vi] is set; if it is, go on to
the next i. Otherwise, check whether c[vi+1] = c[ei].
– If it is, set s[vi] = ei and c[vi] = c[ei], and go on to the next edge.
– Otherwise c[vi+1] ≠ c[ei]; try to find a monochromatic path in color c[vi+1] from vi to vi+1. If
a vertex x is found for which c[x] is set, we have a path vi = x1, f1, x2, ..., fq−1, xq = x
that is monochromatic in the color of its edges; set c[xi] = c[fi] and s[xi] = fi for i =
1, 2, ..., q − 1. If c[x] = c[fq−1], stop. Otherwise, recursively check that there is no monochro-
matic c[x]-colored path from xq−1 to x using this same procedure.
– Finally, slide pebbles along the path from the original endpoints v to u specified by the
successor array s[v], s[s[v]], ...
The correctness of Algorithm 18 comes from the fact that it implements the shortcut
construction. Efficiency comes from the fact that instead of potentially moving the pebble back
and forth, Algorithm 18 pre-computes a canonical path crossing each edge of H at most three
times: once in the initial depth-first search, and twice while converting the initial path to a
canonical one. It follows that each accepted edge takes O(n) time, for a total of O(n^2) time
spent processing the edges added to H.
Although we have not discussed this explicitly, for the algorithm to be efficient we need to
maintain components as in [12]. After each accepted edge, the components of H can be updated
in time O(n). Finally, the results of [12, 13] show that the rejected edges take amortized O(1)
time each.
Summarizing, we have shown that the canonical pebble game with colors solves the decom-
position problem in time O(n^2).
9. An important special case: Rigidity in dimension 2 and slider-pinning
In this short section we present a new application of the special case of practical importance,
k = 2, ℓ = 3. As explained in the introduction, Laman's theorem [11] characterizes minimally
rigid graphs as the (2,3)-tight graphs. In recent work on slider pinning, developed after the
current paper was submitted, we introduced the slider-pinning model of rigidity [15, 20]. Com-
binatorially, we model bar-slider frameworks as simple graphs together with some loops
placed on their vertices in such a way that there are at most 2 loops per vertex, one of each
color.
We characterize the minimally rigid bar-slider graphs [20] as graphs that are:
1. (2,3)-sparse for subgraphs containing no loops.
2. (2,0)-tight when loops are included.
We call these graphs (2,0,3)-graded-tight; they are a special case of the graded-sparse
graphs studied in our paper [14].
The connection with the pebble games of this paper is the following.
Corollary 19 (Pebble games and slider-pinning). In any (2,3)-pebble game graph, if we
replace the pebbles by loops, we obtain a (2,0,3)-graded-tight graph.
Proof. Follows from invariant (I3) of Lemma 7.
In [15], we study a special case of slider pinning in which every slider is either vertical or
horizontal. We model the sliders as pre-colored loops, with the color indicating the x or y direction.
For this axis-parallel slider case, the minimally rigid graphs are characterized as those:
1. (2,3)-sparse for subgraphs containing no loops.
2. Admitting a 2-coloring of the edges such that each color is a forest (i.e., has no cycles), and each
monochromatic tree spans exactly one loop of its color.
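Condition 2 is straightforward to verify mechanically. The sketch below is illustrative only; the input encoding of colored edges and loops is an assumption, not the paper's. It uses union-find to check that each color class is a forest and that every monochromatic tree, including single-vertex trees, spans exactly one loop of its color.

```python
from collections import defaultdict

def valid_slider_coloring(n, colored_edges, loops):
    """Check the 2-coloring condition above on vertices 0..n-1.
    `colored_edges` maps a color to its list of (non-loop) edges;
    `loops` maps a color to the vertices carrying a loop of that color."""
    for c, edges in colored_edges.items():
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in edges:
            ra, rb = find(a), find(b)
            if ra == rb:
                return False        # a cycle: this color class is not a forest
            parent[ra] = rb
        loop_count = defaultdict(int)
        for v in loops.get(c, ()):
            loop_count[find(v)] += 1
        # every monochromatic tree must span exactly one loop of its color
        if any(loop_count[find(v)] != 1 for v in range(n)):
            return False
    return True
```

For instance, a single bar on two vertices colored x, with an x-slider on one endpoint and a y-slider on each endpoint, satisfies the condition.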
This also has an interpretation in terms of colored pebble games.
Corollary 20 (The pebble game with colors and slider-pinning). In any canonical (2,3)-
pebble-game-with-colors graph, if we replace the pebbles by loops of the same color, we obtain the
graph of a minimally pinned axis-parallel bar-slider framework.
Proof. Follows from Theorem 4 and Lemma 12.
10. Conclusions and open problems
We presented a new characterization of (k, ℓ)-sparse graphs, the pebble game with colors, and
used it to give an efficient algorithm for finding decompositions of sparse graphs into edge-
disjoint trees. Our algorithm finds such sparsity-certifying decompositions in the upper range
and runs in time O(n^2), which is as fast as the algorithms for recognizing sparse graphs in the
upper range from [12].
We also used the pebble game with colors to describe a new sparsity-certifying decompo-
sition that applies to the entire matroidal range of sparse graphs.
We defined and studied a class of canonical pebble game constructions that correspond to
either a maps-and-trees or a proper ℓTk decomposition. This gives a new proof of the Tutte-Nash-
Williams arboricity theorem and a unified proof of the previously studied decomposition cer-
tificates of sparsity. Canonical pebble game constructions also show the relationship between
the ℓ + 1 pebble condition, which applies to the upper range of ℓ, and matroid union augmenting
paths, which do not apply in the upper range.
Algorithmic consequences and open problems. In [6], Gabow and Westermann give an O(n^(3/2))
algorithm for recognizing sparse graphs in the lower range and extracting sparse subgraphs from
dense ones. Their technique is based on efficiently finding matroid union augmenting paths,
which extend a maps-and-trees decomposition. The O(n^(3/2)) algorithm uses two subroutines to
find augmenting paths: cyclic scanning, which finds augmenting paths one at a time, and batch
scanning, which finds groups of disjoint augmenting paths.
We observe that Algorithm 17 can be used to replace cyclic scanning in Gabow and Wester-
mann's algorithm without changing the running time. The data structures used in the implementation
of the pebble game, detailed in [12, 13], are simpler and easier to implement than those
used to support cyclic scanning.
The two main open algorithmic problems related to the pebble game are then:
Problem 1. Develop a pebble game algorithm with the properties of batch scanning, and obtain
an implementable O(n^(3/2)) algorithm for the lower range.
Problem 2. Extend batch scanning to the ℓ + 1 pebble condition, and derive an O(n^(3/2)) pebble
game algorithm for the upper range.
In particular, it would be of practical importance to find an implementable O(n^(3/2)) algorithm
for decompositions into edge-disjoint spanning trees.
References
1. Berg, A.R., Jordán, T.: Algorithms for graph rigidity and scene analysis. In: Proc. 11th
European Symposium on Algorithms (ESA ’03), LNCS, vol. 2832, pp. 78–89 (2003)
2. Crapo, H.: On the generic rigidity of plane frameworks. Tech. Rep. 1278, Institut de
recherche d’informatique et d’automatique (1988)
3. Edmonds, J.: Minimum partition of a matroid into independent subsets. J. Res. Nat. Bur.
Standards Sect. B 69B, 67–72 (1965)
4. Edmonds, J.: Submodular functions, matroids, and certain polyhedra. In: Combinatorial
Optimization: Eureka, You Shrink!, no. 2570 in LNCS, pp. 11–26. Springer (2003)
5. Gabow, H.N.: A matroid approach to finding edge connectivity and packing arborescences.
Journal of Computer and System Sciences 50, 259–273 (1995)
6. Gabow, H.N., Westermann, H.H.: Forests, frames, and games: Algorithms for matroid sums
and applications. Algorithmica 7(1), 465–497 (1992)
7. Haas, R.: Characterizations of arboricity of graphs. Ars Combinatoria 63, 129–137 (2002)
8. Haas, R., Lee, A., Streinu, I., Theran, L.: Characterizing sparse graphs by map decompo-
sitions. Journal of Combinatorial Mathematics and Combinatorial Computing 62, 3–11
(2007)
9. Hendrickson, B.: Conditions for unique graph realizations. SIAM Journal on Computing
21(1), 65–84 (1992)
10. Jacobs, D.J., Hendrickson, B.: An algorithm for two-dimensional rigidity percolation: the
pebble game. Journal of Computational Physics 137, 346–365 (1997)
11. Laman, G.: On graphs and rigidity of plane skeletal structures. Journal of Engineering
Mathematics 4, 331–340 (1970)
12. Lee, A., Streinu, I.: Pebble game algorithms and sparse graphs. Discrete Mathematics
308(8), 1425–1437 (2008)
13. Lee, A., Streinu, I., Theran, L.: Finding and maintaining rigid components. In: Proc. Canadian
Conference of Computational Geometry. Windsor, Ontario (2005). http://cccg.
cs.uwindsor.ca/papers/72.pdf
14. Lee, A., Streinu, I., Theran, L.: Graded sparse graphs and matroids. Journal of Universal
Computer Science 13(10) (2007)
15. Lee, A., Streinu, I., Theran, L.: The slider-pinning problem. In: Proceedings of the 19th
Canadian Conference on Computational Geometry (CCCG’07) (2007)
16. Lovász, L.: Combinatorial Problems and Exercises. Akadémiai Kiadó and North-Holland,
Amsterdam (1979)
17. Nash-Williams, C.S.A.: Decomposition of finite graphs into forests. Journal of the London
Mathematical Society 39, 12 (1964)
18. Oxley, J.G.: Matroid theory. The Clarendon Press, Oxford University Press, New York
(1992)
19. Roskind, J., Tarjan, R.E.: A note on finding minimum-cost edge-disjoint spanning trees.
Mathematics of Operations Research 10(4), 701–708 (1985)
20. Streinu, I., Theran, L.: Combinatorial genericity and minimal rigidity. In: SCG ’08: Pro-
ceedings of the 24th annual Symposium on Computational Geometry, pp. 365–
374. ACM, New York, NY, USA (2008)
21. Tay, T.S.: Rigidity of multigraphs I: linking rigid bodies in n-space. Journal of Combinato-
rial Theory, Series B 26, 95–112 (1984)
22. Tay, T.S.: A new proof of Laman’s theorem. Graphs and Combinatorics 9, 365–370 (1993)
23. Tutte, W.T.: On the problem of decomposing a graph into n connected factors. Journal of the
London Mathematical Society 142, 221–230 (1961)
24. Whiteley, W.: The union of matroids and the rigidity of frameworks. SIAM Journal on
Discrete Mathematics 1(2), 237–255 (1988)
Introduction and preliminaries
Historical background
The pebble game with colors
Our results
Pebble-game graphs
The pebble-game-with-colors decomposition
Canonical pebble game constructions
Pebble game algorithms for finding decompositions
An important special case: Rigidity in dimension 2 and slider-pinning
Conclusions and open problems
|
704
| The evolution of the Earth-Moon system based on the dark matter field
fluid model
| The evolution of the Earth-Moon system is described by the dark matter field
fluid model proposed in the Meeting of Division of Particle and Field 2004,
American Physical Society. The current behavior of the Earth-Moon system agrees
with this model very well and the general pattern of the evolution of the
Moon-Earth system described by this model agrees with geological and fossil
evidence. The closest distance of the Moon to Earth was about 259000 km 4.5
billion years ago, which is far beyond the Roche limit. The result suggests
that the tidal friction may not be the primary cause for the evolution of the
Earth-Moon system. The average dark matter field fluid constant derived from
Earth-Moon system data is 4.39 x 10^(-22) s^(-1)m^(-1). This model predicts
that Mars's rotation is also slowing, with an angular acceleration rate of
about -4.38 x 10^(-22) rad s^(-2).
| The evolution of the Earth-Moon system based on the dark fluid model
The evolution of the Earth-Moon system based on
the dark matter field fluid model
Hongjun Pan
Department of Chemistry
University of North Texas, Denton, Texas 76203, U. S. A.
Abstract
The evolution of the Earth-Moon system is described by the dark matter field fluid
model with a non-Newtonian approach proposed in the Meeting of Division of Particle
and Field 2004, American Physical Society. The current behavior of the Earth-Moon
system agrees with this model very well and the general pattern of the evolution of the
Moon-Earth system described by this model agrees with geological and fossil evidence.
The closest distance of the Moon to Earth was about 259000 km 4.5 billion years ago,
which is far beyond the Roche limit. The result suggests that the tidal friction may not
be the primary cause for the evolution of the Earth-Moon system. The average dark
matter field fluid constant derived from Earth-Moon system data is 4.39 × 10^(-22) s^(-1) m^(-1).
This model predicts that Mars's rotation is also slowing, with an angular acceleration
rate of about -4.38 × 10^(-22) rad s^(-2).
Key Words. dark matter, fluid, evolution, Earth, Moon, Mars
1. Introduction
The popularly accepted theory for the formation of the Earth-Moon system is that
the Moon was formed from debris of a strong impact by a giant planetesimal with the
Earth at the close of the planet-forming period (Hartmann and Davis 1975). Since the
formation of the Earth-Moon system, it has been evolving at all time scales. It is well
known that the Moon is receding from us and both the Earth’s rotation and Moon’s
rotation are slowing. The popular theory is that the tidal friction causes all those changes
based on the conservation of the angular momentum of the Earth-Moon system. The
situation becomes complicated in describing the past evolution of the Earth-Moon
system. Because the Moon is moving away from us and the Earth's rotation is slowing, this
means that the Moon was closer and the Earth's rotation was faster in the past. Creationists
argue that based on the tidal friction theory, the tidal friction should be stronger and the
recessional rate of the Moon should be greater in the past, so the distance of the Moon
would quickly fall inside the Roche limit (for Earth, 15500 km), in which the Moon
would have been torn apart by gravity 1 to 2 billion years ago. However, geological evidence
indicates that the recession of the Moon in the past was slower than the present rate, i. e.,
the recession has been accelerating with time. Therefore, it must be concluded that tidal
friction was very much less in the remote past than we would deduce on the basis of
present-day observations (Stacey 1977). This was called “geological time scale
difficulty” or “Lunar crisis” and is one of the main arguments by creationists against the
tidal friction theory (Brush 1983).
But we have to consider the case carefully in various aspects. One possible
scenario is that the Earth has been undergoing dynamic evolution at all time scale since
its inception, the geological and physical conditions (such as the continent positions and
drifting, the crust, surface temperature fluctuation like the glacial/snowball effect, etc) at
remote past could be substantially different from currently, in which the tidal friction
could be much less; therefore, the receding rate of the Moon could be slower. Various
tidal friction models were proposed in the past to describe the evolution of the Earth-
Moon system to avoid such difficulty or crisis and put the Moon at quite a comfortable
distance from Earth at 4.5 billion years ago (Hansen 1982, Kagan and Maslova 1994, Ray
et al. 1999, Finch 1981, Slichter 1963). The tidal friction theories explain that the present
rate of tidal dissipation is anomalously high because the tidal force is close to a resonance
in the response function of ocean (Brush 1983). Kagan gave a detailed review about those
tidal friction models (Kagan 1997). Those models are based on many assumptions about
geological (continental position and drifting) and physical conditions in the past, and
many parameters (such as phase lag angle, multi-mode approximation with time
dependent frequencies of the resonance modes, etc.) have to be introduced and carefully
adjusted to make their predictions close to the geological evidence. However, those
assumptions and parameters are still challenged, to certain extent, as concoction.
The second possible scenario is that another mechanism could dominate the
evolution of the Earth-Moon system and the role of the tidal friction is not significant. In
the Meeting of Division of Particle and Field 2004, American Physical Society,
University of California at Riverside, the author proposed a dark matter field fluid model
(Pan 2005) with a non-Newtonian approach, the current Moon and Earth data agree with
this model very well. This paper will demonstrate that the past evolution of Moon-Earth
system can be described by the dark matter field fluid model without any assumptions
about past geological and physical conditions. Although the subject of the evolution of
the Earth-Moon system has been extensively studied analytically or numerically, to the
author’s knowledge, there are no theories similar or equivalent to this model.
2. Invisible matter
In modern cosmology, it was proposed that the visible matter in the universe is
about 2 ~ 10 % of the total matter and about 90 ~ 98% of total matter is currently
invisible which is called dark matter and dark energy, such invisible matter has an anti-
gravity property to make the universe expanding faster and faster.
If the ratio of the matter components of the universe is close to this hypothesis,
then, the evolution of the universe should be dominated by the physical mechanism of
such invisible matter, such physical mechanism could be far beyond the current
Newtonian physics and Einsteinian physics, and the Newtonian physics and Einsteinian
physics may reflect only the tip of the iceberg of a greater physics.
If the ratio of the matter components of the universe is close to this hypothesis,
then, it should be more reasonable to think that such dominant invisible matter spreads in
everywhere of the universe (the density of the invisible matter may vary from place to
place); in other words, all visible matter objects should be surrounded by such invisible
matter and the motion of the visible matter objects should be affected by the invisible
matter if there are interactions between the visible matter and the invisible matter.
If the ratio of the matter components of the universe is close to this hypothesis,
then, the size of the particles of the invisible matter should be very small and below the
detection limit of current technology; otherwise, it would have been detected long ago,
given such a dominant amount.
With such invisible matter in mind, we move to the next section to develop the
dark matter field fluid model with non-Newtonian approach. For simplicity, all invisible
matter (dark matter, dark energy and possible other terms) is called dark matter here.
3. The dark matter field fluid model
In this proposed model, it is assumed that:
1. A celestial body rotates and moves in the space, which, for simplicity, is uniformly
filled with dark matter that is in a quiescent state relative to the motion of the
celestial body. The dark matter possesses a field property and a fluid property; it can
interact with the celestial body with its fluid and field properties; therefore, it can have
energy exchange with the celestial body, and affect the motion of the celestial body.
2. The fluid property follows the general principle of fluid mechanics. The dark matter
field fluid particles may be so small that they can easily permeate into ordinary
“baryonic” matter; i. e., ordinary matter objects could be saturated with such dark matter
field fluid. Thus, the whole celestial body interacts with the dark matter field fluid, in the
manner of a sponge moving through water. The nature of the field property of the dark matter
field fluid is unknown. It is here assumed that the interaction of the field associated with
the dark matter field fluid with the celestial body is proportional to the mass of the
celestial body. The dark matter field fluid is assumed to have a repulsive force against the
gravitational force towards baryonic matter. The nature and mechanism of this repulsive
force are unknown.
With the assumptions above, one can study how the dark matter field fluid may
influence the motion of a celestial body and compare the results with observations. The
common shape of celestial bodies is spherical. According to Stokes's law, a rigid non-
permeable sphere moving through a quiescent fluid with a sufficiently low Reynolds
number experiences a resistance force F
F = −6πμrv (1)
where v is the moving velocity, r is the radius of the sphere, and μ is the fluid viscosity
constant. The direction of the resistance force F in Eq. 1 is opposite to the direction of the
velocity v. For a rigid sphere moving through the dark matter field fluid, due to the dual
properties of the dark matter field fluid and its permeation into the sphere, the force F
may not be proportional to the radius of the sphere. Also, F may be proportional to the
mass of the sphere due to the field interaction. Therefore, with the combined effects of
both fluid and field, the force exerted on the sphere by the dark matter field fluid is
assumed to be of the scaled form
F = −6πη r^(1−n) m v (2)
where n is a parameter arising from saturation by the dark matter field fluid, r^(1−n) can be
viewed as the effective radius (with the same unit as r), m is the mass of the sphere, and η
is the dark matter field fluid constant, which is equivalent to μ. The direction of the
resistance force F in Eq. 2 is opposite to the direction of the velocity v. The force
described by Eq. 2 is velocity-dependent and causes negative acceleration. According to
Newton's second law of motion, the equation of motion for the sphere is
m dv/dt = −6πη r^(1−n) m v (3)
Then
v = v0 exp(−6πη r^(1−n) t) (4)
where v0 is the initial velocity (t = 0) of the sphere. If the sphere revolves around a
massive gravitational center, there are three forces in the line between the sphere and the
gravitational center: (1) the gravitational force, (2) the centripetal acceleration force; and
(3) the repulsive force of the dark matter field fluid. The drag force in Eq. 3 reduces the
orbital velocity and causes the sphere to move inward to the gravitational center.
However, if the sum of the centripetal acceleration force and the repulsive force is
stronger than the gravitational force, then, the sphere will move outward and recede from
the gravitational center. This is the case of interest here. If the velocity change in Eq. 3 is
sufficiently slow and the repulsive force is small compared to the gravitational force and
centripetal acceleration force, then the rate of receding will be accordingly relatively
slow. Therefore, the gravitational force and the centripetal acceleration force can be
approximately treated in equilibrium at any time. The pseudo equilibrium equation is
GMm/R^2 = mv^2/R (5)
where G is the gravitational constant, M is the mass of the gravitational center, and R is
the radius of the orbit. Inserting v of Eq. 4 into Eq. 5 yields
R = (GM/v0^2) exp(12πη r^(1−n) t) (6)
R = R0 exp(12πη r^(1−n) t) (7)
where
R0 = GM/v0^2 (8)
R0 is the initial distance to the gravitational center. Note that R exponentially increases
with time. The increase of orbital energy with the receding comes from the repulsive
force of dark matter field fluid. The recessional rate of the sphere is
dR/dt = 12πη r^(1−n) R (9)
The acceleration of the recession is
d^2R/dt^2 = (12πη r^(1−n))^2 R . (10)
The recessional acceleration is positive and proportional to its distance to the
gravitational center, so the recession is faster and faster.
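The exponential recession in Eq. 7 follows directly from Eq. 9, since dR/dt is proportional to R. A minimal numerical sketch (with arbitrary illustrative values for R0 and the rate constant, not fitted data) confirms that a crude Euler integration of Eq. 9 reproduces the analytic solution:

```python
import math

# Eq. 9 has the form dR/dt = k*R with k = 12*pi*eta*r**(1-n),
# whose solution is Eq. 7: R(t) = R0 * exp(k*t).
def recession(R0, k, t):
    return R0 * math.exp(k * t)

R0, k = 1.0, 0.1      # arbitrary illustrative values, not fitted data
dt, R = 1e-4, R0
for _ in range(int(10 / dt)):   # Euler integration of dR/dt = k*R up to t = 10
    R += k * R * dt

# The numerical and analytic results agree closely.
assert abs(R - recession(R0, k, 10.0)) / recession(R0, k, 10.0) < 1e-2
```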
According to the mechanics of fluids, for a rigid non-permeable sphere rotating
about its central axis in a quiescent fluid, the torque T exerted by the fluid on the sphere is
T = −8πμ r^3 ω (11)
where ω is the angular velocity of the sphere. The direction of the torque in Eq. 11 is
opposite to the direction of the rotation. In the case of a sphere rotating in the quiescent
dark matter field fluid with angular velocity ω, similar to Eq. 2, the proposed T exerted
on the sphere is
T = −8πη r^(3(1−n)) m ω (12)
The direction of the torque in Eq. 12 is opposite to the direction of the rotation. The
torque causes the negative angular acceleration
dω/dt = T/I (13)
where I is the moment of inertia of the sphere in the dark matter field fluid
I = (2/5) m r^(2(1−n)) (14)
Therefore, the equation of rotation for the sphere in the dark matter field fluid is
dω/dt = −20πη r^(1−n) ω (15)
Solving this equation yields
ω = ω0 exp(−20πη r^(1−n) t) (16)
where ω0 is the initial angular velocity. One can see that the angular velocity of the
sphere exponentially decreases with time and the angular deceleration is proportional to
its angular velocity.
For the same celestial sphere, combining Eq. 9 and Eq. 15 yields
[(1/ω)(dω/dt)] / [(1/R)(dR/dt)] = −20/12 = −1.67 (17)
The significance of Eq. 17 is that it contains only observed data without assumptions and
undetermined parameters; therefore, it is a critical test for this model.
For two different celestial spheres in the same system, combining Eq. 9 and Eq.
15 yields
[(1/ω1)(dω1/dt)] / [(1/R2)(dR2/dt)] × (r2/r1)^(1−n) = −20/12 = −1.67 (18)
This is another critical test for this model.
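Because η and the effective radii cancel, the ratios in Eqs. 17 and 18 reduce to the pure number −20/12 = −5/3. A short check of this cancellation, using arbitrary placeholder values for the constants:

```python
import math

eta, r_eff = 1.0, 1.0   # arbitrary placeholders; they cancel in the ratio
# Eq. 15: (1/w) dw/dt = -20*pi*eta*r**(1-n);  Eq. 9: (1/R) dR/dt = 12*pi*eta*r**(1-n)
ratio_same = (-20 * math.pi * eta * r_eff) / (12 * math.pi * eta * r_eff)
assert abs(ratio_same + 5 / 3) < 1e-12      # Eq. 17: -1.67

# For two spheres, the radii enter as (r2/r1)**(1-n) (Eq. 18); the corrected
# ratio is again -5/3 regardless of the two radii.
n, r1, r2 = 0.64, 6.371e6, 1.738e6
ratio_two = (-20 * math.pi * eta * r1**(1 - n)) / (12 * math.pi * eta * r2**(1 - n))
assert abs(ratio_two * (r2 / r1)**(1 - n) + 5 / 3) < 1e-12
```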
4. The current behavior of the Earth-Moon system agrees with the model
The Moon-Earth system is the simplest gravitational system. The solar system is
complex: the Earth and the Moon experience not only the interaction of the Sun but also
interactions of the other planets. Let us consider the local Earth-Moon gravitational system as
an isolated local gravitational system, i.e., the influence of the Sun and other planets
on the rotation and orbital motion of the Moon and on the rotation of the Earth is
assumed negligible compared to the forces the Moon and the Earth exert on each other.
In addition, the eccentricity of the Moon's orbit is small enough to be ignored. The data
for the Moon and the Earth from the references (Dickey et al., 1994; Lang, 1992) are
listed below for the reader's convenience in verifying the calculation, because the data may
vary slightly between sources.
Moon:
Mean radius: r = 1738.0 km
Mass: m = 7.3483 × 10^25 g
Rotation period = 27.321661 days
Angular velocity of Moon = 2.6617 × 10^-6 rad s^-1
Mean distance to Earth: Rm = 384400 km
Mean orbital velocity: v = 1.023 km s^-1
Orbit eccentricity: e = 0.0549
Angular rotation acceleration rate = −25.88 ± 0.5 arcsec century^-2
= (−1.255 ± 0.024) × 10^-4 rad century^-2
= (−1.260 ± 0.024) × 10^-23 rad s^-2
Receding rate from Earth = 3.82 ± 0.07 cm year^-1 = (1.21 ± 0.02) × 10^-9 m s^-1
Earth:
Mean radius: r = 6371.0 km
Mass: m = 5.9742 × 10^27 g
Rotation period = 23 h 56 m 04.098904 s = 86164.098904 s
Angular velocity of rotation = 7.292115 × 10^-5 rad s^-1
Mean distance to the Sun: Rm = 149,597,870.61 km
Mean orbital velocity: v = 29.78 km s^-1
Angular acceleration of Earth = (−5.5 ± 0.5) × 10^-22 rad s^-2
The Moon's angular rotation acceleration rate and increase in mean distance to the Earth
(receding rate) were obtained from the lunar laser ranging of the Apollo Program (Dickey
et al., 1994). By inserting the data of the Moon's rotation and recession into Eq. 17, the
result is
[(−1.260 ± 0.024) × 10^-23 / (2.6617 × 10^-6)] / [(1.21 ± 0.02) × 10^-9 / (3.92509 × 10^8)] = −1.54 ± 0.039 (19)
The distance R in Eq. 19 is measured from the Moon's center to the Earth's center, with the
number 384400 km taken as the distance from the Moon's surface to the Earth's surface
(giving R = 384400 + 1738 + 6371 = 392509 km). Eq. 19 is in good agreement with the
theoretical value of −1.67; the result is in accord with the model used here. The difference
(about 7.8%) between the values of −1.54 and −1.67 may come from several sources:
1. The Moon's orbit is not a perfect circle.
2. The Moon is not a perfect rigid sphere.
3. Effects from the Sun and other planets.
4. Errors in data.
5. Possible other unknown reasons.
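The left-hand side of Eq. 19 can be reproduced directly from the data listed above; a minimal check, using the center-to-center distance R = 392509 km:

```python
w_moon = 2.6617e-6      # rad s^-1, Moon's angular rotation velocity
dw_dt  = -1.260e-23     # rad s^-2, Moon's angular rotation acceleration
R      = 3.92509e8      # m, center to center (384400 + 1738 + 6371 km)
dR_dt  = 1.21e-9        # m s^-1, Moon's receding rate

lhs = (dw_dt / w_moon) / (dR_dt / R)
assert abs(lhs + 1.54) < 0.01               # Eq. 19 central value
assert abs((lhs + 5 / 3) / (5 / 3)) < 0.08  # ~7.8% off the theoretical -1.67
```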
The two parameters n and η in Eq. 9 and Eq. 15 can be determined from two data
sets; the third data set can then be used to further test the model. If this model correctly
describes the situation at hand, it should give consistent results for different motions. The
values of n and η calculated from the three data sets are listed below (note that the
mean distance of the Moon to the Earth and the mean radii of the Moon and the Earth are
used in the calculation).
The value of n: n = 0.64
From the Moon's rotation: η = 4.27 × 10^-22 s^-1 m^-1
From the Earth's rotation: η = 4.26 × 10^-22 s^-1 m^-1
From the Moon's recession: η = 4.64 × 10^-22 s^-1 m^-1
One can see that the three values of η are consistent within the range of error in the data.
The average value of η: η = (4.39 ± 0.22) × 10^-22 s^-1 m^-1
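The three η values can be reproduced by solving Eq. 15 and Eq. 9 for η with the data listed in this section; a minimal check:

```python
import math

n = 0.64
r_moon, r_earth = 1.738e6, 6.371e6            # m
w_moon, dw_moon = 2.6617e-6, 1.260e-23        # rad s^-1, rad s^-2 (magnitude)
w_earth, dw_earth = 7.292115e-5, 5.5e-22
R, dR = 3.92509e8, 1.21e-9                    # m (center to center), m s^-1

# Eq. 15 solved for eta: eta = |dw/dt| / (20*pi*r**(1-n)*w)
eta_moon_rot  = dw_moon  / (20 * math.pi * r_moon**(1 - n)  * w_moon)
eta_earth_rot = dw_earth / (20 * math.pi * r_earth**(1 - n) * w_earth)
# Eq. 9 solved for eta: eta = (dR/dt) / (12*pi*r**(1-n)*R)
eta_moon_rec  = dR / (12 * math.pi * r_moon**(1 - n) * R)

assert abs(eta_moon_rot  - 4.27e-22) < 0.02e-22
assert abs(eta_earth_rot - 4.26e-22) < 0.02e-22
assert abs(eta_moon_rec  - 4.64e-22) < 0.02e-22
```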
By inserting the data of the Earth's rotation, the Moon’s recession and the value of n into
Eq. 18, the result is
[(−5.5 ± 0.5) × 10^-22 / (7.292115 × 10^-5)] / [(1.21 ± 0.02) × 10^-9 / (3.92509 × 10^8)] × (1738000/6371000)^(1−0.64) = −1.53 ± 0.14 (20)
This is also in accord with the model used here.
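The left-hand side of Eq. 20 can likewise be reproduced from the listed data and n = 0.64; a minimal check:

```python
n = 0.64
# (Earth rotation ratio) / (Moon recession ratio) * (r_moon/r_earth)**(1-n)
lhs = ((-5.5e-22 / 7.292115e-5) / (1.21e-9 / 3.92509e8)) \
      * (1.738e6 / 6.371e6) ** (1 - n)
assert abs(lhs + 1.53) < 0.01   # Eq. 20 central value
```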
The dragging force exerted on the Moon's orbital motion by the dark matter field
fluid is −1.11 × 10^8 N, which is negligibly small compared to the gravitational force
between the Moon and the Earth (~1.90 × 10^20 N); the torques exerted by the dark matter
field fluid on the Earth's and the Moon's rotations are T = −5.49 × 10^16 N m and −1.15 × 10^12 N m,
respectively.
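The quoted drag and gravity magnitudes follow from Eq. 2 and Newtonian gravity with the averaged η; a minimal check (the value G = 6.674 × 10^-11 SI is assumed, as it is not listed in the text):

```python
import math

n, eta = 0.64, 4.39e-22
r_moon, m_moon = 1.738e6, 7.3483e22        # m, kg
v_orb, R = 1.023e3, 3.92509e8              # m s^-1, m (center to center)
G, m_earth = 6.674e-11, 5.9742e24          # SI units (G assumed)

drag    = 6 * math.pi * eta * r_moon**(1 - n) * m_moon * v_orb   # Eq. 2
gravity = G * m_earth * m_moon / R**2

assert abs(drag - 1.11e8) / 1.11e8 < 0.02
assert abs(gravity - 1.90e20) / 1.90e20 < 0.01
assert drag / gravity < 1e-11   # the drag is indeed negligible
```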
5. The evolution of the Earth-Moon system
Sonett et al. found, based on laminated tidal sediments on the Earth, that the length
of the terrestrial day 900 million years ago was about 19.2 hours (Sonett et al., 1996).
According to the model presented here, the length of the day at that time was about
19.2 hours, which agrees very well with Sonett et al.'s result.
Another critical aspect of modeling the evolution of the Earth-Moon system is to
give a reasonable estimate of the closest distance of the Moon to the Earth when the
system was established 4.5 billion years ago. Based on the dark matter field fluid
model and the above results, the closest distance of the Moon to the Earth 4.5 billion
years ago was about 259000 km (center to center) or 250900 km (surface to surface),
which is far beyond the Roche limit. In the modern astronomy textbook by Chaisson and
McMillan (Chaisson and McMillan, 1993, p. 173), the estimated distance at 4.5 billion
years ago was 250000 km; this is probably the number most astronomers consider most
reasonable, and it agrees excellently with the result of this model. The closest
distance of the Moon to the Earth in Hansen's models was about 38 Earth radii, or
242000 km (Hansen, 1982).
According to this model, the length of the day of the Earth was about 8 hours 4.5
billion years ago. Fig. 1 shows the evolution of the Moon's distance from the Earth and the
length of the day of the Earth with the age of the Earth-Moon system as described by this model,
along with data from Kvale et al. (1999), Sonett et al. (1996) and Scrutton (1978). One
can see that those data fit this model very well over their time range.
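The day lengths and the closest distance quoted in this section can be reproduced by running Eqs. 7 and 16 backward in time with the averaged η = 4.39 × 10^-22 and n = 0.64 (a conversion of 3.156 × 10^7 s per year is assumed):

```python
import math

n, eta = 0.64, 4.39e-22
yr = 3.156e7                                    # s per year (assumed conversion)
r_moon, r_earth = 1.738e6, 6.371e6              # m
R_now, w_now = 3.92509e8, 7.292115e-5           # m, rad s^-1

k_R = 12 * math.pi * eta * r_moon**(1 - n)      # Eq. 9 growth constant
k_w = 20 * math.pi * eta * r_earth**(1 - n)     # Eq. 15 decay constant

# Earth's day 900 million years ago (Sonett et al.: about 19.2 h)
day_900 = 2 * math.pi / (w_now * math.exp(k_w * 900e6 * yr)) / 3600
assert abs(day_900 - 19.2) < 0.2

# Moon's distance 4.5 billion years ago (center to center, ~259000 km)
R_then = R_now * math.exp(-k_R * 4.5e9 * yr)
assert abs(R_then - 2.59e8) / 2.59e8 < 0.01

# Earth's day 4.5 billion years ago (~8 h)
day_45 = 2 * math.pi / (w_now * math.exp(k_w * 4.5e9 * yr)) / 3600
assert abs(day_45 - 8.0) < 0.3
```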
Fig. 2 shows the geological data of solar days year^-1 from Wells (1963) and from
Sonett et al. (1996), together with the description (solid line) by this dark matter field fluid
model for the past 900 million years. One can see that the model agrees with the geological
and fossil data beautifully.
An important difference between this model and earlier models in describing the early
evolution of the Earth-Moon system is that this model is based only on current data of the
Moon-Earth system; there are no assumptions about the conditions of early Earth
rotation and continental drift. Based on this model, the Earth-Moon system has been
evolving smoothly to its current position since it was established, with the recessional rate
of the Moon gradually increasing. This description does not, however, take into account
special events that might have happened in the past and suddenly changed the motions
of the Earth and the Moon significantly, such as strong impacts by giant asteroids and
comets, which are common in the universe.
The general pattern of the evolution of the Moon-Earth system described by this model
agrees with geological evidence. Based on Eq. 9, the recessional rate increases exponentially
with time. One may then imagine that the recessional rate will quickly become
very large; the increase is in fact extremely slow. The Moon's recessional rate will be
3.04 × 10^-9 m s^-1 after 10 billion years and 7.64 × 10^-9 m s^-1 after 20 billion years.
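The future rates follow from Eq. 9: the current rate simply grows by the factor exp(k t). A minimal check with the averaged η (3.156 × 10^7 s per year assumed):

```python
import math

n, eta = 0.64, 4.39e-22
yr = 3.156e7                                  # s per year (assumed conversion)
k = 12 * math.pi * eta * 1.738e6**(1 - n)     # Eq. 9 constant for the Moon
rate_now = 1.21e-9                            # m s^-1, current recession rate

rate_10 = rate_now * math.exp(k * 10e9 * yr)  # after 10 billion years
rate_20 = rate_now * math.exp(k * 20e9 * yr)  # after 20 billion years
assert abs(rate_10 - 3.04e-9) / 3.04e-9 < 0.01
assert abs(rate_20 - 7.64e-9) / 7.64e-9 < 0.01
```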
However, whether the Moon's recession will continue or another mechanism will take
over at some later time is not known. It should be understood that tidal friction
does affect the evolution of the Earth itself, such as the surface crust structure, continental
drift and the evolution of the biosystem; it may also play a role in slowing the Earth's
rotation, but such a role is not the dominant mechanism.
Unfortunately, no data are available for the changes in the orbital motions of the
Earth and the other members of the solar system. According to this model and the above results,
the recessional rate of the Earth should be 6.86 × 10^-7 m s^-1 = 21.6 m year^-1 = 2.16 km
century^-1, the length of a year should increase by about 6.8 ms per year, and the temperature
should change by −1.8 × 10^-8 K year^-1 at a constant radiation level of the Sun and with a
stable environment on the Earth. The length of a year 1 billion years ago would then have been
80% of the current length. However, much evidence (growth bands of corals and shellfish,
among others) suggests that there has been no apparent change in the length of the
year over the past billion years and that the Earth's orbital motion is more stable than its rotation.
This suggests that the dark matter field fluid is circulating around the Sun in the same
direction and with a similar speed as the Earth (at least within the Earth's orbital range), so
that the Earth's orbital motion experiences very little or no dragging force from the dark
matter field fluid. However, this is a conjecture; extensive research has to be conducted to
verify whether this is the case.
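The orbital numbers quoted for the Earth follow from Eq. 9 applied with the Earth's radius, plus Kepler's third law (T ∝ R^1.5) for the year length; a minimal check (averaged η, 3.156 × 10^7 s per year assumed):

```python
import math

n, eta = 0.64, 4.39e-22
yr = 3.156e7                                   # s per year (assumed conversion)
R_es = 1.4959787061e11                         # m, mean Earth-Sun distance
k = 12 * math.pi * eta * 6.371e6**(1 - n)      # Eq. 9 constant for the Earth

rate = k * R_es                                # predicted recession rate
assert abs(rate - 6.86e-7) / 6.86e-7 < 0.02    # ~6.86e-7 m s^-1 = 21.6 m/yr

# Year-length change: dT/dt = 1.5*k*T, expressed in ms per year
ms_yr = 1.5 * k * yr * yr * 1e3
assert abs(ms_yr - 6.8) < 0.3

# Year length 1 billion years ago relative to today: T ~ R**1.5
ratio = math.exp(-k * 1e9 * yr) ** 1.5
assert abs(ratio - 0.80) < 0.01
```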
6. Speculative description of the evolution of Mars
The Moon has no liquid fluid on its surface, not even air; therefore, there is no
ocean-like tidal friction force to slow its rotation. Nevertheless, the rotation of the
Moon is still slowing at the significant rate of (−1.260 ± 0.024) × 10^-23 rad s^-2, which agrees
with the model very well. Based on this, one may reasonably expect that Mars's
rotation should be slowing as well.
Mars is our nearest neighbor and has attracted great human attention
since ancient times. The exploration of Mars has been heating up in recent decades:
NASA, the Russian and the European space agencies have sent many spacecraft to Mars to collect
data and study this mysterious planet. So far there are still not enough data about the
history of this planet to describe its evolution. Like the Earth, Mars rotates about
its central axis and revolves around the Sun; however, Mars has no massive
moon circulating it (only the two small satellites Phobos and Deimos) and no
liquid fluid on its surface, so there is no apparent ocean-like tidal friction force to
slow its rotation by tidal friction theories. Based on the above results and current
Mars data, this model predicts that the angular acceleration of Mars should be about
−4.38 × 10^-22 rad s^-2. Figure 3 describes the possible evolution of the length of the day and of
the solar days per Mars year; the vertical dashed line marks the current age of Mars, with the
assumption that Mars was formed in a similar time period as the Earth.
To the author's knowledge, such a description has not been given before; it is
entirely speculative due to the lack of reliable data. However, with the further expansion of
research and exploration on Mars, reliable data about
the angular rotation acceleration of Mars should become available in the near future, which
will provide a vital test of this model's prediction. There are also other factors
that may affect Mars's rotation rate, such as mass redistribution due to seasonal
changes, winds, possible volcanic eruptions and Mars quakes. Therefore, the data will have to be
carefully analyzed.
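The Mars prediction can be reproduced from Eq. 15 with the averaged η. Mars's mean radius and rotation period are not given in the text; the standard values below are assumed:

```python
import math

n, eta = 0.64, 4.39e-22
r_mars = 3.3895e6                        # m, mean radius (assumed standard value)
w_mars = 2 * math.pi / (24.6229 * 3600)  # rad s^-1, sidereal day 24.6229 h (assumed)

dw_dt = -20 * math.pi * eta * r_mars**(1 - n) * w_mars   # Eq. 15
assert abs(dw_dt + 4.38e-22) / 4.38e-22 < 0.02
```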
7. Discussion about the model
From the above results, one can see that the current Earth-Moon data and the
geological and fossil data agree with the model very well, and the past evolution of the
Earth-Moon system can be described by the model without introducing any additional
parameters. The model reveals an interesting relationship between the rotation and
receding (Eq. 17 and Eq. 18) of the same celestial body, or of different celestial bodies in
the same gravitational system; such a relationship was not known before. If one thinks that
this is just an ad hoc or wrong model, such success can hardly be explained by "coincidence"
or "luck", because so many data are involved (current Earth and Moon data and geological
and fossil data), although the chance of such a "coincidence" or "luck" occurring naturally
could be greater than that of winning a jackpot lottery. The future Mars data will clarify
this; otherwise, a new theory based on a different approach could be developed that gives
the same or a better description than this model does. It is certain that this model is
not perfect and may have defects; further development may be conducted.
James Clerk Maxwell wrote in 1873: "The vast interplanetary and interstellar
regions will no longer be regarded as waste places in the universe, which the Creator has
not seen fit to fill with the symbols of the manifold order of His kingdom. We shall find
them to be already full of this wonderful medium; so full, that no human power can
remove it from the smallest portion of space, or produce the slightest flaw in its infinite
continuity. It extends unbroken from star to star …." The medium that Maxwell talked
about is the aether, which was proposed as the carrier of light wave propagation. The
Michelson-Morley experiment only proved that light wave propagation does not
depend on such a medium; it did not reject the existence of a medium in interstellar
space. In fact, the concept of the interstellar medium has developed dramatically in
recent times, in forms such as dark matter, dark energy and cosmic fluid. The dark matter
field fluid is just one part of such a wonderful medium, "precisely" described by Maxwell.
8. Conclusion
The evolution of the Earth-Moon system can be described by the dark matter field
fluid model with a non-Newtonian approach, and the current data of the Earth and the Moon
fit this model very well. At 4.5 billion years ago, the closest distance of the Moon to the
Earth could have been about 259000 km, which is far beyond the Roche limit, and the length of
the day was about 8 hours. The general pattern of the evolution of the Moon-Earth system
described by this model agrees with geological and fossil evidence. Tidal friction may
not be the primary cause of the evolution of the Earth-Moon system. Mars's rotation
is also predicted to be slowing, with an angular acceleration of about −4.38 × 10^-22 rad s^-2.
References
S. G. Brush, 1983. Ghost from the Nineteenth Century: Creationist Arguments for a Young
Earth. In: L. R. Godfrey (editor), Scientists Confront Creationism. W. W.
Norton & Company, New York, London, p. 49.
E. Chaisson, S. McMillan, 1993. Astronomy Today. Prentice Hall, Englewood
Cliffs, NJ.
J. O. Dickey, et al., 1994. Science, 265, 482.
D. G. Finch, 1981. Earth, Moon, and Planets, 26(1), 109.
K. S. Hansen, 1982. Rev. Geophys. and Space Phys., 20(3), 457.
W. K. Hartmann, D. R. Davis, 1975. Icarus, 24, 504.
B. A. Kagan, N. B. Maslova, 1994. Earth, Moon and Planets, 66, 173.
B. A. Kagan, 1997. Prog. Oceanog., 40, 109.
E. P. Kvale, H. W. Johnson, C. O. Sonett, A. W. Archer, A. Zawistoski, 1999. J.
Sediment. Res., 69(6), 1154.
K. Lang, 1992. Astrophysical Data: Planets and Stars. Springer-Verlag, New York.
H. Pan, 2005. Internat. J. Modern Phys. A, 20(14), 3135.
R. D. Ray, B. G. Bills, B. F. Chao, 1999. J. Geophys. Res., 104(B8), 17653.
C. T. Scrutton, 1978. In: P. Brosche, J. Sundermann (editors), Tidal Friction and the
Earth's Rotation. Springer-Verlag, Berlin, p. 154.
L. B. Slichter, 1963. J. Geophys. Res., 68, 14.
C. P. Sonett, E. P. Kvale, M. A. Chan, T. M. Demko, 1996. Science, 273, 100.
F. D. Stacey, 1977. Physics of the Earth, second edition. John Wiley & Sons.
J. W. Wells, 1963. Nature, 197, 948.
Captions
Figure 1. The evolution of the Moon's distance and the length of the day of the Earth with
the age of the Earth-Moon system. Solid lines are calculated according to the dark matter
field fluid model. Data sources: the Moon distances are from Kvale et al.; for the
length of day, (a) and (b) are from Scrutton (page 186, fig. 8) and (c) is from Sonett et al.
The dashed line marks the current age of the Earth-Moon system.
Figure 2. The evolution of solar days per year with the age of the Earth-Moon
system. The solid line is calculated according to the dark matter field fluid model. The data
are from Wells (3.9 ~ 4.435 billion years range), Sonett (3.6 billion years) and the current
age (4.5 billion years).
Figure 3. The speculative description of the evolution of Mars's length of day and of the
solar days per Mars year with the age of Mars (assuming that Mars's age is about 4.5
billion years). The vertical dashed line marks the current age of Mars.
| Introduction
The popularly accepted theory for the formation of the Earth-Moon system is that
the Moon was formed from debris of a strong impact by a giant planetesimal with the
Earth at the close of the planet-forming period (Hartmann and Davis 1975). Since the
formation of the Earth-Moon system, it has been evolving at all time scale. It is well
known that the Moon is receding from us and both the Earth’s rotation and Moon’s
rotation are slowing. The popular theory is that the tidal friction causes all those changes
based on the conservation of the angular momentum of the Earth-Moon system. The
situation becomes complicated in describing the past evolution of the Earth-Moon
system. Because the Moon is moving away from us and the Earth rotation is slowing, this
means that the Moon was closer and the Earth rotation was faster in the past. Creationists
argue that based on the tidal friction theory, the tidal friction should be stronger and the
recessional rate of the Moon should be greater in the past, the distance of the Moon
would quickly fall inside the Roche's limit (for earth, 15500 km) in which the Moon
would be torn apart by gravity in 1 to 2 billion years ago. However, geological evidence
indicates that the recession of the Moon in the past was slower than the present rate, i. e.,
the recession has been accelerating with time. Therefore, it must be concluded that tidal
friction was very much less in the remote past than we would deduce on the basis of
present-day observations (Stacey 1977). This was called “geological time scale
difficulty” or “Lunar crisis” and is one of the main arguments by creationists against the
tidal friction theory (Brush 1983).
But we have to consider the case carefully in various aspects. One possible
scenario is that the Earth has been undergoing dynamic evolution at all time scale since
its inception, the geological and physical conditions (such as the continent positions and
drifting, the crust, surface temperature fluctuation like the glacial/snowball effect, etc) at
remote past could be substantially different from currently, in which the tidal friction
could be much less; therefore, the receding rate of the Moon could be slower. Various
tidal friction models were proposed in the past to describe the evolution of the Earth-
Moon system to avoid such difficulty or crisis and put the Moon at quite a comfortable
distance from Earth at 4.5 billion years ago (Hansen 1982, Kagan and Maslova 1994, Ray
et al. 1999, Finch 1981, Slichter 1963). The tidal friction theories explain that the present
rate of tidal dissipation is anomalously high because the tidal force is close to a resonance
in the response function of ocean (Brush 1983). Kagan gave a detailed review about those
tidal friction models (Kagan 1997). Those models are based on many assumptions about
geological (continental position and drifting) and physical conditions in the past, and
many parameters (such as phase lag angle, multi-mode approximation with time
dependent frequencies of the resonance modes, etc.) have to be introduced and carefully
adjusted to make their predictions close to the geological evidence. However, those
assumptions and parameters are still challenged, to certain extent, as concoction.
The second possible scenario is that another mechanism could dominate the
evolution of the Earth-Moon system and the role of the tidal friction is not significant. In
the Meeting of Division of Particle and Field 2004, American Physical Society,
University of California at Riverside, the author proposed a dark matter field fluid model
(Pan 2005) with a non-Newtonian approach, the current Moon and Earth data agree with
this model very well. This paper will demonstrate that the past evolution of Moon-Earth
system can be described by the dark matter field fluid model without any assumptions
about past geological and physical conditions. Although the subject of the evolution of
the Earth-Moon system has been extensively studied analytically or numerically, to the
author’s knowledge, there are no theories similar or equivalent to this model.
2. Invisible matter
In modern cosmology, it was proposed that the visible matter in the universe is
about 2 ~ 10 % of the total matter and about 90 ~ 98% of total matter is currently
invisible which is called dark matter and dark energy, such invisible matter has an anti-
gravity property to make the universe expanding faster and faster.
If the ratio of the matter components of the universe is close to this hypothesis,
then, the evolution of the universe should be dominated by the physical mechanism of
such invisible matter, such physical mechanism could be far beyond the current
Newtonian physics and Einsteinian physics, and the Newtonian physics and Einsteinian
physics could reflect only a corner of the iceberg of the greater physics.
If the ratio of the matter components of the universe is close to this hypothesis,
then, it should be more reasonable to think that such dominant invisible matter spreads in
everywhere of the universe (the density of the invisible matter may vary from place to
place); in other words, all visible matter objects should be surrounded by such invisible
matter and the motion of the visible matter objects should be affected by the invisible
matter if there are interactions between the visible matter and the invisible matter.
If the ratio of the matter components of the universe is close to this hypothesis,
then, the size of the particles of the invisible matter should be very small and below the
detection limit of the current technology; otherwise, it would be detected long time ago
with such dominant amount.
With such invisible matter in mind, we move to the next section to develop the
dark matter field fluid model with non-Newtonian approach. For simplicity, all invisible
matter (dark matter, dark energy and possible other terms) is called dark matter here.
3. The dark matter field fluid model
In this proposed model, it is assumed that:
1. A celestial body rotates and moves in the space, which, for simplicity, is uniformly
filled with the dark matter which is in quiescent state relative to the motion of the
celestial body. The dark matter possesses a field property and a fluid property; it can
interact with the celestial body with its fluid and field properties; therefore, it can have
energy exchange with the celestial body, and affect the motion of the celestial body.
2. The fluid property follows the general principle of fluid mechanics. The dark matter
field fluid particles may be so small that they can easily permeate into ordinary
“baryonic” matter; i. e., ordinary matter objects could be saturated with such dark matter
field fluid. Thus, the whole celestial body interacts with the dark matter field fluid, in the
manner of a sponge moving thru water. The nature of the field property of the dark matter
field fluid is unknown. It is here assumed that the interaction of the field associated with
the dark matter field fluid with the celestial body is proportional to the mass of the
celestial body. The dark matter field fluid is assumed to have a repulsive force against the
gravitational force towards baryonic matter. The nature and mechanism of such repulsive
force is unknown.
With the assumptions above, one can study how the dark matter field fluid may
influence the motion of a celestial body and compare the results with observations. The
common shape of celestial bodies is spherical. According to Stokes's law, a rigid non-
permeable sphere moving through a quiescent fluid with a sufficiently low Reynolds
number experiences a resistance force F
rvF πμ6−= (1)
where v is the moving velocity, r is the radius of the sphere, and μ is the fluid viscosity
constant. The direction of the resistance force F in Eq. 1 is opposite to the direction of the
velocity v. For a rigid sphere moving through the dark matter field fluid, due to the dual
properties of the dark matter field fluid and its permeation into the sphere, the force F
may not be proportional to the radius of the sphere. Also, F may be proportional to the
mass of the sphere due to the field interaction. Therefore, with the combined effects of
both fluid and field, the force exerted on the sphere by the dark matter field fluid is
assumed to be of the scaled form
(2) mvrF n−−= 16πη
where n is a parameter arising from saturation by dark matter field fluid, the r1-n can be
viewed as the effective radius with the same unit as r, m is the mass of the sphere, and η
is the dark matter field fluid constant, which is equivalent to μ. The direction of the
resistance force F in Eq. 2 is opposite to the direction of the velocity v. The force
described by Eq. 2 is velocity-dependent and causes negative acceleration. According to
Newton's second law of motion, the equation of motion for the sphere is
mvr
m n−−= 16πη (3)
Then
(4) )6exp( 10 vtrvv
n−−= πη
where v0 is the initial velocity (t = 0) of the sphere. If the sphere revolves around a
massive gravitational center, there are three forces in the line between the sphere and the
gravitational center: (1) the gravitational force, (2) the centripetal acceleration force; and
(3) the repulsive force of the dark matter field fluid. The drag force in Eq. 3 reduces the
orbital velocity and causes the sphere to move inward to the gravitational center.
However, if the sum of the centripetal acceleration force and the repulsive force is
stronger than the gravitational force, then, the sphere will move outward and recede from
the gravitational center. This is the case of interest here. If the velocity change in Eq. 3 is
sufficiently slow and the repulsive force is small compared to the gravitational force and
centripetal acceleration force, then the rate of receding will be accordingly relatively
slow. Therefore, the gravitational force and the centripetal acceleration force can be
approximately treated in equilibrium at any time. The pseudo equilibrium equation is
GMm 2
2 = (5)
where G is the gravitational constant, M is the mass of the gravitational center, and R is
the radius of the orbit. Inserting v of Eq. 4 into Eq. 5 yields
)12exp( 1
R n−= πη (6)
(7) )12exp( 10 trRR
n−= πη
where
R = (8)
R0 is the initial distance to the gravitational center. Note that R exponentially increases
with time. The increase of orbital energy with the receding comes from the repulsive
force of dark matter field fluid. The recessional rate of the sphere is
dR n−= 112πη (9)
The acceleration of the recession is
d²R/dt² = (12πη r^(1−n))² R    (10)
The recessional acceleration is positive and proportional to its distance to the
gravitational center, so the recession is faster and faster.
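The exponential form in Eq. 7 can be sanity-checked by integrating Eq. 9 numerically and comparing with the closed form; a minimal Python sketch (η, n and the Moon's radius are the values obtained later in Section 4, assumed here only for illustration):

```python
import math

# Constants fitted in Section 4 of the paper (assumed here for illustration)
eta = 4.39e-22   # dark matter field fluid constant, s^-1 m^-1
n = 0.64         # saturation parameter
r = 1.738e6      # radius of the sphere (the Moon), m
R0 = 3.92509e8   # initial orbital radius, m

k = 12 * math.pi * eta * r**(1 - n)   # growth rate in Eq. 9: dR/dt = k R

# Forward-Euler integration of dR/dt = k R over 4.5 billion years
t_total = 4.5e9 * 3.156e7   # seconds
steps = 100_000
dt = t_total / steps
R = R0
for _ in range(steps):
    R += k * R * dt

# Closed form, Eq. 7: R = R0 exp(k t)
R_exact = R0 * math.exp(k * t_total)
rel_err = abs(R - R_exact) / R_exact
print(rel_err)   # well below 1e-5: the integration confirms the closed form
```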
According to fluid mechanics, for a rigid non-permeable sphere rotating
about its central axis in a quiescent fluid, the torque T exerted by the fluid on the sphere is
T = −8πμ r³ ω    (11)
where ω is the angular velocity of the sphere. The direction of the torque in Eq. 11 is
opposite to the direction of the rotation. In the case of a sphere rotating in the quiescent
dark matter field fluid with angular velocity ω, similar to Eq. 2, the proposed torque T exerted
on the sphere is

T = −8πη (r^(1−n))³ m ω    (12)
The direction of the torque in Eq. 12 is opposite to the direction of the rotation. The
torque causes the negative angular acceleration
dω/dt = T / I    (13)
where I is the moment of inertia of the sphere in the dark matter field fluid
I = (2/5) m (r^(1−n))²    (14)
Therefore, the equation of rotation for the sphere in the dark matter field fluid is
dω/dt = −20πη r^(1−n) ω    (15)
Solving this equation yields
ω = ω0 exp(−20πη r^(1−n) t)    (16)
where ω0 is the initial angular velocity. One can see that the angular velocity of the
sphere exponentially decreases with time and the angular deceleration is proportional to
its angular velocity.
For the same celestial sphere, combining Eq. 9 and Eq. 15 yields
R(dω/dt) / (ω dR/dt) = −20/12 = −5/3 ≈ −1.67    (17)
The significance of Eq. 17 is that it contains only observed data without assumptions and
undetermined parameters; therefore, it is a critical test for this model.
For two different celestial spheres in the same system, combining Eq. 9 and Eq.
15 yields
R2(dω1/dt) / (ω1 dR2/dt) = −(5/3)(r1/r2)^(1−n) = −1.67 (r1/r2)^(1−n)    (18)

where subscript 1 refers to the rotating sphere and subscript 2 to the receding sphere.
This is another critical test for this model.
4. The current behavior of the Earth-Moon system agrees with the model
The Moon-Earth system is the simplest gravitational system. Within the solar system the
situation is more complex: the Earth and the Moon experience not only their mutual
interaction but also the interactions of the Sun and the other planets. Let us treat the local
Earth-Moon gravitational system as an isolated system, i.e., the influence of the Sun and
other planets on the rotation and orbital motion of the Moon, and on the rotation of the
Earth, is assumed negligible compared with the forces the Moon and Earth exert on each
other. In addition, the eccentricity of the Moon's orbit is small enough to be ignored. The
data for the Moon and the Earth from the references (Dickey et al., 1994; Lang, 1992) are
listed below for the reader's convenience in verifying the calculations, because the data
may vary slightly between sources.
Moon:
Mean radius: r = 1738.0 km
Mass: m = 7.3483 × 10²⁵ gram
Rotation period = 27.321661 days
Angular velocity of Moon = 2.6617 × 10⁻⁶ rad s⁻¹
Mean distance to Earth Rm = 384400 km
Mean orbital velocity v = 1.023 km s⁻¹
Orbit eccentricity e = 0.0549
Angular rotation acceleration rate = −25.88 ± 0.5 arcsec century⁻²
= (−1.255 ± 0.024) × 10⁻⁴ rad century⁻²
= (−1.260 ± 0.024) × 10⁻²³ rad s⁻²
Receding rate from Earth = 3.82 ± 0.07 cm year⁻¹ = (1.21 ± 0.02) × 10⁻⁹ m s⁻¹
Earth:
Mean radius: r = 6371.0 km
Mass: m = 5.9742 × 10²⁷ gram
Rotation period = 23 h 56 m 04.098904 s = 86164.098904 s
Angular velocity of rotation = 7.292115 × 10⁻⁵ rad s⁻¹
Mean distance to the Sun Rm = 149,597,870.61 km
Mean orbital velocity v = 29.78 km s⁻¹
Angular acceleration of Earth = (−5.5 ± 0.5) × 10⁻²² rad s⁻²
The Moon's angular rotation acceleration rate and the increase in its mean distance to the
Earth (receding rate) were obtained from the lunar laser ranging of the Apollo Program
(Dickey et al., 1994). Inserting the Moon's rotation and recession data into Eq. 17 gives

R(dω/dt) / (ω dR/dt) = [(3.92509 × 10⁸) × (−1.26 × 10⁻²³)] / [(2.662 × 10⁻⁶) × (1.21 × 10⁻⁹)]
= −1.54 ± 0.039    (19)

The distance R in Eq. 19 is from the Moon's center to the Earth's center; the number
384400 km is taken as the distance from the Moon's surface to the Earth's surface.
Eq. 19 is in good agreement with the theoretical value of −1.67 and in accord
with the model used here. The difference (about 7.8%) between the values of −1.54 and
−1.67 may come from several sources:
1. The Moon's orbit is not a perfect circle.
2. The Moon is not a perfect rigid sphere.
3. Effects from the Sun and other planets.
4. Errors in the data.
5. Possibly other, unknown reasons.
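The ratio in Eq. 19 follows directly from the tabulated lunar data; a minimal check, with the center-to-center distance built from the mean distance plus the two mean radii:

```python
R = (384400 + 1738 + 6371) * 1e3   # Moon-Earth center-to-center distance, m
domega_dt = -1.26e-23              # Moon's angular rotation acceleration, rad s^-2
omega = 2.6617e-6                  # Moon's angular velocity, rad s^-1
dR_dt = 1.21e-9                    # Moon's receding rate, m s^-1

ratio = R * domega_dt / (omega * dR_dt)
print(round(ratio, 2))   # -1.54, vs. the theoretical -5/3 = -1.67
```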
The two parameters n and η in Eq. 9 and Eq. 15 can be determined from two data
sets; the third data set can then be used to further test the model. If this model correctly
describes the situation at hand, it should give consistent results for different motions. The
values of n and η calculated from three different data sets are listed below (note: the
mean distance of the Moon to the Earth and the mean radii of the Moon and the Earth are
used in the calculation).
The value of n: n = 0.64
From the Moon's rotation: η = 4.27 × 10⁻²² s⁻¹ m⁻¹
From the Earth's rotation: η = 4.26 × 10⁻²² s⁻¹ m⁻¹
From the Moon's recession: η = 4.64 × 10⁻²² s⁻¹ m⁻¹
One can see that the three values of η are consistent within the range of error in the data.
The average value of η: η = (4.39 ± 0.22) × 10⁻²² s⁻¹ m⁻¹
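With n = 0.64 fixed, Eq. 15 can be solved for η from each rotation data set, and Eq. 9 from the recession data set; a short sketch reproducing the three values:

```python
import math

n = 0.64

def eta_from_rotation(r, omega, domega_dt):
    # Eq. 15: domega/dt = -20 pi eta r^(1-n) omega
    return -domega_dt / (20 * math.pi * r**(1 - n) * omega)

def eta_from_recession(r, R, dR_dt):
    # Eq. 9: dR/dt = 12 pi eta r^(1-n) R
    return dR_dt / (12 * math.pi * r**(1 - n) * R)

r_moon, r_earth = 1.738e6, 6.371e6   # mean radii, m
R = 3.92509e8                        # Moon-Earth center-to-center distance, m

eta1 = eta_from_rotation(r_moon, 2.6617e-6, -1.26e-23)    # Moon's rotation
eta2 = eta_from_rotation(r_earth, 7.292115e-5, -5.5e-22)  # Earth's rotation
eta3 = eta_from_recession(r_moon, R, 1.21e-9)             # Moon's recession
print(eta1, eta2, eta3)   # ~4.27e-22, ~4.26e-22, ~4.64e-22 s^-1 m^-1
```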
By inserting the Earth's rotation data, the Moon's recession data and the value of n into
Eq. 18, the result is

[(3.92509 × 10⁸) × (−5.5 × 10⁻²²)] / [(7.29 × 10⁻⁵) × (1.21 × 10⁻⁹)] × (1738000/6371000)^(1−0.64)
= −1.53 ± 0.14    (20)

This is also in accord with the model used here.
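Eq. 20 can be checked the same way from the Earth's rotation data and the Moon's recession data:

```python
R = 3.92509e8           # Moon-Earth center-to-center distance, m
domega_E = -5.5e-22     # Earth's angular acceleration, rad s^-2
omega_E = 7.292115e-5   # Earth's angular velocity, rad s^-1
dR_dt = 1.21e-9         # Moon's receding rate, m s^-1
n = 0.64

# Eq. 18 rearranged with the radius factor moved to the left-hand side
ratio = (R * domega_E / (omega_E * dR_dt)) * (1738000 / 6371000) ** (1 - n)
print(round(ratio, 2))   # -1.53, vs. the theoretical -1.67
```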
The dragging force exerted on the Moon's orbital motion by the dark matter field
fluid is −1.11 × 10⁸ N, which is negligibly small compared with the gravitational force
between the Moon and the Earth, ~1.90 × 10²⁰ N; the torques exerted by the dark matter
field fluid on the Earth's and the Moon's rotations are T = −5.49 × 10¹⁶ N m and
−1.15 × 10¹² N m, respectively.
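The drag on the Moon's orbital motion follows from Eq. 2 and can be compared with the Newtonian attraction; a minimal sketch (the standard value of G is assumed, as it is not given in the text):

```python
import math

eta, n = 4.39e-22, 0.64   # average fitted constants
G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2 (standard value)
m_moon = 7.3483e22        # kg
m_earth = 5.9742e24       # kg
r_moon = 1.738e6          # m
R = 3.92509e8             # center-to-center distance, m
v_moon = 1023.0           # Moon's mean orbital velocity, m/s

# Eq. 2: drag force of the dark matter field fluid on the orbital motion
F_drag = 6 * math.pi * eta * r_moon**(1 - n) * m_moon * v_moon
# Newtonian gravitational force between the Earth and the Moon
F_grav = G * m_earth * m_moon / R**2
print(F_drag, F_grav)   # drag ~1.1e8 N vs. gravity ~1.9e20 N, twelve orders apart
```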
5. The evolution of the Earth-Moon system
Sonett et al. found, based on laminated tidal sediments on the Earth, that the length
of the terrestrial day 900 million years ago was about 19.2 hours (Sonett et al., 1996).
According to the model presented here, the length of the day at that time was about
19.2 hours, in excellent agreement with Sonett et al.'s result.
Another critical aspect of modeling the evolution of the Earth-Moon system is to
give a reasonable estimate of the closest distance of the Moon to the Earth when the
system was established 4.5 billion years ago. Based on the dark matter field fluid
model and the above results, the closest distance of the Moon to the Earth was about
259000 km (center to center) or 250900 km (surface to surface) 4.5 billion years ago,
far beyond the Roche limit. In the modern astronomy textbook by Chaisson and
McMillan (1993, p. 173), the estimated distance 4.5 billion years ago is 250000 km;
this is probably the figure most astronomers accept, and it agrees excellently with the
result of this model. The closest distance of the Moon to the Earth in Hansen's models
was about 38 Earth radii, or 242000 km (Hansen, 1982).
According to this model, the length of the Earth's day 4.5 billion years ago was about
8 hours. Fig. 1 shows the evolution of the Moon's distance from the Earth and of the
length of the Earth's day with the age of the Earth-Moon system, as described by this
model, along with data from Kvale et al. (1999), Sonett et al. (1996) and Scrutton (1978).
One can see that those data fit this model very well over their time range.
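The day lengths and the closest distance quoted in this section follow from running Eq. 16 and Eq. 7 backward in time; a minimal sketch (1 year is taken as 3.156 × 10⁷ s; η and n are the fitted values from Section 4):

```python
import math

eta, n = 4.39e-22, 0.64
r_moon, r_earth = 1.738e6, 6.371e6   # mean radii, m
R_now = 3.92509e8                    # current center-to-center distance, m
day_now = 24.0                       # current length of day, hours
yr = 3.156e7                         # seconds per year

k_rot = 20 * math.pi * eta * r_earth**(1 - n)   # spin decay rate, Eq. 15/16
k_orb = 12 * math.pi * eta * r_moon**(1 - n)    # orbital growth rate, Eq. 9

def day_length(years_ago):
    # omega was larger in the past, so the day was shorter (Eq. 16 run backward)
    return day_now * math.exp(-k_rot * years_ago * yr)

def moon_distance(years_ago):
    # Eq. 7 run backward
    return R_now * math.exp(-k_orb * years_ago * yr)

print(day_length(9.0e8))          # close to Sonett et al.'s 19.2 h
print(day_length(4.5e9))          # ~8 h at the formation of the system
print(moon_distance(4.5e9) / 1e3) # ~2.59e5 km center to center
```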
Fig. 2 shows the geological data of solar days year⁻¹ from Wells (1963) and from
Sonett et al. (1996), together with the description (solid line) given by this dark matter
field fluid model for the past 900 million years. One can see that the model agrees with
the geological and fossil data beautifully.
An important difference between this model and earlier models of the early
evolution of the Earth-Moon system is that this model is based only on the current data of
the Moon-Earth system; there are no assumptions about the conditions of earlier Earth
rotation or continental drift. Based on this model, the Earth-Moon system has been
evolving smoothly to its current position since it was established, and the recessional rate
of the Moon has been gradually increasing. This description does not, however, take into
account special events that might have happened in the past and caused sudden,
significant changes in the motions of the Earth and the Moon, such as strong impacts by
giant asteroids and comets, which are very common in the universe. The general pattern
of the evolution of the Moon-Earth system described by this model agrees with the
geological evidence. Based on Eq. 9, the recessional rate increases exponentially with
time. One may then imagine that the recessional rate will quickly become very large; the
increase is in fact extremely slow. The Moon's recessional rate will be
3.04 × 10⁻⁹ m s⁻¹ after 10 billion years and 7.64 × 10⁻⁹ m s⁻¹ after 20 billion years.
However, whether the Moon's recession will continue or another mechanism will take
over at some later time is not known. It should be understood that tidal friction does
affect the evolution of the Earth itself, such as the surface crust structure, continental
drift and the evolution of the biosystem; it may also play a role in slowing the Earth's
rotation, but such a role is not the dominant mechanism.
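The slow growth of the recessional rate can be made concrete with Eq. 9, since dR/dt carries the same exponential factor as R; a short sketch:

```python
import math

eta, n = 4.39e-22, 0.64
r_moon = 1.738e6                                # Moon's mean radius, m
rate_now = 1.21e-9                              # current receding rate, m/s
k_orb = 12 * math.pi * eta * r_moon**(1 - n)    # exponent in Eq. 7/9, s^-1
yr = 3.156e7                                    # seconds per year

def future_rate(years):
    # dR/dt grows with the same exponential factor as R itself
    return rate_now * math.exp(k_orb * years * yr)

print(future_rate(1.0e10))   # ~3.04e-9 m/s after 10 billion years
print(future_rate(2.0e10))   # ~7.64e-9 m/s after 20 billion years
```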
Unfortunately, no data are available on changes in the Earth's orbital motion, or in
that of the other members of the solar system. According to this model and the above
results, the recessional rate of the Earth should be 6.86 × 10⁻⁷ m s⁻¹ = 21.6 m year⁻¹ =
2.16 km century⁻¹, the length of a year increases by about 6.8 ms, and the change of
temperature is −1.8 × 10⁻⁸ K year⁻¹, assuming a constant radiation level of the Sun and a
stable environment on the Earth. The length of a year 1 billion years ago would then have
been 80% of its current length. However, much evidence (growth bands of corals and
shellfish, among others) suggests that there has been no apparent change in the length of
the year over the past billion years, and that the Earth's orbital motion is more stable than
its rotation. This suggests that the dark matter field fluid is circulating around the Sun in
the same direction as, and at a speed similar to, the Earth's orbital motion (at least in the
range of the Earth's orbit). Therefore, the Earth's orbital motion experiences very little or
no dragging force from the dark matter field fluid. This is, however, a conjecture;
extensive research has to be conducted to verify whether this is the case.
6. Speculative description of the evolution of Mars
The Moon has no liquid fluid on its surface, not even an atmosphere; therefore,
there is no ocean-like tidal friction force to slow its rotation. Yet the Moon's rotation is
still slowing at the significant rate of (−1.260 ± 0.024) × 10⁻²³ rad s⁻², which agrees
with the model very well. Based on this, one may reasonably expect that Mars's
rotation should be slowing as well.
Mars is our nearest neighbor and has attracted great human attention since
ancient times. The exploration of Mars has been heating up in recent decades:
NASA, the Russian space agency and the European Space Agency have sent many
spacecraft to Mars to collect data and study this mysterious planet. So far there are still
not enough data about the history of this planet to describe its evolution. Like the Earth,
Mars rotates about its central axis and revolves around the Sun; however, Mars has no
massive moon circling it (only the two small satellites Phobos and Deimos) and no
liquid fluid on its surface, so there is no apparent ocean-like tidal friction force to
slow its rotation according to tidal friction theories. Based on the above results and
current Mars data, this model predicts that the angular acceleration of Mars should be
about −4.38 × 10⁻²² rad s⁻². Figure 3 describes the possible evolution of the length of
the Martian day and of the solar days per Martian year; the vertical dashed line marks the
current age of Mars, under the assumption that Mars formed in a similar period to the
Earth's formation. To the author's knowledge no such description has been given before,
and it is entirely speculative owing to the lack of reliable data. However, with further
expansion of research on and exploration of Mars, reliable data about the angular
rotation acceleration of Mars should become available in the near future and will provide
a vital test of this model's prediction. There are also other factors that may affect Mars's
rotation rate, such as mass redistribution due to seasonal change, winds, possible volcanic
eruptions and marsquakes. Therefore, the data will have to be carefully analyzed.
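The Mars prediction follows from Eq. 15 with the average η and n = 0.64; a sketch in which Mars's mean radius (≈3.397 × 10⁶ m) and rotation period (≈24.623 h) are standard values assumed here, since they are not given in the text:

```python
import math

eta, n = 4.39e-22, 0.64
r_mars = 3.397e6             # Mars's mean radius, m (assumed standard value)
period = 24.623 * 3600.0     # Mars's rotation period, s (assumed standard value)
omega = 2 * math.pi / period # Mars's angular velocity, rad/s

# Eq. 15: domega/dt = -20 pi eta r^(1-n) omega
domega_dt = -20 * math.pi * eta * r_mars**(1 - n) * omega
print(domega_dt)   # ~ -4.4e-22 rad s^-2, matching the quoted -4.38e-22
```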
7. Discussion about the model
From the above results, one can see that the current Earth-Moon data and the
geological and fossil data agree with the model very well, and that the past evolution of
the Earth-Moon system can be described by the model without introducing any additional
parameters. This model reveals an interesting relationship between the rotation and the
recession (Eq. 17 and Eq. 18) of the same celestial body, or of different celestial bodies in
the same gravitational system, a relationship not known before. Such success cannot be
explained by "coincidence" or "luck" if one considers this to be just an ad hoc or wrong
model, because so many data are involved (current Earth and Moon data as well as
geological and fossil data), although the chance of such a "coincidence" occurring
naturally could be greater than that of winning a jackpot lottery. Future Mars data will
clarify this; alternatively, a new theory based on a different approach could be developed
that gives the same or a better description than this model does. It is certain that this
model is not perfect and may have defects; further development may be conducted.
James Clerk Maxwell said in 1873: "The vast interplanetary and interstellar
regions will no longer be regarded as waste places in the universe, which the Creator has
not seen fit to fill with the symbols of the manifold order of His kingdom. We shall find
them to be already full of this wonderful medium; so full, that no human power can
remove it from the smallest portion of space, or produce the slightest flaw in its infinite
continuity. It extends unbroken from star to star …." The medium that Maxwell spoke
of is the aether, which was proposed as the carrier of light wave propagation. The
Michelson-Morley experiment only showed that light wave propagation does not
depend on such a medium; it did not rule out the existence of a medium in interstellar
space. In fact, the concept of the interstellar medium has developed dramatically in
recent times, with dark matter, dark energy, cosmic fluid, etc. The dark matter field
fluid is just one part of such a wonderful medium, "precisely" as described by Maxwell.
8. Conclusion
The evolution of the Earth-Moon system can be described by the dark matter field
fluid model with a non-Newtonian approach, and the current data of the Earth and the
Moon fit this model very well. 4.5 billion years ago, the closest distance of the Moon to
the Earth could have been about 259000 km, which is far beyond the Roche limit, and the
length of the day was about 8 hours. The general pattern of the evolution of the Moon-
Earth system described by this model agrees with the geological and fossil evidence.
Tidal friction may not be the primary cause of the evolution of the Earth-Moon system.
Mars's rotation is also slowing, with an angular acceleration rate of about
−4.38 × 10⁻²² rad s⁻².
References
S. G. Brush, 1983. Ghost from the Nineteenth Century: Creationist Arguments for a
Young Earth. In: L. R. Godfrey (Editor), Scientists Confront Creationism. W. W.
Norton & Company, New York, London, pp. 49.
E. Chaisson and S. McMillan. 1993. Astronomy Today, Prentice Hall, Englewood
Cliffs, NJ 07632.
J. O. Dickey, et al., 1994. Science, 265, 482.
D. G. Finch, 1981. Earth, Moon, and Planets, 26(1), 109.
K. S. Hansen, 1982. Rev. Geophys. and Space Phys. 20(3), 457.
W. K. Hartmann, D. R. Davis, 1975. Icarus, 24, 504.
B. A. Kagan, N. B. Maslova, 1994. Earth, Moon and Planets 66, 173.
B. A. Kagan, 1997. Prog. Oceanog. 40, 109.
E. P. Kvale, H. W. Johnson, C. O. Sonett, A. W. Archer, and A. Zawistoski, 1999, J.
Sediment. Res. 69(6), 1154.
K. Lang, 1992. Astrophysical Data: Planets and Stars, Springer-Verlag, New York.
H. Pan, 2005. Internat. J. Modern Phys. A, 20(14), 3135.
R. D. Ray, B. G. Bills, B. F. Chao, 1999. J. Geophys. Res. 104(B8), 17653.
C. T. Scrutton, 1978. In: P. Brosche, J. Sundermann (Editors), Tidal Friction and the
Earth's Rotation. Springer-Verlag, Berlin, pp. 154.
L. B. Slichter, 1963. J. Geophys. Res. 68, 14.
C. P. Sonett, E. P. Kvale, M. A. Chan, T. M. Demko, 1996. Science, 273, 100.
F. D. Stacey, 1977. Physics of the Earth, second edition. John Wiley & Sons.
J. W. Wells, 1963. Nature, 197, 948.
Captions
Figure 1. The evolution of the Moon's distance and of the length of the Earth's day with
the age of the Earth-Moon system. Solid lines are calculated according to the dark matter
field fluid model. Data sources: the Moon distances are from Kvale et al.; for the length
of day, (a) and (b) are from Scrutton (page 186, fig. 8) and (c) is from Sonett et al. The
dashed line marks the current age of the Earth-Moon system.
Figure 2. The evolution of solar days per year with the age of the Earth-Moon system.
The solid line is calculated according to the dark matter field fluid model. The data are
from Wells (3.9 ~ 4.435 billion years range), Sonett (3.6 billion years) and the current
age (4.5 billion years).
Figure 3. The speculative description of the evolution of the length of the Martian day
and of the solar days per Martian year with the age of Mars (assuming that Mars's age is
about 4.5 billion years). The vertical dashed line marks the current age of Mars.
[Figure 1: the Moon's distance and the length of the Earth's day vs. the age of the Earth-Moon system (10⁹ years); the plot marks the Roche limit and Hansen's result.]
[Figure 2: solar days per year vs. the age of the Earth (10⁹ years).]
| La evolución del sistema Tierra-Luna basado en el modelo de fluido oscuro
La evolución del sistema Tierra-Luna basado en
el modelo de fluido de campo de materia oscura
Hongjun Pan
Departamento de Química
Universidad del Norte de Texas, Denton, Texas 76203, U.S. A.
Resumen
La evolución del sistema Tierra-Luna es descrita por el fluido del campo de materia oscura
modelo con un enfoque no newtoniano propuesto en la Reunión de la División de Partículas
y Field 2004, American Physical Society. El comportamiento actual de la Luna-Tierra
sistema está de acuerdo con este modelo muy bien y el patrón general de la evolución de la
El sistema Luna-Tierra descrito por este modelo concuerda con la evidencia geológica y fósil.
La distancia más cercana de la Luna a la Tierra era de unos 259000 km en 4.500 millones de años atrás,
que está mucho más allá del límite del Roche. El resultado sugiere que la fricción de marea puede no
ser la causa principal de la evolución del sistema Tierra-Luna. La oscuridad media
La constante de fluido de campo de materia derivada de los datos del sistema Tierra-Luna es 4,39 × 10-22 s-1m-1.
Este modelo predice que la rotación de Marte también se está desacelerando con la aceleración angular
tasa alrededor de -4.38 × 10-22 rad s-2.
Palabras clave. materia oscura, fluido, evolución, Tierra, Luna, Marte
1. Introducción
La teoría aceptada popularmente para la formación del sistema Tierra-Luna es que
la Luna se formó a partir de escombros de un fuerte impacto por un gigante planetesimal con el
La Tierra al final del período de formación del planeta (Hartmann y Davis 1975). Desde el
formación del sistema Tierra-Luna, que ha estado evolucionando en toda escala de tiempo. Está bien.
sabe que la Luna se está alejando de nosotros y de la rotación de la Tierra y de la Luna
La rotación se está desacelerando. La teoría popular es que la fricción de mareas causa todos esos cambios.
basado en la conservación del impulso angular del sistema Tierra-Luna. Los
la situación se complica al describir la evolución pasada de la Luna-Tierra
sistema. Debido a que la Luna se está alejando de nosotros y la rotación de la Tierra se está desacelerando, esto
significa que la Luna estaba más cerca y la rotación de la Tierra era más rápida en el pasado. Creacionistas
argumentan que sobre la base de la teoría de la fricción de mareas, la fricción de mareas debe ser más fuerte y la
la tasa de recesión de la Luna debe ser mayor en el pasado, la distancia de la Luna
caería rápidamente dentro del límite de Roche (para la tierra, 15500 km) en el que la Luna
sería desgarrado por la gravedad en 1 a 2 mil millones de años atrás. Sin embargo, las pruebas geológicas
indica que la recesión de la Luna en el pasado fue más lenta que la tasa actual, es decir,
la recesión se ha acelerado con el tiempo. Por lo tanto, debe concluirse que las mareas
la fricción fue mucho menos en el pasado remoto de lo que deduciríamos sobre la base de
Observaciones actuales (Stacey 1977). Esto se llamó “escala de tiempo geológica
dificultad” o “crisis lunar” y es uno de los principales argumentos de los creacionistas contra el
teoría de la fricción de mareas (Brush 1983).
Pero tenemos que considerar el caso cuidadosamente en varios aspectos. Una posible
escenario es que la Tierra ha estado experimentando una evolución dinámica en toda escala de tiempo desde
su creación, las condiciones geológicas y físicas (como las posiciones del continente y
a la deriva, la corteza, fluctuación de la temperatura superficial como el efecto glacial/snowball, etc.)
pasado remoto podría ser sustancialmente diferente de la actual, en la que la fricción de mareas
podría ser mucho menos; por lo tanto, la tasa de descenso de la Luna podría ser más lenta. Varios
En el pasado se propusieron modelos de fricción de mareas para describir la evolución de la Tierra-
Sistema lunar para evitar tal dificultad o crisis y poner a la Luna en un lugar bastante cómodo
distancia de la Tierra hace 4.500 millones de años (Hansen 1982, Kagan y Maslova 1994, Ray
et al. 1999, Finch 1981, Slichter 1963). Las teorías de la fricción de marea explican que el presente
la tasa de disipación de las mareas es anomalosamente alta porque la fuerza de las mareas está cerca de una resonancia
en la función de respuesta del océano (Brush 1983). Kagan dio una revisión detallada sobre los
modelos de fricción de mareas (Kagan 1997). Estos modelos se basan en muchos supuestos sobre
condiciones geológicas (posición continental y deriva) y físicas en el pasado, y
muchos parámetros (como el ángulo de retardo de fase, la aproximación multimodo con el tiempo
frecuencias dependientes de los modos de resonancia, etc.) tienen que ser introducidos y cuidadosamente
ajustados para hacer sus predicciones cerca de la evidencia geológica. Sin embargo, los
los supuestos y parámetros siguen siendo cuestionados, en cierta medida, como brebaje.
El segundo escenario posible es que otro mecanismo podría dominar el
la evolución del sistema Tierra-Luna y el papel de la fricción de mareas no es significativo. In
la Reunión de la División de Partículas y Campo 2004, American Physical Society,
Universidad de California en Riverside, el autor propuso un modelo de fluido de campo de materia oscura
(Pan 2005) con un enfoque no newtoniano, los datos actuales de la Luna y la Tierra están de acuerdo con
este modelo muy bien. Este documento demostrará que la evolución pasada de la Luna-Tierra
sistema puede ser descrito por el modelo de fluido de campo de materia oscura sin ninguna suposición
sobre las condiciones geológicas y físicas del pasado. Aunque el tema de la evolución de
el sistema Tierra-Luna ha sido ampliamente estudiado analítica o numéricamente, a la
conocimiento del autor, no hay teorías similares o equivalentes a este modelo.
2. Materia invisible
En la cosmología moderna, se propuso que la materia visible en el universo es
aproximadamente el 2 ~ 10 % de la materia total y alrededor del 90 ~ 98% de la materia total es actualmente
invisible que se llama materia oscura y energía oscura, tal materia invisible tiene un anti-
propiedad de gravedad para hacer que el universo se expanda más y más rápido.
Si la proporción de los componentes de materia del universo está cerca de esta hipótesis,
entonces, la evolución del universo debe ser dominada por el mecanismo físico de
tal materia invisible, tal mecanismo físico podría estar mucho más allá de la corriente
La física newtoniana y la física Einsteiniana, y la física Newtoniana y la Einsteiniana
la física podría reflejar sólo un rincón del iceberg de la física mayor.
Si la proporción de los componentes de materia del universo está cerca de esta hipótesis,
entonces, debería ser más razonable pensar que tal materia invisible dominante se propaga en
en todas partes del universo (la densidad de la materia invisible puede variar de un lugar a otro
lugar); en otras palabras, todos los objetos de materia visible deben estar rodeados por tales invisibles
materia y el movimiento de la materia visible objetos deben ser afectados por el invisible
materia si hay interacciones entre la materia visible y la materia invisible.
Si la proporción de los componentes de materia del universo está cerca de esta hipótesis,
entonces, el tamaño de las partículas de la materia invisible debe ser muy pequeño y por debajo de la
límite de detección de la tecnología actual; de lo contrario, se detectaría hace mucho tiempo
con tal cantidad dominante.
Con esta materia invisible en mente, nos movemos a la siguiente sección para desarrollar la
Modelo de fluido de campo de materia oscura con enfoque no newtoniano. Para la simplicidad, todos invisibles
materia (materia oscura, energía oscura y otros términos posibles) se llama materia oscura aquí.
3. El modelo de fluido de campo de materia oscura
En este modelo propuesto, se supone que:
1. Un cuerpo celeste gira y se mueve en el espacio, que, para la simplicidad, es uniforme
lleno de la materia oscura que está en estado de quiescencia relativa al movimiento del
cuerpo celeste. La materia oscura posee una propiedad de campo y una propiedad fluida; puede
interactúe con el cuerpo celeste con sus propiedades de fluido y campo; por lo tanto, puede tener
intercambio de energía con el cuerpo celeste, y afectan el movimiento del cuerpo celeste.
2. La propiedad del fluido sigue el principio general de la mecánica del fluido. La materia oscura
partículas de líquido de campo pueden ser tan pequeñas que fácilmente pueden impregnarse en ordinario
materia “barionica”; es decir, los objetos de materia ordinaria podrían estar saturados con tal materia oscura
fluido de campo. Por lo tanto, todo el cuerpo celestial interactúa con el fluido del campo de materia oscura, en el
forma de una esponja que se mueve a través del agua. La naturaleza de la propiedad de campo de la materia oscura
se desconoce el líquido del campo. Se asume aquí que la interacción del campo asociado con
el fluido del campo de materia oscura con el cuerpo celestial es proporcional a la masa del
cuerpo celeste. El fluido del campo de materia oscura se supone que tiene una fuerza repulsiva contra el
fuerza gravitatoria hacia la materia bariónica. La naturaleza y el mecanismo de tal repulsivo
La fuerza es desconocida.
Con las suposiciones anteriores, uno puede estudiar cómo el fluido del campo de materia oscura puede
influir en el movimiento de un cuerpo celeste y comparar los resultados con las observaciones. Los
la forma común de los cuerpos celestes es esférica. Según la ley de Stokes, un rígido no-
esfera permeable que se mueve a través de un líquido quiescente con un Reynolds suficientemente bajo
número experimenta una fuerza de resistencia F
rvF 6−= (1)
donde v es la velocidad de movimiento, r es el radio de la esfera, y μ es la viscosidad del fluido
constante. La dirección de la fuerza de resistencia F en Eq. 1 es opuesto a la dirección de la
velocidad v. Para una esfera rígida que se mueve a través del fluido del campo de materia oscura, debido al doble
propiedades del fluido del campo de materia oscura y su permeación en la esfera, la fuerza F
puede no ser proporcional al radio de la esfera. Además, F puede ser proporcional a la
masa de la esfera debido a la interacción de campo. Por lo tanto, con los efectos combinados de
fluido y campo, la fuerza ejercida en la esfera por el fluido del campo de materia oscura es
se supone que es de la forma escalonada
(2) mvrF n= 16
donde n es un parámetro derivado de la saturación por el fluido de campo de materia oscura, el r1-n puede ser
visto como el radio efectivo con la misma unidad que r, m es la masa de la esfera, y η
es la constante del fluido del campo de materia oscura, que es equivalente a μ. La dirección de la
Fuerza de resistencia F en Eq. 2 es opuesto a la dirección de la velocidad v. La fuerza
descrita por Eq. 2 es dependiente de la velocidad y causa una aceleración negativa. De acuerdo con
Segunda ley del movimiento de Newton, la ecuación del movimiento para la esfera es
mvr
m n= 16 (3)
Entonces
(4) )6exp( 10 vtrv
No, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no.
donde v0 es la velocidad inicial (t = 0) de la esfera. Si la esfera gira alrededor de un
centro gravitacional masivo, hay tres fuerzas en la línea entre la esfera y el
centro gravitacional: (1) la fuerza gravitatoria, (2) la fuerza de aceleración centrípeta; y
(3) la fuerza repulsiva del fluido del campo de materia oscura. La fuerza de arrastre en Eq. 3 reduce la
velocidad orbital y hace que la esfera se mueva hacia el centro gravitacional.
Sin embargo, si la suma de la fuerza de aceleración centrípeta y la fuerza repulsiva es
más fuerte que la fuerza gravitacional, entonces, la esfera se moverá hacia afuera y se retirará de
el centro gravitacional. Este es el caso del interés aquí. Si la velocidad cambia en Eq. 3 es
suficientemente lento y la fuerza repulsiva es pequeña en comparación con la fuerza gravitacional y
fuerza de aceleración centrípeta, entonces la tasa de retroceso será en consecuencia relativamente
Lentamente. Por lo tanto, la fuerza gravitacional y la fuerza de aceleración centrípeta puede ser
aproximadamente tratados en equilibrio en cualquier momento. La ecuación pseudo equilibrio es
GMm 2
2 = (5)
donde G es la constante gravitacional, M es la masa del centro gravitacional, y R es
el radio de la órbita. Insertar v de Eq. 4 en Eq. 5 rendimientos
)12exp( 1
R n−= (6)
(7) )12exp( 10 trRR
n−=
donde
R = (8)
R0 es la distancia inicial al centro gravitacional. Tenga en cuenta que R aumenta exponencialmente
con el tiempo. El aumento de la energía orbital con el retroceso proviene del repulsivo
fuerza del fluido de campo de materia oscura. La tasa de recesión de la esfera es
dR n−= 112 (9)
La aceleración de la recesión es
( Rr
Rd n 21
12 − = ). (10)
La aceleración recesiva es positiva y proporcional a su distancia a la
centro gravitacional, así que la recesión es cada vez más rápida.
Según la mecánica de los fluidos, para una esfera rígida no permeable giratoria
alrededor de su eje central en el fluido quiescente, el par T ejercido por el fluido en la esfera
38 rT − = (11)
donde • es la velocidad angular de la esfera. La dirección del par en Eq. 11 es
opuesta a la dirección de la rotación. En el caso de una esfera que gira en el quiescente
Líquido de campo de materia oscura con velocidad angular, similar a Eq. 2, la T propuesta ejerció
en la esfera es
( ) mrT n 318 = (12)
La dirección del par en Eq. 12 es opuesto a la dirección de la rotación. Los
el par causa la aceleración angular negativa
= (13)
donde estoy el momento de inercia de la esfera en el fluido del campo de materia oscura
( )21
2 nrmI = (14)
Por lo tanto, la ecuación de rotación para la esfera en el fluido del campo de materia oscura es
dω/dt = −20πη r^(1−n) ω    (15)
Solving this equation yields

ω = ω0 exp(−20πη r^(1−n) t)    (16)

where ω0 is the initial angular velocity. One can see that the angular velocity of the sphere decreases exponentially with time and that the angular deceleration is proportional to the angular velocity.
For the same celestial sphere, combining Eq. 9 and Eq. 15 yields

[(1/ω)(dω/dt)] / [(1/R)(dR/dt)] = −20/12 = −5/3 ≈ −1.67    (17)

The significance of Eq. 17 is that it contains only observed data, with no assumptions and no undetermined parameters; therefore, it is a critical test for this model.

For two different celestial spheres 1 and 2 in the same gravitational system, combining Eq. 9 and Eq. 15 yields

[(1/ω1)(dω1/dt)] / [(1/R2)(dR2/dt)] = −(5/3)(r1/r2)^(1−n) = −1.67 (r1/r2)^(1−n)    (18)

This is another critical test for this model.
4. The current behavior of the Earth-Moon system agrees with the model

The Earth-Moon system is the simplest gravitational system. The solar system is complex; the Earth and the Moon experience not only the interaction of the Sun but also interactions of the other planets. Let us consider the Earth-Moon gravitational system as an isolated local gravitational system, i.e., the influence of the Sun and the other planets on the rotation and orbital motion of the Moon and on the rotation of the Earth is assumed negligible compared with the forces the Moon and the Earth exert on each other. Furthermore, the eccentricity of the Moon's orbit is small enough to be ignored. The data for the Moon and the Earth from the references (Dickey et al., 1994, and Lang, 1992) are listed below for the readers' convenience in verifying the calculation, because the data may vary slightly between data sources.
Moon:
Mean radius: r = 1738.0 km
Mass: m = 7.3483 × 10^25 grams
Rotation period = 27.321661 days
Angular velocity of the Moon ω = 2.6617 × 10^−6 rad s^−1
Mean distance to the Earth Rm = 384400 km
Mean orbital velocity v = 1.023 km s^−1
Orbital eccentricity e = 0.0549
Angular rotation acceleration rate = −25.88 ± 0.5 arcsec century^−2
= (−1.255 ± 0.024) × 10^−4 rad century^−2
= (−1.260 ± 0.024) × 10^−23 rad s^−2
Recession rate from the Earth = 3.82 ± 0.07 cm year^−1 = (1.21 ± 0.02) × 10^−9 m s^−1
Earth:
Mean radius: r = 6371.0 km
Mass: m = 5.9742 × 10^27 grams
Rotation period = 23 h 56 m 04.098904 s = 86164.098904 s
Angular velocity of rotation ω = 7.292115 × 10^−5 rad s^−1
Mean distance to the Sun Rm = 149,597,870.61 km
Mean orbital velocity v = 29.78 km s^−1
Angular acceleration of the Earth = (−5.5 ± 0.5) × 10^−22 rad s^−2
The Moon's angular rotation velocity and the increase of the mean distance to the Earth (recession rate) were obtained from the lunar laser ranging of the Apollo Program (Dickey et al., 1994). Inserting the data for the Moon's rotation and recession into Eq. 17, the result is

[(−1.26 × 10^−23)(3.92509 × 10^8)] / [(2.6617 × 10^−6)(1.21 × 10^−9)] = −1.54 ± 0.039    (19)

The distance R in Eq. 19 is from the center of the Moon to the center of the Earth, and the number 384400 km is assumed to be the distance from the Moon's surface to the Earth's surface. Eq. 19 is in good agreement with the theoretical value of −1.67. The result supports the model used here. The difference (about 7.8%) between the values −1.54 and −1.67 may come from several sources:
1. The orbit of the Moon is not a perfect circle.
2. The Moon is not a perfect rigid sphere.
3. The effect of the Sun and the other planets.
4. Errors in the data.
5. Possibly other unknown reasons.
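As a quick arithmetic check, the ratio in Eq. 19 can be recomputed directly from the tabulated data. This is a sketch; the center-to-center distance 392509 km = 384400 + 1738 + 6371 is the one stated in the text for Eq. 19:

```python
# Recompute the left-hand side of Eq. 19, [(1/omega)(d(omega)/dt)] / [(1/R)(dR/dt)],
# from the Moon data quoted above.
domega_dt = -1.26e-23   # rad s^-2, Moon's angular rotation acceleration
omega     = 2.6617e-6   # rad s^-1, Moon's angular velocity
dR_dt     = 1.21e-9     # m s^-1, Moon's recession rate
R         = 3.92509e8   # m, center-to-center distance (384400 + 1738 + 6371 km)

ratio = (domega_dt / omega) / (dR_dt / R)
print(round(ratio, 2))  # about -1.54, to be compared with the theoretical -5/3
```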
The two parameters n and η in Eq. 9 and Eq. 15 can be determined with two data sets. The third data set can then be used to further test the model. If this model correctly describes the current situation, it should give consistent results for the different motions. The values of n and η calculated from the three different data sets are listed below (note: the mean distance of the Moon to the Earth and the mean radii of the Moon and the Earth are used in the calculation).

The value of n: n = 0.64
From the Moon's rotation: η = 4.27 × 10^−22 s^−1 m^−1
From the Earth's rotation: η = 4.26 × 10^−22 s^−1 m^−1
From the Moon's recession: η = 4.64 × 10^−22 s^−1 m^−1
One can see that the three values of η are consistent within the range of error in the data. The mean value of η: η = (4.39 ± 0.22) × 10^−22 s^−1 m^−1
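Taking Eq. 9 in the form dR/dt = 12πη r^(1−n) R and Eq. 15 in the form dω/dt = −20πη r^(1−n) ω, the three quoted values of η follow from the tabulated data. This is a sketch; the center-to-center distance used in Eq. 19 is assumed for the recession entry:

```python
from math import pi

n = 0.64

def eta_from_rotation(domega_dt, omega, r):
    # Eq. 15 solved for eta: d(omega)/dt = -20*pi*eta*r**(1-n)*omega
    return -domega_dt / (20 * pi * r**(1 - n) * omega)

def eta_from_recession(dR_dt, R, r):
    # Eq. 9 solved for eta: dR/dt = 12*pi*eta*r**(1-n)*R
    return dR_dt / (12 * pi * r**(1 - n) * R)

eta_moon_rot  = eta_from_rotation(-1.26e-23, 2.6617e-6, 1.738e6)
eta_earth_rot = eta_from_rotation(-5.5e-22, 7.292115e-5, 6.371e6)
eta_moon_rec  = eta_from_recession(1.21e-9, 3.92509e8, 1.738e6)
print(eta_moon_rot, eta_earth_rot, eta_moon_rec)
# near 4.27e-22, 4.26e-22 and 4.64e-22, as quoted above
```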
Inserting the data for the Earth's rotation, the Moon's recession, and the value of n into Eq. 18, the result is

[(−5.5 × 10^−22)(3.92509 × 10^8)] / [(7.29 × 10^−5)(1.21 × 10^−9)] × (1738000/6371000)^(1−0.64) = −1.53 ± 0.14    (20)

This also agrees with the model used here.
The drag force exerted on the Moon's orbital motion by the dark matter field fluid is −1.11 × 10^8 N; this is negligibly small compared with the gravitational force between the Moon and the Earth, ~1.90 × 10^20 N. The torques exerted by the dark matter field fluid on the rotations of the Earth and the Moon are T = −5.49 × 10^16 N m and −1.15 × 10^12 N m, respectively.
5. The evolution of the Earth-Moon system

Sonett et al. found that the length of the terrestrial day 900 million years ago was about 19.2 hours, based on laminated tidal sediments on the Earth (Sonett et al., 1996). According to the model presented here, the length of the day at that time was about 19.2 hours; this agrees very well with Sonett et al.'s result.

Another critical aspect of modeling the evolution of the Earth-Moon system is to give a reasonable estimate of the closest distance of the Moon to the Earth when the system was established 4.5 billion years ago. Based on the dark matter field fluid model and the above results, the closest distance of the Moon to the Earth was about 259000 km (center to center) or 250900 km (surface to surface) 4.5 billion years ago, which is well beyond the Roche limit. In the modern astronomy textbook by Chaisson and McMillan (Chaisson and McMillan, 1993, p. 173), the estimated distance 4.5 billion years ago is 250000 km; this is probably the most reasonable number that astronomers believe, and it agrees excellently with the result of this model. The closest distance of the Moon to the Earth in Hansen's models was about 38 Earth radii, or 242000 km (Hansen, 1982).
According to this model, the length of the Earth's day was about 8 hours 4.5 billion years ago. Fig. 1 shows the evolution of the Earth-Moon distance and of the length of the Earth's day with the age of the Earth-Moon system as described by this model, together with data from Kvale et al. (1999), Sonett et al. (1996), and Scrutton (1978). One can see that those data fit this model very well over their time range.
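The back-extrapolations above follow from integrating the rate equations, in the forms dR/dt = 12πη r^(1−n) R (Eq. 9) and dω/dt = −20πη r^(1−n) ω (Eq. 15), backward from today's values. A sketch, using the mean η and n = 0.64, with the year taken as 3.156 × 10^7 s:

```python
from math import pi, exp

n, eta = 0.64, 4.39e-22
yr = 3.156e7                         # seconds per year
r_moon, r_earth = 1.738e6, 6.371e6   # mean radii in meters
R_now = 3.92509e8                    # m, center-to-center Earth-Moon distance
day_now = 23.934                     # hours, current length of the day

def shrink(t_years, coeff, r):
    # R (coeff 12, Eq. 9) and omega (coeff 20, Eq. 15) both evolve as
    # exp(coeff*pi*eta*r**(1-n)*t); going back in time R shrinks and omega
    # grows, so the day length (proportional to 1/omega) shrinks by this factor
    return exp(-coeff * pi * eta * r**(1 - n) * t_years * yr)

R_then = R_now * shrink(4.5e9, 12, r_moon)     # closest distance, 4.5 Gyr ago
day_45 = day_now * shrink(4.5e9, 20, r_earth)  # day length, 4.5 Gyr ago
day_09 = day_now * shrink(0.9e9, 20, r_earth)  # day length, 0.9 Gyr ago
print(round(R_then / 1e3), round(day_45, 1), round(day_09, 1))
# roughly 259000 km, 8 hours, and 19.2 hours, as stated in the text
```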
Fig. 2 shows the geological data of solar days year^−1 from Wells (1963) and from Sonett et al. (1996), together with the description (solid line) by this dark matter field fluid model from 900 million years ago to the present. One can see that the model agrees with the fossil data wonderfully.
The important difference between this model and the earlier models in describing the evolution of the Earth-Moon system is that this model is based only on the current data of the Earth-Moon system, with no assumptions about the conditions of the Earth's earlier rotation and continental drift. Based on this model, the Earth-Moon system has been evolving toward the current situation since it was established, and the recession rate of the Moon has been gradually increasing. This description does not, however, take into account special events that might have happened in the past and caused sudden significant changes in the motions of the Earth and the Moon, such as strong impacts by giant asteroids and comets, etc., because such impacts are very common in the universe. The general pattern of the evolution of the Earth-Moon system described by this model agrees with the geological evidence. Based on Eq. 9, the recession rate increases exponentially with time. One might imagine that the recession rate will then quickly become very large; in fact, the increase is extremely slow. The recession rate of the Moon will be 3.04 × 10^−9 m s^−1 after 10 billion years and 7.64 × 10^−9 m s^−1 after 20 billion years.
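The quoted future rates follow from Eq. 9 in the form dR/dt = 12πη r^(1−n) R, since dR/dt grows by the same exponential factor as R itself. A sketch using the mean η and the Moon data above:

```python
from math import pi, exp

n, eta, yr = 0.64, 4.39e-22, 3.156e7
r_moon = 1.738e6                      # m, mean radius of the Moon
rate_now = 1.21e-9                    # m s^-1, current recession rate

k = 12 * pi * eta * r_moon**(1 - n)   # exponential growth rate from Eq. 9
rates = [rate_now * exp(k * t * yr) for t in (10e9, 20e9)]   # 10 and 20 Gyr
print(rates)   # about 3.04e-9 and 7.64e-9 m s^-1
```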
However, whether the Moon's recession will continue or another mechanism will take over at some later time is not known. It should be understood that tidal friction does affect the evolution of the Earth itself, such as the structure of the surface crust, continental drift, and the evolution of the biosystem, etc.; it may also play a role in slowing the Earth's rotation; however, that role is not the dominant mechanism.
Unfortunately, there are no data available about changes in the orbital motion of the Earth or of the other members of the solar system. According to this model and the above results, the recession rate of the Earth should be 6.86 × 10^−7 m s^−1 = 21.6 m year^−1 = 2.16 km century^−1, the length of a year should increase by about 6.8 ms, and the temperature change should be −1.8 × 10^−8 K year^−1 at a constant level of radiation from the Sun and with a stable environment on the Earth. The length of a year one billion years ago would then have been 80% of the current length of the year. However, much evidence (growth bands of corals and shellfish, as well as other evidence) suggests that there has been no apparent change in the length of the year over the last billion years, and that the orbital motion of the Earth is more stable than its rotation. This suggests that the dark matter field fluid is circulating around the Sun in the same direction and with a speed similar to the Earth's (at least in the Earth's orbital range). Therefore, the Earth's orbital motion experiences very little or no drag force from the dark matter field fluid. This is, however, a conjecture; extensive research has to be conducted to verify whether this is the case.
6. The skeptical description of the evolution of Mars

The Moon has no liquid fluid on its surface, not even air; therefore, there is no ocean-like tidal friction force to slow its rotation. Nevertheless, the rotation of the Moon is still slowing at the significant rate of (−1.260 ± 0.024) × 10^−23 rad s^−2, which agrees with the model very well. Based on this, one can reasonably expect that the rotation of Mars should also be slowing.

Mars is our closest neighbor and has attracted great human attention since ancient times. The exploration of Mars has been heating up in recent decades. NASA, the Russian Space Agency, and Europe have sent many spacecraft to Mars to collect data and study this mysterious planet. So far there are still not enough data about the history of this planet to describe its evolution. Like the Earth, Mars rotates about its central axis and revolves around the Sun; however, Mars does not have a massive moon (Mars has two small satellites: Phobos and Deimos), and there is no liquid fluid on its surface, so there is no apparent ocean-like tidal friction force to slow its rotation through tidal friction theories. Based on the above results and the current data of Mars, this model predicts that the angular acceleration of Mars should be about −4.38 × 10^−22 rad s^−2. Figure 3 describes the possible evolution of the length of the day and of the solar days/year of Mars; the vertical line marks the current age of Mars, under the assumption that Mars formed in a period similar to that of the Earth's formation. Such a description has not been given before, to the author's knowledge, and is completely skeptical because of the lack of reliable data. However, with the further expansion of research on and exploration of Mars, reliable data about the angular rotation acceleration of Mars should become available in the near future, which will provide a critical test of this model's prediction. There are also other factors that may affect the rotation rate of Mars, such as mass redistribution due to seasonal changes, winds, possible volcanic eruptions, and Marsquakes. The data will therefore have to be carefully analyzed.
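The Mars prediction can be reproduced the same way from Eq. 15 in the form dω/dt = −20πη r^(1−n) ω. The Martian radius and rotation rate below are standard values assumed for this sketch; they are not data given in the text:

```python
from math import pi

n, eta = 0.64, 4.39e-22
r_mars = 3.39e6        # m, mean radius of Mars (assumed standard value)
omega_mars = 7.088e-5  # rad s^-1, from the 24.62-hour Martian day (assumed)

domega_dt = -20 * pi * eta * r_mars**(1 - n) * omega_mars   # Eq. 15
print(domega_dt)       # close to the -4.38e-22 rad s^-2 predicted in the text
```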
7. Discussion about the model

From the above results, one can see that the current Earth-Moon data and the geological and fossil data agree with the model very well, and the past evolution of the Earth-Moon system can be described by the model without introducing any additional parameters. This model reveals an interesting relationship between the rotation and the recession (Eq. 17 and Eq. 18) of the same celestial body, or of different celestial bodies in the same gravitational system; such a relationship was not known before. Such success can hardly be explained by "coincidence" or "luck" if one thinks this is just an "ad hoc" or a wrong model, given the large amount of data (the Earth-Moon data and the geological and fossil data), although the probability of such "coincidence" or "luck" could be greater than that of winning a lottery jackpot; the future data of Mars will clarify this. Otherwise, a new theory developed from a different approach could give the same or a better description than this model does. It is certain that this model is not perfect and may have defects; further development may be conducted.
James Clerk Maxwell said in 1873: "The vast interplanetary and interstellar regions will no longer be regarded as waste places in the universe, which the Creator has not seen fit to fill with the symbols of the manifold order of His kingdom. We shall find them to be already full of this wonderful medium; so full, that no human power can remove it from the smallest portion of space, or produce the slightest flaw in its infinite continuity. It extends unbroken from star to star ...". The medium Maxwell talked about is the aether, which was proposed as the carrier of light wave propagation. The Michelson-Morley experiment only proved that the propagation of light waves does not depend on such a medium; it did not reject the existence of a medium in interstellar space. In fact, the concept of an interstellar medium has developed dramatically in recent times, in the forms of dark matter, dark energy, cosmic fluid, etc. The dark matter field fluid is just part of such a wonderful medium, which Maxwell "precisely" described.
8. Conclusion

The evolution of the Earth-Moon system can be described by the dark matter field fluid model with a non-Newtonian approach, and the current data of the Earth and the Moon fit this model very well. 4.5 billion years ago, the closest distance of the Moon to the Earth could have been about 259000 km, which is well beyond the Roche limit, and the length of the day was about 8 hours. The general pattern of the evolution of the Earth-Moon system described by this model agrees with the geological and fossil evidence. Tidal friction may not be the primary cause of the evolution of the Earth-Moon system. The rotation of Mars is also slowing, with an angular acceleration rate of about −4.38 × 10^−22 rad s^−2.
References
S. G. Brush, 1983. In: L. R. Godfrey (Editor), Ghosts from the nineteenth century: creationist arguments for a young Earth. Scientists Confront Creationism. W. W. Norton & Company, New York, London, pp.
E. Chaisson and S. McMillan, 1993. Astronomy Today. Prentice Hall, Englewood Cliffs, NJ 07632.
J. O. Dickey, et al., 1994. Science, 265, 482.
D. G. Finch, 1981. Earth, Moon, and Planets, 26(1), 109.
K. S. Hansen, 1982. Rev. Geophys. and Space Phys., 20(3), 457.
W. K. Hartmann, D. R. Davis, 1975. Icarus, 24, 504.
B. A. Kagan, N. B. Maslova, 1994. Earth, Moon, and Planets, 66, 173.
B. A. Kagan, 1997. Prog. Oceanog., 40, 109.
E. P. Kvale, H. W. Johnson, C. O. Sonett, A. W. Archer, and A. Zawistoski, 1999. J. Sediment. Res., 69(6), 1154.
K. Lang, 1992. Astrophysical Data: Planets and Stars. Springer-Verlag, New York.
H. Pan, 2005. Internat. J. Modern Phys. A, 20(14), 3135.
R. D. Ray, B. G. Bills, B. F. Chao, 1999. J. Geophys. Res., 104(B8), 17653.
C. T. Scrutton, 1978. In: P. Brosche, J. Sundermann (Editors), Tidal Friction and the Earth's Rotation. Springer-Verlag, Berlin, pp. 154.
L. B. Slichter, 1963. J. Geophys. Res., 68, 14.
C. P. Sonett, E. P. Kvale, M. A. Chan, T. M. Demko, 1996. Science, 273, 100.
F. D. Stacey, 1977. Physics of the Earth, second edition. John Wiley & Sons.
J. W. Wells, 1963. Nature, 197, 948.
Figure captions

Figure 1. The evolution of the Moon's distance and of the length of the Earth's day with the age of the Earth-Moon system. The solid lines are calculated according to the dark matter field fluid model. Data sources: the Moon's distances are from Kvale et al.; for the length of the day, (a) and (b) are from Scrutton (page 186, fig. 8) and (c) is from Sonett et al. The vertical line marks the current age of the Earth-Moon system.

Figure 2. The evolution of the number of solar days in a year with the age of the Earth-Moon system. The solid line is calculated according to the dark matter field fluid model. The data are from Wells (range of 3.9 ~ 4.435 billion years), Sonett et al. (3.6 billion years), and the current age (4.5 billion years).

Figure 3. The skeptical description of the evolution of the length of the day of Mars and of the solar days/year of Mars with the age of Mars (assuming that the age of Mars is about 4.5 billion years). The vertical line marks the current age of Mars.
[Figure 1 plot: the Moon's distance and the length of the Earth's day versus the age of the Earth-Moon system (0 to 5 × 10^9 years), with the Roche limit and Hansen's result marked.]

[Figure 2 plot: solar days/year versus the age of the Earth (3.5 to 4.6 × 10^9 years).]
| A Determinant of Stirling Cycle Numbers Counts Unlabeled
Acyclic Single-Source Automata
DAVID CALLAN
Department of Statistics
University of Wisconsin-Madison
1300 University Ave
Madison, WI 53706-1532
callan@stat.wisc.edu
March 30, 2007
Abstract
We show that a determinant of Stirling cycle numbers counts unlabeled acyclic
single-source automata. The proof involves a bijection from these automata to
certain marked lattice paths and a sign-reversing involution to evaluate the deter-
minant.
1 Introduction

The chief purpose of this paper is to show bijectively that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. Specifically, let Ak(n) denote the kn × kn matrix with (i, j) entry [ ⌊(i−1)/k⌋ + 2 over ⌊(i−1)/k⌋ + 1 + i − j ], where [ i over j ] is the Stirling cycle number, the number of permutations on [i] with j cycles. For example,
A2(5) =
1 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
0 1 3 2 0 0 0 0 0 0
0 0 1 3 2 0 0 0 0 0
0 0 0 1 6 11 6 0 0 0
0 0 0 0 1 6 11 6 0 0
0 0 0 0 0 1 10 35 50 24
0 0 0 0 0 0 1 10 35 50
0 0 0 0 0 0 0 1 15 85
0 0 0 0 0 0 0 0 1 15
http://arxiv.org/abs/0704.0004v1
As evident in the example, Ak(n) is formed from k copies of each of rows 2 through n+1
of the Stirling cycle triangle, arranged so that the first nonzero entry in each row is a 1
and, after the first row, this 1 occurs just before the main diagonal; in other words, Ak(n)
is a Hessenberg matrix with 1s on the infra-diagonal. We will show
Main Theorem. The determinant of Ak(n) is the number of unlabeled acyclic single-
source automata with n transient states on a (k + 1)-letter input alphabet.
Section 2 reviews basic terminology for automata and recurrence relations to count
finite acyclic automata. Section 3 introduces column-marked subdiagonal paths, which
play an intermediate role, and a way to code them. Section 4 presents a bijection from
these column-marked subdiagonal paths to unlabeled acyclic single-source automata. Fi-
nally, Section 5 evaluates detAk(n) using a sign-reversing involution and shows that the
determinant counts the codes for column-marked subdiagonal paths.
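The matrix in the definition is easy to generate. The sketch below builds Ak(n) from the entry formula [ ⌊(i−1)/k⌋ + 2 over ⌊(i−1)/k⌋ + 1 + i − j ] as reconstructed above, which reproduces the displayed A2(5):

```python
def stirling_cycle(n, k):
    # unsigned Stirling numbers of the first kind:
    # the number of permutations of [n] with k cycles
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return stirling_cycle(n - 1, k - 1) + (n - 1) * stirling_cycle(n - 1, k)

def A(k, n):
    # (i, j) entry is [ floor((i-1)/k)+2 over floor((i-1)/k)+1+i-j ]
    return [[stirling_cycle((i - 1) // k + 2, (i - 1) // k + 1 + i - j)
             for j in range(1, k * n + 1)]
            for i in range(1, k * n + 1)]

for row in A(2, 5):
    print(row)   # matches the rows of the displayed A2(5)
```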
2 Automata
A (complete, deterministic) automaton consists of a set of states and an input alphabet
whose letters transform the states among themselves: a letter and a state produce another
state (possibly the same one). A finite automaton (finite set of states, finite input alphabet
of, say, k letters) can be represented as a k-regular directed multigraph with ordered edges:
the vertices represent the states and the first, second, . . . edge from a vertex give the effect
of the first, second, . . . alphabet letter on that state. A finite automaton cannot be acyclic
in the usual sense of no cycles: pick a vertex and follow any path from it. This path must
ultimately hit a previously encountered vertex, thereby creating a cycle. So the term
acyclic is used in the looser sense that only one vertex, called the sink, is involved in
cycles. This means that all edges from the sink loop back to itself (and may safely be
omitted) and all other paths feed into the sink.
A non-sink state is called transient. The size of an acyclic automaton is the number of
transient states. An acyclic automaton of size n thus has transient states which we label
1, 2, . . . , n and a sink, labeled n + 1. Liskovets [1] uses the inclusion-exclusion principle
(more about this below) to obtain the following recurrence relation for the number ak(n)
of acyclic automata of size n on a k-letter input alphabet (k ≥ 1):
ak(0) = 1; ak(n) = ∑_{j=0}^{n−1} (−1)^{n−j−1} (n choose j) (j + 1)^{k(n−j)} ak(j), n ≥ 1.
A source is a vertex with no incoming edges. A finite acyclic automaton has at least
one source because a path traversed backward v1 ← v2 ← v3 ← . . . must have distinct
vertices and so cannot continue indefinitely. An automaton is single-source (or initially
connected) if it has only one source. Let Bk(n) denote the set of single-source acyclic
finite (SAF) automata on a k-letter input alphabet with vertices 1, 2, . . . , n + 1 where 1
is the source and n + 1 is the sink, and set bk(n) = | Bk(n) |. The two-line representation
of an automaton in Bk(n) is the 2× kn matrix whose columns list the edges in order. For
example,
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
2 4 6 6 6 6 6 6 6 3 5 3 2 2 6

is in B3(5), and the source-to-sink paths in B include 1 −c→ 6, 1 −a→ 2 −a→ 6, and 1 −b→ 4 −a→ 3 −a→ 6, where the alphabet is {a, b, c}.
Proposition 1. The number bk(n) of SAF automata of size n on a k-letter input alphabet
(n, k ≥ 1) is given by
bk(n) = ∑_{i=1}^{n} (−1)^{n−i} (n−1 choose i−1) (i + 1)^{k(n−i)} ak(i).

Remark. This formula is a bit more succinct than the recurrence in [1, Theorem 3.2].
Proof Consider the set A of acyclic automata with transient vertices [n] = {1, 2, . . . , n} in which 1 is a source. Call 2, 3, . . . , n the interior vertices. For X ⊆ [2, n], let
f(X) = # automata in A in which every vertex of X is a source,
g(X) = # automata in A whose set of sources in [2, n] is precisely X.
Then f(X) = ∑_{Y : X ⊆ Y ⊆ [2,n]} g(Y) and, by Möbius inversion [2] on the lattice of subsets of [2, n], g(X) = ∑_{Y : X ⊆ Y ⊆ [2,n]} µ(X, Y) f(Y), where µ(X, Y) is the Möbius function for this lattice. Since µ(X, Y) = (−1)^{|Y|−|X|} if X ⊆ Y, we have in particular that

g(∅) = ∑_{Y ⊆ [2,n]} (−1)^{|Y|} f(Y).    (1)
Let |Y| = n − i, so that 1 ≤ i ≤ n. When Y consists entirely of sources, the vertices in [n + 1]\Y and their incident edges form a subautomaton with i transient states; there are ak(i) such. Also, all edges from the n − i vertices comprising Y go directly into [n + 1]\Y: (i + 1)^{k(n−i)} choices. Thus f(Y) = (i + 1)^{k(n−i)} ak(i). By definition, g(∅) is the number of automata in A for which 1 is the only source, that is, g(∅) = bk(n), and the Proposition now follows from (1).
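Both counting formulas are easy to put to work. This sketch computes ak(n) by Liskovets' recurrence and bk(n) by Proposition 1, then checks that b2(n)/(n − 1)! reproduces the unlabeled counts 1, 3, 16, 127 of sequence A082161 quoted later in the paper:

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def a(k, n):
    # Liskovets' recurrence for acyclic automata with n transient states
    if n == 0:
        return 1
    return sum((-1) ** (n - j - 1) * comb(n, j) * (j + 1) ** (k * (n - j)) * a(k, j)
               for j in range(n))

def b(k, n):
    # Proposition 1: single-source acyclic (SAF) automata
    return sum((-1) ** (n - i) * comb(n - 1, i - 1) * (i + 1) ** (k * (n - i)) * a(k, i)
               for i in range(1, n + 1))

unlabeled = [b(2, n) // factorial(n - 1) for n in range(1, 5)]
print(unlabeled)   # [1, 3, 16, 127]
```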
An unlabeled SAF automaton is an equivalence class of SAF automata under relabeling
of the interior vertices. Liskovets notes [1] (and we prove below) that Bk(n) has no
nontrivial automorphisms, that is, each of the (n− 1)! relabelings of the interior vertices
of B ∈ Bk(n) produces a different automaton. So unlabeled SAF automata of size n on a k-letter alphabet are counted by bk(n)/(n − 1)!. The next result establishes a canonical
representative in each relabeling class.
Proposition 2. Each equivalence class in Bk(n) under relabeling of interior vertices has
size (n− 1)! and contains exactly one SAF automaton with the “last occurrences increas-
ing” property: the last occurrences of the interior vertices—2, 3, . . . , n—in the bottom row
of its two-line representation occur in that order.
Proof The first assertion follows from the fact that the interior vertices of an automaton B ∈ Bk(n) can be distinguished intrinsically, that is, independently of their labeling.
To see this, first mark the source, namely 1, with a mark (new label) v1 and observe that
there exists at least one interior vertex whose only incoming edge(s) are from the source
(the only currently marked vertex) for otherwise a cycle would be present. For each such
interior vertex v, choose the last edge from the marked vertex to v using the built-in
ordering of these edges. This determines an order on these vertices; mark them in order
v2, v3, . . . , vj (j ≥ 2). If there still remain unmarked interior vertices, at least one of them
has incoming edges only from a marked vertex or again a cycle would be present. For
each such vertex, use the last incoming edge from a marked vertex, where now edges are
arranged in order of initial vertex vi with the built-in order breaking ties, to order and
mark these vertices vj+1, vj+2, . . .. Proceed similarly until all interior vertices are marked.
For example, for
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
2 4 6 6 6 6 6 6 6 3 5 3 2 2 6
v1 = 1 and there is just one interior vertex, namely 4, whose only incoming edge is from
the source, and so v2 = 4 and 4 becomes a marked vertex. Now all incoming edges to
both 3 and 5 are from marked vertices, and the last such edges (built-in order comes into play) are 4 −b→ 5 and 4 −c→ 3, putting vertices 3, 5 in the order 5, 3. So v3 = 5 and v4 = 3.
Finally, v5 = 2. This proves the first assertion. By construction of the vs, relabeling each
interior vertex i with the subscript of its corresponding v produces an automaton in Bk(n)
with the “last occurrences increasing” property and is the only relabeling that does so.
The example yields
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
5 2 6 4 3 4 5 5 6 6 6 6 6 6 6
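The marking procedure in the proof is effectively an algorithm. The following sketch canonicalizes a labeled SAF automaton given by the bottom row of its two-line representation, and reproduces the example just shown:

```python
def canonicalize(bottom, n, k):
    # vertices 1..n transient (1 = source), n+1 = sink; bottom lists the k*n
    # edge targets in column order; returns the canonical bottom row
    edges = {v: bottom[(v - 1) * k:(v - 1) * k + k] for v in range(1, n + 1)}
    order = [1]                          # v1 = the source
    while len(order) < n:
        marked = set(order)
        # interior vertices whose incoming edges all come from marked vertices
        ready = [v for v in range(2, n + 1) if v not in marked
                 and all(u in marked for u in range(1, n + 1) if v in edges[u])]
        def last_edge(v):
            # last incoming edge, scanning initial vertices in marking order,
            # with the built-in letter order breaking ties
            return max((order.index(u), pos)
                       for u in order for pos, w in enumerate(edges[u]) if w == v)
        ready.sort(key=last_edge)
        order.extend(ready)
    relabel = {old: new + 1 for new, old in enumerate(order)}
    relabel[n + 1] = n + 1
    out = []
    for new in range(1, n + 1):          # regroup columns under the new labels
        out.extend(relabel[w] for w in edges[order[new - 1]])
    return out

bottom = [2, 4, 6, 6, 6, 6, 6, 6, 6, 3, 5, 3, 2, 2, 6]
print(canonicalize(bottom, n=5, k=3))
# [5, 2, 6, 4, 3, 4, 5, 5, 6, 6, 6, 6, 6, 6, 6]
```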
Now let Ck(n) denote the set of canonical SAF automata in Bk(n) representing unlabeled automata; thus |Ck(n)| = bk(n)/(n − 1)!. Henceforth, we identify an unlabeled automaton with its canonical representative.
3 Column-Marked Subdiagonal Paths
A subdiagonal (k, n, p)-path is a lattice path of steps E = (1, 0) and N = (0, 1), E for east and N for north, from (0, 0) to (kn, p) that never rises above the line y = x/k. Let Ck(n, p) denote the set of such paths. For k ≥ 1, it is clear that Ck(n, p) is nonempty only for 0 ≤ p ≤ n, and it is known (generalized ballot theorem) that

|Ck(n, p)| = (kn − kp + 1)/(kn + p + 1) × (kn + p + 1 choose p).

A path P in Ck(n, n) can be coded by the heights of its E steps above the line y = −1; this gives a sequence (bi)_{i=1}^{kn} subject to the restrictions 1 ≤ b1 ≤ b2 ≤ . . . ≤ bkn and bi ≤ ⌈i/k⌉ for all i.
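Both the ballot-number formula and the path definition can be cross-checked by brute force for small parameters (a sketch):

```python
from math import comb

def count_paths(k, n, p):
    # count subdiagonal (k, n, p)-paths by recursion over lattice points;
    # the path may never rise above the line y = x/k
    def go(x, y):
        if (x, y) == (k * n, p):
            return 1
        total = 0
        if x < k * n:
            total += go(x + 1, y)          # E step
        if y < p and k * (y + 1) <= x:     # N step keeps y <= x/k
            total += go(x, y + 1)
        return total
    return go(0, 0)

def ballot(k, n, p):
    # the generalized ballot number quoted in the text
    return (k * n - k * p + 1) * comb(k * n + p + 1, p) // (k * n + p + 1)

ok = all(count_paths(k, n, p) == ballot(k, n, p)
         for k in (1, 2) for n in (1, 2, 3) for p in range(n + 1))
print(ok)   # True
```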
A column-marked subdiagonal (k, n, p)-path is one in which, for each i ∈ [1, kn], one of the lattice squares below the ith E step and above the horizontal line y = −1 is marked, say with a '∗'. Let C∗k(n, p) denote the set of such marked paths.

[Figure: a column-marked path in C∗2(4, 3), drawn in the grid with corners (0, 0) and (8, 4) between the reference lines y = −1 and y = x/2, with '∗' marks below three of its E steps.]
A marked path P∗ in C∗k(n, n) can be coded by a sequence of pairs (ai, bi)_{i=1}^{kn}, where (bi)_{i=1}^{kn} is the code for the underlying path P and ai ∈ [1, bi] gives the position of the ∗ in the ith column. The example is coded by (1, 1), (1, 1), (1, 2), (2, 2), (1, 2), (3, 3), (1, 3), (2, 3).
An explicit sum for |C∗k(n, n)| is

|C∗k(n, n)| = ∑ b1 b2 . . . bkn,

where the sum is over all sequences with 1 ≤ b1 ≤ b2 ≤ . . . ≤ bkn and bi ≤ ⌈i/k⌉ for all i, because the summand b1 b2 . . . bkn is the number of ways to insert the '∗'s in the underlying path coded by (bi)_{i=1}^{kn}.
It is also possible to obtain a recurrence for |C∗k(n, p)| and then, using Prop. 1, to show analytically that |C∗k(n, n)| = |Ck+1(n)|. However, it is much more pleasant to give a bijection, and in the next section we will do so. In particular, the number of SAF automata on a 2-letter alphabet is

|C2(n)| = |C∗1(n, n)| = ∑_{1 ≤ b1 ≤ b2 ≤ . . . ≤ bn, bi ≤ i for all i} b1 b2 . . . bn = (1, 3, 16, 127, 1363, . . .)_{n≥1},

sequence A082161 in [3].
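The restricted sum can be evaluated directly; this sketch reproduces the quoted initial terms of A082161:

```python
def marked_path_count(n):
    # sum of b1*b2*...*bn over 1 <= b1 <= b2 <= ... <= bn with bi <= i
    def go(i, lo):
        if i > n:
            return 1
        return sum(b * go(i + 1, b) for b in range(lo, i + 1))
    return go(1, 1)

print([marked_path_count(n) for n in range(1, 6)])  # [1, 3, 16, 127, 1363]
```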
4 Bijection from Paths to Automata
In this section we exhibit a bijection from C∗k(n, n) to Ck+1(n). Using the illustrated path as a working example, with k = 2 and n = 4,

[Figure: the column-marked path shown in Section 3, from (0, 0) to (8, 4), with its '∗' marks.]
first construct the top row of a two-line representation consisting of k + 1 each of 1s, 2s, . . . , ns, and number the positions left to right.

The last step in the path is necessarily an N step. For the second-last, third-last, . . . N steps in the path, count the number of steps following each. This gives a sequence i1, i2, . . . , in−1 satisfying 1 ≤ i1 < i2 < . . . < in−1 and ij ≤ (k + 1)j for all j. Circle the positions i1, i2, . . . , in−1 in the two-line representation and then insert (in boldface) 2, 3, . . . , n in the second row in the circled positions:
1 1 1 2 2 2 3 3 3 4 4 4
2 _ _ _ 3 _ _ _ 4 _ _ _

(in the example, i1, i2, i3 = 1, 5, 9)
These will be the last occurrences of 2, 3, . . . , n in the second row. Working from the last
column in the path back to the first, fill in the blanks in the second row left to right as
follows. Count the number of squares from the ∗ up to the path (including the ∗ square)
and add this number to the nearest boldface number to the left of the current blank entry
(if there are no boldface numbers to the left, add this number to 1) and insert the result
in the current blank square. In the example the numbers of squares are 2,3,1,2,1,2,1,1
yielding
2 4 5 3 3 5 4 5 4 5 5
This will fill all blank entries except the last. Note that ∗ s in the bottom row correspond
to sink (that is, n+1) labels in the second row. Finally, insert n+1 into the last remaining
blank space to give the image automaton:
1 1 1 2 2 2 3 3 3 4 4 4
2 4 5 3 3 5 4 5 4 5 5 5
This process is fully reversible and the map is a bijection.
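As a sanity check, the construction just described can be carried out mechanically. This sketch maps the working example's code to the bottom row of the image automaton shown above:

```python
def path_to_bottom_row(pairs, k, n):
    # pairs = (a_i, b_i) code of a marked path in C*_k(n, n); returns the
    # bottom row of the two-line representation of the image automaton
    b = [bi for _, bi in pairs]
    steps, y = [], 0
    for bi in b:                      # the i-th E step sits at height bi - 1
        while y < bi - 1:
            steps.append('N'); y += 1
        steps.append('E')
    while y < n:                      # finish with N steps up to height n
        steps.append('N'); y += 1
    n_pos = [p for p, s in enumerate(steps) if s == 'N']
    # i_j = number of steps following the (j+1)-th-from-last N step
    marks = [len(steps) - 1 - p for p in n_pos[-2::-1]]
    width = (k + 1) * n
    bottom = [None] * width
    bolds = [(pos - 1, j + 2) for j, pos in enumerate(marks)]
    for q, v in bolds:                # boldface 2, 3, ..., n in circled spots
        bottom[q] = v
    # square counts, taken from the last column of the path back to the first
    counts = [bi - ai + 1 for ai, bi in reversed(pairs)]
    blanks = [q for q in range(width) if bottom[q] is None]
    for q, c in zip(blanks, counts):  # fill the blanks left to right
        left = [bv for bq, bv in bolds if bq < q]
        bottom[q] = (left[-1] if left else 1) + c
    bottom[blanks[-1]] = n + 1        # the sink fills the last blank
    return bottom

code = [(1, 1), (1, 1), (1, 2), (2, 2), (1, 2), (3, 3), (1, 3), (2, 3)]
print(path_to_bottom_row(code, k=2, n=4))
# [2, 4, 5, 3, 3, 5, 4, 5, 4, 5, 5, 5]
```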
5 Evaluation of detAk(n)
For simplicity, we treat the case k = 1, leaving the generalization to arbitrary k
as a not-too-difficult exercise for the interested reader. Write A(n) for A1(n). Thus
A(n) =
1≤i,j≤n
. From the definition of detA(n) as a sum of signed products, we
show that detA(n) is the total weight of certain lists of permutations, each list carrying
weight ±1. Then a weight-reversing involution cancels all −1 weights and reduces the
problem to counting the surviving lists. These surviving lists are essentially the codes for
paths in C
1(n, p), and the Main Theorem follows from §4.
To describe the permutations giving a nonzero contribution to detA(n) =
σ sgn σ×
i=1 ai,σ(i), define the code of a permutation σ on [n] to be the list c = (ci)
i=1 with
ci = σ(i)−(i−1). Since the (i, j) entry of A(n),
, is 0 unless j ≥ i−1, we must have
σ(i) ≥ i−1 for all i. It is well known that there are 2n−1 such permutations, corresponding
to compositions of n, with codes characterized by the following four conditions: (i) ci ≥ 0
for all i, (ii) c1 ≥ 1, (iii) each ci ≥ 1 is immediately followed by ci − 1 zeros in the list,
i=1 ci = n. Let us call such a list a padded composition of n: deleting the zeros
is a bijection to ordinary compositions of n. For example, (3, 0, 0, 1, 2, 0) is a padded
composition of 6. For a permutation σ with padded composition code c, the nonzero
entries in c give the cycle lengths of σ. Hence sgnσ, which is the parity of “n−#cycles
in σ”, is given by (−1)#0s in c.
We have detA(n) =
σ sgn σ
i=1 ai,σ(i) =
σ sgn σ
2i−σ(i)
, and so
detA(n) =
(−1)#0s in c
i+ 1− ci
where the sum is restricted to padded compositions c of n with ci ≤ i for all i (A002083)
because
i+1−ci
= 0 unless ci ≤ i.
Henceforth, let us write all permutations in standard cycle form whereby the smallest
entry occurs first in each cycle and these smallest entries increase left to right. Thus,
with dashes separating cycles, 154-2-36 is the standard cycle form of the permutation
( 1 2 3 4 5 65 2 6 1 4 3 ). We define a nonfirst entry to be one that does not start a cycle. Thus the
preceding permutation has 3 nonfirst entries: 5,4,6. Note that the number of nonfirst
entries is 0 only for the identity permutation. We denote an identity permutation (of any
size) by ǫ.
By definition of Stirling cycle number, the product in (2) counts lists (πi)
i=1 of permu-
tations where πi is a permutation on [i+1] with i+1− ci cycles, equivalently, with ci ≤ i
nonfirst entries. So define Ln to be the set all lists of permutations π = (πi)
i=1 where πi
is a permutation on [i + 1], #nonfirst entries in πi is ≤ i, π1 is the transposition (1,2),
each nonidentity permutation πi is immediately followed by ci − 1 ǫ’s where ci ≥ 1 is the
number of nonfirst entries in πi (so the total number of nonfirst entries is n). Assign a
weight to π ∈ Ln by wt(π) = (−1)
# ǫ’s in π. Then
detA(n) =
wt(π).
We now define a weight-reversing involution on (most of) Ln. Given π ∈ Ln, scan the
list of its component permutations π1 = (1, 2), π2, π3, . . . left to right. Stop at the first
one that either (i) has more than one nonfirst entry, or (ii) has only one nonfirst entry, b
say, and b > maximum nonfirst entry m of the next permutation in the list. Say πk is the
permutation where we stop.
http://www.research.att.com:80/cgi-bin/access.cgi/as/njas/sequences/eisA.cgi?Anum=A002083
In case (i) decrement (i.e. decrease by 1) the number of ǫ’s in the list by splitting πk
into two nonidentity permutations as follows. Let m be the largest nonfirst entry of πk
and let ℓ be its predecessor. Replace πk and its successor in the list (necessarily an ǫ) by
the following two permutations: first the transposition (ℓ,m) and second the permutation
obtained from πk by erasing m from its cycle and turning it into a singleton. Here are
two examples of this case (recall permutations are in standard cycle form and, for clarity,
singleton cycles are not shown).
i 1 2 3 4 5 6
πi 12 13 23 14-253 ǫ ǫ
i 1 2 3 4 5 6
πi 12 13 23 25 14-23 ǫ
i 1 2 3 4 5 6
πi 12 23 14 13-24 ǫ 23
i 1 2 3 4 5 6
πi 12 23 14 24 13 23
The reader may readily check that this sends case (i) to case (ii).
In case (ii), πk is a transposition (a, b) with b > maximum nonfirst entry m of πk+1. In
this case, increment the number of ǫ’s in the list by combining πk and πk+1 into a single
permutation followed by an ǫ: in πk+1, b is a singleton; delete this singleton and insert b
immediately after a in πk+1 (in the same cycle). The reader may check that this reverses
the result in the two examples above and, in general, sends case (ii) to case (i). Since the
map alters the number of ǫ’s in the list by 1, it is clearly weight-reversing. The map fails
only for lists that both consist entirely of transpositions and have the form
(a1, b1), (a2, b2), . . . , (an, bn) with b1 ≤ b2 ≤ . . . ≤ bn.
Such lists have weight 1. Hence detA(n) is the number of lists
(ai, bi)
satisfying
1 ≤ ai < bi ≤ i+ 1 for 1 ≤ i ≤ n, and b1 ≤ b2 ≤ . . . ≤ bn. After subtracting 1 from each
bi, these lists code the paths in C
1(n, n) and, using §4, detA(n) = |C
1(n, n) | = | C2(n) |.
References
[1] Valery A. Liskovets, Exact enumeration of acyclic deterministic au-
tomata, Disc. Appl. Math., in press, 2006. Earlier version available at
http://www.i3s.unice.fr/fpsac/FPSAC03/articles.html
http://www.i3s.unice.fr/fpsac/FPSAC03/articles.html
[2] J. H. van Lint and R. M. Wilson, A Course in Combinatorics, 2nd ed., Cambridge
University Press, NY, 2001.
[3] Neil J. Sloane (founder and maintainer), The On-Line Encyclopedia of Integer Se-
quences http://www.research.att.com:80/ njas/sequences/index.html?blank=1
http://www.research.att.com:80/~njas/sequences/index.html?blank=1
1 Introduction
The chief purpose of this paper is to show bijectively that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata.
Specifically, let $A_k(n)$ denote the $kn \times kn$ matrix with $(i,j)$ entry $\left[ {\lfloor \frac{i-1}{k} \rfloor + 2 \atop \lfloor \frac{i-1}{k} \rfloor + 1 + i - j} \right]$, where $\left[ {n \atop j} \right]$ is the Stirling cycle number, the number of permutations on $[n]$ with $j$ cycles. For example,
A2(5) =
1 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
0 1 3 2 0 0 0 0 0 0
0 0 1 3 2 0 0 0 0 0
0 0 0 1 6 11 6 0 0 0
0 0 0 0 1 6 11 6 0 0
0 0 0 0 0 1 10 35 50 24
0 0 0 0 0 0 1 10 35 50
0 0 0 0 0 0 0 1 15 85
0 0 0 0 0 0 0 0 1 15
http://arxiv.org/abs/0704.0004v1
As evident in the example, Ak(n) is formed from k copies of each of rows 2 through n+1
of the Stirling cycle triangle, arranged so that the first nonzero entry in each row is a 1
and, after the first row, this 1 occurs just before the main diagonal; in other words, Ak(n)
is a Hessenberg matrix with 1s on the infra-diagonal. We will show
Main Theorem. The determinant of Ak(n) is the number of unlabeled acyclic single-
source automata with n transient states on a (k + 1)-letter input alphabet.
Section 2 reviews basic terminology for automata and recurrence relations to count
finite acyclic automata. Section 3 introduces column-marked subdiagonal paths, which
play an intermediate role, and a way to code them. Section 4 presents a bijection from
these column-marked subdiagonal paths to unlabeled acyclic single-source automata. Fi-
nally, Section 5 evaluates detAk(n) using a sign-reversing involution and shows that the
determinant counts the codes for column-marked subdiagonal paths.
2 Automata
A (complete, deterministic) automaton consists of a set of states and an input alphabet
whose letters transform the states among themselves: a letter and a state produce another
state (possibly the same one). A finite automaton (finite set of states, finite input alphabet
of, say, k letters) can be represented as a k-regular directed multigraph with ordered edges:
the vertices represent the states, and the first, second, . . . edges from a vertex give the effect
of the first, second, . . . alphabet letters on that state. A finite automaton cannot be acyclic
in the usual sense of no cycles: pick a vertex and follow any path from it. This path must
ultimately hit a previously encountered vertex, thereby creating a cycle. So the term
acyclic is used in the looser sense that only one vertex, called the sink, is involved in
cycles. This means that all edges from the sink loop back to itself (and may safely be
omitted) and all other paths feed into the sink.
A non-sink state is called transient. The size of an acyclic automaton is the number of
transient states. An acyclic automaton of size n thus has transient states which we label
1, 2, . . . , n and a sink, labeled n + 1. Liskovets [1] uses the inclusion-exclusion principle
(more about this below) to obtain the following recurrence relation for the number ak(n)
of acyclic automata of size n on a k-letter input alphabet (k ≥ 1):
$$a_k(0) = 1; \qquad a_k(n) = \sum_{j=0}^{n-1} (-1)^{n-j-1} \binom{n}{j} (j+1)^{k(n-j)} a_k(j), \quad n \ge 1.$$
A source is a vertex with no incoming edges. A finite acyclic automaton has at least
one source because a path traversed backward v1 ← v2 ← v3 ← . . . must have distinct
vertices and so cannot continue indefinitely. An automaton is single-source (or initially
connected) if it has only one source. Let Bk(n) denote the set of single-source acyclic
finite (SAF) automata on a k-letter input alphabet with vertices 1, 2, . . . , n + 1 where 1
is the source and n + 1 is the sink, and set bk(n) = | Bk(n) |. The two-line representation
of an automaton in Bk(n) is the 2× kn matrix whose columns list the edges in order. For
example,
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
2 4 6 6 6 6 6 6 6 3 5 3 2 2 6
is in B3(5) and the source-to-sink paths in B include $1 \xrightarrow{c} 6$, $1 \xrightarrow{aa} 6$, and $1 \xrightarrow{baa} 6$ (the word over each arrow records the letters consumed), where the alphabet is {a, b, c}.
Proposition 1. The number bk(n) of SAF automata of size n on a k-letter input alphabet (n, k ≥ 1) is given by
$$b_k(n) = \sum_{i=1}^{n} (-1)^{n-i} \binom{n-1}{i-1} (i+1)^{k(n-i)} a_k(i).$$
Remark. This formula is a bit more succinct than the recurrence in [1, Theorem 3.2].
Proof. Consider the set A of acyclic automata with transient vertices [n] = {1, 2, . . . , n} in which 1 is a source. Call 2, 3, . . . , n the interior vertices. For X ⊆ [2, n], let
f(X) = # automata in A whose set of interior sources includes X,
g(X) = # automata in A whose set of interior sources is precisely X.
Then $f(X) = \sum_{Y : X \subseteq Y \subseteq [2,n]} g(Y)$ and, by Möbius inversion [2] on the lattice of subsets of [2, n], $g(X) = \sum_{Y : X \subseteq Y \subseteq [2,n]} \mu(X, Y) f(Y)$, where $\mu(X, Y)$ is the Möbius function for this lattice. Since $\mu(X, Y) = (-1)^{|Y| - |X|}$ for $X \subseteq Y$, we have in particular that
$$g(\emptyset) = \sum_{Y \subseteq [2,n]} (-1)^{|Y|} f(Y). \qquad (1)$$
Let |Y| = n − i so that 1 ≤ i ≤ n. When Y consists entirely of sources, the vertices in [n + 1]\Y and their incident edges form a subautomaton with i transient states; there are $a_k(i)$ such. Also, all edges from the n − i vertices comprising Y go directly into [n + 1]\Y: $(i+1)^{k(n-i)}$ choices. Thus $f(Y) = (i+1)^{k(n-i)} a_k(i)$. By definition, g(∅) is the number of automata in A for which 1 is the only source, that is, g(∅) = $b_k(n)$, and the Proposition now follows from (1).
An unlabeled SAF automaton is an equivalence class of SAF automata under relabeling
of the interior vertices. Liskovets notes [1] (and we prove below) that Bk(n) has no
nontrivial automorphisms, that is, each of the (n− 1)! relabelings of the interior vertices
of B ∈ Bk(n) produces a different automaton. So unlabeled SAF automata of size n on a k-letter alphabet are counted by $\frac{1}{(n-1)!} b_k(n)$. The next result establishes a canonical representative in each relabeling class.
representative in each relabeling class.
Proposition 2. Each equivalence class in Bk(n) under relabeling of interior vertices has
size (n− 1)! and contains exactly one SAF automaton with the “last occurrences increas-
ing” property: the last occurrences of the interior vertices—2, 3, . . . , n—in the bottom row
of its two-line representation occur in that order.
Proof. The first assertion follows from the fact that the interior vertices of an automaton B ∈ Bk(n) can be distinguished intrinsically, that is, independent of their labeling.
To see this, first mark the source, namely 1, with a mark (new label) v1 and observe that
there exists at least one interior vertex whose only incoming edge(s) are from the source
(the only currently marked vertex) for otherwise a cycle would be present. For each such
interior vertex v, choose the last edge from the marked vertex to v using the built-in
ordering of these edges. This determines an order on these vertices; mark them in order
v2, v3, . . . , vj (j ≥ 2). If there still remain unmarked interior vertices, at least one of them
has incoming edges only from a marked vertex or again a cycle would be present. For
each such vertex, use the last incoming edge from a marked vertex, where now edges are
arranged in order of initial vertex vi with the built-in order breaking ties, to order and
mark these vertices vj+1, vj+2, . . .. Proceed similarly until all interior vertices are marked.
For example, for
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
2 4 6 6 6 6 6 6 6 3 5 3 2 2 6
v1 = 1 and there is just one interior vertex, namely 4, whose only incoming edge is from
the source, and so v2 = 4 and 4 becomes a marked vertex. Now all incoming edges to
both 3 and 5 are from marked vertices and the last such edges (built-in order comes into play) are $4 \xrightarrow{b} 5$ and $4 \xrightarrow{c} 3$, putting vertices 3, 5 in the order 5, 3. So v3 = 5 and v4 = 3.
Finally, v5 = 2. This proves the first assertion. By construction of the vs, relabeling each
interior vertex i with the subscript of its corresponding v produces an automaton in Bk(n)
with the “last occurrences increasing” property and is the only relabeling that does so.
The example yields
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
5 2 6 4 3 4 5 5 6 6 6 6 6 6 6
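The marking procedure just described is effective and can be run mechanically. A minimal Python sketch (the function name canonical_form is ours; the input is the bottom row of the two-line representation of a single-source automaton, and ties within a round are broken exactly as described above):

```python
def canonical_form(k, bottom):
    # bottom: second row of the two-line representation; vertex v's k
    # ordered edges are bottom[(v-1)*k : v*k], and the sink is n + 1.
    n = len(bottom) // k
    out = {v: bottom[(v - 1) * k: v * k] for v in range(1, n + 1)}
    mark = {1: 1}                   # old label -> subscript of its mark v_i
    while len(mark) < n:
        ready = []
        for v in range(2, n + 1):
            if v in mark:
                continue
            incoming = [(u, pos) for u in range(1, n + 1)
                        for pos, t in enumerate(out[u]) if t == v]
            if incoming and all(u in mark for u, _ in incoming):
                # Key: the last incoming edge, edges ordered by the mark of
                # their initial vertex with built-in order breaking ties.
                ready.append((max((mark[u], p) for u, p in incoming), v))
        for _, v in sorted(ready):  # mark this round's vertices in order
            mark[v] = len(mark) + 1
    mark[n + 1] = n + 1             # the sink keeps its label
    inv = {new: old for old, new in mark.items()}
    return [mark[t] for j in range(1, n + 1) for t in out[inv[j]]]

print(canonical_form(3, [2, 4, 6, 6, 6, 6, 6, 6, 6, 3, 5, 3, 2, 2, 6]))
# -> [5, 2, 6, 4, 3, 4, 5, 5, 6, 6, 6, 6, 6, 6, 6], as in the example
```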
Now let Ck(n) denote the set of canonical SAF automata in Bk(n) representing unlabeled automata; thus $|C_k(n)| = \frac{1}{(n-1)!} b_k(n)$. Henceforth, we identify an unlabeled automaton with its canonical representative.
3 Column-Marked Subdiagonal Paths
A subdiagonal (k, n, p)-path is a lattice path of steps E = (1, 0) and N = (0, 1), E for east and N for north, from (0, 0) to (kn, p) that never rises above the line $y = \frac{1}{k}x$. Let Ck(n, p) denote the set of such paths. For k ≥ 1, it is clear that Ck(n, p) is nonempty only
for 0 ≤ p ≤ n, and it is known (generalized ballot theorem) that
$$|C_k(n, p)| = \frac{kn - kp + 1}{kn + p + 1} \binom{kn + p + 1}{p}.$$
A path P in Ck(n, n) can be coded by the heights of its E steps above the line y = −1; this gives a sequence $(b_i)_{i=1}^{kn}$ subject to the restrictions $1 \le b_1 \le b_2 \le \dots \le b_{kn}$ and $b_i \le \lceil i/k \rceil$ for all i.
A column-marked subdiagonal (k, n, p)-path is one in which, for each i ∈ [1, kn], one of the lattice squares below the ith E step and above the horizontal line y = −1 is marked, say with a '∗'. Let $C^*_k(n, p)$ denote the set of such marked paths.
[Figure: a column-marked subdiagonal path from (0, 0) to (8, 4), drawn with the boundary line y = x/2 and the baseline y = −1, with '∗' marks in the columns. Caption: A path in C*_2(4, 3).]
A marked path P∗ in $C^*_k(n, n)$ can be coded by a sequence of pairs $\big((a_i, b_i)\big)_{i=1}^{kn}$, where $(b_i)_{i=1}^{kn}$ is the code for the underlying path P and $a_i \in [1, b_i]$ gives the position of the ∗ in the ith column. The example is coded by (1, 1), (1, 1), (1, 2), (2, 2), (1, 2), (3, 3), (1, 3), (2, 3).
An explicit sum for $|C^*_k(n, n)|$ is
$$|C^*_k(n, n)| = \sum_{\substack{1 \le b_1 \le b_2 \le \dots \le b_{kn} \\ b_i \le \lceil i/k \rceil \text{ for all } i}} b_1 b_2 \cdots b_{kn},$$
because the summand $b_1 b_2 \cdots b_{kn}$ is the number of ways to insert the '∗'s in the underlying path coded by $(b_i)_{i=1}^{kn}$.
It is also possible to obtain a recurrence for $|C^*_k(n, p)|$ and then, using Prop. 1, to show analytically that $|C^*_k(n, n)| = |C_{k+1}(n)|$. However, it is much more pleasant to give a bijection, and in the next section we will do so. In particular, the number of SAF automata on a 2-letter alphabet is
$$|C_2(n)| = |C^*_1(n, n)| = \sum_{\substack{1 \le b_1 \le b_2 \le \dots \le b_n \\ b_i \le i \text{ for all } i}} b_1 b_2 \cdots b_n = (1, 3, 16, 127, 1363, \dots)_{n \ge 1},$$
sequence A082161 in [3].
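The explicit sum can be evaluated directly by enumerating the admissible sequences (b_i); a short Python sketch (the function name c_star is ours) reproduces the displayed sequence:

```python
def c_star(n):
    # Sum of b_1 * b_2 * ... * b_n over 1 <= b_1 <= ... <= b_n with
    # b_i <= i, i.e. |C*_1(n, n)|, the k = 1 case of the sum above.
    total = 0

    def rec(i, prev, prod):
        nonlocal total
        if i > n:
            total += prod
            return
        for bi in range(prev, i + 1):   # keep the sequence nondecreasing
            rec(i + 1, bi, prod * bi)

    rec(1, 1, 1)
    return total

print([c_star(n) for n in range(1, 6)])  # -> [1, 3, 16, 127, 1363]
```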
4 Bijection from Paths to Automata
In this section we exhibit a bijection from $C^*_k(n, n)$ to $C_{k+1}(n)$. Using the illustrated path as a working example with k = 2 and n = 4,
[Figure: the working-example column-marked path from (0, 0) to (8, 4), repeated from Section 3.]
first construct the top row of a two-line representation consisting of k + 1 each of 1s, 2s, . . . , ns, and number the positions 1 through (k + 1)n left to right:

1 1 1 2 2 2 3 3 3 4 4 4
The last step in the path is necessarily an N step. For the second-last, third-last, . . . N steps in the path, count the number of steps following it. This gives a sequence $i_1, i_2, \dots, i_{n-1}$ satisfying $1 \le i_1 < i_2 < \dots < i_{n-1}$ and $i_j \le (k+1)j$ for all j. Circle the positions $i_1, i_2, \dots, i_{n-1}$ in the two-line representation and then insert (in boldface) 2, 3, . . . , n in the second row in the circled positions (in the example, $(i_1, i_2, i_3) = (1, 5, 9)$):

1 1 1 2 2 2 3 3 3 4 4 4
2 _ _ _ 3 _ _ _ 4 _ _ _
These will be the last occurrences of 2, 3, . . . , n in the second row. Working from the last
column in the path back to the first, fill in the blanks in the second row left to right as
follows. Count the number of squares from the ∗ up to the path (including the ∗ square)
and add this number to the nearest boldface number to the left of the current blank entry
(if there are no boldface numbers to the left, add this number to 1) and insert the result
in the current blank square. In the example the numbers of squares are 2, 3, 1, 2, 1, 2, 1, 1, yielding

2 4 5 3 3 5 4 5 4 5 5 _
This will fill all blank entries except the last. Note that ∗'s in the bottom row correspond
to sink (that is, n+1) labels in the second row. Finally, insert n+1 into the last remaining
blank space to give the image automaton:
1 1 1 2 2 2 3 3 3 4 4 4
2 4 5 3 3 5 4 5 4 5 5 5
This process is fully reversible and the map is a bijection.
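The forward map can be programmed directly from the description above. A Python sketch (the function name paths_to_automaton is ours; it consumes the code ((a_i, b_i)) of a marked path and emits the bottom row of the image automaton, following the step-counting conventions of the worked example):

```python
def paths_to_automaton(k, n, pairs):
    # pairs: the code ((a_i, b_i)) of a marked path in C*_k(n, n).
    a = [p[0] for p in pairs]
    b = [p[1] for p in pairs]
    # Rebuild the step sequence: the i-th E step sits at height b_i - 1.
    steps, h = [], 0
    for bi in b:
        steps += ['N'] * (bi - 1 - h)
        h = bi - 1
        steps.append('E')
    steps += ['N'] * (n - h)           # trailing N steps; the last is ignored
    total = len(steps)                 # (k+1) * n positions in the top row
    n_pos = [j for j, s in enumerate(steps, 1) if s == 'N']
    # i_j = number of steps following the (j+1)-th-from-last N step.
    circled = sorted(total - p for p in n_pos[:-1])
    row = [None] * total
    for j, pos in enumerate(circled):  # boldface last occurrences 2, ..., n
        row[pos - 1] = j + 2
    # Number of squares from the mark up to the path, last column first.
    counts = [b[i] - a[i] + 1 for i in reversed(range(k * n))]
    blanks = [i for i in range(total) if row[i] is None]
    for i, v in zip(blanks, counts):   # fill the blanks left to right
        bold = [row[p - 1] for p in circled if p - 1 < i]
        row[i] = (bold[-1] if bold else 1) + v
    row[blanks[-1]] = n + 1            # the final blank gets the sink label
    return row

example = [(1, 1), (1, 1), (1, 2), (2, 2), (1, 2), (3, 3), (1, 3), (2, 3)]
print(paths_to_automaton(2, 4, example))
# -> [2, 4, 5, 3, 3, 5, 4, 5, 4, 5, 5, 5], the image automaton above
```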
5 Evaluation of detAk(n)
For simplicity, we treat the case k = 1, leaving the generalization to arbitrary k
as a not-too-difficult exercise for the interested reader. Write A(n) for A1(n). Thus
$$A(n) = \left( \left[ {i+1 \atop 2i-j} \right] \right)_{1 \le i, j \le n}.$$
From the definition of det A(n) as a sum of signed products, we
show that detA(n) is the total weight of certain lists of permutations, each list carrying
weight ±1. Then a weight-reversing involution cancels all −1 weights and reduces the
problem to counting the surviving lists. These surviving lists are essentially the codes for
paths in $C^*_1(n, n)$, and the Main Theorem follows from §4.
To describe the permutations giving a nonzero contribution to $\det A(n) = \sum_\sigma \operatorname{sgn}\sigma \prod_{i=1}^n a_{i,\sigma(i)}$, define the code of a permutation σ on [n] to be the list $c = (c_i)_{i=1}^n$ with $c_i = \sigma(i) - (i-1)$. Since the (i, j) entry of A(n), $\left[{i+1 \atop 2i-j}\right]$, is 0 unless j ≥ i − 1, we must have
σ(i) ≥ i − 1 for all i. It is well known that there are $2^{n-1}$ such permutations, corresponding to compositions of n, with codes characterized by the following four conditions: (i) $c_i \ge 0$ for all i; (ii) $c_1 \ge 1$; (iii) each $c_i \ge 1$ is immediately followed by $c_i - 1$ zeros in the list; (iv) $\sum_{i=1}^n c_i = n$. Let us call such a list a padded composition of n: deleting the zeros is a bijection to ordinary compositions of n. For example, (3, 0, 0, 1, 2, 0) is a padded composition of 6. For a permutation σ with padded composition code c, the nonzero entries in c give the cycle lengths of σ. Hence sgn σ, which is the parity of "n − #cycles in σ", is given by $(-1)^{\#0\text{s in } c}$.
We have $\det A(n) = \sum_\sigma \operatorname{sgn}\sigma \prod_{i=1}^n a_{i,\sigma(i)} = \sum_\sigma \operatorname{sgn}\sigma \prod_{i=1}^n \left[{i+1 \atop 2i-\sigma(i)}\right]$, and so
$$\det A(n) = \sum_c (-1)^{\#0\text{s in } c} \prod_{i=1}^n \left[{i+1 \atop i+1-c_i}\right], \qquad (2)$$
where the sum is restricted to padded compositions c of n with $c_i \le i$ for all i (A002083), because $\left[{i+1 \atop i+1-c_i}\right] = 0$ unless $c_i \le i$.
Henceforth, let us write all permutations in standard cycle form, whereby the smallest entry occurs first in each cycle and these smallest entries increase left to right. Thus, with dashes separating cycles, 154-2-36 is the standard cycle form of the permutation $\left(\begin{smallmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 5 & 2 & 6 & 1 & 4 & 3 \end{smallmatrix}\right)$. We define a nonfirst entry to be one that does not start a cycle. Thus the preceding permutation has 3 nonfirst entries: 5, 4, 6. Note that the number of nonfirst entries is 0 only for the identity permutation. We denote an identity permutation (of any size) by ǫ.
By definition of Stirling cycle number, the product in (2) counts lists $(\pi_i)_{i=1}^n$ of permutations where $\pi_i$ is a permutation on [i + 1] with $i + 1 - c_i$ cycles, equivalently, with $c_i$ nonfirst entries. So define $L_n$ to be the set of all lists of permutations $\pi = (\pi_i)_{i=1}^n$ where $\pi_i$ is a permutation on [i + 1], the number of nonfirst entries in $\pi_i$ is at most i, $\pi_1$ is the transposition (1, 2), and each nonidentity permutation $\pi_i$ is immediately followed by $c_i - 1$ ǫ's, where $c_i \ge 1$ is the number of nonfirst entries in $\pi_i$ (so the total number of nonfirst entries is n). Assign a weight to $\pi \in L_n$ by $\mathrm{wt}(\pi) = (-1)^{\#\,\epsilon\text{'s in } \pi}$. Then
$$\det A(n) = \sum_{\pi \in L_n} \mathrm{wt}(\pi).$$
We now define a weight-reversing involution on (most of) Ln. Given π ∈ Ln, scan the
list of its component permutations π1 = (1, 2), π2, π3, . . . left to right. Stop at the first
one that either (i) has more than one nonfirst entry, or (ii) has only one nonfirst entry, b
say, and b > maximum nonfirst entry m of the next permutation in the list. Say πk is the
permutation where we stop.
In case (i) decrement (i.e. decrease by 1) the number of ǫ’s in the list by splitting πk
into two nonidentity permutations as follows. Let m be the largest nonfirst entry of πk
and let ℓ be its predecessor. Replace πk and its successor in the list (necessarily an ǫ) by
the following two permutations: first the transposition (ℓ,m) and second the permutation
obtained from πk by erasing m from its cycle and turning it into a singleton. Here are
two examples of this case (recall permutations are in standard cycle form and, for clarity,
singleton cycles are not shown).
i    1   2   3   4       5       6
πi   12  13  23  14-253  ǫ       ǫ
maps under the involution to
i    1   2   3   4   5       6
πi   12  13  23  25  14-23   ǫ

i    1   2   3   4       5   6
πi   12  23  14  13-24   ǫ   23
maps under the involution to
i    1   2   3   4   5   6
πi   12  23  14  24  13  23
The reader may readily check that this sends case (i) to case (ii).
In case (ii), πk is a transposition (a, b) with b > maximum nonfirst entry m of πk+1. In
this case, increment the number of ǫ’s in the list by combining πk and πk+1 into a single
permutation followed by an ǫ: in πk+1, b is a singleton; delete this singleton and insert b
immediately after a in πk+1 (in the same cycle). The reader may check that this reverses
the result in the two examples above and, in general, sends case (ii) to case (i). Since the
map alters the number of ǫ’s in the list by 1, it is clearly weight-reversing. The map fails
only for lists that both consist entirely of transpositions and have the form
(a1, b1), (a2, b2), . . . , (an, bn) with b1 ≤ b2 ≤ . . . ≤ bn.
Such lists have weight 1. Hence det A(n) is the number of lists $\big((a_i, b_i)\big)_{i=1}^n$ satisfying $1 \le a_i < b_i \le i + 1$ for $1 \le i \le n$ and $b_1 \le b_2 \le \dots \le b_n$. After subtracting 1 from each $b_i$, these lists code the paths in $C^*_1(n, n)$ and, using §4, $\det A(n) = |C^*_1(n, n)| = |C_2(n)|$.
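The evaluation can be confirmed numerically for small n. A Python sketch (the helper names stirling_cycle and det_A1 are ours) that builds A_1(n) from the Stirling cycle numbers and computes its determinant exactly:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling_cycle(n, j):
    # Unsigned Stirling numbers of the first kind [n, j].
    if n == 0:
        return 1 if j == 0 else 0
    if j < 0 or j > n:
        return 0
    return stirling_cycle(n - 1, j - 1) + (n - 1) * stirling_cycle(n - 1, j)

def det_A1(n):
    # A_1(n) has (i, j) entry [i+1, 2i-j] for 1 <= i, j <= n.
    m = [[Fraction(stirling_cycle(i + 1, 2 * i - j))
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    det = Fraction(1)
    for c in range(n):                 # exact Gaussian elimination
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            det = -det
        det *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for cc in range(c, n):
                m[r][cc] -= f * m[c][cc]
    return int(det)

print([det_A1(n) for n in range(1, 6)])  # -> [1, 3, 16, 127, 1363]
```

The values agree with |C2(n)|, sequence A082161, as the Main Theorem asserts for k = 1.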
References
[1] Valery A. Liskovets, Exact enumeration of acyclic deterministic automata, Discrete Appl. Math., in press, 2006. Earlier version available at http://www.i3s.unice.fr/fpsac/FPSAC03/articles.html
[2] J. H. van Lint and R. M. Wilson, A Course in Combinatorics, 2nd ed., Cambridge
University Press, NY, 2001.
[3] Neil J. Sloane (founder and maintainer), The On-Line Encyclopedia of Integer Sequences, http://www.research.att.com:80/~njas/sequences/index.html?blank=1
A Determinant of Stirling Cycle Numbers Counts Unlabeled Acyclic Single-Source Automata

DAVID CALLAN
Department of Statistics
University of Wisconsin-Madison
1300 University Ave
Madison, WI 53706-1532
callan@stat.wisc.edu
March 30, 2007

Abstract
We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata. The proof involves a bijection from these automata to certain marked lattice paths and a sign-reversing involution to evaluate the determinant.
1 Introducción El propósito principal de este artículo es mostrar bijectamente que
un determinante de los números de ciclo Stirling cuenta autómatas acíclicos de una sola fuente sin etiqueta.
Específicamente, deje que Ak(n) denote la matriz kn × kn con (i, j) entrada
[ i−1
i−1
1+i−j
, donde
es el número del ciclo de Stirling, el número de permutaciones en [i] con ciclos j. Por
ejemplo,
A2(5) =
1 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0
0 1 3 2 0 0 0 0 0 0
0 0 1 3 2 0 0 0 0 0
0 0 0 1 6 11 6 0 0 0
0 0 0 0 1 6 11 6 0 0
0 0 0 0 0 1 10 35 50 24
0 0 0 0 0 1 10 35 50
0 0 0 0 0 0 1 15 85
0 0 0 0 0 0 0 1 15
http://arxiv.org/abs/0704.004v1
Como es evidente en el ejemplo, Ak(n) se forma a partir de k copias de cada una de las filas 2 a n+1
del triángulo del ciclo de Stirling, dispuesto de modo que la primera entrada no cero en cada fila es un 1
y, después de la primera fila, este 1 ocurre justo antes de la diagonal principal; en otras palabras, Ak(n)
es una matriz de Hessenberg con 1s en la infradiagonal. Vamos a mostrar
Teorema Principal. El determinante de Ak(n) es el número de mono-
autómatas de origen con n estados transitorios en un alfabeto de entrada (k + 1) letras.
En la sección 2 se examina la terminología básica para contar las relaciones automatizadas y recurrentes
autómatas acíclicos finitos. En la sección 3 se introducen las vías subdiagonales marcadas con columnas, que
jugar un papel intermedio, y una manera de codificarlos. En la sección 4 se presenta una
estos caminos subdiagonales marcados con columnas a autómatas acíclicos sin etiquetar de una sola fuente. Fi-
nally, la sección 5 evalúa detAk(n) usando una involución de inversión de signos y muestra que la
determinante cuenta los códigos para las rutas subdiagonales con marca de columna.
2 Automatas
Un autómata (completa, determinista) consiste en un conjunto de estados y un alfabeto de entrada
cuyas letras transforman los estados entre sí: una carta y un estado producen otro
Estado (posiblemente el mismo). Un autómata finito (conjunto finito de estados, alfabeto de entrada finito
de, digamos, k letras) se puede representar como un multógrafo dirigido k-regular con bordes ordenados:
los vértices representan los estados y el primero, segundo,. .. borde de un vértice dan el efecto
del primero, segundo,. .. letra del alfabeto en ese estado. Un autómata finito no puede ser acíclico
en el sentido habitual de no ciclos: elegir un vértice y seguir cualquier camino de él. Este camino debe
finalmente golpeó un vértice previamente encontrado, creando así un ciclo. Así que el término
acíclico se utiliza en el sentido más suelto que sólo un vértice, llamado el fregadero, está involucrado en
ciclos. Esto significa que todos los bordes del lazo del fregadero de nuevo a sí mismo (y puede ser
omitida) y todos los otros caminos se alimentan en el fregadero.
Un estado no-sumidero se llama transitorio. El tamaño de un autómata acíclico es el número de
estados transitorios. Un autómata acíclico de tamaño n por lo tanto tiene estados transitorios que etiquetamos
1, 2,........................................................................................................................................................................................... Liskovets [1] utiliza el principio de inclusión-exclusión
(más sobre esto a continuación) para obtener la siguiente relación de recurrencia para el número ak(n)
de autómatas acíclicos del tamaño n en un alfabeto de entrada de letra k (k ≥ 1):
ak(0) = 1; ak(n) =
(−1)n−j−1
(j + 1)k(n−j)ak(j), n ≥ 1.
Una fuente es un vértice sin bordes entrantes. Un autómata acíclico finito tiene al menos
una fuente porque un camino atravesó hacia atrás v1 ← v2 ← v3 ←. .. debe tener distinto
vértices y así no pueden continuar indefinidamente. Un autómata es de una sola fuente (o inicialmente
conectado) si sólo tiene una fuente. Deje que Bk(n) denote el conjunto de una fuente acíclica
autómatas finitos (SAF) en un alfabeto de entrada de letra k con vértices 1, 2,...., n + 1 donde 1
es la fuente y n + 1 es el fregadero, y set bk(n) = Bk(n). La representación en dos líneas
de un autómata en Bk(n) es la matriz de 2×kn cuyas columnas listan los bordes en orden. Por
ejemplo,
1 1 1 2 2 2 3 3 4 4 4 5 5
2 4 6 6 6 6 6 6 3 5 3 2 2 6
está en B3(5) y las rutas de origen a fregadero en B incluyen 1
→ 6, 1
→ 6, 1
→ 6, donde el alfabeto es {a, b, c}.
Proposición 1. El número bk(n) de SAF autómata del tamaño n en un alfabeto de entrada de letra k
(n, k ≥ 1)
bk(n) =
(−1)n−i
(i+1)k(n-i)ak(i)
Nota Esta fórmula es un poco más sucinta que la recurrencia en [1, Teorema
3.2].
Prueba Considere el conjuntoA de autómatas acíclicos con vértices transitorios [n] = {1, 2,..., n}
en la que 1 es una fuente. Llamar 2, 3,..., n los vértices interiores. Para X [2, n], vamos
f(X) = # autómatas en A cuyo conjunto de vértices interiores incluye X,
g(X) = # autómata en A cuyo conjunto de vértices interiores es precisamente X.
Entonces f(X) =
Y:XY[2,n] g(Y) y por Möbius inversión [2] en la celosía de subconjuntos de
[2, n], g(X) =
Y:XY[2,n] μ(X, Y)f(Y) donde μ(X, Y) es la función Möbius para esto
Enrejado. Desde μ(X, Y) = (−1)Y X si X Y, tenemos en particular que
g() =
Y[2,n]
(−1) Y f(Y ). 1)..........................................................................................................................................................
Dejar Y = n − i de modo que 1 ≤ i ≤ n. Cuando Y consiste enteramente de fuentes, los vértices
en [n+1]\Y y sus bordes de incidente forman un subautomatón con i estados transitorios; allí
son ak(i) tales. También, todos los bordes de los n − i vértices que componen Y ir directamente en
[n + 1]\Y : (i + 1)k(n-i) opciones. Así f(Y) = (i + 1)k(n-i)ak(i). Por definición, g() es
el número de autómatas en A para los cuales 1 es la única fuente, es decir, g() = bk(n) y la
La propuesta se deriva ahora de (1).
Un autómata SAF sin etiquetar es una clase de equivalencia de autómatas SAF bajo reetiquetado
de los vértices interiores. Liskovets nota [1] (y demostramos a continuación) que Bk(n) no tiene
automorfismos no triviales, es decir, cada uno de los (n− 1)! reetiquetados de los vértices interiores
de B-Bk(n) produce un autómata diferente. Así que autómatas SAF sin etiqueta de tamaño n en
un alfabeto de letra-k se cuenta por 1
(n−1)!
bk(n). El siguiente resultado establece un canon
representante en cada clase de reetiquetado.
Proposición 2. Cada clase de equivalencia en Bk(n) bajo reetiquetado de vértices interiores tiene
¡Tamaño (n− 1)! y contiene exactamente un autómata SAF con las “últimas ocurrencias
ing” propiedad: las últimas ocurrencias de los vértices interiores—2, 3,..., n—en la fila inferior
de su representación de dos líneas se producen en ese orden.
Prueba La primera afirmación se deriva del hecho de que los vértices interiores de un au-
bk(n) se puede distinguir intrínsecamente, es decir, independientemente de su etiquetado.
Para ver esto, primero marque la fuente, a saber, 1, con una marca (nueva etiqueta) v1 y observe que
existe al menos un vértice interior cuyo único borde(s) entrante(s) son de la fuente
(el único vértice actualmente marcado) para de lo contrario un ciclo estaría presente. Para cada uno de ellos
vértice interior v, elija el último borde del vértice marcado a v utilizando el incorporado
orden de estos bordes. Esto determina un orden en estos vértices; marquelos en orden
v2, v3,. .., vj (j ≥ 2). Si aún quedan vértices interiores sin marcar, al menos uno de ellos
tiene bordes entrantes sólo de un vértice marcado o de nuevo un ciclo estaría presente. Por
cada tal vértice, utilizar el último borde entrante de un vértice marcado, donde ahora los bordes son
arreglados en orden de vértice inicial vi con los lazos de ruptura incorporados en orden, a orden y
marca estos vértices vj+1, vj+2,.... Proceda de manera similar hasta que todos los vértices interiores estén marcados.
Por ejemplo, para
1 1 1 2 2 2 3 3 4 4 4 5 5
2 4 6 6 6 6 6 6 3 5 3 2 2 6
v1 = 1 and there is only one interior vertex, namely 4, whose only incoming edge is from the source, and so v2 = 4 and 4 becomes a marked vertex. Now all incoming edges to 3 and 5 are from marked vertices, and the last such edges (the built-in order now comes into play) are 4 → 5 and 4 → 3, putting the vertices 3, 5 in the order 5, 3. Thus v3 = 5 and v4 = 3. Finally, v5 = 2. This proves the first assertion. By the construction of the v's, relabeling each interior vertex i with the subscript of its corresponding v produces an automaton in Bk(n) with the "last occurrences increasing" property, and this is the only relabeling that does so.
The example yields

( 1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
  5 2 6 4 3 4 5 5 6 6 6 6 6 6 6 )
Now let Ck(n) denote the set of canonical SAF automata in Bk(n), those representing unlabeled automata; thus |Ck(n)| = |Bk(n)| / (n − 1)!. From now on, we identify an automaton with its canonical representative.
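The marking procedure above is algorithmic. The following Python sketch is our illustration (the function names are ours, not from the paper); it assumes the automaton is given by the two rows of its two-line representation, with columns in the built-in edge order, and it reproduces the marks computed in the example above.

```python
def mark_vertices(top, bottom, n):
    """Intrinsic marking of the vertices of an acyclic single-source automaton.

    top, bottom: the two rows of the two-line representation; column t is the
    edge top[t] -> bottom[t], and the columns give the built-in edge order.
    n: number of non-sink vertices (source = 1, sink = n + 1).
    Returns order, where order[v] = i means v received mark v_i.
    Assumes the automaton is acyclic with every interior vertex reachable.
    """
    edges = list(zip(top, bottom))
    order = {1: 1}                       # the source is marked first
    while len(order) < n:
        def last_incoming(v):
            # incoming edges sorted by (mark of initial vertex, built-in position)
            return max((order[s], t) for t, (s, w) in enumerate(edges) if w == v)
        # interior vertices all of whose incoming edges leave marked vertices
        candidates = [v for v in range(2, n + 1)
                      if v not in order
                      and all(s in order for s, w in edges if w == v)]
        if not candidates:
            raise ValueError("not an acyclic single-source automaton")
        for v in sorted(candidates, key=last_incoming):
            order[v] = len(order) + 1
    return order

def last_occurrences_increasing(bottom, n):
    """Do the last occurrences of 2, 3, ..., n in `bottom` occur in that order?"""
    last = [max(t for t, w in enumerate(bottom) if w == v) for v in range(2, n + 1)]
    return last == sorted(last)
```

On the example above this returns the marks v1 = 1, v2 = 4, v3 = 5, v4 = 3, v5 = 2, and on a canonical automaton the computed relabeling is the identity.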
3 Column-marked subdiagonal paths
A subdiagonal (k, n, p) path is a lattice path of steps E = (1, 0) and N = (0, 1), E for east and N for north, from (0, 0) to (kn, p) that never rises above the line y = x/k. Let Ck(n, p) denote the set of such paths. For k ≥ 1, it is clear that Ck(n, p) is nonempty only for 0 ≤ p ≤ n, and it is known (generalized ballot theorem) that

|Ck(n, p)| = ((kn − kp + 1)/(kn + p + 1)) · ( (kn + p + 1) choose p ).
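The ballot-theorem count can be checked against brute-force enumeration for small parameters; the snippet below is ours (the names `ballot_count` and `brute_count` are illustrative, not from the paper).

```python
from itertools import combinations
from math import comb

def ballot_count(k, n, p):
    # the generalized ballot formula for |C_k(n, p)|; the product is exactly
    # divisible by kn + p + 1, so integer division is exact
    return (k*n - k*p + 1) * comb(k*n + p + 1, p) // (k*n + p + 1)

def brute_count(k, n, p):
    # place the p N steps among the kn + p steps and keep the paths that
    # never rise above the line y = x/k (checked after every step)
    total = 0
    for npos in combinations(range(k*n + p), p):
        x = y = 0
        ok = True
        for t in range(k*n + p):
            if t in npos:
                y += 1
            else:
                x += 1
            if k * y > x:
                ok = False
                break
        total += ok
    return total
```

For instance, ballot_count(1, 3, 3) recovers the Catalan number 5.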
A path P in Ck(n, n) can be encoded by the heights of its E steps above the line y = −1; this gives a sequence (bi)_{i=1}^{kn} subject to the constraints 1 ≤ b1 ≤ b2 ≤ ... ≤ bkn and bi ≤ ⌈i/k⌉ for all i.
A column-marked subdiagonal (k, n, p) path is one in which, for each i ∈ [1, kn], one of the lattice squares below the ith E step and above the horizontal line y = −1 is marked, say with a '*'. Let C*k(n, p) denote the set of these marked paths.
[Figure: a path in C*2(4, 4): a column-marked subdiagonal path from (0, 0) to (8, 4), drawn between the lines y = −1 and y = x/2, with a '*' in one lattice square of each column below the path and above y = −1.]
A marked path P* in C*k(n, n) can be encoded by a sequence of pairs (ai, bi)_{i=1}^{kn}, where (bi)_{i=1}^{kn} is the code for the underlying path P and ai ∈ [1, bi] gives the position of the '*' in the ith column. The example is encoded by (1, 1), (1, 1), (1, 2), (2, 2), (1, 2), (3, 3), (1, 3), (2, 3).
An explicit sum for |C*k(n, n)| is

|C*k(n, n)| = Σ_{1 ≤ b1 ≤ b2 ≤ ... ≤ bkn, bi ≤ ⌈i/k⌉ for all i} b1 b2 ... bkn,

because the summand b1 b2 ... bkn is the number of ways to insert the '*'s into the underlying path encoded by (bi).
It is also possible to obtain a recurrence for |C*k(n, p)| and then, using Prop. 1, to show analytically that |C*k(n, n)| = |Ck+1(n)|. However, it is much nicer to give a bijection, and in the next section we do so. In particular, the number of SAF automata on a 2-letter alphabet is

|C2(n)| = |C*1(n, n)| = Σ_{1 ≤ b1 ≤ b2 ≤ ... ≤ bn, bi ≤ i for all i} b1 b2 ... bn = (1, 3, 16, 127, 1363, ...)_{n ≥ 1},

sequence A082161 in [3].
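The displayed sum is easy to evaluate by direct recursion; the following Python snippet is our illustration (the name `marked_path_count` is ours) and reproduces the values 1, 3, 16, 127, 1363 quoted above.

```python
def marked_path_count(k, n):
    """Sum of b1*b2*...*b_kn over 1 <= b1 <= ... <= b_kn with b_i <= ceil(i/k),
    i.e. |C*_k(n, n)| by the displayed formula."""
    kn = k * n
    def rec(i, lo):
        if i > kn:
            return 1
        hi = -(-i // k)                      # ceil(i / k)
        return sum(b * rec(i + 1, b) for b in range(lo, hi + 1))
    return rec(1, 1)
```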
4 Bijection from paths to automata
In this section we exhibit a bijection from C*k(n, n) to Ck+1(n). Using the illustrated path as a working example with k = 2 and n = 4,

[Figure: the same column-marked path from (0, 0) to (8, 4), repeated as the working example.]
first construct the top row of a two-line representation consisting of k + 1 each of 1s, 2s, ..., ns, arranged in increasing order.
The last step in the path is necessarily an N step. For the second-to-last, third-to-last, ..., N steps in the path, count the number of steps that follow it. This gives a sequence i1, i2, ..., in−1 satisfying 1 ≤ i1 < i2 < ... < in−1 and ij ≤ (k + 1)j for all j; in the example this sequence is 1, 5, 9. Circle the positions i1, i2, ..., in−1 in the top row and insert the entries 2, 3, ..., n into the second row at the circled positions:

1 1 1 2 2 2 3 3 3 4 4 4
2 _ _ _ 3 _ _ _ 4 _ _ _
These will be the last occurrences of 2, 3, ..., n in the second row. Working from the last column in the path back to the first, fill in the blanks in the second row from left to right as follows. Count the number of squares from the '*' up to the path (including the square containing the '*') and add this number to the nearest bold entry (one of the just-inserted 2, 3, ..., n) to the left of the current blank entry (if there are no bold entries to the left, add this number to 1), and insert the result in the current blank square. In the example these square counts are 2, 3, 1, 2, 1, 2, 1, 1, yielding

1 1 1 2 2 2 3 3 3 4 4 4
2 4 5 3 3 5 4 5 4 5 5 _

This fills all blank entries except the last. Note that '*'s in the bottom row of squares correspond to sink (that is, n + 1) labels in the second row. Finally, insert n + 1 in the last remaining blank square to give the automaton of the picture:
1 1 1 2 2 2 3 3 3 4 4 4
2 4 5 3 3 5 4 5 4 5 5 5

This process is fully reversible, and the map is a bijection.
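The forward map can be spelled out in code. The sketch below is our illustration (names and data layout are ours); the input is the pair encoding (ai, bi) of a marked path, and the output is the two-line representation, reproducing the worked example above.

```python
def path_to_automaton(pairs, k, n):
    """Map a column-marked path in C*_k(n, n), given by its code (a_i, b_i),
    to the two-line representation of an automaton on k + 1 letters."""
    # 1. Reconstruct the step sequence; the i-th E step sits at height b_i - 1.
    steps, h = [], 0
    for _, b in pairs:
        steps += ['N'] * (b - 1 - h)
        h = b - 1
        steps.append('E')
    steps += ['N'] * (n - h)
    # 2. For the 2nd-last, 3rd-last, ... N step, count the steps that follow it.
    npos = [t for t, s in enumerate(steps) if s == 'N']
    circled = sorted(len(steps) - 1 - t for t in npos[:-1])   # i_1 < ... < i_{n-1}
    # 3. Top row: k+1 copies of each of 1..n; enter 2..n at the circled positions.
    top = [v for v in range(1, n + 1) for _ in range(k + 1)]
    bottom = [None] * ((k + 1) * n)
    for j, pos in enumerate(circled):
        bottom[pos - 1] = j + 2                               # positions are 1-based
    # 4. Square counts, from the last column of the path back to the first,
    #    fill the blanks left to right: nearest bold entry to the left plus count.
    counts = [b - a + 1 for a, b in reversed(pairs)]
    blanks = [t for t, w in enumerate(bottom) if w is None]
    for t, c in zip(blanks, counts):
        bold = [bottom[u] for u in range(t) if u + 1 in set(circled)]
        bottom[t] = (bold[-1] if bold else 1) + c
    # 5. The one remaining blank receives the sink label n + 1.
    bottom[blanks[-1]] = n + 1
    return top, bottom
```

On the worked example it returns exactly the two-line representation displayed above.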
5 Evaluation of det Ak(n)
For simplicity, we treat the case k = 1, leaving the generalization to arbitrary k as a not-too-difficult exercise for the interested reader. Write A(n) for A1(n). Thus A(n) = ( s(i + 1, 2i − j) )_{1≤i,j≤n}, where s(m, j) denotes the Stirling cycle number, the number of permutations of [m] with j cycles. From the definition of det A(n) as a sum of signed products, we first show that det A(n) is the total weight of certain lists of permutations, each list carrying weight ±1. Then a weight-reversing involution cancels all the −1 weights and reduces the problem to counting the surviving lists. These surviving lists are essentially the codes for paths in C*1(n, n), and the Main Theorem follows from §4.
To describe the permutations giving a nonzero contribution to det A(n) = Σπ sgn(π) Π_{i=1}^n a_{i,π(i)}, define the code of a permutation π of [n] to be the list c = (ci)_{i=1}^n with ci = π(i) − (i − 1). Since the (i, j) entry of A(n), the Stirling cycle number s(i + 1, 2i − j), is 0 unless j ≥ i − 1, we must have π(i) ≥ i − 1 for all i. It is well known that there are 2^{n−1} such permutations, corresponding to the compositions of n, with codes characterized by the following four conditions: i) ci ≥ 0 for all i, ii) c1 ≥ 1, iii) each ci ≥ 1 is immediately followed by ci − 1 zeros in the list, iv) Σ_{i=1}^n ci = n. Let us call such a list a padded composition of n: erasing the zeros is a bijection to ordinary compositions of n. For example, (3, 0, 0, 1, 2, 0) is a padded composition of 6. For a permutation π with padded composition code c, the nonzero entries in c give the cycle lengths of π. Hence sgn(π) = (−1)^{n − #cycles(π)} = (−1)^{#0s in c}.
We have det A(n) = Σπ sgn(π) Π_{i=1}^n a_{i,π(i)} = Σπ sgn(π) Π_{i=1}^n s(i + 1, 2i − π(i)), and so

(2)   det A(n) = Σ_c (−1)^{#0s in c} Π_{i=1}^n s(i + 1, i + 1 − ci),

where the sum is restricted to padded compositions c of n with ci ≤ i for all i (A002083), because s(i + 1, i + 1 − ci) = 0 unless ci ≤ i.
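Both sides of (2) are easy to compute for small n; the Python sketch below is our illustration (it assumes the entry reconstruction a_{i,j} = s(i + 1, 2i − j) used above) and checks that the determinant and the padded-composition sum agree with the counts 1, 3, 16, 127.

```python
from itertools import permutations
from math import prod

def stirling_cycle(m, j):
    # s(m, j): number of permutations of [m] with j cycles (standard recurrence)
    if m == 0:
        return 1 if j == 0 else 0
    if j < 1 or j > m:
        return 0
    return stirling_cycle(m - 1, j - 1) + (m - 1) * stirling_cycle(m - 1, j)

def det_A(n):
    # Leibniz expansion of det ( s(i+1, 2i-j) )_{1 <= i, j <= n}; fine for small n
    def sgn(p):
        inv = sum(1 for x in range(n) for y in range(x) if p[y] > p[x])
        return -1 if inv % 2 else 1
    return sum(sgn(p) * prod(stirling_cycle(i + 2, 2 * (i + 1) - (p[i] + 1))
                             for i in range(n))
               for p in permutations(range(n)))

def padded_composition_sum(n):
    # right-hand side of (2): padded compositions c of n with c_i <= i
    def compositions(r):
        if r == 0:
            yield ()
        for first in range(1, r + 1):
            for rest in compositions(r - first):
                yield (first,) + rest
    total = 0
    for comp in compositions(n):
        c = [x for part in comp for x in [part] + [0] * (part - 1)]
        if all(ci <= i for i, ci in enumerate(c, start=1)):
            total += (-1) ** c.count(0) * prod(
                stirling_cycle(i + 1, i + 1 - ci) for i, ci in enumerate(c, start=1))
    return total
```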
From now on, we write all permutations in standard cycle form, whereby the smallest entry occurs first in each cycle and these smallest entries increase from left to right. Thus, with dashes separating cycles, 154-2-36 is the standard cycle form of the permutation

( 1 2 3 4 5 6
  5 2 6 1 4 3 ).

We define a nonfirst entry to be one that does not begin a cycle. Thus the permutation above has 3 nonfirst entries: 5, 4, 6. Note that the number of nonfirst entries is 0 only for the identity permutation. We denote an identity permutation (of any size) by ε.
By definition of the Stirling cycle number, the product in (2) counts lists (πi)_{i=1}^n of permutations where πi is a permutation of [i + 1] with i + 1 − ci cycles, equivalently, with ci nonfirst entries. Define Ln to be the set of all lists (πi)_{i=1}^n of permutations in which πi is a permutation of [i + 1] and each non-identity πi is immediately followed by ci − 1 ε's, where ci ≥ 1 is its number of nonfirst entries (so that the total number of nonfirst entries in the list is n). Assign a weight to λ ∈ Ln by wt(λ) = (−1)^{#ε's in λ}. Then

det A(n) = Σ_{λ ∈ Ln} wt(λ).
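The list formulation can be verified by brute force: generate Ln block by block (a non-identity permutation with c nonfirst entries followed by c − 1 ε's) and sum the weights. The code below is our illustration, not from the paper.

```python
from itertools import permutations

def sum_of_weights(n):
    """Sum of wt over L_n: lists (pi_i), pi_i a permutation of [i+1], each
    non-identity pi_i followed by c_i - 1 epsilons; wt = (-1)**(#epsilons)."""
    def nonfirst(p):
        # number of nonfirst entries = size minus number of cycles
        m, seen, cycles = len(p), set(), 0
        for start in range(m):
            if start not in seen:
                cycles += 1
                x = start
                while x not in seen:
                    seen.add(x)
                    x = p[x]
        return m - cycles
    def rec(i):
        # start a block at position i; each block weighs (-1)**(c - 1)
        if i > n:
            return 1
        total = 0
        for p in permutations(range(i + 1)):   # permutations of [i + 1]
            c = nonfirst(p)
            if c >= 1 and i + c - 1 <= n:      # non-identity, epsilons fit
                total += (-1) ** (c - 1) * rec(i + c)
        return total
    return rec(1)
```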
Now we define a weight-reversing involution on (most of) Ln. Given λ ∈ Ln, scan the list of its component permutations π1, π2, π3, ... from left to right. Stop at the first one that either i) has more than one nonfirst entry, or ii) has exactly one nonfirst entry, b say, with b > the maximum nonfirst entry m of the next permutation in the list. Say πk is the permutation where we stop.
In case i), decrement (that is, decrease by 1) the number of ε's in the list by splitting πk into two non-identity permutations as follows. Let m be the largest nonfirst entry of πk and let l be its predecessor in its cycle. Replace πk and its successor in the list (necessarily an ε) by the following two permutations: first, the transposition (l, m), and second, the permutation obtained from πk by deleting m from its cycle and making it a singleton. Here are two examples of this case (recall that the permutations are in standard cycle form and that, for clarity, singleton cycles are not shown).
i     1    2    3    4        5        6
λ:    12   13   23   14-253   ε        ε
λ′:   12   13   23   25       14-23    ε

i     1    2    3    4        5        6
λ:    12   23   14   13-24    ε        23
λ′:   12   23   14   24       13       23
The reader can easily check that this sends case i) to case ii).
In case ii), πk is a transposition (a, b) with b > the maximum nonfirst entry m of πk+1. In this case, increment the number of ε's in the list by combining πk and πk+1 into a single permutation followed by an ε: in πk+1, b is a singleton; delete this singleton and insert b immediately after a in πk+1 (in the same cycle). The reader may check that this reverses the result in the two examples above and, in general, sends case ii) to case i). Since the map alters the number of ε's in the list by 1, it is clearly weight-reversing. The map fails only for lists consisting entirely of transpositions and having the form (a1, b1), (a2, b2), ..., (an, bn) with b1 ≤ b2 ≤ ... ≤ bn.
Such lists have weight 1. Hence det A(n) is the number of lists (ai, bi)_{i=1}^n satisfying 1 ≤ ai < bi ≤ i + 1 for 1 ≤ i ≤ n and b1 ≤ b2 ≤ ... ≤ bn. After subtracting 1 from each bi, these lists code the paths in C*1(n, n), and, using §4, det A(n) = |C*1(n, n)| = |C2(n)|.
References
[1] Valery A. Liskovets, Exact enumeration of acyclic deterministic automata, Disc. Appl. Math., in press, 2006. Earlier version available at http://www.i3s.unice.fr/fpsac/FPSAC03/articles.html
[2] J. H. van Lint and R. M. Wilson, A Course in Combinatorics, 2nd ed., Cambridge University Press, NY, 2001.
[3] Neil J. Sloane (founder and maintainer), The On-Line Encyclopedia of Integer Sequences, http://www.research.att.com:80/~njas/sequences/index.html
|
704.001
| From dyadic $\Lambda_{\alpha}$ to $\Lambda_{\alpha}$
| In this paper we show how to compute the $\Lambda_{\alpha}$ norm, $\alpha\ge
0$, using the dyadic grid. This result is a consequence of the description of
the Hardy spaces $H^p(R^N)$ in terms of dyadic and special atoms.
| FROM DYADIC Λα TO Λα
WAEL ABU-SHAMMALA AND ALBERTO TORCHINSKY
Abstract. In this paper we show how to compute the Λα norm, α ≥ 0,
using the dyadic grid. This result is a consequence of the description of
the Hardy spaces Hp(RN ) in terms of dyadic and special atoms.
Recently, several novel methods for computing the BMO norm of a function
f in two dimensions were discussed in [9]. Given its importance, it is also of
interest to explore the possibility of computing the norm of a BMO function,
or more generally a function in the Lipschitz class Λα, using the dyadic grid
in RN . It turns out that the BMO question is closely related to that of
approximating functions in the Hardy space H1(RN ) by the Haar system.
The approximation in H1(RN ) by affine systems was proved in [2], but this
result does not apply to the Haar system. Now, if HA(R) denotes the closure
of the Haar system in H1(R), it is not hard to see that the distance d(f,HA)
of f ∈ H1(R) to HA is ∼ |∫_R f(x) dx|, see [1]. Thus, dyadic atoms do not suffice to describe the Hardy spaces, nor can the evaluation of the norm in BMO be reduced to a straightforward computation using the dyadic intervals.
In this paper we address both of these issues. First, we give a characterization
of the Hardy spaces Hp(RN ) in terms of dyadic and special atoms, and then,
by a duality argument, we show how to compute the norm in Λα(R^N), α ≥ 0,
using the dyadic grid.
We begin by introducing some notation. Let J denote a family of cubes Q in R^N, and Pd the collection of polynomials in R^N of degree less than or equal to d. Given α ≥ 0, Q ∈ J, and a locally integrable function g, let pQ(g) denote the unique polynomial in P[α] such that [g − pQ(g)]χQ has vanishing moments up to order [α].
For a locally square-integrable function g, we consider the maximal function M_{α,J} g(x) given by

M_{α,J} g(x) = sup_{x ∈ Q, Q ∈ J} |Q|^{−α/N} ( |Q|^{−1} ∫_Q |g(y) − pQ(g)(y)|² dy )^{1/2}.
1991 Mathematics Subject Classification. 42B30,42B35.
http://arxiv.org/abs/0704.0005v1
The Lipschitz space Λα,J consists of those functions g such that M_{α,J} g is in L∞, with ‖g‖_{Λα,J} = ‖M_{α,J} g‖_∞; when the family in question contains all cubes in R^N, we simply omit the subscript J. Of course, Λ0 = BMO.
Two other families, of dyadic nature, are of interest to us. Intervals in R of the form I_{n,k} = [(k − 1)2^n, k2^n], where k and n are arbitrary integers, positive, negative, or 0, are said to be dyadic. In R^N, cubes which are the product of dyadic intervals of the same length, i.e., of the form Q_{n,k} = I_{n,k1} × ··· × I_{n,kN}, are called dyadic, and the collection of all such cubes is denoted D.
There is also the family D0. Let I′_{n,k} = [(k − 1)2^n, (k + 1)2^n], where k and n are arbitrary integers. Clearly I′_{n,k} is dyadic if k is odd, but not if k is even. Now, the collection {I′_{n,k} : n, k integers} contains all dyadic intervals as well as the shifts [(k − 1)2^n + 2^{n−1}, k2^n + 2^{n−1}] of the dyadic intervals by their half length. In R^N, put D0 = {Q′_{n,k} : Q′_{n,k} = I′_{n,k1} × ··· × I′_{n,kN}}; Q′_{n,k} is called a special cube. Note that D0 contains D properly.
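The parity statement about I′_{n,k} is easy to check mechanically; the snippet below is ours (exact rational arithmetic via Fraction so that negative n poses no rounding problems).

```python
from fractions import Fraction

def I_prime(n, k):
    # the interval I'_{n,k} = [(k-1)2^n, (k+1)2^n] as a pair of Fractions
    return (Fraction(k - 1) * Fraction(2) ** n, Fraction(k + 1) * Fraction(2) ** n)

def is_pow2(q):
    # is the positive rational q a (possibly negative) power of 2?
    a, b = q.numerator, q.denominator
    return a > 0 and (a & (a - 1)) == 0 and (b & (b - 1)) == 0

def is_dyadic(interval):
    # [a, b] is dyadic iff its length is a power of 2, say 2^m, and a = j*2^m
    a, b = interval
    length = b - a
    return is_pow2(length) and (a / length).denominator == 1
```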
Finally, given I′_{n,k}, let I^L_{n,k} = [(k − 1)2^n, k2^n] and I^R_{n,k} = [k2^n, (k + 1)2^n]. The 2^N subcubes of Q′_{n,k} = I′_{n,k1} × ··· × I′_{n,kN} of the form I^{S1}_{n,k1} × ··· × I^{SN}_{n,kN}, with Sj = L or R, 1 ≤ j ≤ N, are called the dyadic subcubes of Q′_{n,k}.
Let Q0 denote the special cube [−1, 1]^N. Given α ≥ 0, we construct a family Sα of piecewise polynomial splines in L²(Q0) that will be useful in characterizing Λα. Let A be the subspace of L²(Q0) consisting of all functions with vanishing moments up to order [α] which coincide with a polynomial in P[α] on each of the 2^N dyadic subcubes of Q0. A is a finite-dimensional subspace of L²(Q0) and, therefore, by the Gram-Schmidt orthogonalization process, say, A has an orthonormal basis in L²(Q0) consisting of functions p¹, ..., p^M with vanishing moments up to order [α], which coincide with a polynomial in P[α] on each dyadic subcube of Q0. Together with each p^L we also consider all dyadic dilations and integer translations given by

p^L_{n,k,α}(x) = 2^{n(N+α)} p^L(2^n x1 + k1, ..., 2^n xN + kN), 1 ≤ L ≤ M,

and let

Sα = {p^L_{n,k,α} : n, k integers, 1 ≤ L ≤ M}.
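As an illustration of the construction (ours, not from the paper), take N = 1 and α = 0: then A consists of the mean-zero functions on Q0 = [−1, 1] that are constant on [−1, 0] and on [0, 1], so M = 1 and p¹ is a normalized Haar-type function. Representing such a function by its two constant values:

```python
import math

# Each subinterval has length 1, so the L2 inner product on [-1, 1] of two
# piecewise-constant functions is just the dot product of their value pairs.
def dot(f, g):
    return f[0] * g[0] + f[1] * g[1]

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(dot(w, w))
        if norm > 1e-12:          # keep only independent directions
            basis.append([wi / norm for wi in w])
    return basis

# For N = 1 and alpha = 0, A is spanned by the single mean-zero function (1, -1).
special_basis = gram_schmidt([(1.0, -1.0)])
```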
Our first result shows how the dyadic grid can be used to compute the
norm in Λα.
Theorem A. Let g be a locally square-integrable function and α ≥ 0. Then, g ∈ Λα if, and only if, g ∈ Λα,D and Aα(g) = sup_{p ∈ Sα} |〈g, p〉| < ∞. Moreover,

‖g‖_{Λα} ∼ ‖g‖_{Λα,D} + Aα(g).
Furthermore, it is also true, and the proof is given in Proposition 2.1 be-
low, that ‖g‖Λα ∼ ‖g‖Λα,D0 . However, in this simpler formulation, the tree
structure of the cubes in D has been lost.
The proof of Theorem A relies on a close investigation of the predual of Λα, namely, the Hardy space H^p(R^N) with 0 < p = N/(α + N) ≤ 1. In the process we characterize H^p in terms of simpler subspaces: H^p_d, or dyadic H^p, and H^p_{Sα}, the space generated by the special atoms in Sα. Specifically, we have

Theorem B. Let 0 < p ≤ 1, and α = N(1/p − 1). We then have

H^p = H^p_d + H^p_{Sα},

where the sum is understood in the sense of quasinormed Banach spaces.
The paper is organized as follows. In Section 1 we show that individual
Hp atoms can be written as a superposition of dyadic and special atoms;
this fact may be thought of as an extension of the one-dimensional result of
Fridli concerning L∞ 1-atoms, see [5] and [1]. Then, we prove Theorem B.
In Section 2 we discuss how to pass from Λα,D, and Λα,D0 , to the Lipschitz
space Λα.
1. Characterization of the Hardy spaces Hp
We adopt the atomic definition of the Hardy spaces H^p, 0 < p ≤ 1, see [6] and [10]. Recall that a compactly supported function a with [N(1/p − 1)] vanishing moments is an L² p-atom with defining cube Q if supp(a) ⊆ Q, and

|Q|^{1/p} ( |Q|^{−1} ∫_Q |a(x)|² dx )^{1/2} ≤ 1.
The Hardy space H^p(R^N) = H^p consists of those distributions f that can be written as f = Σj λj aj, where the aj's are H^p atoms, Σj |λj|^p < ∞, and the convergence is in the sense of distributions as well as in H^p. Furthermore,

‖f‖_{H^p} ∼ inf ( Σj |λj|^p )^{1/p},

where the infimum is taken over all possible atomic decompositions of f. This last expression has traditionally been called the atomic H^p norm of f.
Collections of atoms with special properties can be used to gain a better understanding of the Hardy spaces. Formally, let A be a non-empty subset of the L² p-atoms in the unit ball of H^p. The atomic space H^p_A spanned by A consists of those ϕ in H^p of the form

ϕ = Σj λj aj, aj ∈ A, Σj |λj|^p < ∞.

It is readily seen that, endowed with the atomic norm

‖ϕ‖_{H^p_A} = inf { ( Σj |λj|^p )^{1/p} : ϕ = Σj λj aj, aj ∈ A },

H^p_A becomes a complete quasinormed space. Clearly, H^p_A ⊆ H^p and, for f ∈ H^p_A, ‖f‖_{H^p} ≤ ‖f‖_{H^p_A}.
Two families are of particular interest to us. When A is the collection of all L² p-atoms whose defining cube is dyadic, the resulting space is H^p_d, or dyadic H^p. Now, although ‖f‖_{H^p} ≤ ‖f‖_{H^p_d}, the two quasinorms are not equivalent on H^p_d. Indeed, for p = 1 and N = 1, the functions

fn(x) = 2^n [χ_{[1−2^{−n}, 1]}(x) − χ_{[1, 1+2^{−n}]}(x)]

satisfy ‖fn‖_{H1} = 1, but ‖fn‖_{H1_d} ∼ |n| tends to infinity with n.
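A sketch of why the dyadic quasinorm of fn grows like n (our reasoning, hedged; the text above only states the growth): pair fn against a logarithmic function that is bounded in dyadic BMO.

```latex
% Our sketch (not from the paper): lower bound for \|f_n\|_{H^1_d}.
% Let g(x) = \log_2 \tfrac{1}{1-x} for x \in [0,1) and g = 0 otherwise.
% No dyadic interval straddles x = 1 except those containing [0,2], so the
% dyadic mean oscillation of g is O(1): g lies in dyadic BMO.
% By H^1_d \text{--} BMO_d duality,
\|f_n\|_{H^1_d} \gtrsim \frac{|\langle f_n, g\rangle|}{\|g\|_{BMO_d}},
\qquad
\langle f_n, g\rangle = 2^n \int_{1-2^{-n}}^{1} \log_2\frac{1}{1-x}\,dx
                      = 2^n \int_{0}^{2^{-n}} \log_2\frac{1}{u}\,du
                      = n + \frac{1}{\ln 2}.
```

One can check that the matching upper bound of order n follows by decomposing each of the two bumps of fn into dyadic atoms over the scales between 2^{−n} and 1.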
Next, when Sα is the family of piecewise polynomial splines constructed above with α = N(1/p − 1), in analogy with the one-dimensional results in [4] and [1], H^p_{Sα} is referred to as the space generated by special atoms. We are now ready to describe H^p atoms as a superposition of dyadic and special atoms.
Lemma 1.1. Let a be an L² p-atom with defining cube Q, 0 < p ≤ 1, and α = N(1/p − 1). Then a can be written as a linear combination of 2^N dyadic atoms ai, each supported in one of the dyadic subcubes of the smallest special cube Q′_{n,k} containing Q, and a special atom b in Sα. More precisely, a(x) = Σ_{i=1}^{2^N} di ai(x) + Σ_{L=1}^{M} cL p^L_{−n,−k,α}(x), with |di|, |cL| ≤ c.
Proof. Suppose first that the defining cube of a is Q0, and let Q1, ..., Q_{2^N} denote the dyadic subcubes of Q0. Furthermore, let {e^i_1, ..., e^i_M} denote an orthonormal basis of the subspace Ai of L²(Qi) consisting of polynomials in P[α], 1 ≤ i ≤ 2^N. Put

αi(x) = a(x) χ_{Qi}(x) − Σ_{j=1}^{M} 〈a χ_{Qi}, e^i_j〉 e^i_j(x), 1 ≤ i ≤ 2^N,

and observe that 〈αi, e^i_j〉 = 0 for 1 ≤ j ≤ M. Therefore, αi has [α] vanishing moments, is supported in Qi, and

‖αi‖2 ≤ ‖a χ_{Qi}‖2 + Σ_{j=1}^{M} |〈a χ_{Qi}, e^i_j〉| ≤ (M + 1) ‖a χ_{Qi}‖2.
Then

ai(x) = (2^{N(1/2−1/p)} / (M + 1)) αi(x), 1 ≤ i ≤ 2^N,

is an L² p-dyadic atom. Finally, put

b(x) = a(x) − Σ_{i=1}^{2^N} ((M + 1) / 2^{N(1/2−1/p)}) ai(x).
Clearly b has [α] vanishing moments, is supported in Q0, coincides with a polynomial in P[α] on each dyadic subcube of Q0, and

‖b‖2² ≤ Σ_{i,j} |〈a χ_{Qi}, e^i_j〉|² ≤ M ‖a‖2².

So b ∈ A and, consequently, b(x) = Σ_{L=1}^{M} cL p^L(x), where

|cL| = |〈b, p^L〉| ≤ c, 1 ≤ L ≤ M.
In the general case, let Q be the defining cube of a, with side-length ℓ, and let n and k = (k1, ..., kN) be chosen so that 2^{n−1} ≤ ℓ < 2^n and

Q ⊂ [(k1 − 1)2^n, (k1 + 1)2^n] × ··· × [(kN − 1)2^n, (kN + 1)2^n].

Then (1/2)^N ≤ |Q|/2^{nN} < 1.
Now, for x ∈ Q0, let a′ be the translation and dilation of a given by

a′(x) = 2^{nN/p} a(2^n x1 − k1, ..., 2^n xN − kN).

Clearly, the [α] moments of a′ vanish, and

‖a′‖2 = 2^{nN/p} 2^{−nN/2} ‖a‖2 ≤ c |Q|^{1/p} |Q|^{−1/2} ‖a‖2 ≤ c.
Thus, a′ is a multiple of an atom with defining cube Q0. By the first part of the proof,

a′(x) = Σ_{i=1}^{2^N} di a′i(x) + Σ_{L=1}^{M} cL p^L(x), x ∈ Q0.

The support of each a′i is contained in one of the dyadic subcubes of Q0 and, consequently, there is a k such that

ai(x) = 2^{−nN/p} a′i(2^{−n} x1 − k1, ..., 2^{−n} xN − kN)

is an L² p-atom supported in one of the dyadic subcubes of Q. Similarly for the p^L's. Thus,

a(x) = Σ_{i=1}^{2^N} di ai(x) + Σ_{L=1}^{M} cL p^L_{−n,−k,N(1/p−1)}(x),

and we have finished. □
Theorem B follows readily from Lemma 1.1. Clearly, H^p_d + H^p_{Sα} ↪ H^p. Conversely, let f = Σj λj aj be in H^p. By Lemma 1.1 each aj can be written as a sum of dyadic and special atoms and, by distributing the sum, we can write f = fd + fs, with fd in H^p_d, fs in H^p_{Sα}, and

‖fd‖^p_{H^p_d}, ‖fs‖^p_{H^p_{Sα}} ≤ c Σj |λj|^p.

Taking the infimum over the decompositions of f we get ‖f‖_{H^p_d + H^p_{Sα}} ≤ c ‖f‖_{H^p}, and H^p ↪ H^p_d + H^p_{Sα}. This completes the proof.
The meaning of this decomposition is the following. Cubes in D are contained in one of the 2^N non-overlapping quadrants of R^N. To allow the information carried by a dyadic cube to be transmitted to an adjacent dyadic cube, they must be connected. The p^L_{n,k,α}'s channel information across adjacent dyadic cubes which would otherwise remain disconnected. The reader will have no difficulty in proving the quantitative version of this observation: Let T be a linear mapping defined on H^p, 0 < p ≤ 1, that assumes values in a quasinormed Banach space X. Then T is continuous if, and only if, the restrictions of T to H^p_d and H^p_{Sα} are continuous.
2. Characterizations of Λα
Theorem A describes how to pass from Λα,D to Λα, and we prove it next. Since (H^p)* = Λα and (H^p_d)* = Λα,D, from Theorem B it follows readily that Λα = Λα,D ∩ (H^p_{Sα})*, so it only remains to show that (H^p_{Sα})* is characterized by the condition Aα(g) < ∞.
First note that if g is a locally square-integrable function with Aα(g) < ∞ and f = Σ_{j,L} c_{j,L} p^L_{nj,kj,α}, then, since 0 < p ≤ 1,

|〈g, f〉| ≤ Σ_{j,L} |c_{j,L}| |〈g, p^L_{nj,kj,α}〉| ≤ Aα(g) Σ_{j,L} |c_{j,L}| ≤ Aα(g) ( Σ_{j,L} |c_{j,L}|^p )^{1/p},

and, consequently, taking the infimum over all atomic decompositions of f in H^p_{Sα}, we get g ∈ (H^p_{Sα})* and ‖g‖_{(H^p_{Sα})*} ≤ Aα(g).
To prove the converse we proceed as in [3]. Let Qn = [−2^n, 2^n]^N. We begin by observing that functions f in L²(Qn) that have vanishing moments up to order [α] and coincide with polynomials of degree [α] on the dyadic subcubes of Qn belong to H^p_{Sα}, with

‖f‖_{H^p_{Sα}} ≤ |Qn|^{1/p−1/2} ‖f‖2.
Given ℓ ∈ (H^p_{Sα})*, for a fixed n let us consider the restriction of ℓ to the space of L² functions f with [α] vanishing moments that are supported in Qn. Since

|ℓ(f)| ≤ ‖ℓ‖ ‖f‖_{H^p_{Sα}} ≤ ‖ℓ‖ |Qn|^{1/p−1/2} ‖f‖2,

this restriction is continuous with respect to the norm in L² and, consequently, it can be extended to a continuous linear functional on L² and represented as

ℓ(f) = ∫_{Qn} f(x) gn(x) dx,
where gn ∈ L²(Qn) satisfies ‖gn‖2 ≤ ‖ℓ‖ |Qn|^{1/p−1/2}. Clearly, gn is uniquely determined in Qn up to a polynomial pn in P[α]. Therefore,

gn(x) − pn(x) = gm(x) − pm(x), a.e. x ∈ Q_{min(n,m)}.

Consequently, if

g(x) = gn(x) − pn(x), x ∈ Qn,

then g(x) is well defined a.e. and, if f ∈ L² has [α] vanishing moments and is supported in Qn, we have

ℓ(f) = ∫ f(x) gn(x) dx = ∫ f(x) [gn(x) − pn(x)] dx = ∫ f(x) g(x) dx.

Moreover, since each 2^{nN/p} p^L(2^n · + k) is an L² p-atom, 1 ≤ L ≤ M, it readily follows that

Aα(g) = sup_{1≤L≤M} sup_{n,k} |〈g, 2^{nN/p} p^L(2^n · + k)〉| ≤ ‖ℓ‖ sup_{L} ‖p^L_{n,k,α}‖_{H^p_{Sα}} ≤ ‖ℓ‖,

and, consequently, Aα(g) ≤ ‖ℓ‖, and (H^p_{Sα})* is the desired space. □
The reader will have no difficulty in showing that this result implies the
following: Let T be a bounded linear operator from a quasinormed space X
into Λα,D. Then, T is bounded from X into Λα if, and only if, Aα(Tx) ≤
c ‖x‖X for every x ∈ X .
The process of averaging the translates of dyadic BMO functions leads to
BMO, and is an important tool in obtaining results in BMO once they are
known to be true in its dyadic counterpart, BMOd, see [7]. It is also known
that BMO can be obtained as the intersection of BMOd and one of its shifted
counterparts, see [8]. These results motivate our next proposition, which
essentially says that g ∈ Λα if, and only if, g ∈ Λα,D and g is in the Lipschitz
class obtained from the shifted dyadic grid. Note that the shifts involved in
this class are in all directions parallel to the coordinate axes and depend on
the side-length of the cube.
Proposition 2.1. Λα = Λα,D0 , and ‖g‖Λα ∼ ‖g‖Λα,D0 .
Proof. It is obvious that ‖g‖_{Λα,D0} ≤ ‖g‖_{Λα}. To show the other inequality we invoke Theorem A. Since D ⊂ D0, it suffices to estimate Aα(g) or, equivalently, |〈g, p〉| for p ∈ Sα, α = N(1/p − 1). So, pick p = p^L_{n,k,α} in Sα. The defining cube Q of p^L_{n,k,α} is in D0 and, since p^L_{n,k,α} has [α] vanishing moments,
〈p^L_{n,k,α}, pQ(g)〉 = 0. Therefore,

|〈g, p^L_{n,k,α}〉| = |〈g − pQ(g), p^L_{n,k,α}〉|
≤ ‖p^L_{n,k,α}‖2 ‖g − pQ(g)‖_{L²(Q)}
≤ |Q|^{α/N} |Q|^{1/2} ‖p^L_{n,k,α}‖2 ‖g‖_{Λα,D0}.

Now, a simple change of variables gives |Q|^{α/N} |Q|^{1/2} ‖p^L_{n,k,α}‖2 ≤ 1 and, consequently, also Aα(g) ≤ ‖g‖_{Λα,D0}. □
References
[1] W. Abu-Shammala, J.-L. Shiu, and A. Torchinsky, Characterizations of the Hardy
space H1 and BMO, preprint.
[2] H.-Q. Bui and R. S. Laugesen, Approximation and spanning in the Hardy space by affine systems, Constr. Approx., to appear.
[3] A. P. Calderón and A. Torchinsky, Parabolic maximal functions associated with a distribution, II, Advances in Math. 24 (1977), 101–171.
[4] G. S. de Souza, Spaces formed by special atoms, I, Rocky Mountain J. Math. 14 (1984),
no. 2, 423–431.
[5] S. Fridli, Transition from the dyadic to the real nonperiodic Hardy space, Acta Math. Acad. Paedagog. Nyházi. (N.S.) 16 (2000), 1–8 (electronic).
[6] J. García-Cuerva and J. L. Rubio de Francia, Weighted norm inequalities and related topics, Notas de Matemática 116, North-Holland, Amsterdam, 1985.
[7] J. Garnett and P. Jones, BMO from dyadic BMO, Pacific J. Math. 99 (1982), no. 2,
351–371.
[8] T. Mei, BMO is the intersection of two translates of dyadic BMO, C. R. Math. Acad.
Sci. Paris 336 (2003), no. 12, 1003–1006.
[9] T. M. Le and L. A. Vese, Image decomposition using total variation and div(BMO)*, Multiscale Model. Simul. 4 (2005), no. 2, 390–423.
[10] A. Torchinsky, Real-variable methods in harmonic analysis, Dover Publications, Inc.,
Mineola, NY, 2004.
Department of Mathematics, Indiana University, Bloomington IN 47405
E-mail address: wabusham@indiana.edu
Department of Mathematics, Indiana University, Bloomington IN 47405
E-mail address: torchins@indiana.edu
| FROM DYADIC Λα TO Λα
WAEL ABU-SHAMMALA AND ALBERTO TORCHINSKY
Abstract. In this paper we show how to compute the Λα norm , α ≥ 0,
using the dyadic grid. This result is a consequence of the description of
the Hardy spaces Hp(RN ) in terms of dyadic and special atoms.
Recently, several novel methods for computing the BMO norm of a function
f in two dimensions were discussed in [9]. Given its importance, it is also of
interest to explore the possibility of computing the norm of a BMO function,
or more generally a function in the Lipschitz class Λα, using the dyadic grid
in RN . It turns out that the BMO question is closely related to that of
approximating functions in the Hardy space H1(RN ) by the Haar system.
The approximation in H1(RN ) by affine systems was proved in [2], but this
result does not apply to the Haar system. Now, if HA(R) denotes the closure
of the Haar system in H1(R), it is not hard to see that the distance d(f,HA)
of f ∈ H1(R) to HA is ∼
f(x) dx
∣, see [1]. Thus, neither dyadic atoms
suffice to describe the Hardy spaces, nor the evaluation of the norm in BMO
can be reduced to a straightforward computation using the dyadic intervals.
In this paper we address both of these issues. First, we give a characterization
of the Hardy spaces Hp(RN ) in terms of dyadic and special atoms, and then,
by a duality argument, we show how to compute the norm in Λα(R
N ), α ≥ 0,
using the dyadic grid.
We begin by introducing some notations. Let J denote a family of cubes
Q in RN , and Pd the collection of polynomials in R
N of degree less than or
equal to d. Given α ≥ 0, Q ∈ J , and a locally integrable function g, let pQ(g)
denote the unique polynomial in P[α] such that [g − pQ(g)]χQ has vanishing
moments up to order [α].
For a locally square-integrable function g, we consider the maximal function
α,J g(x) given by
α,J g(x) = sup
x∈Q,Q∈J
|Q|α/N
|g(y)− pQ(g)(y)|
1991 Mathematics Subject Classification. 42B30,42B35.
http://arxiv.org/abs/0704.0005v1
2 WAEL ABU-SHAMMALA AND ALBERTO TORCHINSKY
The Lipschitz space Λα,J consists of those functions g such that M
α,J g is
in L∞, ‖g‖Λα,J = ‖M
α,J g‖∞; when the family in question contains all cubes
in RN , we simply omit the subscript J . Of course, Λ0 = BMO.
Two other families, of dyadic nature, are of interest to us. Intervals in R of
the form In,k = [ (k−1)2
n, k2n], where k and n are arbitrary integers, positive,
negative or 0, are said to be dyadic. In RN , cubes which are the product of
dyadic intervals of the same length, i.e., of the form Qn,k = In,k1 ×· · ·×In,kN ,
are called dyadic, and the collection of all such cubes is denoted D.
There is also the family D0. Let I′_{n,k} = [(k−1)2^n, (k+1)2^n], where k and
n are arbitrary integers. Clearly I′_{n,k} is dyadic if k is odd, but not if k is even.
Now, the collection {I′_{n,k} : n, k integers} contains all dyadic intervals as well
as the shifts [(k−1)2^n + 2^{n−1}, k 2^n + 2^{n−1}] of the dyadic intervals by their
half length. In R^N, put D0 = {Q′_{n,k} : Q′_{n,k} = I′_{n,k1} × · · · × I′_{n,kN}}; Q′_{n,k} is
called a special cube. Note that D0 contains D properly.
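These interval families are easy to experiment with. A small sketch (the helper names are ours) checking that I′_{n,k} is dyadic exactly when k is odd, and that every dyadic interval I_{n,k} occurs in the shifted family as I′_{n−1,2k−1}, so that D sits inside D0:

```python
from fractions import Fraction

def I(n, k):
    """Dyadic interval I_{n,k} = [(k-1)2^n, k 2^n]."""
    return (Fraction(k - 1) * Fraction(2) ** n, Fraction(k) * Fraction(2) ** n)

def I_prime(n, k):
    """Special interval I'_{n,k} = [(k-1)2^n, (k+1)2^n]."""
    return (Fraction(k - 1) * Fraction(2) ** n, Fraction(k + 1) * Fraction(2) ** n)

def is_dyadic(interval):
    """[a, b] is dyadic iff its length is 2^m and b is an integer multiple of 2^m."""
    a, b = interval
    length = b - a
    L = length
    while L > 1:
        L /= 2
    while L < 1:
        L *= 2
    if L != 1:                          # length is not a power of 2
        return False
    return (b / length).denominator == 1

print(is_dyadic(I_prime(0, 3)))    # k odd  -> True   ([2,4] is dyadic)
print(is_dyadic(I_prime(0, 2)))    # k even -> False  ([1,3] is not)
print(I(0, 5) == I_prime(-1, 9))   # True: I_{n,k} = I'_{n-1, 2k-1}, so D is in D0
```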
Finally, given I′_{n,k}, let I^L_{n,k} = [(k−1)2^n, k 2^n] and I^R_{n,k} = [k 2^n, (k+1)2^n].
The 2^N subcubes of Q′_{n,k} = I′_{n,k1} × · · · × I′_{n,kN} of the form I^{S1}_{n,k1} × · · · × I^{SN}_{n,kN},
S_j = L or R, 1 ≤ j ≤ N, are called the dyadic subcubes of Q′_{n,k}.
Let Q0 denote the special cube [−1, 1]^N. Given α ≥ 0, we construct a
family Sα of piecewise polynomial splines in L^2(Q0) that will be useful in
characterizing Λα. Let A be the subspace of L^2(Q0) consisting of all functions
with vanishing moments up to order [α] which coincide with a polynomial
in P[α] on each of the 2^N dyadic subcubes of Q0. A is a finite-dimensional
subspace of L^2(Q0), and, therefore, by the Gram-Schmidt orthogonalization
process, say, A has an orthonormal basis in L^2(Q0) consisting of functions
p^1, . . . , p^M with vanishing moments up to order [α], which coincide with a
polynomial in P[α] on each dyadic subcube of Q0. Together with each p^L
we also consider all dyadic dilations and integer translations given by

    p^L_{n,k,α}(x) = 2^{n(N+α)} p^L(2^n x1 + k1, . . . , 2^n xN + kN) ,  1 ≤ L ≤ M ,
and let

    Sα = {p^L_{n,k,α} : n, k integers, 1 ≤ L ≤ M} .
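In the simplest case N = 1 and α = 0, the space A is one-dimensional and the Gram-Schmidt step produces (up to sign) the Haar function on [−1, 1], normalized in L^2. A sketch of that computation, coordinatizing a function constant on each half of Q0 by its two values (our model, for illustration only):

```python
import math

# N = 1, alpha = 0: A = zero-mean functions on Q0 = [-1,1] that are constant
# on each half [-1,0], [0,1].  Represent such a function by (value on left
# half, value on right half); both halves have length 1, so
# <u, v> = u[0]*v[0] + u[1]*v[1] is the L2(Q0) inner product.

def mean_zero(u):
    """Project onto zero mean over Q0 (the order-0 moment vanishes)."""
    m = (u[0] + u[1]) / 2
    return (u[0] - m, u[1] - m)

def gram_schmidt(vectors):
    """Plain Gram-Schmidt in this 2-dimensional coordinate model."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = w[0] * b[0] + w[1] * b[1]
            w = [w[0] - dot * b[0], w[1] - dot * b[1]]
        norm = math.hypot(w[0], w[1])
        if norm > 1e-12:
            basis.append((w[0] / norm, w[1] / norm))
    return basis

# spanning set: the indicator of each half, forced to have zero mean
spanning = [mean_zero((1.0, 0.0)), mean_zero((0.0, 1.0))]
basis = gram_schmidt(spanning)
print(len(basis))    # 1: A is one-dimensional, spanned by a Haar-type function
print(basis[0])      # (+-1/sqrt(2), -+1/sqrt(2))
```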
Our first result shows how the dyadic grid can be used to compute the
norm in Λα.
Theorem A. Let g be a locally square-integrable function and α ≥ 0. Then,
g ∈ Λα if, and only if, g ∈ Λα,D and Aα(g) = sup_{p∈Sα} |⟨g, p⟩| < ∞. Moreover,

    ‖g‖_{Λα} ∼ ‖g‖_{Λα,D} + Aα(g) .
Furthermore, it is also true, and the proof is given in Proposition 2.1 below,
that ‖g‖_{Λα} ∼ ‖g‖_{Λα,D0}. However, in this simpler formulation, the tree
structure of the cubes in D has been lost.
FROM DYADIC Λα TO Λα 3
The proof of Theorem A relies on a close investigation of the predual of
Λα, namely, the Hardy space H^p(R^N) with 0 < p = N/(α + N) ≤ 1. In the
process we characterize H^p in terms of simpler subspaces: H^p_D, or dyadic H^p,
and H^p_{Sα}, the space generated by the special atoms in Sα. Specifically, we have:

Theorem B. Let 0 < p ≤ 1, and α = N(1/p − 1). We then have

    H^p = H^p_D + H^p_{Sα} ,

where the sum is understood in the sense of quasinormed Banach spaces.
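Since the exponent relation α = N(1/p − 1) is used throughout, it is worth recording once how p and α determine each other:

```latex
\alpha = N\Bigl(\frac{1}{p}-1\Bigr)
\iff \frac{1}{p} = 1 + \frac{\alpha}{N}
\iff p = \frac{N}{N+\alpha},
\qquad\text{so that } 0 < p \le 1 \iff \alpha \ge 0 .
```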
The paper is organized as follows. In Section 1 we show that individual
Hp atoms can be written as a superposition of dyadic and special atoms;
this fact may be thought of as an extension of the one-dimensional result of
Fridli concerning L^∞ 1-atoms, see [5] and [1]. Then, we prove Theorem B.
In Section 2 we discuss how to pass from Λα,D, and Λα,D0 , to the Lipschitz
space Λα.
1. Characterization of the Hardy spaces Hp
We adopt the atomic definition of the Hardy spaces Hp, 0 < p ≤ 1, see
[6] and [10]. Recall that a compactly supported function a with [N(1/p − 1)]
vanishing moments is an L^2 p-atom with defining cube Q if supp(a) ⊆ Q, and

    |Q|^{1/p} ( (1/|Q|) ∫_Q |a(x)|^2 dx )^{1/2} ≤ 1 .
The Hardy space H^p(R^N) = H^p consists of those distributions f that can be
written as f = Σ_j λj aj, where the aj's are H^p atoms, Σ_j |λj|^p < ∞, and the
convergence is in the sense of distributions as well as in H^p. Furthermore,

    ‖f‖_{H^p} ∼ inf ( Σ_j |λj|^p )^{1/p} ,

where the infimum is taken over all possible atomic decompositions of f. This
last expression has traditionally been called the atomic H^p norm of f.
Collections of atoms with special properties can be used to gain a better
understanding of the Hardy spaces. Formally, let A be a non-empty subset
of L^2 p-atoms in the unit ball of H^p. The atomic space H^p_A spanned by A
consists of those ϕ in H^p of the form

    ϕ = Σ_j λj aj ,  aj ∈ A ,  Σ_j |λj|^p < ∞ .

It is readily seen that, endowed with the atomic norm

    ‖ϕ‖_{H^p_A} = inf { ( Σ_j |λj|^p )^{1/p} : ϕ = Σ_j λj aj , aj ∈ A } ,

H^p_A becomes a complete quasinormed space. Clearly, H^p_A ⊆ H^p, and, for
f ∈ H^p_A, ‖f‖_{H^p} ≤ ‖f‖_{H^p_A}.
Two families are of particular interest to us. When A is the collection
of all L^2 p-atoms whose defining cube is dyadic, the resulting space is H^p_D,
or dyadic H^p. Now, although ‖f‖_{H^p} ≤ ‖f‖_{H^p_D}, the two quasinorms are not
equivalent on H^p_D. Indeed, for p = 1 and N = 1, the functions

    fn(x) = 2^n [ χ_{[1−2^{−n},1]}(x) − χ_{[1,1+2^{−n}]}(x) ] ,

satisfy ‖fn‖_{H^1} = 1, but ‖fn‖_{H^1_D} ∼ |n| tends to infinity with n.
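A quick sanity check on these functions in exact arithmetic (the piecewise representation is our own): each fn has one vanishing moment and fixed L^1 size, consistent with ‖fn‖_{H^1} staying bounded; the |n| growth of the dyadic quasinorm is the point of the example and is not computed here.

```python
from fractions import Fraction

def fn_pieces(n):
    """f_n = 2^n * (chi_[1-2^-n, 1] - chi_[1, 1+2^-n]), as (height, a, b) pieces."""
    h = Fraction(2) ** n
    eps = Fraction(1, 2 ** n)
    return [(h, 1 - eps, Fraction(1)), (-h, Fraction(1), 1 + eps)]

def integral(pieces):
    return sum(h * (b - a) for h, a, b in pieces)

def l1_norm(pieces):
    return sum(abs(h) * (b - a) for h, a, b in pieces)

for n in (1, 5, 20):
    p = fn_pieces(n)
    print(integral(p), l1_norm(p))   # 0 2 each time: vanishing moment, fixed size
```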
Next, when Sα is the family of piecewise polynomial splines constructed
above with α = N(1/p − 1), in analogy with the one-dimensional results in
[4] and [1], H^p_{Sα} is referred to as the space generated by special atoms.
We are now ready to describe Hp atoms as a superposition of dyadic and
special atoms.
Lemma 1.1. Let a be an L^2 p-atom with defining cube Q, 0 < p ≤ 1,
and α = N(1/p − 1). Then a can be written as a linear combination of 2^N
dyadic atoms ai, each supported in one of the dyadic subcubes of the smallest
special cube Q′_{n,k} containing Q, and a special atom b in Sα. More precisely,

    a(x) = Σ_{i=1}^{2^N} di ai(x) + Σ_{L=1}^{M} cL p^L_{−n,−k,α}(x) ,  with |di| , |cL| ≤ c .
Proof. Suppose first that the defining cube of a is Q0, and let Q1, . . . , Q_{2^N}
denote the dyadic subcubes of Q0. Furthermore, let {e^i_1, . . . , e^i_M} denote an
orthonormal basis of the subspace Ai of L^2(Qi) consisting of polynomials in
P[α], 1 ≤ i ≤ 2^N. Put

    αi(x) = a(x) χ_{Qi}(x) − Σ_{j=1}^{M} ⟨a χ_{Qi} , e^i_j⟩ e^i_j(x) ,  1 ≤ i ≤ 2^N ,

and observe that ⟨αi, e^i_j⟩ = 0 for 1 ≤ j ≤ M. Therefore, αi has [α] vanishing
moments, is supported in Qi, and

    ‖αi‖_2 ≤ ‖a χ_{Qi}‖_2 + Σ_{j=1}^{M} |⟨a χ_{Qi} , e^i_j⟩| ≤ (M + 1) ‖a χ_{Qi}‖_2 .
Then

    ai(x) = ( 2^{N(1/2−1/p)} / (M + 1) ) αi(x) ,  1 ≤ i ≤ 2^N ,

is an L^2 p-dyadic atom. Finally, put

    b(x) = a(x) − ( (M + 1) / 2^{N(1/2−1/p)} ) Σ_{i=1}^{2^N} ai(x) .
Clearly b has [α] vanishing moments, is supported in Q0, coincides with a
polynomial in P[α] on each dyadic subcube of Q0, and
    ‖b‖_2^2 ≤ Σ_{i=1}^{2^N} Σ_{j=1}^{M} |⟨a χ_{Qi} , e^i_j⟩|^2 ≤ M ‖a‖_2^2 .

So, b ∈ A, and, consequently, b(x) = Σ_{L=1}^{M} cL p^L(x), where

    |cL| = |⟨b, p^L⟩| ≤ c ,  1 ≤ L ≤ M .
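The first part of this argument can be exercised numerically in the case N = 1, [α] = 0, where the local polynomial parts are the means over the two halves of Q0; the grid, the sample atom-like a, and the tolerances below are our own:

```python
# Sketch of the first step of the proof for N = 1, [alpha] = 0 (our
# discretization): on each dyadic half of Q0 = [-1, 1], subtract the local
# polynomial part (here, the local mean); what is left on each half is
# alpha_i, and the removed piecewise-constant part is b, which lies in A.

n = 1000                                  # grid points per half
def mean(v): return sum(v) / len(v)

# a sample zero-mean "atom-like" a on [-1, 1]: a(x) = x (vanishing 0th moment)
left  = [-1 + (i + 0.5) / n for i in range(n)]      # midpoint grid on [-1, 0]
right = [(i + 0.5) / n for i in range(n)]           # midpoint grid on [0, 1]
a_left, a_right = left[:], right[:]                 # a(x) = x sampled

m1, m2 = mean(a_left), mean(a_right)     # local polynomial parts on Q1, Q2
alpha1 = [v - m1 for v in a_left]        # zero mean on [-1, 0]
alpha2 = [v - m2 for v in a_right]       # zero mean on [0, 1]
b_left, b_right = [m1] * n, [m2] * n     # b = a - alpha1 - alpha2

# reconstruction and structure checks
print(max(abs(x + y - v) for x, y, v in zip(alpha1, b_left, a_left)))   # ~0
print(abs(mean(alpha1)) < 1e-12, abs(mean(b_left) + mean(b_right)) < 1e-12)
```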
In the general case, let Q be the defining cube of a, of side-length ℓ, and
let n and k = (k1, . . . , kN) be chosen so that 2^{n−1} ≤ ℓ < 2^n, and

    Q ⊂ [(k1 − 1)2^n, (k1 + 1)2^n] × · · · × [(kN − 1)2^n, (kN + 1)2^n] .

Then, (1/2)^N ≤ |Q|/2^{nN} < 1.
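The choice of the scale n can be sketched as follows (scale_index and the test values are our own illustration):

```python
import math

def scale_index(ell):
    """Integer n with 2^(n-1) <= ell < 2^n (our helper; assumes ell > 0)."""
    n = math.ceil(math.log2(ell))
    if 2.0 ** n <= ell:        # exact powers of 2 land on the boundary: bump n
        n += 1
    return n

def volume_ratio(ell, N):
    """|Q| / 2^(nN) for a cube Q of side ell in R^N."""
    n = scale_index(ell)
    return ell ** N / 2.0 ** (n * N)

for ell, N in [(3.0, 2), (0.3, 3), (4.0, 1), (1.0, 5)]:
    r = volume_ratio(ell, N)
    print(0.5 ** N <= r < 1.0)    # True: (1/2)^N <= |Q|/2^(nN) < 1
```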
Now, let a′ be the translation and dilation of a given by

    a′(x) = 2^{nN/p} a(2^n x1 − k1, . . . , 2^n xN − kN) ,  x ∈ Q0 .

Clearly, the [α] moments of a′ vanish, and

    ‖a′‖_2 = 2^{nN/p} 2^{−nN/2} ‖a‖_2 ≤ c |Q|^{1/p} |Q|^{−1/2} ‖a‖_2 ≤ c .
Thus, a′ is a multiple of an atom with defining cube Q0. By the first part of
the proof,
    a′(x) = Σ_{i=1}^{2^N} di a′i(x) + Σ_{L=1}^{M} cL p^L(x) ,  x ∈ Q0 .

The support of each a′i is contained in one of the dyadic subcubes of Q0 and,
consequently, there is a k such that

    ai(x) = 2^{−nN/p} a′i(2^{−n} x1 − k1, . . . , 2^{−n} xN − kN)

is an L^2 p-atom supported in one of the dyadic subcubes of Q. Similarly
for the p^L's. Thus,

    a(x) = Σ_{i=1}^{2^N} di ai(x) + Σ_{L=1}^{M} cL p^L_{−n,−k,N(1/p−1)}(x) ,
and we have finished. □
Theorem B follows readily from Lemma 1.1. Clearly, H^p_D + H^p_{Sα} ↪ H^p.
Conversely, let f = Σ_j λj aj be in H^p. By Lemma 1.1 each aj can be written
as a sum of dyadic and special atoms, and, by distributing the sum, we can
write f = fd + fs, with fd in H^p_D, fs in H^p_{Sα}, and

    ‖fd‖_{H^p_D}^p , ‖fs‖_{H^p_{Sα}}^p ≤ c Σ_j |λj|^p .

Taking the infimum over the decompositions of f we get ‖f‖_{H^p_D + H^p_{Sα}} ≤
c ‖f‖_{H^p}, and H^p ↪ H^p_D + H^p_{Sα}. This completes the proof.
The meaning of this decomposition is the following. Cubes in D are contained
in one of the 2^N non-overlapping quadrants of R^N. To allow for the
information carried by a dyadic cube to be transmitted to an adjacent dyadic
cube, they must be connected. The p^L_{n,k,α}'s channel information across
adjacent dyadic cubes which would otherwise remain disconnected. The reader
will have no difficulty in proving the quantitative version of this observation:
Let T be a linear mapping defined on Hp, 0 < p ≤ 1, that assumes values in
a quasinormed Banach space X . Then, T is continuous if, and only if, the
restrictions of T to H^p_D and H^p_{Sα} are continuous.
2. Characterizations of Λα
Theorem A describes how to pass from Λα,D to Λα, and we prove it next.
Since (H^p)∗ = Λα and (H^p_D)∗ = Λα,D, from Theorem B it follows readily that
Λα = Λα,D ∩ (H^p_{Sα})∗, so it only remains to show that (H^p_{Sα})∗ is characterized
by the condition Aα(g) < ∞.
First note that if g is a locally square-integrable function with Aα(g) < ∞
and f = Σ_{j,L} cj,L p^L_{nj,kj,α}, since 0 < p ≤ 1,

    |⟨g, f⟩| ≤ Σ_{j,L} |cj,L| |⟨g, p^L_{nj,kj,α}⟩| ≤ Aα(g) Σ_{j,L} |cj,L| ≤ Aα(g) ( Σ_{j,L} |cj,L|^p )^{1/p} ,

and, consequently, taking the infimum over all atomic decompositions of f in
H^p_{Sα}, we get g ∈ (H^p_{Sα})∗ and ‖g‖_{(H^p_{Sα})∗} ≤ Aα(g).
To prove the converse we proceed as in [3]. Let Qn = [−2^n, 2^n]^N. We begin
by observing that functions f in L^2(Qn) that have vanishing moments up to
order [α] and coincide with polynomials of degree [α] on the dyadic subcubes
of Qn belong to H^p_{Sα}, and

    ‖f‖_{H^p_{Sα}} ≤ |Qn|^{1/p−1/2} ‖f‖_2 .
Given ℓ ∈ (H^p_{Sα})∗, for a fixed n let us consider the restriction of ℓ to the space
of L^2 functions f with [α] vanishing moments that are supported in Qn. Since

    |ℓ(f)| ≤ ‖ℓ‖ ‖f‖_{H^p_{Sα}} ≤ ‖ℓ‖ |Qn|^{1/p−1/2} ‖f‖_2 ,

this restriction is continuous with respect to the norm in L^2 and, consequently,
it can be extended to a continuous linear functional in L^2 and represented as

    ℓ(f) = ∫ f(x) gn(x) dx ,
where gn ∈ L^2(Qn) and satisfies ‖gn‖_2 ≤ ‖ℓ‖ |Qn|^{1/p−1/2}. Clearly, gn is
uniquely determined in Qn up to a polynomial pn in P[α]. Therefore,

    gn(x) − pn(x) = gm(x) − pm(x) ,  a.e. x ∈ Q_{min(n,m)} .

Consequently, if

    g(x) = gn(x) − pn(x) ,  x ∈ Qn ,

g(x) is well defined a.e. and, if f ∈ L^2 has [α] vanishing moments and is
supported in Qn, we have

    ℓ(f) = ∫ f(x) gn(x) dx = ∫ f(x) [gn(x) − pn(x)] dx = ∫ f(x) g(x) dx .
Moreover, since each 2^{nN/p} p^L(2^n · + k) is an L^2 p-atom, 1 ≤ L ≤ M, it readily
follows that

    Aα(g) = sup_{1≤L≤M} sup_{n,k} |⟨g, 2^{nN/p} p^L(2^n · + k)⟩| ≤ ‖ℓ‖ sup_{1≤L≤M} ‖p^L‖_{H^p} ≤ ‖ℓ‖ ,

and, consequently, Aα(g) ≤ ‖ℓ‖, and (H^p_{Sα})∗ is the desired space. □
The reader will have no difficulty in showing that this result implies the
following: Let T be a bounded linear operator from a quasinormed space X
into Λα,D. Then, T is bounded from X into Λα if, and only if, Aα(Tx) ≤
c ‖x‖X for every x ∈ X .
The process of averaging the translates of dyadic BMO functions leads to
BMO, and is an important tool in obtaining results in BMO once they are
known to be true in its dyadic counterpart, BMOd, see [7]. It is also known
that BMO can be obtained as the intersection of BMOd and one of its shifted
counterparts, see [8]. These results motivate our next proposition, which
essentially says that g ∈ Λα if, and only if, g ∈ Λα,D and g is in the Lipschitz
class obtained from the shifted dyadic grid. Note that the shifts involved in
this class are in all directions parallel to the coordinate axes and depend on
the side-length of the cube.
Proposition 2.1. Λα = Λα,D0 , and ‖g‖Λα ∼ ‖g‖Λα,D0 .
Proof. It is obvious that ‖g‖_{Λα,D0} ≤ ‖g‖_{Λα}. To show the other inequality we
invoke Theorem A. Since D ⊂ D0, it suffices to estimate Aα(g) or, equivalently,
|⟨g, p⟩| for p ∈ Sα, α = N(1/p − 1). So, pick p = p^L_{n,k,α} in Sα. The
defining cube Q of p^L_{n,k,α} is in D0 and, since p^L_{n,k,α} has [α] vanishing moments,
⟨p^L_{n,k,α}, pQ(g)⟩ = 0. Therefore,

    |⟨g, p^L_{n,k,α}⟩| = |⟨g − pQ(g), p^L_{n,k,α}⟩|
                        ≤ ‖p^L_{n,k,α}‖_2 ‖g − pQ(g)‖_{L^2(Q)}
                        ≤ |Q|^{α/N} |Q|^{1/2} ‖p^L_{n,k,α}‖_2 ‖g‖_{Λα,D0} .

Now, a simple change of variables gives |Q|^{α/N} |Q|^{1/2} ‖p^L_{n,k,α}‖_2 ≤ 1 and,
consequently, also Aα(g) ≤ ‖g‖_{Λα,D0}. □
References
[1] W. Abu-Shammala, J.-L. Shiu, and A. Torchinsky, Characterizations of the Hardy
space H1 and BMO, preprint.
[2] H.-Q. Bui and R. S. Laugesen, Approximation and spanning in the Hardy space, by
affine systems, Constr. Approx., to appear.
[3] A. P. Calderón and A. Torchinsky, Parabolic maximal functions associated with a
distribution, II, Advances in Math. 24 (1977), 101–171.
[4] G. S. de Souza, Spaces formed by special atoms, I, Rocky Mountain J. Math. 14 (1984),
no. 2, 423–431.
[5] S. Fridli, Transition from the dyadic to the real nonperiodic Hardy space, Acta Math.
Acad. Paedagog. Nyházi. (N.S.) 16 (2000), 1–8 (electronic).
[6] J. García-Cuerva and J. L. Rubio de Francia, Weighted norm inequalities and related
topics, Notas de Matemática 116, North Holland, Amsterdam, 1985.
[7] J. Garnett and P. Jones, BMO from dyadic BMO, Pacific J. Math. 99 (1982), no. 2,
351–371.
[8] T. Mei, BMO is the intersection of two translates of dyadic BMO, C. R. Math. Acad.
Sci. Paris 336 (2003), no. 12, 1003–1006.
[9] T. M. Le and L. A. Vese, Image decomposition using total variation and div(BMO)∗,
Multiscale Model. Simul. 4 (2005), no. 2, 390–423.
[10] A. Torchinsky, Real-variable methods in harmonic analysis, Dover Publications, Inc.,
Mineola, NY, 2004.
Department of Mathematics, Indiana University, Bloomington IN 47405
E-mail address: wabusham@indiana.edu
Department of Mathematics, Indiana University, Bloomington IN 47405
E-mail address: torchins@indiana.edu
1. Characterization of the Hardy spaces Hp
2. Characterizations of
References
| DE DÍA A DÍA
WAEL ABU-SHAMMALA Y ALBERTO TORCHINSKY
Resumen. En este artículo mostramos cómo calcular la norma â € ¢, α ≥ 0,
usando la cuadrícula dyádica. Este resultado es una consecuencia de la descripción de
el Hardy espacios Hp(RN) en términos de átomos dyadic y especiales.
Recientemente, varios métodos novedosos para calcular la norma BMO de una función
f en dos dimensiones fueron discutidos en [9]. Dada su importancia, es también de
interés por explorar la posibilidad de calcular la norma de una función de OMG,
o más generalmente una función en la clase Lipschitz, usando la cuadrícula dyádica
en RN. Resulta que la cuestión de los OMG está estrechamente relacionada con la de los OMG.
funciones de aproximación en el espacio Hardy H1(RN) por el sistema Haar.
La aproximación en H1(RN ) por los sistemas afín se demostró en [2], pero este
el resultado no se aplica al sistema Haar. Ahora, si HA(R) denota el cierre
del sistema Haar en H1(R), no es difícil ver que la distancia d(f,HA)
de f-H1(R) a HA
f(x) dx
•, véase [1]. Por lo tanto, ni los átomos dyádicos
suficiente para describir los espacios Hardy, ni la evaluación de la norma en BMO
puede reducirse a un cálculo sencillo utilizando los intervalos dyádicos.
En este documento abordamos ambas cuestiones. Primero, damos una caracterización
de los espacios Hardy Hp(RN ) en términos de átomos dyadic y especiales, y luego,
por un argumento de dualidad, mostramos cómo calcular la norma en â € (R
N ), α ≥ 0,
usando la cuadrícula dyádica.
Comenzamos por introducir algunas anotaciones. Deja que J denote una familia de cubos
Q en RN, y Pd la colección de polinomios en R
N de grado inferior o igual a
igual a d. Dado α ≥ 0, Q â € J, y una función localmente integrable g, dejar pQ(g)
denotar el polinomio único en P[α] de tal manera que [g − pQ(g)]χQ ha desaparecido
momentos hasta el orden [α].
Para una función localmente integrable cuadrado g, consideramos la función máxima
α,J g(x) dado por
α,J g(x) = sup
X-Q-Q-J.
Q/N
g(y)− pQ(g(y)
1991 Clasificación del sujeto de las matemáticas. 42B30,42B35.
http://arxiv.org/abs/0704.0005v1
2 WAEL ABU-SHAMMALA Y ALBERTO TORCHINSKY
El espacio Lipschitz,J consiste en esas funciones g tal que M
α,J g es
en L, g,J = M
α,J g; cuando la familia en cuestión contiene todos los cubos
en RN, simplemente omitimos el subíndice J. Por supuesto, 0 = BMO.
Otras dos familias, de naturaleza dyádica, son de interés para nosotros. Intervalos en R de
la forma In,k = [ (k−1)2
n, k2n], donde k y n son enteros arbitrarios, positivos,
negativo o 0, se dice que es dyádico. En RN, cubos que son el producto de
intervalos dyádicos de la misma longitud, es decir, de la forma Qn,k = In,k1 · In,kN,
se llaman dyádicos, y la colección de todos estos cubos se denota D.
También está la familia D0. Deja que yo...
n,k = [(k− 1)2
n, (k+ 1)2n], donde k y
n son enteros arbitrarios. Claramente ′n,k es dyadic si k es impar, pero no si k es par.
Ahora, la colección {I ′n,k : n, k enteros} contiene todos los intervalos dyádicos también
como los cambios [(k − 1)2n + 2n−1, k 2n + 2n−1] de los intervalos dyádicos por su
La mitad de largo. En RN, poner D0 = {Q
n,k : Q
n,k = I
× · · × I ′n,kN }; Q
n,k es
llamado cubo especial. Tenga en cuenta que D0 contiene D correctamente.
Por último, dado I ′n,k, dejar que yo
n,k = [(k − 1)2
n, k2n], y I
n,k = [k2
n, (k + 1)2n].
Los subcubos 2N de Q′n,k = I
× · · · × I ′n,kN del formulario I
× · · · × I
Sj = L o R, 1 ≤ j ≤ N, se llaman subcubes dyádicos de Q
Que Q0 denote el cubo especial [−1, 1]
N. Dado α ≥ 0, construimos un
familia Sα de splines polinomios a trozos en L
2 (Q0) que será útil en
caracterizando a â € â € TM. Dejar A ser el subespacio de L
2-Q0) que consiste en todas las funciones
con momentos de desaparición hasta el orden [α] que coinciden con un polinomio
en P[α] sobre cada uno de los 2
N subcubes dyádicos de Q0. A es una dimensión finita
subespacio de L2(Q0), y, por lo tanto, por la ortogonalización Graham-Schmidt
proceso, digamos, A tiene una base ortonormal en L2(Q0) que consiste en funciones
p1,. ..., pM con momentos de desaparición hasta el orden [α], que coinciden con un
polinomio en P[α] en cada subintervalo diádico de Q0. Junto con cada p
también consideramos todas las dilaciones dyádicas y traducciones enteras dadas por
pLn,k,α(x) = 2
n(N)pL(2nx1 + k1,. 2...........................................................
nxN + kN ), 1 ≤ L ≤ M,
y dejar que
Sα = {p
n,k,α : n, k enteros, 1 ≤ L ≤ M}.
Nuestro primer resultado muestra cómo se puede utilizar la cuadrícula dyadic para calcular la
norma en la letra a).
Teorema A. Dejar g ser una función localmente integrable cuadrado y α ≥ 0. Entonces,
g + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
g, p
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # Además,
D + Aα(g).
Además, también es cierto, y la prueba se da en la Proposición 2.1 ser-
bajo, que â â € â € â € â € â € â € â € â â € â € â â € â â â â â â â â â â â â â â â â â â â â â, D0. Sin embargo, en esta formulación más simple, el árbol
la estructura de los cubos en D se ha perdido.
DE DÍA A DÍA 3
La prueba de Teorema A se basa en una investigación a fondo de la predual de
â € ¢, a saber, el espacio Hardy H
p(RN) con 0 < p = (α + N)/N ≤ 1. En el
proceso que caracterizamos Hp en términos de subespacios más simples: H
, o Hp dyádico,
y H
, el espacio generado por los átomos especiales en Sα. Específicamente, nosotros
Teorema B. Let 0 < p ≤ 1, y α = N(1/p− 1). Entonces tenemos
Hp = H
donde la suma se entiende en el sentido de espacios Banach cuasinormed.
El documento se organiza de la siguiente manera. En la Sección 1 mostramos que el individuo
Los átomos de Hp pueden ser escritos como una superposición de átomos dyádicos y especiales;
este hecho puede ser considerado como una extensión del resultado unidimensional de
Fridli relativo a los átomos L- 1, véase [5] y [1]. Entonces, probamos Teorema B.
En la sección 2 se discute cómo pasar de â € € TM, D, y â €, D0, a la Lipschitz
espacio.
1. Caracterización de los espacios Hardy Hp
Adoptamos la definición atómica de los espacios Hardy Hp, 0 < p ≤ 1, ver
[6] y [10]. Recuerde que una función de soporte compacto a con [N(1/p− 1)]
momentos de desaparición es un L2 p -átomo con el cubo definitorio Q si supp(a) Q, y
Q1/p
a(x) 2dx
≤ 1.
El espacio Hardy Hp(RN) = Hp consiste en las distribuciones f que pueden ser
escrito como f =
♥jaj, donde los aj’s son H
p átomos,
j
p < فارسى, y la
convergencia es en el sentido de distribuciones, así como en Hp. Además,
# FHp # # Inf #
j
donde el infimum es tomado sobre todas las posibles descomposiciones atómicas de f.
última expresión se ha llamado tradicionalmente la norma atómica Hp de f.
Las colecciones de átomos con propiedades especiales se pueden utilizar para obtener un mejor
comprensión de los espacios Hardy. Formalmente, dejar A ser un subconjunto no vacío
de L2 p -átomos en la bola de unidad de Hp. El espacio atómico H
Ampliado por A
consiste en los ♥ en Hp de la forma
*Jaj, aj* *A*
j
p < فارسى.
Se ve fácilmente que, dotado de la norma atómica
Hp
= inf
j
• • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • •
A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A, A,
se convierte en un espacio cuasinombrado completo. Claramente, H
Hp, y, para
f • H
, + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
4 WAEL ABU-SHAMMALA Y ALBERTO TORCHINSKY
Dos familias son de especial interés para nosotros. Cuando A es la colección
de todos los L2 p -átomos cuyo cubo definidor es dyádico, el espacio resultante es H
o Hp dyádico. Ahora, aunque "f"Hp ≤ "f"Hp
, las dos cuasinormas no son
equivalente en H
. De hecho, para p = 1 y N = 1, las funciones
fn(x) = 2
n[χ[1−2−n,1](x) − χ[1,1+2−n](x)],
satisfacer «fn»H1 = 1, pero «fn»H1
n tiende a la infinidad con n.
A continuación, cuando Sα es la familia de splines polinomios a trozos construidos
arriba con α = N(1/p − 1), en analogía con los resultados unidimensionales en
[4] y [1], H
se conoce como el espacio generado por átomos especiales.
Ahora estamos listos para describir los átomos de Hp como una superposición de dyádico y
átomos especiales.
Lemma 1.1. Dejar ser un L2 p -átomo con el cubo definitorio Q, 0 < p ≤ 1,
y α = N(1/p − 1). A continuación, una se puede escribir como una combinación lineal de 2N
átomos dyádicos ai, cada uno apoyado en uno de los subcubes dyádicos de los más pequeños
cubo especial Qn,k que contiene Q, y un átomo especial b en Sα. Más precisamente,
a(x) =
i=1 di ai(x) +
L=1 cL p
−n,−k,α(x), con di, cL ≤ c.
Prueba. Supongamos primero que el cubo definitorio de a es Q0, y dejar Q1,. .., Q2N
denotan los subcubos dyádicos de Q0. Además, {e)
i,. .., e
i } denotar un
base ortonormal del Ai subespacial de L
2-Qi) compuesto de polinomios en
P[α], 1 ≤ i ≤ 2
N. Pon
αi(x) = a(x)χQi (x)−
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
j(x), 1 ≤ i ≤ 2
y observar que i, e
i = 0 para 1 ≤ j ≤ M. Por lo tanto, αi ha desaparecido [α]
momentos, se apoya en Qi, y
2 ≤ 1 × 2 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
≤ (M + 1) ≤ (M + 1) ≤ (Qi+2) ≤ (M + 1) ≤ (M + 1) ≤ (M + 1) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M) ≤ (M)
ai(x) =
2N(1/2−1/p)
M + 1
αi(x), 1 ≤ i ≤ N,
es un átomo L2 p - dyádico. Por último, poner
b(x) = a(x) −
M + 1
2N(1/2−1/p)
ai(x).
DE DÍA A 5
Claramente b tiene [α] momentos de desaparición, se apoya en Q0, coincide con un
polinomio en P[α] en cada subcubo diádico de Q0, y
â € TM bâ € 22 ≤
aχQi, e
2 ≤ M â € a € 22.
Por lo tanto, b A, y, en consecuencia, b (x) =
L=1 cL p
L(x), donde
cL = b, p
L ≤ c, 1 ≤ L ≤ M.
En el caso general, que Q sea el cubo definitorio de a, la longitud lateral Q = l, y
dejar n y k = (k1,. .., kN ) ser elegido de modo que 2
n−1 ≤ l < 2n, y
Q â € [(k1 − 1)2
n, (k1 + 1)2
n]× ·· · × [(kN − 1)2
n, (kN + 1)2
Entonces, (1/2)N ≤ Q/2nN < 1.
Ahora, dado x â € ¢ Q0, dejar un
′ ser la traducción y la dilatación de un dado por
a′(x) = 2nN/pa(2nx1 − k1,. 2...........................................................
nxN − kN ).
Claramente, [α] los momentos de un ′ desaparecen, y
2 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1
nN/p 2−nN/2+a+2 ≤ c Q
1/pQ1/2â > 2 ≤ c.
Por lo tanto, a′ es un múltiplo de un átomo con el cubo que define Q0. Por la primera parte de
la prueba,
a′(x) =
i(x) +
L(x), x(+) Q0.
El soporte de cada a′i está contenido en uno de los subcubos dyadic de Q0, y,
En consecuencia, hay una k tal que
ai(x) = 2
−nN/pa′i(2
− nx1 − k1,. 2...........................................................
− nxN − kN )
ai es una L
2p -átomo apoyado en uno de los subcubos dyadic de Q. Del mismo modo
para los pL. Por lo tanto,
a(x) =
di ai(x) +
− n,− k,N(1/p−1)(x),
y hemos terminado. - No, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no.
Teorema B sigue fácilmente de Lemma 1.1. Claramente, H
Hp.
Por el contrario, dejar f =
j j aj ser en H
p. Por Lemma 1.1 cada aj se puede escribir
como una suma de átomos dyádicos y especiales, y, al distribuir la suma, podemos
escribir f = fd + fs, con fd en H
, fs en H
, y
â € € TM TM fdâ € TM Hp
, â € ¢fsâ € ¢Hp
j
Tomando el infimum sobre las descomposiciones de f obtenemos â € â € â € TM TM Hp
c + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
p H
. Esto completa la prueba.
6 WAEL ABU-SHAMMALA Y ALBERTO TORCHINSKY
El significado de esta descomposición es el siguiente. Los cubos en D son con-
contenido en uno de los cuadrantes 2N no superpuestos de RN. Para permitir la
información transportada por un cubo dyádico para ser transmitida a un dyádico adyacente
cubo, deben estar conectados. El pLn,k,α canal de información a través de anuncios
cubos dyádicos jacent que de otro modo permanecerían desconectados. El lector
no tendrá dificultad alguna para demostrar la versión cuantitativa de esta observación:
Que T sea una asignación lineal definida en Hp, 0 < p ≤ 1, que asume valores en
un espacio de Banach cuasinombrado X. Entonces, T es continua si, y sólo si, la
restricciones de T a H
y H
son continuas.
2. Caracterizaciones de
Teorema A describe cómo pasar de â € ¢, D a â € TM, y lo probamos a continuación.
Desde (Hp)* = y (H)
)* =,D, del Teorema B se sigue fácilmente que
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
)*, por lo que sólo queda por demostrar que (H
)* se caracteriza por
por la condición Aα(g) < فارسى.
Primera nota que si g es una función localmente integrable cuadrado con Aα(g) <
y f =
j,L cj,L p
nj,kj,α
, desde 0 < p ≤ 1,
g, f ≤
cj,L g, p
nj,kj,α
≤ Aα(g)
cj,L
y, en consecuencia, tomar el ínfimo sobre todas las descomposiciones atómicas de f en
, obtenemos g # (H
)* (Hp)* (Hp)******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
)* ≤ Aα(g).
Para probar lo contrario procedemos como en [3]. Que Qn = [−2
N, 2n]N. Comenzamos
observando que las funciones f en L2(Qn) que han desaparecido momentos hasta
orden [α] y coinciden con polinomios de grado [α] en los subcubos dyadic
de Qn pertenecen a H
â € â € TM € TM TM TM TM Hp
≤ Qn
1/p-1/2°f+2.
Given l ∈ (Hp)*, for a fixed n consider the restriction of l to the space of L2 functions f with [α] vanishing moments that are supported in Qn. Since
|l(f)| ≤ ‖l‖ ‖f‖Hp ≤ c ‖l‖ |Qn|^{1/p−1/2} ‖f‖2,
this restriction is continuous with respect to the norm in L2 and, consequently, can be extended to a continuous linear functional on L2 and represented as
l(f) = ∫ f(x) gn(x) dx,
where gn ∈ L2(Qn) and ‖gn‖2 ≤ c ‖l‖ |Qn|^{1/p−1/2}. Clearly, gn is
uniquely determined on Qn up to a polynomial pn in P[α]. Therefore,
gn(x) − pn(x) = gm(x) − pm(x), a.e. x ∈ Qmin(n,m).
Consequently, if
g(x) = gn(x) − pn(x), x ∈ Qn,
then g(x) is well defined a.e. and, if f ∈ L2 has [α] vanishing moments and is supported in Qn, we have
l(f) = ∫ f(x) gn(x) dx = ∫ f(x) [gn(x) − pn(x)] dx = ∫ f(x) g(x) dx.
Moreover, since each 2^{−nN/p} pL(2^n x − k) is an L2 p-atom, 1 ≤ L ≤ M, it readily follows that
Aα(g) = sup_{1≤L≤M} sup_{n,k} |⟨g, 2^{−nN/p} pL(2^n x − k)⟩| ≤ c sup_{1≤L≤M} ‖pL‖Hp ‖l‖,
and, consequently, Aα(g) ≤ c ‖l‖ < ∞, and (Hp)* is the desired space. □
The reader will have no difficulty in verifying that this result implies the following: Let T be a bounded linear operator from a quasinormed space X into the locally square integrable functions. Then T is bounded from X into (Hp)* if, and only if, Aα(Tx) ≤ c ‖x‖X for every x ∈ X.
The process of averaging translations of dyadic BMO functions leads to BMO, and it is an important tool for obtaining results in BMO once they are known to hold in its dyadic counterpart BMOd; see [7]. It is also known that BMO can be obtained as the intersection of BMOd and one of its shifted counterparts; see [8]. These results motivate our next proposition, which essentially says that g ∈ Λα if, and only if, g ∈ Λα,d and g is in the Lipschitz class obtained from the shifted dyadic grid. Note that the shifts involved in this class are in all directions parallel to the coordinate axes and depend on the side length of the cube.
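A compact way to see why two grids suffice is the classical "one-third trick": any interval is contained in a single cell of the dyadic grid or of the 1/3-shifted dyadic grid, of comparable (at most 6×) side length. The sketch below is our own illustration of this standard fact behind [7, 8], not taken from the paper, and checks it on random intervals:

```python
import math
import random

# One-third trick: every interval I of length l is contained in one cell of
# the dyadic grid or of the 1/3-shifted dyadic grid, of side at most 6*l.
random.seed(0)

def contained(a, l, h, shift=0.0):
    # is (a, a + l) inside one half-open cell of the grid {j*h + shift}?
    return math.floor((a - shift) / h) == math.floor((a + l - shift) / h)

for _ in range(10000):
    a = random.random()
    l = random.uniform(1e-6, 0.1)
    h = 1.0
    while h > 6 * l:          # largest dyadic side h with 3*l < h <= 6*l
        h /= 2
    # boundaries of the two grids at scale h are at least h/3 > l apart,
    # so I cannot straddle boundaries of both grids simultaneously
    assert contained(a, l, h) or contained(a, l, h, shift=1.0 / 3.0)
```

The assertion exploits that 2^n/3 is never within 1/3 of an integer, so the endpoint sets of the two families at scale h stay at distance at least h/3.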
Proposition 2.1. The norms ‖g‖ and ‖g‖D0 are equivalent.
Proof. It is obvious that ‖g‖D0 ≤ c ‖g‖. To show the other inequality we invoke Theorem A. Since D ⊂ D0, it suffices to estimate Aα(g) or, equivalently, ⟨g, p⟩ for p in Sα, α = N(1/p − 1). So, pick p = pLn,k,α in Sα. The defining cube Q of pLn,k,α is in D0, and, since pLn,k,α has [α] vanishing moments,
⟨pLn,k,α, pQ(g)⟩ = 0. Therefore,
|⟨g, pLn,k,α⟩| = |⟨g − pQ(g), pLn,k,α⟩|
≤ ‖pLn,k,α‖2 ‖g − pQ(g)‖L2(Q)
≤ c |Q|^{α/N} |Q|^{1/2} ‖pLn,k,α‖2 ‖g‖D0.
Now, a simple change of variables gives |Q|^{α/N} |Q|^{1/2} ‖pLn,k,α‖2 ≤ c and, consequently, also Aα(g) ≤ c ‖g‖D0. □
References
[1] W. Abu-Shammala, J.-L. Shiu, and A. Torchinsky, Characterizations of the Hardy space H1 and BMO, preprint.
[2] H.-Q. Bui and R. S. Laugesen, Approximation and spanning in the Hardy space, by affine systems, Constr. Approx., to appear.
[3] A. P. Calderón and A. Torchinsky, Parabolic maximal functions associated with a distribution, II, Advances in Math. 24 (1977), 101–171.
[4] G. S. de Souza, Spaces formed by special atoms, I, Rocky Mountain J. Math. 14 (1984), no. 2, 423–431.
[5] S. Fridli, Transition from the dyadic to the real nonperiodic Hardy space, Acta Math. Acad. Paedagog. Nyházi. (N.S.) 16 (2000), 1–8 (electronic).
[6] J. García-Cuerva and J. L. Rubio de Francia, Weighted norm inequalities and related topics, Notas de Matemática 116, North-Holland, Amsterdam, 1985.
[7] J. Garnett and P. Jones, BMO from dyadic BMO, Pacific J. Math. 99 (1982), no. 2, 351–371.
[8] T. Mei, BMO is the intersection of two translates of dyadic BMO, C. R. Math. Acad. Sci. Paris 336 (2003), no. 12, 1003–1006.
[9] T. M. Le and L. A. Vese, Image decomposition using total variation and div(BMO), Multiscale Model. Simul. 4 (2005), no. 2, 390–423.
[10] A. Torchinsky, Real-variable methods in harmonic analysis, Dover Publications, Inc., Mineola, NY, 2004.
Department of Mathematics, Indiana University, Bloomington, IN 47405
E-mail address: wabusham@indiana.edu
Department of Mathematics, Indiana University, Bloomington, IN 47405
E-mail address: torchins@indiana.edu
1. Characterization of the Hardy spaces Hp
2. Characterizations of
References
|
704.001
| Polymer Quantum Mechanics and its Continuum Limit
| A rather non-standard quantum representation of the canonical commutation
relations of quantum mechanics systems, known as the polymer representation has
gained some attention in recent years, due to its possible relation with Planck
scale physics. In particular, this approach has been followed in a symmetric
sector of loop quantum gravity known as loop quantum cosmology. Here we explore
different aspects of the relation between the ordinary Schroedinger theory and
the polymer description. The paper has two parts. In the first one, we derive
the polymer quantum mechanics starting from the ordinary Schroedinger theory
and show that the polymer description arises as an appropriate limit. In the
second part we consider the continuum limit of this theory, namely, the reverse
process in which one starts from the discrete theory and tries to recover back
the ordinary Schroedinger quantum mechanics. We consider several examples of
interest, including the harmonic oscillator, the free particle and a simple
cosmological model.
| Polymer Quantum Mechanics and its Continuum Limit
Alejandro Corichi,1, 2, 3, ∗ Tatjana Vukašinac,4, † and José A. Zapata1, ‡
Instituto de Matemáticas, Unidad Morelia, Universidad Nacional Autónoma de México,
UNAM-Campus Morelia, A. Postal 61-3, Morelia, Michoacán 58090, Mexico
Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México,
A. Postal 70-543, México D.F. 04510, Mexico
Institute for Gravitational Physics and Geometry, Physics Department,
Pennsylvania State University, University Park PA 16802, USA
Facultad de Ingenieŕıa Civil, Universidad Michoacana de San Nicolas de Hidalgo,
Morelia, Michoacán 58000, Mexico
A rather non-standard quantum representation of the canonical commutation relations of quan-
tum mechanics systems, known as the polymer representation has gained some attention in recent
years, due to its possible relation with Planck scale physics. In particular, this approach has been
followed in a symmetric sector of loop quantum gravity known as loop quantum cosmology. Here we
explore different aspects of the relation between the ordinary Schrödinger theory and the polymer
description. The paper has two parts. In the first one, we derive the polymer quantum mechanics
starting from the ordinary Schrödinger theory and show that the polymer description arises as an
appropriate limit. In the second part we consider the continuum limit of this theory, namely, the
reverse process in which one starts from the discrete theory and tries to recover back the ordinary
Schrödinger quantum mechanics. We consider several examples of interest, including the harmonic
oscillator, the free particle and a simple cosmological model.
PACS numbers: 04.60.Pp, 04.60.Ds, 04.60.Nc, 11.10.Gh
I. INTRODUCTION
The so-called polymer quantum mechanics, a non-
regular and somewhat ‘exotic’ representation of the
canonical commutation relations (CCR) [1], has been
used to explore both mathematical and physical issues in
background independent theories such as quantum grav-
ity [2, 3]. A notable example of this type of quantization,
when applied to minisuperspace models has given way to
what is known as loop quantum cosmology [4, 5]. As in
any toy model situation, one hopes to learn about the
subtle technical and conceptual issues that are present
in full quantum gravity by means of simple, finite di-
mensional examples. This formalism is not an exception
in this regard. Apart from this motivation coming from
physics at the Planck scale, one can independently ask
for the relation between the standard continuous repre-
sentations and their polymer cousins at the level of math-
ematical physics. A deeper understanding of this relation
becomes important on its own.
The polymer quantization is made of several steps.
The first one is to build a representation of the
Heisenberg-Weyl algebra on a Kinematical Hilbert space
that is “background independent”, and that is sometimes
referred to as the polymeric Hilbert space Hpoly. The
second and most important part, the implementation of
dynamics, deals with the definition of a Hamiltonian (or
Hamiltonian constraint) on this space. In the examples
∗Electronic address: corichi@matmor.unam.mx
†Electronic address: tatjana@shi.matmor.unam.mx
‡Electronic address: zapata@matmor.unam.mx
studied so far, the first part is fairly well understood,
yielding the kinematical Hilbert space Hpoly that is, how-
ever, non-separable. For the second step, a natural im-
plementation of the dynamics has proved to be a bit more
difficult, given that a direct definition of the Hamiltonian
Ĥ of, say, a particle on a potential on the space Hpoly is
not possible since one of the main features of this repre-
sentation is that the operators q̂ and p̂ cannot be both
simultaneously defined (nor their analogues in theories
involving more elaborate variables). Thus, any operator
that involves (powers of) the undefined variable has to
be regulated by a well defined operator which normally
involves introducing some extra structure on the configu-
ration (or momentum) space, namely a lattice. However,
this new structure that plays the role of a regulator can
not be removed when working in Hpoly and one is left
with the ambiguity that is present in any regularization.
The freedom in choosing it can be sometimes associated
with a length scale (the lattice spacing). For ordinary
quantum systems such as a simple harmonic oscillator,
that has been studied in detail from the polymer view-
point, it has been argued that if this length scale is taken
to be ‘sufficiently small’, one can arbitrarily approximate
standard Schrödinger quantum mechanics [2, 3]. In the
case of loop quantum cosmology, the minimum area gap
A0 of the full quantum gravity theory imposes such a
scale, that is then taken to be fundamental [4].
A natural question is to ask what happens when we
change this scale and go to even smaller ‘distances’, that
is, when we refine the lattice on which the dynamics of
the theory is defined. Can we define consistency con-
ditions between these scales? Or even better, can we
take the limit and find thus a continuum limit? As it
http://arxiv.org/abs/0704.0007v2
has been shown recently in detail, the answer to both
questions is in the affirmative [6]. There, an appropriate
notion of scale was defined in such a way that one could
define refinements of the theory and pose in a precise
fashion the question of the continuum limit of the theory.
These results could also be seen as handing a procedure
to remove the regulator when working on the appropri-
ate space. The purpose of this paper is to further explore
different aspects of the relation between the continuum
and the polymer representation. In particular in the first
part we put forward a novel way of deriving the polymer
representation from the ordinary Schrödinger represen-
tation as an appropriate limit. In Sec. II we derive two
versions of the polymer representation as different lim-
its of the Schrödinger theory. In Sec. III we show that
these two versions can be seen as different polarizations
of the ‘abstract’ polymer representation. These results,
to the best of our knowledge, are new and have not been
reported elsewhere. In Sec. IV we pose the problem of
implementing the dynamics on the polymer representa-
tion. In Sec. V we motivate further the question of the
continuum limit (i.e. the proper removal of the regulator)
and recall the basic constructions of [6]. Several exam-
ples are considered in Sec. VI. In particular a simple
harmonic oscillator, the polymer free particle and a sim-
ple quantum cosmology model are considered. The free
particle and the cosmological model represent a general-
ization of the results obtained in [6] where only systems
with a discrete and non-degenerate spectrum were considered. We end the paper with a discussion in Sec. VII.
In order to make the paper self-contained, we will keep
the level of rigor in the presentation to that found in the
standard theoretical physics literature.
II. QUANTIZATION AND POLYMER
REPRESENTATION
In this section we derive the so called polymer repre-
sentation of quantum mechanics starting from a specific
reformulation of the ordinary Schrödinger representation.
Our starting point will be the simplest of all possible
phase spaces, namely Γ = R2 corresponding to a particle
living on the real line R. Let us choose coordinates (q, p)
thereon. As a first step we shall consider the quantization
of this system that leads to the standard quantum theory
in the Schrödinger description. A convenient route is to
introduce the necessary structure to define the Fock rep-
resentation of such system. From this perspective, the
passage to the polymeric case becomes clearest. Roughly
speaking by a quantization one means a passage from the
classical algebraic bracket, the Poisson bracket,
{q, p} = 1 (1)
to a quantum bracket given by the commutator of the
corresponding operators,
[ q̂, p̂] = i~ 1̂ (2)
These relations, known as the canonical commutation relations (CCR), become the most common cornerstone of the (kinematics of the) quantum theory; they should be
satisfied by the quantum system, when represented on a
Hilbert space H.
There are alternative points of departure for quantum
kinematics. Here we consider the algebra generated by
the exponentiated versions of q̂ and p̂ that are denoted
U(α) = ei(α q̂)/~ ; V (β) = ei(β p̂)/~
where α and β have dimensions of momentum and length,
respectively. The CCR now become
U(α) · V (β) = e(−iα β)/~V (β) · U(α) (3)
and the rest of the product is
U(α1)·U(α2) = U(α1+α2) ; V (β1)·V (β2) = V (β1+β2)
The Weyl algebra W is generated by taking finite linear combinations of the generators U(αi) and V(βi), where the product (3) is extended by linearity:
W = { Σi (Ai U(αi) + Bi V(βi)) }
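Relation (3) can be checked in a discretized q-representation, where U(α) acts by multiplication and V(β) by translation. A minimal numpy sketch (our conventions: ℏ = 1, V shifts q → q + β, and β must be a multiple of the grid spacing):

```python
import numpy as np

# Discretized check of the Weyl relation (3),
#   U(a) V(b) = exp(-i a b / hbar) V(b) U(a),
# with U acting by multiplication and V by grid translation (hbar = 1).
hbar = 1.0
q = np.linspace(-10.0, 10.0, 4001)
dq = q[1] - q[0]
f = np.exp(-q**2)                         # arbitrary test wavefunction

def U(a, g):
    return np.exp(1j * a * q / hbar) * g  # (U(a) g)(q) = e^{i a q/hbar} g(q)

def V(m, g):
    return np.roll(g, -m)                 # (V(m dq) g)(q) = g(q + m dq)

a, m = 0.8, 37
b = m * dq
lhs = U(a, V(m, f))
rhs = np.exp(-1j * a * b / hbar) * V(m, U(a, f))
assert np.allclose(lhs, rhs)              # the two orderings differ by a phase
```

The two orderings agree only after multiplying by the constant phase, which is the content of (3).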
From this perspective, quantization means finding a unitary representation of the Weyl algebra W on a
Hilbert space H′ (that could be different from the ordi-
nary Schrödinger representation). At first it might look
weird to attempt this approach given that we know how
to quantize such a simple system; what do we need such
a complicated object as W for? It is infinite dimensional,
whereas the set S = {1̂, q̂, p̂}, the starting point of the
ordinary Dirac quantization, is rather simple. It is in
the quantization of field systems that the advantages of
the Weyl approach can be fully appreciated, but it is
also useful for introducing the polymer quantization and
comparing it to the standard quantization. This is the
strategy that we follow.
A question that one can ask is whether there is any
freedom in quantizing the system to obtain the ordinary
Schrödinger representation. On a first sight it might seem
that there is none given the Stone-Von Neumann unique-
ness theorem. Let us review what would be the argument
for the standard construction. Let us ask that the repre-
sentation we want to build up is of the Schrödinger type,
namely, where states are wave functions of configuration
space ψ(q). There are two ingredients to the construction
of the representation, namely the specification of how the
basic operators (q̂, p̂) will act, and the nature of the space
of functions that ψ belongs to, that is normally fixed by
the choice of inner product on H, or measure µ on R.
The standard choice is to select the Hilbert space to be,
H = L2(R, dq)
the space of square-integrable functions with respect to
the Lebesgue measure dq (invariant under constant trans-
lations) on R. The operators are then represented as,
q̂ · ψ(q) = (q ψ)(q) and p̂ · ψ(q) = −i ℏ ∂q ψ(q) (4)
Is it possible to find other representations? In order to
appreciate this freedom we go to the Weyl algebra and
build the quantum theory thereon. The representation
of the Weyl algebra that can be called of the ‘Fock type’
involves the definition of an extra structure on the phase
space Γ: a complex structure J . That is, a linear map-
ping from Γ to itself such that J2 = −1. In 2 dimen-
sions, all the freedom in the choice of J is contained in
the choice of a parameter d with dimensions of length. It
is also convenient to define: k = p/~ that has dimensions
of 1/L. We have then,
Jd : (q, k) 7→ (−d2 k, q/d2)
This object together with the symplectic structure, Ω((q, p); (q′, p′)) = q p′ − p q′, defines an inner product on Γ by the formula gd(· ; ·) = Ω(· ; Jd ·), such that
gd((q, p); (q′, p′)) = (1/d²) q q′ + (d²/ℏ²) p p′,
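As a quick consistency check, the complex structure Jd and the metric gd can be realized as 2×2 matrices in the dimensionless coordinates (q, k), k = p/ℏ; the sketch below (the d values are arbitrary test choices of ours) verifies Jd² = −1 and that gd = Ω(·; Jd·) is positive definite:

```python
import numpy as np

# Consistency check of J_d and g_d in the dimensionless coordinates (q, k),
# k = p/hbar; the values of d are arbitrary test choices.
for d in (0.5, 1.0, 2.3):
    J = np.array([[0.0, -d**2],
                  [1.0 / d**2, 0.0]])      # J_d : (q, k) -> (-d^2 k, q/d^2)
    S = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])            # Omega(v, w) = v^T S w = q k' - k q'

    assert np.allclose(J @ J, -np.eye(2))  # complex structure: J^2 = -1

    G = S @ J                              # g_d(v, w) = Omega(v, J_d w) = v^T G w
    assert np.allclose(G, np.diag([1.0 / d**2, d**2]))  # q q'/d^2 + d^2 k k'
    assert np.all(np.linalg.eigvalsh(G) > 0)            # positive definite
```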
which is dimension-less and positive definite. Note that
with these quantities one can define complex coordinates
(ζ, ζ̄) as usual:
ζ = q/d + i (d/ℏ) p ; ζ̄ = q/d − i (d/ℏ) p
from which one can build the standard Fock representa-
tion. Thus, one can alternatively view the introduction
of the length parameter d as the quantity needed to de-
fine (dimensionless) complex coordinates on the phase
space. But what is the relevance of this object (J or
d)? The definition of complex coordinates is useful for
the construction of the Fock space since from them one
can define, in a natural way, creation and annihilation
operators. But for the Schrödinger representation we are
interested here, it is a bit more subtle. The subtlety is
that within this approach one uses the algebraic prop-
erties of W to construct the Hilbert space via what is
known as the Gel’fand-Naimark-Segal (GNS) construc-
tion. This implies that the measure in the Schrödinger
representation becomes non trivial and thus the momen-
tum operator acquires an extra term in order to render
the operator self-adjoint. The representation of the Weyl
algebra is then, when acting on functions φ(q) [7]:
Û(α) · φ(q) := (e^{iα q/ℏ} φ)(q)
V̂(β) · φ(q) := e^{(β/d²)(q − β/2)} φ(q − β)
The Hilbert space structure is introduced by the defini-
tion of an algebraic state (a positive linear functional)
ωd : W → C, that must coincide with the expectation
value in the Hilbert space taken on a special state referred to as the vacuum: ωd(a) = ⟨â⟩vac, for all a ∈ W.
In our case this specification of J induces such a unique
state ωd that yields,
⟨Û(α)⟩vac = e^{−d²α²/(4ℏ²)} (5)
⟨V̂(β)⟩vac = e^{−β²/(4d²)} (6)
Note that the exponents in the vacuum expectation values correspond to the metric constructed out of J:
d²α²/ℏ² = gd((0, α); (0, α)) and β²/d² = gd((β, 0); (β, 0)).
Wave functions belong to the space L2(R, dµd), where
the measure that dictates the inner product in this rep-
resentation is given by,
dµd = (1/(√π d)) e^{−q²/d²} dq
In this representation, the vacuum is given by the iden-
tity function φ0(q) = 1 that is, just as any plane wave,
normalized. Note that for each value of d > 0, the rep-
resentation is well defined and continuous in α and β.
Note also that there is an equivalence between the q-
representation defined by d and the k-representation de-
fined by 1/d.
How can we recover then the standard representation
in which the measure is given by the Lebesgue measure
and the operators are represented as in (4)? It is easy to
see that there is an isometric isomorphism K that maps
the d-representation in Hd to the standard Schrödinger
representation in Hschr by:
ψ(q) = K · φ(q) = (e^{−q²/(2d²)}/(d^{1/2} π^{1/4})) φ(q) ∈ Hschr = L²(R, dq)
Thus we see that all d-representations are unitarily equiv-
alent. This was to be expected in view of the Stone-Von
Neumann uniqueness result. Note also that the vacuum
now becomes
ψ0(q) = (1/(d^{1/2} π^{1/4})) e^{−q²/(2d²)},
so even when there is no information about the param-
eter d in the representation itself, it is contained in the
vacuum state. This procedure for constructing the GNS-
Schrödinger representation for quantum mechanics has
also been generalized to scalar fields on arbitrary curved
space in [8]. Note, however that so far the treatment has
all been kinematical, without any knowledge of a Hamil-
tonian. For the Simple Harmonic Oscillator of mass m
and frequency ω, there is a natural choice compatible with the dynamics, given by d = (ℏ/mω)^{1/2}, in which some
calculations simplify (for instance for coherent states),
but in principle one can use any value of d.
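The map K is easy to probe numerically: it rescales a state by the square root of the Gaussian density, so norms in L²(R, dµd) and L²(R, dq) agree, and the vacuum ψ0 is normalized. A sketch with arbitrary test choices of ours for d and the state φ:

```python
import numpy as np

# K rescales by the square root of the Gaussian density, so it maps
# L^2(R, dmu_d) isometrically into L^2(R, dq).
d = 1.7
q = np.linspace(-40.0, 40.0, 200001)
dq = q[1] - q[0]

dmu = np.exp(-q**2 / d**2) / (d * np.sqrt(np.pi))    # measure density
phi = (1.0 + q**2) * np.exp(0.3j * q)                # test state in H_d

norm_d = ((np.abs(phi)**2 * dmu) * dq).sum()         # ||phi||^2 in L^2(dmu_d)

K_phi = np.exp(-q**2 / (2 * d**2)) / (np.sqrt(d) * np.pi**0.25) * phi
norm_schr = ((np.abs(K_phi)**2) * dq).sum()          # ||K phi||^2 in L^2(dq)
assert abs(norm_d - norm_schr) < 1e-8

# the vacuum psi_0 = K . 1 is normalized in L^2(R, dq)
psi0 = np.exp(-q**2 / (2 * d**2)) / (np.sqrt(d) * np.pi**0.25)
assert abs((psi0**2 * dq).sum() - 1.0) < 1e-8
```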
Our study will be simplified by focusing on the funda-
mental entities in the Hilbert Space Hd , namely those
states generated by acting with Û(α) on the vacuum
φ0(q) = 1. Let us denote those states by,
φα(q) = Û(α) · φ0(q) = ei
The inner product between two such states is given by
⟨φα, φλ⟩d = ∫ dµd e^{−iα q/ℏ} e^{iλ q/ℏ} = e^{−(λ−α)² d²/(4ℏ²)} (7)
Note incidentally that, contrary to some common belief,
the ‘plane waves’ in this GNS Hilbert space are indeed
normalizable.
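The normalizability claim and Eq. (7) can be verified by direct quadrature; in the sketch below ℏ = 1 and d, α, λ are arbitrary test values of ours:

```python
import numpy as np

# Direct quadrature of Eq. (7): <phi_al, phi_lam>_d with the Gaussian
# measure dmu_d equals exp(-(lam - al)^2 d^2 / (4 hbar^2)).
hbar, d, al, lam = 1.0, 1.3, 0.7, -0.4
q = np.linspace(-60.0, 60.0, 400001)
dq = q[1] - q[0]

dmu = np.exp(-q**2 / d**2) / (d * np.sqrt(np.pi))
integrand = dmu * np.exp(-1j * al * q / hbar) * np.exp(1j * lam * q / hbar)

numeric = (integrand * dq).sum()
closed = np.exp(-(lam - al)**2 * d**2 / (4 * hbar**2))
assert abs(numeric - closed) < 1e-9   # the 'plane waves' have finite overlap
```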
Let us now consider the polymer representation. For
that, it is important to note that there are two possible
limiting cases for the parameter d: i) The limit 1/d 7→ 0
and ii) The case d 7→ 0. In both cases, we have ex-
pressions that become ill defined in the representation or
measure, so one needs to be careful.
A. The 1/d 7→ 0 case.
The first observation is that from the expressions (5) and
(6) for the algebraic state ωd, we see that the limiting
cases are indeed well defined. In our case we get, ωA :=
lim1/d→0 ωd such that,
ωA(Û(α)) = δα,0 and ωA(V̂ (β)) = 1 (8)
From this, we can indeed construct the representation
by means of the GNS construction. In order to do that
and to show how this is obtained we shall consider several
expressions. One has to be careful though, since the limit
has to be taken with care. Let us consider the measure
on the representation that behaves as:
dµd = (1/(√π d)) e^{−q²/d²} dq 7→ (1/(√π d)) dq
so the measure tends to a homogeneous measure whose ‘normalization constant’ goes to zero, and the limit becomes somewhat subtle. We shall return to this point later.
Let us now see what happens to the inner product
between the fundamental entities in the Hilbert Space Hd
given by (7). It is immediate to see that in the 1/d 7→ 0
limit the inner product becomes,
〈φα, φλ〉d 7→ δα,λ (9)
with δα,λ being Kronecker’s delta. We see then that the
plane waves φα(q) become an orthonormal basis for the
new Hilbert space. Therefore, there is a delicate interplay
between the two terms that contribute to the measure in
order to maintain the normalizability of these functions;
we need the measure to become damped (by 1/d) in order
to avoid that the plane waves acquire an infinite norm
(as happens with the standard Lebesgue measure), but
on the other hand the measure, that for any finite value
of d is a Gaussian, becomes more and more spread.
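The closed form (7) makes this limit explicit; a short check (ℏ = 1, test values ours) shows the overlap collapsing to δα,λ as d grows:

```python
import numpy as np

# The overlap (7), exp(-(lam - al)^2 d^2 / (4 hbar^2)), as d grows (hbar = 1):
# for al != lam it is damped to zero while <phi_al, phi_al> = 1 for every d,
# reproducing the Kronecker delta (9).
def overlap(al, lam, d, hbar=1.0):
    return np.exp(-(lam - al)**2 * d**2 / (4 * hbar**2))

for d in (1.0, 10.0, 100.0):
    assert overlap(0.3, 0.3, d) == 1.0   # diagonal: always normalized
    print(d, overlap(0.0, 0.5, d))       # off-diagonal: suppressed

assert overlap(0.0, 0.5, 100.0) < 1e-100
```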
It is important to note that, in this limit, the operators
Û(α) become discontinuous with respect to α, given that
for any given α1 and α2 (different), its action on a given basis vector φλ(q) yields orthogonal vectors. Since the continuity of these operators is one of the hypotheses of
the Stone-Von Neumann theorem, the uniqueness result
does not apply here. The representation is inequivalent
to the standard one.
Let us now analyze the other operator, namely the
action of the operator V̂ (β) on the basis φα(q):
V̂(β) · φα(q) = e^{−β²/(2d²)} e^{−iαβ/ℏ} e^{(β/d² + iα/ℏ) q}
which in the limit 1/d 7→ 0 goes to,
V̂(β) · φα(q) 7→ e^{−iαβ/ℏ} φα(q)
that is continuous on β. Thus, in the limit, the operator
p̂ = −i~∂q is well defined. Also, note that in this limit
the operator p̂ has φα(q) as its eigenstate with eigenvalue
given by α:
p̂ · φα(q) 7→ αφα(q)
To summarize, the resulting theory obtained by taking
the limit 1/d 7→ 0 of the ordinary Schrödinger descrip-
tion, that we shall call the ‘polymer representation of
type A’, has the following features: the operators U(α)
are well defined but not continuous in α, so there is no
generator (no operator associated to q). The basis vec-
tors φα are orthonormal (for α taking values on a contin-
uous set) and are eigenvectors of the operator p̂ that is
well defined. The resulting Hilbert space HA will be the
(A-version of the) polymer representation. Let us now
consider the other case, namely, the limit when d 7→ 0.
B. The d 7→ 0 case
Let us now explore the other limiting case of the
Schrödinger/Fock representations labelled by the param-
eter d. Just as in the previous case, the limiting algebraic
state becomes, ωB := limd→0 ωd such that,
ωB(Û(α)) = 1 and ωB(V̂ (β)) = δβ,0 (10)
From this positive linear function, one can indeed con-
struct the representation using the GNS construction.
First let us note that the measure, even when the limit
has to be taken with due care, behaves as:
dµd = (1/(√π d)) e^{−q²/d²} dq 7→ δ(q) dq
That is, as Dirac’s delta distribution. It is immediate to
see that, in the d 7→ 0 limit, the inner product between
the fundamental states φα(q) becomes,
〈φα, φλ〉d 7→ 1 (11)
This in fact means that the vector ξ = φα − φλ belongs
to the Kernel of the limiting inner product, so one has to
mod out by these (and all) zero norm states in order to
get the Hilbert space.
Let us now analyze the other operator, namely the
action of the operator V̂ (β) on the vacuum φ0(q) = 1,
which for arbitrary d has the form,
φ̃β := V̂(β) · φ0(q) = e^{(β/d²)(q − β/2)}
The inner product between two such states is given by
⟨φ̃α, φ̃β⟩d = e^{−(α−β)²/(4d²)}
In the limit d → 0, 〈φ̃α, φ̃β〉d → δα,β. We can see then
that it is these functions that become the orthonormal,
‘discrete basis’ in the theory. However, the function φ̃β(q)
in this limit becomes ill defined. For example, for β > 0,
it grows unboundedly for q > β/2, is equal to one if
q = β/2 and zero otherwise. In order to overcome these
difficulties and make more transparent the resulting the-
ory, we shall consider the other form of the representation
in which the measure is incorporated into the states (and
the resulting Hilbert space is L2(R, dq)). Thus the new
state
ψβ(q) := K · (V̂(β) · φ0(q)) = (1/(d^{1/2} π^{1/4})) e^{−(q−β)²/(2d²)}
We can now take the limit and what we get is
lim_{d→0} ψβ(q) := δ^{1/2}(q, β)
where by δ1/2(q, β) we mean something like ‘the square
root of the Dirac distribution’. What we really mean is
an object that satisfies the following property:
δ1/2(q, β) · δ1/2(q, α) = δ(q, β) δβ,α
That is, if α = β then it is just the ordinary delta, otherwise it is zero. In a sense these objects can be regarded as half-densities that cannot be integrated by themselves,
but whose product can. We conclude then that the inner
product is,
⟨ψβ, ψα⟩ = ∫ dq ψβ(q) ψα(q) = ∫ dq δ(q, α) δβ,α = δβ,α (13)
which is just what we expected. Note that in this representation, the vacuum state becomes ψ0(q) := δ^{1/2}(q, 0),
namely, the half-delta with support in the origin. It is
important to note that we are arriving in a natural way to
states as half-densities, whose squares can be integrated
without the need of a nontrivial measure on the configu-
ration space. Diffeomorphism invariance arises then in a
natural but subtle manner.
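Numerically, the approach to half-deltas can be watched at finite d: the Gaussians ψβ stay normalized while their mutual overlaps collapse. A sketch (test values ours):

```python
import numpy as np

# At finite d the B-type states psi_beta are unit-norm Gaussians; as d -> 0
# they sharpen into 'half-deltas' and their mutual overlap
# exp(-(al - be)^2 / (4 d^2)) collapses to the Kronecker delta.
def psi(q, beta, d):
    return np.exp(-(q - beta)**2 / (2 * d**2)) / (np.sqrt(d) * np.pi**0.25)

q = np.linspace(-4.0, 4.0, 1000001)
dq = q[1] - q[0]

for d in (0.5, 0.1, 0.02):
    norm = (psi(q, 0.3, d)**2 * dq).sum()
    cross = (psi(q, 0.3, d) * psi(q, -0.2, d) * dq).sum()
    assert abs(norm - 1.0) < 1e-6                    # <psi_b, psi_b> = 1
    assert abs(cross - np.exp(-0.5**2 / (4 * d**2))) < 1e-6
```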
Note that as the end result we recover the Kronecker
delta inner product for the new fundamental states:
χβ(q) := δ
1/2(q, β).
Thus, in this new B-polymer representation, the Hilbert
space HB is the completion with respect to the inner
product (13) of the states generated by taking (finite)
linear combinations of basis elements of the form χβ :
Ψ(q) =
bi χβi(q) (14)
Let us now introduce an equivalent description of this
Hilbert space. Instead of having the basis elements be
half-deltas as elements of the Hilbert space where the
inner product is given by the ordinary Lebesgue measure
dq, we redefine both the basis and the measure. We
could consider, instead of a half-delta with support β, a
Kronecker delta or characteristic function with support
on β:
χ′β(q) := δq,β
These functions have a similar behavior with respect to the product as the half-deltas, namely χ′β(q) · χ′α(q) = δβ,α χ′β(q). The main difference is that neither χ′β nor their squares are integrable with respect to the Lebesgue measure (having zero norm). In order to fix that problem we
have to change the measure so that we recover the basic
inner product (13) with our new basis. The needed mea-
sure turns out to be the discrete counting measure on R.
Thus any state in the ‘half density basis’ can be written
(using the same expression) in terms of the ‘Kronecker
basis’. For more details and further motivation see the
next section.
Note that in this B-polymer representation, both Û
and V̂ have their roles interchanged with that of the
A-polymer representation: while U(α) is discontinuous
and thus q̂ is not defined in the A-representation, we
have that it is V (β) in the B-representation that has this
property. In this case, it is the operator p̂ that can not
be defined. We see then that given a physical system for
which the configuration space has a well defined physi-
cal meaning, within the possible representation in which
wave-functions are functions of the configuration variable
q, the A and B polymer representations are radically dif-
ferent and inequivalent.
Having said this, it is also true that the A and B
representations are equivalent in a different sense, by
means of the duality between q and p representations
and the d↔ 1/d duality: The A-polymer representation
in the “q-representation” is equivalent to the B-polymer
representation in the “p-representation”, and conversely.
When studying a problem, it is important to decide from
the beginning which polymer representation (if any) one
should be using (for instance in the q-polarization). This
has as a consequence an implication on which variable is
naturally “quantized” (even if continuous): p for A and q
for B. There could be, for instance, a physical criterion for
this choice. For example a fundamental symmetry could
suggest that one representation is more natural than an-
other one. This indeed has been recently noted by Chiou
in [10], where the Galileo group is investigated and where
it is shown that the B representation is better behaved.
In the other polarization, namely for wavefunctions
of p, the picture gets reversed: q is discrete for the A-
representation, while p is for the B-case. Let us end this
section by noting that the procedure of obtaining the
polymer quantization by means of an appropriate limit
of Fock-Schrödinger representations might prove useful in
more general settings in field theory or quantum gravity.
III. POLYMER QUANTUM MECHANICS:
KINEMATICS
In previous sections we have derived what we have
called the A and B polymer representations (in the q-
polarization) as limiting cases of ordinary Fock repre-
sentations. In this section, we shall describe, without
any reference to the Schrödinger representation, the ‘ab-
stract’ polymer representation and then make contact
with its two possible realizations, closely related to the A
and B cases studied before. What we will see is that one
of them (the A case) will correspond to the p-polarization
while the other one corresponds to the q−representation,
when a choice is made about the physical significance of
the variables.
We can start by defining abstract kets |µ〉 labelled by
a real number µ. These shall belong to the Hilbert space
Hpoly. From these states, we define generic ‘cylinder
states’ that correspond to a choice of a finite collection of
numbers µi ∈ R with i = 1, 2, . . . , N . Associated to this
choice, there are N vectors |µi〉, so we can take a linear
combination of them,
$|\psi\rangle = \sum_{i=1}^{N} a_i\,|\mu_i\rangle$ (15)
The polymer inner product between the fundamental kets
is given by,
〈ν|µ〉 = δν,µ (16)
That is, the kets are orthogonal to each other (when ν ≠
µ) and they are normalized (〈µ|µ〉 = 1). Immediately,
this implies that, given any two vectors $|\phi\rangle = \sum_{j=1}^{M} b_j\,|\nu_j\rangle$
and $|\psi\rangle = \sum_{i=1}^{N} a_i\,|\mu_i\rangle$, the inner product between them
is given by,
$\langle\phi|\psi\rangle = \sum_{i,j} \bar{b}_j\,a_i\,\langle\nu_j|\mu_i\rangle = \sum_{k} \bar{b}_k\,a_k$
where the sum is over k that labels the intersection points
between the set of labels {νj} and {µi}. The Hilbert
space Hpoly is the Cauchy completion of finite linear com-
binations of the form (15) with respect to the inner prod-
uct (16). Hpoly is non-separable. There are two basic
operators on this Hilbert space: the ‘label operator’ ε̂:
ε̂ |µ〉 := µ |µ〉
and the displacement operator ŝ (λ),
ŝ (λ) |µ〉 := |µ+ λ〉
The operator ε̂ is symmetric and the operators ŝ(λ)
define a one-parameter family of unitary operators on
Hpoly, whose adjoint is given by ŝ†(λ) = ŝ(−λ). This
action is, however, discontinuous with respect to λ, given
that |µ〉 and |µ + λ〉 are always orthogonal, no matter
how small λ is. Thus, there is no (Hermitian) operator
that could generate ŝ(λ) by exponentiation.
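The abstract structure just described is easy to sketch concretely. The following is a minimal illustration (our own toy device, not the paper's: finite linear combinations are stored as Python dicts mapping each label µ to its coefficient, and the names `inner`, `label_op`, `shift_op` are ours):

```python
def inner(phi, psi):
    """Polymer inner product: sum of conj(b)*a over shared labels only."""
    return sum(phi[mu].conjugate() * psi[mu] for mu in phi.keys() & psi.keys())

def label_op(psi):
    """The label operator: eps|mu> = mu|mu>."""
    return {mu: mu * a for mu, a in psi.items()}

def shift_op(lam, psi):
    """The displacement operator: s(lam)|mu> = |mu + lam>."""
    return {mu + lam: a for mu, a in psi.items()}

ket = {0.0: 1.0}                  # the ket |0>
shifted = shift_op(1e-9, ket)     # |1e-9>, an arbitrarily small displacement
print(inner(ket, ket))            # 1.0: kets are normalized
print(inner(ket, shifted))        # 0: orthogonal no matter how small lam is
```

The last line makes the discontinuity explicit: |0〉 and |10⁻⁹〉 have zero overlap however small the shift, which is why no Hermitian generator of ŝ(λ) can exist.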
So far we have given the abstract characterization of
the Hilbert space, but one would like to make contact
with concrete realizations as wave functions, or by iden-
tifying the abstract operators ε̂ and ŝ with physical op-
erators.
Suppose we have a system with a configuration space
with coordinate given by q, and p denotes its canonical
conjugate momenta. Suppose also that for physical rea-
sons we decide that the configuration coordinate q will
have some “discrete character” (for instance, if it is to
be identified with position, one could say that there is
an underlying discreteness in position at a small scale).
How can we implement such requirements by means of
the polymer representation? There are two possibilities,
depending on the choice of ‘polarizations’ for the wave-
functions, namely whether they will be functions of con-
figuration q or momenta p. Let us divide the discussion
into two parts.
A. Momentum polarization
In this polarization, states will be denoted by,
ψ(p) = 〈p|ψ〉
where
$\psi_\mu(p) = \langle p|\mu\rangle = e^{\,i\mu p/\hbar}$
How are then the operators ε̂ and ŝ represented? Note
that if we associate the multiplicative operator
$\hat{V}(\lambda)\cdot\psi_\mu(p) := e^{\,i\lambda p/\hbar}\,\psi_\mu(p) = e^{\,i(\mu+\lambda)p/\hbar} = \psi_{(\mu+\lambda)}(p)$
we see then that the operator V̂ (λ) corresponds precisely
to the shift operator ŝ (λ). Thus we can also conclude
that the operator p̂ does not exist. It is now easy to
identify the operator q̂ with:
$\hat{q}\cdot\psi_\mu(p) = -i\hbar\,\partial_p\,\psi_\mu(p) = \mu\,e^{\,i\mu p/\hbar} = \mu\,\psi_\mu(p)$
namely, with the abstract operator ε̂. The reason we
say that q̂ is discrete is because this operator has as its
eigenvalue the label µ of the elementary state ψµ(p), and
this label, even when it can take value in a continuum
of possible values, is to be understood as a discrete set,
given that the states are orthonormal for all values of
µ. Given that states are now functions of p, the inner
product (16) should be defined by a measure µ on the
space on which the wave-functions are defined. In order
to know what these two objects are, namely, the quan-
tum “configuration” space C and the measure thereon1,
we have to make use of the tools available to us from
the theory of C∗-algebras. If we consider the operators
V̂ (λ), together with their natural product and ∗-relation
given by V̂ ∗(λ) = V̂ (−λ), they have the structure of
an Abelian C∗-algebra (with unit) A. We know from
the representation theory of such objects that A is iso-
morphic to the space of continuous functions C0(∆) on a
compact space ∆, the spectrum of A. Any representation
of A on a Hilbert space as multiplication operator will be
on spaces of the form L2(∆, dµ). That is, our quantum
configuration space is the spectrum of the algebra, which
in our case corresponds to the Bohr compactification Rb
of the real line [11]. This space is a compact group and
there is a natural probability measure defined on it, the
Haar measure µH. Thus, our Hilbert space Hpoly will be
isomorphic to the space,
Hpoly,p = L2(Rb, dµH) (17)
In terms of ‘quasi periodic functions’ generated by ψµ(p),
the inner product takes the form
$\langle\psi_\mu|\psi_\lambda\rangle := \int d\mu_H\,\overline{\psi_\mu(p)}\,\psi_\lambda(p) := \lim_{L\to\infty}\frac{1}{2L}\int_{-L}^{L} dp\,\overline{\psi_\mu(p)}\,\psi_\lambda(p) = \delta_{\mu,\lambda}$ (18)
note that in the p-polarization, this characterization cor-
responds to the ‘A-version’ of the polymer representation
of Sec. II (where p and q are interchanged).
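The averaged inner product (18) can be checked numerically. The sketch below is our own (ħ = 1, and a finite Riemann mean over a symmetric grid stands in for the L → ∞ limit); it shows the overlap of two quasi-periodic basis functions tending to δµ,λ:

```python
import numpy as np

def averaged_overlap(mu, lam, L, hbar=1.0, n=200001):
    """Riemann approximation of (1/2L) * integral over [-L, L] of
    conj(psi_mu(p)) * psi_lam(p) = exp(i (lam - mu) p / hbar)."""
    p = np.linspace(-L, L, n)
    return np.exp(1j * (lam - mu) * p / hbar).mean().real

print(averaged_overlap(1.0, 1.0, L=100.0))   # 1.0: equal labels
print(averaged_overlap(1.0, 2.0, L=100.0))   # ~ sin(L)/L, tends to 0 as L grows
```

For equal labels the integrand is identically 1 and the average is exactly 1; for distinct labels the average decays like sin(L)/L, recovering the Kronecker (not Dirac) delta of (18).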
B. q-polarization
Let us now consider the other polarization in which wave
functions will depend on the configuration coordinate q:
ψ(q) = 〈q|ψ〉
The basic functions, that now will be called ψ̃µ(q), should
be, in a sense, the dual of the functions ψµ(p) of the
previous subsection. We can try to define them via a
‘Fourier transform’:
$\tilde{\psi}_\mu(q) := \langle q|\mu\rangle = \langle q|\int d\mu_H\,|p\rangle\langle p|\mu\rangle$
which is given by
$\tilde{\psi}_\mu(q) := \int d\mu_H\,\langle q|p\rangle\,\psi_\mu(p) = \int d\mu_H\, e^{-ipq/\hbar}\, e^{\,i\mu p/\hbar} = \delta_{q,\mu}$ (19)
1 here we use the standard terminology of ‘configuration space’ to
denote the domain of the wave function even when, in this case,
it corresponds to the physical momenta p.
That is, the basic objects in this representation are Kro-
necker deltas. This is precisely what we had found in
Sec. II for the B-type representation. How are now the
basic operators represented and what is the form of the
inner product? Regarding the operators, we expect that
they are represented in the opposite manner as in the
previous p-polarization case, but that they preserve the
same features: p̂ does not exist (the derivative of the Kro-
necker delta is ill defined), but its exponentiated version
V̂ (λ) does:
V̂ (λ) · ψ(q) = ψ(q + λ)
and the operator q̂ that now acts as multiplication has
as its eigenstates, the functions ψ̃ν(q) = δν,q:
q̂ · ψ̃µ(q) := µ ψ̃µ(q)
What is now the nature of the quantum configuration
space Q? And what is the measure dµq thereon that
defines the inner product? We should have:
〈ψ̃µ(q), ψ̃λ(q)〉 = δµ,λ
The answer comes from one of the characterizations of
the Bohr compactification: we know that it is, in a precise
sense, dual to the real line but when equipped with the
discrete topology Rd. Furthermore, the measure on Rd
will be the ‘counting measure’. In this way we recover the
same properties we had for the previous characterization
of the polymer Hilbert space. We can thus write:
Hpoly,x := L2(Rd, dµc) (20)
This completes a precise construction of the B-type poly-
mer representation sketched in the previous section. Note
that if we had chosen the opposite physical situation,
namely that q, the configuration observable, be the quan-
tity that does not have a corresponding operator, then
we would have had the opposite realization: In the q-
polarization we would have had the type-A polymer rep-
resentation and the type-B for the p-polarization. As
we shall see both scenarios have been considered in the
literature.
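The q-polarization just constructed can likewise be sketched in code, again with our own dict representation (each wave function stored by its support points); this illustrates the action of V̂(λ) and q̂ given above and is not the paper's notation:

```python
def delta(nu):
    """The basic function psi~_nu(q) = delta_{nu,q}, stored by its support point."""
    return {nu: 1.0}

def V_op(lam, psi):
    """V(lam) . psi(q) = psi(q + lam): support at nu moves to nu - lam."""
    return {q - lam: c for q, c in psi.items()}

def q_op(psi):
    """q . psi~_mu(q) = mu psi~_mu(q): multiplication by the support point."""
    return {q: q * c for q, c in psi.items()}

print(V_op(1.0, delta(3.0)))   # {2.0: 1.0}: the Kronecker delta is translated
print(q_op(delta(2.0)))        # {2.0: 2.0}: delta_{2,q} is an eigenfunction of q
```

The Kronecker deltas are shifted rigidly by V̂(λ) and are eigenfunctions of the multiplicative q̂, mirroring the A/B duality between the two polarizations.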
Up to now we have only focused our discussion on the
kinematical aspects of the quantization process. Let us
now consider in the following section the issue of dynam-
ics and recall the approach that had been adopted in the
literature, before the issue of the removal of the regulator
was reexamined in [6].
IV. POLYMER QUANTUM MECHANICS:
DYNAMICS
As we have seen the construction of the polymer
representation is rather natural and leads to a quan-
tum theory with different properties than the usual
Schrödinger counterpart such as its non-separability, the
non-existence of certain operators and the existence of
normalized eigen-vectors that yield a precise value for
one of the phase space coordinates. This has been done
without any regard for a Hamiltonian that endows the
system with a dynamics, energy and so on.
First let us consider the simplest case of a particle of
mass m in a potential V (q), in which the Hamiltonian H
takes the form,
$H = \frac{p^2}{2m} + V(q)$
Suppose furthermore that the potential is given by a non-
periodic function, such as a polynomial or a rational func-
tion. We can immediately see that a direct implementa-
tion of the Hamiltonian is out of our reach, for the simple
reason that, as we have seen, in the polymer representa-
tion we can either represent q or p, but not both! What
has been done so far in the literature? The simplest
thing possible: approximate the non-existing term by a
well defined function that can be quantized and hope for
the best. As we shall see in next sections, there is indeed
more that one can do.
At this point there is also an important decision to be
made: which variable q or p should be regarded as “dis-
crete”? Once this choice is made, then it implies that
the other variable will not exist: if q is regarded as dis-
crete, then p will not exist and we need to approximate
the kinetic term p2/2m by something else; if p is to be
the discrete quantity, then q will not be defined and then
we need to approximate the potential V (q). What hap-
pens with a periodic potential? In this case one would
be modelling, for instance, a particle on a regular lattice
such as a phonon living on a crystal, and then the natural
choice is to have q not well defined. Furthermore, the po-
tential will be well defined and there is no approximation
needed.
In the literature both scenarios have been considered.
For instance, when considering a quantum mechanical
system in [2], the position was chosen to be discrete,
so p does not exist, and one is then in the A type for
the momentum polarization (or the type B for the q-
polarization). With this choice, it is the kinetic term
that has to be approximated; once this is done,
it is immediate to consider any potential, which
will thus be well defined. On the other hand, when con-
sidering loop quantum cosmology (LQC), the standard
choice is that the configuration variable is not defined
[4]. This choice is made given that LQC is regarded as
the symmetric sector of full loop quantum gravity where
the connection (that is regarded as the configuration vari-
able) can not be promoted to an operator and one can
only define its exponentiated version, namely, the holon-
omy. In that case, the canonically conjugate variable,
closely related to the volume, becomes ‘discrete’, just as
in the full theory. This case is, however, different from the
particle in a potential example. First we could mention
that the functional form of the Hamiltonian constraint
that implements dynamics has a different structure, but
the more important difference lies in that the system is
constrained.
Let us return to the case of the particle in a po-
tential and for definiteness, let us start with the aux-
iliary kinematical framework in which: q is discrete, p
can not be promoted and thus we have to approximate
the kinetic term p̂2/2m. How is this done? The stan-
dard prescription is to define, on the configuration space
C, a regular ‘graph’ γµ0 . This consists of a numerable
set of points, equidistant, and characterized by a pa-
rameter µ0 that is the (constant) separation between
points. The simplest example would be to consider the
set γµ0 = {q ∈ R | q = nµ0 , ∀ n ∈ Z}.
This means that the basic kets that will be considered
|µn〉 will correspond precisely to labels µn belonging to
the graph γµ0 , that is, µn = nµ0. Thus, we shall only
consider states of the form,
$|\psi\rangle = \sum_{n} b_n\,|\mu_n\rangle$ . (21)
This ‘small’ Hilbert space Hγµ0 , the graph Hilbert space,
is a subspace of the ‘large’ polymer Hilbert space Hpoly
but it is separable. The condition for a state of the form
(21) to belong to the Hilbert space Hγµ0 is that the co-
efficients bn satisfy $\sum_n |b_n|^2 < \infty$.
Let us now consider the kinetic term p̂2/2m. We have
to approximate it by means of trigonometric functions,
that can be built out of the functions of the form eiλ p/~.
As we have seen in previous sections, these functions can
indeed be promoted to operators and act as translation
operators on the kets |µ〉. If we want to remain in the
graph γ, and not create ‘new points’, then one is con-
strained to considering operators that displace the kets
by just the right amount. That is, we want the basic
shift operator V̂ (λ) to be such that it maps the ket with
label |µn〉 to the next ket, namely |µn+1〉. This can in-
deed be achieved by fixing, once and for all, the value of the
allowed parameter λ to be λ = µ0. We have then,
V̂ (µ0) · |µn〉 = |µn + µ0〉 = |µn+1〉
which is what we wanted. This basic ‘shift operator’ will
be the building block for approximating any (polynomial)
function of p. In order to do that we notice that the
function p can be approximated by,
$p \approx \frac{\hbar}{2i\mu_0}\left(e^{\,i\mu_0 p/\hbar} - e^{-i\mu_0 p/\hbar}\right) = \frac{\hbar}{\mu_0}\,\sin\!\left(\frac{\mu_0 p}{\hbar}\right)$
where the approximation is good for p ≪ ħ/µ0. Thus,
one can define a regulated operator p̂µ0 that depends on
the ‘scale’ µ0 as:
$\hat{p}_{\mu_0}\cdot|\mu_n\rangle := \frac{\hbar}{2i\mu_0}\left[\hat{V}(\mu_0) - \hat{V}(-\mu_0)\right]\cdot|\mu_n\rangle = \frac{\hbar}{2i\mu_0}\,\big(|\mu_{n+1}\rangle - |\mu_{n-1}\rangle\big)$ (22)
In order to regulate the operator p̂2, there are (at least)
two possibilities, namely to compose the operator p̂µ0
with itself or to define a new approximation. The oper-
ator p̂µ0 · p̂µ0 has the feature that it shifts the states two
steps in the graph, in both directions. There is, however, an-
other operator that only involves shifting once:
$\hat{p}^2_{\mu_0}\cdot|\nu_n\rangle := \frac{\hbar^2}{\mu_0^2}\left[2 - \hat{V}(\mu_0) - \hat{V}(-\mu_0)\right]\cdot|\nu_n\rangle = \frac{\hbar^2}{\mu_0^2}\,\big(2|\nu_n\rangle - |\nu_{n+1}\rangle - |\nu_{n-1}\rangle\big)$ (23)
which corresponds to the approximation $p^2 \approx \frac{2\hbar^2}{\mu_0^2}\left(1 - \cos(\mu_0 p/\hbar)\right)$,
valid also in the regime p ≪ ħ/µ0. With
these considerations, one can define the operator Ĥµ0 ,
the Hamiltonian at scale µ0, that in practice ‘lives’ on
the space Hγµ0 as,
$\hat{H}_{\mu_0} := \frac{1}{2m}\,\hat{p}^2_{\mu_0} + \hat{V}(q)$ , (24)
that is a well defined, symmetric operator on Hγµ0 . No-
tice that the operator is also defined on Hpoly, but there
its physical interpretation is problematic. For example,
it turns out that the expectation value of the kinetic term
calculated on most states (states which are not tailored
to the exact value of the parameter µ0) is zero. Even
if one takes a state that gives “reasonable” expectation
values of the µ0-kinetic term and uses it to calculate the
expectation value of the kinetic term corresponding to
a slight perturbation of the parameter µ0 one would get
zero. This problem, and others that arise when working
on Hpoly, forces one to assign a physical interpretation
to the Hamiltonian Ĥµ0 only when its action is restricted
to the subspace Hγµ0 .
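The difference between composing p̂µ0 with itself and the one-step operator p̂²µ0 of Eq. (23) is easy to see on coefficient vectors. A sketch (our own code, not the paper's: a finite vector of coefficients bn, with the periodic `np.roll` used only for convenience away from the boundary):

```python
import numpy as np

def shift(b, k):
    """Coefficients of V(k*mu0)|psi>: b'_n = b_{n-k} (|n> -> |n+k>)."""
    return np.roll(b, k)

def p_mu0(b, mu0, hbar=1.0):
    """Eq. (22): p_mu0 = (hbar / 2i mu0) [V(mu0) - V(-mu0)]."""
    return (hbar / (2j * mu0)) * (shift(b, 1) - shift(b, -1))

def p2_mu0(b, mu0, hbar=1.0):
    """Eq. (23): p^2_mu0 = (hbar/mu0)^2 [2 - V(mu0) - V(-mu0)]."""
    return (hbar / mu0) ** 2 * (2 * b - shift(b, 1) - shift(b, -1))

b = np.zeros(7); b[3] = 1.0                         # the single ket |mu_3>
print(np.nonzero(p_mu0(p_mu0(b, 1.0), 1.0))[0])     # [1 3 5]: two-step shifts
print(np.nonzero(p2_mu0(b, 1.0))[0])                # [2 3 4]: one-step shifts
```

Applying p̂µ0 twice spreads support to n ± 2, while p̂²µ0 only touches nearest neighbours, matching the remark above.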
Let us now explore the form that the Hamiltonian takes
in the two possible polarizations. In the q-polarization,
the basis, labelled by n is given by the functions χn(q) =
δq,µn . That is, the wave functions will only have sup-
port on the set γµ0 . Alternatively, one can think of a
state as completely characterized by the ‘Fourier coeffi-
cients’ an: ψ(q) ↔ an, which is the value that the wave
function ψ(q) takes at the point q = µn = nµ0. Thus,
the Hamiltonian takes the form of a difference equation
when acting on a general state ψ(q). Solving the time
independent Schrödinger equation Ĥ · ψ = E ψ amounts
to solving the difference equation for the coefficients an.
The momentum polarization has a different structure.
In this case, the operator p̂2µ0 acts as a multiplication
operator,
$\hat{p}^2_{\mu_0}\cdot\psi(p) = \frac{2\hbar^2}{\mu_0^2}\left[1 - \cos\left(\frac{\mu_0 p}{\hbar}\right)\right]\psi(p)$ (25)
The operator corresponding to q will be represented as a
derivative operator
q̂ · ψ(p) := i~ ∂p ψ(p).
For a generic potential V (q), it has to be defined by
means of spectral theory defined now on a circle. Why
on a circle? For the simple reason that by restricting
ourselves to a regular graph γµ0 , the functions of p that
preserve it (when acting as shift operators) are of the
form e(i m µ0 p/~) for m integer. That is, what we have
are Fourier modes, labelled by m, of period 2π ~/µ0 in p.
Can we pretend then that the phase space variable p is
now compactified? The answer is in the affirmative. The
inner product on periodic functions ψµ0(p) of p coming
from the full Hilbert space Hpoly and given by
$\langle\phi(p)|\psi(p)\rangle_{\rm poly} = \lim_{L\to\infty}\frac{1}{2L}\int_{-L}^{L} dp\,\overline{\phi(p)}\,\psi(p)$
is precisely equivalent to the inner product on the circle
given by the uniform measure
$\langle\phi(p)|\psi(p)\rangle_{\mu_0} = \frac{\mu_0}{2\pi\hbar}\int_{-\pi\hbar/\mu_0}^{\pi\hbar/\mu_0} dp\,\overline{\phi(p)}\,\psi(p)$
with p ∈ (−π~/µ0, π~/µ0). As long as one restricts at-
tention to the graph γµ0 , one can work in this separable
Hilbert space Hγµ0 of square integrable functions on S1.
Immediately, one can see the limitations of this descrip-
tion. If the mechanical system to be quantized is such
that its orbits have values of the momenta p that are
not small compared with π~/µ0 then the approximation
taken will be very poor, and we expect neither the
effective classical description nor its quantization to be
close to the standard one. If, on the other hand, one is al-
ways within the region in which the approximation can be
regarded as reliable, then both classical and quantum de-
scriptions should approximate the standard description.
What ‘close to the standard description’ exactly
means needs, of course, some further clarification. In
particular one is assuming the existence of the usual
Schrödinger representation in which the system has a be-
havior that is also consistent with observations. If this is
the case, the natural question is: How can we approxi-
mate such description from the polymer picture? Is there
a fine enough graph γµ0 that will approximate the system
in such a way that all observations are indistinguishable?
Or even better, can we define a procedure, that involves
a refinement of the graph γµ0 such that one recovers the
standard picture?
It could also happen that a continuum limit can be de-
fined but does not coincide with the ‘expected one’. But
there might be also physical systems for which there is
no standard description, or it just does not make sense.
Can in those cases the polymer representation, if it ex-
ists, provide the correct physical description of the sys-
tem under consideration? For instance, if there exists a
physical limitation to the minimum scale set by µ0, as
could be the case for a quantum theory of gravity, then
the polymer description would provide a true physical
bound on the value of certain quantities, such as p in
our example. This could be the case for loop quantum
cosmology, where there is a minimum value for physical
volume (coming from the full theory), and phase space
points near the ‘singularity’ lie at the region where the
approximation induced by the scale µ0 departs from the
standard classical description. If in that case the poly-
mer quantum system is regarded as more fundamental
than the classical system (or its standard Wheeler-De
Witt quantization), then one would interpret these dis-
crepancies in the behavior as a signal of the breakdown
of the classical description (or its ‘naive’ quantization).
In the next section we present a method to remove
the regulator µ0 which was introduced as an intermedi-
ate step to construct the dynamics. More precisely, we
shall consider the construction of a continuum limit of
the polymer description by means of a renormalization
procedure.
V. THE CONTINUUM LIMIT
This section has two parts. In the first one we motivate
the need for a precise notion of the continuum limit of
the polymeric representation, explaining why the most
direct, and naive approach does not work. In the sec-
ond part, we shall present the main ideas and results of
the paper [6], where the Hamiltonian and the physical
Hilbert space in polymer quantum mechanics are con-
structed as a continuum limit of effective theories, follow-
ing Wilson’s renormalization group ideas. The resulting
physical Hilbert space turns out to be unitarily isomor-
phic to the ordinary Hs = L2(R, dq) of the Schrödinger
theory.
Before describing the results of [6] we should discuss
the precise meaning of reaching a theory in the contin-
uum. Let us for concreteness consider the B-type repre-
sentation in the q-polarization. That is, states are func-
tions of q and the orthonormal basis χµ(q) is given by
characteristic functions with support on q = µ. Let us
now suppose we have a Schrödinger state Ψ(q) ∈ Hs =
L2(R, dq). What is the relation between Ψ(q) and a state
in Hpoly,x? We are also interested in the opposite ques-
tion, that is, we would like to know if there is a preferred
state in Hs that is approximated by an arbitrary state
ψ(q) in Hpoly,x. The first obvious observation is that a
Schrödinger state Ψ(q) does not belong to Hpoly,x since it
would have an infinite norm. To see this, note that the
would-be state can be formally expanded in the χµ basis
as,
$\Psi(q) = \sum_{\mu} \Psi(\mu)\,\chi_\mu(q)$
where the sum is over the parameter µ ∈ R. Its associ-
ated norm in Hpoly,x would be:
$|\Psi(q)|^2_{\rm poly} = \sum_{\mu} |\Psi(\mu)|^2 \to \infty$
which blows up. Note that in order to define a mapping
P : Hs → Hpoly,x, there is a huge ambiguity since the
values of the function Ψ(q) are needed in order to expand
the polymer wave function. Thus we can only define a
mapping in a dense subset D of Hs where the values of the
functions are well defined (recall that in Hs the value of
functions at a given point has no meaning since states are
equivalence classes of functions). We could for instance
ask that the mapping be defined for representatives of the
equivalence classes in Hs that are piecewise continuous.
From now on, when we refer to an element of the space
Hs we shall be referring to one of those representatives.
Notice then that an element of Hs does define an element
of Cyl∗γ , the dual to the space Cylγ , that is, the space
of cylinder functions with support on the (finite) lattice
γ = {µ1, µ2, . . . , µN}, in the following way:
Ψ(q) : Cylγ −→ C
such that
$\Psi(q)[\psi(q)] = (\Psi|\psi\rangle := \sum_{\mu} \bar{\Psi}(\mu)\,\Big\langle\chi_\mu\Big|\sum_{i}\psi_i\,\chi_{\mu_i}\Big\rangle_{{\rm poly}_\gamma} = \sum_{i} \bar{\Psi}(\mu_i)\,\psi_i < \infty$ (26)
Note that this mapping could be seen as consisting of two
parts: First, a projection Pγ : Cyl∗ → Cylγ such that
$P_\gamma(\Psi) = \Psi_\gamma(q) := \sum_{i} \Psi(\mu_i)\,\chi_{\mu_i}(q) \in {\rm Cyl}_\gamma$. The state
Ψγ is sometimes referred to as the ‘shadow of Ψ(q) on
the lattice γ’. The second step is then to take the inner
product between the shadow Ψγ(q) and the state ψ(q)
with respect to the polymer inner product 〈Ψγ |ψ〉polyγ .
Now this inner product is well defined. Notice that for
any given lattice γ the corresponding projector Pγ can be
intuitively interpreted as some kind of ‘coarse graining
map’ from the continuum to the lattice γ. In terms of
functions of q the projection replaces a continuous
function defined on R with a function over the lattice
γ ⊂ R, which is a discrete set, simply by restricting Ψ to
γ. The finer the lattice the more points we have
on the curve. As we shall see in the second part of this
section, there is indeed a precise notion of coarse graining
that implements this intuitive idea in a concrete fashion.
In particular, we shall need to replace the lattice γ with
a decomposition of the real line in intervals (having the
lattice points as end points).
Let us now consider a system in the polymer represen-
tation in which a particular lattice γ0 was chosen, say
with points of the form {qk ∈ R |qk = ka0 , ∀ k ∈ Z},
namely a uniform lattice with spacing equal to a0. In this
case, any Schrödinger wave function (of the type that we
consider) will have a unique shadow on the lattice γ0. If
we refine the lattice γ 7→ γn by dividing each interval into
$2^n$ new intervals of length $a_n = a_0/2^n$, we have new shad-
ows that have more and more points on the curve. Intu-
itively, by refining the graph indefinitely we would recover
the original function Ψ(q). Even though at each finite step
the corresponding shadow has a finite norm in the poly-
mer Hilbert space, the norm grows unboundedly and the
limit can not be taken, precisely because we can not em-
bed Hs into Hpoly. Suppose now that we are interested
in the reverse process, namely starting from a polymer
theory on a lattice and asking for the ‘continuum wave
function’ that is best approximated by a wave function
over a graph. Suppose furthermore that we want to con-
sider the limit of the graph becoming finer. In order
to give precise answers to these (and other) questions we
need to introduce some new technology that will allow us
to overcome these apparent difficulties. In the remainder
of this section we shall recall these constructions for the
benefit of the reader. Details can be found in [6] (which
is an application of the general formalism discussed in
[9]).
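The norm blow-up under refinement described above is simple to see numerically. A sketch with an assumed Gaussian test state and a truncated lattice (both choices are ours, not from [6]):

```python
import numpy as np

def shadow_norm_sq(Psi, a, L=10.0):
    """Polymer norm^2 of the shadow: sum |Psi(q_k)|^2 over q_k = k*a in [-L, L)."""
    q = np.arange(-L, L, a)
    return float(np.sum(np.abs(Psi(q)) ** 2))

Psi = lambda q: np.exp(-q**2 / 2.0) / np.pi**0.25   # unit L2 norm in H_s
for n in range(4):
    a = 1.0 / 2**n                                  # a_n = a_0 / 2^n, a_0 = 1
    print(a, shadow_norm_sq(Psi, a))                # roughly doubles each step
```

The polymer norm of the shadow roughly doubles each time the spacing is halved, approximating (1/a)∫|Ψ|² dq, so the a → 0 limit diverges: this is the obstruction to embedding Hs in Hpoly.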
The starting point in this construction is the concept
of a scale C, which allows us to define the effective the-
ories and the concept of continuum limit. In our case a
scale is a decomposition of the real line into the union of
closed-open intervals that cover the whole line and do
not intersect. Intuitively, we are shifting the emphasis
from the lattice points to the intervals defined by the
same points with the objective of approximating con-
tinuous functions defined on R with functions that are
constant on the intervals defined by the lattice. To be
precise, we define an embedding, for each scale Cn, from
Hpoly to Hs by means of a step function:
$\sum_m \Psi(m a_n)\,\chi_{m a_n}(q) \;\to\; \sum_m \Psi(m a_n)\,\chi_{\alpha_m}(q) \in \mathcal{H}_s$
with χαm(q) a characteristic function on the interval
αm = [man, (m + 1)an). Thus, the shadows (living on
the lattice) were just an intermediate step in the con-
struction of the approximating function; this function is
piece-wise constant and can be written as a linear com-
bination of step functions with the coefficients provided
by the shadows.
The challenge now is to define in an appropriate sense
how one can approximate all the aspects of the theory
by means of these piecewise constant functions. Then the
strategy is that, for any given scale, one can define an
effective theory by approximating the kinetic operator
by a combination of the translation operators that shift
between the vertices of the given decomposition, in other
words by a periodic function in p. As a result one has a
set of effective theories at given scales which are mutually
related by coarse graining maps. This framework was
developed in [6]. For the convenience of the reader we
briefly recall part of that framework.
Let us denote the kinematic polymer Hilbert space at
the scale Cn as HCn , and its basis elements as eαi,Cn ,
where αi = [ian, (i + 1)an) ∈ Cn. By construction this
basis is orthonormal. The basis elements in the dual
Hilbert space H∗Cn are denoted by ωαi,Cn ; they are also
orthonormal. The states ωαi,Cn have a simple action on
Cyl, ωαi,Cn(δx0,q) = χαi,Cn(x0). That is, if x0 is in the
interval αi of Cn the result is one and it is zero if it is
not there.
Given any m ≤ n, we define $d^*_{m,n} : \mathcal{H}^*_{C_n} \to \mathcal{H}^*_{C_m}$
as the ‘coarse graining’ map between the dual Hilbert
spaces, that sends part of the elements of the dual
basis to zero while keeping the information of the rest:
$d^*_{m,n}(\omega_{\alpha_i,C_n}) = \omega_{\beta_j,C_m}$ if $i = j\,2^{n-m}$; in the opposite case
$d^*_{m,n}(\omega_{\alpha_i,C_n}) = 0$.
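In index form, the coarse graining just defined only keeps those dual basis elements whose interval survives at the coarser scale. A sketch (our own representation: a dict from interval index i to coefficient):

```python
def coarse_grain(omega_n, n, m):
    """d*_{m,n}: keep omega_{alpha_i} only when i = j * 2^(n-m), relabel i -> j."""
    step = 2 ** (n - m)
    return {i // step: c for i, c in omega_n.items() if i % step == 0}

omega = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0}   # a covector at scale C_2
print(coarse_grain(omega, n=2, m=0))               # {0: 1.0, 1: 5.0}
print(coarse_grain(omega, n=2, m=2))               # identity when m = n
```

Going down two scales keeps only every fourth basis element (i = 0, 4, ...), discarding the fine-scale information in between.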
At every scale the corresponding effective theory is
given by the Hamiltonian Hn. These Hamiltonians will
be treated as quadratic forms, hn : HCn → R, given by
$h_n(\psi) = \lambda^2_{C_n}\,(\psi, H_n\,\psi)$ , (27)
where λ2Cn is a normalization factor. We will see later
that this rescaling of the inner product is necessary in
order to guarantee the convergence of the renormalized
theory. The completely renormalized theory at this scale
is obtained as
$h^{\rm ren}_m := \lim_{n\to\infty} d^{\star}_{m,n}\,h_n$ , (28)
and the renormalized Hamiltonians are compatible with
each other, in the sense that
$d^{\star}_{m,n}\,h^{\rm ren}_n = h^{\rm ren}_m$ .
In order to analyze the conditions for the convergence
in (28) let us express the Hamiltonian in terms of its
eigen-covectors and eigenvalues. We will work with effec-
tive Hamiltonians that have a purely discrete spectrum
(labelled by ν): $H_n \cdot \Psi_{\nu,C_n} = E_{\nu,C_n}\,\Psi_{\nu,C_n}$. We shall also
introduce, as an intermediate step, a cut-off in the energy
levels. The origin of this cut-off is in the approximation
of the Hamiltonian of our system at a given scale with
a Hamiltonian of a periodic system in a regime of small
energies, as we explained earlier. Thus, we can write
$h^{\nu_{\rm cut-off}}_m = \sum_{\nu=0}^{\nu_{\rm cut-off}} E_{\nu,C_m}\,\Psi_{\nu,C_m} \otimes \Psi_{\nu,C_m}$ , (29)
where the eigen covectors Ψν,Cm are normalized accord-
ing to the inner product rescaled by $1/\lambda^2_{C_m}$, and the cut-
off can vary up to a scale dependent bound, νcut−off ≤
νmax(Cm). The Hilbert space of covectors together with
such inner product will be called H⋆renCm .
In the presence of a cut-off, the convergence of the
microscopically corrected Hamiltonians, equation (28) is
equivalent to the existence of the following two limits.
The first one is the convergence of the energy levels,
$\lim_{n\to\infty} E_{\nu,C_n} = E^{\rm ren}_{\nu}$ . (30)
Second is the existence of the completely renormalized
eigen covectors,
$\lim_{n\to\infty} d^{\star}_{m,n}\,\Psi_{\nu,C_n} = \Psi^{\rm ren}_{\nu,C_m} \in \mathcal{H}^{\star\,{\rm ren}}_{C_m} \subset {\rm Cyl}^{\star}$ . (31)
We clarify that the existence of the above limit means
that Ψrenν,Cm(δx0,q) is well defined for any δx0,q ∈ Cyl. No-
tice that this point-wise convergence, if it can take place
at all, will require the tuning of the normalization factors
λ2Cn .
Now we turn to the question of the continuum limit
of the renormalized covectors. First we can ask for the
existence of the limit
$\lim_{n\to\infty} \Psi^{\rm ren}_{\nu,C_n}(\delta_{x_0,q})$ (32)
for any δx0,q ∈ Cyl. When this limit exists there is
a natural action of the eigen covectors in the continuum
limit. Below we consider another notion of the continuum
limit of the renormalized eigen covectors.
When the completely renormalized eigen covectors
exist, they form a collection that is d⋆-compatible,
$d^{\star}_{m,n}\,\Psi^{\rm ren}_{\nu,C_n} = \Psi^{\rm ren}_{\nu,C_m}$. A sequence of d⋆-compatible nor-
malizable covectors defines an element of $\mathcal{H}^{\star\,{\rm ren}}_{R}$, which is
the projective limit of the renormalized spaces of covec-
tors,
$\mathcal{H}^{\star\,{\rm ren}}_{R} := \varprojlim_{n} \mathcal{H}^{\star\,{\rm ren}}_{C_n}$ . (33)
The inner product in this space is defined by
$(\{\Psi_{C_n}\}, \{\Phi_{C_n}\})^{\rm ren}_{R} := \lim_{n\to\infty} (\Psi_{C_n}, \Phi_{C_n})^{\rm ren}_{C_n}$ .
The natural inclusion of C∞0 in $\mathcal{H}^{\star\,{\rm ren}}_{R}$ is by an antilinear
map which assigns to any Ψ ∈ C∞0 the d⋆-compatible
collection $\Psi^{\rm shad}_{C_n} := \sum_i \omega_{\alpha_i}\,\bar{\Psi}(L(\alpha_i)) \in \mathcal{H}^{\star\,{\rm ren}}_{C_n} \subset {\rm Cyl}^{\star}$.
ΨshadCn will be called the shadow of Ψ at scale Cn and acts
in Cyl as a piecewise constant function. Clearly other
types of test functions, like Schwartz functions, are also
naturally included in $\mathcal{H}^{\star\,{\rm ren}}_{R}$. In this context a shadow is
a state of the effective theory that approximates a state
in the continuum theory.
Since the inner product in $\mathcal{H}^{\star\,{\rm ren}}_{R}$ is degenerate, the
physical Hilbert space is defined as
$\mathcal{H}^{\star}_{\rm phys} := \mathcal{H}^{\star\,{\rm ren}}_{R} / \ker(\cdot,\cdot)^{\rm ren}\,, \qquad \mathcal{H}_{\rm phys} := \mathcal{H}^{\star\star}_{\rm phys}$ .
The nature of the physical Hilbert space, whether it is
isomorphic to the Schrödinger Hilbert space, Hs, or not, is
determined by the normalization factors λ2Cn which can
be obtained from the conditions asking for compatibil-
ity of the dynamics of the effective theories at different
scales. The dynamics of the system under consideration
selects the continuum limit.
Let us now return to the definition of the Hamilto-
nian in the continuum limit. First consider the contin-
uum limit of the Hamiltonian (with cut-off) in the sense
of its point-wise convergence as a quadratic form. It
turns out that if the limit of equation (32) exists for
all the eigencovectors allowed by the cut-off, we have
$h^{\nu_{\rm cut-off}\,{\rm ren}} : \mathcal{H}_{{\rm poly},x} \to \mathbb{R}$ defined by
$h^{\nu_{\rm cut-off}\,{\rm ren}}(\delta_{x_0,q}) := \lim_{n\to\infty} h^{\nu_{\rm cut-off}\,{\rm ren}}_{n}([\delta_{x_0,q}]_{C_n})$ . (34)
This Hamiltonian quadratic form in the continuum can
be coarse grained to any scale and, as can be ex-
pected, it yields the completely renormalized Hamilto-
nian quadratic forms at that scale. However, this is not
a completely satisfactory continuum limit because we can
not remove the auxiliary cut-off νcut−off . If we tried to, then as
we include more and more eigencovectors in the Hamilto-
nian, the calculations done at a given scale would diverge,
and doing them in the continuum is just as divergent.
Below we explore a more successful path.
We can use the renormalized inner product to induce
an action of the cut-off Hamiltonians on $\mathcal{H}^{\star\,{\rm ren}}_{R}$:
$h^{\nu_{\rm cut-off}\,{\rm ren}}(\{\Psi_{C_n}\}) := \lim_{n\to\infty} h^{\nu_{\rm cut-off}\,{\rm ren}}_{n}\big((\Psi_{C_n}, \cdot)^{\rm ren}_{C_n}\big)\,,$
where we have used the fact that $(\Psi_{C_n}, \cdot)^{\rm ren}_{C_n} \in \mathcal{H}_{C_n}$. The
existence of this limit is trivial because the renormalized
Hamiltonians are finite sums and the limit exists term by
term.
These cut-off Hamiltonians descend to the physical
Hilbert space:
$h^{\nu_{\rm cut-off}\,{\rm ren}}([\{\Psi_{C_n}\}]) := h^{\nu_{\rm cut-off}\,{\rm ren}}(\{\Psi_{C_n}\})$
for any representative {ΨCn} ∈ [{ΨCn}] ∈ H⋆phys.
Finally we can address the issue of removal of the cut-off. The Hamiltonian $h_{\rm ren} : \mathcal{H}^{\star}_{\rm phys} \to \mathbb{R}$ is defined by the limit
$$h_{\rm ren} := \lim_{\nu_{\rm cut\text{-}off}\to\infty} h^{\nu_{\rm cut\text{-}off}}_{\rm ren}$$
when the limit exists. Its corresponding Hermitian form in $\mathcal{H}_{\rm phys}$ is defined whenever the above limit exists. This
concludes our presentation of the main results of [6]. Let
us now consider several examples of systems for which
the continuum limit can be investigated.
VI. EXAMPLES
In this section we shall develop several examples of
systems that have been treated with the polymer quanti-
zation. These examples are simple quantum mechanical
systems, such as the simple harmonic oscillator and the
free particle, as well as a quantum cosmological model
known as loop quantum cosmology.
A. The Simple Harmonic Oscillator
In this part, let us consider the example of a Simple Harmonic Oscillator (SHO) with parameters m and ω, classically described by the Hamiltonian
$$H = \frac{p^2}{2m} + \frac{1}{2}\,m\omega^2 x^2\,.$$
Recall that from these parameters one can define a length scale $D = \sqrt{\hbar/m\omega}$. In the standard treatment one uses
this scale to define a complex structure $J_D$ (and an inner product from it) that, as we have described in detail, uniquely selects the standard Schrödinger representation.
At scale Cn we have an effective Hamiltonian for the
Simple Harmonic Oscillator (SHO) given by
$$H_{C_n} = \frac{\hbar^2}{m\,a_n^2}\left(1 - \cos\frac{a_n p}{\hbar}\right) + \frac{1}{2}\,m\omega^2 x^2\,. \qquad (35)$$
If we interchange position and momentum, this Hamilto-
nian is exactly that of a pendulum of mass m, length l
and subject to a constant gravitational field g:
$$\hat{H}_{C_n} = -\frac{\hbar^2}{2ml^2}\,\frac{d^2}{d\theta^2} + mgl\,(1 - \cos\theta)\,,$$
where those quantities are related to our system by
$$l = \frac{\hbar}{m\omega\,a_n}\,, \qquad g = \frac{\hbar\,\omega}{m\,a_n}\,, \qquad \theta = \frac{a_n\,p}{\hbar}\,.$$
That is, we are approximating, for each scale Cn the
SHO by a pendulum. There is, however, an important
difference. From our knowledge of the pendulum system,
we know that the quantum system will have a spectrum
for the energy that has two different asymptotic behav-
iors, the SHO for low energies and the planar rotor in
the higher end, corresponding to oscillating and rotating
solutions, respectively². As we refine our scale and both
the length of the pendulum and the height of the periodic
potential increase, we expect to have an increasing num-
ber of oscillating states (for a given pendulum system,
there is only a finite number of such states). Thus, it
is justified to consider the cut-off in the energy eigenval-
ues, as discussed in the last section, given that we only
expect a finite number of states of the pendulum to ap-
proximate SHO eigenstates. With these considerations in
mind, the relevant question is whether the conditions for
the continuum limit to exist are satisfied. This question
has been answered in the affirmative in [6]. What was shown there is that the eigenvalues and eigenfunctions of the discrete systems, which form a discrete and non-degenerate set, approximate those of the continuum, namely of the standard harmonic oscillator, when the inner product is renormalized by a factor $\lambda^2_{C_n} = 1/2^n$. This convergence implies that the continuum limit exists as we understand it. Let us now consider the simplest
possible system, a free particle, which nevertheless has the
particular feature that the spectrum of the energy is con-
tinuous.
2 Note that both types of solutions are, in the phase space, closed.
This is the reason behind the purely discrete spectrum. The
distinction we are making is between those solutions inside the
separatrix, that we call oscillating, and those that are above it
that we call rotating.
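The convergence just described can be illustrated numerically. In the x-representation the kinetic term of (35) acts as a second-difference operator on the lattice $x_j = a_n j$, so the effective SHO at a given scale can be diagonalized directly. The following is a minimal sketch (not taken from [6]), in units $\hbar = m = \omega = 1$; the 401-site truncation is an arbitrary choice:

```python
import numpy as np

# Polymer SHO at lattice spacing a (hbar = m = omega = 1).
# The kinetic term (1/a^2)(1 - cos(a p)) acts in the x-representation as
# a second-difference operator:  (psi_j - (psi_{j+1} + psi_{j-1})/2) / a^2.
def polymer_sho_levels(a, n_sites=401, n_levels=4):
    j = np.arange(n_sites) - n_sites // 2
    x = a * j                                   # lattice points x_j = a * j
    H = np.diag(1.0 / a**2 + 0.5 * x**2)        # diagonal: kinetic + potential
    off = -0.5 / a**2 * np.ones(n_sites - 1)    # hopping term from the cosine
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

for a in (1.0, 0.5, 0.1):
    print(a, polymer_sho_levels(a))
```

As the scale is refined, the low-lying eigenvalues approach the Schrödinger values $\hbar\omega(n + 1/2)$, in line with the convergence result quoted above.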
B. Free Polymer Particle
In the limit ω → 0, the Hamiltonian of the Simple
Harmonic oscillator (35) goes to the Hamiltonian of a
free particle and the corresponding time independent
Schrödinger equation, in the p-polarization, is given by
$$\left[\frac{\hbar^2}{m\,a_n^2}\left(1 - \cos\frac{a_n p}{\hbar}\right) - E_{C_n}\right]\tilde\psi(p) = 0\,,$$
where we now have that $p \in S^1$, with $p \in (-\pi\hbar/a_n,\,\pi\hbar/a_n]$. Thus, we have
$$E_{C_n} = \frac{\hbar^2}{m\,a_n^2}\left(1 - \cos\frac{a_n p}{\hbar}\right) \;\le\; E_{C_n,{\rm max}} \equiv \frac{2\hbar^2}{m\,a_n^2}\,. \qquad (36)$$
At each scale, the energy of the particle we can describe is bounded from above, and the bound depends on the scale. Note that in this case the spectrum is continuous, which implies that the ordinary eigenfunctions are not normalizable elements of the Hilbert space. There is thus an upper bound on the value that the energy of the particle can have, in addition to the bound on the momentum due to its “compactification”.
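These bounds are easy to exhibit numerically (a simple sketch in units $\hbar = m = 1$, not part of the original analysis):

```python
import numpy as np

# Polymer free-particle dispersion at scale C_n (hbar = m = 1):
# E(p) = (1 - cos(a_n p)) / a_n^2, bounded by E_max = 2 / a_n^2.
def E_poly(p, a):
    return (1.0 - np.cos(a * p)) / a**2

p = 1.3
for a in (1.0, 0.1, 0.01):
    print(a, E_poly(p, a), 2.0 / a**2)   # energy and scale-dependent bound
```

The maximum is attained at the edge $p = \pi/a$ of the momentum circle, and for fixed p one has $E(p) \to p^2/2$ as the scale is refined.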
Let us first look for eigen-solutions to the time inde-
pendent Schrödinger equation, that is, for energy eigen-
states. In the case of the ordinary free particle, these correspond to constant-momentum plane waves of the form $e^{\pm i p x/\hbar}$, such that the ordinary dispersion relation $p^2/2m = E$ is satisfied. These plane waves are
not square integrable and do not belong to the ordinary
Hilbert space of the Schrödinger theory but they are still
useful for extracting information about the system. For
the polymer free particle we have
$$\tilde\psi_{C_n}(p) = c_1\,\delta(p - P_{C_n}) + c_2\,\delta(p + P_{C_n})\,,$$
where $P_{C_n}$ is a solution of the previous equation for a fixed value of $E_{C_n}$. That is,
$$P_{C_n} = P(E_{C_n}) = \frac{\hbar}{a_n}\,\arccos\!\left(1 - \frac{m\,a_n^2\,E_{C_n}}{\hbar^2}\right).$$
The inverse Fourier transform yields, in the ‘x representation’,
$$\psi_{C_n}(x_j) = \int_{-\pi\hbar/a_n}^{\pi\hbar/a_n} \tilde\psi(p)\, e^{\frac{i}{\hbar}\,p\,x_j}\, dp \;=\; c_1\,e^{i x_j P_{C_n}/\hbar} + c_2\,e^{-i x_j P_{C_n}/\hbar}\,, \qquad (37)$$
with xj = an j for j ∈ Z. Note that the eigenfunctions
are still delta functions (in the p representation) and thus
not (square) normalizable with respect to the polymer
inner product, that in the p polarization is just given
by the ordinary Haar measure on S1, and there is no
quantization of the momentum (its spectrum is still truly
continuous).
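The inversion of the dispersion relation can be checked numerically (a sketch in units $\hbar = m = 1$):

```python
import numpy as np

# Invert E = (1 - cos(a P)) / a^2 for P, valid for 0 <= E <= 2/a^2:
# P(E) = (1/a) * arccos(1 - a^2 E).
def P_of_E(E, a):
    return np.arccos(1.0 - a**2 * E) / a

a, E = 0.2, 3.0                            # E below the bound 2/a^2 = 50
P = P_of_E(E, a)
print(P, (1.0 - np.cos(a * P)) / a**2)     # round trip recovers E
```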
Let us now consider the time dependent Schrödinger equation,
$$i\hbar\,\partial_t\,\tilde\Psi(p,t) = \hat{H}\cdot\tilde\Psi(p,t)\,,$$
which now takes the form
$$i\hbar\,\partial_t\,\tilde\Psi(p,t) = \frac{\hbar^2}{m\,a_n^2}\left(1 - \cos\frac{a_n p}{\hbar}\right)\tilde\Psi(p,t)$$
and has as its solution
$$\tilde\Psi(p,t) = e^{-\frac{i}{\hbar}\,\frac{\hbar^2}{m a_n^2}\left(1 - \cos\left(a_n p/\hbar\right)\right)\,t}\,\tilde\psi(p) = e^{-i E_{C_n} t/\hbar}\,\tilde\psi(p)$$
for any initial function $\tilde\psi(p)$, where $E_{C_n}$ satisfies the dispersion relation (36). The wave function $\Psi(x_j,t)$, the $x_j$-representation of the wave function, can be obtained for any given time t by Fourier transforming the wave function $\tilde\Psi(p,t)$ with (37).
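That this is indeed a solution can be verified pointwise with a finite-difference check in t (a small sketch, units $\hbar = m = 1$):

```python
import numpy as np

# Check that Psi(p,t) = exp(-i E t) psi(p) solves the polymer Schrodinger
# equation i dPsi/dt = E(p) Psi at fixed p (hbar = m = 1),
# with E(p) = (1 - cos(a p)) / a^2, via a centered difference in t.
a, p, t, dt = 0.3, 1.1, 0.8, 1e-6
E = (1.0 - np.cos(a * p)) / a**2
psi0 = 1.0 + 0.5j                      # arbitrary initial value psi(p)
Psi = lambda s: np.exp(-1j * E * s) * psi0
residual = abs(1j * (Psi(t + dt) - Psi(t - dt)) / (2 * dt) - E * Psi(t))
print(residual)                        # vanishes up to O(dt^2)
```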
In order to check the convergence of the microscopically corrected Hamiltonians we should analyze the convergence of the energy levels and of the proper covectors. In the limit $n \to \infty$, $E_{C_n} \to E = p^2/2m$, so we can be certain that the eigenvalues for the energy converge (when fixing the value of p). Let us write the proper covector as $\Psi_{C_n} = (\psi_{C_n},\cdot)^{\rm ren}_{C_n} \in \mathcal{H}^{\star}_{C_n}$. Then we can bring microscopic corrections to scale $C_m$ and look for convergence of such corrections,
$$\Psi^{\rm ren}_{C_m} = \lim_{n\to\infty} d^{\star}_{m,n}\,\Psi_{C_n}\,.$$
It is easy to see that given any basis vector $e_{\alpha_i} \in \mathcal{H}_{C_m}$ the limit
$$\Psi^{\rm ren}_{C_m}(e_{\alpha_i,C_m}) = \lim_{n\to\infty} \Psi_{C_n}\big(d_{n,m}(e_{\alpha_i,C_m})\big)$$
exists and is equal to
$$\Psi^{\rm shad}_{C_m}(e_{\alpha_i,C_m}) = [d^{\star}\,\Psi^{\rm Schr}](e_{\alpha_i,C_m}) = \Psi^{\rm Schr}(i\,a_m)\,,$$
where $\Psi^{\rm shad}_{C_m}$ is calculated using the free particle Hamiltonian in the Schrödinger representation. This expression defines the completely renormalized proper covector at the scale $C_m$.
C. Polymer Quantum Cosmology
In this section we shall present a version of quantum
cosmology that we call polymer quantum cosmology. The
idea behind this name is that the main input in the quan-
tization of the corresponding mini-superspace model is
the use of a polymer representation as here understood.
Another important input is the choice of fundamental
variables to be used and the definition of the Hamiltonian
constraint. Different research groups have made differ-
ent choices. We shall take here a simple model that has
received much attention recently, namely an isotropic,
homogeneous FRW cosmology with k = 0 and coupled
to a massless scalar field ϕ. As we shall see, a proper
treatment of the continuum limit of this system requires
new tools under development that are beyond the scope
of this work. We will thus restrict ourselves to the intro-
duction of the system and the problems that need to be
solved.
The system to be quantized corresponds to the phase
space of cosmological spacetimes that are homogeneous
and isotropic and for which the homogeneous spatial
slices have a flat intrinsic geometry (k = 0 condition).
The only matter content is a mass-less scalar field ϕ. In
this case the spacetime geometry is given by metrics of
the form:
ds2 = −dt2 + a2(t) (dx2 + dy2 + dz2)
where the function a(t) carries all the information and
degrees of freedom of the gravity part. In terms of the
coordinates (a, pa, ϕ, pϕ) for the phase space Γ of the the-
ory, all the dynamics is captured in the Hamiltonian con-
straint
$$C := -\frac{3}{8\pi G}\,\frac{p_a^2}{|a|} + 8\pi G\,\frac{p_\varphi^2}{2|a|^3}\,.$$
The first step is to define the constraint on the kine-
matical Hilbert space to find physical states and then a
physical inner product to construct the physical Hilbert
space. First note that one can rewrite the equation as:
$$p_a^2\,a^2 = \frac{(8\pi G)^2}{6}\,p_\varphi^2\,.$$
If, as is normally done, one chooses ϕ to act as an in-
ternal time, the right hand side would be promoted, in
the quantum theory, to a second derivative. The left
hand side is, furthermore, symmetric in a and pa. At
this point we have the freedom in choosing the variable
that will be quantized and the variable that will not be
well defined in the polymer representation. The standard
choice is that pa is not well defined and thus, a and any
geometrical quantity derived from it, is quantized. Fur-
thermore, we have the choice of polarization on the wave
function. In this respect the standard choice is to select
the a-polarization, in which a acts as multiplication and
the approximation of pa, namely sin(λ pa)/λ acts as a
difference operator on wave functions of a. For details of
this particular choice see [5]. Here we shall adopt the op-
posite polarization, that is, we shall have wave functions
Ψ(pa, ϕ).
Just as we did in the previous cases, in order to gain
intuition about the behavior of the polymer quantized
theory, it is convenient to look at the equivalent prob-
lem in the classical theory, namely the classical system we would get by approximating the non-well-defined observable ($p_a$ in our present case) by a well defined object (made of trigonometric functions). Let us for simplicity
choose to replace $p_a \mapsto \sin(\lambda p_a)/\lambda$. With this choice we get an effective classical Hamiltonian constraint that depends on λ:
$$C_\lambda := -\frac{3}{8\pi G}\,\frac{\sin^2(\lambda p_a)}{\lambda^2\,|a|} + 8\pi G\,\frac{p_\varphi^2}{2|a|^3}\,.$$
We can now compute effective equations of motion by
means of the equations: Ḟ := {F, Cλ}, for any observable
F ∈ C∞(Γ), and where we are using the effective (first
order) action:
$$\int d\tau\,\big(p_a\,\dot a + p_\varphi\,\dot\varphi - N\,C_\lambda\big)$$
with the choice N = 1. The first thing to notice is that
the quantity pϕ is a constant of the motion, given that
the variable ϕ is cyclic. The second observation is that
$$\dot\varphi = 8\pi G\,\frac{p_\varphi}{|a|^3}$$
has the same sign as $p_\varphi$ and never vanishes. Thus ϕ can be used as an (internal) time variable. The
next observation is that the equation for $\dot a$, namely the effective Friedmann equation, will have a zero for a non-zero value of a given by
$$a_*^2 = \frac{(8\pi G)^2}{6}\,\lambda^2\,p_\varphi^2\,.$$
This is the value at which there will be bounce if the
trajectory started with a large value of a and was con-
tracting. Note that the ‘size’ of the universe when the
bounce occurs depends on both the constant pϕ (that
dictates the matter density) and the value of the lattice
size λ. Here it is important to stress that for any value
of pϕ (that uniquely fixes the trajectory in the (a, pa)
plane), there will be a bounce. In the original description
in terms of Einstein’s equations (without the approximation that depends on λ), there is no such bounce. If
ȧ < 0 initially, it will remain negative and the universe
collapses, reaching the singularity in a finite proper time.
What happens within the effective description if we re-
fine the lattice and go from λ to $\lambda_n := \lambda/2^n$? The only
thing that changes, for the same classical orbit labelled
by pϕ, is that the bounce occurs at a ‘later time’ and for
a smaller value of a∗ but the qualitative picture remains
the same.
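The bounce can be exhibited by integrating the effective equations numerically. The following is a minimal sketch (not taken from the references), in units where $8\pi G = \hbar = 1$, for which the constraint reads $C_\lambda = -3\sin^2(\lambda p_a)/(\lambda^2 a) + p_\varphi^2/(2a^3)$ on $a > 0$ and the bounce is then predicted at $a_* = \lambda\,p_\varphi/\sqrt{6}$; all parameter values are arbitrary choices:

```python
import numpy as np

# Effective FRW dynamics generated by C_lam (units 8*pi*G = 1, N = 1, a > 0):
#   C_lam = -3 sin^2(lam*p_a)/(lam^2 a) + p_phi^2/(2 a^3),
# with F' = {F, C_lam}; p_phi is a constant of the motion.
lam, p_phi = 0.5, 5.0

def rhs(y):
    a, pa = y
    da  = -3.0 * np.sin(2.0 * lam * pa) / (lam * a)           # {a, C_lam}
    dpa = -(3.0 * np.sin(lam * pa)**2 / (lam**2 * a**2)
            - 1.5 * p_phi**2 / a**4)                          # {p_a, C_lam}
    return np.array([da, dpa])

a0 = 5.0                                                      # large, contracting
pa0 = np.arcsin(np.sqrt(lam**2 * p_phi**2 / (6.0 * a0**2))) / lam  # C_lam = 0
y, dt, a_min = np.array([a0, pa0]), 1e-3, a0
for _ in range(20000):                                        # classical RK4 steps
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    a_min = min(a_min, y[0])

print(a_min, lam * p_phi / np.sqrt(6.0))   # minimum of a vs. predicted a_*
```

The trajectory contracts, bounces at the predicted scale, and re-expands; refining λ shrinks $a_*$ but never removes the bounce, as described above.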
This is the main difference with the systems considered
before. In those cases, one could have classical trajecto-
ries that remained, for a given choice of parameter λ,
within the region where sin(λp)/λ is a good approxima-
tion to p. Of course there were also classical trajectories
that were outside this region but we could then refine the
lattice and find a new value λ′ for which the new clas-
sical trajectory is well approximated. In the case of the
polymer cosmology, this is never the case: Every classical
trajectory will pass from a region where the approxima-
tion is good to a region where it is not; this is precisely
where the ‘quantum corrections’ kick in and the universe
bounces.
Given that in the classical description, the ‘original’
and the ‘corrected’ descriptions are so different we expect
that, upon quantization, the corresponding quantum the-
ories, namely the polymeric and the Wheeler-DeWitt will
be related in a non-trivial way (if at all).
In this case, with the choice of polarization and for a particular factor ordering, we have
$$\frac{\sin(\lambda p_a)}{\lambda}\,\frac{\partial}{\partial p_a}\!\left[\frac{\sin(\lambda p_a)}{\lambda}\,\frac{\partial}{\partial p_a}\,\Psi(p_a,\varphi)\right] - \frac{(8\pi G)^2}{6}\,\frac{\partial^2}{\partial\varphi^2}\,\Psi(p_a,\varphi) = 0$$
as the polymer Wheeler-DeWitt equation.
In order to approach the problem of the continuum
limit of this quantum theory, we have to realize that the
task is now somewhat different from before. This is so
given that the system is now a constrained system with
a constraint operator rather than a regular non-singular
system with an ordinary Hamiltonian evolution. Fortu-
nately for the system under consideration, the fact that
the variable ϕ can be regarded as an internal time allows
us to interpret the quantum constraint as a generalized
Klein-Gordon equation of the form
$$\frac{\partial^2}{\partial\varphi^2}\,\Psi = \Theta_\lambda\cdot\Psi$$
where the operator Θλ is ‘time independent’. This al-
lows us to split the space of solutions into ‘positive and
negative frequency’, introduce a physical inner product
on the positive frequency solutions of this equation and
a set of physical observables in terms of which to de-
scribe the system. That is, one reduces in practice the system to one very similar to the Schrödinger case, by taking the positive square root of the previous equation: $-i\,\frac{\partial}{\partial\varphi}\,\Psi = \sqrt{\Theta_\lambda}\cdot\Psi$. The question we are interested in is
whether the continuum limit of these theories (labelled
by λ) exists and whether it corresponds to the Wheeler-
DeWitt theory. A complete treatment of this problem
lies, unfortunately, outside the scope of this work and
will be reported elsewhere [12].
VII. DISCUSSION
Let us summarize our results. In the first part of the
article we showed that the polymer representation of the
canonical commutation relations can be obtained as the
limiting case of the ordinary Fock-Schrödinger represen-
tation in terms of the algebraic state that defines the
representation. These limiting cases can also be inter-
preted in terms of the naturally defined coherent states
associated to each representation labelled by the param-
eter d, when they become infinitely ‘squeezed’. The two
possible limits of squeezing lead to two different polymer
descriptions that can nevertheless be identified, as we
have also shown, with the two possible polarizations for
an abstract polymer representation. The resulting theory has, however, very different behavior from the standard one: the Hilbert space is non-separable, the representa-
tion is unitarily inequivalent to the Schrödinger one, and
natural operators such as p̂ are no longer well defined.
This particular limiting construction of the polymer the-
ory can shed some light on more complicated systems
such as field theories and gravity.
In the regular treatments of dynamics within the poly-
mer representation, one needs to introduce some extra
structure, such as a lattice on configuration space, to con-
struct a Hamiltonian and implement the dynamics for the
system via a regularization procedure. How does this re-
sulting theory compare to the original continuum theory
one had from the beginning? Can one hope to remove
the regulator in the polymer description? As they stand, there is no direct relation or mapping from the polymer description to a continuum theory (in case one is defined). As
we have shown, one can indeed construct such a relation in a systematic fashion by means of some appropriate no-
tions related to the definition of a scale, closely related
to the lattice one had to introduce in the regularization.
With this important shift in perspective, and an appro-
priate renormalization of the polymer inner product at
each scale one can, subject to some consistency condi-
tions, define a procedure to remove the regulator, and
arrive at a Hamiltonian and a Hilbert space.
As we have seen, for some simple examples such as
a free particle and the harmonic oscillator, one indeed recovers the Schrödinger description. For other sys-
tems, such as quantum cosmological models, the answer
is not as clear, since the structure of the space of classi-
cal solutions is such that the ‘effective description’ intro-
duced by the polymer regularization at different scales
is qualitatively different from the original dynamics. A
proper treatment of this class of systems is underway
and will be reported elsewhere [12].
Perhaps the most important lesson that we have
learned here is that there indeed exists a rich inter-
play between the polymer description and the ordinary
Schrödinger representation. The full structure of this relation still needs to be unravelled. We can only hope that
a full understanding of these issues will shed some light
on the ultimate goal of treating the quantum dynamics
of background independent field systems such as general
relativity.
Acknowledgments
We thank A. Ashtekar, G. Hossain, T. Pawlowski and P.
Singh for discussions. This work was in part supported
by CONACyT U47857-F and 40035-F grants, by NSF
PHY04-56913, by the Eberly Research Funds of Penn
State, by the AMC-FUMEC exchange program and by
funds of the CIC-Universidad Michoacana de San Nicolás
de Hidalgo.
[1] R. Beaume, J. Manuceau, A. Pellet and M. Sirugue,
“Translation Invariant States In Quantum Mechanics,”
Commun. Math. Phys. 38, 29 (1974); W. E. Thirring and
H. Narnhofer, “Covariant QED without indefinite met-
ric,” Rev. Math. Phys. 4, 197 (1992); F. Acerbi, G. Mor-
chio and F. Strocchi, “Infrared singular fields and non-
regular representations of canonical commutation rela-
tion algebras”, J. Math. Phys. 34, 899 (1993); F. Cav-
allaro, G. Morchio and F. Strocchi, “A generalization of
the Stone-von Neumann theorem to non-regular repre-
sentations of the CCR-algebra”, Lett. Math. Phys. 47
307 (1999); H. Halvorson, “Complementarity of Repre-
sentations in quantum mechanics”, Studies in History
and Philosophy of Modern Physics 35 45 (2004).
[2] A. Ashtekar, S. Fairhurst and J.L. Willis, “Quantum
gravity, shadow states, and quantum mechanics”, Class.
Quant. Grav. 20 1031 (2003) [arXiv:gr-qc/0207106].
[3] K. Fredenhagen and F. Reszewski, “Polymer state ap-
proximations of Schrödinger wave functions”, Class.
Quant. Grav. 23 6577 (2006) [arXiv:gr-qc/0606090].
[4] M. Bojowald, “Loop quantum cosmology”, Living Rev.
Rel. 8, 11 (2005) [arXiv:gr-qc/0601085]; A. Ashtekar,
M. Bojowald and J. Lewandowski, “Mathematical struc-
ture of loop quantum cosmology”, Adv. Theor. Math.
Phys. 7 233 (2003) [arXiv:gr-qc/0304074]; A. Ashtekar,
T. Pawlowski and P. Singh, “Quantum nature of the
big bang: Improved dynamics” Phys. Rev. D 74 084003
(2006) [arXiv:gr-qc/0607039]
[5] V. Husain and O. Winkler, “Semiclassical states for
quantum cosmology” Phys. Rev. D 75 024014 (2007)
[arXiv:gr-qc/0607097]; V. Husain V and O. Winkler, “On
singularity resolution in quantum gravity”, Phys. Rev. D
69 084016 (2004). [arXiv:gr-qc/0312094].
[6] A. Corichi, T. Vukasinac and J.A. Zapata. “Hamil-
tonian and physical Hilbert space in polymer quan-
tum mechanics”, Class. Quant. Grav. 24 1495 (2007)
[arXiv:gr-qc/0610072]
[7] A. Corichi and J. Cortez, “Canonical quantization from
an algebraic perspective” (preprint)
[8] A. Corichi, J. Cortez and H. Quevedo, “Schrödinger
and Fock Representations for a Field Theory on
Curved Spacetime”, Annals Phys. (NY) 313 446 (2004)
[arXiv:hep-th/0202070].
[9] E. Manrique, R. Oeckl, A. Weber and J.A. Zapata, “Loop
quantization as a continuum limit” Class. Quant. Grav.
23 3393 (2006) [arXiv:hep-th/0511222]; E. Manrique,
R. Oeckl, A. Weber and J.A. Zapata, “Effective theo-
ries and continuum limit for canonical loop quantization”
(preprint)
[10] D.W. Chiou, “Galileo symmetries in polymer particle
representation”, Class. Quant. Grav. 24, 2603 (2007)
[arXiv:gr-qc/0612155].
[11] W. Rudin, Fourier analysis on groups, (Interscience, New
York, 1962)
[12] A. Ashtekar, A. Corichi, P. Singh, “Contrasting LQC
and WDW using an exactly soluble model” (preprint);
A. Corichi, T. Vukasinac, and J.A. Zapata, “Continuum
limit for quantum constrained system” (preprint).
Polymer Quantum Mechanics and its Continuum Limit
Alejandro Corichi,1,2,3,∗ Tatjana Vukašinac,4,† and José A. Zapata1,‡
1 Instituto de Matemáticas, Unidad Morelia, Universidad Nacional Autónoma de México, UNAM-Campus Morelia, A. Postal 61-3, Morelia, Michoacán 58090, Mexico
2 Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, A. Postal 70-543, México D.F. 04510, Mexico
3 Institute for Gravitational Physics and Geometry, Physics Department, Pennsylvania State University, University Park PA 16802, USA
4 Facultad de Ingeniería Civil, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Michoacán 58000, Mexico
A rather non-standard quantum representation of the canonical commutation relations of quantum mechanical systems, known as the polymer representation, has gained some attention in recent years, due to its possible relation with Planck scale physics. In particular, this approach has been
followed in a symmetric sector of loop quantum gravity known as loop quantum cosmology. Here we
explore different aspects of the relation between the ordinary Schrödinger theory and the polymer
description. The paper has two parts. In the first one, we derive the polymer quantum mechanics
starting from the ordinary Schrödinger theory and show that the polymer description arises as an
appropriate limit. In the second part we consider the continuum limit of this theory, namely, the
reverse process in which one starts from the discrete theory and tries to recover the ordinary Schrödinger quantum mechanics. We consider several examples of interest, including the harmonic
oscillator, the free particle and a simple cosmological model.
PACS numbers: 04.60.Pp, 04.60.Ds, 04.60.Nc, 11.10.Gh
I. INTRODUCTION
The so-called polymer quantum mechanics, a non-
regular and somewhat ‘exotic’ representation of the
canonical commutation relations (CCR) [1], has been
used to explore both mathematical and physical issues in
background independent theories such as quantum grav-
ity [2, 3]. A notable example of this type of quantization,
when applied to minisuperspace models has given way to
what is known as loop quantum cosmology [4, 5]. As in
any toy model situation, one hopes to learn about the
subtle technical and conceptual issues that are present
in full quantum gravity by means of simple, finite di-
mensional examples. This formalism is not an exception
in this regard. Apart from this motivation coming from
physics at the Planck scale, one can independently ask
for the relation between the standard continuous repre-
sentations and their polymer cousins at the level of math-
ematical physics. A deeper understanding of this relation
becomes important on its own.
The polymer quantization consists of several steps.
The first one is to build a representation of the
Heisenberg-Weyl algebra on a kinematical Hilbert space
that is “background independent”, and that is sometimes
referred to as the polymeric Hilbert space Hpoly. The
second and most important part, the implementation of
dynamics, deals with the definition of a Hamiltonian (or
Hamiltonian constraint) on this space. In the examples
∗Electronic address: corichi@matmor.unam.mx
†Electronic address: tatjana@shi.matmor.unam.mx
‡Electronic address: zapata@matmor.unam.mx
studied so far, the first part is fairly well understood,
yielding the kinematical Hilbert space Hpoly that is, how-
ever, non-separable. For the second step, a natural im-
plementation of the dynamics has proved to be a bit more
difficult, given that a direct definition of the Hamiltonian Ĥ of, say, a particle in a potential on the space Hpoly is not possible, since one of the main features of this representation is that the operators q̂ and p̂ cannot both be simultaneously defined (nor their analogues in theories
involving more elaborate variables). Thus, any operator that involves (powers of) the undefined variable has to be regulated by a well-defined operator, which normally
involves introducing some extra structure on the configu-
ration (or momentum) space, namely a lattice. However,
this new structure, which plays the role of a regulator, cannot be removed when working in Hpoly, and one is left
with the ambiguity that is present in any regularization.
The freedom in choosing it can be sometimes associated
with a length scale (the lattice spacing). For ordinary
quantum systems such as a simple harmonic oscillator, which has been studied in detail from the polymer viewpoint, it has been argued that if this length scale is taken
to be ‘sufficiently small’, one can arbitrarily approximate
standard Schrödinger quantum mechanics [2, 3]. In the
case of loop quantum cosmology, the minimum area gap
A0 of the full quantum gravity theory imposes such a
scale, that is then taken to be fundamental [4].
A natural question is to ask what happens when we
change this scale and go to even smaller ‘distances’, that
is, when we refine the lattice on which the dynamics of
the theory is defined. Can we define consistency con-
ditions between these scales? Or even better, can we
take the limit and find thus a continuum limit? As it
has been shown recently in detail, the answer to both
questions is in the affirmative [6]. There, an appropriate
notion of scale was defined in such a way that one could
define refinements of the theory and pose in a precise
fashion the question of the continuum limit of the theory.
These results could also be seen as providing a procedure to remove the regulator when working on the appropri-
ate space. The purpose of this paper is to further explore
different aspects of the relation between the continuum
and the polymer representation. In particular in the first
part we put forward a novel way of deriving the polymer
representation from the ordinary Schrödinger represen-
tation as an appropriate limit. In Sec. II we derive two
versions of the polymer representation as different lim-
its of the Schrödinger theory. In Sec. III we show that
these two versions can be seen as different polarizations
of the ‘abstract’ polymer representation. These results,
to the best of our knowledge, are new and have not been
reported elsewhere. In Sec. IV we pose the problem of
implementing the dynamics on the polymer representa-
tion. In Sec. V we motivate further the question of the
continuum limit (i.e. the proper removal of the regulator)
and recall the basic constructions of [6]. Several exam-
ples are considered in Sec. VI. In particular a simple
harmonic oscillator, the polymer free particle and a sim-
ple quantum cosmology model are considered. The free
particle and the cosmological model represent a general-
ization of the results obtained in [6] where only systems
with a discrete and non-degenerate spectrum were con-
sidered. We end the paper with a discussion in Sec. VII.
In order to make the paper self-contained, we will keep
the level of rigor in the presentation to that found in the
standard theoretical physics literature.
II. QUANTIZATION AND POLYMER
REPRESENTATION
In this section we derive the so called polymer repre-
sentation of quantum mechanics starting from a specific
reformulation of the ordinary Schrödinger representation.
Our starting point will be the simplest of all possible
phase spaces, namely Γ = R2 corresponding to a particle
living on the real line R. Let us choose coordinates (q, p)
thereon. As a first step we shall consider the quantization
of this system that leads to the standard quantum theory
in the Schrödinger description. A convenient route is to
introduce the necessary structure to define the Fock rep-
resentation of such system. From this perspective, the
passage to the polymeric case becomes clearest. Roughly
speaking by a quantization one means a passage from the
classical algebraic bracket, the Poisson bracket,
{q, p} = 1 (1)
to a quantum bracket given by the commutator of the
corresponding operators,
[ q̂, p̂] = i~ 1̂ (2)
These relations, known as the canonical commutation relations (CCR), become the most common cornerstone of the (kinematics of the) quantum theory; they should be
satisfied by the quantum system, when represented on a
Hilbert space H.
There are alternative points of departure for quantum
kinematics. Here we consider the algebra generated by
the exponentiated versions of q̂ and p̂ that are denoted
$$U(\alpha) = e^{i\alpha\hat q/\hbar}\ ; \qquad V(\beta) = e^{i\beta\hat p/\hbar}$$
where α and β have dimensions of momentum and length,
respectively. The CCR now become
$$U(\alpha)\cdot V(\beta) = e^{-i\alpha\beta/\hbar}\,V(\beta)\cdot U(\alpha) \qquad (3)$$
and the rest of the product is
$$U(\alpha_1)\cdot U(\alpha_2) = U(\alpha_1+\alpha_2)\ ; \qquad V(\beta_1)\cdot V(\beta_2) = V(\beta_1+\beta_2)$$
The Weyl algebra W is generated by taking finite linear combinations of the generators $U(\alpha_i)$ and $V(\beta_i)$, where the product (3) is extended by linearity,
$$W = \sum_i \big(A_i\,U(\alpha_i) + B_i\,V(\beta_i)\big)\,.$$
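The relation (3) can be verified pointwise in a concrete representation in which U(α) acts by multiplication by a phase and V(β) by a translation (a minimal sketch with ħ = 1; the sign of the shift is a convention, chosen here so that (3) holds as written):

```python
import numpy as np

# Pointwise check of the Weyl relation (3):
#   U(a) V(b) = exp(-i a b / hbar) V(b) U(a),
# with hbar = 1, U(a) a phase multiplication and V(b) a translation.
hbar = 1.0
U = lambda a, f: (lambda q: np.exp(1j * a * q / hbar) * f(q))
V = lambda b, f: (lambda q: f(q + b))

f = lambda q: np.exp(-q**2)            # any test function
a, b, q0 = 0.7, 1.9, 0.3
lhs = U(a, V(b, f))(q0)
rhs = np.exp(-1j * a * b / hbar) * V(b, U(a, f))(q0)
print(abs(lhs - rhs))                  # identical up to rounding
```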
From this perspective, quantization means finding a unitary representation of the Weyl algebra W on a
Hilbert space H′ (that could be different from the ordi-
nary Schrödinger representation). At first it might look
weird to attempt this approach given that we know how
to quantize such a simple system; what do we need such
a complicated object as W for? It is infinite dimensional,
whereas the set S = {1̂, q̂, p̂}, the starting point of the
ordinary Dirac quantization, is rather simple. It is in
the quantization of field systems that the advantages of
the Weyl approach can be fully appreciated, but it is
also useful for introducing the polymer quantization and
comparing it to the standard quantization. This is the
strategy that we follow.
A question that one can ask is whether there is any
freedom in quantizing the system to obtain the ordinary
Schrödinger representation. At first sight it might seem that there is none, given the Stone-von Neumann uniqueness theorem. Let us review the argument
for the standard construction. Let us ask that the repre-
sentation we want to build up is of the Schrödinger type,
namely, where states are wave functions of configuration
space ψ(q). There are two ingredients to the construction
of the representation, namely the specification of how the
basic operators (q̂, p̂) will act, and the nature of the space
of functions that ψ belongs to, that is normally fixed by
the choice of inner product on H, or measure µ on R.
The standard choice is to select the Hilbert space to be,
H = L2(R, dq)
the space of square-integrable functions with respect to
the Lebesgue measure dq (invariant under constant trans-
lations) on R. The operators are then represented as,
$$\hat q\cdot\psi(q) = (q\,\psi)(q) \qquad {\rm and} \qquad \hat p\cdot\psi(q) = -i\hbar\,\frac{\partial}{\partial q}\,\psi(q)\,. \qquad (4)$$
Is it possible to find other representations? In order to
appreciate this freedom we go to the Weyl algebra and
build the quantum theory thereon. The representation
of the Weyl algebra that can be called of the ‘Fock type’
involves the definition of an extra structure on the phase
space Γ: a complex structure J . That is, a linear map-
ping from Γ to itself such that J2 = −1. In 2 dimen-
sions, all the freedom in the choice of J is contained in
the choice of a parameter d with dimensions of length. It
is also convenient to define: k = p/~ that has dimensions
of 1/L. We have then,
Jd : (q, k) ↦ (−d² k, q/d²)
This object, together with the symplectic structure
Ω((q, p); (q′, p′)) = q p′ − p q′, defines an inner product on
Γ by the formula gd(· ; ·) = Ω(· ; Jd ·), such that:
gd((q, p); (q′, p′)) = (1/d²) q q′ + (d²/~²) p p′
which is dimensionless and positive definite. Note that
with these quantities one can define complex coordinates
(ζ, ζ̄) as usual:
ζ = (1/√2)(q/d + i d p/~) ; ζ̄ = (1/√2)(q/d − i d p/~)
from which one can build the standard Fock representa-
tion. Thus, one can alternatively view the introduction
of the length parameter d as the quantity needed to de-
fine (dimensionless) complex coordinates on the phase
space. But what is the relevance of this object (J or
d)? The definition of complex coordinates is useful for
the construction of the Fock space since from them one
can define, in a natural way, creation and annihilation
operators. But for the Schrödinger representation we are
interested here, it is a bit more subtle. The subtlety is
that within this approach one uses the algebraic prop-
erties of W to construct the Hilbert space via what is
known as the Gel’fand-Naimark-Segal (GNS) construc-
tion. This implies that the measure in the Schrödinger
representation becomes non trivial and thus the momen-
tum operator acquires an extra term in order to render
the operator self-adjoint. The representation of the Weyl
algebra is then, when acting on functions φ(q) [7]:
Û(α) · φ(q) := (e^{iαq/~} φ)(q)
V̂ (β) · φ(q) := e^{(β/d²)(q−β/2)} φ(q − β)
The Hilbert space structure is introduced by the defini-
tion of an algebraic state (a positive linear functional)
ωd : W → C, that must coincide with the expectation
value in the Hilbert space taken on a special state referred
to as the vacuum: ωd(a) = 〈â〉vac, for all a ∈ W .
In our case this specification of J induces such a unique
state ωd that yields,
〈Û(α)〉vac = e^{−d²α²/(4~²)} (5)
〈V̂ (β)〉vac = e^{−β²/(4d²)} (6)
Note that the exponents in the vacuum expectation
values correspond to the metric constructed out of J :
d²α²/~² = gd((0, α); (0, α)) and β²/d² = gd((β, 0); (β, 0)).
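These relations are easy to check in matrix form. The sketch below (our conventions: coordinates (q, k) with k = p/~, and a sample value of d) verifies that Jd squares to −1 and that gd = Ω(· ; Jd ·) is symmetric and positive definite:

```python
import numpy as np

# Sketch (conventions assumed): in coordinates (q, k) with k = p/hbar,
# J_d:(q,k) -> (-d^2 k, q/d^2) and Omega(x; y) = x_q y_k - x_k y_q.
d = 1.7                                   # sample length scale
J = np.array([[0.0, -d**2],
              [1.0 / d**2, 0.0]])         # complex structure J_d
Om = np.array([[0.0, 1.0],
               [-1.0, 0.0]])              # symplectic form
assert np.allclose(J @ J, -np.eye(2))     # J_d^2 = -1
G = Om @ J                                # metric g_d(x, y) = Omega(x; J_d y)
print(G)   # diag(1/d^2, d^2): symmetric and positive definite
assert np.allclose(G, G.T) and np.all(np.linalg.eigvalsh(G) > 0)
```

The resulting metric matrix diag(1/d², d²) is exactly the quadratic form appearing in the exponents of (5) and (6).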
Wave functions belong to the space L2(R, dµd), where
the measure that dictates the inner product in this rep-
resentation is given by,
dµd = (1/(√π d)) e^{−q²/d²} dq
In this representation, the vacuum is given by the iden-
tity function φ0(q) = 1, which, just as any plane wave, is
normalized. Note that for each value of d > 0, the rep-
resentation is well defined and continuous in α and β.
Note also that there is an equivalence between the q-
representation defined by d and the k-representation de-
fined by 1/d.
How can we recover then the standard representation
in which the measure is given by the Lebesgue measure
and the operators are represented as in (4)? It is easy to
see that there is an isometric isomorphism K that maps
the d-representation in Hd to the standard Schrödinger
representation in Hschr by:
ψ(q) = K · φ(q) = (1/(d^{1/2} π^{1/4})) e^{−q²/(2d²)} φ(q) ∈ Hschr = L2(R, dq)
Thus we see that all d-representations are unitarily equiv-
alent. This was to be expected in view of the Stone-Von
Neumann uniqueness result. Note also that the vacuum
now becomes
ψ0(q) = (1/(d^{1/2} π^{1/4})) e^{−q²/(2d²)} ,
so even when there is no information about the param-
eter d in the representation itself, it is contained in the
vacuum state. This procedure for constructing the GNS-
Schrödinger representation for quantum mechanics has
also been generalized to scalar fields on arbitrary curved
space in [8]. Note, however, that so far the treatment has
all been kinematical, without any knowledge of a Hamil-
tonian. For the Simple Harmonic Oscillator of mass m
and frequency ω, there is a natural choice compatible
with the dynamics, given by d = √(~/mω), in which some
calculations simplify (for instance for coherent states),
but in principle one can use any value of d.
Our study will be simplified by focusing on the funda-
mental entities in the Hilbert Space Hd , namely those
states generated by acting with Û(α) on the vacuum
φ0(q) = 1. Let us denote those states by,
φα(q) = Û(α) · φ0(q) = e^{iαq/~}
The inner product between two such states is given by
〈φα, φλ〉d =
dµd e
~ = e−
(λ−α)2 d2
4 ~2 (7)
Note incidentally that, contrary to some common belief,
the ‘plane waves’ in this GNS Hilbert space are indeed
normalizable.
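This normalizability is easy to confirm numerically. The sketch below (our illustration; ~ = 1 and the values of d, α, λ are arbitrary samples) evaluates the overlap of two 'plane waves' under the Gaussian measure dµd and compares with the closed form (7):

```python
import numpy as np

# Sketch (hbar = 1 assumed): overlap of phi_alpha and phi_lambda under dmu_d,
# compared with the closed form (7).
hbar, d = 1.0, 1.3
alpha, lam = 0.7, 2.1
q = np.linspace(-30.0, 30.0, 200001)
dq = q[1] - q[0]
weight = np.exp(-q**2 / d**2) / (np.sqrt(np.pi) * d)   # density of dmu_d
integrand = np.exp(-1j * (alpha - lam) * q / hbar)     # conj(phi_alpha)*phi_lam
num = np.sum(weight * integrand) * dq                  # Riemann sum
exact = np.exp(-(lam - alpha)**2 * d**2 / (4 * hbar**2))
print(abs(num - exact))   # small numerical error
```

In particular 〈φα, φα〉d = 1 for every d, which is the statement that the plane waves are normalized states in this Hilbert space.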
Let us now consider the polymer representation. For
that, it is important to note that there are two possible
limiting cases for the parameter d: i) The limit 1/d → 0
and ii) The case d → 0. In both cases, we have ex-
pressions that become ill defined in the representation or
measure, so one needs to be careful.
A. The 1/d → 0 case.
The first observation is that from the expressions (5) and
(6) for the algebraic state ωd, we see that the limiting
cases are indeed well defined. In our case we get ωA :=
lim_{1/d→0} ωd, such that,
ωA(Û(α)) = δα,0 and ωA(V̂ (β)) = 1 (8)
From this, we can indeed construct the representation
by means of the GNS construction. In order to do that
and to show how this is obtained we shall consider several
expressions. One has to be careful though, since the limit
has to be taken with care. Let us consider the measure
on the representation that behaves as:
dµd = (1/(√π d)) e^{−q²/d²} dq ↦ (1/(√π d)) dq ,
so the measure tends to a homogeneous one, but its
‘normalization constant’ goes to zero, so the limit
becomes somewhat subtle. We shall return to this point
later.
Let us now see what happens to the inner product
between the fundamental entities in the Hilbert Space Hd
given by (7). It is immediate to see that in the 1/d → 0
limit the inner product becomes,
〈φα, φλ〉d ↦ δα,λ (9)
with δα,λ being Kronecker’s delta. We see then that the
plane waves φα(q) become an orthonormal basis for the
new Hilbert space. Therefore, there is a delicate interplay
between the two terms that contribute to the measure in
order to maintain the normalizability of these functions;
we need the measure to become damped (by 1/d) in order
to avoid that the plane waves acquire an infinite norm
(as happens with the standard Lebesgue measure), but
on the other hand the measure, that for any finite value
of d is a Gaussian, becomes more and more spread.
It is important to note that, in this limit, the operators
Û(α) become discontinuous with respect to α, given that
for any two distinct values α1 and α2, their action on a
given basis vector φλ(q) yields mutually orthogonal vectors.
Since the continuity of these operators is one of the
hypotheses of the Stone-Von Neumann theorem, the
uniqueness result does not apply here. The representation
is inequivalent to the standard one.
Let us now analyze the other operator, namely the
action of the operator V̂ (β) on the basis φα(q):
V̂ (β) · φα(q) = e^{−β²/(2d²) − iαβ/~} e^{(β/d² + iα/~) q}
which in the limit 1/d → 0 goes to,
V̂ (β) · φα(q) ↦ e^{−iαβ/~} φα(q)
which is continuous in β. Thus, in the limit, the operator
p̂ = −i~∂q is well defined. Also, note that in this limit
the operator p̂ has φα(q) as its eigenstate, with eigenvalue
α:
p̂ · φα(q) ↦ α φα(q)
To summarize, the resulting theory obtained by taking
the limit 1/d 7→ 0 of the ordinary Schrödinger descrip-
tion, that we shall call the ‘polymer representation of
type A’, has the following features: the operators U(α)
are well defined but not continuous in α, so there is no
generator (no operator associated to q). The basis vec-
tors φα are orthonormal (for α taking values on a contin-
uous set) and are eigenvectors of the operator p̂ that is
well defined. The resulting Hilbert space HA will be the
(A-version of the) polymer representation. Let us now
consider the other case, namely, the limit d → 0.
B. The d → 0 case
Let us now explore the other limiting case of the
Schrödinger/Fock representations labelled by the param-
eter d. Just as in the previous case, the limiting algebraic
state becomes ωB := lim_{d→0} ωd, such that,
ωB(Û(α)) = 1 and ωB(V̂ (β)) = δβ,0 (10)
From this positive linear function, one can indeed con-
struct the representation using the GNS construction.
First let us note that the measure, even when the limit
has to be taken with due care, behaves as:
dµd = (1/(√π d)) e^{−q²/d²} dq ↦ δ(q) dq
That is, as Dirac’s delta distribution. It is immediate to
see that, in the d → 0 limit, the inner product between
the fundamental states φα(q) becomes,
〈φα, φλ〉d ↦ 1 (11)
This in fact means that the vector ξ = φα − φλ belongs
to the kernel of the limiting inner product, so one has to
mod out by these (and all) zero norm states in order to
get the Hilbert space.
Let us now analyze the other operator, namely the
action of the operator V̂ (β) on the vacuum φ0(q) = 1,
which for arbitrary d has the form,
φ̃β := V̂ (β) · φ0(q) = e^{(β/d²)(q−β/2)}
The inner product between two such states is given by
〈φ̃α, φ̃β〉d = e^{−(α−β)²/(4d²)} (12)
In the limit d → 0, 〈φ̃α, φ̃β〉d → δα,β. We can see then
that it is these functions that become the orthonormal,
‘discrete basis’ in the theory. However, the function φ̃β(q)
in this limit becomes ill defined. For example, for β > 0,
it grows unboundedly for q > β/2, is equal to one if
q = β/2 and zero otherwise. In order to overcome these
difficulties and make more transparent the resulting the-
ory, we shall consider the other form of the representation
in which the measure is incorporated into the states (and
the resulting Hilbert space is L2(R, dq)). Thus the new
state
ψβ(q) := K · (V̂ (β) · φ0(q)) = (1/(d^{1/2} π^{1/4})) e^{−(q−β)²/(2d²)}
We can now take the limit, and what we get is
lim_{d→0} ψβ(q) := δ^{1/2}(q, β)
where by δ^{1/2}(q, β) we mean something like ‘the square
root of the Dirac distribution’. What we really mean is
an object that satisfies the following property:
δ^{1/2}(q, β) · δ^{1/2}(q, α) = δ(q, β) δβ,α
That is, if α = β then it is just the ordinary delta, other-
wise it is zero. In a sense these objects can be regarded as
half-densities that cannot be integrated by themselves,
but whose product can. We conclude then that the inner
product is,
〈ψβ , ψα〉 = ∫ dq ψβ(q) ψα(q) = ∫ dq δ(q, α) δβ,α = δβ,α (13)
which is just what we expected. Note that in this repre-
sentation, the vacuum state becomes ψ0(q) := δ^{1/2}(q, 0),
namely, the half-delta with support in the origin. It is
important to note that we are arriving in a natural way to
states as half-densities, whose squares can be integrated
without the need of a nontrivial measure on the configu-
ration space. Diffeomorphism invariance arises then in a
natural but subtle manner.
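The collapse onto a Kronecker delta can be seen concretely: after applying K, the states ψβ are normalized Gaussians of width d centred at β, and their overlap (12) is suppressed for distinct labels as d shrinks. A small sketch (ours; sample label values are arbitrary):

```python
import numpy as np

# Sketch: the overlap (12) of the normalized Gaussians psi_beta (width d,
# centre beta) tends to the Kronecker delta as d -> 0.
def overlap(alpha, beta, d):
    return np.exp(-(alpha - beta)**2 / (4 * d**2))

for d in (1.0, 0.1, 0.01):
    print(d, overlap(0.0, 0.0, d), overlap(0.0, 0.5, d))
# equal labels always give 1; distinct labels are suppressed as d -> 0
```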
Note that as the end result we recover the Kronecker
delta inner product for the new fundamental states:
χβ(q) := δ^{1/2}(q, β).
Thus, in this new B-polymer representation, the Hilbert
space HB is the completion with respect to the inner
product (13) of the states generated by taking (finite)
linear combinations of basis elements of the form χβ :
Ψ(q) = ∑i bi χβi(q) (14)
Let us now introduce an equivalent description of this
Hilbert space. Instead of having the basis elements be
half-deltas as elements of the Hilbert space where the
inner product is given by the ordinary Lebesgue measure
dq, we redefine both the basis and the measure. We
could consider, instead of a half-delta with support β, a
Kronecker delta or characteristic function with support
on β:
χ′β(q) := δq,β
These functions have a similar behavior with respect to
the product as the half-deltas, namely: χ′β(q) · χ′α(q) =
δβ,α. The main difference is that neither χ′β nor their
squares are integrable with respect to the Lebesgue mea-
sure (they have zero norm). In order to fix that problem we
have to change the measure so that we recover the basic
inner product (13) with our new basis. The needed mea-
sure turns out to be the discrete counting measure on R.
Thus any state in the ‘half density basis’ can be written
(using the same expression) in terms of the ‘Kronecker
basis’. For more details and further motivation see the
next section.
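A concrete sketch (ours, not the paper's) of the Kronecker basis with the counting measure: a state is a finite dictionary from labels to coefficients, and the basic inner product reduces to a sum over shared labels only.

```python
# Sketch: states in the 'Kronecker basis' as finite dicts {label: coefficient};
# with the counting measure, the inner product is a sum over shared labels.
def inner(phi, psi):
    common = set(phi) & set(psi)
    return sum(phi[b].conjugate() * psi[b] for b in common)

chi_1 = {1.0: 1.0}            # chi'_1
chi_2 = {2.0: 1.0}            # chi'_2
print(inner(chi_1, chi_1))    # 1.0
print(inner(chi_1, chi_2))    # 0 (no shared labels)
psi = {1.0: 0.5, 2.0: 0.5j}   # a finite linear combination
print(inner(psi, psi))        # 0.5
```

Because only finitely many labels carry nonzero coefficients, every such sum is finite, exactly as required for states of the form (14).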
Note that in this B-polymer representation, both Û
and V̂ have their roles interchanged with that of the
A-polymer representation: while U(α) is discontinuous
and thus q̂ is not defined in the A-representation, we
have that it is V (β) in the B-representation that has this
property. In this case, it is the operator p̂ that can not
be defined. We see then that given a physical system for
which the configuration space has a well defined physi-
cal meaning, within the possible representation in which
wave-functions are functions of the configuration variable
q, the A and B polymer representations are radically dif-
ferent and inequivalent.
Having said this, it is also true that the A and B
representations are equivalent in a different sense, by
means of the duality between q and p representations
and the d↔ 1/d duality: The A-polymer representation
in the “q-representation” is equivalent to the B-polymer
representation in the “p-representation”, and conversely.
When studying a problem, it is important to decide from
the beginning which polymer representation (if any) one
should be using (for instance in the q-polarization). This
has as a consequence an implication on which variable is
naturally “quantized” (even if continuous): p for A and q
for B. There could be, for instance, a physical criterion for
this choice. For example a fundamental symmetry could
suggest that one representation is more natural than an-
other one. This indeed has been recently noted by Chiou
in [10], where the Galileo group is investigated and where
it is shown that the B representation is better behaved.
In the other polarization, namely for wavefunctions
of p, the picture gets reversed: q is discrete for the A-
representation, while p is for the B-case. Let us end this
section by noting that the procedure of obtaining the
polymer quantization by means of an appropriate limit
of Fock-Schrödinger representations might prove useful in
more general settings in field theory or quantum gravity.
III. POLYMER QUANTUM MECHANICS:
KINEMATICS
In previous sections we have derived what we have
called the A and B polymer representations (in the q-
polarization) as limiting cases of ordinary Fock repre-
sentations. In this section, we shall describe, without
any reference to the Schrödinger representation, the ‘ab-
stract’ polymer representation and then make contact
with its two possible realizations, closely related to the A
and B cases studied before. What we will see is that one
of them (the A case) will correspond to the p-polarization
while the other one corresponds to the q-representation,
when a choice is made about the physical significance of
the variables.
We can start by defining abstract kets |µ〉 labelled by
a real number µ. These shall belong to the Hilbert space
Hpoly. From these states, we define generic ‘cylinder
states’ that correspond to a choice of a finite collection of
numbers µi ∈ R with i = 1, 2, . . . , N . Associated to this
choice, there are N vectors |µi〉, so we can take a linear
combination of them
|ψ〉 = ∑_{i=1}^{N} ai |µi〉 (15)
The polymer inner product between the fundamental kets
is given by,
〈ν|µ〉 = δν,µ (16)
That is, the kets are orthogonal to each other (when ν 6=
µ) and they are normalized (〈µ|µ〉 = 1). Immediately,
this implies that, given any two vectors |φ〉 = ∑_{j=1}^{M} bj |νj〉
and |ψ〉 = ∑_{i=1}^{N} ai |µi〉, the inner product between them
is given by,
〈φ|ψ〉 = ∑_{j,i} b̄j ai 〈νj |µi〉 = ∑_k b̄k ak
where the sum is over k that labels the intersection points
between the set of labels {νj} and {µi}. The Hilbert
space Hpoly is the Cauchy completion of finite linear com-
bination of the form (15) with respect to the inner prod-
uct (16). Hpoly is non-separable. There are two basic
operators on this Hilbert space: the ‘label operator’ ε̂:
ε̂ |µ〉 := µ |µ〉
and the displacement operator ŝ (λ),
ŝ (λ) |µ〉 := |µ+ λ〉
The operator ε̂ is symmetric and the operator(s) ŝ(λ)
defines a one-parameter family of unitary operators on
Hpoly, where its adjoint is given by ŝ†(λ) = ŝ(−λ). This
action is, however, discontinuous with respect to λ, given
that |µ〉 and |µ + λ〉 are always orthogonal, no matter
how small λ is. Thus, there is no (Hermitian) operator
that could generate ŝ (λ) by exponentiation.
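A minimal sketch (our illustration) of these two operators on finite linear combinations, represented as label→coefficient dictionaries, makes the discontinuity explicit: the overlap 〈µ|ŝ(λ)|µ〉 is zero for every λ ≠ 0, however small.

```python
# Sketch: the label operator and shift operator on finite dict states.
def eps(state):
    # label operator: |mu> -> mu |mu>
    return {mu: mu * a for mu, a in state.items()}

def shift(lam, state):
    # s(lambda): |mu> -> |mu + lambda>
    return {mu + lam: a for mu, a in state.items()}

def inner(phi, psi):
    return sum(phi[b].conjugate() * psi[b] for b in set(phi) & set(psi))

ket = {0.0: 1.0}
for lam in (1.0, 0.1, 1e-6):
    print(lam, inner(ket, shift(lam, ket)))   # 0 for every lam != 0
```

The overlap jumps from 1 at λ = 0 to 0 for any λ ≠ 0, which is precisely the failure of strong continuity that blocks the existence of a generator.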
So far we have given the abstract characterization of
the Hilbert space, but one would like to make contact
with concrete realizations as wave functions, or by iden-
tifying the abstract operators ε̂ and ŝ with physical op-
erators.
Suppose we have a system with a configuration space
with coordinate given by q, and p denotes its canonical
conjugate momenta. Suppose also that for physical rea-
sons we decide that the configuration coordinate q will
have some “discrete character” (for instance, if it is to
be identified with position, one could say that there is
an underlying discreteness in position at a small scale).
How can we implement such requirements by means of
the polymer representation? There are two possibilities,
depending on the choice of ‘polarizations’ for the wave-
functions, namely whether they will be functions of con-
figuration q or momenta p. Let us divide the discus-
sion into two parts.
A. Momentum polarization
In this polarization, states will be denoted by,
ψ(p) = 〈p|ψ〉
where
ψµ(p) = 〈p|µ〉 = e^{iµp/~}
How are then the operators ε̂ and ŝ represented? Note
that if we associate the multiplicative operator
V̂ (λ) · ψµ(p) = e^{iλp/~} ψµ(p) = e^{i(µ+λ)p/~} = ψ(µ+λ)(p)
we see then that the operator V̂ (λ) corresponds precisely
to the shift operator ŝ (λ). Thus we can also conclude
that the operator p̂ does not exist. It is now easy to
identify the operator q̂ with:
q̂ · ψµ(p) = −i~ ∂/∂p ψµ(p) = µ e^{iµp/~} = µ ψµ(p)
namely, with the abstract operator ε̂. The reason we
say that q̂ is discrete is because this operator has as its
eigenvalue the label µ of the elementary state ψµ(p), and
this label, even when it can take values in a continuum
of possible values, is to be understood as a discrete set,
given that the states are orthonormal for all values of
µ. Given that states are now functions of p, the inner
product (16) should be defined by a measure µ on the
space on which the wave-functions are defined. In order
to know what these two objects are, namely, the quan-
tum “configuration” space C and the measure thereon1,
we have to make use of the tools available to us from
the theory of C∗-algebras. If we consider the operators
V̂ (λ), together with their natural product and ∗-relation
given by V̂ ∗(λ) = V̂ (−λ), they have the structure of
an Abelian C∗-algebra (with unit) A. We know from
the representation theory of such objects that A is iso-
morphic to the space of continuous functions C0(∆) on a
compact space ∆, the spectrum of A. Any representation
of A on a Hilbert space as multiplication operator will be
on spaces of the form L2(∆, dµ). That is, our quantum
configuration space is the spectrum of the algebra, which
in our case corresponds to the Bohr compactification Rb
of the real line [11]. This space is a compact group and
there is a natural probability measure defined on it, the
Haar measure µH. Thus, our Hilbert space Hpoly will be
isomorphic to the space,
Hpoly,p = L2(Rb, dµH) (17)
In terms of ‘quasi periodic functions’ generated by ψµ(p),
the inner product takes the form
〈ψµ|ψλ〉 := ∫_{Rb} dµH ψ̄µ(p) ψλ(p)
= lim_{L→∞} (1/2L) ∫_{−L}^{L} dp ψ̄µ(p) ψλ(p) = δµ,λ (18)
Note that in the p-polarization, this characterization cor-
responds to the ‘A-version’ of the polymer representation
of Sec. II (where p and q are interchanged).
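The Haar-measure inner product (18) can be probed numerically as a large-L average (our sketch; ~ = 1 and the sample labels are assumptions): equal labels give 1, while distinct labels average out as O(1/L).

```python
import numpy as np

# Sketch of (18) as a large-L average (hbar = 1 assumed):
# <psi_mu|psi_lam> = lim_{L->inf} (1/2L) * int_{-L}^{L} dp e^{i(lam-mu)p/hbar}
hbar = 1.0

def avg_inner(mu, lam, L, n=200001):
    p = np.linspace(-L, L, n)
    f = np.exp(1j * (lam - mu) * p / hbar)
    return np.sum(f) * (p[1] - p[0]) / (2 * L)

print(abs(avg_inner(1.0, 1.0, 100.0)))   # -> 1 (equal labels)
print(abs(avg_inner(1.0, 2.0, 100.0)))   # small, vanishing as O(1/L)
```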
B. q-polarization
Let us now consider the other polarization in which wave
functions will depend on the configuration coordinate q:
ψ(q) = 〈q|ψ〉
The basic functions, that now will be called ψ̃µ(q), should
be, in a sense, the dual of the functions ψµ(p) of the
previous subsection. We can try to define them via a
‘Fourier transform’:
ψ̃µ(q) := 〈q|µ〉 = 〈q| ( ∫_{Rb} dµH |p〉〈p| ) |µ〉
which is given by
ψ̃µ(q) := ∫_{Rb} dµH 〈q|p〉 ψµ(p) = ∫_{Rb} dµH e^{−ipq/~} e^{iµp/~} = δq,µ (19)
1 here we use the standard terminology of ‘configuration space’ to
denote the domain of the wave function even when, in this case,
it corresponds to the physical momenta p.
That is, the basic objects in this representation are Kro-
necker deltas. This is precisely what we had found in
Sec. II for the B-type representation. How are now the
basic operators represented and what is the form of the
inner product? Regarding the operators, we expect that
they are represented in the opposite manner as in the
previous p-polarization case, but that they preserve the
same features: p̂ does not exist (the derivative of the Kro-
necker delta is ill defined), but its exponentiated version
V̂ (λ) does:
V̂ (λ) · ψ(q) = ψ(q + λ)
and the operator q̂ that now acts as multiplication has
as its eigenstates, the functions ψ̃ν(q) = δν,q:
q̂ · ψ̃µ(q) := µ ψ̃µ(q)
What is now the nature of the quantum configuration
space Q? And what is the measure dµq thereon that
defines the inner product? We should have:
〈ψ̃µ(q), ψ̃λ(q)〉 = δµ,λ
The answer comes from one of the characterizations of
the Bohr compactification: we know that it is, in a precise
sense, dual to the real line but when equipped with the
discrete topology Rd. Furthermore, the measure on Rd
will be the ‘counting measure’. In this way we recover the
same properties we had for the previous characterization
of the polymer Hilbert space. We can thus write:
Hpoly,x := L2(Rd, dµc) (20)
This completes a precise construction of the B-type poly-
mer representation sketched in the previous section. Note
that if we had chosen the opposite physical situation,
namely that q, the configuration observable, be the quan-
tity that does not have a corresponding operator, then
we would have had the opposite realization: In the q-
polarization we would have had the type-A polymer rep-
resentation and the type-B for the p-polarization. As
we shall see both scenarios have been considered in the
literature.
Up to now we have only focused our discussion on the
kinematical aspects of the quantization process. Let us
now consider in the following section the issue of dynam-
ics and recall the approach that had been adopted in the
literature, before the issue of the removal of the regulator
was reexamined in [6].
IV. POLYMER QUANTUM MECHANICS:
DYNAMICS
As we have seen the construction of the polymer
representation is rather natural and leads to a quan-
tum theory with different properties than the usual
Schrödinger counterpart such as its non-separability, the
non-existence of certain operators and the existence of
normalized eigen-vectors that yield a precise value for
one of the phase space coordinates. This has been done
without any regard for a Hamiltonian that endows the
system with a dynamics, energy and so on.
First let us consider the simplest case of a particle of
mass m in a potential V (q), in which the Hamiltonian H
takes the form,
H = (1/2m) p² + V (q)
Suppose furthermore that the potential is given by a non-
periodic function, such as a polynomial or a rational func-
tion. We can immediately see that a direct implementa-
tion of the Hamiltonian is out of our reach, for the simple
reason that, as we have seen, in the polymer representa-
tion we can either represent q or p, but not both! What
has been done so far in the literature? The simplest
thing possible: approximate the non-existing term by a
well defined function that can be quantized and hope for
the best. As we shall see in the next sections, there is indeed
more that one can do.
At this point there is also an important decision to be
made: which variable q or p should be regarded as “dis-
crete”? Once this choice is made, then it implies that
the other variable will not exist: if q is regarded as dis-
crete, then p will not exist and we need to approximate
the kinetic term p2/2m by something else; if p is to be
the discrete quantity, then q will not be defined and then
we need to approximate the potential V (q). What hap-
pens with a periodic potential? In this case one would
be modelling, for instance, a particle on a regular lattice
such as a phonon living on a crystal, and then the natural
choice is to have q not well defined. Furthermore, the po-
tential will be well defined and there is no approximation
needed.
In the literature both scenarios have been considered.
For instance, when considering a quantum mechanical
system in [2], the position was chosen to be discrete,
so p does not exist, and one is then in the A type for
the momentum polarization (or the type B for the q-
polarization). With this choice, it is the kinetic term
that has to be approximated, so once one has done
this, it is immediate to consider any potential, which
will thus be well defined. On the other hand, when con-
sidering loop quantum cosmology (LQC), the standard
choice is that the configuration variable is not defined
[4]. This choice is made given that LQC is regarded as
the symmetric sector of full loop quantum gravity where
the connection (that is regarded as the configuration vari-
able) can not be promoted to an operator and one can
only define its exponentiated version, namely, the holon-
omy. In that case, the canonically conjugate variable,
closely related to the volume, becomes ‘discrete’, just as
in the full theory. This case is, however, different from the
particle in a potential example. First we could mention
that the functional form of the Hamiltonian constraint
that implements dynamics has a different structure, but
the more important difference lies in that the system is
constrained.
Let us return to the case of the particle in a po-
tential and for definiteness, let us start with the aux-
iliary kinematical framework in which: q is discrete, p
can not be promoted and thus we have to approximate
the kinetic term p̂2/2m. How is this done? The stan-
dard prescription is to define, on the configuration space
C, a regular ‘graph’ γµ0 . This consists of a numerable
set of points, equidistant, and characterized by a pa-
rameter µ0 that is the (constant) separation between
points. The simplest example would be to consider the
set γµ0 = {q ∈ R | q = nµ0 , ∀ n ∈ Z}.
This means that the basic kets that will be considered
|µn〉 will correspond precisely to labels µn belonging to
the graph γµ0 , that is, µn = nµ0. Thus, we shall only
consider states of the form,
|ψ〉 =
bn |µn〉 . (21)
This ‘small’ Hilbert space Hγµ0 , the graph Hilbert space,
is a subspace of the ‘large’ polymer Hilbert space Hpoly
but it is separable. The condition for a state of the form
(21) to belong to the Hilbert space Hγµ0 is that the co-
efficients bn satisfy ∑n |bn|² < ∞.
Let us now consider the kinetic term p̂2/2m. We have
to approximate it by means of trigonometric functions,
that can be built out of the functions of the form eiλ p/~.
As we have seen in previous sections, these functions can
indeed be promoted to operators and act as translation
operators on the kets |µ〉. If we want to remain in the
graph γ, and not create ‘new points’, then one is con-
strained to considering operators that displace the kets
by just the right amount. That is, we want the basic
shift operator V̂ (λ) to be such that it maps the ket with
label |µn〉 to the next ket, namely |µn+1〉. This can in-
deed be achieved by fixing, once and for all, the value of the
allowed parameter λ to be λ = µ0. We have then,
V̂ (µ0) · |µn〉 = |µn + µ0〉 = |µn+1〉
which is what we wanted. This basic ‘shift operator’ will
be the building block for approximating any (polynomial)
function of p. In order to do that we notice that the
function p can be approximated by,
p ≈ (~/µ0) sin(µ0 p/~) = (~/2iµ0) (e^{iµ0 p/~} − e^{−iµ0 p/~})
where the approximation is good for p ≪ ~/µ0. Thus,
one can define a regulated operator p̂µ0 that depends on
the ‘scale’ µ0 as:
p̂µ0 · |µn〉 := (~/2iµ0) [V̂ (µ0) − V̂ (−µ0)] · |µn〉 =
(~/2iµ0) (|µn+1〉 − |µn−1〉) (22)
In order to regulate the operator p̂2, there are (at least)
two possibilities, namely to compose the operator p̂µ0
with itself or to define a new approximation. The oper-
ator p̂µ0 · p̂µ0 has the feature that shifts the states two
steps in the graph to both sides. There is however an-
other operator that only involves shifting once:
p̂²µ0 · |νn〉 := (~²/µ0²) [2 − V̂ (µ0) − V̂ (−µ0)] · |νn〉 =
(~²/µ0²) (2|νn〉 − |νn+1〉 − |νn−1〉) (23)
which corresponds to the approximation p² ≈ (2~²/µ0²) (1 −
cos(µ0 p/~)), valid also in the regime p ≪ ~/µ0. With
these considerations, one can define the operator Ĥµ0 ,
the Hamiltonian at scale µ0, that in practice ‘lives’ on
the space Hγµ0 as,
Ĥµ0 := (1/2m) p̂²µ0 + V̂ (q) , (24)
that is a well defined, symmetric operator on Hγµ0 . No-
tice that the operator is also defined on Hpoly, but there
its physical interpretation is problematic. For example,
it turns out that the expectation value of the kinetic term
calculated on most states (states which are not tailored
to the exact value of the parameter µ0) is zero. Even
if one takes a state that gives “reasonable” expectation
values of the µ0-kinetic term and uses it to calculate the
expectation value of the kinetic term corresponding to
a slight perturbation of the parameter µ0 one would get
zero. This problem, and others that arise when working
on Hpoly, forces one to assign a physical interpretation
to the Hamiltonian Ĥµ0 only when its action is restricted
to the subspace Hγµ0 .
Let us now explore the form that the Hamiltonian takes
in the two possible polarizations. In the q-polarization,
the basis, labelled by n is given by the functions χn(q) =
δq,µn . That is, the wave functions will only have sup-
port on the set γµ0 . Alternatively, one can think of a
state as completely characterized by the ‘Fourier coeffi-
cients’ an: ψ(q) ↔ an, which is the value that the wave
function ψ(q) takes at the point q = µn = nµ0. Thus,
the Hamiltonian takes the form of a difference equation
when acting on a general state ψ(q). Solving the time
independent Schrödinger equation Ĥ · ψ = E ψ amounts
to solving the difference equation for the coefficients an.
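As an illustration (ours; units ~ = m = ω = 1 and the sample scale µ0 are assumptions), the difference-equation eigenproblem can be solved directly for the harmonic oscillator V(q) = ½mω²q²: the kinetic term (23) gives a tridiagonal matrix on the graph, and the low eigenvalues approach the Schrödinger values n + ½ for small µ0.

```python
import numpy as np

# Sketch: difference-equation form of the Hamiltonian (24) for the harmonic
# oscillator on the regular graph q_n = n*mu0 (hbar = m = w = 1 assumed).
hbar = m = w = 1.0
mu0 = 0.1
N = 100                                    # graph covers |q| <= N*mu0 = 10
q = np.arange(-N, N + 1) * mu0
t = hbar**2 / (2 * m * mu0**2)             # kinetic prefactor from (23)
H = np.diag(2 * t + 0.5 * m * w**2 * q**2)                # diagonal: 2t + V(q_n)
H -= t * (np.eye(q.size, k=1) + np.eye(q.size, k=-1))     # nearest-neighbor hops
E = np.linalg.eigvalsh(H)
print(E[:3])   # close to the Schrödinger values 0.5, 1.5, 2.5
```

The residual deviation from n + ½ is O(µ0²), consistent with the error of the approximation p² ≈ (2~²/µ0²)(1 − cos(µ0p/~)).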
The momentum polarization has a different structure.
In this case, the operator p̂2µ0 acts as a multiplication
operator,
p̂²µ0 · ψ(p) = (2~²/µ0²) [1 − cos(µ0 p/~)] ψ(p) (25)
The operator corresponding to q will be represented as a
derivative operator
q̂ · ψ(p) := i~ ∂p ψ(p).
For a generic potential V (q), it has to be defined by
means of spectral theory defined now on a circle. Why
on a circle? For the simple reason that by restricting
ourselves to a regular graph γµ0 , the functions of p that
preserve it (when acting as shift operators) are of the
form e^{i m µ0 p/~} for m integer. That is, what we have
are Fourier modes, labelled by m, of period 2π ~/µ0 in p.
Can we pretend then that the phase space variable p is
now compactified? The answer is in the affirmative. The
inner product on periodic functions ψµ0(p) of p coming
from the full Hilbert space Hpoly and given by
〈φ(p)|ψ(p)〉poly = lim_{L→∞} (1/2L) ∫_{−L}^{L} dp φ̄(p) ψ(p)
is precisely equivalent to the inner product on the circle
given by the uniform measure
〈φ(p)|ψ(p)〉µ0 = (µ0/2π~) ∫_{−π~/µ0}^{π~/µ0} dp φ̄(p) ψ(p)
with p ∈ (−π~/µ0, π~/µ0). As long as one restricts at-
tention to the graph γµ0 , one can work in this separable
Hilbert space Hγµ0 of square integrable functions on S¹.
Immediately, one can see the limitations of this descrip-
tion. If the mechanical system to be quantized is such
that its orbits have values of the momenta p that are
not small compared with π~/µ0, then the approximation
taken will be very poor, and we expect neither the
effective classical description nor its quantization to be
close to the standard one. If, on the other hand, one is al-
ways within the region in which the approximation can be
regarded as reliable, then both classical and quantum de-
scriptions should approximate the standard description.
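The size of this error can be made quantitative with a short check (our sketch; ~ = 1 and the sample µ0 are assumptions): the effective kinetic term (2~²/µ0²)(1 − cos(µ0p/~)) of (25) tracks p² closely for p ≪ π~/µ0 and fails badly near the boundary of the momentum interval.

```python
import numpy as np

# Sketch: relative error of the polymer kinetic term (25) against p^2
# (hbar = 1 assumed); pi*hbar/mu0 ~ 31.4 for mu0 = 0.1.
hbar, mu0 = 1.0, 0.1
p = np.array([0.5, 2.0, 10.0, 25.0])
poly = 2 * hbar**2 / mu0**2 * (1 - np.cos(mu0 * p / hbar))
rel_err = np.abs(poly - p**2) / p**2
print(rel_err)   # grows from ~2e-4 toward order one near the boundary
```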
What ‘close to the standard description’ exactly means
needs, of course, some further clarification. In
particular one is assuming the existence of the usual
Schrödinger representation in which the system has a be-
havior that is also consistent with observations. If this is
the case, the natural question is: How can we approxi-
mate such description from the polymer picture? Is there
a fine enough graph γµ0 that will approximate the system
in such a way that all observations are indistinguishable?
Or even better, can we define a procedure, that involves
a refinement of the graph γµ0 such that one recovers the
standard picture?
It could also happen that a continuum limit can be de-
fined but does not coincide with the ‘expected one’. There
might also be physical systems for which there is no
standard description, or for which it just does not make
sense. Can the polymer representation, if it exists, pro-
vide in those cases the correct physical description of the
system under consideration? For instance, if there exists a
physical limitation to the minimum scale set by µ0, as
could be the case for a quantum theory of gravity, then
the polymer description would provide a true physical
bound on the value of certain quantities, such as p in
our example. This could be the case for loop quantum
cosmology, where there is a minimum value for physical
volume (coming from the full theory), and phase space
points near the ‘singularity’ lie at the region where the
approximation induced by the scale µ0 departs from the
standard classical description. If in that case the poly-
mer quantum system is regarded as more fundamental
than the classical system (or its standard Wheeler-DeWitt
quantization), then one would interpret these discrep-
ancies in the behavior as a signal of the breakdown
of the classical description (or of its ‘naive’ quantization).
In the next section we present a method to remove
the regulator µ0, which was introduced as an intermedi-
ate step in the construction of the dynamics. More precisely, we
shall consider the construction of a continuum limit of
the polymer description by means of a renormalization
procedure.
V. THE CONTINUUM LIMIT
This section has two parts. In the first one we motivate
the need for a precise notion of the continuum limit of
the polymeric representation, explaining why the most
direct and naive approach does not work. In the second
part, we shall present the main ideas and results of
the paper [6], where the Hamiltonian and the physical
Hilbert space in polymer quantum mechanics are con-
structed as a continuum limit of effective theories, follow-
ing Wilson’s renormalization group ideas. The resulting
physical Hilbert space turns out to be unitarily isomor-
phic to the ordinary Hs = L2(R, dq) of the Schrödinger
theory.
Before describing the results of [6] we should discuss
the precise meaning of reaching a theory in the contin-
uum. Let us for concreteness consider the B-type repre-
sentation in the q-polarization. That is, states are func-
tions of q and the orthonormal basis χµ(q) is given by
characteristic functions with support on q = µ. Let us
now suppose we have a Schrödinger state Ψ(q) ∈ Hs =
L2(R, dq). What is the relation between Ψ(q) and a state
in Hpoly,x? We are also interested in the opposite ques-
tion, that is, we would like to know if there is a preferred
state in Hs that is approximated by an arbitrary state
ψ(q) in Hpoly,x. The first obvious observation is that a
Schrödinger state Ψ(q) does not belong to Hpoly,x, since it
would have an infinite norm. To see this, note that even
though the would-be state can be formally expanded in the
χµ basis as,
\[
\Psi(q) = \sum_{\mu}\Psi(\mu)\,\chi_\mu(q) ,
\]
where the sum is over the parameter µ ∈ R. Its associ-
ated norm in Hpoly,x would be:
\[
|\Psi(q)|^2_{\rm poly} = \sum_{\mu}|\Psi(\mu)|^2 \longrightarrow \infty ,
\]
which blows up. Note that in order to define a mapping
P : Hs → Hpoly,x, there is a huge ambiguity since the
values of the function Ψ(q) are needed in order to expand
the polymer wave function. Thus we can only define a
mapping in a dense subset D of Hs where the values of the
functions are well defined (recall that in Hs the value of
functions at a given point has no meaning since states are
equivalence classes of functions). We could, for instance,
ask that the mapping be defined for representatives of the
equivalence classes in Hs that are piecewise continuous.
From now on, when we refer to an element of the space
Hs, we shall be referring to one of those representatives.
Notice then that an element of Hs does define an element
of Cyl∗γ , the dual to the space Cylγ , that is, the space
of cylinder functions with support on the (finite) lattice
γ = {µ1, µ2, . . . , µN}, in the following way:
\[
\Psi(q) : {\rm Cyl}_\gamma \longrightarrow \mathbb{C}
\]
such that
\[
\Psi(q)[\psi(q)] = (\Psi|\psi\rangle := \sum_{\mu}\overline{\Psi(\mu)}\,\Big\langle \chi_\mu \Big| \sum_i \psi_i\,\chi_{\mu_i}\Big\rangle_{{\rm poly}_\gamma} = \sum_i \overline{\Psi(\mu_i)}\,\psi_i \;<\; \infty \qquad (26)
\]
Note that this mapping can be seen as consisting of two
parts: first, a projection Pγ : Cyl∗ → Cylγ such that
\[
P_\gamma(\Psi) = \Psi_\gamma(q) := \sum_i \Psi(\mu_i)\,\chi_{\mu_i}(q) \;\in\; {\rm Cyl}_\gamma .
\]
The state Ψγ is sometimes referred to as the ‘shadow of Ψ(q) on
the lattice γ’. The second step is then to take the inner
product between the shadow Ψγ(q) and the state ψ(q)
with respect to the polymer inner product 〈Ψγ |ψ〉polyγ .
Now this inner product is well defined. Notice that for
any given lattice γ the corresponding projector Pγ can be
intuitively interpreted as some kind of ‘coarse graining
map’ from the continuum to the lattice γ. In terms of
functions of q, the projection replaces a continuous
function defined on R with a function over the lattice
γ ⊂ R, which is a discrete set, simply by restricting Ψ to
γ. The finer the lattice, the more points of the curve
we retain. As we shall see in the second part of this
section, there is indeed a precise notion of coarse graining
that implements this intuitive idea in a concrete fashion.
In particular, we shall need to replace the lattice γ with
a decomposition of the real line into intervals (having the
lattice points as end points).
Let us now consider a system in the polymer represen-
tation in which a particular lattice γ0 was chosen, say
with points of the form {qk ∈ R |qk = ka0 , ∀ k ∈ Z},
namely a uniform lattice with spacing equal to a0. In this
case, any Schrödinger wave function (of the type that we
consider) will have a unique shadow on the lattice γ0. If
we refine the lattice γ0 ↦ γn by dividing each interval into
2^n new intervals of length a_n = a_0/2^n, we have new
shadows with more and more points on the curve. Intu-
itively, by refining the graph infinitely we would recover
the original function Ψ(q). Even though at each finite step
the corresponding shadow has a finite norm in the poly-
mer Hilbert space, this norm grows unboundedly and the
limit cannot be taken, precisely because we cannot em-
bed Hs into Hpoly. Suppose now that we are interested
in the reverse process, namely starting from a polymer
theory on a lattice and asking for the ‘continuum wave
function’ that is best approximated by a wave function
over a graph. Suppose furthermore that we want to con-
sider the limit of the graph becoming finer. In order
to give precise answers to these (and other) questions we
need to introduce some new technology that will allow us
to overcome these apparent difficulties. In the remainder
of this section we shall recall these constructions for the
benefit of the reader. Details can be found in [6] (which
is an application of the general formalism discussed in
[9]).
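The obstruction just described can be made concrete in a small numerical sketch of our own (the Gaussian test state and the lattice range are arbitrary choices): the polymer norm of the shadows, the plain sum of |Ψ|² over lattice points, grows without bound as the lattice is refined.

```python
import numpy as np

# Shadow of a Schroedinger state (a unit Gaussian, our arbitrary choice)
# on uniform lattices gamma_n with spacing a_n = a_0 / 2^n. The polymer
# norm of the shadow is the plain sum of |Psi|^2 over lattice points,
# which grows like 1/a_n and therefore diverges under refinement.
def shadow_norm_sq(a0=1.0, n=0):
    a_n = a0 / 2**n
    mu = np.arange(-20.0, 20.0, a_n)      # lattice points covering the support
    psi = np.exp(-mu**2 / 2)              # representative of the state
    return np.sum(np.abs(psi)**2)

norms = [shadow_norm_sq(n=n) for n in range(6)]
print(norms)  # roughly doubles with each refinement step
```

The doubling at each step is just the Riemann-sum estimate Σ|Ψ(µ_i)|² ≈ (1/a_n)∫|Ψ|², which is why no limit state exists in Hpoly.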
The starting point in this construction is the concept
of a scale C, which allows us to define the effective the-
ories and the concept of continuum limit. In our case a
scale is a decomposition of the real line into the union of
closed-open intervals that cover the whole line and do
not intersect. Intuitively, we are shifting the emphasis
from the lattice points to the intervals defined by the
same points with the objective of approximating con-
tinuous functions defined on R with functions that are
constant on the intervals defined by the lattice. To be
precise, we define an embedding, for each scale Cn, from
Hpoly to Hs by means of a step function:
\[
\sum_m \Psi(m a_n)\,\chi_{m a_n}(q) \;\longrightarrow\; \sum_m \Psi(m a_n)\,\chi_{\alpha_m}(q) \;\in\; H_s ,
\]
with χαm(q) a characteristic function on the interval
αm = [man, (m + 1)an). Thus, the shadows (living on
the lattice) were just an intermediate step in the con-
struction of the approximating function; this function is
piece-wise constant and can be written as a linear com-
bination of step functions with the coefficients provided
by the shadows.
The challenge now is to define, in an appropriate sense,
how one can approximate all the aspects of the theory
by means of these piecewise constant functions. Then the
strategy is that, for any given scale, one can define an
effective theory by approximating the kinetic operator
by a combination of the translation operators that shift
between the vertices of the given decomposition, in other
words by a periodic function in p. As a result one has a
set of effective theories at given scales which are mutually
related by coarse graining maps. This framework was
developed in [6]. For the convenience of the reader we
briefly recall part of that framework.
Let us denote the kinematic polymer Hilbert space at
the scale Cn as HCn , and its basis elements as eαi,Cn ,
where αi = [ian, (i + 1)an) ∈ Cn. By construction this
basis is orthonormal. The basis elements in the dual
Hilbert space H∗Cn are denoted by ωαi,Cn ; they are also
orthonormal. The states ωαi,Cn have a simple action on
Cyl, ωαi,Cn(δx0,q) = χαi,Cn(x0). That is, if x0 is in the
interval αi of Cn the result is one and it is zero if it is
not there.
Given any m ≤ n, we define d∗m,n : H∗Cn → H∗Cm
as the ‘coarse graining’ map between the dual Hilbert
spaces, which sends part of the elements of the dual
basis to zero while keeping the information of the rest:
\[
d^{*}_{m,n}(\omega_{\alpha_i,C_n}) = \omega_{\beta_j,C_m} \quad\text{if}\;\; i = j\,2^{\,n-m} ,
\]
and d∗m,n(ωαi,Cn) = 0 in the opposite case.
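On coefficient arrays the map d⋆m,n is simple decimation; a minimal finite-dimensional sketch of our own (the array of dual-basis coefficients is a stand-in for a covector on a bounded region):

```python
import numpy as np

# Finite-dimensional sketch of the coarse-graining map d*_{m,n} acting on
# dual-basis coefficients: going from scale C_n to the coarser scale C_m
# keeps the coefficients with index i = j * 2^(n-m) and discards the rest.
def d_star(coeffs_n, n, m):
    assert m <= n
    return coeffs_n[::2**(n - m)].copy()

fine = np.arange(8.0)        # covector coefficients at scale n = 3
print(d_star(fine, 3, 1))    # scale m = 1 keeps indices 0 and 4
```

Note that d⋆m,n is a projection, not an average: the discarded coefficients carry the microscopic information that the renormalization procedure later reinstates through the limit n → ∞.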
At every scale the corresponding effective theory is
given by the Hamiltonian Hn. These Hamiltonians will
be treated as quadratic forms, hn : HCn → R, given by
\[
h_n(\psi) = \lambda^2_{C_n}\,(\psi, H_n\psi) , \qquad (27)
\]
where λ²Cn is a normalization factor. We will see later
that this rescaling of the inner product is necessary in
order to guarantee the convergence of the renormalized
theory. The completely renormalized theory at this scale
is obtained as
\[
h^{\rm ren}_m := \lim_{n\to\infty} d^{\star}_{m,n}\, h_n , \qquad (28)
\]
and the renormalized Hamiltonians are compatible with
each other, in the sense that
\[
d^{\star}_{m,n}\, h^{\rm ren}_n = h^{\rm ren}_m .
\]
In order to analyze the conditions for the convergence
in (28), let us express the Hamiltonian in terms of its
eigen-covectors and eigenvalues. We will work with effec-
tive Hamiltonians that have a purely discrete spectrum
(labelled by ν), Hn · Ψν,Cn = Eν,Cn Ψν,Cn . We shall also
introduce, as an intermediate step, a cut-off in the energy
levels. The origin of this cut-off is in the approximation
of the Hamiltonian of our system at a given scale with
a Hamiltonian of a periodic system in a regime of small
energies, as we explained earlier. Thus, we can write
\[
h^{\nu_{\rm cut-off}}_m = \sum_{\nu=0}^{\nu_{\rm cut-off}} E_{\nu,C_m}\,\Psi_{\nu,C_m}\otimes\Psi_{\nu,C_m} , \qquad (29)
\]
where the eigen-covectors Ψν,Cm are normalized accord-
ing to the inner product rescaled by 1/λ²Cm, and the cut-
off can vary up to a scale dependent bound, νcut−off ≤
νmax(Cm). The Hilbert space of covectors together with
such inner product will be called H⋆renCm .
In the presence of a cut-off, the convergence of the
microscopically corrected Hamiltonians, equation (28) is
equivalent to the existence of the following two limits.
The first one is the convergence of the energy levels,
\[
\lim_{n\to\infty} E_{\nu,C_n} = E^{\rm ren}_{\nu} . \qquad (30)
\]
Second is the existence of the completely renormalized
eigen-covectors,
\[
\lim_{n\to\infty} d^{\star}_{m,n}\,\Psi_{\nu,C_n} = \Psi^{\rm ren}_{\nu,C_m} \;\in\; \mathcal{H}^{\star\,{\rm ren}}_{C_m} \subset {\rm Cyl}^{\star} . \qquad (31)
\]
We clarify that the existence of the above limit means
that Ψrenν,Cm(δx0,q) is well defined for any δx0,q ∈ Cyl. No-
tice that this point-wise convergence, if it can take place
at all, will require the tuning of the normalization factors
λ²Cn .
Now we turn to the question of the continuum limit
of the renormalized covectors. First we can ask for the
existence of the limit
\[
\lim_{n\to\infty}\,\Psi^{\rm ren}_{\nu,C_n}(\delta_{x_0,q}) \qquad (32)
\]
for any δx0,q ∈ Cyl. When this limit exists there is
a natural action of the eigen-covectors in the continuum
limit. Below we consider another notion of the continuum
limit of the renormalized eigen-covectors.
When the completely renormalized eigen-covectors
exist, they form a collection that is d⋆-compatible,
\[
d^{\star}_{m,n}\,\Psi^{\rm ren}_{\nu,C_n} = \Psi^{\rm ren}_{\nu,C_m} .
\]
A sequence of d⋆-compatible normalizable covectors
defines an element of H⋆renR, the projective limit of the
renormalized spaces of covectors,
\[
\mathcal{H}^{\star\,{\rm ren}}_{R} := \varprojlim_{n}\,\mathcal{H}^{\star\,{\rm ren}}_{C_n} . \qquad (33)
\]
The inner product in this space is defined by
\[
(\{\Psi_{C_n}\},\{\Phi_{C_n}\})^{\rm ren}_{R} := \lim_{n\to\infty}\,(\Psi_{C_n},\Phi_{C_n})^{\rm ren}_{C_n} .
\]
The natural inclusion of C∞0 in H⋆renR is by an antilinear
map which assigns to any Ψ ∈ C∞0 the d⋆-compatible
collection
\[
\Psi^{\rm shad}_{C_n} := \sum_i \omega_{\alpha_i}\,\overline{\Psi}(L(\alpha_i)) \;\in\; \mathcal{H}^{\star\,{\rm ren}}_{C_n} \subset {\rm Cyl}^{\star} ,
\]
with L(αi) the left end point of the interval αi.
ΨshadCn will be called the shadow of Ψ at scale Cn and acts
in Cyl as a piecewise constant function. Clearly other
types of test functions, like Schwartz functions, are also
naturally included in H⋆renR. In this context a shadow is
a state of the effective theory that approximates a state
in the continuum theory.
Since the inner product in H⋆renR is degenerate, the
physical Hilbert space is defined as
\[
\mathcal{H}^{\star}_{\rm phys} := \mathcal{H}^{\star\,{\rm ren}}_{R}/\ker(\cdot,\cdot)^{\rm ren} , \qquad
\mathcal{H}_{\rm phys} := \mathcal{H}^{\star\,\star}_{\rm phys} .
\]
The nature of the physical Hilbert space, whether it is
isomorphic to the Schrödinger Hilbert space Hs or not, is
determined by the normalization factors λ²Cn, which can
be obtained from the conditions asking for compatibil-
ity of the dynamics of the effective theories at different
scales. The dynamics of the system under consideration
selects the continuum limit.
Let us now return to the definition of the Hamilto-
nian in the continuum limit. First consider the contin-
uum limit of the Hamiltonian (with cut-off) in the sense
of its point-wise convergence as a quadratic form. It
turns out that if the limit of equation (32) exists for
all the eigen-covectors allowed by the cut-off, we have
h^{νcut−off ren} : Hpoly,x → R defined by
\[
h^{\nu_{\rm cut-off}\,{\rm ren}}(\delta_{x_0,q}) := \lim_{n\to\infty} h^{\nu_{\rm cut-off}\,{\rm ren}}_{n}([\delta_{x_0,q}]_{C_n}) . \qquad (34)
\]
This Hamiltonian quadratic form in the continuum can
be coarse grained to any scale and, as can be ex-
pected, it yields the completely renormalized Hamilto-
nian quadratic forms at that scale. However, this is not
a completely satisfactory continuum limit because we can
not remove the auxiliary cut-off νcut−off . If we tried, as
we include more and more eigencovectors in the Hamilto-
nian the calculations done at a given scale would diverge
and doing them in the continuum is just as divergent.
Below we explore a more successful path.
We can use the renormalized inner product to induce
an action of the cut-off Hamiltonians on H⋆renR:
\[
h^{\nu_{\rm cut-off}\,{\rm ren}}(\{\Psi_{C_n}\}) := \lim_{n\to\infty} h^{\nu_{\rm cut-off}\,{\rm ren}}_{n}\big((\Psi_{C_n},\cdot)^{\rm ren}_{C_n}\big) ,
\]
where we have used the fact that (ΨCn , ·)renCn ∈ HCn . The
existence of this limit is trivial because the renormalized
Hamiltonians are finite sums and the limit exists term by
term.
These cut-off Hamiltonians descend to the physical
Hilbert space:
\[
h^{\nu_{\rm cut-off}\,{\rm ren}}([\{\Psi_{C_n}\}]) := h^{\nu_{\rm cut-off}\,{\rm ren}}(\{\Psi_{C_n}\})
\]
for any representative {ΨCn} ∈ [{ΨCn}] ∈ H⋆phys.
Finally we can address the issue of the removal of the
cut-off. The Hamiltonian h^{ren} : H⋆phys → R is defined
by the limit
\[
h^{\rm ren} := \lim_{\nu_{\rm cut-off}\to\infty} h^{\nu_{\rm cut-off}\,{\rm ren}}
\]
when the limit exists. Its corresponding Hermitian form
in Hphys is defined whenever the above limit exists. This
concludes our presentation of the main results of [6]. Let
us now consider several examples of systems for which
the continuum limit can be investigated.
VI. EXAMPLES
In this section we shall develop several examples of
systems that have been treated with the polymer quanti-
zation. These examples are simple quantum mechanical
systems, such as the simple harmonic oscillator and the
free particle, as well as a quantum cosmological model
known as loop quantum cosmology.
A. The Simple Harmonic Oscillator
In this part, let us consider the example of a Simple Har-
monic Oscillator (SHO) with parameters m and ω, clas-
sically described by the Hamiltonian
\[
H = \frac{p^2}{2m} + \frac{1}{2}\,m\omega^2 x^2 .
\]
Recall that from these parameters one can define a length
scale D = √(ħ/mω). In the standard treatment one uses
this scale to define a complex structure JD (and an in-
ner product from it), as we have described in detail, that
uniquely selects the standard Schrödinger representation.
At scale Cn we have an effective Hamiltonian for the
Simple Harmonic Oscillator (SHO) given by
\[
H_{C_n} = \frac{\hbar^2}{m a_n^2}\left(1-\cos\frac{a_n p}{\hbar}\right) + \frac{1}{2}\,m\omega^2 x^2 . \qquad (35)
\]
If we interchange position and momentum, this Hamilto-
nian is exactly that of a pendulum of mass m, length l
and subject to a constant gravitational field g:
\[
\hat{H}_{C_n} = -\frac{\hbar^2}{2ml^2}\,\frac{\partial^2}{\partial\theta^2} + mgl\,(1-\cos\theta) ,
\]
where those quantities are related to our system by,
\[
l = \frac{\hbar}{m\omega\,a_n} , \qquad g = \frac{\hbar\,\omega}{m\,a_n} , \qquad \theta = \frac{a_n\,p}{\hbar} .
\]
That is, we are approximating, for each scale Cn the
SHO by a pendulum. There is, however, an important
difference. From our knowledge of the pendulum system,
we know that the quantum system will have a spectrum
for the energy that has two different asymptotic behav-
iors, the SHO for low energies and the planar rotor in
the higher end, corresponding to oscillating and rotating
solutions respectively². As we refine our scale and both
the length of the pendulum and the height of the periodic
potential increase, we expect to have an increasing num-
ber of oscillating states (for a given pendulum system,
there is only a finite number of such states). Thus, it
is justified to consider the cut-off in the energy eigenval-
ues, as discussed in the last section, given that we only
expect a finite number of states of the pendulum to ap-
proximate SHO eigenstates. With these considerations in
mind, the relevant question is whether the conditions for
the continuum limit to exist are satisfied. This question
has been answered in the affirmative in [6]. What was
shown there was that the eigenvalues and eigenfunc-
tions of the discrete systems, which represent a discrete
and non-degenerate set, approximate those of the contin-
uum, namely, of the standard harmonic oscillator, when
the inner product is renormalized by a factor λ²Cn = 1/2^n.
This convergence implies that the continuum limit exists
as we understand it. Let us now consider the simplest
possible system, a free particle, that has nevertheless the
particular feature that the spectrum of the energy is con-
tinuous.
² Note that both types of solutions are, in the phase space, closed.
This is the reason behind the purely discrete spectrum. The
distinction we are making is between those solutions inside the
separatrix, that we call oscillating, and those that are above it
that we call rotating.
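The convergence of the discrete eigenvalues toward those of the standard SHO can be illustrated with a sketch of our own (units ħ = m = ω = 1 are an arbitrary choice, and this is a plain diagonalization, not the renormalization computation of [6]): in the x-representation on the lattice x_j = j·a, the polymer kinetic term (1/a²)(1 − cos ap) acts as the familiar nearest-neighbor difference operator, so the effective Hamiltonian (35) is a tridiagonal matrix.

```python
import numpy as np

# Sketch (our own check, in units hbar = m = omega = 1): on the lattice
# x_j = j*a, the polymer kinetic term (1/a^2)(1 - cos(a p)) acts as the
# nearest-neighbor difference operator, so the effective SHO Hamiltonian
# is tridiagonal; its low eigenvalues should approach (k + 1/2).
def polymer_sho_spectrum(a, x_max=8.0, k=4):
    j = np.arange(-int(x_max / a), int(x_max / a) + 1)
    x = j * a
    N = len(x)
    H = np.zeros((N, N))
    np.fill_diagonal(H, 1.0 / a**2 + 0.5 * x**2)  # diagonal: kinetic + potential
    off = -0.5 / a**2 * np.ones(N - 1)            # hopping between neighbors
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:k]

print(polymer_sho_spectrum(0.1))  # close to 0.5, 1.5, 2.5, 3.5
```

Only the low-lying part of the spectrum is reliable, in line with the energy cut-off discussed above: the higher "rotor-like" states of the pendulum have no SHO counterpart.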
B. Free Polymer Particle
In the limit ω → 0, the Hamiltonian of the Simple
Harmonic oscillator (35) goes to the Hamiltonian of a
free particle and the corresponding time independent
Schrödinger equation, in the p−polarization, is given by
\[
\left[\frac{\hbar^2}{m a_n^2}\left(1-\cos\frac{a_n p}{\hbar}\right) - E_{C_n}\right]\tilde{\psi}(p) = 0 ,
\]
where we now have that p ∈ S¹, with p ∈ (−πħ/an, πħ/an).
Thus, we have
\[
E_{C_n} = \frac{\hbar^2}{m a_n^2}\left(1-\cos\frac{a_n p}{\hbar}\right) \;\leq\; E_{C_n,{\rm max}} \equiv \frac{2\hbar^2}{m a_n^2} . \qquad (36)
\]
At each scale the energy of the particle we can describe
is bounded from above and the bound depends on the
scale. Note that in this case the spectrum is continu-
ous, which implies that the ordinary eigenfunctions of
the Hamiltonian are not normalizable (they do not be-
long to the Hilbert space). This imposes an upper
bound on the value that the energy of the particle can
have, in addition to the bound on the momentum due to
its “compactification”.
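Both features of the dispersion relation (36) are easy to see numerically in a sketch of our own (units ħ = m = 1 are an arbitrary choice): the energy is bounded by 2/a² at each scale, while for a·p ≪ 1 it reduces to the standard p²/2.

```python
import numpy as np

# Sketch (hbar = m = 1, our choice of units): the effective dispersion
# E(p) = (1/a^2)(1 - cos(a p)) is bounded by 2/a^2 at each scale and
# approaches the standard p^2/2 when a*p << 1.
def E(p, a):
    return (1.0 - np.cos(a * p)) / a**2

for a in (1.0, 0.1, 0.01):
    print(a, E(2.0, a), 2.0 / a**2)  # E -> p^2/2 = 2.0 as a -> 0; the bound 2/a^2 grows
```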
Let us first look for eigen-solutions to the time inde-
pendent Schrödinger equation, that is, for energy eigen-
states. In the case of the ordinary free particle, these
correspond to constant momentum plane waves of the
form e^{±ipx/ħ} and such that the ordinary dispersion re-
lation p²/2m = E is satisfied. These plane waves are
not square integrable and do not belong to the ordinary
Hilbert space of the Schrödinger theory but they are still
useful for extracting information about the system. For
the polymer free particle we have,
\[
\tilde{\psi}_{C_n}(p) = c_1\,\delta(p - P_{C_n}) + c_2\,\delta(p + P_{C_n}) ,
\]
where PCn is a solution of the previous equation consid-
ering a fixed value of ECn . That is,
\[
P_{C_n} = P(E_{C_n}) = \frac{\hbar}{a_n}\,\arccos\!\left(1 - \frac{m a_n^2 E_{C_n}}{\hbar^2}\right) .
\]
The inverse Fourier transform yields, in the ‘x represen-
tation’,
\[
\psi_{C_n}(x_j) = \int_{-\pi\hbar/a_n}^{\pi\hbar/a_n} \tilde{\psi}(p)\, e^{\frac{i}{\hbar} p\, x_j}\, dp = c_1\, e^{i x_j P_{C_n}/\hbar} + c_2\, e^{-i x_j P_{C_n}/\hbar} . \qquad (37)
\]
with xj = an j for j ∈ Z. Note that the eigenfunctions
are still delta functions (in the p representation) and thus
not (square) normalizable with respect to the polymer
inner product, which in the p polarization is just given
by the ordinary Haar measure on S¹, and there is no
quantization of the momentum (its spectrum is still truly
continuous).
Let us now consider the time dependent Schrödinger
equation,
\[
i\hbar\,\partial_t \tilde{\Psi}(p,t) = \hat{H}\cdot\tilde{\Psi}(p,t) ,
\]
which now takes the form
\[
i\hbar\,\partial_t \tilde{\Psi}(p,t) = \frac{\hbar^2}{m a_n^2}\,\big(1-\cos(a_n p/\hbar)\big)\,\tilde{\Psi}(p,t)
\]
and has as its solution
\[
\tilde{\Psi}(p,t) = e^{-\frac{i\hbar}{m a_n^2}(1-\cos(a_n p/\hbar))\,t}\,\tilde{\psi}(p) = e^{-i E_{C_n} t/\hbar}\,\tilde{\psi}(p)
\]
for any initial function ψ̃(p), where ECn satisfies the dis-
persion relation (36). The wave function Ψ(xj , t), the
xj-representation of the wave function, can be obtained
for any given time t by Fourier transforming with (37)
the wave function Ψ̃(p, t).
In order to check the convergence of the micro-
scopically corrected Hamiltonians we should analyze the
convergence of the energy levels and of the eigen-covectors.
In the limit n → ∞, ECn → E = p²/2m, so
we can be certain that the eigenvalues for the energy
converge (when fixing the value of p). Let us write the
eigen-covector as ΨCn = (ψCn , ·)renCn ∈ H⋆renCn . Then we
can bring microscopic corrections to scale Cm and look
for convergence of such corrections
\[
\Psi^{\rm ren}_{C_m} = \lim_{n\to\infty} d^{\star}_{m,n}\,\Psi_{C_n} .
\]
It is easy to see that given any basis vector eαi ∈ HCm
the following limit
\[
\Psi^{\rm ren}_{C_m}(e_{\alpha_i,C_m}) = \lim_{n\to\infty} \Psi_{C_n}\big(d_{n,m}(e_{\alpha_i,C_m})\big)
\]
exists and is equal to
\[
\Psi^{\rm shad}_{C_m}(e_{\alpha_i,C_m}) = [d^{\star}\Psi_{\rm Schr}](e_{\alpha_i,C_m}) = \Psi_{\rm Schr}(i a_m) ,
\]
where ΨshadCm is calculated using the free particle Hamilto-
nian in the Schrödinger representation. This expression
defines the completely renormalized eigen-covector at
the scale Cm.
C. Polymer Quantum Cosmology
In this section we shall present a version of quantum
cosmology that we call polymer quantum cosmology. The
idea behind this name is that the main input in the quan-
tization of the corresponding mini-superspace model is
the use of a polymer representation as here understood.
Another important input is the choice of fundamental
variables to be used and the definition of the Hamiltonian
constraint. Different research groups have made differ-
ent choices. We shall take here a simple model that has
received much attention recently, namely an isotropic,
homogeneous FRW cosmology with k = 0 and coupled
to a massless scalar field ϕ. As we shall see, a proper
treatment of the continuum limit of this system requires
new tools under development that are beyond the scope
of this work. We will thus restrict ourselves to the intro-
duction of the system and the problems that need to be
solved.
The system to be quantized corresponds to the phase
space of cosmological spacetimes that are homogeneous
and isotropic and for which the homogeneous spatial
slices have a flat intrinsic geometry (k = 0 condition).
The only matter content is a mass-less scalar field ϕ. In
this case the spacetime geometry is given by metrics of
the form:
ds2 = −dt2 + a2(t) (dx2 + dy2 + dz2)
where the function a(t) carries all the information and
degrees of freedom of the gravity part. In terms of the
coordinates (a, pa, ϕ, pϕ) for the phase space Γ of the the-
ory, all the dynamics is captured in the Hamiltonian con-
straint
\[
C := -\frac{3}{8\pi G}\,\frac{p_a^2}{|a|} + 8\pi G\,\frac{p_\phi^2}{2|a|^3} \;\approx\; 0 .
\]
The first step is to define the constraint on the kine-
matical Hilbert space to find physical states and then a
physical inner product to construct the physical Hilbert
space. First note that one can rewrite the equation as:
\[
p_a^2\, a^2 = \frac{(8\pi G)^2}{6}\, p_\phi^2 .
\]
If, as is normally done, one chooses ϕ to act as an in-
ternal time, the right hand side would be promoted, in
the quantum theory, to a second derivative. The left
hand side is, furthermore, symmetric in a and pa. At
this point we have the freedom in choosing the variable
that will be quantized and the variable that will not be
well defined in the polymer representation. The standard
choice is that pa is not well defined and thus a, and any
geometrical quantity derived from it, is quantized. Fur-
thermore, we have the choice of polarization on the wave
function. In this respect the standard choice is to select
the a-polarization, in which a acts as multiplication and
the approximation of pa, namely sin(λ pa)/λ acts as a
difference operator on wave functions of a. For details of
this particular choice see [5]. Here we shall adopt the op-
posite polarization, that is, we shall have wave functions
Ψ(pa, ϕ).
Just as we did in the previous cases, in order to gain
intuition about the behavior of the polymer quantized
theory, it is convenient to look at the equivalent prob-
lem in the classical theory, namely the classical system
we would get by approximating the non-well-defined ob-
servable (pa in our present case) by a well defined object
(made of trigonometric functions). Let us for simplicity
choose to replace pa 7→ sin(λ pa)/λ. With this choice
we get an effective classical Hamiltonian constraint that
depends on λ:
\[
C_\lambda := -\frac{3}{8\pi G}\,\frac{\sin^2(\lambda p_a)}{\lambda^2\,|a|} + 8\pi G\,\frac{p_\phi^2}{2|a|^3} .
\]
We can now compute effective equations of motion by
means of the equations: Ḟ := {F, Cλ}, for any observable
F ∈ C∞(Γ), and where we are using the effective (first
order) action:
\[
\int d\tau\,\big(p_a\,\dot{a} + p_\phi\,\dot{\phi} - N\, C_\lambda\big)
\]
with the choice N = 1. The first thing to notice is that
the quantity pϕ is a constant of the motion, given that
the variable ϕ is cyclic. The second observation is that
\[
\dot{\phi} = 8\pi G\,\frac{p_\phi}{|a|^3}
\]
has the same sign as pϕ and never vanishes. Thus
ϕ can be used as an (internal) time variable. The
next observation is that the equation for ȧ, namely
the effective Friedmann equation, will have a zero for a
non-zero value of a given by
\[
a_*^2 = \frac{(8\pi G)^2}{6}\,\lambda^2\, p_\phi^2 .
\]
This is the value at which there will be a bounce if the
trajectory started with a large value of a and was con-
tracting. Note that the ‘size’ of the universe when the
bounce occurs depends on both the constant pϕ (that
dictates the matter density) and the value of the lattice
size λ. Here it is important to stress that for any value
of pϕ (that uniquely fixes the trajectory in the (a, pa)
plane), there will be a bounce. In the original description
in terms of Einstein’s equations (without the approxima-
tion that depends on λ), there is no such bounce. If
ȧ < 0 initially, it will remain negative and the universe
collapses, reaching the singularity in a finite proper time.
What happens within the effective description if we re-
fine the lattice and go from λ to λn := λ/2^n? The only
thing that changes, for the same classical orbit labelled
by pϕ, is that the bounce occurs at a ‘later time’ and for
a smaller value of a∗, but the qualitative picture remains
the same.
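A small integration of our own makes the bounce explicit (everything here is an illustration: the units 8πG = 1, the lapse N = 1, and the initial data chosen on the constraint surface C_λ = 0 are assumptions of this sketch). Hamilton's equations for the effective constraint predict a minimal scale factor a_* = λ p_ϕ/√6, reached where sin(λp_a) = 1.

```python
import math

# Sketch of the effective FRW dynamics (our own integration; units
# 8*pi*G = 1, lapse N = 1, initial data on the constraint surface are all
# assumptions of this illustration). The effective constraint
# C_lambda = -3 sin^2(lam*p_a)/(lam^2 a) + p_phi^2/(2 a^3) then gives a
# bounce at a_* = lam*p_phi/sqrt(6), where sin(lam*p_a) = 1.
lam, p_phi = 0.5, 2.0

def rhs(a, pa):
    da = -3.0 * math.sin(2.0 * lam * pa) / (lam * a)        # {a, C_lambda}
    dpa = (-3.0 * math.sin(lam * pa)**2 / (lam**2 * a**2)
           + 1.5 * p_phi**2 / a**4)                          # {p_a, C_lambda}
    return da, dpa

a = 3.0
pa = math.asin(lam * p_phi / (math.sqrt(6.0) * a)) / lam     # contracting branch
dt, a_min = 1.0e-4, a
for _ in range(60000):                                       # classic RK4 steps
    k1 = rhs(a, pa)
    k2 = rhs(a + 0.5 * dt * k1[0], pa + 0.5 * dt * k1[1])
    k3 = rhs(a + 0.5 * dt * k2[0], pa + 0.5 * dt * k2[1])
    k4 = rhs(a + dt * k3[0], pa + dt * k3[1])
    a += dt / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
    pa += dt / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])
    a_min = min(a_min, a)

print(a_min, lam * p_phi / math.sqrt(6.0))  # minimal a matches the bounce radius a_*
```

Rerunning with a smaller λ shrinks a_* proportionally, in line with the discussion above: refinement moves the bounce but never removes it.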
This is the main difference with the systems considered
before. In those cases, one could have classical trajecto-
ries that remained, for a given choice of parameter λ,
within the region where sin(λp)/λ is a good approxima-
tion to p. Of course there were also classical trajectories
that were outside this region but we could then refine the
lattice and find a new value λ′ for which the new clas-
sical trajectory is well approximated. In the case of the
polymer cosmology, this is never the case: Every classical
trajectory will pass from a region where the approxima-
tion is good to a region where it is not; this is precisely
where the ‘quantum corrections’ kick in and the universe
bounces.
Given that in the classical description, the ‘original’
and the ‘corrected’ descriptions are so different we expect
that, upon quantization, the corresponding quantum the-
ories, namely the polymeric and the Wheeler-DeWitt will
be related in a non-trivial way (if at all).
In this case, with the choice of polarization and for a
particular factor ordering we have,
\[
\frac{\sin(\lambda p_a)}{\lambda}\,\widehat{a^2}\,\frac{\sin(\lambda p_a)}{\lambda}\cdot\Psi(p_a,\phi) + \frac{(8\pi G)^2\hbar^2}{6}\,\frac{\partial^2}{\partial\phi^2}\Psi(p_a,\phi) = 0
\]
as the Polymer Wheeler-DeWitt equation.
In order to approach the problem of the continuum
limit of this quantum theory, we have to realize that the
task is now somewhat different than before. This is so
given that the system is now a constrained system with
a constraint operator rather than a regular non-singular
system with an ordinary Hamiltonian evolution. Fortu-
nately for the system under consideration, the fact that
the variable ϕ can be regarded as an internal time allows
us to interpret the quantum constraint as a generalized
Klein-Gordon equation of the form
\[
\partial^2_\phi\,\Psi = -\Theta_\lambda\cdot\Psi ,
\]
where the operator Θλ is ‘time independent’. This al-
lows us to split the space of solutions into ‘positive and
negative frequency’, introduce a physical inner product
on the positive frequency solutions of this equation and
a set of physical observables in terms of which to de-
scribe the system. That is, one reduces in practice the
system to one very similar to the Schrödinger case by
taking the positive square root of the previous equation:
\[
-i\,\partial_\phi\,\Psi = \sqrt{\Theta_\lambda}\cdot\Psi .
\]
The question we are interested in is
whether the continuum limit of these theories (labelled
by λ) exists and whether it corresponds to the Wheeler-
DeWitt theory. A complete treatment of this problem
lies, unfortunately, outside the scope of this work and
will be reported elsewhere [12].
VII. DISCUSSION
Let us summarize our results. In the first part of the
article we showed that the polymer representation of the
canonical commutation relations can be obtained as the
limiting case of the ordinary Fock-Schrödinger represen-
tation in terms of the algebraic state that defines the
representation. These limiting cases can also be inter-
preted in terms of the naturally defined coherent states
associated to each representation labelled by the param-
eter d, when they become infinitely ‘squeezed’. The two
possible limits of squeezing lead to two different polymer
descriptions that can nevertheless be identified, as we
have also shown, with the two possible polarizations for
an abstract polymer representation. The resulting the-
ory has, however, a very different behavior from the stan-
dard one: the Hilbert space is non-separable, the representa-
tion is unitarily inequivalent to the Schrödinger one, and
natural operators such as p̂ are no longer well defined.
This particular limiting construction of the polymer the-
ory can shed some light for more complicated systems
such as field theories and gravity.
In the regular treatments of dynamics within the poly-
mer representation, one needs to introduce some extra
structure, such as a lattice on configuration space, to con-
struct a Hamiltonian and implement the dynamics for the
system via a regularization procedure. How does this re-
sulting theory compare to the original continuum theory
one had from the beginning? Can one hope to remove
the regulator in the polymer description? As they stand
there is no direct relation or mapping from the polymer
to a continuum theory (in case there is one defined). As
we have shown, one can indeed construct such a rela-
tion in a systematic fashion by means of some appropriate
notions related to the definition of a scale, closely related
to the lattice one had to introduce in the regularization.
With this important shift in perspective, and with an ap-
propriate renormalization of the polymer inner product at
each scale, one can, subject to some consistency condi-
tions, define a procedure to remove the regulator and
arrive at a Hamiltonian and a Hilbert space.
As we have seen, for some simple examples such as
a free particle and the harmonic oscillator one indeed
recovers the Schrödinger description. For other sys-
tems, such as quantum cosmological models, the answer
is not as clear, since the structure of the space of classi-
cal solutions is such that the ‘effective description’ intro-
duced by the polymer regularization at different scales
is qualitatively different from the original dynamics. A
proper treatment of this class of systems is underway
and will be reported elsewhere [12].
Perhaps the most important lesson that we have
learned here is that there indeed exists a rich inter-
play between the polymer description and the ordinary
Schrödinger representation. The full structure of such a
relation still needs to be unravelled. We can only hope that
a full understanding of these issues will shed some light
on the ultimate goal of treating the quantum dynamics
of background independent field systems such as general
relativity.
Acknowledgments
We thank A. Ashtekar, G. Hossain, T. Pawlowski and P.
Singh for discussions. This work was in part supported
by CONACyT U47857-F and 40035-F grants, by NSF
PHY04-56913, by the Eberly Research Funds of Penn
State, by the AMC-FUMEC exchange program and by
funds of the CIC-Universidad Michoacana de San Nicolás
de Hidalgo.
[1] R. Beaume, J. Manuceau, A. Pellet and M. Sirugue,
“Translation Invariant States In Quantum Mechanics,”
Commun. Math. Phys. 38, 29 (1974); W. E. Thirring and
H. Narnhofer, “Covariant QED without indefinite met-
ric,” Rev. Math. Phys. 4, 197 (1992); F. Acerbi, G. Mor-
chio and F. Strocchi, “Infrared singular fields and non-
regular representations of canonical commutation rela-
tion algebras”, J. Math. Phys. 34, 899 (1993); F. Cav-
allaro, G. Morchio and F. Strocchi, “A generalization of
the Stone-von Neumann theorem to non-regular repre-
sentations of the CCR-algebra”, Lett. Math. Phys. 47
307 (1999); H. Halvorson, “Complementarity of Repre-
sentations in quantum mechanics”, Studies in History
and Philosophy of Modern Physics 35 45 (2004).
[2] A. Ashtekar, S. Fairhurst and J.L. Willis, “Quantum
gravity, shadow states, and quantum mechanics”, Class.
Quant. Grav. 20 1031 (2003) [arXiv:gr-qc/0207106].
[3] K. Fredenhagen and F. Reszewski, “Polymer state ap-
proximations of Schrödinger wave functions”, Class.
Quant. Grav. 23 6577 (2006) [arXiv:gr-qc/0606090].
[4] M. Bojowald, “Loop quantum cosmology”, Living Rev.
Rel. 8, 11 (2005) [arXiv:gr-qc/0601085]; A. Ashtekar,
M. Bojowald and J. Lewandowski, “Mathematical struc-
ture of loop quantum cosmology”, Adv. Theor. Math.
Phys. 7 233 (2003) [arXiv:gr-qc/0304074]; A. Ashtekar,
T. Pawlowski and P. Singh, “Quantum nature of the
big bang: Improved dynamics” Phys. Rev. D 74 084003
(2006) [arXiv:gr-qc/0607039]
[5] V. Husain and O. Winkler, “Semiclassical states for
quantum cosmology” Phys. Rev. D 75 024014 (2007)
[arXiv:gr-qc/0607097]; V. Husain and O. Winkler, “On
singularity resolution in quantum gravity”, Phys. Rev. D
69 084016 (2004). [arXiv:gr-qc/0312094].
[6] A. Corichi, T. Vukasinac and J.A. Zapata. “Hamil-
tonian and physical Hilbert space in polymer quan-
tum mechanics”, Class. Quant. Grav. 24 1495 (2007)
[arXiv:gr-qc/0610072]
[7] A. Corichi and J. Cortez, “Canonical quantization from
an algebraic perspective” (preprint)
[8] A. Corichi, J. Cortez and H. Quevedo, “Schrödinger
and Fock Representations for a Field Theory on
Curved Spacetime”, Annals Phys. (NY) 313 446 (2004)
[arXiv:hep-th/0202070].
[9] E. Manrique, R. Oeckl, A. Weber and J.A. Zapata, “Loop
quantization as a continuum limit” Class. Quant. Grav.
23 3393 (2006) [arXiv:hep-th/0511222]; E. Manrique,
R. Oeckl, A. Weber and J.A. Zapata, “Effective theo-
ries and continuum limit for canonical loop quantization”
(preprint)
[10] D.W. Chiou, “Galileo symmetries in polymer particle
representation”, Class. Quant. Grav. 24, 2603 (2007)
[arXiv:gr-qc/0612155].
[11] W. Rudin, Fourier analysis on groups, (Interscience, New
York, 1962)
[12] A. Ashtekar, A. Corichi, P. Singh, “Contrasting LQC
and WDW using an exactly soluble model” (preprint);
A. Corichi, T. Vukasinac, and J.A. Zapata, “Continuum
limit for quantum constrained system” (preprint).
Polymer quantum mechanics and its continuum limit
Alejandro Corichi,1,2,3,∗ Tatjana Vukašinac,4,† and José A. Zapata1,‡
Instituto de Matemáticas, Unidad Morelia, Universidad Nacional Autónoma de México,
UNAM-Campus Morelia, A. Postal 61-3, Morelia, Michoacán 58090, México
Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México,
A. Postal 70-543, México D.F. 04510, México
Institute for Gravitational Physics and Geometry, Department of Physics,
Pennsylvania State University, University Park PA 16802, USA
Facultad de Ingeniería Civil, Universidad Michoacana de San Nicolás de Hidalgo,
Morelia, Michoacán 58000, México
A rather non-standard quantum representation of the canonical commutation relations of
quantum mechanical systems, known as the polymer representation, has gained some attention
in recent years, due to its possible relation with Planck scale physics. In particular, this approach
has been followed in a symmetric sector of loop quantum gravity known as loop quantum cosmology.
Here we explore different aspects of the relation between the ordinary Schrödinger theory and the
polymer description. The paper has two parts. In the first one, we derive the polymer quantum
mechanics starting from the ordinary Schrödinger theory and show that the polymer description
arises as an appropriate limit. In the second part we consider the continuum limit of this theory,
namely, the reverse process in which one starts from the discrete theory and tries to recover back
the ordinary Schrödinger quantum mechanics. We consider several examples of interest, including
the harmonic oscillator, the free particle and a simple cosmological model.
PACS numbers: 04.60.Pp, 04.60.Ds, 04.60.Nc, 11.10.Gh
I. INTRODUCTION
The so-called polymer quantum mechanics, a non-regular and somewhat
'exotic' representation of the canonical commutation relations (CCR) [1],
has been used to explore both mathematical and physical issues in
background independent theories such as quantum gravity [2, 3]. A notable
example of this type of quantization, when applied to minisuperspace
models, has given way to what is known as loop quantum cosmology [4, 5].
As in any toy model situation, one hopes to learn about the subtle
technical and conceptual issues that are present in full quantum gravity
by means of simple, finite dimensional examples. This formalism is no
exception in this regard. Apart from this motivation coming from physics
at the Planck scale, one can independently ask for the relation between
the standard continuous representations and their polymer cousins at the
level of mathematical physics. A deeper understanding of this relation
becomes important on its own.
Polymer quantization is made of several steps. The first one is to
build a representation of the Heisenberg-Weyl algebra on a kinematical
Hilbert space that is 'background independent', and that is sometimes
referred to as the polymeric Hilbert space Hpoly. The second and most
important part, the implementation of dynamics, deals with the definition
of a Hamiltonian (or Hamiltonian constraint) on this space. In the examples
∗ Electronic address: corichi@matmor.unam.mx
† Electronic address: tatjana@shi.matmor.unam.mx
‡ Electronic address: zapata@matmor.unam.mx
studied so far, the first part is fairly well understood, yielding the
kinematical Hilbert space Hpoly which is, however, non-separable. For the
second step, a natural implementation of the dynamics has proved to be a
bit more difficult, given that a direct definition of the Hamiltonian of,
say, a particle in a potential on the space Hpoly is not possible, since
one of the main features of this representation is that the operators q̂
and p̂ cannot both be defined simultaneously (nor their analogues in
theories with more elaborate variables). Thus, any operator involving
(powers of) the undefined variable has to be regulated by a well defined
operator, which normally involves the introduction of some extra structure
on the configuration (or momentum) space, namely a lattice. However, this
new structure, which plays the role of a regulator, can not be removed
when working in Hpoly, and one is left with the ambiguity that is present
in any regularization. The freedom in choosing it can sometimes be
associated with a length scale (the lattice spacing). For ordinary quantum
systems such as a simple harmonic oscillator, which has been studied in
detail from the polymer viewpoint, it has been argued that if this length
scale is taken to be 'sufficiently small', one can approximate standard
Schrödinger quantum mechanics arbitrarily well [2, 3]. In the case of loop
quantum cosmology, the minimum area gap A0 of the full quantum gravity
theory imposes such a scale, which is then taken to be fundamental [4].
A natural question is to ask what happens when we change this scale and
go to even smaller 'distances', that is, when we refine the lattice on
which the dynamics of the theory is defined. Can we define consistency
conditions between these scales? Or, even better, can we take the limit
and thus find a continuum limit? As it
has been shown recently in detail, the answer to both questions is in the
affirmative [6]. There, a notion of scale was defined in such a way that
one could define refinements of the theory and pose, in a precise form,
the question of the continuum limit of the theory. These results could
also be seen as handing out a procedure to remove the regulator when
working on the appropriate space. The purpose of this paper is to explore
further different aspects of the relation between the continuum and the
polymer representation. In particular, in the first part we put forward a
novel way of deriving the polymer representation from the ordinary
Schrödinger representation as an appropriate limit. In Sec. II we derive
two versions of the polymer representation as different limits of the
Schrödinger theory. In Sec. III we show that these two versions can be
seen as different polarizations of the 'abstract' polymer representation.
These results, to the best of our knowledge, are new and have not been
reported elsewhere. In Sec. IV we pose the problem of implementing the
dynamics in the polymer representation. In Sec. V we motivate further the
question of the continuum limit (i.e. the proper removal of the regulator)
and recall the basic constructions of [6]. Several examples are considered
in Sec. VI. In particular, a simple harmonic oscillator, the polymer free
particle and a simple quantum cosmology model are considered. The free
particle and the cosmological model represent a generalization of the
results obtained in [6], where only systems with a discrete and
non-degenerate spectrum were considered. We end the paper with a
discussion in Sec. VII. In order to make the paper self-contained, we
shall keep the level of rigor in the presentation to that found in the
standard theoretical physics literature.
II. QUANTIZATION AND POLYMER REPRESENTATION
In this section we derive the so-called polymer representation of quantum
mechanics starting from a reformulation of the ordinary Schrödinger
representation. Our starting point will be the simplest of all possible
phase spaces, namely Γ = R² corresponding to a particle living on the real
line R. Let us choose coordinates (q, p) thereon. As a first step we shall
consider the quantization of this system that leads to the standard
quantum theory in the Schrödinger description. A convenient route is to
introduce the necessary structure to define the Fock representation of
such a system. From this perspective, the passage to the polymeric case
becomes clearer. Roughly speaking, by a quantization one means a passage
from the classical algebraic bracket, the Poisson bracket,
{q, p} = 1    (1)
to a quantum bracket given by the commutator of the corresponding
operators,
[q̂, p̂] = iℏ 1̂    (2)
These relations, known as the canonical commutation relations (CCR),
become the most common cornerstone of the (kinematics of the) quantum
theory; they should be satisfied by the quantum system, when represented
on a Hilbert space H.
There are alternative starting points for the quantum kinematics. Here
we consider the algebra generated by the exponentiated versions of q̂ and
p̂, denoted by
U(α) = e^{i α q̂/ℏ} ;  V(β) = e^{i β p̂/ℏ}
where α and β have dimensions of momentum and length, respectively. The
CCR now become
U(α) · V(β) = e^{−i α β/ℏ} V(β) · U(α)    (3)
and the rest of the product is
U(α₁) · U(α₂) = U(α₁ + α₂) ;  V(β₁) · V(β₂) = V(β₁ + β₂)
The Weyl algebra W is generated by taking finite linear combinations of
the generators U(αᵢ) and V(βᵢ), where the product (3) is extended by
linearity,
Σᵢ (Aᵢ U(αᵢ) + Bᵢ V(βᵢ))
From this perspective, quantization means finding a unitary representation
of the Weyl algebra W on a Hilbert space H′ (which could be different from
the ordinary Schrödinger representation). At first it might seem odd to
attempt this approach given that we know how to quantize such a simple
system; why do we need a complicated object such as W? It is infinite
dimensional, whereas the set S = {1̂, q̂, p̂}, the starting point of the
ordinary Dirac quantization, is rather simple. It is in the quantization
of field systems that the advantages of the Weyl approach can be fully
appreciated, but it is also useful for introducing the polymer
quantization and comparing it with the standard one. This is the strategy
that we follow.
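As an aside (ours, not part of the paper), the exchange relation (3) has a
well known finite-dimensional toy analogue: the N×N clock and shift
matrices obey U V = ω V U with ω = e^{2πi/N}. A minimal NumPy sketch of
this analogue:

```python
import numpy as np

# Finite-dimensional analogue of the Weyl exchange relation (3):
# the N x N clock matrix U and shift matrix V satisfy U V = omega V U
# with omega = exp(2*pi*i/N), mirroring the phase e^{-i alpha beta/hbar}.
N = 5
omega = np.exp(2j * np.pi / N)

U = np.diag(omega ** np.arange(N))   # clock: diagonal phases
V = np.roll(np.eye(N), 1, axis=0)    # shift: cyclic translation

print(np.allclose(U @ V, omega * (V @ U)))  # True
```

The analogy is only heuristic: the continuum Weyl relations with
continuous parameters α, β admit no finite-dimensional unitary
representation, which is why the constructions above work on
infinite-dimensional Hilbert spaces.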
A question one can ask is whether there is any freedom in quantizing the
system to obtain the ordinary Schrödinger representation. At first sight
it might seem that there is none, given the Stone-Von Neumann uniqueness
theorem. Let us review what the argument would be for the standard
construction. Let us ask that the representation we want to build be of
the Schrödinger type, namely, one where states are wave functions of
configuration space ψ(q). There are two ingredients to the construction of
the representation, namely the specification of how the basic operators
(q̂, p̂) will act, and the nature of the space of functions that ψ belongs
to, which is normally fixed by the choice of inner product on H, or
measure μ on R. The standard choice is to select the Hilbert space to be,
H = L²(R, dq)
the space of square-integrable functions with respect to the Lebesgue
measure dq (invariant under constant translations) on R. The operators are
then represented as,
q̂ · ψ(q) = (q ψ)(q)  and  p̂ · ψ(q) = −iℏ (∂/∂q) ψ(q)    (4)
Is it possible to find other representations? In order to appreciate this
freedom we shall go to the Weyl algebra and build the quantum theory
thereon. The representation of the Weyl algebra that can be called of the
'Fock type' involves the definition of an extra structure on the phase
space: a complex structure J. That is, a linear mapping from Γ to itself
such that J² = −1. In two dimensions, all the freedom in the choice of J
is contained in the choice of a parameter d with dimensions of length. It
is also convenient to define k = p/ℏ, which has dimensions of 1/L. We then
have,
J_d : (q, k) ↦ (−d² k, q/d²)
This object, together with the symplectic structure
Ω((q, p); (q′, p′)) = q p′ − p q′, defines an inner product on Γ by the
formula g_d(· ; ·) = Ω(· ; J_d ·), such that
g_d((q, p); (q′, p′)) = (1/d²) q q′ + (d²/ℏ²) p p′
which is dimensionless and positive definite. Note that with these
quantities one can define complex coordinates (ζ, ζ̄) as usual:
ζ = (1/√2) (q/d + i d p/ℏ) ;  ζ̄ = (1/√2) (q/d − i d p/ℏ)
from which one can build the standard Fock representation. Thus, one can
alternatively think of the introduction of the length parameter d as the
quantity needed to define (dimensionless) complex coordinates on the phase
space. But what is the relevance of this object (J or d)? The definition
of complex coordinates is useful for the construction of the Fock space,
since from them one can define, in a natural way, creation and
annihilation operators. For the Schrödinger representation we are
interested in here, it is a bit more subtle. The subtlety is that within
this approach one uses the algebraic properties of W to construct the
Hilbert space via what is known as the Gel'fand-Naimark-Segal (GNS)
construction. This implies that the measure in the corresponding
Schrödinger representation becomes non-trivial and thus the momentum
operator acquires an extra term in order to render it self-adjoint. The
representation of the Weyl algebra is then, when acting on functions ψ(q)
[7]:
Û(α) · ψ(q) := (e^{i α q/ℏ} ψ)(q)
V̂(β) · ψ(q) := e^{(β/d²)(q − β/2)} ψ(q − β)
The Hilbert space structure is introduced by the definition of an
algebraic state (a positive linear functional) ω_d : W → C, which must
coincide with the expectation value in the Hilbert space taken in a
special state referred to as the vacuum: ω_d(a) = ⟨vac| â |vac⟩_d, for all
a ∈ W. In our case, this choice of J induces a unique state ω_d that
yields,
⟨vac| Û(α) |vac⟩_d = e^{−(d² α²)/(4ℏ²)}    (5)
⟨vac| V̂(β) |vac⟩_d = e^{−β²/(4d²)}    (6)
Note that the exponents in the vacuum expectation values correspond to the
metric constructed from J: (d² α²)/ℏ² = g_d((0, α); (0, α)) and
β²/d² = g_d((β, 0); (β, 0)).
The wave functions belong to the space L²(R, dμ_d), where the measure
that dictates the inner product in this representation is given by,
dμ_d = (1/(d √π)) e^{−q²/d²} dq
In this representation, the vacuum is given by the identity function
ψ₀(q) = 1, which is, just like any plane wave, normalized. Note that, for
each value of d > 0, the representation is well defined and continuous in
α and β. Note also that there is an equivalence between the
q-representation defined by d and the k-representation defined by 1/d.
How can we then recover the standard representation, in which the measure
is given by the Lebesgue measure and the operators are represented as in
(4)? It is easy to see that there is an isometric isomorphism K that maps
the d-representation in H_d to the standard Schrödinger representation in
H_schr by:
ψ(q) = K · ψ̃(q) = (1/(d^{1/2} π^{1/4})) e^{−q²/(2d²)} ψ̃(q) ∈ H_schr = L²(R, dq)
Thus we see that all the d-representations are unitarily equivalent. This
was to be expected, in view of the Stone-Von Neumann uniqueness result.
Note also that the vacuum now becomes
ψ₀(q) = (1/(d^{1/2} π^{1/4})) e^{−q²/(2d²)}
so that, even when there is no information about the parameter d in the
representation itself, it is contained in the vacuum state. This procedure
for constructing the GNS-Schrödinger representation for quantum mechanics
has also been generalized to scalar fields on arbitrary curved space in
[8]. Note, however, that so far the treatment has been all kinematical,
without any knowledge of a Hamiltonian. For the Simple Harmonic Oscillator
of mass m and frequency ω, there is a natural choice compatible with the
dynamics, given by d = √(ℏ/(mω)), for which some calculations simplify
(for instance for coherent states), but in principle any value of d can be
used.
Our study will be simplified by concentrating on the fundamental entities
in the Hilbert space H_d, namely the states generated by acting with Û(α)
on the vacuum ψ₀(q) = 1. Let us denote those states by,
ψ_α(q) = Û(α) · ψ₀(q) = e^{i α q/ℏ}
The inner product between two such states is given by
⟨ψ_{α₁}, ψ_{α₂}⟩_d = ∫ dμ_d e^{−i(α₁−α₂) q/ℏ} = e^{−(α₁−α₂)² d²/(4ℏ²)}    (7)
Note, incidentally, that, contrary to some common belief, the 'plane
waves' in this GNS Hilbert space are indeed normalizable.
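As an illustration (ours, not the paper's), the Gaussian integral behind
(7), and its behavior as d grows (the 1/d ↦ 0 limit considered below), can
be checked numerically; ℏ is set to 1 and the quadrature grid is an
assumption of the sketch:

```python
import numpy as np

# Numerical check of the inner product (7):
# <psi_a, psi_b>_d = ∫ dmu_d e^{i(b-a)q} = e^{-(a-b)^2 d^2 / 4},  hbar = 1,
# with the Gaussian measure dmu_d = (1/(d sqrt(pi))) e^{-q^2/d^2} dq.
def inner(a, b, d, n=200000, width=50.0):
    q = np.linspace(-width * d, width * d, n)
    dq = q[1] - q[0]
    mu = np.exp(-q**2 / d**2) / (d * np.sqrt(np.pi))
    return np.sum(mu * np.exp(1j * (b - a) * q)) * dq

d = 2.0
num = inner(0.3, 1.1, d)
exact = np.exp(-(0.3 - 1.1)**2 * d**2 / 4)
print(abs(num - exact) < 1e-8)      # True

# For large d, distinct plane waves become essentially orthogonal,
# approaching the Kronecker delta inner product of the A-polymer limit.
print(abs(inner(0.3, 1.1, 40.0)) < 1e-12)  # True
```

The damping by 1/(d√π) against the spreading of the Gaussian, discussed
below, is exactly what keeps inner(a, a, d) equal to 1 for every d.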
Let us now consider the polymer representation. For that, it is important
to note that there are two possible limiting cases for the parameter d:
i) the limit 1/d ↦ 0 and ii) the case d ↦ 0. In both cases, we have
expressions that become ill defined in the representation or the measure,
so care needs to be exercised.
A. The case 1/d ↦ 0
The first observation is that, from the expressions (5) and (6) for the
algebraic state ω_d, the limiting cases are indeed well defined. In our
case we obtain, ω_A := lim_{1/d→0} ω_d, such that
ω_A(Û(α)) = δ_{α,0}  and  ω_A(V̂(β)) = 1    (8)
From this, we can indeed construct the representation by means of the GNS
construction. In order to do that, and to show how this is attained, let
us consider several expressions. One must proceed with care, however,
since the limit has to be taken cautiously. Let us consider the measure on
the representation, which behaves as:
dμ_d = (1/(d √π)) e^{−q²/d²} dq ↦ (1/(d √π)) dq
so that the measures tend to a homogeneous measure, but whose
'normalization constant' goes to zero, so the limit becomes somewhat
subtle. We shall return to this point later.
Let us now see what happens to the inner product between the
fundamental entities in the Hilbert space H_d, given by (7). It is
immediate to see that in the 1/d ↦ 0 limit the inner product becomes,
⟨ψ_{α₁}, ψ_{α₂}⟩_d ↦ δ_{α₁,α₂}    (9)
with δ_{α₁,α₂} being Kronecker's delta. We see then that the plane waves
ψ_α(q) become an orthonormal basis for the new Hilbert space. Therefore,
there is a delicate interplay between the two terms that contribute to the
measure in order to maintain the normalizability of these functions; we
need the measure to become damped (by 1/d) in order to avoid the plane
waves acquiring an infinite norm (as would happen with the standard
Lebesgue measure), but, on the other hand, the measure, which for any
finite value of d is a Gaussian, becomes more and more spread out.
It is important to note that, in this limit, the operators Û(α) become
discontinuous with respect to α, given that, for any (different) α₁ and
α₂, their action on a given basis vector ψ_α(q) yields orthogonal vectors.
Since the continuity of these operators is one of the hypotheses of the
Stone-Von Neumann theorem, the uniqueness result does not apply here. The
representation is inequivalent to the standard one.
Let us now analyze the other operator, namely the action of the operator
V̂(β) on the basis ψ_α(q):
V̂(β) · ψ_α(q) = e^{−β²/(2d²) − iαβ/ℏ} e^{(β/d² + iα/ℏ) q}
which in the limit 1/d ↦ 0 goes to,
V̂(β) · ψ_α(q) ↦ e^{−iαβ/ℏ} ψ_α(q)
which is continuous in β. Thus, in the limit, the operator p̂ is well
defined. Furthermore, note that in this limit the operator p̂ has ψ_α(q)
as its eigenstate, with eigenvalue given by α:
p̂ · ψ_α(q) ↦ α ψ_α(q)
To summarize, the resulting theory obtained from the 1/d ↦ 0 limit of
the ordinary Schrödinger description, which we shall call the 'polymer
representation of type A', has the following features: the operators U(α)
are well defined but not continuous in α, so there is no generator (no
operator associated to q). The basis vectors ψ_α are orthonormal (for α
taking values on a continuum) and are eigenvectors of the operator p̂,
which is well defined. The resulting Hilbert space H_A will be the
(A-version of the) polymer representation. Let us now consider the other
case, namely the limit d ↦ 0.
B. The case d ↦ 0
Let us now explore the other limiting case of the Schrödinger/Fock
representations labelled by the parameter d. Just as in the previous case,
the limiting algebraic state becomes, ω_B := lim_{d→0} ω_d, such that
ω_B(Û(α)) = 1  and  ω_B(V̂(β)) = δ_{β,0}    (10)
From this positive linear functional, one can indeed construct the
representation using the GNS construction. Let us first note that the
measure, even when the limit has to be taken with due care, behaves as:
dμ_d = (1/(d √π)) e^{−q²/d²} dq ↦ δ(q) dq
that is, as Dirac's delta distribution. It is immediate to see that, in
the d ↦ 0 limit, the inner product between the fundamental states ψ_α(q)
becomes,
⟨ψ_{α₁}, ψ_{α₂}⟩_d ↦ 1    (11)
This in fact means that the vector ψ_{α₁} − ψ_{α₂} belongs to the kernel
of the limiting inner product, so one has to mod out by these (and all)
zero norm states in order to obtain the Hilbert space.
Let us now analyze the other operator, namely the action of the operator
V̂(β) on the vacuum ψ₀(q) = 1, which for arbitrary d has the form,
ψ̃_β(q) := V̂(β) · ψ₀(q) = e^{(β/d²)(q − β/2)}
The inner product between two such states is given by
⟨ψ̃_{β₁}, ψ̃_{β₂}⟩_d = e^{−(β₁−β₂)²/(4d²)}    (12)
In the limit d → 0, ⟨ψ̃_{β₁}, ψ̃_{β₂}⟩_d → δ_{β₁,β₂}. We can then see that
it is these functions that become the orthonormal, 'discrete basis' in the
theory. However, the function ψ̃_β(q) becomes ill defined in this limit.
For instance, for β > 0, it grows unboundedly for q > β/2, is equal to one
if q = β/2, and goes to zero otherwise. In order to overcome these
difficulties and make the resulting theory more transparent, let us
consider the other form of the representation, in which the measure is
incorporated into the states (and the resulting Hilbert space is
L²(R, dq)). Thus, the new state is
φ_β(q) := K · (V̂(β) · ψ₀(q)) = (1/(d^{1/2} π^{1/4})) e^{−(q−β)²/(2d²)}
We can now take the limit, and what we obtain is
lim_{d↦0} φ_β(q) := δ^{1/2}(q, β)
where by δ^{1/2}(q, β) we mean something like 'the square root of the
Dirac distribution'. What we really mean is an object that satisfies the
following property:
δ^{1/2}(q, β) · δ^{1/2}(q, α) = δ_{α,β} δ(q, β)
That is, if α = β it is just the ordinary delta, otherwise it is zero. In
a sense, these objects can be regarded as half-densities that can not be
integrated by themselves, but whose product can. We conclude then that the
inner product is,
⟨φ_β, φ_α⟩ = ∫ dq φ_β(q) φ_α(q) = ∫ dq δ_{α,β} δ(q, α) = δ_{α,β}    (13)
which is just what we expected. Note that, in this representation, the
vacuum state becomes φ₀(q) := δ^{1/2}(q, 0), namely, the half-delta with
support at the origin. It is important to note that we are arriving, in a
natural way, at states as half-densities, whose squares can be integrated
without the need of a non-trivial measure on the configuration space.
Diffeomorphism invariance then arises in a natural, but subtle, manner.
Note that as the end result we recover the Kronecker delta inner
product for the new fundamental states:
φ_β(q) := δ^{1/2}(q, β).
Thus, in this new B-polymer representation, the Hilbert space H_B is the
completion, with respect to the inner product (13), of the states
generated by taking (finite) linear combinations of basis elements of the
form φ_β:
φ(q) = Σᵢ bᵢ φ_{βᵢ}(q)    (14)
Let us now introduce an equivalent description of this Hilbert space.
Instead of having the basis elements be half-deltas as elements of the
Hilbert space, where the inner product is given by the ordinary Lebesgue
measure dq, we redefine both the basis and the measure. We could consider,
instead of a half-delta with support β, a Kronecker delta or
characteristic function with support on β:
χ_β(q) := δ_{q,β}
These functions have a behavior with respect to the product similar to
that of the half-deltas, namely, χ_β(q) · χ_α(q) = δ_{α,β} χ_α(q). The
main difference is that neither χ_β nor its square are integrable with
respect to the Lebesgue measure (they have zero norm). In order to fix
that problem, we have to change the measure so that we recover the basic
inner product (13) with our new basis. The needed measure turns out to be
the discrete counting measure on R. Thus, any state in the 'half-density
basis' can be written (using the same expression) in terms of the
'Kronecker basis'. For details and further motivation see the next
section.
Note that in this B-polymer representation, both Û and V̂ have their roles
interchanged with respect to those of the A-polymer representation: while
U(α) is discontinuous and thus q̂ is not defined in the A-representation,
it is V(β) in the B-representation that has this property. In this case,
it is the operator p̂ that cannot be defined. We see then that, given a
physical system for which the configuration space has a well defined
physical meaning, within the possible representations in which wave
functions are functions of the configuration variable q, the A and B
polymer representations are radically different and inequivalent.
Having said this, it is also true that the A and B representations are
equivalent in a different sense, by means of the duality between the q and
p representations and the d ↔ 1/d duality: the A-polymer representation in
the 'q-representation' is equivalent to the B-polymer representation in
the 'p-representation', and conversely.
When studying a problem, it is important to decide from the beginning
which polymer representation (if any) one should be using (for instance in
the q-polarization). This has as a consequence an implication on which
variable is naturally 'quantized' (even if continuous): p for A and q for
B. There could be, for instance, a physical criterion for this choice. For
example, a fundamental symmetry could suggest that one representation is
more natural than the other. This has been recently noted by Chiou in
[10], where the Galileo group is investigated and where it is shown that
the B representation is better behaved.
In the other polarization, namely for wave functions of p, the picture
gets reversed: q is discrete for the A-representation, while p is for the
B-case. Let us end this section by noting that the procedure of obtaining
the polymer quantization by means of an appropriate limit of
Fock-Schrödinger representations might prove useful in more general
settings in field theory or quantum gravity.
III. POLYMER QUANTUM MECHANICS: KINEMATICS
En secciones anteriores hemos derivado lo que tenemos
las llamadas representaciones poliméricas A y B (en
la polarización) como casos limitantes de la representación ordinaria de Fock
Enviaciones. En esta sección, describiremos, sin
cualquier referencia a la representación de Schrödinger, la «ab-
representación de polímero de estrazo y luego hacer contacto
con sus dos posibles realizaciones, estrechamente relacionadas con la A
y B casos estudiados anteriormente. Lo que vamos a ver es que uno
de ellos (el caso A) corresponderá a la p-polarización
mientras que el otro corresponde a la representación q,
cuando se toma una decisión sobre el significado físico de
las variables.
Podemos empezar por definir kets abstractos etiquetados por
un número real μ. Estos pertenecerán al espacio de Hilbert.
Hpoly. A partir de estos estados, definimos un 'cilindro genérico
estados’ que corresponden a una elección de una colección finita de
Números μi-R con i = 1, 2,...., N. Asociados a esto
elección, hay N vectores i, por lo que podemos tomar un lineal
combinación de ellos
= 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1 = 1
ai i (15)
El producto interior del polímero entre los kets fundamentales
es administrado por,
=,μ (16)
That is, the kets are orthogonal to each other (when ν ≠ μ) and they are normalized (⟨μ|μ⟩ = 1). Immediately, this implies that, given any two vectors |φ⟩ = Σ_{j=1}^{M} b_j |ν_j⟩ and |ψ⟩ = Σ_{i=1}^{N} a_i |μ_i⟩, the inner product between them is given by,

⟨φ|ψ⟩ = Σ_k b̄_k a_k

where the sum is over k, which labels the intersection points between the sets of labels {ν_j} and {μ_i}. The Hilbert space Hpoly is the Cauchy completion of finite linear combinations of the form (15) with respect to the inner product (16). Hpoly is non-separable. There are two basic operators on this Hilbert space: the 'label operator' ε̂,

ε̂ |μ⟩ := μ |μ⟩

and the displacement operator ŝ(λ),

ŝ(λ) · |μ⟩ := |μ + λ⟩.
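To make the structure of the cylinder states concrete, here is a minimal numerical sketch (our own illustration, not from the paper; all names are hypothetical): a state of the form (15) is stored as a map from labels μ_i to coefficients a_i, and the inner product reduces to a sum over shared labels, as in ⟨φ|ψ⟩ = Σ_k b̄_k a_k.

```python
# Sketch of the polymer inner product on cylinder states.  A state
# sum_i a_i |mu_i> is a dict {mu_i: a_i}; <nu|mu> = delta_{nu,mu} means
# only labels present in BOTH states contribute.

def inner(phi, psi):
    """<phi|psi> = sum over common labels k of conj(b_k) * a_k."""
    return sum(phi[k].conjugate() * psi[k] for k in phi.keys() & psi.keys())

phi = {0.5: 1.0 + 0j, 2.0: 1j}          # |phi> = |0.5> + i |2.0>
psi = {0.5: 3.0 + 0j, 7.25: 1.0 + 0j}   # |psi> = 3 |0.5> + |7.25>

print(inner(phi, psi))   # only the shared label 0.5 contributes: (3+0j)
```

Orthogonality of distinct kets, however close their labels, is what makes Hpoly non-separable; in this toy model it shows up as `inner` returning 0 whenever no labels coincide exactly.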
The operator ε̂ is symmetric and the operators ŝ(λ) define a one-parameter family of unitary operators on Hpoly, whose adjoint is given by ŝ†(λ) = ŝ(−λ). This action is, however, discontinuous in λ, given that |μ⟩ and |μ + λ⟩ are always orthogonal, no matter how small λ is. Thus, there is no (Hermitian) operator that could generate ŝ(λ) by exponentiation.

So far we have given the abstract characterization of the Hilbert space; one would now like to make contact with concrete realizations, either as wave functions or by identifying the abstract operators with physical operators.
Suppose we have a system with a configuration space with coordinate given by q, and let p denote its canonical conjugate momentum. Suppose also that, for physical reasons, we decide that the configuration coordinate q will have some 'discrete character' (for instance, if it is to be identified with position, one could say that there is an underlying discreteness in position at small scales). How can we implement such a requirement by means of the polymer representation? There are two possibilities, depending on the choice of 'polarization' for the wave functions, namely, whether they will be functions of the configuration q or of the momentum p. Let us divide the discussion into two parts.
A. Momentum polarization
In this polarization, the states will be denoted by,

ψ(p) = ⟨p|ψ⟩

where

ψ_μ(p) = ⟨p|μ⟩ = e^(iμp/ℏ).

How are the operators ε̂ and ŝ(λ) then represented? Note that if we consider the multiplicative operator

V̂(λ) · ψ_μ(p) = e^(iλp/ℏ) e^(iμp/ℏ) = e^(i(μ+λ)p/ℏ) = ψ_(μ+λ)(p),

we see that the operator V̂(λ) corresponds precisely to the shift operator ŝ(λ). Thus we can also conclude that the operator p̂ does not exist. It is now easy to identify the operator q̂ with:

q̂ · ψ_μ(p) = −iℏ (∂/∂p) ψ_μ(p) = μ e^(iμp/ℏ) = μ ψ_μ(p),
namely, with the abstract operator ε̂. The reason we say that q̂ is discrete is that this operator has as its eigenvalue the label μ of the elementary state ψ_μ(p), and this label, even though it can take values in a continuum, is to be understood as a discrete set, given that the states are orthonormal for all values of μ. Since the states are now functions of p, the inner product (16) should be defined by a measure μ on the space on which the wave functions are defined. In order to know what these two objects are, namely the quantum 'configuration' space C and the corresponding measure¹, we have to make use of the tools available to us from the theory of C*-algebras. If we consider the operators V̂(λ), together with their natural product and the *-relation given by V̂*(λ) = V̂(−λ), they have the structure of an Abelian C*-algebra (with unit) A. We know from the representation theory of such objects that A is isomorphic to the space of continuous functions C⁰(Δ) on a compact space Δ, the spectrum of A. Any representation of A on a Hilbert space as multiplication operators will be on spaces of the form L²(Δ, dμ). That is, our quantum configuration space is the spectrum of the algebra, which in our case corresponds to the Bohr compactification R_b of the real line [11]. This space is a compact group and there is a natural probability measure defined on it, the Haar measure μ_H. Thus, our Hilbert space Hpoly will be isomorphic to the space,

Hpoly,p = L²(R_b, dμ_H)   (17)
In terms of the 'quasi-periodic functions' generated by ψ_μ(p), the inner product takes the form

⟨ψ_μ|ψ_λ⟩ := ∫_{R_b} dμ_H ψ̄_μ(p) ψ_λ(p) := lim_{L→∞} (1/2L) ∫_{−L}^{L} dp ψ̄_μ(p) ψ_λ(p) = δ_{μ,λ}   (18)

Note that in the p-polarization this characterization corresponds to the 'version A' of the polymer representation of Sec. II (with p and q interchanged).
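The limit defining this inner product can be checked numerically. The following sketch (our own, with ℏ = 1 and arbitrarily chosen labels, not values from the paper) averages ψ̄_μ ψ_λ over a large but finite interval:

```python
import numpy as np

# Average (1/2L) * integral_{-L}^{L} dp conj(psi_mu) psi_lam for the
# quasi-periodic functions psi_mu(p) = exp(i mu p), with hbar = 1.
# As L grows the average tends to delta_{mu,lam}.

def averaged_inner(mu, lam, L, n=400000):
    p = np.linspace(-L, L, n, endpoint=False)
    return np.exp(1j * (lam - mu) * p).mean()   # uniform grid: mean ~ (1/2L) * integral

same = averaged_inner(1.3, 1.3, L=1000.0)
diff = averaged_inner(1.3, 2.7, L=1000.0)
print(abs(same), abs(diff))   # ~1 and ~0 respectively
```

For μ ≠ λ the average decays like sin((λ−μ)L)/((λ−μ)L), so any fixed pair of distinct labels ends up orthogonal in the limit.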
B. q-polarization
Let us now consider the other polarization, in which the wave functions depend on the configuration coordinate q:

ψ(q) = ⟨q|ψ⟩.

The basis functions, which will now be called ψ̃_μ(q), should be, in a sense, dual to the functions ψ_μ(p) of the previous subsection. We can try to define them via a 'Fourier transform':

ψ̃_μ(q) := ⟨q|μ⟩ = ⟨q| ∫_{R_b} dμ_H |p⟩⟨p|μ⟩

which is given by

ψ̃_μ(q) := ∫_{R_b} dμ_H ⟨q|p⟩ ψ_μ(p) = ∫_{R_b} dμ_H e^(−ipq/ℏ) e^(ipμ/ℏ) = δ_{q,μ}   (19)
¹ Here we use the standard terminology of 'configuration space' to denote the domain of the wave function, even when in this case it corresponds to the physical momentum p.
That is, the basic objects in this representation are Kronecker deltas. This is precisely what we had found in Sec. II for the type B representation. How are the basic operators now represented, and what is the form of the inner product? Regarding the operators, we expect them to be represented in the opposite way to the p-polarization case above, while preserving the same features: p̂ does not exist (the derivative of the Kronecker delta is ill defined), but its exponentiated version V̂(λ) does:

V̂(λ) · ψ̃_μ(q) = ψ̃_(μ+λ)(q)

and the operator q̂, which now acts as multiplication, has as its eigenstates the functions ψ̃_μ(q) = δ_{μ,q}:

q̂ · ψ̃_μ(q) := μ ψ̃_μ(q).
What is now the nature of the quantum configuration space Q? And what is the measure dμ_q on it that defines the inner product, which should satisfy:

⟨ψ̃_μ(q), ψ̃_λ(q)⟩ = δ_{μ,λ}?

The answer comes from one of the characterizations of the Bohr compactification: we know that it is, in a precise sense, dual to the real line equipped with the discrete topology, R_d. Furthermore, the measure on R_d will be the 'counting measure'. In this way we recover the same properties we had in the previous characterization of the polymer Hilbert space. We can thus write:

Hpoly,x := L²(R_d, dμ_c)   (20)
This completes a precise construction of the type B polymer representation sketched in the previous section. Note that had we chosen the opposite physical situation, namely q, the configuration observable, to be the quantity without a corresponding operator, we would have obtained the opposite realization: the type A polymer representation in the q-polarization and the type B in the p-polarization. As we shall see, both scenarios have been considered in the literature.
So far we have focused our discussion only on the kinematical aspects of the quantization process. In the next section we consider the issue of dynamics and recall the approach that had been taken in the literature, before the question of the removal of the regulator was reexamined in [6].
IV. POLYMER QUANTUM MECHANICS: DYNAMICS
As we have seen, the construction of the polymer representation is rather natural and leads to a quantum theory with properties different from those of the usual Schrödinger counterpart, such as its non-separability, the non-existence of certain operators, and the existence of normalized eigenvectors that yield a precise value for one of the phase space coordinates. All this has been done without any regard for a Hamiltonian that endows the system with a dynamics, an energy, and so on.
Let us first consider the simplest case of a particle of mass m in a potential V(q), for which the Hamiltonian H takes the form,

H = p²/2m + V(q).

Suppose furthermore that the potential is given by a non-periodic function, such as a polynomial or a rational function. We can immediately see that a direct implementation of the Hamiltonian is out of our reach, for the simple reason that, as we have seen, in the polymer representation we can represent either q or p, but not both! What has been done so far in the literature? The simplest possible thing: approximate the non-existing term by a well defined function that can be quantized, and hope for the best. As we shall see in the next sections, there is more that one can do.
At this point an important decision also has to be made: which variable, q or p, should be regarded as 'discrete'? Once this choice is made, it implies that the other variable will not exist: if q is regarded as discrete, then p will not exist and we have to approximate the kinetic term p²/2m by something else; if p is to be the discrete quantity, then q will not be defined and we have to approximate the potential V(q). What happens with a periodic potential? In that case one might be modelling, say, a particle on a regular lattice, such as a phonon living on a crystal, and then the natural choice is to have q ill defined. The potential, on the other hand, will be well defined and no approximation is needed.
Both scenarios have been considered in the literature. For instance, when a quantum mechanical system was considered in [2], the position was chosen to be discrete, so p does not exist, and one is then in case A for the momentum polarization (or case B for the q-polarization). With this choice, it is the kinetic term that has to be approximated; once this is done, it is immediate to consider any potential, which will thus be well defined. On the other hand, in loop quantum cosmology (LQC), the standard choice is that the configuration variable is not defined [4]. This choice is made because LQC is regarded as the symmetric sector of full loop quantum gravity, where the connection (which is regarded as the configuration variable) cannot be promoted to an operator and one can only define its exponentiated version, namely the holonomy. In that case, the canonically conjugate variable, closely related to the volume, becomes 'discrete', just as in the full theory. This case is, however, different from the particle-in-a-potential example. We might first mention that the functional form of the Hamiltonian constraint that implements the dynamics has a different structure, but the more important difference lies in the fact that the system is constrained.
Let us return to the case of the particle in a potential and, for definiteness, start with the auxiliary kinematical framework in which q is discrete, p cannot be promoted to an operator, and thus we have to approximate the kinetic term p̂²/2m. How is this done? The standard prescription is to define, on the configuration space C, a regular 'graph' γ_{μ0}. This consists of a countable set of equidistant points, characterized by a parameter μ0 that is the (constant) separation between them. The simplest example would be to consider the set γ_{μ0} = {q ∈ R : q = n μ0, ∀ n ∈ Z}. This means that the basic kets to be considered will correspond precisely to labels μ_n belonging to the graph γ_{μ0}, that is, μ_n = n μ0. Thus, we shall only consider states of the form,

|ψ⟩ = Σ_n b_n |μ_n⟩.   (21)
This 'small' Hilbert space H_{γ_{μ0}}, the graph Hilbert space, is a subspace of the 'large' polymer Hilbert space Hpoly, but it is separable. The condition for a state of the form (21) to belong to H_{γ_{μ0}} is that the coefficients b_n satisfy: Σ_n |b_n|² < ∞.
Let us now consider the kinetic term p̂²/2m. We have to approximate it by means of trigonometric functions, which can be built out of functions of the form e^(iλp/ℏ). As we have seen in previous sections, these functions can indeed be promoted to operators, acting as translation operators on the kets. If we want to remain on the graph γ_{μ0} and not create 'new points', we are restricted to operators that displace the kets by just the right amount. That is, we want the basic shift operator V̂(λ) to be such that it maps the ket with label |μ_n⟩ to the next ket, namely |μ_{n+1}⟩. This can indeed be achieved by fixing, once and for all, the value of the allowed parameter λ to be λ = μ0. We then have,

V̂(μ0) · |μ_n⟩ = |μ_n + μ0⟩ = |μ_{n+1}⟩

which is what we wanted. This basic 'shift operator' will be the building block for approximating any (polynomial) function of p. To do that, we notice that the function p can be approximated by,

p ≈ (ℏ/μ0) sin(μ0 p/ℏ) = (ℏ/2iμ0) (e^(iμ0 p/ℏ) − e^(−iμ0 p/ℏ)),

where the approximation is good for p ≪ ℏ/μ0. Thus, one can define a regulated operator p̂_{μ0}, which depends on the 'scale' μ0, as:

p̂_{μ0} · |μ_n⟩ := (ℏ/2iμ0) [V̂(μ0) − V̂(−μ0)] · |μ_n⟩ = (ℏ/2iμ0) (|μ_{n+1}⟩ − |μ_{n−1}⟩)   (22)
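The quality of the replacement p → (ℏ/μ0) sin(μ0 p/ℏ) is easy to quantify. A small numerical sketch (our own, with ℏ = 1 and an arbitrarily chosen μ0, not values from the paper):

```python
import numpy as np

# Relative error of the approximation p ~ (1/mu0) sin(mu0 p), hbar = 1.
# It is excellent for p << 1/mu0 and breaks down as p approaches 1/mu0.

mu0 = 0.01                                # lattice scale; 1/mu0 = 100
approx = lambda p: np.sin(mu0 * p) / mu0

err_small = abs(approx(1.0) - 1.0) / 1.0        # p = 1, well below 1/mu0
err_large = abs(approx(120.0) - 120.0) / 120.0  # p = 120, beyond 1/mu0
print(err_small, err_large)   # tiny versus order one
```

This makes the regime of validity p ≪ ℏ/μ0 quantitative: the error is of order (μ0 p/ℏ)²/6, so it grows quadratically with p.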
To regulate the operator p̂², there are (at least) two possibilities: to compose the operator p̂_{μ0} with itself, or to define a new approximation. The operator p̂_{μ0} · p̂_{μ0} has the feature that it displaces the states by two steps in the graph on either side. There is, however, another operator that involves shifting only once:

p̂²_{μ0} · |μ_n⟩ := (ℏ²/μ0²) [2 − V̂(μ0) − V̂(−μ0)] · |μ_n⟩ = (ℏ²/μ0²) (2|μ_n⟩ − |μ_{n+1}⟩ − |μ_{n−1}⟩)   (23)

which corresponds to the approximation p² ≈ (2ℏ²/μ0²) (1 − cos(μ0 p/ℏ)), valid also in the regime p ≪ ℏ/μ0. With these considerations, one can define the operator Ĥ_{μ0}, the Hamiltonian at scale μ0, which in practice 'lives' on the space H_{γ_{μ0}}, as

Ĥ_{μ0} := (1/2m) p̂²_{μ0} + V̂(q),   (24)
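Restricted to the graph, (23)-(24) make Ĥ_{μ0} a tridiagonal matrix in the basis {|μ_n⟩}. As a sketch (our own illustration, with ℏ = m = 1 and a harmonic potential V(q) = q²/2, which are not choices made in the paper), one can truncate the graph and diagonalize numerically; the low eigenvalues come out close to the Schrödinger values n + 1/2:

```python
import numpy as np

# H_{mu0} on the graph q_n = n*mu0, truncated to |n| <= N (hbar = m = 1):
#   kinetic (23): (1/(2 mu0^2)) * (2 b_n - b_{n+1} - b_{n-1})
#   potential  : V(n*mu0) * b_n, here V(q) = q^2 / 2 (illustrative choice)

mu0, N = 0.05, 400
q = np.arange(-N, N + 1) * mu0
diag = 1.0 / mu0**2 + 0.5 * q**2          # on-site kinetic piece + potential
off = np.full(2 * N, -0.5 / mu0**2)       # hopping to neighbouring kets
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(E[:3])   # close to 0.5, 1.5, 2.5
```

The residual offset from the exact Schrödinger levels is the discretization error of (23), of order μ0², and shrinks as the scale is refined.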
which is a well defined, symmetric operator on H_{γ_{μ0}}. Notice that the operator is also defined on Hpoly, but there its physical interpretation is problematic. For instance, it turns out that the expectation value of the kinetic term calculated on most states (states that are not tailored to the exact value of the parameter μ0) is zero. Even if one takes a state that yields 'reasonable' expectation values of the μ0-kinetic term and uses it to compute the expectation value of the kinetic term corresponding to a slight perturbation of the parameter μ0, one obtains zero. This problem, and others that arise when working on Hpoly, forces one to assign a physical interpretation to the Hamiltonian Ĥ_{μ0} only when its action is restricted to the subspace H_{γ_{μ0}}.
Let us now explore the form the Hamiltonian takes in the two possible polarizations. In the q-polarization, the basis, labelled by n, is given by the functions χ_n(q) = δ_{q,μ_n}. That is, the wave functions have support only on the set γ_{μ0}. Alternatively, one can think of a state as completely characterized by the 'Fourier coefficients' a_n: ψ(q) ↔ a_n, where a_n is the value the wave function ψ(q) takes at the point q = μ_n = n μ0. Thus, the Hamiltonian takes the form of a difference equation when acting on a general state, and solving the time independent Schrödinger equation Ĥ ψ = E ψ amounts to solving the difference equation for the coefficients a_n.
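Written out with the kinetic term (23) and ℏ restored, the difference equation in question reads (our transcription, term by term consistent with Eqs. (23)-(24) above):

```latex
\frac{\hbar^{2}}{2m\mu_0^{2}}\,\bigl[\,2\,a_n - a_{n+1} - a_{n-1}\,\bigr]
 \;+\; V(n\mu_0)\,a_n \;=\; E\,a_n ,
```

with a_n = ψ(nμ0), so each energy eigenstate is determined by a three-term recursion on the graph.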
The momentum polarization has a different structure. In this case, the operator p̂²_{μ0} acts as a multiplication operator,

p̂²_{μ0} · ψ(p) = (2ℏ²/μ0²) [1 − cos(μ0 p/ℏ)] ψ(p)   (25)

while the operator corresponding to q is represented as a derivative operator,

q̂ · ψ(p) := iℏ ∂_p ψ(p).
For a generic potential V(q), the corresponding operator has to be defined by means of spectral theory, now on a circle. Why on a circle? For the simple reason that, by restricting ourselves to a regular graph γ_{μ0}, the functions of p that preserve it (when acting as shift operators) are of the form e^(i m μ0 p/ℏ) for m an integer. That is, what we have are Fourier modes, labelled by m, of period 2πℏ/μ0 in p. Can we then conclude that the phase space variable p has been compactified? The answer is in the affirmative. The inner product on the periodic functions ψ_{μ0}(p) of p that comes from the full Hilbert space Hpoly,

⟨φ(p)|ψ(p)⟩_poly = lim_{L→∞} (1/2L) ∫_{−L}^{L} dp φ̄(p) ψ(p),

is exactly equivalent to the inner product on the circle given by the uniform measure,

⟨φ(p)|ψ(p)⟩_{μ0} = (μ0/2πℏ) ∫_{−πℏ/μ0}^{πℏ/μ0} dp φ̄(p) ψ(p),

with p ∈ (−πℏ/μ0, πℏ/μ0). As long as one restricts attention to the graph γ_{μ0}, one can work on this separable Hilbert space H_{γ_{μ0}} of square integrable functions on S¹.
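The claimed equivalence of the two inner products can be verified on the Fourier modes themselves. A numerical sketch (our own, with ℏ = 1 and an arbitrary μ0 = 0.5):

```python
import numpy as np

# Circle inner product with the uniform measure on p in (-pi/mu0, pi/mu0),
# hbar = 1: for modes e^{i m mu0 p} it gives delta_{m,m'}, matching the
# L -> infinity average over the whole line.

mu0 = 0.5
P = np.pi / mu0                                   # half-period in p
p = np.linspace(-P, P, 200000, endpoint=False)

def circle_inner(m1, m2):
    # (mu0 / 2 pi) * integral dp  conj(e^{i m1 mu0 p}) e^{i m2 mu0 p}
    return np.exp(1j * (m2 - m1) * mu0 * p).mean()

print(abs(circle_inner(3, 3)), abs(circle_inner(3, 5)))   # 1 and ~0
```

On these modes the average over one period already equals the L → ∞ average over the whole line, which is why the compactified description loses no information as long as one stays on the graph.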
One can immediately see the limitations of this description. If the mechanical system to be quantized is such that its orbits reach values of the momentum p that are not small compared with πℏ/μ0, then the approximation will be very poor, and we expect neither the effective classical description nor its quantization to be close to the standard one. If, on the other hand, one always stays within the region in which the approximation can be regarded as reliable, then both the classical and the quantum descriptions should approximate the standard ones. What 'close to the standard description' exactly means needs, of course, some further clarification. In particular, one is assuming the existence of the usual Schrödinger representation in which the system has a behavior that is also consistent with observations. If this is the case, the natural question is: how can we approach such a description from the polymer picture? Is there a fine enough graph γ_{μ0} that will approximate the system in such a way that all observations are indistinguishable? Or better yet, can we define a procedure, involving a refinement of the graph γ_{μ0}, such that one recovers the standard picture?
It could also happen that a continuum limit can be defined but does not coincide with the 'expected one'. And there might be physical systems for which there is no standard description, or for which it simply does not make sense. In those cases, can the polymer representation, if it exists, provide the correct physical description of the system under consideration? For instance, if there is a physical limitation to the minimum scale set by μ0, as could be the case in a quantum theory of gravity, then the polymer description would provide a true bound on the value of certain quantities, such as p in our example. This could be the case in loop quantum cosmology, where there is a minimum value for the volume (coming from the full theory), and the phase space points near the 'singularity' lie in the region where the approximation induced by the scale μ0 departs from the standard classical description. If in that case the polymer quantum system is regarded as more fundamental than the classical system (or its standard Wheeler-DeWitt quantization), then one would interpret these discrepancies in behavior as a signal of the breakdown of the classical description (or of its 'naive' quantization).
In the next section we present a method for removing the regulator μ0, which was introduced as an intermediate step in the construction of the dynamics. More precisely, we shall consider the construction of a continuum limit of the polymer description by means of a renormalization procedure.
V. THE CONTINUUM LIMIT
This section has two parts. In the first one we motivate the need for a precise notion of the continuum limit of the polymer representation, explaining why the most direct, naive approach does not work. In the second part, we present the main ideas and results of the paper [6], where the Hamiltonian and the physical Hilbert space of polymer quantum mechanics are constructed as a continuum limit of effective theories, following Wilson's renormalization group ideas. The resulting physical Hilbert space turns out to be unitarily isomorphic to the ordinary one, Hs = L²(R, dq), of the Schrödinger theory.
Before describing the results of [6] we should discuss the precise meaning of reaching a theory in the continuum. For concreteness, let us consider the type B representation in the q-polarization. That is, states are functions of q and the orthonormal basis ψ̃_μ(q) is given by characteristic functions with support on q = μ. Let us now suppose we are given a Schrödinger state Ψ(q) ∈ Hs = L²(R, dq). What is the relation between Ψ(q) and a state in Hpoly,x? We are also interested in the opposite question, that is, we would like to know whether there is a preferred state in Hs that is approximated by an arbitrary state ψ(q) in Hpoly,x. The first obvious observation is that a Schrödinger state Ψ(q) does not belong to Hpoly,x, since it would have an infinite norm. To see this, note that even though the candidate state can be formally expanded in the basis as,

Ψ(q) = Σ_μ Ψ(μ) ψ̃_μ(q),

where the sum is over the parameter μ ∈ R, its associated norm in Hpoly,x would be:

||Ψ(q)||²_poly = Σ_μ |Ψ(μ)|² → ∞,

which blows up. Note that in order to define a mapping P : Hs → Hpoly,x there is a huge ambiguity, since the values of the function Ψ(q) are needed in order to expand the polymer wave function. We can thus only define a mapping on a dense subset D of Hs where the values of the functions are well defined (recall that in Hs the value of a function at a given point has no meaning, since states are equivalence classes of functions). We could, for instance, ask that the mapping be defined for the representatives of the equivalence classes in Hs that are piecewise continuous. From now on, when we refer to an element of the space Hs, we shall be referring to one of those representatives.
Notice then that an element of Hs does define an element of Cyl*γ, the dual of the space Cylγ of cylinder functions with support on the (finite) lattice γ = {μ_1, μ_2, ..., μ_N}, in the following way:

Ψ(q) : Cylγ → C

such that

Ψ(q)[ψ(q)] = (Ψ|ψ⟩ := Σ_i Ψ̄(μ_i) ψ(μ_i).

Note that this mapping can be seen as consisting of two parts: first, a projection Pγ : Cyl* → Cylγ such that Pγ(Ψ) = Ψγ(q) := Σ_i Ψ(μ_i) ψ̃_{μ_i}(q) ∈ Cylγ. The state Ψγ is sometimes referred to as the 'shadow of Ψ(q) on the lattice γ'. The second step is then to take the inner product between the shadow Ψγ(q) and the state ψ(q) with respect to the polymer inner product, ⟨Ψγ|ψ⟩_γ.
This inner product is now well defined. Notice that for any given lattice γ the corresponding projector Pγ can be intuitively interpreted as a kind of 'coarse graining map' from the continuum to the lattice γ. In terms of functions of q, the projection replaces a continuous function defined on R with a function over the lattice γ ⊂ R, which is a discrete set, simply by restricting Ψ to γ. The finer the lattice, the more points we have on the curve. As we shall see in the second part of this section, there is indeed a precise notion of coarse graining that implements this intuitive idea in a concrete fashion. In particular, we shall have to replace the lattice γ with a decomposition of the real line into intervals (having the lattice points as end points).
Let us now consider a system in the polymer representation in which a particular lattice γ0 was chosen, say with points of the form {q_k ∈ R : q_k = k a0, ∀ k ∈ Z}, namely a uniform lattice with spacing equal to a0. In this case, any Schrödinger wave function (of the type we consider) will have a unique shadow on the lattice γ0. If we refine the lattice γ0 → γn by dividing each interval into 2^n new intervals of length a_n = a0/2^n, we obtain new shadows with more and more points on the curve. Intuitively, by refining the graph infinitely we would recover the original function Ψ(q). However, even though at every finite step the corresponding shadow has a finite norm in the polymer Hilbert space, the norm grows unboundedly and the limit cannot be taken, precisely because we cannot embed Hs into Hpoly. Suppose now that we are interested in the opposite process, namely starting from a polymer theory on a lattice and asking for the 'continuum wave function' that is best approximated by a wave function over a graph. Suppose, furthermore, that we want to consider the limit of finer and finer graphs. In order to give precise answers to these (and other) questions we need to introduce some new technology that will allow us to overcome these apparent difficulties. In the remainder of this section we recall these constructions, for the benefit of the reader. Details can be found in [6] (which is an application of the general formalism discussed in [9]).
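The norm growth under refinement described above is easy to exhibit numerically. A sketch (our own, taking a unit Gaussian as the Schrödinger state): the polymer norm² of the shadow on a lattice of spacing a behaves like (1/a) ∫ |Ψ|² dq, roughly doubling with every refinement.

```python
import numpy as np

# Shadow of Psi(q) = pi^{-1/4} exp(-q^2/2) on the uniform lattice q = k*a:
# its polymer norm^2 is sum_k |Psi(k a)|^2, which diverges as a -> 0.

def shadow_norm_sq(a, K=4000):
    k = np.arange(-K, K + 1)
    psi = np.pi**-0.25 * np.exp(-(k * a)**2 / 2.0)
    return float(np.sum(psi**2))

norms = [shadow_norm_sq(1.0 / 2**n) for n in range(5)]
print(norms)   # grows roughly as 2^n: the limit state has infinite norm
```

This is the concrete obstruction to embedding Hs into Hpoly: each shadow is a perfectly good polymer state, but the sequence of shadows has no polymer-normalizable limit.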
The starting point of this construction is the concept of a scale C, which allows us to define the effective theories and the notion of the continuum limit. In our case, a scale is a decomposition of the real line into a union of closed-open intervals that cover the whole line and do not intersect. Intuitively, we are shifting the emphasis from the lattice points to the intervals defined by those same points, with the objective of approximating continuous functions defined on R by functions that are constant on the intervals defined by the lattice. To be precise, for each scale Cn we define an embedding from Hpoly to Hs by means of step functions:

Σ_m Ψ(m a_n) χ_{m a_n}(q) → Σ_m Ψ(m a_n) χ_{α_m}(q) ∈ Hs,

with χ_{α_m}(q) the characteristic function of the interval α_m = [m a_n, (m+1) a_n). Thus, the shadows (living on the lattice) were just an intermediate step in the construction of the approximating function; this function is piecewise constant and can be written as a linear combination of step functions, with the coefficients provided by the shadows.
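To see the embedding at work, here is a sketch (our illustration, again with a unit Gaussian) of the piecewise constant approximation at scale Cn and its L² distance from the original function, which shrinks as the intervals are halved:

```python
import numpy as np

# Embed the shadow at scale C_n as the step function taking the value
# Psi(m a_n) on the interval [m a_n, (m+1) a_n), and measure its L^2
# distance to the smooth Psi on a fine reference grid.

def l2_error(a, L=10.0, fine=200000):
    q = np.linspace(-L, L, fine, endpoint=False)
    psi = np.pi**-0.25 * np.exp(-q**2 / 2.0)
    left = np.floor(q / a) * a                 # left end point m*a_n of the interval
    step = np.pi**-0.25 * np.exp(-left**2 / 2.0)
    return float(np.sqrt(np.sum((psi - step)**2) * (q[1] - q[0])))

errs = [l2_error(1.0 / 2**n) for n in range(6)]
print(errs)   # decreases roughly by half with each refinement of the scale
```

Unlike the shadows themselves, these step functions live in Hs with uniformly bounded norm, which is what makes a limit along refinements meaningful.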
The challenge now is to define, in an appropriate sense, how all aspects of the theory can be approximated by means of these piecewise constant functions. The strategy is that, for any given scale, one can define an effective theory by approximating the kinetic operator by a combination of the translation operators that shift between the vertices of the given decomposition, in other words, by a periodic function in p. As a result, one has a collection of effective theories at given scales that are mutually related by coarse graining maps. This framework was developed in [6]. For the convenience of the reader, we briefly recall part of it.
Let us denote the polymer kinematical Hilbert space at scale Cn by HCn, and its basis elements by e_{αi,Cn}, where αi = [i a_n, (i+1) a_n) ∈ Cn. By construction, this basis is orthonormal. The basis elements of the dual Hilbert space H*Cn are denoted by ω_{i,Cn}; they are also orthonormal. The states ω_{i,Cn} have a simple action on Cyl: ω_{i,Cn}(δ_{x0,q}) equals one if x0 lies in the interval αi of Cn, and zero if it does not.
Given any m ≤ n, we define d*_{m,n} : H*_{Cn} → H*_{Cm} as the 'coarse graining' map between the dual Hilbert spaces, which sends part of the elements of the dual basis to zero while keeping the information of the rest:

d*_{m,n}(ω_{i,Cn}) = ω_{j,Cm} if i = j 2^(n−m); otherwise, d*_{m,n}(ω_{i,Cn}) = 0.
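As a small sketch (our own, over a hypothetical finite index range), the action of d*_{m,n} on the components of a covector keeps exactly the entries whose index is a multiple of 2^(n−m), relabelling i = j·2^(n−m) → j:

```python
# Coarse graining of dual-basis components: omega_{i,C_n} survives only
# when i = j * 2^(n-m), and is then relabelled as omega_{j,C_m}.

def coarse_grain(c, n, m):
    """Components {i: c_i} at scale C_n  ->  components at scale C_m (m <= n)."""
    step = 2 ** (n - m)
    return {i // step: c[i] for i in c if i % step == 0}

c_n = {0: 1.0, 1: 0.5, 2: -0.25, 3: 0.1, 4: 2.0}   # covector at scale C_n
print(coarse_grain(c_n, n=3, m=1))   # keeps i = 0 and i = 4 -> {0: 1.0, 1: 2.0}
```

The map simply forgets the components that resolve structure finer than the coarser scale, which is what makes the effective theories at different scales comparable.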
At each scale the corresponding effective theory is given by a Hamiltonian Hn. These Hamiltonians will be treated as quadratic forms, hn : HCn → R, given by

hn(ψ) = λ²_{Cn} (ψ, Hn ψ),   (27)

where λ²_{Cn} is a normalization factor. We will see later that this rescaling of the inner product is necessary in order to guarantee the convergence of the renormalized theory. The completely renormalized theory at a given scale is obtained as

h^{ren}_m := lim_{n→∞} d*_{m,n} h_n,   (28)

and the renormalized Hamiltonians are compatible with each other, in the sense that

d*_{m,n} h^{ren}_n = h^{ren}_m.
In order to analyze the conditions for the convergence in (28), let us express the Hamiltonian in terms of its eigencovectors and eigenvalues. We will work with effective Hamiltonians that have a purely discrete spectrum (labelled by ν): Hn · ψ_{ν,Cn} = E_{ν,Cn} ψ_{ν,Cn}. We will also introduce, as an intermediate step, a cut-off in the energy levels. The origin of this cut-off lies in the approximation of the Hamiltonian of our system at a given scale by the Hamiltonian of a periodic system in a regime of small energies, as we explained before. Thus, we can write

h^{νcut-off}_m = Σ_{ν=0}^{νcut-off} E_{ν,Cm} ψ_{ν,Cm} ⊗ ψ_{ν,Cm}, (29)

where the eigencovectors ψ_{ν,Cm} are normalized according to the inner product rescaled by 1/2^m, and the cut-off can vary up to a scale-dependent bound, νcut-off ≤ νmax(Cm). The Hilbert space of covectors together with such an inner product will be called H^ren_Cm.
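The cut-off Hamiltonian (29) is a truncated spectral sum, so it can be sketched directly as a quadratic form built from a finite list of eigenvalues and eigencovectors. The names and the toy data below are purely illustrative:

```python
import numpy as np

def cutoff_quadratic_form(energies, eigvecs, nu_cutoff):
    """Return the quadratic form
        h(psi) = sum_{nu <= nu_cutoff} E_nu |(psi_nu, psi)|^2,
    where eigvecs[nu] is the (already renormalized) eigencovector
    at the given scale, represented as a coefficient array."""
    def h(psi):
        total = 0.0
        for nu in range(nu_cutoff + 1):
            amp = np.vdot(eigvecs[nu], psi)   # (psi_nu, psi)_ren
            total += energies[nu] * abs(amp) ** 2
        return total
    return h

# toy example: keep only the two lowest levels of a 3-level scale
E = np.array([0.5, 1.5, 2.5])
vecs = np.eye(3)                 # orthonormal eigencovectors
h = cutoff_quadratic_form(E, vecs, nu_cutoff=1)
value = h(np.array([1.0, 1.0, 0.0]))   # 0.5*1 + 1.5*1
```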
In the presence of a cut-off, the convergence of the microscopically corrected Hamiltonians, equation (28), is equivalent to the existence of the two following limits. The first is the convergence of the energy levels,

lim_{n→∞} E_{ν,Cn} = E^ren_ν. (30)

The second is the existence of the completely renormalized eigencovectors,

ψ^ren_{ν,Cm} := lim_{n→∞} d*_{m,n} ψ_{ν,Cn}. (31)

We clarify that the existence of the above limit means that the coarse-grained eigencovectors converge when evaluated on each vector of H_Cm. We note that this pointwise convergence, if it can be achieved at all, will require a fine tuning of the normalization factors 1/2^n.
We now turn to the question of the continuum limit of the renormalized covectors. First we can ask for the existence of the limit

ψ^ren_ν(δ_{x0,q}) := lim_{n→∞} ψ^ren_{ν,Cn}([δ_{x0,q}]_{Cn}) (32)

for any δ_{x0,q} ∈ Cyl. When these limits exist there is a natural action of the eigencovectors in the continuum limit. Next we consider another notion of the continuum limit of the renormalized eigencovectors.

When the completely renormalized eigencovectors exist, they form a compatible collection,

d*_{m,n} ψ^ren_{ν,Cn} = ψ^ren_{ν,Cm}.

A sequence of d*-compatible normalizable covectors defines an element of

H^ren := lim_← H^ren_Cn, (33)

the projective limit of the renormalized spaces of covectors. The inner product in this space is defined by

({ψ_Cn}, {Φ_Cn})_ren := lim_{n→∞} (ψ_Cn, Φ_Cn)^ren_Cn.

The natural inclusion of Cyl in H^ren is given by the antilinear map that assigns to any ψ ∈ Cyl the d*-compatible collection

Shad_Cn(ψ) := Σ_i ψ̄(L(αi)) ω_{i,Cn},

where L(αi) denotes the left end point of the interval αi. Shad_Cn(ψ) will be called the shadow of ψ at scale Cn, and it acts on Cyl as a piecewise constant function. Clearly other types of test functions, such as Schwartz functions, are also naturally included in H^ren. In this context a shadow is a state of the effective theory that approximates a state in the continuum theory.
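As an illustration of the shadow map, one can sample a continuum wave function at the left end points of the intervals of Cn to obtain its piecewise-constant shadow at that scale. The discretization conventions below (unit interval, Gaussian test state) are assumptions of this sketch only:

```python
import numpy as np

def shadow(psi, n, L=1.0):
    """Shadow of a continuum wave function at scale C_n:
    one coefficient per interval alpha_i = [i*a_n, (i+1)*a_n),
    evaluated at the left end point L(alpha_i)."""
    a_n = L / 2 ** n
    left_endpoints = a_n * np.arange(2 ** n)
    return psi(left_endpoints)

# a Gaussian test state centered in the unit interval
gaussian = lambda x: np.exp(-((x - 0.5) ** 2) / 0.02)
shad3 = shadow(gaussian, n=3)   # 8 piecewise-constant coefficients
shad5 = shadow(gaussian, n=5)   # finer shadow, 32 coefficients
```

Refining the scale produces finer and finer piecewise-constant approximations of the same continuum state, which is the sense in which a shadow is an effective-theory approximation.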
Since the inner product in H^ren is degenerate, the physical Hilbert space is defined as

H'_phys := H^ren / ker(·,·)_ren,

with H_phys its Cauchy completion. Whether the physical Hilbert space is isomorphic to the Schrödinger Hilbert space Hs or not is determined by the normalization factors 1/2^n, which are obtained from the conditions demanding compatibility of the dynamics of the effective theories at different scales. The dynamics of the system under consideration selects the continuum limit.
Let us now return to the definition of the Hamiltonian in the continuum limit. First consider the continuum limit of the Hamiltonian (with cut-off) in the sense of its pointwise convergence as a quadratic form. It turns out that if the limit of equation (32) exists for all the eigencovectors allowed by the cut-off, we have

h^{νcut-off}_ren : H_{poly,x} → R defined by

h^{νcut-off}_ren(δ_{x0,q}) := lim_{n→∞} h^{νcut-off,ren}_n([δ_{x0,q}]_{Cn}). (34)

This continuum Hamiltonian quadratic form can be coarse grained to any scale and, as can be expected, it yields the completely renormalized Hamiltonian quadratic form at that scale. However, this is not a completely satisfactory continuum limit because we cannot remove the auxiliary cut-off νcut-off. If we tried, as we include more and more eigencovectors in the Hamiltonian, the calculations done at any given scale would diverge, and doing them in the continuum is just as divergent. Below we explore a more successful path.

We can use the renormalized inner product to induce an action of the cut-off Hamiltonians on H^ren,

h^{νcut-off}_ren({ψ_Cn}) := lim_{n→∞} h^{νcut-off,ren}_n((ψ_Cn, ·)^ren_Cn),

where we have used the fact that (ψ_Cn, ·)^ren_Cn ∈ H*_Cn. The existence of this limit is trivial because the renormalized Hamiltonians are finite sums and the limit exists term by term.

These cut-off Hamiltonians descend to the physical Hilbert space,

h^{νcut-off}_ren([{ψ_Cn}]) := h^{νcut-off}_ren({ψ_Cn})

for any representative {ψ_Cn} ∈ [{ψ_Cn}] ∈ H'_phys.

Finally, we can address the question of the removal of the cut-off. The Hamiltonian h_ren : H_phys → R is defined by the limit

h_ren := lim_{νcut-off→∞} h^{νcut-off}_ren

whenever the limit exists; its corresponding Hermitian form on H_phys is defined whenever this limit exists. This concludes our presentation of the main results of [6]. Let us now consider several examples of systems for which the continuum limit can be investigated.
VI. EXAMPLES

In this section we develop several examples of systems that have been treated with the polymer quantization. These examples are simple quantum mechanical systems, such as the simple harmonic oscillator and the free particle, as well as a quantum cosmological model known as loop quantum cosmology.
A. The Simple Harmonic Oscillator

In this part, let us consider the example of a simple harmonic oscillator (SHO) with parameters m and ω, classically described by the Hamiltonian

H = p²/2m + (1/2) mω²x².

Recall that from these parameters one can define a length scale D = √(ℏ/mω). In the standard treatment one uses this scale to define a complex structure JD (and from it an inner product), as we have described in detail, which uniquely selects the standard Schrödinger representation.

At scale Cn we have an effective Hamiltonian for the simple harmonic oscillator (SHO) given by

H_Cn = (ℏ²/(m a_n²)) [1 − cos(a_n p/ℏ)] + (1/2) mω²x². (35)

If we interchange position and momentum, this Hamiltonian is exactly that of a pendulum of mass m, length l and subject to a constant gravitational field g,

H_Cn = p_θ²/(2ml²) + mgl(1 − cos θ),

where those quantities are related to our system by

θ = a_n p/ℏ, p_θ = −(ℏ/a_n) x, l = ℏ/(mω a_n), g = ℏω/(m a_n).

That is, we are approximating, at each scale Cn, the SHO by a pendulum. There is, however, an important difference. From our knowledge of the pendulum system, we know that the quantum system will have an energy spectrum with two different asymptotic behaviors, that of the SHO for low energies and that of the planar rotor at the high end, corresponding to oscillating and rotating solutions, respectively2. As we refine our scale, and both the length of the pendulum and the height of the periodic potential increase, we expect to have an increasing number of oscillating states (for a given pendulum system, there is only a finite number of such states). Therefore, it is justified to consider the cut-off in the energy eigenvalues, as described in the last section, given that we only expect a finite number of states of the pendulum to approximate SHO eigenstates. With these considerations in mind, the relevant question is whether the conditions for the continuum limit to exist are satisfied. This question has been answered in the affirmative in [6]. What was shown there is that the eigenvalues and eigenfunctions of the discrete systems, which have a discrete and non-degenerate spectrum, approximate those of the continuum, namely, of the standard harmonic oscillator, when the inner product is renormalized by a factor 1/2^n. This convergence implies that the continuum limit exists as we understand it. Let us now consider the simplest possible system, a free particle, which has nevertheless the particular feature that its energy spectrum is continuous.

2 Note that both types of solutions are closed in phase space. This is the reason behind the purely discrete spectrum. The distinction we are making is between those solutions inside the separatrix, which we call oscillating, and those above it, which we call rotating.
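The convergence statement can be checked numerically: diagonalizing the polymer SHO Hamiltonian (35) on finer and finer lattices, the low-lying eigenvalues approach the Schrödinger values (ν + 1/2)ℏω. A sketch in units ℏ = m = ω = 1; the finite truncation range x_max is an assumption of the sketch, not part of the construction in [6]:

```python
import numpy as np

def polymer_sho_spectrum(a, x_max=10.0, k=3):
    """Lowest k eigenvalues of the polymer SHO Hamiltonian
    (hbar = m = omega = 1) on the lattice x_j = j*a, |x_j| <= x_max.
    The kinetic term (1/a^2)(1 - cos(a p)) becomes a hopping matrix,
    since cos(a p) translates the wave function by one lattice site
    in each direction."""
    j = np.arange(-int(x_max / a), int(x_max / a) + 1)
    x = a * j
    N = len(x)
    H = np.diag(1.0 / a**2 + 0.5 * x**2)       # diagonal part
    hop = -0.5 / a**2 * np.ones(N - 1)         # -(1/2a^2) translations
    H += np.diag(hop, 1) + np.diag(hop, -1)
    return np.sort(np.linalg.eigvalsh(H))[:k]

coarse_levels = polymer_sho_spectrum(a=1.0)
fine_levels = polymer_sho_spectrum(a=0.1)
# the fine-lattice levels should sit close to 0.5, 1.5, 2.5
```

The leading polymer correction to the ground state energy is of order a², so refining the lattice visibly tightens the agreement with the Schrödinger spectrum.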
B. Polymer Free Particle

In the limit ω → 0, the Hamiltonian of the simple harmonic oscillator (35) goes to the Hamiltonian of a free particle, and the corresponding time-independent Schrödinger equation, in the p-polarization, is given by

[(ℏ²/(m a_n²)) (1 − cos(a_n p/ℏ)) − E_Cn] ψ(p) = 0,

where we now have p ∈ S¹, with p ∈ (−πℏ/a_n, πℏ/a_n]. Thus, we have

E_Cn = (ℏ²/(m a_n²)) (1 − cos(a_n p/ℏ)) ≤ E_{Cn,max} = 2ℏ²/(m a_n²). (36)

At each scale, the energy of the particle we can describe is bounded from above, and the bound depends on the scale. Note that in this case the spectrum is continuous, which implies that the ordinary eigenfunctions of the Hamiltonian are not normalizable. This imposes a bound on the value the energy of the particle can have, in addition to the bounds on the momentum due to its 'compactification'.
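The scale-dependent bound in (36) is easy to make concrete numerically (ℏ = m = 1; the numbers are illustrative): for small lattice spacing the polymer dispersion is close to p²/2m, and it saturates at 2/a² at the edge of the momentum circle.

```python
import numpy as np

def polymer_energy(p, a):
    """Free-particle dispersion at the scale with lattice spacing a
    (hbar = m = 1): E = (1/a^2)(1 - cos(a p)), p in (-pi/a, pi/a]."""
    return (1.0 - np.cos(a * p)) / a**2

a = 0.01
p = 3.0
E_poly = polymer_energy(p, a)
E_schr = p**2 / 2.0        # continuum (Schroedinger) dispersion
E_max = 2.0 / a**2         # scale-dependent upper bound from (36)
```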
Let us first look for eigensolutions of the time-independent Schrödinger equation, that is, for energy eigenstates. In the case of the ordinary free particle, these correspond to constant-momentum plane waves of the form e^{±(i/ℏ)px}, such that the ordinary dispersion relation p²/2m = E is satisfied. These plane waves are not square integrable and do not belong to the ordinary Hilbert space of the Schrödinger theory, but they are still useful for extracting information about the system. For the polymer free particle we have

ψ_Cn(p) = c1 δ(p − P_Cn) + c2 δ(p + P_Cn),

where P_Cn is a solution of the above equation for a fixed value of E_Cn. That is,

P_Cn = P(E_Cn) = (ℏ/a_n) arccos(1 − m a_n² E_Cn/ℏ²).

The inverse Fourier transform yields, in the 'x-representation',

ψ_Cn(x_j) = ∫_{−πℏ/a_n}^{πℏ/a_n} ψ_Cn(p) e^{(i/ℏ) p x_j} dp = c1 e^{i x_j P_Cn/ℏ} + c2 e^{−i x_j P_Cn/ℏ}, (37)

with x_j = a_n j for j ∈ Z. Note that the eigenfunctions are still delta functions (in the p-representation) and are therefore not (square) normalizable with respect to the polymer inner product, which in the p-polarization is just given by the ordinary Haar measure on S¹; there is no quantization of the momentum (its spectrum remains truly continuous).
Let us now consider the time-dependent Schrödinger equation,

iℏ ∂ψ(p,t)/∂t = Ĥ_Cn · ψ(p,t),

which now takes the form

iℏ ∂ψ(p,t)/∂t = (ℏ²/(m a_n²)) (1 − cos(a_n p/ℏ)) ψ(p,t)

and has as solution

ψ(p,t) = e^{−(i/ℏ)(ℏ²/(m a_n²))(1 − cos(a_n p/ℏ)) t} ψ(p) = e^{−(i/ℏ) E_Cn t} ψ(p)

for any initial function ψ(p), where E_Cn satisfies the dispersion relation (36). The wave function ψ(x_j,t), the x_j-representation of the wave function, can be obtained for any given time t by Fourier transforming, using (37), the wave function ψ(p,t).

In order to check the convergence of the microscopically corrected Hamiltonians, we must analyze the convergence of the energy levels and of the proper covectors. In the limit n → ∞, E_Cn → E = p²/2m, so we can be certain that the energy eigenvalues converge (on fixing the value of p). Let us write the proper covector as Ψ_Cn = (ψ_Cn, ·)^ren_Cn ∈ H*_Cn. Then we can bring microscopic corrections to scale Cm and look for the convergence of such corrections,

Ψ^ren_Cm = lim_{n→∞} d*_{m,n} Ψ_Cn.

It is easy to see that, given any basis vector e_{αi} ∈ H_Cm, the limit

Ψ^ren_Cm(e_{αi,Cm}) = lim_{n→∞} Ψ_Cn(d_{n,m}(e_{αi,Cm}))

exists and is equal to

Ψ^ren_Cm(e_{αi,Cm}) = [d*_m Ψ_Schr](e_{αi,Cm}) = Ψ_Schr(i a_m),

where Ψ_Schr is calculated using the free-particle Hamiltonian in the Schrödinger representation. This expression defines the completely renormalized proper covector at scale Cm.
C. Polymer Quantum Cosmology

In this section we present a version of quantum cosmology that we call polymer quantum cosmology. The idea behind this name is that the main input in the quantization of the corresponding mini-superspace model is the use of a polymer representation as understood here. Another important input is the choice of fundamental variables to be used and the definition of the Hamiltonian constraint. Different research groups have made different choices. We shall take here a simple model that has received much attention recently, namely an isotropic, homogeneous FRW cosmology with k = 0, coupled to a massless scalar field. As we shall see, a proper treatment of the continuum limit of this system requires new tools under development that are beyond the scope of this work. We shall therefore restrict ourselves to the introduction of the system and of the problems that need to be solved.

The system to be quantized corresponds to the phase space of cosmological spacetimes that are homogeneous and isotropic, and for which the homogeneous spatial slices have a flat intrinsic geometry (k = 0 condition). The only matter content is a massless scalar field. In this case the spacetime geometry is given by metrics of the form

ds² = −dt² + a²(t) (dx² + dy² + dz²),

where the function a(t) carries all the information and degrees of freedom of the gravitational part. In terms of the coordinates (a, p_a, φ, p_φ) for the phase space of the theory, all the dynamics is captured in the constraint

C := −(3/(8πG)) p_a²/a + p_φ²/(2a³) ≈ 0.
The first step is to impose the constraint on the kinematical Hilbert space to find physical states, and then a physical inner product to construct the physical Hilbert space. First note that one can rewrite the constraint equation as

p_a² a² = (4πG/3) p_φ².

If, as is normally done, one chooses φ to act as an internal time, the right-hand side would be promoted, in the quantum theory, to a second derivative. The left-hand side is, moreover, symmetric in a and p_a. At this point we have the freedom of choosing the variable that will be quantized and the variable that will not be well defined in the polymer representation. The standard choice is that p_a is not well defined, so a, and any geometrical quantity derived from it, is quantized. Furthermore, we have the choice of polarization for the wave function. In this respect the standard choice is to select the a-polarization, in which a acts as multiplication and the approximation of p_a, namely sin(λ p_a)/λ, acts as a difference operator on wave functions of a. For details of this particular choice see [5]. Here we shall adopt the opposite polarization, that is, we shall have wave functions Ψ(p_a, φ).
Just as we did in the previous cases, in order to gain intuition about the behavior of the polymer quantized theory, it is convenient to look at the equivalent problem in the classical theory, namely, the classical system in which the not-well-defined observable (p_a in our current case) is approximated by a well-defined object (made of trigonometric functions). For simplicity, let us choose to replace p_a ↦ sin(λ p_a)/λ. With this choice we get an effective classical Hamiltonian constraint that depends on λ:

C_λ := −(3/(8πG)) sin²(λ p_a)/(λ² a) + p_φ²/(2a³).
We can now compute effective equations of motion by means of the equations Ḟ := {F, C_λ}, for any observable F ∈ C∞(Γ), where we are using the effective (first-order) action

S_λ = ∫ dt (p_a ȧ + p_φ φ̇ − N C_λ)

with the choice N = 1. The first thing to notice is that the quantity p_φ is a constant of the motion, given that the variable φ is cyclic. The second observation is that

φ̇ = p_φ/a³

has the same sign as p_φ and never vanishes. Thus it can be used as an (internal) time variable. The next observation is that the equation for ȧ, namely the effective Friedmann equation, will have a zero for a non-zero value of a given by

a*² = (4πG/3) λ² p_φ².

This is the value at which there will be a bounce if the trajectory started with a large value of a and was contracting. Note that the 'size' of the universe when the bounce occurs depends both on the constant p_φ (which dictates the matter density) and on the value of the lattice size λ. Here it is important to stress that for any value of λ (which uniquely fixes the trajectory in the (a, p_a) plane), there will be a bounce. In the original description in terms of the Einstein equations (without the trigonometric approximation) there is no such bounce. If ȧ < 0 initially, it remains negative and the universe collapses, reaching the singularity in a finite proper time.

What happens within the effective description if we refine the lattice and pass from λ_n to λ_{n+1}? The only thing that changes, for the same classical orbit labelled by p_φ, is that the bounce occurs at a 'later time' and for a smaller value of a*, but the qualitative picture remains the same.
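A sketch of the bounce condition in the effective dynamics: solving the constraint C_λ = 0 for sin²(λ p_a) shows that no real solution exists below a*. The units (4πG/3 = 1) and the numbers below are an illustrative normalization, not from the source:

```python
import numpy as np

def sin_sq_required(a, lam, p_phi):
    """Value of sin^2(lam * p_a) needed to satisfy the effective
    constraint C_lam = 0 at scale factor a (units with 4*pi*G/3 = 1).
    Since sin^2 <= 1, values above 1 mean the constraint is unsolvable."""
    return lam**2 * p_phi**2 / a**2

def bounce_scale(lam, p_phi):
    """Smallest scale factor where the constraint is still solvable,
    i.e. where sin_sq_required reaches its maximum value 1."""
    return lam * abs(p_phi)

lam, p_phi = 0.1, 5.0
a_star = bounce_scale(lam, p_phi)                   # bounce at a* = 0.5
solvable = sin_sq_required(1.0, lam, p_phi) <= 1.0  # above a*: allowed
blocked = sin_sq_required(0.25, lam, p_phi) > 1.0   # below a*: forbidden
a_star_fine = bounce_scale(lam / 2, p_phi)  # finer lattice: smaller a*
```

The last line reflects the refinement discussion above: shrinking λ pushes the bounce to a smaller a*, but never removes it.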
This is the main difference from the systems considered before. In those cases, one could have classical trajectories that remained, for a given choice of parameter a_n, within the region where sin(a_n p)/a_n is a good approximation to p. Of course, there were also classical trajectories outside this region, but then we could refine the lattice and find a new value for which the new classical trajectory is well approximated. In the case of polymer cosmology this is never so: every classical trajectory will pass from a region where the approximation is good to a region where it is not; this is precisely where the 'quantum corrections' kick in and the universes bounce.

Given that in the classical description the 'original' and the 'corrected' descriptions are so different, we expect that, upon quantization, the corresponding quantum theories, namely the polymeric one and the Wheeler-DeWitt one, will be related in a non-trivial way (if at all).

In this case, with our choice of polarization and for a particular factor ordering, the quantum constraint takes the schematic form

Ĉ_λ · Ψ(p_a, φ) = 0,

in which p_a enters only through sin(λ p_a) and p_φ² acts as −ℏ² ∂²/∂φ²; this is the polymer Wheeler-DeWitt equation.
In order to approach the problem of the continuum limit of this quantum theory, we have to realize that the task is now somewhat different from before. This is so because the system is now a constrained system with a constraint operator, rather than a regular non-singular system with an ordinary Hamiltonian evolution. Fortunately, for the system under consideration, the fact that φ can be regarded as an internal time allows us to interpret the quantum constraint as a Klein-Gordon type equation of the form

∂²Ψ/∂φ² = −Θ_λ · Ψ,

where the operator Θ_λ is 'time independent'. This allows us to split the space of solutions into 'positive and negative frequency', to introduce a physical inner product on the positive frequency solutions of this equation, and a set of physical observables in terms of which to describe the system. That is, one reduces in practice the system to one very similar to the Schrödinger case by taking the positive square root of the previous equation:

−i ∂Ψ/∂φ = √Θ_λ · Ψ.

The question we are interested in is whether the continuum limit of these theories (labelled by λ) exists, and whether it corresponds to the Wheeler-DeWitt theory. A complete treatment of this problem is, unfortunately, beyond the scope of this work and will be reported elsewhere [12].
VII. DISCUSSION

Let us summarize our results. In the first part of the article we showed that the polymer representation of the canonical commutation relations can be obtained as the limiting case of the ordinary Fock-Schrödinger representation, in terms of the algebraic state that defines the representation. These limiting cases can also be interpreted in terms of the naturally defined coherent states associated to each representation labelled by the parameter d, when they become infinitely 'squeezed'. The two possible squeezing limits lead to two different polymer descriptions that can nevertheless be identified, as we have also shown, with the two possible polarizations of an abstract polymer representation. The resulting theory has, however, a behavior very different from the standard one: the Hilbert space is non-separable, the representation is unitarily inequivalent to the Schrödinger one, and natural operators such as the momentum operator p̂ are no longer well defined. This particular limiting construction of the polymer theory may shed some light on more complicated systems, such as field theories and gravity.

In the regular treatments of the dynamics within the polymer representation, one needs to introduce some extra structure, such as a lattice on configuration space, to construct a Hamiltonian and implement the dynamics for the system via a regularization procedure. How does this resulting theory compare with the continuum theory one had from the beginning? Can one hope to remove the regulator in the polymer description? As they stand, there is no direct relation or mapping from the polymer theory to a continuum theory (in case there is one defined). As we have shown, one can indeed construct such a relation in a systematic fashion, by means of appropriate notions related to the definition of a scale, closely related to the lattice one had to introduce in the regularization. With this important shift in perspective, and an appropriate renormalization of the polymer inner product at each scale, one can, subject to some consistency conditions, define a procedure to remove the regulator and arrive at a Hamiltonian and a Hilbert space.

As we have seen, for some simple examples such as the free particle and the harmonic oscillator, one indeed recovers the Schrödinger description. For other systems, such as quantum cosmological models, the answer is not as clear, since the structure of the space of classical solutions is such that the 'effective description' produced by the polymer regularization at different scales is qualitatively different from the original dynamics. A proper treatment of this class of systems is underway and will be reported elsewhere [12].

Perhaps the most important lesson we have learned here is that there indeed exists a rich interplay between the polymer description and the ordinary Schrödinger representation. The full structure of this relation still needs to be unravelled. We can only hope that a complete understanding of these issues will shed some light on the ultimate goal of treating the quantum dynamics of background independent field systems, such as general relativity.
Acknowledgments

We thank A. Ashtekar, G. Hossain, T. Pawlowski and P. Singh for discussions. This work was supported in part by CONACyT grants U47857-F and 40035-F, by NSF PHY04-56913, by the Eberly Research Funds of Penn State, by the AMC-FUMEC exchange program, and by funds of the CIC-Universidad Michoacana de San Nicolás de Hidalgo.
[1] R. Beaume, J. Manuceau, A. Pellet and M. Sirugue, "Translation invariant states in quantum mechanics," Commun. Math. Phys. 38, 29 (1974); W. E. Thirring and H. Narnhofer, "Covariant QED without indefinite metric," Rev. Math. Phys. 4, 197 (1992); F. Acerbi, G. Morchio and F. Strocchi, "Infrared singular fields and nonregular representations of canonical commutation relation algebras," J. Math. Phys. 34, 899 (1993); F. Cavallaro, G. Morchio and F. Strocchi, "A generalization of the Stone-von Neumann theorem to nonregular representations of the CCR-algebra," Lett. Math. Phys. 47, 307 (1999); H. Halvorson, "Complementarity of representations in quantum mechanics," Studies in History and Philosophy of Modern Physics 35, 45 (2004).
[2] A. Ashtekar, S. Fairhurst and J.L. Willis, "Quantum gravity, shadow states and quantum mechanics," Class. Quant. Grav. 20, 1031 (2003) [arXiv:gr-qc/0207106].
[3] K. Fredenhagen and F. Reszewski, "Polymer state approximations of Schrödinger wave functions," Class. Quant. Grav. 23, 6577 (2006) [arXiv:gr-qc/0606090].
[4] M. Bojowald, "Loop quantum cosmology," Living Rev. Rel. 8, 11 (2005) [arXiv:gr-qc/0601085]; A. Ashtekar, M. Bojowald and J. Lewandowski, "Mathematical structure of loop quantum cosmology," Adv. Theor. Math. Phys. 7, 233 (2003) [arXiv:gr-qc/0304074]; A. Ashtekar, T. Pawlowski and P. Singh, "Quantum nature of the big bang: Improved dynamics," Phys. Rev. D 74, 084003 (2006) [arXiv:gr-qc/0607039].
[5] V. Husain and O. Winkler, "Semiclassical states for quantum cosmology," Phys. Rev. D 75, 024014 (2007) [arXiv:gr-qc/0607097]; V. Husain and O. Winkler, "On singularity resolution in quantum gravity," Phys. Rev. D 69, 084016 (2004) [arXiv:gr-qc/0312094].
[6] A. Corichi, T. Vukasinac and J.A. Zapata, "Hamiltonian and physical Hilbert space in polymer quantum mechanics," Class. Quant. Grav. 24, 1495 (2007) [arXiv:gr-qc/0610072].
[7] A. Corichi and J. Cortez, "Canonical quantization from an algebraic perspective" (preprint).
[8] A. Corichi, J. Cortez and H. Quevedo, "Schrödinger and Fock representations for a field theory on curved spacetime," Annals Phys. (NY) 313, 446 (2004) [arXiv:hep-th/0202070].
[9] E. Manrique, R. Oeckl, A. Weber and J.A. Zapata, "Loop quantization as a continuum limit," Class. Quant. Grav. 23, 3393 (2006) [arXiv:hep-th/0511222]; E. Manrique, R. Oeckl, A. Weber and J.A. Zapata, "Effective theories and continuum limit for canonical loop quantization" (preprint).
[10] D.W. Chiou, "Galileo symmetries in polymer particle representation," Class. Quant. Grav. 24, 2603 (2007) [arXiv:gr-qc/0612155].
[11] W. Rudin, Fourier Analysis on Groups (Interscience, New York, 1962).
[12] A. Ashtekar, A. Corichi and P. Singh, "Contrasting LQC and WDW using an exactly soluble model" (preprint); A. Corichi, T. Vukasinac and J.A. Zapata, "Continuum limit for quantum constrained systems" (preprint).
|
704.001
| Numerical solution of shock and ramp compression for general material
properties
| A general formulation was developed to represent material models for
applications in dynamic loading. Numerical methods were devised to calculate
response to shock and ramp compression, and ramp decompression, generalizing
previous solutions for scalar equations of state. The numerical methods were
found to be flexible and robust, and matched analytic results to a high
accuracy. The basic ramp and shock solution methods were coupled to solve for
composite deformation paths, such as shock-induced impacts, and shock
interactions with a planar interface between different materials. These
calculations capture much of the physics of typical material dynamics
experiments, without requiring spatially-resolving simulations. Example
calculations were made of loading histories in metals, illustrating the effects
of plastic work on the temperatures induced in quasi-isentropic and
shock-release experiments, and the effect of a phase transition.
| Numerical solution of shock and ramp compression
for general material properties
Damian C. Swift∗
Materials Science and Technology Division,
Lawrence Livermore National Laboratory,
7000, East Avenue, Livermore, CA 94550, U.S.A.
(Dated: March 7, 2007; revised April 8, 2008 and July 1, 2008 – LA-UR-07-2051)
Abstract
A general formulation was developed to represent material models for applications in dynamic
loading. Numerical methods were devised to calculate response to shock and ramp compression, and
ramp decompression, generalizing previous solutions for scalar equations of state. The numerical
methods were found to be flexible and robust, and matched analytic results to a high accuracy.
The basic ramp and shock solution methods were coupled to solve for composite deformation
paths, such as shock-induced impacts, and shock interactions with a planar interface between
different materials. These calculations capture much of the physics of typical material dynamics
experiments, without requiring spatially-resolving simulations. Example calculations were made of
loading histories in metals, illustrating the effects of plastic work on the temperatures induced in
quasi-isentropic and shock-release experiments, and the effect of a phase transition.
PACS numbers: 62.50.+p, 47.40.-x, 62.20.-x, 46.35.+z
Keywords: material dynamics, shock, isentrope, adiabat, numerical solution, constitutive behavior
∗Electronic address: damian.swift@physics.org
http://arxiv.org/abs/0704.0008v3
I. INTRODUCTION
The continuum representation of matter is widely used for material dynamics in sci-
ence and engineering. Spatially-resolved continuum dynamics simulations are the most
widespread and familiar, solving the initial value problem by discretizing the spatial domain
and integrating the dynamical equations forward in time to predict the motion and defor-
mation of components of the system. This type of simulation is used, for instance, to study
hypervelocity impact problems such as the vulnerability of armor to projectiles [1, 2], the
performance of satellite debris shields [3], and the impact of meteorites with planets, notably
the formation of the moon [4]. The problem can be divided into the dynamical equations
of the continuum, the state field of the components s(~r), and the inherent properties of
the materials. Given the local material state s, the material properties allow the stress τ
to be determined. Given the stress field τ(~r) and mass density field ρ(~r), the dynamical
equations describe the fields of acceleration, compression, and thermodynamic work done
on the materials.
The equations of continuum dynamics describe the behavior of a dynamically deforming
system of arbitrary complexity. Particular, simpler deformation paths can be described more
compactly by different sets of equations, and solved by different techniques than those used
for continuum dynamics in general. Simpler deformation paths occur often in experiments
designed to develop and calibrate models of material properties. These paths can be regarded
as different ways of interrogating the material properties. The principal examples in material
dynamics are shock and ramp compression [5, 6]. Typical experiments are designed to induce
such loading histories and measure or infer the properties of the material in these states
before they are destroyed by release from the edges or by reflected waves.
The development of the field of material dynamics was driven by applications in the
physics of hypervelocity impact and high explosive systems, including nuclear weapons [7].
In the regimes of interest, typically components with dimensions ranging from millime-
ters to meters and pressures from 1GPa to 1TPa, material behavior is dominated by the
scalar equation of state (EOS): the relationship between pressure, compression (or mass
density), and internal energy. Other components of stress (specifically shear stresses) are
much smaller, and chemical explosives react promptly so can be treated by simple mod-
els of complete detonation. EOS were developed as fits to experimental data, particularly
to series of shock states and to isothermal compression measurements [8]. It is relatively
straightforward to construct shock and ramp compression states from an EOS algebraically
or numerically depending on the EOS, and to fit an EOS to these measurements. More
recently, applications and scientific interest have grown to include a wider range of pressures
and time scales, such as laser-driven inertial confinement fusion [9], and experiments are
designed to measure other aspects than the EOS, such as the kinetics of phase changes, con-
stitutive behavior describing shear stresses, incomplete chemical reactions, and the effects of
microstructure, including grain orientation and porosity. Theoretical techniques have also
evolved to predict the EOS with ∼1% accuracy [10] and elastic contributions to shear stress
with slightly poorer accuracy [11].
A general convention for representing material states is described, and numerical methods
are reported for calculating shock and ramp compression states from general representations
of material properties.
II. CONCEPTUAL STRUCTURE FOR MATERIAL PROPERTIES
The desired structure for the description of the material state and properties under dy-
namic loading was developed to be as general as possible with respect to the types of material
or models to be represented in the same framework, and designed to give the greatest amount
of commonality between spatially-resolved simulations and calculations of shock and ramp
compressions.
In condensed matter on sub-microsecond time scales, heat conduction is often too slow to
have a significant effect on the response of the material, and is ignored here. The equations
of non-relativistic continuum dynamics are, in Lagrangian form, i.e. along characteristics
moving with the local material velocity ~u(~r),
Dρ(~r, t)/Dt = −ρ(~r, t) div ~u(~r, t) (1)
D~u(~r, t)/Dt = (1/ρ(~r, t)) div τ(~r, t) (2)
De(~r, t)/Dt = (1/ρ(~r, t)) ||τ(~r, t) grad ~u(~r, t)|| (3)
where ρ is the mass density and e the specific internal energy. Changes in e can be related
to changes in the temperature T through the heat capacity. The inherent properties of
each material in the problem are described by its constitutive relation or equation of state
τ(s). As well as experiencing compression and work from mechanical deformation, the local
material state s(~r, t) can evolve through internal processes such as plastic flow. In general,
Ds(~r, t)/Dt ≡ ṡ[s(~r, t), U(~r, t)] ; U ≡ grad ~u(~r, t) (4)
which can also include the equations for ∂ρ/∂t and ∂e/∂t. Thus the material properties must
describe at a minimum τ(s) and ṡ[s(~r, t), U(~r, t)] for each material. If they also describe T (s),
the conductivity, and ṡ(ė), then heat conduction can be treated. Other functions may be
needed for particular numerical methods in continuum dynamics, such as the need for wave
speeds (e.g. the longitudinal sound speed), which are needed for time step control in explicit
time integration. Internally, within the material properties models, it is desirable to re-use
software as much as possible, and other functions of the state are therefore desirable to allow
models to be constructed in a modular and hierarchical way. Arithmetic manipulations must
be performed on the state during numerical integration, and these can be encoded neatly
using operator overloading, so the operator of the appropriate type is invoked automatically
without having to include ‘if-then-else’ structures for each operator as is the case in non-
object-oriented programming languages such as Fortran-77. For instance, if ṡ is calculated
in a forward-time numerical method then changes of state are calculated using numerical
evolution equations such as
s(t+ δt) = s(t) + δtṡ. (5)
Thus for a general state s and its time derivative ṡ, which has an equivalent set of compo-
nents, it is necessary to multiply a state by a real number and to add two states together.
For a specific software implementation, other operations may be needed, for example to
create, copy, or destroy a new instance of a state.
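As a minimal sketch of these state operations (the `State` layout and names here are hypothetical, not the paper's implementation), a state type need only supply scaling and addition for a generic integrator to apply updates of the form of Eq. 5:

```cpp
// Hypothetical minimal state: any set of components that can be scaled
// and added, so generic integrators need not know its internal layout.
struct State {
    double rho;  // mass density
    double e;    // specific internal energy

    State operator+(const State& o) const { return {rho + o.rho, e + o.e}; }
    State operator*(double a) const { return {rho * a, e * a}; }
};

// Eq. 5 written once for any state type providing + and *:
// forward-time update s(t + dt) = s(t) + dt * sdot.
template <typename S>
S step(const S& s, const S& sdot, double dt) {
    return s + sdot * dt;
}
```

Because `step` is written against the operators rather than the components, the same integrator serves every state structure in Table II.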
The attraction of this approach is that, by choosing a reasonably general form for the
constitutive relation and associated operations, it is possible to separate the continuum
dynamics part of the problem from the inherent behavior of the material. The relations
describing the properties of different types of material can be encapsulated in a library form
where the continuum dynamics program need know nothing about the relations for any spe-
cific type of material, and vice versa. The continuum dynamics programs and the material
properties relations can be developed and maintained independently of each other, provided
that the interface remains the same (Table I). This is an efficient way to make complicated
material models available for simulations of different types, including Lagrangian and Eule-
rian hydrocodes operating on different numbers of dimensions, and calculations of specific
loading or heating histories such as shock and ramp loading discussed below. Software in-
terfaces have been developed in the past for scalar EOS with a single structure for the state
[12], but object-oriented techniques make it practical to extend the concept to much more
complicated states, to combinations of models, and to alternative types of model selected
when the program is run, without having to find a single super-set state encompassing all
possible states as special cases.
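The library boundary described above can be sketched as follows (a hypothetical illustration, not the paper's actual interface of Table I): the continuum-dynamics side calls only an abstract base class, and the concrete model behind it is selected at run time.

```cpp
// Hypothetical abstract material interface: the continuum-dynamics code
// knows nothing about the concrete model behind it.
struct Material {
    virtual ~Material() = default;
    // Normal stress for a scalar {rho, e} state; with tension positive,
    // the stress here is minus the pressure.
    virtual double stress(double rho, double e) const = 0;
};

// One concrete model: perfect gas, p = (gamma - 1) rho e.
struct PerfectGas : Material {
    double gamma;
    explicit PerfectGas(double g) : gamma(g) {}
    double stress(double rho, double e) const override {
        return -(gamma - 1.0) * rho * e;
    }
};

// A driver routine that needs no knowledge of the concrete model.
double normal_stress(const Material& m, double rho, double e) {
    return m.stress(rho, e);
}
```

Swapping in a different model requires no change to `normal_stress`, which is the separation between continuum dynamics and material behavior described above.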
A very wide range of types of material behavior can be represented with this formalism.
At the highest level, different types of behavior are characterized by different structures for
the state s (Table II). For each type of state, different specific models can be defined, such
as perfect gas, polytropic and Grüneisen EOS. For each specific model, different materials
are represented by choosing different values for the parameters in the model, and different
local material states are represented through different values for the components of s. In the
jargon of object-oriented programming, the ability to define an object whose precise type
is undetermined until the program is run is known as polymorphism. For our application,
polymorphism is used at several levels in the hierarchy of objects, from the overall type of a
material (such as ‘one represented by a pressure-density-energy EOS’ or ‘one represented by
a deviatoric stress model’) through the type of relation used to describe the properties of that
material type (such as perfect gas, polytropic, or Grüneisen for a pressure-density-energy
EOS, or Steinberg-Guinan [13] or Preston-Tonks-Wallace [14] for a deviatoric stress model),
to the type of general mathematical function used to represent some of these relations (such
as a polynomial or a tabular representation of γ(ρ) in a polytropic EOS) (Table III). States
or models may be defined by extending or combining other states or models – this can be
implemented using the object-oriented programming concept of inheritance. Thus deviatoric
stress models can be defined as an extension to any pressure-density-energy EOS (they are
usually written assuming a specific type, such as Steinberg’s cubic Grüneisen form), homo-
geneous mixtures can be defined as combinations of any pressure-density-temperature EOS,
and heterogeneous mixtures can be defined as combinations of materials each represented
by any type of material model.
Trial implementations have been made as libraries in the C++ and Java programming
languages [15]. The external interface to the material properties was general at the level
of representing a generic material type and state. The type of state and model were then
selected when programs using the material properties library were run. In C++, objects
which were polymorphic at run time had to be represented as pointers, requiring additional
software constructions to allocate and free up physical memory associated with each object.
It was possible to include general re-usable functions as polymorphic objects when defining
models: real functions of one real parameter could be polynomials, transcendentals, tabular
with different interpolation schemes, piecewise definitions over different regions of the one
dimensional line, sums, products, etc; again defined specifically at run time. Object-oriented
polymorphism and inheritance were thus very powerful techniques for increasing software
re-use, making the software more compact and more reliable through the greater use of
functions which had already been tested.
Given conceptual and software structures designed to represent general material proper-
ties suitable for use in spatially-resolved continuum dynamics simulations, we now consider
the use of these generic material models for calculating idealized loading paths.
III. IDEALIZED ONE-DIMENSIONAL LOADING
Experiments to investigate the response of materials to dynamic loading, and to calibrate
parameters in models of their behavior, are usually designed to apply as simple a loading
history as is consistent with the transient state of interest. The simplest canonical types of
loading history are shock and ramp [5, 6]. Methods of solution are presented for calculating
the result of shock and ramp loading for materials described by generalized material models
discussed in the previous section. Such direct solution removes the need to use a time-
and space-resolved continuum dynamics simulation, allowing states to be calculated with
far greater efficiency and without the need to consider and make allowance for attributes of
resolved simulations such as the finite numerical resolution and the effect of numerical and
artificial viscosities.
A. Ramp compression
Ramp compression is taken here to mean compression or decompression. If the material
is represented by an inviscid scalar EOS, i.e. ignoring dissipative processes and non-scalar
effects from elastic strain, ramp compression follows an isentrope. This is no longer true
when dissipative processes such as plastic heating occur. The term ‘quasi-isentropic’ is
sometimes used in this context, particularly for shockless compression; here we prefer to
refer to the thermodynamic trajectories as adiabats since this is a more appropriate term:
no heat is exchanged with the surroundings on the time scales of interest.
For adiabatic compression, the state evolves according to the second law of thermody-
namics,
de = T dS − p dv (6)
where T is the temperature and S the specific entropy. Thus
ė = T Ṡ − p v̇ = T Ṡ − p div ~u / ρ, (7)
or for a more general material whose stress tensor is more complicated than a scalar pressure,
de = T dS + τn dv ⇒ ė = T Ṡ + τn div ~u / ρ, (8)
where τn is the component of stress normal to the direction of deformation. The velocity
gradient was expressed through a compression factor η ≡ ρ′/ρ and a strain rate ǫ̇. In all
ramp experiments used in the development and calibration of accurate material models,
the strain has been applied uniaxially. More general strain paths, for instance isotropic or including a shear component, can be treated by the same formalism; the working rate is then a full inner product of the stress and strain-rate tensors.
The acceleration or deceleration of the material normal to the wave as it is compressed or expanded adiabatically is
dup = ±√(−dτn dv), (9)
from which it can be deduced that
dup = ±(cl/v) dv, (10)
where cl is the longitudinal wave speed.
As with continuum dynamics, internal evolution of the material state can be calculated
simultaneously with the continuum equations, or operator split and calculated periodically
at constant compression [16]. The results are the same to second order in the compression
increment. Operator-splitting allows calculations to be performed without an explicit en-
tropy, if the continuum equations are integrated isentropically and dissipative processes are
captured by internal evolution at constant compression.
Operator-splitting is desirable when internal evolution can produce highly nonlinear
changes, such as reaction from solid to gas: rapid changes in state and properties can
make numerical schemes unstable. Operator-splitting is also desirable when the integration
time step for internal evolution is much shorter than the continuum dynamics time step.
Neither of these considerations is very important for ramp compression without spatial res-
olution, but operator-splitting was used as an option in the ramp compression calculations
for consistency with continuum dynamics simulations.
The ramp compression equations were integrated using forward-time Runge-Kutta nu-
merical schemes of second order. The fourth order scheme is a trivial extension. The
sequence of operations to calculate an increment of ramp compression is as follows:
1. Time increment:
δt = −| ln η| / ǫ̇ (11)
2. Predictor:
s(t + δt/2) = s(t) + (δt/2) ṡm(s(t), ǫ̇) (12)
3. Corrector:
s(t + δt) = s(t) + δt ṡm(s(t + δt/2), ǫ̇) (13)
4. Internal evolution:
s(t + δt) → s(t + δt) + ∫_t^{t+δt} ṡi(s(t′), ǫ̇) dt′ (14)
where ṡm is the model-dependent state evolution from applied strain, and ṡi is internal
evolution at constant compression.
The independent variable for integration is specific volume v or mass density ρ; for
numerical integration finite steps are taken in ρ and v. The step size ∆ρ can be controlled so
that the numerical error during integration remains within chosen limits. A tabular adiabat
can be calculated by integrating over a range of v or ρ, but when simulating experimental
scenarios the upper limit for integration is usually that one of the other thermodynamic
quantities reaches a certain value, for example that the normal component of stress reaches
zero, which is the case on release from a high pressure state at a free surface. Specific
end conditions were found by monitoring the quantity of interest until bracketed by a finite
integration step, then bisecting until the stop condition was satisfied to a chosen accuracy.
During bisection, each trial calculation was performed as an integration from the first side
of the bracket by the trial compression.
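As a concrete illustration of the predictor-corrector scheme above (a sketch under simplifying assumptions, not the paper's implementation): for a perfect gas without dissipation, adiabatic ramp compression obeys de/dρ = p/ρ² = (γ − 1)e/ρ, and the time increment and strain rate can be absorbed into a direct density step.

```cpp
// Second-order (midpoint) predictor-corrector integration of a perfect-gas
// adiabat, de/drho = (gamma - 1) e / rho, with density as the independent
// variable. Illustrative only; function name and arguments are hypothetical.
double ramp_energy(double gamma, double rho0, double e0,
                   double rho1, int nsteps) {
    double rho = rho0, e = e0;
    double drho = (rho1 - rho0) / nsteps;
    for (int i = 0; i < nsteps; ++i) {
        // predictor: half step with the derivative at the current state
        double emid = e + 0.5 * drho * (gamma - 1.0) * e / rho;
        // corrector: full step with the midpoint derivative
        e += drho * (gamma - 1.0) * emid / (rho + 0.5 * drho);
        rho += drho;
    }
    return e;
}
```

For γ = 1.4 this reproduces the exact isentrope e = e0 (ρ/ρ0)^(γ−1), i.e. Eq. 19, to a relative accuracy far better than the 10⁻³% quoted above for comparable step sizes.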
B. Shock compression
Shock compression is the solution of a Riemann problem for the dynamics of a jump
in compression moving with constant speed and with a constant thickness. The Rankine-
Hugoniot (RH) equations [5] describing the shock compression of matter are derived in
the continuum approximation, where the shock is a formal discontinuity in the continuum
fields. In reality, matter is composed of atoms, and shocks have a finite width governed by
the kinetics of dissipative processes – at a fundamental level, matter does not distinguish
between shock compression and ramp compression with a high strain rate – but the RH
equations apply as long as the width of the region of matter where unresolved processes
occur is constant. Compared with the isentropic states induced by ramp compression in
a material represented by an EOS, a shock always increases the entropy and hence the
temperature. With dissipative processes included, the distinction between a ramp and a
shock may become blurred.
The RH equations express the conservation of mass, momentum, and energy across a
moving discontinuity in state. They are usually expressed in terms of the pressure, but are
readily generalized for materials supporting shear stresses by using the component of stress
normal to the shock (i.e., parallel with the direction of propagation of the shock), τn:
us² = −v0² (τn − τn0) / (v0 − v), (15)
∆up = √( −(τn − τn0)(v0 − v) ), (16)
e = e0 − (1/2)(τn + τn0)(v0 − v), (17)
where us is the speed of the shock wave with respect to the material, ∆up is the change in
material speed normal to the shock wave (i.e., parallel to its direction of propagation), and
subscript 0 refers to the initial state.
The RH relations can be applied to general material models if a time scale or strain rate
is imposed, and an orientation chosen for the material with respect to the shock. Shock
compression in continuum dynamics is almost always uniaxial.
The RH equations involve only the initial and final states in the material. If a material
has properties that depend on the deformation path – such as plastic flow or viscosity –
then physically the detailed shock structure may make a difference [17]. This is a limitation
of discontinuous shocks in continuum dynamics: it may be addressed as discussed above
by including dissipative processes and considering ramp compression, if the dissipative pro-
cesses can be represented adequately in the continuum approximation. Spatially-resolved
simulations with numerical differentiation to obtain spatial derivatives and forward time
differencing are usually not capable of representing shock discontinuities directly, and an
artificial viscosity is used to smear shock compression over a few spatial cells [18]. The
trajectory followed by the material in thermodynamic space is a smooth adiabat with dissi-
pative heating supplied by the artificial viscosity. If plastic work is also included during this
adiabatic compression, the overall heating for a given compression is greater than from the
RH equations. To be consistent, plastic flow should be neglected while the artificial viscosity
is non-zero. This localized disabling of physical processes, particularly time-dependent ones,
during the passage of the unphysically smeared shock was previously found necessary for
numerically stable simulations of detonation waves by reactive flow [19].
Detonation waves are reactive shock waves. Steady planar detonation (the Chapman-
Jouguet state [20]) may be calculated using the RH relations, by imposing the condition
that the material state behind the shock is fully reacted.
Several numerical methods have been used to solve the RH equations for materials repre-
sented by an EOS only [21, 22]. The general RH equations may be solved numerically for a
given shock compression ∆ρ by varying the specific internal energy e until the normal stress
from the material model equals that from the RH energy equation, Eq. 17. The shock and
particle speeds are then calculated from Eqs 15 and 16. This numerical method is particu-
larly convenient for EOS of the form p(ρ, e), as e may be varied directly. Solutions may still
be found for general material models using ṡ(ė), by which the energy may be varied until
the solution is found.
Numerically, the solution was found by bracketing and bisection:
1. For given compression ∆ρ, take the low-energy end for bracketing as a nearby state
s− (e.g. the previous state, of lower compression, on the Hugoniot), compressed adia-
batically (to state s̃), and cooled so the specific internal energy is e(s−).
2. Bracket the desired state: apply successively larger heating increments ∆e to s̃, evolv-
ing each trial state internally, until τn(s) from the material model exceeds τn(e − e0)
from Eq. 17.
3. Bisect in ∆e, evolving each trial state internally, until τn(s) equals τn(e − e0) to the
desired accuracy.
As with ramp compression, the independent variable for solution was mass density ρ,
and finite steps ∆ρ were taken. Each shock state was calculated independently of the rest,
so numerical errors did not accumulate along the shock Hugoniot. The accuracy of the
solution was independent of ∆ρ. A tabular Hugoniot can be calculated by solving over a
range of ρ, but again when simulating experimental scenarios it is usually more useful to
calculate the shock state where one of the other thermodynamic quantities reaches a certain
value, often that up and τn match the values from another, simultaneous shock calculation
for another material – the situation in impact and shock transmission problems, discussed
below. Specific stop conditions were found by monitoring the quantity of interest until
bracketed by a finite solution step, then bisecting until the stop condition was satisfied to a
chosen accuracy. During bisection, each trial calculation was performed as a shock from the
initial conditions to the trial shock compression.
C. Accuracy: application to air
The accuracy of these numerical schemes was tested by comparing with shock and ramp
compression of a material represented by a perfect gas EOS,
p = (γ − 1)ρe. (18)
The numerical solution requires a value to be chosen for every parameter in the material
model, here γ. Air was chosen as an example material, with γ = 1.4. Air at standard tem-
perature and pressure has approximately ρ = 10−3 g/cm3 and e = 0.25MJ/kg. Isentropes
for the perfect gas EOS have the form
pρ−γ = constant, (19)
and shock Hugoniots have the form
p = (γ − 1) [2e0ρ0ρ + p0(ρ − ρ0)] / [(γ + 1)ρ0 − (γ − 1)ρ]. (20)
The numerical solutions reproduced the principal isentrope and Hugoniot to 10−3% and 0.1%
respectively, for a compression increment of 1% along the isentrope and a solution tolerance
of 10−6GPa for each shock state (Fig. 1). Over most of the range, the error in the Hugoniot
was 0.02% or less, only approaching 0.1% near the maximum shock compression.
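The bracket-and-bisect shock solution of Sec. III B can be sketched for this perfect-gas case (an illustration under the stated assumptions, not the paper's code): for a given compressed density, the specific internal energy is varied until the model pressure satisfies the Rankine-Hugoniot energy equation e − e0 = (p + p0)(v0 − v)/2.

```cpp
// Solve the RH energy equation for a perfect gas p = (gamma - 1) rho e
// by bracketing and bisection in e. Name and arguments are hypothetical.
double hugoniot_energy(double gamma, double rho0, double e0, double rho) {
    double v0 = 1.0 / rho0, v = 1.0 / rho;
    double p0 = (gamma - 1.0) * rho0 * e0;
    // residual of the RH energy equation; monotonic in e below the
    // maximum shock compression, so bisection converges
    auto residual = [&](double e) {
        double p = (gamma - 1.0) * rho * e;
        return e - e0 - 0.5 * (p + p0) * (v0 - v);
    };
    // bracket: grow the upper bound until the residual changes sign
    double lo = e0, hi = 2.0 * e0;
    while (residual(hi) < 0.0) hi *= 2.0;
    // bisect to convergence
    for (int i = 0; i < 200; ++i) {
        double mid = 0.5 * (lo + hi);
        if (residual(mid) < 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

For air (γ = 1.4) at a compression ratio of 2, the resulting pressure agrees with the analytic Hugoniot, Eq. 20, to machine precision.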
IV. COMPLEX BEHAVIOR OF CONDENSED MATTER
The ability to calculate shock and ramp loci in state space, i.e. as a function of vary-
ing loading conditions, is particularly convenient for investigating complex aspects of the
response of condensed matter to dynamic loading. Each locus can be obtained by a single
series of shock or ramp solutions, rather than having to perform a series of time- and space-
resolved continuum dynamics simulations, varying the initial or boundary conditions and
reducing the solution. We consider the calculation of temperature in the scalar EOS, the
effect of material strength and the effect of phase changes.
A. Temperature
The continuum dynamics equations can be closed using a mechanical EOS relating stress
to mass density, strain, and internal energy. For a scalar EOS, the ideal form to close the
continuum equations is p(ρ, e), with s = {ρ, e} the natural choice for the primitive state
fields. However, the temperature is needed as a parameter in physical descriptions of many
contributions to the constitutive response, including plastic flow, phase transitions, and
chemical reactions. Here, we discuss the calculation of temperature in different forms of the
scalar EOS.
1. Density-temperature equations of state
If the scalar EOS is constructed from its underlying physical contributions for continuum
dynamics, it may take the form e(ρ, T ), from which p(ρ, T ) can be calculated using the
second law of thermodynamics [10]. An example is the ‘SESAME’ form of EOS, based on
interpolated tabular relations for {p, e}(ρ, T ) [23]. A pair of relations {p, e}(ρ, T ) can be
used as a mechanical EOS by eliminating T , which is equivalent to inverting e(ρ, T ) to find
T (ρ, e), then substituting in p(ρ, T ). For a general e(ρ, T ) relation, for example for the
SESAME EOS, the inverse can be calculated numerically as required, along an isochore. In
this way, a {p, e}(ρ, T ) can be used as a p(ρ, e) EOS.
Alternatively, the same p(ρ, T ) relation can be used directly with a primitive state field
including temperature instead of energy: s = {ρ, T}. The evolution of the state under
mechanical work then involves the calculation of Ṫ (ė), i.e. the reciprocal of the specific heat
capacity, which is a derivative of e(ρ, T ). As this calculation does not require e(ρ, T ) to be
inverted, it is computationally more efficient to use {p, e}(ρ, T ) EOS with a temperature-
based, rather than energy-based, state. The main disadvantage is that it is more difficult
to ensure exact energy conservation as the continuum dynamics equations are integrated in
time, but any departure from exact conservation is at the level of accuracy of the algorithm
used to integrate the heat capacity.
Both structures of EOS have been implemented for material property calculations. Taking
a SESAME type EOS, thermodynamic loci were calculated with {ρ, e} or {ρ, T} primitive
states, for comparison (Fig. 2). For a monotonic EOS, the results were indistinguishable
within differences from forward or reverse interpolation of the tabular relations. When
the EOS, or the effective surface using a given order of interpolating function, was non-
monotonic, the results varied greatly because of non-uniqueness when eliminating T for the
{ρ, e} primitive state.
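The numerical inversion along an isochore described above can be sketched as follows (a hypothetical helper, assuming e is monotonically increasing in T at fixed ρ, as it is for the monotonic tables discussed here):

```cpp
#include <functional>

// Recover T(rho, e) from a given e(rho, T) relation by bisection along
// an isochore. Assumes e(rho, T) is monotonically increasing in T on
// the bracketing interval [t_lo, t_hi]. Names are illustrative.
double invert_isochore(const std::function<double(double, double)>& e_of_rho_T,
                       double rho, double e_target,
                       double t_lo, double t_hi) {
    for (int i = 0; i < 100; ++i) {
        double t_mid = 0.5 * (t_lo + t_hi);
        if (e_of_rho_T(rho, t_mid) < e_target)
            t_lo = t_mid;
        else
            t_hi = t_mid;
    }
    return 0.5 * (t_lo + t_hi);
}
```

For a non-monotonic table the root is no longer unique, which is exactly the failure mode noted above for the {ρ, e} primitive state.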
2. Temperature model for mechanical equations of state
Mechanical EOS are often available as empirical, algebraic relations p(ρ, e), derived from
shock data. Temperature can be calculated without altering the mechanical EOS by adding
a relation T (ρ, e). While this relation could take any form in principle, one can also follow
the logic of the Grüneisen EOS, in which the pressure is defined in terms of its deviation
∆p(ρ, e − er) from a reference curve {pr, er}(ρ). Thus temperatures can be calculated by
reference to a compression curve along which the temperature and specific internal energy
are known, {Tr, er}(ρ), and a specific heat capacity defined as a function of density cv(ρ).
In the calculations, this augmented EOS was represented as a ‘mechanical-thermal’ form
comprising any p(ρ, e) EOS plus the reference curves – an example of software inheritance
and polymorphism.
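The mechanical-thermal form can be sketched in an object-oriented style as follows. The polytropic p(ρ, e) and the reference curves are placeholder forms, and the temperature relation T = Tr(ρ) + (e − er(ρ))/cv(ρ) is the simplest reading of the construction described above, not the paper's calibration:

```python
# Sketch of the 'mechanical-thermal' form: any p(rho,e) EOS plus reference
# curves {T_r, e_r}(rho) and c_v(rho) gives T(rho,e). All concrete forms
# and constants are placeholders.
class MechanicalEOS:
    def p(self, rho, e):
        raise NotImplementedError

class Polytropic(MechanicalEOS):
    def __init__(self, gamma):
        self.gamma = gamma
    def p(self, rho, e):
        return (self.gamma - 1.0) * rho * e

class MechanicalThermalEOS(MechanicalEOS):
    """Inherits the mechanical response; adds a temperature model."""
    def __init__(self, mech, T_ref, e_ref, c_v):
        self.mech, self.T_ref, self.e_ref, self.c_v = mech, T_ref, e_ref, c_v
    def p(self, rho, e):
        return self.mech.p(rho, e)      # pressure delegated unchanged
    def T(self, rho, e):
        # temperature as an offset from the reference curve
        return self.T_ref(rho) + (e - self.e_ref(rho)) / self.c_v(rho)

# Cold-curve reference (T_ref = 0 K) with made-up curves:
eos = MechanicalThermalEOS(
    Polytropic(3.0),
    T_ref=lambda rho: 0.0,
    e_ref=lambda rho: 0.05 * (rho - 2.7) ** 2,   # MJ/kg, illustrative
    c_v=lambda rho: 0.9e-3,                      # MJ/kg/K, illustrative
)
```

The composition (rather than modification) of the p(ρ, e) model is the inheritance-and-polymorphism point made in the text.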
One natural reference curve for temperature is the cold curve, Tr = 0K. The cold curve
can be estimated from the principal isentrope e(ρ)|s0 using the estimated density variation
of the Grüneisen parameter:
er(ρ) = e(ρ)|s0 − T0 cv e^[a(1−ρ0/ρ)] (ρ/ρ0)^(γ0−a)
[24]. In this work, the principal isentrope was calculated in tabular form from the mechanical
EOS, using the ramp compression algorithm described above.
Empirical EOS are calibrated using experimental data. Shock and adiabatic compression
measurements on strong materials inevitably include elastic-plastic contributions as well as
the scalar EOS itself. If the elastic-plastic contributions are not taken into account self-
consistently, the EOS may implicitly include contributions from the strength. A unique
scalar EOS can be constructed to reproduce the normal stress as a function of compression
for any unique loading path: shock or adiabat, for a constant or smoothly-varying strain
rate. Such an EOS would not generally predict the response to other loading histories. The
EOS and constitutive properties for the materials considered here were constructed self-
consistently from shock data – this does not mean the models are accurate for other loading
paths, as neither the EOS nor the strength model includes all the physical terms that real
materials exhibit. In any case, this does not matter for the purposes of demonstrating the
properties of the numerical schemes.
This mechanical-thermal procedure was applied to Al using a Grüneisen EOS fitted to the
same shock data used to calculate the {p, e}(ρ, T ) EOS discussed above [24]. Temperatures
were in good agreement (Fig. 2). The mechanical-thermal calculations required a similar
computational effort to the tabular {p, e}(ρ, T) EOS with {ρ, T} primitive states (and
were thus much more efficient than the tabular EOS with {ρ, e} states), and described the
EOS far more compactly.
B. Strength
For dynamic compressions of order 10GPa and above, on microsecond time scales, the flow
stress of solids is often treated as a correction or small perturbation to the scalar EOS.
However, the flow stress has been observed to be much higher on nanosecond time scales
[25], and interactions between elastic and plastic waves may have a significant effect on
the compression and wave propagation. The Rankine-Hugoniot equations should be solved
self-consistently with strength included.
1. Preferred representation of isotropic strength
There is an inconsistency in the standard continuum dynamics treatment of scalar (pres-
sure) and tensor (stress) response. The scalar EOS expresses the pressure p(ρ, e) as the
dependent quantity, which is the most convenient form for use in the continuum equations.
Standard practice is to use sub-Hookean elasticity (hypoelastic form) [16] (Table II), in
which the state parameters include the stress deviator σ, evolved by integration
σ̇ = G(s)ǫ̇ (22)
where G is the shear modulus and ǫ̇ the strain rate deviator. Thus the isotropic and devia-
toric contributions to stress are not treated in an equivalent way: the pressure is calculated
from a local state involving a strain-like parameter (mass density), whereas the stress de-
viator evolves with the time-derivative of strain. This inconsistency causes problems along
complicated loading paths because G varies strongly with compression: if a material is sub-
jected to a shear strain ǫ, then isotropic compression (increasing the shear modulus from
G to G′, leaving ǫ unchanged), then shear unloading to isotropic stress, the true unloading
strain is −ǫ, whereas the hypoelastic calculation would require a strain of −ǫG/G′. Using
Be and the Steinberg-Guinan strength model as an example of the difference between hy-
poelastic and hyperelastic calculations, consider an initial strain to a flow stress of 0.3GPa
followed by isothermal, isotropic compression to 100GPa; the strain to unload to a state
of isotropic stress is 0.20% (hyperelastic) and 0.09% (hypoelastic). The discrepancy arises
because the hypoelastic model does not increase the deviatoric stress under compression at
constant deviatoric strain.
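A scalar caricature of this discrepancy, with illustrative numbers (not the Be/Steinberg-Guinan values quoted above):

```python
# Shear to strain eps at modulus G0, stiffen to G1 by isotropic
# compression at fixed shear strain, then unload to zero shear stress.
def unload_strain_hyper(eps, G0, G1):
    # hyperelastic: sigma = G * eps, so zero stress means zero strain
    return -eps

def unload_strain_hypo(eps, G0, G1):
    # hypoelastic: sigma was accumulated as G0*eps and is unchanged by
    # the compression step; unloading at modulus G1 removes only G0/G1
    # of the original strain
    return -eps * G0 / G1

eps, G0, G1 = 0.002, 150.0, 300.0   # illustrative values, not Be data
d_hyper = unload_strain_hyper(eps, G0, G1)
d_hypo = unload_strain_hypo(eps, G0, G1)
```

Doubling the shear modulus halves the hypoelastic unloading strain, which is the sense of the 0.20% vs 0.09% discrepancy quoted above.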
The stress can be considered as a direct response of the material to the instantaneous state
of elastic strain: σ(ǫ, T ). This relation can be predicted directly with electronic structure
calculations of the stress tensor in a solid for a given compression and elastic strain state [11],
and is a direct generalization of the scalar equation of state. A more consistent representation
of the state parameters is to use the strain deviator ǫ rather than σ, and to calculate σ from
scratch when required using
σ = G(s)ǫ (23)
– a hyperelastic formulation. The state parameters are then {ρ, e, ǫ, ǫ̃p}.
The different formulations give different answers when deviatoric strain is accumulated
at different compressions, in which case the hyperelastic formulation is correct. If the shear
modulus varies with strain deviator – i.e., for nonlinear elasticity – then the definition of
G(ǫ) must be adjusted to give the same stress for a given strain.
Many isotropic strength models use scalar measures of the strain and stress to parame-
terize work hardening and to apply a yield model of flow stress:
ǫ̃ = fǫ √||ǫ²||, σ̃ = fσ √||σ²||. (24)
Inconsistent conventions for equivalent scalar measures have been used by different workers.
In the present work, the common shock physics convention was used that the flow stress
component of τn is (2/3)Y, where Y is the flow stress. For consistency with published speeds
and amplitudes for elastic waves, fǫ = fσ = √(3/2), in contrast to other values previously used
for lower-rate deformation [26]. In principle, the values of fǫ and fσ do not matter as long as
the strength parameters are calibrated using the same values as are then used in any simulations.
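As an illustration, assuming the von Mises factor fσ = √(3/2) (the value consistent with a (2/3)Y flow-stress component of τn), the scalar measure recovers Y for a uniaxial-strain stress deviator at yield:

```python
import math

# Scalar equivalent stress sigma_eq = f_sigma * sqrt(sum of squared
# components), evaluated here for a diagonal (principal-axis) deviator.
# The factor sqrt(3/2) is the von Mises convention assumed for this
# illustration.
def equiv(dev, f=math.sqrt(1.5)):
    return f * math.sqrt(sum(c * c for c in dev))

# Uniaxial-strain stress deviator at yield: diag(2s, -s, -s) with s = Y/3
Y = 1.0
s = Y / 3.0
dev = [2.0 * s, -s, -s]          # principal components only (no shear)
sigma_eq = equiv(dev)            # equals Y with this convention
normal_component = dev[0]        # equals (2/3) Y
```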
2. Beryllium
The flow stress measured from laser-driven shock experiments on Be crystals a few tens
of micrometers thick is, at around 5-9GPa [25], much greater than the 0.3-1.3GPa mea-
sured on microsecond time scales. A time-dependent crystal plasticity model for Be is being
developed, and the behavior under dynamic loading depends on the detailed time depen-
dence of plasticity. Calculations were performed with the Steinberg-Guinan strength model
developed for microsecond scale data [24], and, for the purposes of rough comparison, with
elastic-perfectly plastic response with a flow stress of 10GPa. The elastic-perfectly plastic
model neglected pressure- and work-hardening.
Calculations were made of the principal adiabat and shock Hugoniot, and of a release
adiabat from a state on the principal Hugoniot. Calculations were made with and without
strength. Considering the state trajectories in stress-volume space, it is interesting to note
that heating from plastic flow may push the adiabat above the Hugoniot, because of the
greater heating obtained by integrating along the adiabat compared with jumping from
the initial to the final state on the Hugoniot (Fig. 3). Even with an elastic-perfectly plastic
strength model, the with-strength curves do not lie exactly (2/3)Y above the strengthless curves,
because heating from plastic flow contributes an increasing amount of internal energy to the
EOS as compression increases.
An important characteristic for the seeding of instabilities by microstructural variations
in shock response is the shock stress at which an elastic wave does not run ahead of the
shock. In Be with the high flow stress of nanosecond response, the relation between shock
and particle speeds is significantly different from the relation for low flow stress (Fig. 4). For
low flow stress, the elastic wave travels at 13.2 km/s. A plastic shock travels faster than this
for pressures greater than 110GPa, independent of the constitutive model. The speed of a
plastic shock following the initial elastic wave is similar to the low strength case, because the
material is already at its flow stress, but the speed of a single plastic shock is appreciably
higher.
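The overtake condition can be written down directly for a linear us−up relation; the coefficients below are illustrative stand-ins, not a fitted Be model:

```python
# A single plastic shock outruns the elastic precursor (speed c_e) once
# u_s(u_p) = c0 + s*u_p exceeds c_e. With rho0 in g/cm^3 and speeds in
# km/s, the stress comes out in GPa. Coefficients are illustrative only.
def overtake_stress(rho0, c0, s, c_e):
    up = (c_e - c0) / s          # particle speed at overtake
    return rho0 * c_e * up       # normal stress: rho0 * u_s * u_p

tau_overtake = overtake_stress(1.85, 8.0, 1.12, 13.2)
```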
For compression to a given normal stress, the temperature is significantly higher with
plastic flow included. The additional heating is particularly striking on the principal adi-
abat: the temperature departs significantly from the principal isentrope. Thus ramp-wave
compression of strong materials may lead to significant levels of heating, contrary to com-
mon assumptions of small temperature increases [27]. Plastic flow is largely irreversible, so
heating occurs on unloading as well as loading. Thus, on adiabatic release from a shock-
compressed state, additional heating occurs compared with the no-strength case. These
levels of heating are important as shock or release melting may occur at a significantly lower
shock pressure than would be expected ignoring the effect of strength. (Fig. 5.)
C. Phase changes
An important property of condensed matter is phase changes, including solid-solid poly-
morphism and solid-liquid. An equilibrium phase diagram can be represented as a single
overall EOS surface as before. Multiple, competing phases with kinetics for each phase trans-
formation can be represented conveniently using the structure described above for general
material properties, for example by describing the local state as a set of volume fractions
fi of each possible simple-EOS phase, with transition rates and equilibration among them.
This model is described in more detail elsewhere [19]. However, it is interesting to investi-
gate the robustness of the numerical scheme for calculating shock Hugoniots when the EOS
has the discontinuities in value and gradient associated with phase changes.
The EOS of molten metal, and the solid-liquid phase transition, can be represented to a
reasonable approximation as an adjustment to the EOS of the solid:
ptwo-phase(ρ, e) = psolid(ρ, ẽ) (25)
where
ẽ = e : T(ρ, e) < Tm(ρ)
ẽ = e − ∆h̃m : ∆h̃m ≡ cv(ρ, e) [T(ρ, e) − Tm(ρ)] < ∆hm
ẽ = e − ∆hm : otherwise
and ∆hm is the specific latent heat of fusion. Taking the EOS and a modified Lindemann
melting curve for Al [24], and using ∆hm = 0.397MJ/kg, the shock Hugoniot algorithm was
found to operate stably across the phase transition (Fig. 6).
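The case analysis in Eq. 25 can be sketched as follows; the T, Tm, cv and psolid forms below are toy placeholders used only to exercise the three branches:

```python
# Two-phase adjustment of Eq. 25: the solid EOS is evaluated at a
# modified energy that removes up to the latent heat of fusion above
# the melt curve. All functions passed in are caller-supplied models.
def e_tilde(rho, e, T, Tm, cv, dh_m):
    if T(rho, e) < Tm(rho):
        return e                               # solid: EOS unmodified
    dh = cv(rho, e) * (T(rho, e) - Tm(rho))    # partial melt
    return e - min(dh, dh_m)                   # capped at latent heat

def p_two_phase(rho, e, p_solid, T, Tm, cv, dh_m):
    return p_solid(rho, e_tilde(rho, e, T, Tm, cv, dh_m))

# Toy placeholder forms, for demonstration only:
T_ = lambda rho, e: e / 1.0e-3                 # K, from e in MJ/kg
Tm_ = lambda rho: 1000.0                       # flat melt curve
cv_ = lambda rho, e: 1.0e-3                    # MJ/kg/K
p_solid_ = lambda rho, e: 2.0 * rho * e
dh_m = 0.397                                   # MJ/kg (value quoted for Al)

p_low = p_two_phase(2.7, 0.5, p_solid_, T_, Tm_, cv_, dh_m)  # solid branch
e_mid = e_tilde(2.7, 1.2, T_, Tm_, cv_, dh_m)                # partial melt
e_hi = e_tilde(2.7, 2.0, T_, Tm_, cv_, dh_m)                 # fully molten
```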
V. COMPOSITE LOADING PATHS
Given methods to calculate shock and adiabatic loading paths from arbitrary initial
states, a considerable variety of experimental scenarios can be treated from the interaction
of loading or unloading waves with interfaces between different materials, in planar geometry
for uniaxial compression. The key physical constraint is that, if two dissimilar materials are
to remain in contact after an interaction such as an impact or the passage of a shock, the
normal stress τn and particle speed up in both materials must be equal on either side of the
interface. The changes in particle speed and stress normal to the waves were calculated above
for compression waves running in the direction of increasing spatial ordinate (left to right).
Across an interface, the sense is reversed for the material at the left. Thus a projectile
impacting a stationary target to the right is decelerated from its initial speed by the shock
induced by impact.
The general problem at an interface can be analyzed by considering the states at the
instant of first contact – on impact, or when a shock traveling through a sandwich of ma-
terials first reaches the interface. The initial states are {ul, sl; ur, sr}. The final states are
{uj, s′l; uj, s′r}, where uj is the joint particle speed, τn(s′l) = τn(s′r), and s′i is connected to si
by either a shock or an adiabat, starting at the appropriate initial velocity and stress, and
with orientation given by the side of the system each material occurs on. Each type of wave
is considered in turn, looking for an intersection in the up − τn plane. Examples of these
wave interactions are the impact of a projectile with a stationary target (Fig. 7), release of a
shock state at a free surface or a material (e.g. a window) of lower shock impedance (hence
reflecting a release wave into the shocked material – Fig. 8), reshocking at a surface with a
material of higher shock impedance (Fig. 8), or tension induced as materials try to separate
in opposite directions when joined by a bonded interface (Fig. 9). Each of these scenarios
may occur in turn following the impact of a projectile with a target: if the target is layered
then a shock is transmitted across each interface with a release or a reshock reflected back,
depending on the materials; release ultimately occurs at the rear of the projectile and the
far end of the target, and the oppositely-moving release waves subject the projectile and
target to tensile stresses when they interact (Fig. 10).
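The intersection search can be sketched for the simplest case of linear us−up Hugoniots; the coefficients are illustrative, not the calibrations used in the paper:

```python
# Flat-plate impact: the shock state is the intersection, in the
# (u_p, tau_n) plane, of the target Hugoniot (right-going shock from
# rest) and the projectile Hugoniot (left-going shock decelerating it
# from u0). Units: rho0 in g/cm^3, speeds in km/s, stress in GPa.
def hugoniot_stress(rho0, c0, s, du):
    """Normal stress behind a shock driven by particle-speed jump du."""
    return rho0 * (c0 + s * du) * du

def impact_state(u0, proj, targ, n=100):
    """Bisect for the joint particle speed at which stresses match."""
    lo, hi = 0.0, u0
    for _ in range(n):
        up = 0.5 * (lo + hi)
        f = hugoniot_stress(*targ, up) - hugoniot_stress(*proj, u0 - up)
        if f < 0.0:
            lo = up          # target stress too low: increase u_p
        else:
            hi = up
    return up, hugoniot_stress(*targ, up)

# Symmetric-impact sanity check: identical materials share u0 equally.
mat = (2.7, 5.35, 1.34)      # rho0, c0, s -- illustrative coefficients
up, taun = impact_state(2.0, mat, mat)
```

Release into a lower-impedance window, or reshock at a higher-impedance one, follows the same construction with the appropriate wave type on each side.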
As an illustration of combining shock and ramp loading calculations, consider the problem
of an Al projectile, initially traveling at 3.6 km/s, impacting a stationary, composite target
comprising a Mo sample and a LiF release window [28, 29]. The shock and release states were
calculated using published material properties [24]. The initial shock state was calculated to
have a normal stress of 63.9GPa. On reaching the LiF, the shock was calculated to transmit
at 27.1GPa, reflecting as a release in the Mo. These stresses match the continuum dynamics
simulation to within 0.1GPa in the Mo and 0.3GPa in the LiF, using the same material
properties (Fig. 11). The associated wave and particle speeds match to a similar accuracy;
wave speeds are much more difficult to extract from the continuum dynamics simulation.
An extension of this analysis can be used to calculate the interaction of oblique shocks
with an interface [30].
VI. CONCLUSIONS
A general formulation was developed to represent material models for applications in
dynamic loading, suitable for software implementation in object-oriented programming lan-
guages. Numerical methods were devised to calculate the response of matter represented
by the general material models to shock and ramp compression, and ramp decompression,
by direct evaluation of the thermodynamic pathways for these compressions rather than
spatially-resolved simulations. This approach is a generalization of earlier work on solutions
for materials represented by a scalar equation of state. The numerical methods were found
to be flexible and robust: capable of application to materials with very different properties.
The numerical solutions matched analytic results to a high accuracy.
Care was needed with the interpretation of some types of physical response, such as plas-
tic flow, when applied to deformation at high strain rates. The underlying time-dependence
of processes occurring during deformation should be taken into account. The actual history
of loading and heating experienced by material during the passage of a shock may influence
the final state – this history is not captured in the continuum approximation to material
dynamics, where shocks are treated as discontinuities. Thus care is also needed in spa-
tially resolved simulations when shocks are modeled using artificial viscosity to smear them
unphysically over a finite thickness.
Calculations were shown to demonstrate the operation of the algorithms for shock and
ramp compression with material models representative of complex solids including strength
and phase transformations.
The basic ramp and shock solution methods were coupled to solve for composite defor-
mation paths, such as shock-induced impacts, and shock interactions with a planar interface
between different materials. Such calculations capture much of the physics of typical ma-
terial dynamics experiments, without requiring spatially-resolving simulations. The results
of direct solution of the relevant shock and ramp loading conditions were compared with
hydrocode simulations, showing complete consistency.
Acknowledgments
Ian Gray introduced the author to the concept of multi-model material properties soft-
ware. Lee Markland developed a prototype Hugoniot-calculating computer program for
equations of state while working for the author as an undergraduate summer student.
Evolutionary work on material properties libraries was supported by the U.K. Atomic
Weapons Establishment, Fluid Gravity Engineering Ltd, and Wessex Scientific and Technical
Services Ltd. Refinements to the technique and applications to the problems described were
undertaken at Los Alamos National Laboratory (LANL) and Lawrence Livermore National
Laboratory (LLNL).
The work was performed partially in support of, and funded by, the National Nuclear
Security Administration’s Inertial Confinement Fusion program at LANL (managed by Steven Batha),
and LLNL’s Laboratory-Directed Research and Development project 06-SI-004 (Principal
Investigator: Hector Lorenzana). The work was performed under the auspices of the U.S.
Department of Energy under contracts W-7405-ENG-36, DE-AC52-06NA25396, and DE-
AC52-07NA27344.
References
[1] J.K. Dienes, J.M. Walsh, in R. Kinslow (Ed), “High-Velocity Impact Phenomena” (Academic
Press, New York, 1970).
[2] D.J. Benson, Comp. Mech. 15, 6, pp 558-571 (1995).
[3] J.W. Gehring, Jr, in R. Kinslow (Ed), “High-Velocity Impact Phenomena” (Academic Press,
New York, 1970).
[4] R.M. Canup, E. Asphaug, Nature 412, pp 708-712 (2001).
[5] For a recent review and introduction, see e.g. M.R. Boslough and J.R. Asay, in J.R. Asay,
M. Shahinpoor (Eds), “High-Pressure Shock Compression of Solids” (Springer-Verlag, New
York, 1992).
[6] For example, C.A. Hall, J.R. Asay, M.D. Knudson, W.A. Stygar, R.B. Spielman, T.D. Pointon,
D.B. Reisman, A. Toor, and R.C. Cauble, Rev. Sci. Instrum. 72, 3587 (2001).
[7] M.A. Meyers, “Dynamic Behavior of Materials” (Wiley, New York, 1994).
[8] R.G. McQueen, S.P. Marsh, J.W. Taylor, J.N. Fritz, W.J. Carter, in R. Kinslow (Ed), “High-
Velocity Impact Phenomena” (Academic Press, New York, 1970).
[9] J.D. Lindl, “Inertial Confinement Fusion” (Springer-Verlag, New York, 1998).
[10] D.C. Swift, G.J. Ackland, A. Hauer, G.A. Kyrala, Phys. Rev. B 64, 214107 (2001).
[11] J.P. Poirier, G.D. Price, Phys. of the Earth and Planetary Interiors 110, pp 147-56 (1999).
[12] I.N. Gray, P.C. Thompson, B.J. Parker, D.C. Swift, J.R. Maw, A. Giles and others (AWE
Aldermaston), unpublished.
[13] D.J. Steinberg, S.G. Cochran, M.W. Guinan, J. Appl. Phys. 51, 1498 (1980).
[14] D.L. Preston, D.L. Tonks, and D.C. Wallace, J. Appl. Phys. 93, 211 (2003).
[15] A version of the software, including representative parts of the material model library and the
algorithms for calculating the ramp adiabat and shock Hugoniot, is available as a supplemen-
tary file provided with the preprint of this manuscript, arXiv:0704.0008. Software support,
and versions with additional models, are available commercially from Wessex Scientific and
Technical Services Ltd (http://wxres.com).
[16] D. Benson, Computer Methods in Appl. Mechanics and Eng. 99, 235 (1992).
[17] J.L. Ding, J. Mech. and Phys. of Solids 54, pp 237-265 (2006).
[18] J. von Neumann, R.D. Richtmyer, J. Appl. Phys. 21, 3, pp 232-237 (1950).
[19] R.M. Mulford, D.C. Swift, in preparation.
[20] W. Fickett, W.C. Davis, “Detonation” (University of California Press, Berkeley, 1979).
[21] R. Menikoff, B.J. Plohr, Rev. Mod. Phys. 61, pp 75-130 (1989).
[22] A. Majda, Mem. Amer. Math. Soc., 41, 275 (1983).
[23] K.S. Holian (Ed.), T-4 Handbook of Material Property Data Bases, Vol 1c: Equations of State,
Los Alamos National Laboratory report LA-10160-MS (1984).
[24] D.J. Steinberg, Equation of State and Strength Properties of Selected Materials, Lawrence
Livermore National Laboratory report UCRL-MA-106439 change 1 (1996).
[25] D.C. Swift, T.E. Tierney, S.-N. Luo, D.L. Paisley, G.A. Kyrala, A. Hauer, S.R. Greenfield,
A.C. Koskelo, K.J. McClellan, H.E. Lorenzana, D. Kalantar, B.A. Remington, P. Peralta,
E. Loomis, Phys. Plasmas 12, 056308 (2005).
[26] R. Hill, “The Mathematical Theory of Plasticity” (Clarendon Press, Oxford, 1950).
[27] C.A. Hall, Phys. Plasmas 7, 5, pp 2069-2075 (2000).
[28] D.C. Swift, A. Seifter, D.B. Holtkamp, and D.A. Clark, Phys. Rev. B 76, 054122 (2007).
[29] A. Seifter and D.C. Swift, Phys. Rev. B 77, 134104 (2008).
[30] E. Loomis, D.C. Swift, J. Appl. Phys. 103, 023518 (2008).
TABLE I: Interface to material models required for explicit forward-time continuum dynamics
simulations.

purpose                           interface calls
program set-up                    read/write material data
continuum dynamics equations      stress(state)
time step control                 sound speed(state)
evolution of state (deformation)  d(state)/dt(state, grad ~u)
evolution of state (heating)      d(state)/dt(state, ė)
internal evolution of state       d(state)/dt
manipulation of states            create and delete
                                  add states
                                  multiply state by a scalar
                                  check for self-consistency

Parentheses in the interface calls denote functions, e.g. “stress(state)” for “stress as a function of
the instantaneous, local state.” The evolution functions are shown in the operator-split structure
that is most robust for explicit, forward-time numerical solutions and can also be used for
calculations of the shock Hugoniot and ramp compression. Checks for self-consistency include
that mass density is positive and that volume or mass fractions of components of a mixture add
up to one.
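One way to render the Table I interface in an object-oriented language is sketched below; the method names are descriptive stand-ins for the calls listed, and the polytropic model is a minimal concrete example (cf. the mechanical-EOS row of Table II):

```python
# Abstract material-model interface (Table I) plus one concrete model.
# Method names and the dict-based state are illustrative choices.
from abc import ABC, abstractmethod

class MaterialModel(ABC):
    @abstractmethod
    def stress(self, state): ...
    @abstractmethod
    def sound_speed(self, state): ...
    @abstractmethod
    def dstate_dt_deformation(self, state, grad_u): ...
    @abstractmethod
    def dstate_dt_heating(self, state, e_dot): ...
    def check(self, state):
        """Self-consistency, e.g. positive mass density."""
        return state.get("rho", 0.0) > 0.0

class MechanicalEOSModel(MaterialModel):
    """state = {'rho', 'e'}; p = (gamma-1)*rho*e as a stand-in EOS.
    For a scalar EOS only div(u) matters, so grad_u is passed as div(u)."""
    def __init__(self, gamma):
        self.gamma = gamma
    def stress(self, state):
        return -(self.gamma - 1.0) * state["rho"] * state["e"]   # -p
    def sound_speed(self, state):
        return (self.gamma * (self.gamma - 1.0) * state["e"]) ** 0.5
    def dstate_dt_deformation(self, state, grad_u):
        p = (self.gamma - 1.0) * state["rho"] * state["e"]
        return {"rho": -state["rho"] * grad_u,
                "e": -p * grad_u / state["rho"]}   # Table II first row
    def dstate_dt_heating(self, state, e_dot):
        return {"rho": 0.0, "e": e_dot}

eos14 = MechanicalEOSModel(1.4)
state = {"rho": 1.2, "e": 0.2}
rates = eos14.dstate_dt_deformation(state, 0.5)
```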
TABLE II: Examples of types of material model, distinguished by different structures in the state
vector.

model                            state vector s   effect of mechanical strain ṡm(s, grad ~u)
mechanical equation of state     ρ, e             −ρdiv~u, −pdiv~u/ρ
thermal equation of state        ρ, T             −ρdiv~u, −pdiv~u/ρcv
heterogeneous mixture            {ρ, e, fv}i      {−ρdiv~u, −pdiv~u/ρ, 0}i
homogeneous mixture              ρ, T, {fm}i      −ρdiv~u, −pdiv~u/ρcv, {0}i
traditional deviatoric strength  ρ, e, σ, ǫ̃p      −ρdiv~u, (−pdiv~u + fp||σǫ̇p||)/ρ, Gǫ̇e, fǫ||ǫ̇p||

The symbols are ρ: mass density; e: specific internal energy; T: temperature; fv: volume fraction;
fm: mass fraction; σ: stress deviator; fp: fraction of plastic work converted to heat; grad ~up:
plastic part of velocity gradient; G: shear modulus; ǫ̇e,p: elastic and plastic parts of the strain rate
deviator; ǫ̃p: scalar equivalent plastic strain; fǫ: factor in the effective strain magnitude. Reacting
solid explosives can be represented as heterogeneous mixtures, one component being the reacted
products; reaction, a process of internal evolution, transfers material from unreacted to reacted
components. Gas-phase reaction can be represented as a homogeneous mixture, reactions
transferring mass between components representing different types of molecule. Symmetric
tensors such as the stress deviator are represented more compactly by their 6 unique upper
triangular components, e.g. using Voigt notation.
TABLE III: Outline hierarchy of material models, illustrating the use of polymorphism (in the
object-oriented programming sense).

material (or state) type      model type
mechanical equation of state  polytropic, Grüneisen, energy-based Jones-Wilkins-Lee, (ρ, T) table, etc
thermal equation of state     temperature-based Jones-Wilkins-Lee, quasiharmonic, (ρ, T) table, etc
reactive equation of state    modified polytropic, reactive Jones-Wilkins-Lee
spall                         Cochran-Banner
deviatoric stress             elastic-plastic, Steinberg-Guinan, Steinberg-Lund, Preston-Tonks-Wallace, etc
homogeneous mixture           mixing and reaction models
heterogeneous mixture         equilibration and reaction models

Continuum dynamics programs can refer to material properties as an abstract ‘material type’
with an abstract material state. The actual type of a material (e.g. mechanical equation of
state), the specific model type (e.g. polytropic), and the state of material of that type are all
handled transparently by the object-oriented software structure.
The reactive equation of state has an additional state parameter λ, and the software operations
are defined by extending those of the mechanical equation of state. Spalling materials can be
represented by a solid state plus a void fraction fv, with operations defined by extending those of
the solid material. Homogeneous mixtures are defined as a set of thermal equations of state, and
the state is the set of states and mass fractions for each. Heterogeneous mixtures are defined as a
set of ‘pure’ material properties of any type, and the state is the set of states for each component
plus its volume fraction.
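The extension-by-inheritance described above can be sketched as follows; the model forms, the reaction-rate law, and all constants are placeholders:

```python
# A reactive EOS extends a mechanical EOS with a reaction-progress
# parameter lambda, reusing the parent's operations unchanged.
class PolytropicEOS:
    def __init__(self, gamma):
        self.gamma = gamma
    def pressure(self, state):
        return (self.gamma - 1.0) * state["rho"] * state["e"]
    def new_state(self, rho, e):
        return {"rho": rho, "e": e}

class ReactiveEOS(PolytropicEOS):
    def __init__(self, gamma, q, rate):
        super().__init__(gamma)
        self.q, self.rate = q, rate      # heat of reaction, rate constant
    def new_state(self, rho, e):
        s = super().new_state(rho, e)
        s["lam"] = 0.0                   # extra state parameter lambda
        return s
    def dlam_dt(self, state):
        # placeholder first-order burn law for internal evolution
        return self.rate * (1.0 - state["lam"])

rx = ReactiveEOS(3.0, 4.0, 1.0e6)        # hypothetical parameters
s0 = rx.new_state(1.6, 0.1)
```

A spall model would extend a solid model with a void fraction fv in the same way.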
FIG. 1: Principal isentrope and shock Hugoniot for air (perfect gas): numerical calculations for
general material models, compared with analytic solutions.
FIG. 2: Shock Hugoniot for Al in pressure-temperature space, for different representations of the
equation of state (solid: Grüneisen; dashed: SESAME 3716).
FIG. 3: Principal adiabat and shock Hugoniot for Be in normal stress-compression space, neglecting
strength (dashed), for Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with
Y = 10GPa (dotted). In each pair of lines, the upper is the Hugoniot and the lower the adiabat.
FIG. 4: Principal adiabat and shock Hugoniot for Be in shock speed-normal stress space, neglecting
strength (dashed), for Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with
Y = 10GPa (dotted).
FIG. 5: Principal adiabat, shock Hugoniot, and release adiabat for Be in normal stress-temperature
space, neglecting strength (dashed), for Steinberg-Guinan strength (solid), and for elastic-perfectly
plastic with Y = 10GPa (dotted).
FIG. 6: Demonstration of shock Hugoniot solution across a phase boundary: shock-melting of Al,
for different initial porosities.
FIG. 7: Wave interactions for the impact of a flat projectile moving from left to right with a
stationary target. Dashed arrows are a guide to the sequence of states. For a projectile moving
from right to left, the construction is the mirror image reflected in the normal stress axis.
FIG. 8: Wave interactions for the release of a shocked state (shock moving from left to right) into
a stationary ‘window’ material to its right. The release state depends on whether the window has
a higher or lower shock impedance than the shocked material. Dashed arrows are a guide to the
sequence of states. For a shock moving from right to left, the construction is the mirror image
reflected in the normal stress axis.
FIG. 9: Wave interactions for the release of a shocked state by tension induced as materials try
to separate in opposite directions when joined by a bonded interface. Material damage, spall, and
separation are neglected: the construction shows the maximum tensile stress possible. For general
material properties, e.g. if plastic flow is included, the state of maximum tensile stress is not just
the negative of the initial shock state. Dashed arrows are a guide to the sequence of states. The
graph shows the initial state after an impact by a projectile moving from right to left; for a shock
moving from right to left, the construction is the mirror image reflected in the normal stress axis.
FIG. 10: Schematic of uniaxial wave interactions induced by the impact of a flat projectile with a
composite target.
FIG. 11: Hydrocode simulation of Al projectile at 3.6 km/s impacting a Mo target with a LiF
release window, 1.1µs after impact. Structures on the waves are elastic precursors.
List of figures
1. Principal isentrope and shock Hugoniot for air (perfect gas): numerical calculations
for general material models, compared with analytic solutions.
2. Shock Hugoniot for Al in pressure-temperature space, for different representations of
the equation of state.
3. Principal adiabat and shock Hugoniot for Be in normal stress-compression space,
neglecting strength (dashed), for Steinberg-Guinan strength (solid), and for elastic-
perfectly plastic with Y = 10GPa (dotted).
4. Principal adiabat and shock Hugoniot for Be in shock speed-normal stress space,
neglecting strength (dashed), for Steinberg-Guinan strength (solid), and for elastic-
perfectly plastic with Y = 10GPa (dotted).
5. Principal adiabat, shock Hugoniot, and release adiabat for Be in normal stress-
temperature space, neglecting strength (dashed), for Steinberg-Guinan strength
(solid), and for elastic-perfectly plastic with Y = 10GPa (dotted).
6. Demonstration of shock Hugoniot solution across a phase boundary: shock-melting of
Al, for different initial porosities.
7. Wave interactions for the impact of a flat projectile moving from left to right with a
stationary target. Dashed arrows are a guide to the sequence of states. For a projectile
moving from right to left, the construction is the mirror image reflected in the normal
stress axis.
8. Wave interactions for the release of a shocked state (shock moving from left to right)
into a stationary ‘window’ material to its right. The release state depends on whether
the window has a higher or lower shock impedance than the shocked material. Dashed
arrows are a guide to the sequence of states. For a shock moving from right to left,
the construction is the mirror image reflected in the normal stress axis.
9. Wave interactions for the release of a shocked state by tension induced as materials
try to separate in opposite directions when joined by a bonded interface. Material
damage, spall, and separation are neglected: the construction shows the maximum
tensile stress possible. For general material properties, e.g. if plastic flow is included,
the state of maximum tensile stress is not just the negative of the initial shock state.
Dashed arrows are a guide to the sequence of states. The graph shows the initial state
after an impact by a projectile moving from right to left; for a shock moving from
right to left, the construction is the mirror image reflected in the normal stress axis.
10. Schematic of uniaxial wave interactions induced by the impact of a flat projectile with
a composite target.
11. Hydrocode simulation of Al projectile at 3.6 km/s impacting a Mo target with a LiF
release window, 1.1µs after impact. Structures on the waves are elastic precursors.
Introduction
Conceptual structure for material properties
Idealized one-dimensional loading
Ramp compression
Shock compression
Accuracy: application to air
Complex behavior of condensed matter
Temperature
Density-temperature equations of state
Temperature model for mechanical equations of state
Strength
Preferred representation of isotropic strength
Beryllium
Phase changes
Composite loading paths
Conclusions
Acknowledgments
References
List of figures
Numerical solution of shock and ramp compression
for general material properties
Damian C. Swift*
Materials Science and Technology Division,
Lawrence Livermore National Laboratory,
7000 East Avenue, Livermore, CA 94550, U.S.A.
(Dated: March 7, 2007; revised April 8, 2008 and July 1, 2008 – LA-UR-07-2051)
Abstract
A general formulation was developed for representing material models for applications in dynamic loading. Numerical methods were devised to calculate the response to shock and ramp compression, and ramp decompression, generalizing previous solutions for scalar equations of state. The numerical methods were found to be flexible and robust, and matched analytic results to a high accuracy. The basic ramp and shock solution methods were coupled to solve for composite deformation paths, such as shocks induced by impact, and shock interactions with a planar interface between different materials. These calculations capture much of the physics of typical material dynamics experiments, without requiring spatially-resolved simulations. Example calculations were made of loading histories in metals, illustrating the effects of plastic work on the temperatures induced in quasi-isentropic and shock-release experiments, and the effect of a phase transition.
PACS numbers: 62.50.+p, 47.40.-x, 62.20.-x, 46.35.+z
Keywords: material dynamics, shock, isentrope, adiabat, numerical solution, constitutive behavior
* Electronic address: damian.swift@physics.org
http://arxiv.org/abs/0704.0008v3
I. INTRODUCTION
The continuum representation of matter is widely used for material dynamics in science and engineering. Spatially-resolved continuum dynamics simulations are the most widespread and familiar, solving the initial value problem by discretizing the spatial domain and integrating the dynamical equations forward in time to predict the motion and deformation of the components of the system. Simulations of this type are used, for example, to study hypervelocity impact problems such as the vulnerability of armor to projectiles [1, 2], the performance of satellite debris shields [3], and the impact of meteoroids with planets, notably the formation of the moon [4]. The problem can be divided into the dynamical equations of the continuum, the state field of the components s(~r), and the inherent properties of the materials. Given the local material state s, the material properties allow the stress to be determined. Given the stress field σ(~r) and the mass density field ρ(~r), the dynamical equations describe the resulting fields of acceleration, compression, and thermodynamic work done on the materials.
The equations of continuum dynamics describe the behavior of a dynamically deforming system of arbitrary complexity. Particular, simpler deformation paths can be described more compactly by different sets of equations, and solved by different techniques than are used for continuum dynamics in general. Simpler deformation paths often occur in experiments designed to develop and calibrate models of material properties, and these paths can be thought of as different ways of interrogating the material properties. The key examples in material dynamics are shock and ramp compression [5, 6]. Typical experiments are designed to induce such loading histories, and the properties of the material are measured or inferred in these states before they are destroyed by release from the edges or by reflected waves.
The development of the field of material dynamics was driven by applications in the physics of hypervelocity impact and of high explosive systems, including nuclear weapons [7]. In the regimes of interest, typically involving components with dimensions from millimeters to meters and pressures of 1 GPa to 1 TPa, the material behavior is dominated by the scalar equation of state (EOS): the relation between pressure, compression (or mass density), and internal energy. Other components of the stress (specifically shear stresses) are much smaller, and chemical explosives react rapidly, so they can be treated by simple models of complete detonation. EOS were developed as fits to experimental data, in particular to series of shock states and to isothermal compression measurements [8]. It is relatively straightforward to construct shock and ramp compression states from an EOS, algebraically or numerically depending on the EOS, and thus to adjust an EOS to fit such measurements. More recently, applications and scientific interest have grown to include a wider range of pressures and time scales, such as laser-driven inertial confinement fusion [9], and experiments are now designed to measure aspects other than the EOS, such as the kinetics of phase changes, constitutive behavior describing shear stresses, incomplete chemical reactions, and the effects of microstructure, including grain orientation and porosity. Theoretical techniques have also evolved to predict the EOS to an accuracy of ~1% [10], and elastic contributions to the shear stress with slightly lower accuracy [11].
A general scheme for representing material states is described here, and numerical methods are reported for calculating shock and ramp compression states from these general representations of material properties.
II. CONCEPTUAL STRUCTURE FOR MATERIAL PROPERTIES
The desired structure for describing material states and properties under dynamic loading was developed to be as general as possible with respect to the types of material or models to be represented in the same framework, and designed to give as much commonality as possible between spatially-resolved simulations and calculations of shock and ramp compressions.
In condensed matter on sub-microsecond time scales, heat conduction is often too slow to have a significant effect on the response of the material, and it is neglected here. The equations of non-relativistic continuum dynamics are, in Lagrangian form, i.e. along characteristics moving with the local material velocity ~u(~r),
Dρ(~r, t)/Dt = −ρ(~r, t) div ~u(~r, t)   (1)
D~u(~r, t)/Dt = (1/ρ(~r, t)) div σ(~r, t)   (2)
De(~r, t)/Dt = (1/ρ(~r, t)) σ(~r, t) : grad ~u(~r, t)   (3)
where ρ is the mass density and e the specific internal energy. Changes in e can be related to changes in the temperature T through the heat capacity. The inherent properties of each material in the problem are described by its constitutive relation or equation of state σ(s). As well as experiencing compression and work from mechanical deformation, the local material state s(~r, t) may evolve through internal processes such as plastic flow. In general,
Ds(~r, t)/Dt = ṡ(s(~r, t)),   (4)
which may also subsume the equations for Dρ/Dt and De/Dt. Thus the material properties must describe, as a minimum, σ(s) and ṡ(s) for each material. If they also describe T(s), the conductivity, and ė, then heat conduction can be treated. Other functions may be needed for particular numerical methods in continuum dynamics, such as wave speeds (e.g. the longitudinal sound speed), which are needed for time step control in explicit time integration. Internally, within the material property models, it is desirable to reuse software as much as possible, and further functions of the state are therefore desirable, to allow models to be constructed in a modular and hierarchical way. Arithmetic manipulations must be performed on the state during numerical integration, and these can be coded cleanly using operator overloading, so that an operator of the appropriate type is invoked automatically without having to include 'if-then-else' structures for each operator, as would be the case in non-object-oriented programming languages such as Fortran-77. For example, if ṡ is calculated in a forward-time numerical method, then changes of state are calculated using numerical evolution equations such as
s(t + δt) = s(t) + δt ṡ.   (5)
Thus, for a general state s and its time derivative ṡ, which has an equivalent set of components, it is necessary to be able to multiply a state by a real number and to add two states together. For a specific software implementation, other operations may be needed, for example to create, copy, or destroy an instance of a state.
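In Python, the state arithmetic required by Eq. 5 can be sketched with operator overloading; this is an illustrative analogue (the trial implementations described below were in C++ and Java, and the class layout and names here are assumptions, not the paper's actual code):

```python
from dataclasses import dataclass

@dataclass
class State:
    """Hypothetical scalar-EOS state s = {rho, e}:
    mass density and specific internal energy."""
    rho: float
    e: float

    # With these two operations, an integration scheme such as Eq. 5
    # can be written once for any state type.
    def __add__(self, other):
        return State(self.rho + other.rho, self.e + other.e)

    def __rmul__(self, a):
        return State(a * self.rho, a * self.e)

def euler_step(s, s_dot, dt):
    """Forward-time update, Eq. 5: s(t + dt) = s(t) + dt * s_dot."""
    return s + dt * s_dot

s = State(rho=1.0, e=2.0)
s_dot = State(rho=-0.1, e=0.05)   # some rate of change Ds/Dt
s2 = euler_step(s, s_dot, 0.5)
print(s2)   # State(rho=0.95, e=2.025)
```

The integrator never inspects the components of the state, which is the separation between continuum dynamics and material model that the text describes.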
The attraction of this approach is that, by choosing a reasonably general form for the constitutive relation and its associated operations, it is possible to separate the continuum dynamics part of the problem from the inherent behavior of the material. Relations describing the properties of different types of material can be encapsulated in library form, where the continuum dynamics program does not need to know anything about the relations for a specific type of material, and vice versa. Continuum dynamics programs and material property relations can then be developed and maintained independently of each other, so long as the interface remains the same (Table I). This is an efficient way to make complicated material models available for simulations of different types, including Lagrangian and Eulerian hydrocodes operating in different numbers of dimensions, and calculations of idealized loading or heating histories, such as the shock and ramp loading considered below. Software interfaces of this type have been developed in the past for scalar EOS with a single structure for the state [12], but object-oriented techniques make it practical to extend the concept to much more complicated states, to combinations of models, and to alternative types of model selected when the program runs, without having to devise a single super-set state encompassing all possible states as special cases.
A very wide range of types of material behavior can be represented with this formalism. At the highest level, different types of behavior are characterized by different structures for the state s (Table II). For each type of state, different specific models can be defined, such as the perfect gas, polytropic, and Grüneisen EOS. For each specific model, different materials are represented by choosing different values for the parameters in the model, and different local material states are represented through different values for the components of s. In object-oriented programming jargon, the ability to define an object whose precise type is not determined until the program runs is known as polymorphism. For our application, polymorphism is used at several levels in the object hierarchy, from the general type of a material (such as 'one represented by a pressure-density-energy EOS' or 'one represented by a deviatoric stress model'), through the type of relation used to describe the properties of a type of material (such as perfect gas, polytropic, or Grüneisen for a pressure-density-energy EOS, or Steinberg-Guinan [13] or Preston-Tonks-Wallace [14] for a deviatoric stress model), to the type of general mathematical function used to represent some of these relations (such as a polynomial or tabular representation of γ(ρ) in a polytropic EOS) (Table III). States or models can be defined by extending or combining other states or models – this can be implemented using the inheritance concept of object-oriented programming. Thus deviatoric stress models can be defined as an extension to any pressure-density-energy EOS (rather than being written assuming a specific type, such as the cubic Grüneisen form used by Steinberg), homogeneous mixtures can be defined as combinations of any pressure-density-temperature EOS, and heterogeneous mixtures can be defined as combinations of materials each represented by any type of material model.
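The run-time selection of model type can be sketched with a small class hierarchy; the following Python analogue is a minimal assumption (the perfect gas and polytropic examples follow the text, but the interface names are invented for illustration):

```python
from abc import ABC, abstractmethod

class EOS(ABC):
    """Generic pressure-density-energy EOS: the continuum dynamics
    code sees only this interface, never the concrete model."""
    @abstractmethod
    def pressure(self, rho, e): ...

class PerfectGas(EOS):
    def __init__(self, gamma):
        self.gamma = gamma
    def pressure(self, rho, e):
        return (self.gamma - 1.0) * rho * e   # perfect gas, cf. Eq. 18

class Polytropic(EOS):
    """Illustrative second model: gamma_fn(rho) might itself be a
    polymorphic function object (polynomial, table, ...)."""
    def __init__(self, gamma_fn):
        self.gamma_fn = gamma_fn
    def pressure(self, rho, e):
        return (self.gamma_fn(rho) - 1.0) * rho * e

def normal_stress(eos: EOS, rho, e):
    # Dispatch on the run-time model type (polymorphism); tension positive.
    return -eos.pressure(rho, e)

models = [PerfectGas(1.4), Polytropic(lambda rho: 1.4 + 0.0 * rho)]
print([normal_stress(m, 1.0, 2.5) for m in models])   # [-1.0, -1.0]
```

The caller treats both models identically, which is the property that lets model types be chosen when the program is run rather than when it is compiled.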
Trial implementations have been made as libraries in the C++ and Java programming languages [15]. The external interface to the material properties was general, at the level of representing a generic material type and state. The type of state and model was then selected at run time by the programs using the material properties library. In C++, objects that were polymorphic at run time had to be represented as pointers, requiring additional software constructs to allocate and free the physical memory associated with each object. It was possible to include general reusable functions as polymorphic objects when defining models: real functions of a real parameter could be polynomials, transcendentals, tabular with different interpolation schemes, piecewise definitions over different regions of the real line, sums, products, etc., again defined specifically at run time. Object-oriented polymorphism and inheritance were thus very powerful techniques for increasing software reuse, making the software more compact and more reliable through greater use of functions that had already been tested.
Given the conceptual and software structures designed for representing material properties suitable for use in spatially-resolved continuum dynamics simulations, we now consider the use of these generic material models to calculate idealized loading paths.
III. IDEALIZED ONE-DIMENSIONAL LOADING
Experiments to investigate the response of materials to dynamic loading, and to calibrate the parameters in models of their behavior, are usually designed to apply as simple a loading history as is consistent with the transient state of interest. The simplest canonical types of loading history are shock and ramp [5, 6]. Solution methods are presented here for calculating the result of shock and ramp loading for materials described by the generalized material models discussed in the previous section. Such direct solutions remove the need to use time- and space-resolved continuum dynamics simulations, allowing the states to be calculated with much greater efficiency and without the need to understand and account for the artifacts of resolved simulations, such as the effects of finite numerical resolution and of artificial viscosity.
A. Ramp compression
Ramp compression is taken here to mean compression or decompression. If the material is represented by an inviscid scalar EOS, i.e. neglecting dissipative processes and non-scalar effects such as elastic strain, ramp compression follows an isentrope. This is no longer true when dissipative processes such as plastic heating occur. The term 'quasi-isentropic' is sometimes used in this context, especially for shockless compression; here we prefer to refer to the thermodynamic trajectories as adiabats, since this is the more accurate term: no heat is exchanged with the surroundings on the time scales of interest.
For adiabatic compression, the state evolves according to the second law of thermodynamics,
de = T dS − p dv   (6)
where T is the temperature and S the specific entropy. Thus
ė = T Ṡ − p v̇ = T Ṡ − (p/ρ) div ~u,   (7)
or, for a more general material whose stress tensor is more complicated than a scalar pressure,
de = T dS + σ_n dv,   ė = T Ṡ + σ_n v̇,   (8)
where σ_n is the component of stress normal to the direction of deformation. The velocity gradient is expressed through a compression factor η and a strain rate ε̇. In almost all the ramp experiments used in developing and calibrating accurate material models, the strain has been applied uniaxially. More general deformation paths, for instance isotropic or including a shear component, can be treated by the same formalism; the work rate is then the full inner product of the stress and strain rate tensors.
The acceleration or deceleration of the material normal to the wave as it is compressed or expanded adiabatically is
du = ±√(dσ_n dv),   (9)
from which it can be deduced that
du = ±(c_l/v) dv = ∓ c_l dρ/ρ,   (10)
where c_l is the longitudinal wave speed.
As with continuum dynamics, the internal evolution of the material state can be calculated simultaneously with the continuum equations, or operator-split and calculated periodically at constant compression [16]. The results are the same to second order in the compression increment. Operator splitting allows the calculations to be performed without an explicit entropy, if the continuum equations are integrated isentropically and the dissipative processes are captured by the internal evolution at constant compression.
Operator splitting is desirable when the internal evolution can produce highly nonlinear changes, such as solid-gas reactions: rapid changes in state and properties can make numerical schemes unstable. Operator splitting is also desirable when the integration time step for the internal evolution is much shorter than the continuum dynamics time step. Neither of these considerations is very important for ramp compression without spatial resolution, but operator splitting was included as an option in the ramp compression calculations for consistency with continuum dynamics simulations.
The ramp compression equations were integrated using Runge-Kutta numerical schemes of second order. The fourth-order scheme is a trivial extension. The sequence of operations to calculate a ramp compression increment is:
1. Time increment:
δt = −(1/ε̇) ln(ρ/ρ′)   (11)
where ρ′ is the density at the end of the increment.
2. Predictor:
s(t + δt/2) = s(t) + (δt/2) ṡ_m(s(t), ε̇)   (12)
3. Corrector:
s(t + δt) = s(t) + δt ṡ_m(s(t + δt/2), ε̇)   (13)
4. Internal evolution:
s(t + δt) → s(t + δt) + ∫_t^{t+δt} ṡ_i(s(t′), ε̇) dt′   (14)
where ṡ_m is the model-dependent evolution of the state from the applied strain, and ṡ_i is the internal evolution at constant compression.
The independent variable for the integration is the specific volume v or the mass density ρ; finite numerical integration steps are taken in ρ and v. The step size can be controlled so that the numerical error during integration remains within chosen bounds. A tabular adiabat can be calculated by integrating over a range of v or ρ, but when simulating experimental scenarios the upper limit of the integration is usually that one of the other thermodynamic quantities reaches a given value – for example, that the normal component of the stress reaches zero, which is the case on release of a high-pressure state at a free surface. Specific end conditions were found by monitoring the quantity of interest until it was bracketed by a finite integration step, then bisecting until the stopping condition was satisfied to a chosen precision. During bisection, each trial calculation was performed as an integration from the first side of the bracket by the trial compression.
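As a concrete illustration of the predictor-corrector increment (Eqs. 12-13) with no internal evolution (ṡ_i = 0), the following sketch integrates de = −p dv for a perfect gas, stepping in v, and compares against the analytic isentrope; it is a minimal stand-in under stated assumptions, not the paper's implementation:

```python
def perfect_gas_p(v, e, gamma=1.4):
    """p = (gamma - 1) e / v for the perfect gas EOS (rho = 1/v)."""
    return (gamma - 1.0) * e / v

def ramp_compress(v0, e0, v1, n=1000, gamma=1.4):
    """Integrate de = -p dv (Eq. 6 with dS = 0) using the second-order
    predictor-corrector of Eqs. 12-13, with v as independent variable."""
    v, e = v0, e0
    dv = (v1 - v0) / n
    for _ in range(n):
        # predictor: half-step using the rate at the current state
        e_half = e - perfect_gas_p(v, e, gamma) * dv / 2.0
        # corrector: full step using the midpoint rate
        e = e - perfect_gas_p(v + dv / 2.0, e_half, gamma) * dv
        v += dv
    return v, e

v0, e0, gamma = 1.0, 1.0, 1.4
v1, e1 = ramp_compress(v0, e0, 0.5, n=1000)
p_numeric = perfect_gas_p(v1, e1, gamma)
p_exact = perfect_gas_p(v0, e0, gamma) * (v0 / v1) ** gamma   # isentrope, Eq. 19
print(abs(p_numeric / p_exact - 1.0))   # small: second-order convergence
```

Halving the step size should reduce the printed error by roughly a factor of four, consistent with a second-order scheme.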
B. Shock compression
Shock compression is the solution of a Riemann problem for the dynamics of a jump in compression moving with constant speed and constant thickness. The Rankine-Hugoniot (RH) equations [5] describing the shock compression of matter are derived in the continuum approximation, where the shock is a formal discontinuity in the continuum fields. In reality, matter is composed of atoms, and shocks have a finite width governed by the kinetics of dissipative processes – at a fundamental level, matter does not distinguish between shock compression and ramp compression at a high strain rate, but the RH equations apply so long as the width of the region of matter where the unresolved processes occur is constant. Compared with the isentropic states induced by ramp compression in a material represented by an EOS, a shock always increases the entropy and hence the temperature. With dissipative processes included, the distinction between a ramp and a shock can become blurred.
The RH equations express the conservation of mass, momentum, and energy across a moving discontinuity in state. They are usually expressed in terms of the pressure, but are readily generalized for materials supporting shear stresses by using the component of stress normal to the shock (i.e. parallel to the direction of shock propagation),
u_s^2 = −v0^2 (σ_n − σ_n0)/(v0 − v),   (15)
u_p^2 = −(σ_n − σ_n0)(v0 − v),   (16)
e = e0 − (1/2)(σ_n + σ_n0)(v0 − v),   (17)
where u_s is the speed of the shock wave with respect to the material, u_p is the change in material velocity normal to the shock wave (i.e. parallel to its direction of propagation), and the subscript 0 refers to the initial state.
The RH relations can be applied to general material models if a time scale or strain rate is imposed, and an orientation is chosen for the material with respect to the shock. Shock compression in continuum dynamics is almost always uniaxial.
The RH equations involve only the initial and final states of the material. If a material has properties that depend on the deformation path – such as plastic flow or viscosity – then physically the detailed shock structure can make a difference [17]. This is a limitation of discontinuous shocks in continuum dynamics; it can be addressed, as noted above, by including dissipative processes and treating the shock as a ramp compression, if the processes can be represented adequately in the continuum approximation. Spatially-resolved simulations using numerical differencing to obtain spatial derivatives and forward-time differencing are usually unable to represent shock discontinuities directly, and an artificial viscosity is used to smear the shock compression over a few spatial cells [18]. The path followed by the material in thermodynamic space is then a smooth adiabat, with dissipative heating supplied by the artificial viscosity. If plastic work is also included during this adiabatic compression, the total heating for a given compression is greater than from the RH equations. To be consistent, plastic flow should be neglected while the artificial viscosity is non-zero. This localized disabling of physical processes, particularly time-dependent ones, during the passage of the unphysically smeared shock was found necessary previously for numerically stable simulations of detonation waves with reactive flow [19].
Detonation waves are reactive shock waves. Steady planar detonation (the Chapman-Jouguet state [20]) can be calculated using the RH relations, by imposing the condition that the material state behind the shock is fully reacted.
Various numerical methods have been used to solve the RH equations for materials represented by an EOS alone [21, 22]. The general RH equations can be solved numerically for a given shock compression by varying the specific internal energy e until the normal stress from the material model equals that from the RH energy equation, Eq. 17. The shock and particle velocities are then calculated from Eqs 15 and 16. This numerical method is particularly convenient for EOS of the form p(ρ, e), since e can be varied directly. Solutions can still be found for general material models using ė(s), in which case the energy is varied until the solution is found.
Numerically, the solution was found by bracketing and bisection:
1. For a given compression, take the low-energy end of the bracket from a nearby state s⁻ (e.g. the previous, lower-compression state on the Hugoniot), compressed adiabatically (state s′) and cooled so that the specific internal energy is e(s⁻).
2. Bracket the desired state: apply successively larger heating increments, evolving each trial state internally, until the σ_n(s) of the material model exceeds the σ_n(e − e0) of Eq. 17.
3. Bisect in e, evolving each trial state internally, until σ_n(s) equals σ_n(e − e0) to the desired precision.
As with ramp compression, the independent variable for the solution was the mass density, and finite steps were taken. Each shock state was calculated independently of the others, so numerical errors did not accumulate along the shock Hugoniot. The accuracy of the solution was independent of the step size. A tabular Hugoniot can be calculated by solving over a range of ρ. When simulating experimental scenarios, it is often desired to calculate the shock state at which one of the other thermodynamic quantities reaches a given value – often that u_p and σ_n match the values from another, simultaneous shock calculation for a different material, which is the situation in impact and shock-transmission problems, discussed below. Specific stopping conditions were checked by monitoring the quantity of interest until it was bracketed by a finite solution step, then bisecting until the stopping condition was satisfied to a chosen precision. During bisection, each trial calculation was performed as a shock from the initial conditions to the trial shock compression.
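The bracket-and-bisect solution can be illustrated for the perfect gas EOS, where the result is checkable analytically; the sketch below bisects in e on the RH energy equation (Eq. 17, written with p = −σ_n), and the function names are illustrative assumptions:

```python
def hugoniot_state(v, v0, e0, p0, gamma=1.4, tol=1e-12):
    """Solve the RH equations for a given compression v < v0 by
    bisection in specific internal energy e (perfect gas EOS)."""
    def mismatch(e):
        p_model = (gamma - 1.0) * e / v          # EOS pressure p(rho, e)
        p_rh = 2.0 * (e - e0) / (v0 - v) - p0    # pressure implied by Eq. 17
        return p_model - p_rh

    # Bracket: grow the upper energy until the mismatch changes sign.
    e_lo, e_hi = e0, 2.0 * e0
    while mismatch(e_hi) > 0.0:
        e_hi *= 2.0
    # Bisect to the desired precision.
    while e_hi - e_lo > tol:
        e_mid = 0.5 * (e_lo + e_hi)
        if mismatch(e_mid) > 0.0:
            e_lo = e_mid
        else:
            e_hi = e_mid
    e = 0.5 * (e_lo + e_hi)
    p = (gamma - 1.0) * e / v
    us = v0 * ((p - p0) / (v0 - v)) ** 0.5       # Eq. 15 with p = -sigma_n
    up = ((p - p0) * (v0 - v)) ** 0.5            # Eq. 16 with p = -sigma_n
    return p, e, us, up

v0, e0, gamma = 1.0, 1.0, 1.4
p0 = (gamma - 1.0) * e0 / v0
p, e, us, up = hugoniot_state(0.5, v0, e0, p0)
print(round(p, 6), round(e, 6))   # 1.1 1.375 for this twofold compression
```

The solution satisfies mass conservation across the shock, rho0 u_s = rho (u_s − u_p), which provides an independent consistency check.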
C. Accuracy: application to air
The accuracy of these numerical schemes was tested by comparison with the analytic shock and ramp compression of a material represented by a perfect gas EOS,
p = (γ − 1) ρ e.   (18)
The numerical solution requires a value to be chosen for each parameter in the material model, here γ. Air was chosen as the example material, with γ = 1.4. Air at ambient temperature and pressure has ρ0 ~ 10^-3 g/cm^3 and e0 ~ 0.25 MJ/kg. Isentropes for the perfect gas EOS have the form
p ρ^(−γ) = constant,   (19)
and shock Hugoniots have the form
p = (γ − 1) [2 e0 ρ0 ρ + p0 (ρ − ρ0)] / [(γ + 1) ρ0 − (γ − 1) ρ].   (20)
The numerical solutions reproduced the principal isentrope and Hugoniot to 10^-3 % and 0.1% respectively, for a compression increment of 1% along the isentrope and a solution tolerance of 10^-6 GPa for each shock state (Fig. 1). Over most of the range, the error in the Hugoniot was 0.02% or less, approaching 0.1% only near the maximum shock compression.
IV. COMPLEX BEHAVIOR OF CONDENSED MATTER
The capability to calculate shock and ramp loci in state space, i.e. as a function of the variety of loading conditions, is particularly convenient for investigating complex aspects of the response of condensed matter to dynamic loading. Each locus can be obtained from a single series of shock or ramp solutions, rather than having to perform a series of time- and space-resolved continuum dynamics simulations, varying the initial or boundary conditions, and post-processing the solutions. We consider the calculation of temperature in the scalar EOS, the effect of material strength, and the effect of phase changes.
A. Temperatura
The continuum dynamics equations can be closed using a mechanical EOS relating stress
to mass density, strain, and internal energy. For a scalar EOS, the ideal form for closing the
continuum equations is p(ρ, e), with s = {ρ, e} the natural choice of primitive state
fields. However, temperature is needed as a parameter in physically-based descriptions of many
contributions to the constitutive response, including plastic flow, phase transitions, and
chemical reactions. Here we discuss the calculation of temperature in different forms of the
scalar EOS.
1. Density-temperature equations of state
If the scalar EOS is constructed from its underlying physical contributions for continuum
dynamics, it may take the form e(ρ, T), from which p(ρ, T) can be calculated using the
second law of thermodynamics [10]. An example is the 'SESAME' form of EOS, based on
interpolated tabular relations for {p, e}(ρ, T) [23]. A pair of relations {p, e}(ρ, T) can be
used as a mechanical EOS by eliminating T, which is equivalent to inverting e(ρ, T) to find
T(ρ, e) and substituting into p(ρ, T). For a general e(ρ, T) relation, e.g. for the
SESAME EOS, the inverse can be calculated numerically as required, along an isochore. In
this way, a {p, e}(ρ, T) EOS can be used as a p(ρ, e) EOS.
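A minimal sketch of this elimination, assuming e(ρ, T) is monotonic in T along each isochore; perfect-gas closures stand in here for tabular SESAME interpolation, and all names are illustrative:

```python
def invert_temperature(e_of, rho, e_target, t_lo=1.0, t_hi=1.0e6, tol=1e-9):
    """Find T such that e_of(rho, T) = e_target by bisection along an isochore."""
    for _ in range(200):
        t_mid = 0.5 * (t_lo + t_hi)
        if e_of(rho, t_mid) < e_target:
            t_lo = t_mid
        else:
            t_hi = t_mid
        if t_hi - t_lo < tol:
            break
    return 0.5 * (t_lo + t_hi)

def p_mechanical(p_of, e_of, rho, e):
    """Use a {p, e}(rho, T) pair as a mechanical p(rho, e) EOS by eliminating T."""
    return p_of(rho, invert_temperature(e_of, rho, e))

# Perfect-gas stand-ins for the tabular relations: e = cv T, p = (gamma - 1) rho e.
cv, gamma = 717.0, 1.4
e_of = lambda rho, T: cv * T
p_of = lambda rho, T: (gamma - 1.0) * rho * cv * T
```

As the text notes below, if the tabulated e(ρ, T) is not monotonic in T, this inversion is non-unique and the two state representations can disagree.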
Alternatively, the same p(ρ, T) relation can be used directly with a primitive state field that
includes the temperature instead of the energy: s = {ρ, T}. The evolution of the state under
mechanical work then involves calculating ∂T/∂e, i.e. the reciprocal of the specific heat
capacity, which is a derivative of e(ρ, T). Since this calculation does not require e(ρ, T) to be
inverted, it is computationally more efficient to use a {p, e}(ρ, T) EOS with a temperature-
based state rather than an energy-based one. The main disadvantage is that it is more difficult
to guarantee exact energy conservation as the continuum dynamics equations are integrated in
time, but any deviation from exact conservation is at the level of accuracy of the algorithm
used to integrate the heat capacity.
Both EOS structures have been implemented for material property calculations. Taking
a SESAME-type EOS, thermodynamic loci were calculated with either {ρ, e} or {ρ, T} primitive
states, for comparison (Fig. 2). For a monotonic EOS, the results were indistinguishable
to within the differences from forward or backward interpolation of the tabular relations. When
the EOS, or the effective surface obtained with a given order of interpolating function, was not
monotonic, the results varied greatly because of the non-uniqueness in eliminating T for the
{ρ, e} primitive state.
2. Temperature model for mechanical equations of state
Mechanical EOS are often available as empirical, algebraic relations p(ρ, e), derived from
shock data. Temperature can be calculated without altering the mechanical EOS by adding
a relation T(ρ, e). While this relation could take any form in principle, one can also follow
the logic of the Grüneisen EOS, in which the pressure is defined in terms of its deviation
{p − pr, e − er} from a reference curve {pr, er}(ρ). Temperatures can thus be calculated by
reference to a compression curve along which the temperature and specific internal energy
are known, {Tr, er}(ρ), and a specific heat capacity defined as a function of density, cv(ρ).
In the calculations, this augmented EOS was represented as a 'mechanical-thermal' form
comprising any p(ρ, e) EOS plus the reference curves – an example of software inheritance
and polymorphism.
A natural reference curve for temperature is the cold curve, Tr = 0 K. The cold curve
can be estimated from the principal isentrope e(ρ)|s0 using the estimated density variation
of the Grüneisen parameter:

er(ρ) = e(ρ)|s0 − T0 cv exp[a(1 − ρ0/ρ)] (ρ/ρ0)^(γ0−a) (21)

[24]. In this work, the principal isentrope was calculated in tabular form from the mechanical
EOS, using the ramp compression algorithm described above.
Empirical EOS are calibrated against experimental data. Shock and adiabatic compression
measurements on strong materials inevitably include elastic-plastic contributions as well as
the scalar EOS itself. If the elastic-plastic contributions are not accounted for
systematically, the EOS may implicitly include contributions from strength. A single
scalar EOS can be constructed to reproduce the normal stress as a function of compression
for any single loading path: shock or adiabat, for a constant or smoothly-varying strain
rate. Such an EOS would not in general predict the response to other loading histories. The
EOS and constitutive properties of the materials considered here were constructed self-
consistently from shock data – this does not mean that the models are accurate for other loading
paths, since neither the EOS nor the strength model includes all the physical terms that real
materials exhibit. This does not matter in any case for the purpose of demonstrating the
properties of the numerical schemes.
This mechanical-thermal procedure was applied to Al using a Grüneisen EOS fitted to the
same shock data used to calculate the {p, e}(ρ, T) EOS discussed above [24]. Temperatures
were in agreement (Fig. 2). The mechanical-thermal calculations required a similar
computational effort to the tabular {p, e}(ρ, T) EOS with {ρ, T} primitive states (and
were therefore much more efficient than the tabular EOS with {ρ, e} states), and described the
EOS much more compactly.
B. Strength
For dynamic compressions of o(10 GPa) and above, on microsecond timescales, the flow
stress of solids is often treated as a correction or small perturbation to the scalar EOS.
However, flow stresses have been observed to be much larger on nanosecond timescales
[25], and interactions between elastic and plastic waves can have a significant effect on
the compression and on wave propagation. The Rankine-Hugoniot equations must then be solved
self-consistently with strength included.
1. Preferred representation of isotropic strength
There is an inconsistency in the standard continuum dynamics treatment of the scalar (pres-
sure) and tensor (stress) responses. The scalar EOS expresses the pressure p(ρ, e) as the
dependent quantity, which is the most convenient form for use in the continuum equations.
The usual practice for the deviatoric response is sub-Hookean elasticity (the hypoelastic form)
[16] (Table II), in which the state parameters include the stress deviator σ', evolving as

σ̇' = G(s) ε̇' (22)

where G is the shear modulus and ε̇' the strain rate deviator. The isotropic and devia-
toric contributions to the stress are thus not treated equivalently: the pressure is calculated
from a local state involving a strain-like parameter (the mass density), whereas the stress de-
viator is evolved from the time derivative of the strain. This inconsistency causes problems along
complicated loading paths, because G varies strongly with compression: if a material is sub-
jected to a shear strain ε', then isotropic compression (increasing the shear modulus from
G to G′, leaving σ' unchanged), then shear unloading to an isotropic stress state, the true unloading
strain is ε', whereas the hypoelastic calculation would require a strain of (G/G′)ε'. Using
Be and the Steinberg-Guinan strength model as an example of the difference between
hypoelastic and hyperelastic calculations, consider an initial strain to a flow stress of 0.3 GPa
followed by isotropic compression to 100 GPa: the strain to unload to a state
of isotropic stress is 0.20% (hyperelastic) and 0.09% (hypoelastic). The discrepancy arises
because the hypoelastic model does not increase the deviatoric stress on compression at
constant deviatoric strain.
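The discrepancy can be seen in a deliberately simplified scalar sketch — a one-dimensional stand-in for the tensor equations, not the paper's full model:

```python
def unload_strain_hypoelastic(G_load, G_unload, shear_strain):
    """Rate form (cf. Eq. 22): deviatoric stress accumulated at modulus G_load
    must be removed incrementally at the new, stiffer modulus G_unload."""
    stress = G_load * shear_strain
    return stress / G_unload

def unload_strain_hyperelastic(shear_strain):
    """Total form (cf. Eq. 23): the stress vanishes exactly when the elastic
    strain is removed, regardless of how the modulus changed en route."""
    return shear_strain
```

With G′ = 2G, the hypoelastic unload strain is half the true value, mirroring the factor G/G′ in the argument above.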
The stress can instead be treated as a direct response of the material to the instantaneous state
of elastic strain: σ(ε', T). This relation can be predicted directly from electronic structure
calculations of the stress tensor in a solid for a given state of compression and elastic strain [11],
and it is a direct generalization of the scalar equation of state. A more consistent choice
of state parameters is to use the strain deviator rather than the stress deviator, and to calculate σ'
from scratch when needed using

σ' = G(s) ε' (23)

– a hyperelastic formulation. The state parameters are then {ρ, e, ε', ε̄p}.
The two formulations give different answers when deviatoric strain is accumulated
at different compressions, in which case the hyperelastic formulation is the correct one. If the shear
modulus varies with the strain deviator – i.e., for nonlinear elasticity – then the definition of
G(ε') must be adjusted to give the same stress for a given strain.
Many isotropic strength models use scalar measures of the strain and stress to charac-
terize work hardening and to apply a flow stress yield model:

ε̄ = fε ||ε'||,   σ̄ = fσ ||σ'||. (24)

Different workers have used inconsistent conventions for the equivalent scalar measures.
In the present work, the common shock-physics convention was used, in which the scalar flow
stress satisfies σ̄ = Y at yield, where Y is the flow stress. For consistency with published speeds
and amplitudes for elastic waves, the corresponding scalar factors were adopted, in contrast
with other values used previously for lower strain-rate deformation [26]. In principle, the values
of fε and fσ do not matter, so long as the strength parameters are calibrated using the same
values as are used in any simulations.
2. Beryllium
The flow stress deduced from laser-driven shock experiments on Be crystals a few tens
of micrometers thick is high, around 5-9 GPa [25], much greater than the flow stress
measured on microsecond timescales. A time-dependent crystal plasticity model for Be is being
developed, and the behavior under dynamic loading depends on the detailed time depen-
dence of the plasticity. Calculations were performed with the Steinberg-Guinan strength model
developed for microsecond-scale data [24] and, for rough comparison, with an
elastic-perfectly-plastic response with a flow stress of 10 GPa. The elastic-perfectly-plastic
model neglected pressure- and work-hardening.
Calculations were made of the principal adiabat and the shock Hugoniot, and of a release
adiabat from a state on the principal Hugoniot. The calculations were made with and without
strength. Considering the state trajectories in stress-volume space, it is interesting to note
that heating from plastic flow can push the adiabat above the Hugoniot, because of the
greater heating obtained by integrating along the adiabat compared with jumping from
the initial to the final state on the Hugoniot (Fig. 3). Even with an elastic-perfectly-plastic
strength model, the curves with strength do not lie exactly (2/3)Y above the curves without strength,
because heating from plastic flow contributes an increasing amount of internal energy to the
EOS as the compression increases.
An important feature for the seeding of instabilities by microstructural variations
in shock response is the shock stress at which an elastic wave no longer runs ahead of the
shock. In Be with the high flow stress of the nanosecond response, the relation between shock
and particle speeds is significantly different from the relation for the low flow stress (Fig. 4). For
the low flow stress, the elastic wave travels at 13.2 km/s. A plastic shock travels faster than this
for pressures above 110 GPa, independently of the constitutive model. The speed of a
plastic shock following the initial elastic wave is similar to the low-strength case, because the
material is already at its flow stress, but the speed of a single plastic shock is significantly
higher.
For compression to a given normal stress, the temperature is significantly higher with
plastic flow included. The additional heating is particularly striking along the adi-
abat: the temperature departs significantly from the principal isentrope. Thus ramp-wave
compression of strong materials can lead to significant levels of heating, contrary to
the assumption of small temperature rises [27]. Plastic flow is largely irreversible, so
heating occurs on unloading as well as on loading. Thus, on adiabatic release from a shock-
compressed state, additional heating occurs compared with the case without strength. These
levels of heating are important, as shock or release melting can occur at a lower
shock pressure than would be expected if the effect of strength were ignored (Fig. 5).
C. Phase changes
An important property of condensed matter is phase changes, including solid-solid poly-
morphism and melting. An equilibrium phase diagram can be represented as a single
overall EOS surface as before. Multiple competing phases with kinetics for each phase trans-
formation can conveniently be represented using the structure described above for general
material properties, for instance by describing the local state as a set of volume fractions
fi of each possible phase with a simple EOS, with transition and equilibration rates between them.
This model is described in more detail elsewhere [19]. However, it is interesting to investi-
gate the robustness of the numerical scheme for calculating shock Hugoniots when the EOS
has the discontinuities in value and gradient associated with phase changes.
The EOS of molten metal, and the solid-liquid phase transition, can be represented to a
reasonable approximation as an adjustment to the EOS of the solid:

p_twophase(ρ, e) = p_solid(ρ, ẽ), (25)

where

ẽ = e if T(ρ, e) < Tm(ρ), and ẽ = e − hm otherwise,

and hm is the specific latent heat of fusion. Taking the EOS and a modified Lindemann
melting curve for Al [24], the shock Hugoniot algorithm was
found to operate stably through the phase transition (Fig. 6).
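The adjustment in Eq. 25 amounts to a single branch on the internal energy; a sketch with illustrative stand-ins for the solid EOS, temperature relation, and melt curve (these closures are not Al's actual EOS or Lindemann curve):

```python
def p_two_phase(rho, e, p_solid, T_of, T_melt, h_m):
    """Melt-adjusted EOS (Eq. 25): below the melt curve, use the solid EOS
    directly; above it, subtract the specific latent heat of fusion h_m
    from the internal energy before evaluating the solid EOS."""
    e_eff = e if T_of(rho, e) < T_melt(rho) else e - h_m
    return p_solid(rho, e_eff)

# Illustrative closures only:
p_solid = lambda rho, e: 0.4 * rho * e
T_of = lambda rho, e: e / 900.0
T_melt = lambda rho: 1000.0
```

The branch introduces exactly the kind of discontinuity in value and gradient that the Hugoniot algorithm must tolerate, which is what Fig. 6 tests.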
V. COMPOSITE LOADING PATHS
Given methods for calculating shock and adiabatic loading paths from arbitrary initial
states, a considerable variety of experimental scenarios can be treated via the interaction
of loading or unloading waves with interfaces between different materials, in planar geometry
for uniaxial compression. The key physical constraint is that, if two different materials are
to remain in contact after an interaction such as an impact or the passage of a shock, the
normal stress and the particle velocity in both materials must be equal on either side of the
interface. The change in particle velocity and normal stress across waves was calculated above
for compression waves running in the direction of increasing spatial ordinate (left to right).
Across an interface, the sense is reversed for the material on the left. Thus a projectile
impacting a stationary target to its right is decelerated from its initial velocity by the shock
induced by the impact.
The general problem at an interface can be analyzed by considering the states at the
instant of first contact – on impact, or when a shock traveling through a sandwich of ma-
terials first reaches the interface. The initial states are {ul, sl; ur, sr}. The final states are
{uj, s′l; uj, s′r}, where uj is the joint particle velocity, σn(s′l) = σn(s′r), and each s′i is connected
to si by a shock or an adiabat, starting from the appropriate initial velocity and stress, and
with the orientation given by the side of the system on which each material occurs. Each type of wave
is considered in turn, looking for an intersection in the particle velocity-normal stress plane. Examples of such
wave interactions are the impact of a projectile with a stationary target (Fig. 7), release of a
shocked state at a free surface or at a material (e.g. a window) of lower shock impedance (hence
reflecting a release wave back into the shocked material – Fig. 8), a shock arriving at a surface with a
material of higher shock impedance (Fig. 8), and tension induced as materials try to separate
in opposite directions at a bonded interface (Fig. 9). Each of these scenarios
can occur in turn following the impact of a projectile with a target: if the target is layered,
a shock is transmitted through each interface with a release or a reshock reflected back,
depending on the materials; release ultimately occurs at the rear of the projectile and the
far end of the target, and the oppositely-moving release waves subject the projectile and
target to tensile stresses when they interact (Fig. 10).
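The intersection construction just described can be sketched for the simplest case — impact of a flat projectile on a stationary target — using linear shock speed-particle speed fits in place of the full Hugoniot solutions; all parameter values below are illustrative, not from the paper:

```python
def hugoniot_p(rho0, c0, s, u):
    """Shock pressure at particle speed u, with the linear fit us = c0 + s*u."""
    return rho0 * (c0 + s * u) * u

def impact_state(projectile, target, u_impact, iters=200):
    """Bisect for the joint particle velocity where the target Hugoniot
    (shocked from rest) meets the projectile Hugoniot (decelerated from
    u_impact); returns (u_joint, pressure). Materials are (rho0, c0, s)."""
    rho0p, c0p, sp = projectile
    rho0t, c0t, st = target
    lo, hi = 0.0, u_impact
    for _ in range(iters):
        u = 0.5 * (lo + hi)
        # Target pressure rises with u; projectile pressure falls with u,
        # so bisection brackets the unique intersection.
        if hugoniot_p(rho0t, c0t, st, u) < hugoniot_p(rho0p, c0p, sp, u_impact - u):
            lo = u
        else:
            hi = u
    return u, hugoniot_p(rho0t, c0t, st, u)
```

For a symmetric impact (identical projectile and target materials) the joint particle velocity is half the impact velocity, a standard consistency check on the construction.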
As an illustration of combined shock and ramp loading calculations, consider the problem
of an Al projectile, traveling initially at 3.6 km/s, impacting a stationary composite target
comprising a Mo sample and a LiF release window [28, 29]. The shock and release states were
calculated using published material properties [24]. The initial shock state was calculated to
have a normal stress of 63.9 GPa. On reaching the LiF, the shock was calculated to transmit
at 27.1 GPa, reflecting as a release in the Mo. These stresses match continuum dynamics
simulation to within 0.1 GPa in the Mo and 0.3 GPa in the LiF, using the same material
properties (Fig. 11). The associated wave and particle velocities match with similar accuracy;
wave velocities are much harder to extract from continuum dynamics simulation.
An extension of this analysis can be used to calculate the interaction of oblique shocks
with an interface [30].
VI. CONCLUSIONS
A general formulation was developed for representing material models for applications in
dynamic loading, suitable for software implementation in object-oriented programming lan-
guages. Numerical methods were devised to calculate the response of matter represented
by the general material models to shock and ramp compression, and to ramp decompression,
by direct evaluation of the thermodynamic pathways for these compressions rather than via
spatially-resolved simulations. This approach is a generalization of earlier work on solutions
for materials represented by a scalar equation of state. The numerical methods were found
to be flexible and robust: capable of application to materials with very different properties.
The numerical solutions matched analytic results to high accuracy.
Care is needed with the interpretation of some types of physical response, such as plas-
tic flow, when applied to deformation at high strain rates. The underlying time dependence
of the processes occurring during deformation must be taken into account. The actual history
of loading and heating experienced by the material during the passage of a shock can influence
the end state – this history is not captured in the continuum approximation to material
dynamics, where shocks are treated as discontinuities. Thus care is also needed in spatially-
resolved simulations when shocks are modeled using artificial viscosity to smear them
unphysically over a finite thickness.
Example calculations were made to demonstrate the operation of the shock and
ramp compression algorithms with material models representative of complex solids, including strength
and phase transformations.
The basic ramp and shock solution methods were also coupled to solve composite
loading paths, such as impact-induced shocks and shock interactions with a planar interface
between different materials. Such calculations capture much of the physics of typical ma-
terial dynamics experiments, without requiring spatially-resolved simulations. Results
from the direct solution of the relevant shock and ramp loading conditions were compared with
hydrocode simulations, showing complete consistency.
Acknowledgments

Ian Gray introduced the author to the concept of multi-model material properties soft-
ware. Lee Markland developed a prototype Hugoniot-calculating computer program for
equations of state while working for the author as an undergraduate summer student.
The evolving work on material properties libraries was supported by the U.K. Atomic
Weapons Establishment, Fluid Gravity Engineering Ltd, and Wessex Scientific and Technical
Services Ltd. Refinements of the technique and applications to the problems described were
performed at Los Alamos National Laboratory (LANL) and Lawrence Livermore National
Laboratory (LLNL).
The work was performed partially in support of, and funded by, the National Nuclear
Security Administration's inertial confinement fusion program at LANL (managed by Steven Batha),
and the Laboratory-Directed Research and Development project 06-SI-004 at LLNL (Principal
Investigator: Hector Lorenzana). The work was performed under the auspices of the U.S.
Department of Energy under contracts W-7405-ENG-36, DE-AC52-06NA25396, and DE-
AC52-07NA27344.
References

[1] J.K. Dienes, J.M. Walsh, in R. Kinslow (Ed), "High-Velocity Impact Phenomena" (Academic
Press, New York, 1970).
[2] D.J. Benson, Comp. Mech. 15, 6, pp 558-571 (1995).
[3] J.W. Gehring, Jr, in R. Kinslow (Ed), "High-Velocity Impact Phenomena" (Academic Press,
New York, 1970).
[4] R.M. Canup, E. Asphaug, Nature 412, pp 708-712 (2001).
[5] For a recent review and introduction, see for example: M.R. Boslough and J.R. Asay, in J.R. Asay,
M. Shahinpoor (Eds), "High-Pressure Shock Compression of Solids" (Springer-Verlag, New
York, 1992).
[6] For example, C.A. Hall, J.R. Asay, M.D. Knudson, W.A. Stygar, R.B. Spielman, T.D. Pointon,
D.B. Reisman, A. Toor, and R.C. Cauble, Rev. Sci. Instrum. 72, 3587 (2001).
[7] M.A. Meyers, "Dynamic Behavior of Materials" (Wiley, New York, 1994).
[8] R.G. McQueen, S.P. Marsh, J.W. Taylor, J.N. Fritz, W.J. Carter, in R. Kinslow (Ed), "High-
Velocity Impact Phenomena" (Academic Press, New York, 1970).
[9] J.D. Lindl, "Inertial Confinement Fusion" (Springer-Verlag, New York, 1998).
[10] D.C. Swift, G.J. Ackland, A. Hauer, G.A. Kyrala, Phys. Rev. B 64, 214107 (2001).
[11] J.P. Poirier, G.D. Price, Phys. of the Earth and Planetary Interiors 110, pp 147-156 (1999).
[12] I.N. Gray, P.C. Thompson, B.J. Parker, D.C. Swift, J.R. Maw, A. Giles et al (AWE
Aldermaston), unpublished.
[13] D.J. Steinberg, S.G. Cochran, M.W. Guinan, J. Appl. Phys. 51, 1498 (1980).
[14] D.L. Preston, D.L. Tonks, and D.C. Wallace, J. Appl. Phys. 93, 211 (2003).
[15] A version of the software, including representative parts of the material model library and the
algorithms for calculating the ramp adiabat and shock Hugoniot, is available as a supplemen-
tary file provided with the preprint of this manuscript, arXiv:0704.0008
(http://arxiv.org/abs/0704.0008). Software support,
and versions with additional models, are available commercially from Wessex Scientific and
Technical Services Ltd (http://wxres.com).
[16] D. Benson, Computer Methods in Appl. Mechanics and Eng. 99, 235 (1992).
[17] J.L. Ding, J. Mech. and Phys. of Solids 54, pp 237-265 (2006).
[18] J. von Neumann, R.D. Richtmyer, J. Appl. Phys. 21, 3, pp 232-237 (1950).
[19] R.M. Mulford, D.C. Swift, in preparation.
[20] W. Fickett, W.C. Davis, "Detonation" (University of California Press, Berkeley, 1979).
[21] R. Menikoff, B.J. Plohr, Rev. Mod. Phys. 61, pp 75-130 (1989).
[22] A. Majda, Mem. Amer. Math. Soc. 41, 275 (1983).
[23] K.S. Holian (Ed.), T-4 Handbook of Material Properties Data Bases, Vol 1c: Equations of State,
Los Alamos National Laboratory report LA-10160-MS (1984).
[24] D.J. Steinberg, Equation of State and Strength Properties of Selected Materials, Lawrence
Livermore National Laboratory report UCRL-MA-106439 change 1 (1996).
[25] D.C. Swift, T.E. Tierney, S.-N. Luo, D.L. Paisley, G.A. Kyrala, A. Hauer, S.R. Greenfield,
A.C. Koskelo, K.J. McClellan, H.E. Lorenzana, D. Kalantar, B.A. Remington, P. Peralta,
E. Loomis, Phys. Plasmas 12, 056308 (2005).
[26] R. Hill, "The Mathematical Theory of Plasticity" (Clarendon Press, Oxford, 1950).
[27] C.A. Hall, Phys. Plasmas 7, 5, pp 2069-2075 (2000).
[28] D.C. Swift, A. Seifter, D.B. Holtkamp, and D.A. Clark, Phys. Rev. B 76, 054122 (2007).
[29] A. Seifter and D.C. Swift, Phys. Rev. B 77, 134104 (2008).
[30] E. Loomis, D.C. Swift, J. Appl. Phys. 103, 023518 (2008).
TABLE I: Interface to material models needed for explicit forward-time continuum dynamics
simulations.

purpose                          interface calls
program setup                    read/write material data
continuum dynamics equations     stress(state)
time step control                sound speed(state)
state evolution (deformation)    d(state)/dt(state, grad ~u)
state evolution (heating)        d(state)/dt(state, ė)
internal state evolution         d(state)/dt
state manipulation               create and delete states
                                 add states
                                 multiply state by a scalar
                                 check self-consistency

Parentheses in the interface calls denote functions, e.g. 'stress(state)' for 'stress as a function of
state'. The evolution functions are shown in operator-split form,
which is more robust for explicit forward-time numerical solutions and can also be used for
calculations of the shock Hugoniot and ramp compression. The self-consistency checks include
that the mass density is positive and that the volume or mass fractions of the components of a mixture sum to one.
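The interface of Table I maps naturally onto an abstract base class; a minimal Python sketch (the class and method names are illustrative, not the paper's actual software, and only a subset of the interface calls is shown):

```python
from abc import ABC, abstractmethod

class MaterialModel(ABC):
    """Abstract interface in the spirit of Table I: continuum dynamics code
    calls these entry points without knowing the concrete model type."""

    @abstractmethod
    def stress(self, state):
        """Stress as a function of state (pressure, for a scalar EOS)."""

    @abstractmethod
    def sound_speed(self, state):
        """Sound speed as a function of state, for time step control."""

    @abstractmethod
    def d_state_dt_deformation(self, state, div_u):
        """State evolution under deformation; a scalar EOS needs only the
        trace of grad ~u, i.e. div ~u."""

class PerfectGas(MaterialModel):
    """Concrete mechanical EOS with state = (rho, e): p = (gamma - 1) rho e."""

    def __init__(self, gamma):
        self.gamma = gamma

    def stress(self, state):
        rho, e = state
        return (self.gamma - 1.0) * rho * e  # returns the pressure p

    def sound_speed(self, state):
        rho, e = state
        return (self.gamma * (self.gamma - 1.0) * e) ** 0.5  # c^2 = gamma p / rho

    def d_state_dt_deformation(self, state, div_u):
        rho, e = state
        p = (self.gamma - 1.0) * rho * e
        # Mass conservation and mechanical work (operator-split form):
        return (-rho * div_u, -p * div_u / rho)
```

A continuum dynamics driver can then hold a list of `MaterialModel` references and dispatch on the abstract interface, which is the polymorphism that Table III illustrates.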
TABLE II: Examples of types of material model, distinguished by different structures in the state
vector.

model                            state vector s     effect of mechanical strain ṡ(s, grad ~u)
mechanical equation of state     {ρ, e}             {div ~u, −p div ~u/ρ}
thermal equation of state        {ρ, T}             {div ~u, −p div ~u/(ρ cv)}
heterogeneous mixture            {ρ, e, {fv}i}      {div ~u, −p div ~u/ρ, {0}i}
homogeneous mixture              {ρ, T, {fm}i}      {div ~u, −p div ~u/(ρ cv), {0}i}
traditional deviatoric strength  {ρ, e, σ', ε̄p}     {div ~u, (−p div ~u + fp σ' : ε̇p)/ρ, G ε̇'e, f ε̇p}

Symbols – ρ: mass density; e: specific internal energy; T: temperature; fv: volume fraction;
fm: mass fraction; σ': stress deviator; fp: fraction of plastic work converted to heat; grad ~up:
plastic part of the velocity gradient; G: shear modulus; ε̇'e, ε̇'p: elastic and plastic parts of the strain rate
deviator; ε̄p: scalar equivalent plastic strain; f: factor in the effective strain magnitude. Reacting
solid explosives can be represented as heterogeneous mixtures, one component being the reaction
products; reaction, an internal evolution process, transfers material from the unreacted to the reacted
components. Gas-phase reaction can be represented as a homogeneous mixture, reactions
transferring mass between components representing different types of molecule. Symmetric
tensors such as the stress deviator are represented more compactly by their six unique upper-
triangular components, e.g. using Voigt notation.
TABLE III: Outline of the hierarchy of material models, illustrating the use of polymorphism (in the
object-oriented programming sense).

material type                    model (or state) type
mechanical equation of state     polytropic, Grüneisen, energy-based
                                 Jones-Wilkins-Lee, {p, e}(ρ, T) table, etc.
thermal equation of state        temperature-based Jones-Wilkins-
                                 Lee, quasiharmonic table, etc.
reactive equation of state       modified polytropic, reactive Jones-
                                 Wilkins-Lee
spall                            Cochran-Banner
deviatoric stress                elastic-plastic, Steinberg-Guinan,
                                 Steinberg-Lund, Preston-Tonks-
                                 Wallace, etc.
homogeneous mixture              mixing and reaction models
heterogeneous mixture            equilibration and reaction models

Continuum dynamics programs can refer to material properties as an abstract 'material type'
with an abstract material state. The actual type of a material (e.g. mechanical equation of
state), the specific model of that type (e.g. polytropic), and the material state of that type are all
handled transparently by the object-oriented software structure.
The reactive equation of state has an additional state parameter λ (the extent of reaction), and the software operations
are defined by extending those of the mechanical equation of state. Spalling materials can be
represented by a solid state plus a void fraction fv, with operations defined by extending those of
the solid material. Homogeneous mixtures are defined as a set of thermal equations of state, and
the state is the set of states and mass fractions for each. Heterogeneous mixtures are defined as a
set of pure material properties of any type, and the state is the set of states for each component
plus its volume fraction.
FIG. 1: Principal isentrope and shock Hugoniot for air (perfect gas), as functions of mass density
(g/cm3): numerical calculations for general material models, compared with analytic solutions.
FIG. 2: Shock Hugoniot for Al in pressure-temperature space, for the different representations of the
equation of state (solid: Grüneisen; dashed: SESAME 3716).
FIG. 3: Principal adiabat and shock Hugoniot for Be in normal stress-volume compression space, neglecting
strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly-plastic strength with
Y = 10 GPa (dotted). In each pair of lines, the upper is the Hugoniot and the lower the adiabat.
[Figure 4: axis: normal stress (GPa), 0 to 140; features labeled: elastic wave, plastic shock.]
FIG. 4: Principal adiabat and shock Hugoniot for Be in shock velocity-normal stress space, neglecting strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with Y = 10 GPa (dotted).
[Figure 5: axis: temperature (K), 0 to 5000; curves labeled: principal adiabat, principal Hugoniot, release adiabat.]
FIG. 5: Principal adiabat, shock Hugoniot, and release adiabat for Be in normal stress-temperature space, neglecting strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with Y = 10 GPa (dotted).
[Figure 6: axis: temperature (K), 0 to 5000; curves labeled: melt locus, solid Hugoniot.]
FIG. 6: Demonstration of the shock Hugoniot solution through a phase boundary: shock melting of Al, for different initial porosities.
[Figure 7: normal stress vs particle velocity; labeled: initial state of the projectile, initial state of the target, principal Hugoniot of the target, principal Hugoniot of the projectile, shock state at their intersection.]
FIG. 7: Wave interactions for the impact of a planar projectile moving from left to right into a stationary target. Dashed arrows are a guide to the sequence of states. For a projectile moving from right to left, the construction is the mirror image reflected in the normal stress axis.
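The intersection construction of Fig. 7 is the standard impedance-matching calculation: the shock state is the particle velocity u at which the target's principal Hugoniot meets the projectile's Hugoniot reflected about its initial velocity v. A sketch using linear us = c0 + s*up Hugoniots, so that the shock stress is p = ρ0 (c0 + s u) u (the numerical parameters in the usage note are illustrative, not the paper's):

```python
def hugoniot_stress(rho0, c0, s, u):
    """Shock stress on a principal Hugoniot with linear us = c0 + s*up.
    With rho0 in g/cm^3 and velocities in km/s, the result is in GPa."""
    return rho0 * (c0 + s * u) * u

def impact_state(target, projectile, v, tol=1e-10):
    """Impedance match for planar impact at velocity v.

    Solves rho_t*(c_t + s_t*u)*u = rho_p*(c_p + s_p*(v-u))*(v-u)
    for the common particle velocity u by bisection on [0, v];
    the left side increases in u and the right side decreases,
    so the difference has exactly one sign change.
    """
    rho_t, c_t, s_t = target
    rho_p, c_p, s_p = projectile
    f = lambda u: (hugoniot_stress(rho_t, c_t, s_t, u)
                   - hugoniot_stress(rho_p, c_p, s_p, v - u))
    lo, hi = 0.0, v
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    return u, hugoniot_stress(rho_t, c_t, s_t, u)
```

For a symmetric impact (same material on both sides) the matched particle velocity is exactly v/2, which is a convenient sanity check, e.g. `impact_state((2.785, 5.328, 1.338), (2.785, 5.328, 1.338), 2.0)` gives u = 1.0 km/s.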
[Figure 8: stress vs particle velocity; labeled: states, initial shock state in the target, secondary Hugoniot of the target, principal Hugoniots of a high-impedance window and a low-impedance window, target release isentrope, target release at a free surface, window release.]
FIG. 8: Wave interactions for the release of a shocked state (shock moving from left to right) into a stationary 'window' material at its right. The release state depends on whether the window has a higher or lower shock impedance than the shocked material. Dashed arrows are a guide to the sequence of states. For a shock moving from right to left, the construction is the mirror image reflected in the normal stress axis.
[Figure 9: stress vs particle velocity; labeled: projectile release, target release, release in projectile and target, final tensile state in projectile and target, initial shock state.]
FIG. 9: Wave interactions for the release of a shocked state by the tension induced as materials try to separate in opposite directions when joined at a bonded interface. Material damage, spall, and separation are neglected: the construction shows the maximum possible tensile stress. For general material properties, e.g. if plastic flow is included, the maximum tensile state is not simply the negative of the initial shock state. Dashed arrows are a guide to the sequence of states. The graph shows the initial state after an impact by a projectile moving from right to left; for a shock moving from right to left, the construction is the mirror image reflected in the normal stress axis.
[Figure 10: schematic; labeled: stress, target, impact shocks, transmitted shock, reflected wave, free-surface release, release interactions.]
FIG. 10: Schematic of the uniaxial wave interactions induced by the impact of a planar projectile with a composite target.
[Figure 11: axis: position (mm), 0 to 20; regions labeled LiF, Al, Mo; features labeled: reflected, transmitted, release, shock, original shock state.]
FIG. 11: Hydrocode simulation of an Al projectile at 3.6 km/s impacting a Mo target with a LiF release window, 1.1 μs after impact. The structures in the waves are elastic precursors.
List of Figures
1. Principal isentrope and shock Hugoniot for air (perfect gas): numerical calculations for general material models, compared with analytic solutions.
2. Shock Hugoniot for Al in pressure-temperature space, for different representations of the equation of state.
3. Principal adiabat and shock Hugoniot for Be in normal stress-compression space, neglecting strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with Y = 10 GPa (dotted).
4. Principal adiabat and shock Hugoniot for Be in shock velocity-normal stress space, neglecting strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with Y = 10 GPa (dotted).
5. Principal adiabat, shock Hugoniot, and release adiabat for Be in normal stress-temperature space, neglecting strength (dashed), with Steinberg-Guinan strength (solid), and for elastic-perfectly plastic with Y = 10 GPa (dotted).
6. Demonstration of the shock Hugoniot solution through a phase boundary: shock melting of Al, for different initial porosities.
7. Wave interactions for the impact of a planar projectile moving from left to right into a stationary target. Dashed arrows are a guide to the sequence of states. For a projectile moving from right to left, the construction is the mirror image reflected in the normal stress axis.
8. Wave interactions for the release of a shocked state (shock moving from left to right) into a stationary 'window' material at its right. The release state depends on whether the window has a higher or lower shock impedance than the shocked material. Dashed arrows are a guide to the sequence of states. For a shock moving from right to left, the construction is the mirror image reflected in the normal stress axis.
9. Wave interactions for the release of a shocked state by the tension induced as materials try to separate in opposite directions when joined at a bonded interface. Material damage, spall, and separation are neglected: the construction shows the maximum possible tensile stress. For general material properties, e.g. if plastic flow is included, the maximum tensile state is not simply the negative of the initial shock state. Dashed arrows are a guide to the sequence of states. The graph shows the initial state after an impact by a projectile moving from right to left; for a shock moving from right to left, the construction is the mirror image reflected in the normal stress axis.
10. Schematic of the uniaxial wave interactions induced by the impact of a planar projectile with a composite target.
11. Hydrocode simulation of an Al projectile at 3.6 km/s impacting a Mo target with a LiF release window, 1.1 μs after impact. The structures in the waves are elastic precursors.
Introduction
Conceptual structure for material properties
Idealized one-dimensional loading
Ramp compression
Shock compression
Accuracy: application to air
More complex behavior of condensed matter
Temperature
Density-temperature equations of state
Temperature model for mechanical equations of state
Strength
Preferred representation of isotropic strength
Beryllium
Phase changes
Composite loading paths
Conclusions
Acknowledgments
References
List of Figures