# SEED MAGNETIC FIELDS FROM THE BREAKING OF LORENTZ INVARIANCE
## 1 Introduction
Magnetic fields of nearby galaxies, coherent over $`Mpc`$ scales, are estimated to be of order $`B\sim 10^{-6}`$ G. The most plausible explanation for these fields involves some sort of dynamo effect. Indeed, if one assumes that a galactic dynamo has operated for about 10 Gyr, then a seed magnetic field could be amplified by a factor of $`e^{30}`$, and the galactic magnetic fields observed at present may have had their origin in a seed magnetic field of about $`B\sim 10^{-19}`$ G. On the other hand, the galactic magnetic fields can emerge directly from the compression of a primordial magnetic field in the collapse of protogalactic clouds. In this case, a seed magnetic field of $`B\sim 10^{-9}`$ G is required over a scale $`\lambda \sim `$ Mpc, the comoving size of a region which condenses to form a galaxy. Since the Universe has behaved as a good conductor through most of its history, the evolution of any primeval cosmic magnetic field will conserve magnetic flux. Thus, the ratio, denoted by $`r`$, of the energy density of a magnetic field $`\rho _B=\frac{B^2}{8\pi }`$ to the energy density of the cosmic microwave background radiation $`\rho _\gamma =\frac{\pi ^2}{15}T^4`$ remains essentially constant and provides an invariant measure of magnetic field strength. It then follows that pregalactic magnetic fields of about $`r\sim 10^{-34}`$ are required if one invokes dynamo amplification processes, and $`r\sim 10^{-8}`$ if one assumes only the collapse of protogalactic clouds.
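As a quick arithmetic check on these numbers (ours, not in the original text): an amplification factor of $`e^{30}`$ corresponds to about thirteen orders of magnitude, so
$$e^{30}\simeq 1.1\times 10^{13},\qquad B_{seed}\sim \frac{B_{obs}}{e^{30}}\sim \frac{10^{-6}\mathrm{G}}{10^{13}}\sim 10^{-19}\mathrm{G},$$
consistent with the quoted seed value.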
In what follows we shall describe a mechanism to generate primordial magnetic fields that is based on a putative violation of Lorentz invariance in string field theory solutions and that relies on inflation for the amplification of quantum fluctuations of the electromagnetic field. Invoking a period of inflation to explain the creation of seed magnetic fields is quite an attractive suggestion, as inflation provides the means of generating large-scale phenomena from microphysics operating on subhorizon scales. Indeed, inflation, through de Sitter-space-produced quantum fluctuations, generates excitations of the electromagnetic field, allowing for an increase of the magnetic flux before the Universe is filled with a highly conducting plasma. Furthermore, in this amplification process, long-wavelength modes, for which $`\lambda \gtrsim H^{-1}`$, are enhanced.
However, it is not possible to produce the required seed magnetic fields from a conformally invariant theory such as the usual U(1) gauge theory. The reason is that, in a conformally invariant theory, the magnetic field decreases as $`a^{-2}`$, where $`a`$ is the scale factor, while during inflation the total energy density in the Universe is constant, so the magnetic field energy density is strongly suppressed, yielding $`r=10^{-104}\lambda _{Mpc}^{-4}`$.
In the context of string field theory, there exist solutions where conformal invariance may be broken, due to the possibility of spontaneous breaking of Lorentz invariance. This possibility arises explicitly from solutions of the string field theory of the open bosonic string: interactions are cubic in the string field, and these give origin, in the static field theory potential, to cubic interaction terms of the type $`SSS`$, $`STT`$ and $`TTT`$, where $`S`$ and $`T`$ denote scalar and tensor fields. Lorentz invariance may then be broken, as can be seen, for instance, from the static potential involving the tachyon, $`\phi `$, and a generic vector field. The vacuum of this model is unstable and gives rise to a mass-squared term for the vector field that is proportional to $`\phi `$. If $`\phi `$ is negative, then the Lorentz symmetry itself is spontaneously broken, as the vector field can acquire a non-vanishing vacuum expectation value. This mechanism can give rise to vacuum expectation values for tensor fields, inducing, for the fields that do not acquire vacuum expectation values, such as the photon, mass-squared terms proportional to $`T`$. Hence, one should expect from this mechanism terms for the photon of the form $`TA_\mu A^\mu `$, $`T_{\mu \nu }A^\mu A^\nu `$ and so on. Naturally, these terms explicitly break the conformal invariance of the electromagnetic action.
Observational constraints on the breaking of Lorentz invariance, arising from measurements of the time dependence of the quadrupole splitting of nuclear Zeeman levels along Earth’s orbit, have been performed over the years, the most recent one indicating that $`\delta <3\times 10^{-21}`$. Bounds on the violation of momentum conservation and the existence of a preferred reference frame can also be extracted from limits on the parametrized post-Newtonian parameter $`\alpha _3`$ obtained from the period of millisecond pulsars, namely $`|\alpha _3|<2.2\times 10^{-20}`$, implying that the Lorentz symmetry is unbroken at this level. These limits indicate that if Lorentz invariance is broken, then its violation is suppressed by powers of energy over the string scale. Similar conclusions can be drawn for possible violations of the CPT symmetry.
In order to relate the theoretical possibility of spontaneous breaking of Lorentz invariance to the observational limits discussed above we parametrize the vacuum expectation values of the Lorentz tensors in the following way:
$$\langle T\rangle =m_L^2\left(\frac{E}{M_S}\right)^{2l},$$
(1)
where $`m_L`$ is a light mass scale when compared to the typical string energy scale, $`M_S`$, for which we assume $`M_S\simeq M_P`$, $`M_P`$ being the Planck mass; $`E`$ is the temperature of the Universe in a given period and $`2l`$ is a positive integer. We shall further replace the temperature of the Universe by the inverse of the scale factor, given that the expansion of the Universe is adiabatic. Parametrization (1) is similar to the one used in previous work.
## 2 Generation of Seed Magnetic Fields
We consider spatially flat Friedmann-Robertson-Walker cosmologies, with the metric written in conformal time, $`\eta `$, the corresponding scale factor being $`a(\eta )`$, and the stress tensor that of a perfect fluid. The Hubble constant is written as $`H_0=100h_0\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and the present Hubble radius is $`R_0=10^{26}h_0^{-1}\mathrm{m}`$, where $`0.4\lesssim h_0\lesssim 1`$. We shall assume the Universe has gone through a period of exponential inflation at a scale $`M_{GUT}`$, whose associated energy density is given by $`\rho _I\simeq M_{GUT}^4`$. Hence, from the Friedmann equation, $`H_{dS}=(\frac{8\pi }{3})^{1/2}\frac{M_{GUT}^2}{M_P}`$.
From our discussion on the breaking of Lorentz invariance we consider for simplicity only the term, $`TA_\mu A^\mu `$, from which follows the Lagrangian density for the photon:
$$\mathcal{L}=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+M_L^2a^{-2l}A_\mu A^\mu ,$$
(2)
where $`M_L^2\equiv \frac{m_L^2}{M_P^{2l}}`$. One can readily obtain the wave equation for the magnetic field:
$$\frac{1}{a^2}\frac{\partial ^2}{\partial \eta ^2}\left(a^2\vec{B}\right)-\nabla ^2\vec{B}+\frac{n}{\eta ^2}\vec{B}=0.$$
(3)
The corresponding equation for the Fourier components of $`\vec{B}`$ is given by:
$$\ddot{\vec{F}}_k+k^2\vec{F}_k+\frac{n}{\eta ^2}\vec{F}_k=0,$$
(4)
where the dots denote derivatives with respect to conformal time and $`\vec{F}_k(\eta )\equiv a^2\int d^3x\,e^{-i\vec{k}\cdot \vec{x}}\vec{B}(\vec{x},\eta )`$, $`\vec{F}_k`$ being a measure of the magnetic flux associated with the comoving scale $`\lambda \sim k^{-1}`$. The energy density of the magnetic field is given by $`\rho _B(k)\propto \frac{|\vec{F}_k|^2}{a^4}`$.
For modes well outside the horizon, $`a\lambda \gg H^{-1}`$ or $`|k\eta |\ll 1`$, solutions of Eq. (4) are given in terms of the conformal time:
$$|\vec{F}_k|\propto \eta ^{m_\pm },$$
(5)
where $`m_\pm =\frac{1}{2}\left[1\pm \sqrt{1-4n}\right]`$.
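For completeness, the exponents follow from a one-line substitution (our own step, implicit in the text): for $`|k\eta |\ll 1`$ the $`k^2`$ term in Eq. (4) is negligible, and inserting the ansatz $`\vec{F}_k\propto \eta ^m`$ gives
$$m(m-1)\eta ^{m-2}+n\eta ^{m-2}=0\quad \Rightarrow \quad m^2-m+n=0,$$
whose roots are precisely $`m_\pm `$.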
By requiring that $`n`$ not be a growing function of conformal time, it follows that either $`n`$ is a constant or $`2l`$ is negative, the latter being excluded by our assumption (1). Hence, for the different phases of evolution of the Universe:
(I) Inflationary de Sitter (dS) phase, where $`a(\eta )=-\frac{1}{\eta H_{dS}}`$: it follows that $`l=0`$ and
$$n=-\frac{M_{dS}^2}{H_{dS}^2},$$
(6)
where we label $`M_L`$ by an index indicating the relevant phase of evolution of the Universe.
(II) Phase of Reheating (RH) and Matter Domination (MD), where $`a(\eta )=\frac{1}{4}H_0^2R_0^3\eta ^2`$: the condition that $`n`$ be constant yields $`2l=3`$ and
$$n=-\frac{4M_{MD}^2}{H_0^2R_0^3}.$$
(7)
(III) Phase of Radiation Domination (RD), where $`a(\eta )=H_0R_0^2\eta `$, from which it follows that $`l=2`$ and
$$n=\frac{M_{RD}^2}{H_0^2R_0^4}.$$
(8)
It is clear that in this last case $`n\ll 1`$.
Assuming the Universe has gone through a period of inflation at scale $`M_{GUT}`$ and that the fluctuations of the electromagnetic field came out of the horizon when the Universe had gone through about $`55`$ $`e`$-foldings of inflation, one obtains, in terms of $`r`$:
$$r\simeq (7\times 10^{-25})^{2(p+2)}\times \left(\frac{M_{GUT}}{M_P}\right)^{4(q-p)/3}\times \left(\frac{T_{RH}}{M_P}\right)^{2(2q-p)/3}\times \left(\frac{T_{\ast }}{M_P}\right)^{-8q/3}\times \lambda _{Mpc}^{-2(p+2)},$$
(9)
where $`T_{\ast }`$ is the temperature at which plasma effects become dominant, which can be estimated from the reheating process as $`T_{\ast }=\mathrm{min}\{(T_{RH}M_{GUT})^{1/2};(T_{RH}^2M_P)^{1/3}\}`$. For the reheating temperature we assume either a poor or a quite efficient reheating, $`T_{RH}=\{10^9\mathrm{GeV};M_{GUT}\}`$. Finally, $`p\equiv m_{-dS}=\frac{1}{2}\left[1-\sqrt{1+\left(\frac{2M_{dS}}{H_{dS}}\right)^2}\right]`$ and $`q\equiv m_{+RH}=\frac{1}{2}\left[1+\sqrt{1+16\frac{M_L^2}{H_0^2R_0^3}}\right]`$ correspond to the fastest growing solutions for $`\vec{F}_k`$ in the de Sitter and reheating phases, respectively.
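For orientation (our own numerical check of these scales): with poor reheating, $`T_{RH}=10^9`$ GeV and $`M_{GUT}=10^{16}`$ GeV,
$$(T_{RH}M_{GUT})^{1/2}\simeq 3\times 10^{12}\mathrm{GeV},\qquad (T_{RH}^2M_P)^{1/3}\simeq (10^{18}\times 1.2\times 10^{19})^{1/3}\simeq 2\times 10^{12}\mathrm{GeV},$$
so $`T_{\ast }\simeq 2\times 10^{12}`$ GeV; with efficient reheating, $`T_{RH}=M_{GUT}`$, the first branch gives $`T_{\ast }=10^{16}`$ GeV, below the second ($`\simeq 10^{17}`$ GeV).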
In order to obtain numerical estimates for $`r`$ we have to compute $`M_L`$. In the de Sitter phase we have $`M_{dS}=m_{dS}`$. As we have seen, $`m_L`$ is a light energy scale when compared to $`M_P`$ and $`M_{GUT}`$, and hence we introduce a parameter, $`\chi `$, such that $`m_{dS}=\chi M_{GUT}`$, with $`\chi \ll 1`$.
In the matter domination phase, we have to impose that the mass term $`M_{MD}=m_{MD}(\frac{T_\gamma }{M_P})^l`$, $`T_\gamma `$ being the temperature of the cosmic background radiation at about the recombination time, is consistent with the present-day limits on the photon mass, $`m_\gamma \lesssim 3\times 10^{-36}\mathrm{GeV}`$. Thus, in the matter domination phase, we have to satisfy the condition $`M_{MD}\lesssim m_\gamma `$, which implies that $`m_{MD}(\frac{T_\gamma }{M_P})^{3/2}\lesssim 3\times 10^{-36}\mathrm{GeV}`$, from which it follows that $`m_{MD}\lesssim 7.8\times 10^4\mathrm{GeV}`$. A more stringent bound on $`m_{MD}`$ could be obtained from the limit $`m_\gamma \lesssim 1.7\times 10^{-42}h_0\mathrm{GeV}`$, arising from the absence of rotation in the polarization of light from distant galaxies due to the Faraday effect.
We present in the following table our estimates of the ratio $`r`$ for $`M_{GUT}=10^{16}\mathrm{GeV}`$. One can see that we obtain values in the range $`10^{-35}<r<10^{-9}`$, where a poor reheating and the lower values of $`\chi `$ tend to render $`r`$ too low even for amplification via dynamo processes. Estimates for different values of $`M_{GUT}`$ can be found in Ref. .
(Table: estimates of $`r`$ for $`M_{GUT}=10^{16}`$ GeV; not reproduced here.)
## 3 Summary
We have shown that the strength of the magnetic field produced by the spontaneous breaking of the Lorentz symmetry in the context of string theory, together with inflation, is sensitive to the values of the light mass $`m_L`$ (cf. Eq. (1)), of $`M_{GUT}`$, and of the reheating temperature $`T_{RH}`$. Our results indicate that for a rather diverse set of values of these parameters we can obtain values of $`r`$ that are consistent with amplification via the galactic dynamo or the collapse of protogalactic clouds.
# Analysis of Approximate Nearest Neighbor Searching with Clustered Point Sets
The support of the National Science Foundation under grant CCR–9712379 is gratefully acknowledged.
## 1 Introduction
Nearest neighbor searching is the following problem: we are given a set $`S`$ of $`n`$ data points in a metric space, $`X`$, and are asked to preprocess these points so that, given any query point $`q\in X`$, the data point nearest to $`q`$ can be reported quickly. Nearest neighbor searching has applications in many areas, including knowledge discovery and data mining, pattern recognition and classification, machine learning, data compression, multimedia databases, document retrieval, and statistics.
There are many possible choices of the metric space. Throughout we will assume that the space is $`R^d`$, real $`d`$-dimensional space, where distances are measured using any Minkowski $`L_m`$ distance metric. For any integer $`m\ge 1`$, the $`L_m`$-distance between points $`p=(p_1,p_2,\mathrm{\dots },p_d)`$ and $`q=(q_1,q_2,\mathrm{\dots },q_d)`$ in $`R^d`$ is defined to be the $`m`$-th root of $`\sum _{1\le i\le d}|p_i-q_i|^m`$. The $`L_1`$, $`L_2`$, and $`L_{\mathrm{\infty }}`$ metrics are the well-known Manhattan, Euclidean and max metrics, respectively.
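As a concrete reference point (a minimal sketch of the definition above; the function name and style are ours, not from any of the implementations discussed later):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// L_m (Minkowski) distance between two d-dimensional points.
// m = 1 gives the Manhattan distance and m = 2 the Euclidean
// distance; the max metric is the limit of large m.
double minkowski_dist(const std::vector<double>& p,
                      const std::vector<double>& q, int m) {
    double sum = 0.0;
    for (std::size_t i = 0; i < p.size(); ++i)
        sum += std::pow(std::abs(p[i] - q[i]), m);
    return std::pow(sum, 1.0 / m);
}
```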
Our primary focus is on data structures that are stored in main memory. Since data sets can be large, we limit ourselves to consideration of data structures whose total space grows linearly with $`d`$ and $`n`$. Among the most popular methods are those based on hierarchical decompositions of space. The seminal work in this area was by Friedman, Bentley, and Finkel, who showed that $`O(n)`$ space and $`O(\mathrm{log}n)`$ query time are achievable for fixed dimensional spaces in the expected case, for data distributions of bounded density, through the use of kd-trees. There have been numerous variations on this theme. However, all known methods suffer from the fact that, as dimension increases, either running time or space increases exponentially with dimension.
The difficulty of obtaining algorithms that are efficient in the worst case with respect to both space and query time suggests the alternative problem of finding approximate nearest neighbors. Consider a set $`S`$ of data points in $`R^d`$ and a query point $`q\in R^d`$. Given $`ϵ>0`$, we say that a point $`p\in S`$ is a $`(1+ϵ)`$-approximate nearest neighbor of $`q`$ if
$$\text{dist}(p,q)\le (1+ϵ)\text{dist}(p^{\ast },q),$$
where $`p^{\ast }`$ is the true nearest neighbor to $`q`$. In other words, $`p`$ is within relative error $`ϵ`$ of the true nearest neighbor. The approximate nearest neighbor problem has been heavily studied recently. Examples include algorithms by Bern, Arya and Mount, Arya et al., Clarkson, Chan, Kleinberg, Indyk and Motwani, and Kushilevitz, Ostrovsky and Rabani.
In this study we restrict attention to data structures of size $`O(dn)`$ based on hierarchical spatial decompositions, and the kd-tree in particular. In large part this is because of the simplicity and widespread popularity of this data structure. A kd-tree is a binary tree based on a hierarchical subdivision of space by splitting hyperplanes that are orthogonal to the coordinate axes. It is described further in the next section. A key issue in the design of the kd-tree is the choice of the splitting hyperplane. Friedman, Bentley, and Finkel proposed a splitting method based on selecting the hyperplane orthogonal to the axis along which the points have the greatest spread, passing through the median coordinate. They called the resulting tree an optimized kd-tree, and henceforth we call the resulting splitting method the standard splitting method. Another common alternative uses the shape of the cell, rather than the distribution of the data points. It splits each cell through its midpoint by a hyperplane orthogonal to its longest side. We call this the midpoint split method.
A number of other data structures for nearest neighbor searching based on hierarchical spatial decompositions have been proposed. Yianilos introduced the vp-tree. Rather than using an axis-aligned plane to split a node, as in the kd-tree, it uses a data point, called the vantage point, as the center of a hypersphere that partitions the space into two regions. There has also been quite a bit of interest from the field of databases. There are several data structures for database applications based on $`R`$-trees and their variants. For example, the X-tree improves the performance of the R-tree by avoiding high overlap. Another example is the SR-tree. The TV-tree uses a different approach to deal with high dimensional spaces. It reduces dimensionality by maintaining a number of active dimensions. When all data points in a node share the same coordinate of an active dimension, that dimension is deactivated and the set of active dimensions shifts.
In this paper we study the performance of two other splitting methods and compare them against the standard kd-tree splitting method. The first, called sliding-midpoint, is a splitting method that was introduced by Mount and Arya in the ANN library for approximate nearest neighbor searching. This method was introduced into the library in order to better handle highly clustered data sets. We know of no analysis (empirical or theoretical) of this method. It was designed as a simple technique for addressing one of the most serious flaws in the standard kd-tree splitting method: when the data points are highly clustered in low-dimensional subspaces, the standard kd-tree splitting method may produce highly elongated cells, and these can lead to slow query times. This splitting method starts with a simple midpoint split of the longest side of the cell, but if this split results in either subcell containing no data points, it translates (or “slides”) the splitting plane in the direction of the points until hitting the first data point. In Section 3.1 we describe this splitting method and analyze some of its properties.
The second splitting method, called minimum-ambiguity, is a query-based technique. The tree is given not only the data points, but also a collection of sample query points, called the training points. The algorithm applies a greedy heuristic to build the tree in an attempt to minimize the expected query time on the training points. We model query processing as the problem of eliminating data points from consideration as possible candidates for the nearest neighbor. Given a collection of query points, we can model any stage of the nearest neighbor algorithm as a bipartite graph, called the candidate graph, whose vertices correspond to the union of the data points and the query points, and in which each query point is adjacent to the subset of data points that might be its nearest neighbor. The minimum-ambiguity method selects, at each stage, the splitting plane that eliminates the maximum number of remaining edges in the candidate graph. In Section 3.2 we describe this splitting method in greater detail.
We implemented these two splitting methods, along with the standard kd-tree splitting method. We compared them on a number of synthetically generated point distributions, which were designed to model low-dimensional clustering. We believe this type of clustering is not uncommon in many application data sets. We used synthetic data sets, as opposed to standard benchmarks, so that we could adjust the strength and dimensionality of the clustering. Our results show that these new splitting methods can provide significant improvements over the standard kd-tree splitting method for data sets with low-dimensional clustering. The rest of the paper is organized as follows. In the next section we present background information on the kd-tree and how to perform nearest neighbor searches in this tree. In Section 3 we present the two new splitting methods. In Section 4 we describe our implementation and present our empirical results.
## 2 Background
In this section we describe how kd-trees are used for performing exact and approximate nearest neighbor searching. Bentley introduced the kd-tree as a generalization of the binary search tree in higher dimensions. Each node of the tree is implicitly associated with a $`d`$-dimensional rectangle, called its cell. The root node is associated with the bounding rectangle, which encloses all of the data points. Each node is also implicitly associated with the subset of data points that lie within this rectangle. (Data points lying on the boundary between two rectangles may be associated with either.) If the number of points associated with a node falls below a given threshold, called the bucket size, then this node is a leaf, and these points are stored with the leaf. (In our experiments we used a bucket size of one.) Otherwise, the construction algorithm selects a splitting hyperplane, which is orthogonal to one of the coordinate axes and passes through the cell. There are a number of splitting methods that may be used for choosing this hyperplane. We will discuss these in greater detail below. The hyperplane subdivides the associated cell into two subrectangles, which are then associated with the children of this node, and the points are subdivided among these children according to which side of the hyperplane they lie on. Each internal node of the tree is associated with its splitting hyperplane (which may be given as the index of the orthogonal axis and a cutting value along this axis).
Friedman, Bentley and Finkel present an algorithm to find the nearest neighbor using kd-trees. They introduce the following splitting method, which we call the standard splitting method. For each internal node, the splitting hyperplane is chosen to be orthogonal to the axis along which the points have the greatest spread (difference of maximum and minimum). The splitting point is chosen at the median coordinate, so that the two subsets of data points have nearly equal sizes. The resulting tree has $`O(n)`$ size and $`O(\mathrm{log}n)`$ height. White and Jain proposed an alternative, called the VAM-split, with the same basic idea, except that the splitting dimension is chosen to be the one with the maximum variance.
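To make the construction concrete, the following C++ sketch builds a kd-tree with the standard splitting method (our own code, not the ANN implementation; error handling and memory management are omitted for brevity):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

using Point = std::vector<double>;

struct KdNode {
    int cut_dim = -1;          // splitting axis; -1 marks a leaf
    double cut_val = 0.0;      // cutting value along that axis
    KdNode* left = nullptr;    // points with coordinate <= cut_val
    KdNode* right = nullptr;   // the remaining points
    std::vector<Point> bucket; // data points stored at a leaf
};

// Standard splitting method: cut at the median coordinate along the
// dimension in which the points have the greatest spread.
KdNode* build_standard(std::vector<Point> pts, std::size_t bucket_size = 1) {
    KdNode* node = new KdNode;
    if (pts.size() <= bucket_size) {
        node->bucket = std::move(pts);
        return node;
    }
    const std::size_t d = pts[0].size();
    int dim = 0;
    double best_spread = -1.0;
    for (std::size_t i = 0; i < d; ++i) {
        auto cmp = [i](const Point& a, const Point& b) { return a[i] < b[i]; };
        double lo = (*std::min_element(pts.begin(), pts.end(), cmp))[i];
        double hi = (*std::max_element(pts.begin(), pts.end(), cmp))[i];
        if (hi - lo > best_spread) { best_spread = hi - lo; dim = (int)i; }
    }
    const std::size_t mid = pts.size() / 2;   // median: nearly equal subsets
    std::nth_element(pts.begin(), pts.begin() + mid, pts.end(),
        [dim](const Point& a, const Point& b) { return a[dim] < b[dim]; });
    node->cut_dim = dim;
    node->cut_val = pts[mid][dim];
    node->left  = build_standard(std::vector<Point>(pts.begin(), pts.begin() + mid), bucket_size);
    node->right = build_standard(std::vector<Point>(pts.begin() + mid, pts.end()), bucket_size);
    return node;
}
```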
Queries are answered by a simple recursive algorithm. In the basis case, when the algorithm arrives at a leaf of the tree, it computes the distance from the query point to each of the data points associated with this node. The smallest such distance is saved. When arriving at an internal node, it first determines the side of the associated hyperplane on which the query point lies; the query point is necessarily closer to the cell of the child on that side. The search recursively visits this child. On returning from the search, it determines whether the cell associated with the other child is closer to the query point than the closest point seen so far. If so, this child is also visited recursively. When the search returns from the root, the closest point seen is returned. An important observation is that, for each query point, every leaf cell whose distance from the query point is less than the distance to the nearest neighbor will be visited by the algorithm.
It is an easy matter to generalize this search algorithm for answering approximate nearest neighbor queries. Let $`ϵ`$ denote the allowed error bound. In the processing of an internal node, the farther child is visited only if its distance from the query point is less than the distance to the closest point so far, divided by $`(1+ϵ)`$. Arya et al. show the correctness of this procedure. They also show how to generalize the search algorithm for computing the $`k`$ closest neighbors, either exactly or approximately.
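A sketch of this recursive procedure, combining the exact search above with the $`(1+ϵ)`$ pruning rule (ours; it uses the Euclidean metric, squared distances, and the distance to the splitting plane as the lower bound on the distance to the farther child's cell, a simplification of the cell-distance bookkeeping discussed below):

```cpp
#include <cstddef>

struct Result {
    const Point* p = nullptr;  // closest data point seen so far
    double dist2 = 1e300;      // its squared distance from the query
};

double sq_dist(const Point& a, const Point& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double t = a[i] - b[i];
        s += t * t;
    }
    return s;
}

// (1+eps)-approximate nearest neighbor query against the KdNode tree
// built above. The farther child is visited only if its cell could
// contain a point closer than best/(1+eps); since squared distances
// are compared, the factor appears squared.
void ann_search(const KdNode* n, const Point& q, double eps, Result& best) {
    if (n->cut_dim < 0) {                      // leaf: scan the bucket
        for (const Point& p : n->bucket) {
            double d2 = sq_dist(p, q);
            if (d2 < best.dist2) { best.dist2 = d2; best.p = &p; }
        }
        return;
    }
    double diff = q[n->cut_dim] - n->cut_val;
    const KdNode* nearer  = diff <= 0 ? n->left : n->right;
    const KdNode* farther = diff <= 0 ? n->right : n->left;
    ann_search(nearer, q, eps, best);          // closer child first
    double f = (1.0 + eps) * (1.0 + eps);
    if (diff * diff * f < best.dist2)          // can the far side matter?
        ann_search(farther, q, eps, best);
}
```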
Arya and Mount proposed a number of improvements to this basic algorithm. The first is called incremental distance calculation. This technique can be applied for any Minkowski metric. In addition to storing the splitting hyperplane, each internal node of the tree also stores the extents of the associated cell projected orthogonally onto its splitting axis. The algorithm does not maintain true distances, but instead (for the Euclidean metric) maintains squared distances. When the algorithm arrives at an internal node, it knows the squared distance from the query point to the associated cell. They show that, in constant time (independent of dimension), it is possible to use this information to compute the squared distance to each of the children's cells. They also presented a method called priority search, which uses a heap to visit the leaves of the tree in increasing order of distance from the query point, rather than in the recursive order dictated by the structure of the tree. Yet another improvement is a well-known technique from nearest neighbor searching, called partial distance calculation. When computing the distance between the query point and a data point, if the accumulated sum of squared components ever exceeds the squared distance to the nearest point so far, then the distance computation is terminated.
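Of these techniques, partial distance calculation is the easiest to show in code (a sketch under the same conventions as above):

```cpp
// Partial distance calculation: abandon the sum of squared coordinate
// differences as soon as it reaches the best squared distance so far.
// Returning 'cap' on early exit is safe: callers only test d2 < cap.
double sq_dist_partial(const Point& a, const Point& b, double cap) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        double t = a[i] - b[i];
        s += t * t;
        if (s >= cap) return cap;   // cannot beat the current best
    }
    return s;
}
```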
As observed by Arya et al., two properties are important for any data structure for approximate nearest neighbor searching based on spatial decomposition:
* The height of the tree should be $`O(\mathrm{log}n)`$, where $`n`$ is the number of data points.
* The leaf cells of the tree should have bounded aspect ratio, meaning that the ratio of the longest to the shortest side of each leaf cell is bounded above by a constant.
Given these two constraints, they show that approximate nearest neighbor searching (using priority search) can be performed in $`O(\mathrm{log}n)`$ time from a data structure of size $`O(dn)`$. The hidden constant factors in time grow as $`O(d/ϵ)^d`$. Unfortunately, achieving both of these properties does not always seem to be possible for kd-trees. This is particularly true when the point distribution is highly clustered. Arya et al. present a somewhat more complex data structure called a balanced box-decomposition tree, which does satisfy these properties. The extra complexity seems to be necessary in order to prove their theoretical results, and they show empirically that it is important when data sets are highly clustered in low-dimensional subspaces. An interesting practical question is whether there exist methods that retain the essential simplicity of the kd-tree, while providing practical efficiency for clustered data distributions (at least in most instances, if not in the worst case).
Bounded aspect ratio is a sufficient condition for efficiency, but it is not necessary. The more precise condition needed for their results to apply is called the packing constraint. Define a ball of radius $`r`$ to be the locus of points of $`R^d`$ that are within distance $`r`$ of some given point, according to the chosen metric. The packing constraint says that the number of large cells that intersect any such ball is bounded:
The number of leaf cells of size at least $`s`$ that intersect an open ball of radius $`r>0`$ is bounded above by a function of $`r/s`$ and $`d`$, but independent of $`n`$.
If a tree has cells of bounded aspect ratio, then it satisfies the packing constraint. Arya et al. show that if this assumption is satisfied, then priority search runs in time proportional to the depth of the tree times the number of cells of maximum side length $`rϵ/d`$ that intersect a ball of radius $`r`$. By the packing constraint this number of cells depends only on the dimension and $`ϵ`$. The main shortcoming of the standard splitting method is that it may result in cells of unbounded aspect ratio.
## 3 Splitting Methods
In this section we describe the splitting methods that are considered in our experiments. As mentioned in the introduction, we implemented two splitting methods, in addition to the standard kd-tree splitting method. We describe them further in each of the following sections.
### 3.1 Sliding-Midpoint
The sliding-midpoint splitting method was first introduced in the ANN library for approximate nearest neighbor searching. It was motivated as a remedy for the deficiencies of two other splitting methods: the standard kd-tree splitting method and the midpoint splitting method. To understand the problem, suppose that the data points are highly clustered along a few dimensions but vary greatly along some of the others (see Fig. 1). The standard kd-tree splitting method will repeatedly split along the dimension in which the data points have the greatest spread, leading to many cells with high aspect ratio. A nearest neighbor query near the center of the bounding square would visit a large number of these cells. In contrast, the midpoint splitting method bisects the cell along its longest side, irrespective of the point distribution. (If there are ties for the longest side, then the tie is broken in favor of the dimension along which the points have the highest spread.) This method produces cells of aspect ratio at most 2, but it may produce leaf cells that contain no data points. The size of the resulting tree may be very large when the data distribution is highly clustered and the dimension is high.
The sliding midpoint method works as follows. It first attempts to perform a midpoint split, by the same method described above. If data points lie on both sides of the splitting plane, then the algorithm acts exactly as it would for the midpoint split. However, if a trivial split were to result (in which all the points lie to one side of the splitting plane), then it attempts to avoid this by “sliding” the splitting plane towards the points until it encounters the first data point. More formally, if the split is performed orthogonal to the $`i`$th coordinate, and all the data points have $`i`$th coordinates that are larger than that of the splitting plane, then the splitting plane is translated so that its $`i`$th coordinate equals the minimum $`i`$th coordinate among all the data points. Let $`p_1`$ be a point achieving this minimum. Then the points are partitioned with $`p_1`$ in one part of the partition, and all the other data points in the other part. A symmetrical rule is applied if the points all have $`i`$th coordinates smaller than that of the splitting plane.
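In code, the split selection reads as follows (a sketch under the same conventions as the earlier listings; ANN also breaks ties on the longest side in favor of the side with the greater point spread, which we omit):

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Sliding-midpoint split of the cell [lo, hi]; 'pts' holds the data
// points inside the cell (assumed nonempty). Start with a midpoint
// cut of the longest side; if every point lies on one side, slide
// the plane to the nearest point so that neither subcell is empty.
std::pair<int, double> sliding_midpoint(const std::vector<Point>& pts,
                                        const Point& lo, const Point& hi) {
    int dim = 0;
    for (std::size_t i = 1; i < lo.size(); ++i)       // longest side
        if (hi[i] - lo[i] > hi[dim] - lo[dim]) dim = (int)i;
    double cut = 0.5 * (lo[dim] + hi[dim]);
    double min_c = pts[0][dim], max_c = pts[0][dim];
    for (const Point& p : pts) {
        min_c = std::min(min_c, p[dim]);
        max_c = std::max(max_c, p[dim]);
    }
    if (min_c > cut) cut = min_c;  // all points to the right: slide right
    if (max_c < cut) cut = max_c;  // all points to the left: slide left
    return {dim, cut};             // partition: coordinate <= cut goes left
}
```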
This method cannot result in any trivial splits, implying that the resulting tree has size $`O(n)`$. Thus it avoids the problem of large trees, which the midpoint splitting method is susceptible to. Because there is no guarantee that the point partition is balanced, the depth of the resulting tree may exceed $`O(\mathrm{log}n)`$. However, based on our empirical observations, the height of this tree rarely exceeds the height of the standard kd-tree by more than a small constant factor.
It is possible to generate a cell $`C`$ of very high aspect ratio, but it can be shown that if this happens, then $`C`$ is necessarily adjacent to a sibling cell $`C^{}`$ that is fat along the same dimension in which $`C`$ is skinny. As a result, it is not possible to generate arbitrarily long sequences of skinny cells, as the standard splitting method can.
The sliding midpoint method can be implemented with little more effort than the standard kd-tree splitting method. But, because the depth of the tree is not necessarily $`O(\mathrm{log}n)`$, the $`O(n\mathrm{log}n)`$ construction time bound does not necessarily hold. There are more complex algorithms for constructing the tree that run in $`O(n\mathrm{log}n)`$ time. However, in spite of these shortcomings, we will see that the sliding-midpoint method can perform quite well for highly clustered data sets.
### 3.2 Minimum-Ambiguity
All of the splitting methods described so far are based solely on the data points. This may be quite reasonable in applications where data points and query points come from the same distribution. However, this is not always the case. (For example, a common use of nearest neighbor searching is in iterative clustering algorithms, such as the k-means algorithm. Depending on the starting conditions of the algorithm, the data points and query points may be quite different from one another.) If the two distributions are different, then it is reasonable that preprocessing should be informed of the expected distribution of the query points, as well as the data points. One way to do this is to provide the preprocessing phase with the data points and a collection of sample query points, called training points. The goal is to compute a data structure which is efficient, assuming that the query distribution is well-represented by the training points. The idea of presenting a training set of query points is not new. For example, Clarkson described a nearest neighbor algorithm that uses this concept.
The minimum-ambiguity splitting method is given a set $`S`$ of data points and a training set $`T`$ of sample query points. For each query point $`qT`$, we compute the nearest neighbor of $`q`$ in $`S`$ as part of the preprocessing. For each such $`q`$, let $`r(q)`$ denote the distance to the nearest point in $`S`$. Let $`b(q)`$ denote the nearest neighbor ball, that is, the locus of points (in the current metric) whose distance from $`q`$ is at most $`r(q)`$. As observed earlier, the search algorithm visits every leaf cell that overlaps $`b(q)`$ (and it may generally visit a large set of leaves).
Given any kd-tree, let $`C(q)`$ denote the set of leaf cells of the tree that overlap $`b(q)`$. This suggests the following optimization problem: given point sets $`S`$ and $`T`$, determine a hierarchical subdivision of $`S`$ of size $`O(n)`$ such that the total overlap, $`\sum _{q\in T}|C(q)|`$, is minimized. This is analogous to the packing constraint, but applied only to the nearest neighbor balls of the training set. We do not know how to solve this problem optimally, but we devised the minimum-ambiguity splitting method as a greedy heuristic.
To motivate our method, we introduce a model for nearest neighbor searching in terms of a pruning process on a bipartite graph. Given a cell (i.e., a $`d`$-dimensional rectangle) $`C`$, let $`S_C`$ denote the subset of data points lying within the cell and let $`T_C`$ denote the subset of training points whose nearest neighbor balls intersect $`C`$. Define the candidate graph for $`C`$ to be the bipartite graph on the vertex set $`S\cup T`$, whose edge set is $`S_C\times T_C`$. Intuitively, each edge $`(p,q)`$ in this graph reflects the possibility that data point $`p`$ is a candidate to be the nearest neighbor of training point $`q`$. Observe that if a cell $`C`$ intersects $`b(q)`$ and contains $`k`$ data points, then $`q`$ has degree $`k`$ in the candidate graph for $`C`$. Since it is our goal to minimize the number of leaf cells that overlap $`b(q)`$, and assuming that each leaf node contains at least one data point, a reasonable heuristic for minimizing the number of overlapping leaf cells is to minimize the average degree of the vertices in the candidate graph. This is equivalent to minimizing the total number of edges in the graph. This method is similar to techniques used in the design of linear classifiers based on impurity functions.
Here is how the minimum-ambiguity method selects the splitting hyperplane. If $`|S_C|\le 1`$, then from our desire to generate a tree of size $`O(n)`$, we do not subdivide the cell any further. Otherwise, let $`H`$ be some orthogonal hyperplane that cuts $`C`$ into subcells $`C_1`$ and $`C_2`$. Let $`S_1`$ and $`S_2`$ be the resulting partition of the data points into these respective subcells, and let $`T_1`$ and $`T_2`$ denote the subsets of training points whose nearest neighbor balls intersect $`C_1`$ and $`C_2`$, respectively. Notice that these subsets are not necessarily disjoint. We assign to each such hyperplane $`H`$ a score, equal to the sum of the numbers of edges in the candidate graphs of $`C_1`$ and $`C_2`$. In particular,
$$\text{Score}(H)=|S_1||T_1|+|S_2||T_2|.$$
Intuitively a small score is good, because it means that the average ambiguity in the choice of nearest neighbors is small. The minimum-ambiguity splitting method selects the orthogonal hyperplane $`H`$ that produces a nontrivial partition of the data points and has the smallest score. For example, in Fig. 2 on the left, we show the score of the standard kd-tree splitting method. However, because of the higher concentration of training points on the right side of the cell, the splitting plane shown on the right actually has a lower score, and hence is preferred by the minimum-ambiguity method. In this way the minimum-ambiguity method tailors the structure of the tree to the distribution of the training points.
The minimum-ambiguity split is computed as follows. At each stage it is given the current cell $`C`$, and the subsets $`S_C`$ and $`T_C`$. For each coordinate axis, it projects the points of $`S_C`$ and the extreme coordinates of the balls $`b(q)`$ for each $`qT_C`$ orthogonally onto this axis. It then sweeps through this set of projections, from the leftmost to the rightmost data point projection, updating the score as it goes. It selects the hyperplane with the minimum score. If there are ties for the smallest score, then it selects the hyperplane that most evenly partitions the data points.
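The following sketch (ours) scores the candidate cuts along a single axis. For clarity it re-counts the ball intersections for every candidate cut, giving $`O(|S_C||T_C|)`$ time per axis, whereas the sweep described above updates the counts incrementally in a single pass; the tie-breaking rule favoring even partitions is also omitted:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Each training ball b(q), projected onto the axis, is an interval
// [first, second]. A ball reaches the left subcell iff first < cut,
// and the right subcell iff second > cut (the two subsets of balls
// may overlap). 'proj' holds the data point projections.
using Interval = std::pair<double, double>;

std::pair<double, long> best_cut_on_axis(std::vector<double> proj,
                                         const std::vector<Interval>& balls) {
    std::sort(proj.begin(), proj.end());
    double best_cut = 0.0;
    long best_score = -1;
    // Candidate cuts lie between consecutive data projections, so the
    // resulting partition of the data points is always nontrivial.
    for (std::size_t i = 0; i + 1 < proj.size(); ++i) {
        double cut = 0.5 * (proj[i] + proj[i + 1]);
        long s1 = (long)(i + 1);                   // |S1|
        long s2 = (long)(proj.size() - i - 1);     // |S2|
        long t1 = 0, t2 = 0;
        for (const Interval& b : balls) {
            if (b.first < cut)  ++t1;              // ball reaches C1
            if (b.second > cut) ++t2;              // ball reaches C2
        }
        long score = s1 * t1 + s2 * t2;            // |S1||T1| + |S2||T2|
        if (best_score < 0 || score < best_score) {
            best_score = score;
            best_cut = cut;
        }
    }
    return {best_cut, best_score};
}
```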
## 4 Empirical Results
We implemented a kd-tree in C++ using the three splitting methods: the standard method, sliding-midpoint, and minimum-ambiguity. For each splitting method we generated a number of data point sets, query point sets, and (for minimum-ambiguity) training point sets. The tree structure was based on the same basic tree structure used in ANN. The experiments were run on a Sparc Ultra, running Solaris 5.5, and the program was compiled with the g++ compiler. We measured a number of statistics for the tree, including its size, depth, and the average aspect ratio of its cells.
Queries were answered using priority search. For each group of queries we computed a number of statistics including CPU time, number of nodes visited in the tree, number of floating-point operations, number of distance calculations, and number of coordinate accesses. In our plots we show only the number of nodes in the tree visited during the search. We chose this parameter because it is a machine-independent quantity, and was closely correlated with CPU time. In most of our experiments, nearest neighbors were computed approximately.
For each experiment we fixed the number of data points, the dimension, the data-point distribution, and the error bound $`ϵ`$. In the case of the minimum-ambiguity method, the query distribution was also fixed, and some number of training points were generated. Then a kd-tree was generated by applying the appropriate splitting method. For the standard and sliding-midpoint methods the tree construction does not depend on $`ϵ`$, implying that the same tree may be used for different error bounds. For the minimum-ambiguity tree, the error bound was used in computing the tree. In particular, the nearest neighbors of each of the training points were computed only approximately. Furthermore, the nearest neighbor balls $`b(q)`$ for each training point $`q`$ were shrunk in size by dividing their radius by the factor $`1+ϵ`$, since this is the size of the ball used in the search algorithm.
For each tree generated, we generated some number of query points. The query-point distribution was not always the same as the data distribution, but it was always the same as the training-point distribution. Then the nearest neighbor search was performed on these query points, and the results were averaged over all queries. Although we ran a wide variety of experiments, for the sake of conciseness we show only a few representative cases. For all of the experiments described here, we used 4000 data points in dimension 20 for each data set, and 12,000 queries were run for each data set. For the minimum-ambiguity method, the number of training points was 36,000.
The value of $`ϵ`$ was either 1, 2, or 3 (allowing the reported point to be a factor of 2, 3, or 4 further away than the true nearest neighbor, respectively). We computed the exact nearest neighbors off-line to gauge the algorithm’s actual performance. The reason for allowing approximation errors is that in moderate to high dimensions, the search times are typically smaller by orders of magnitude. Also, the errors that were observed are typically quite a bit smaller on average than these bounds (see Fig. 3). Note that the average error committed was typically only about $`1/30`$ of the allowable error. The maximum error was computed for each run of 12,000 query points, and then averaged over all runs. Even this maximum error was only around $`1/4`$ of the allowed error. Some variation (on the order of a factor of 2) was observed depending on the choice of search tree and point distributions.
### 4.1 Distributions Tested
The distributions that were used in our experiments are listed below. The clustered-gaussian distribution is designed to model point sets that are clustered, but in which each cluster is full-dimensional. The clustered-orthogonal-ellipsoid and clustered-ellipsoid distributions are both explicitly designed to model point distributions which are clustered, and the clusters themselves are flat in the sense that the points lie close to a lower dimensional subspace. In the first case the ellipsoids are aligned with the axes, and in the other case they are more arbitrarily oriented.
Uniform: Each coordinate was chosen uniformly from the interval $`[-1,1]`$.
Clustered-gaussian: The distribution is given a number of color classes $`c`$ and a standard deviation $`\sigma `$. We generated $`c`$ points from the uniform distribution, which form the cluster centers. Each point is then generated from a gaussian distribution centered at a randomly chosen cluster center, with standard deviation $`\sigma `$.
Clustered-orthogonal-ellipsoids: The distribution can be viewed as a degenerate clustered-gaussian distribution where the standard deviation of each coordinate is chosen from one of two classes of distributions, one with a large standard deviation and the other with a small standard deviation. The distribution is specified by the number of color classes $`c`$ and four additional parameters:
* $`d_{\mathrm{max}}`$ is the maximum number of fat dimensions.
* $`\sigma _{\mathrm{lo}}`$ and $`\sigma _{\mathrm{hi}}`$ are the minimum and maximum bounds on the large standard deviations, respectively (for the fat sides of the ellipsoid).
* $`\sigma _{\mathrm{thin}}`$ is the small standard deviation (for the thin sides of the ellipsoid).
Cluster centers are chosen as in the clustered-gaussian distribution. For each color class, a random number $`d^{\prime }`$ between $`1`$ and $`d_{\mathrm{max}}`$ is generated, indicating the number of fat dimensions. Then $`d^{\prime }`$ dimensions are chosen at random to be the fat dimensions of the ellipsoid. For each fat dimension the standard deviation for this coordinate is chosen uniformly from $`[\sigma _{\mathrm{lo}},\sigma _{\mathrm{hi}}]`$, and for each thin dimension the standard deviation is set to $`\sigma _{\mathrm{thin}}`$. The points are then generated by the same process as clustered-gaussian, but using these various standard deviations.
Clustered-ellipsoids: This distribution is the result of applying $`d`$ random rotation transformations to the points of each cluster about its center. Each cluster is rotated by a different set of rotations. Each rotation is through a uniformly distributed angle in the range $`[0,\pi /2]`$ with respect to two randomly chosen dimensions.
In our experiments involving both clustered-orthogonal-ellipsoids and clustered-ellipsoids, we set the number of clusters to 5, $`d_{\mathrm{max}}=10`$, $`\sigma _{\mathrm{lo}}=\sigma _{\mathrm{hi}}=0.3`$, and $`\sigma _{\mathrm{thin}}`$ varied from $`0.03`$ to $`0.3`$. Thus, for low values of $`\sigma _{\mathrm{thin}}`$ the ellipsoids are relatively flat, and for high values this becomes equivalent to a clustered-gaussian distribution with standard deviation 0.3. A sketch of the generation procedure for the clustered-orthogonal-ellipsoids case is given below.
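The sketch that follows is our own reading of the description above (parameter names follow the text; the original generator is not public, so details may differ):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Clustered-orthogonal-ellipsoids: c cluster centers uniform in
// [-1,1]^d; each cluster picks d' in [1, d_max] fat dimensions with
// standard deviations drawn from [sigma_lo, sigma_hi], the remaining
// dimensions being thin with standard deviation sigma_thin.
std::vector<std::vector<double>> gen_clustered_ortho_ellipsoids(
        int n, int d, int c, int d_max,
        double sigma_lo, double sigma_hi, double sigma_thin,
        std::mt19937& rng) {
    std::uniform_real_distribution<double> unit(-1.0, 1.0);
    std::uniform_real_distribution<double> fat_sd(sigma_lo, sigma_hi);
    std::vector<std::vector<double>> center(c, std::vector<double>(d));
    std::vector<std::vector<double>> sigma(c, std::vector<double>(d, sigma_thin));
    for (int j = 0; j < c; ++j) {
        for (int i = 0; i < d; ++i) center[j][i] = unit(rng);
        int d_fat = std::uniform_int_distribution<int>(1, d_max)(rng);
        std::vector<int> dims(d);
        for (int i = 0; i < d; ++i) dims[i] = i;
        std::shuffle(dims.begin(), dims.end(), rng);   // pick the fat dims
        for (int k = 0; k < d_fat; ++k) sigma[j][dims[k]] = fat_sd(rng);
    }
    std::vector<std::vector<double>> pts(n, std::vector<double>(d));
    std::uniform_int_distribution<int> pick(0, c - 1);
    for (int t = 0; t < n; ++t) {
        int j = pick(rng);                             // random cluster
        for (int i = 0; i < d; ++i)
            pts[t][i] = std::normal_distribution<double>(
                            center[j][i], sigma[j][i])(rng);
    }
    return pts;
}
```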
### 4.2 Data and Query Points from the Same Distribution
For our first set of experiments, we considered data and query points from the same clustered distributions. We considered both the clustered-orthogonal-ellipsoids and clustered-ellipsoids distributions, in Figs. 4 and 5, respectively. The three different graphs are for (a) $`ϵ=1`$, (b) $`ϵ=2`$, and (c) $`ϵ=3`$. In all three cases the same cluster centers were used. Note that the graphs do not share the same $`y`$-range, and in particular the search algorithm performs significantly faster as $`ϵ`$ increases.
Observe that all of the splitting methods perform better when $`\sigma _{\mathrm{thin}}`$ is small, indicating that to some extent they exploit the fact that the data points are clustered in lower dimensional subspaces. The relative differences in running time were most noticeable for small values of $`\sigma _{\mathrm{thin}}`$, and tended to diminish for larger values.
Although the minimum-ambiguity splitting method was designed for dealing with data and query points from different distributions, we were somewhat surprised that it actually performed the best of the three methods in these cases. For small values of $`\sigma _{\mathrm{thin}}`$ (when low-dimensional clustering is strongest) its average running time (measured as the number of nodes visited in the tree) was typically 30-50% lower than the standard splitting method, and over 50% lower than the sliding-midpoint method. The standard splitting method typically performed better than the sliding-midpoint method, but the difference decreased to being insignificant (and sometimes a bit worse) as $`\sigma _{\mathrm{thin}}`$ increased.
### 4.3 Data and Query Points from Different Distributions
For our second set of experiments, we considered data points from a clustered distribution and query points from a uniform distribution. This particular choice was motivated by the situation shown in Fig. 1, where the standard splitting method can produce cells with high aspect ratios. For the data points we considered both the clustered-orthogonal-ellipsoids and clustered-ellipsoid distributions in Figs. 6 and 7, respectively. As before, the three different graphs are for (a) $`ϵ=1`$, (b) $`ϵ=2`$, and (c) $`ϵ=3`$. Again, note that the graphs do not share the same $`y`$-range.
Unlike the previous experiment, overall running times did not vary greatly with $`\sigma _{\mathrm{thin}}`$. Sometimes running times increased moderately and other times they decreased moderately as a function of $`\sigma _{\mathrm{thin}}`$. However, the standard splitting method consistently performed much worse than the other two methods. For the smallest values of $`\sigma _{\mathrm{thin}}`$, there was around a 5-to-1 difference in running time between the standard method and sliding-midpoint.
For larger values of $`ϵ`$ (2 and 3) the performance of sliding-midpoint and minimum-ambiguity was very similar, with sliding-midpoint having a slight edge. It may seem somewhat surprising that minimum-ambiguity performed significantly worse (a factor of 2 to 3 worse) than sliding-midpoint, since minimum-ambiguity was designed for exactly this situation, where there is a difference between data and query distributions. This may be due to limitations of the heuristic itself, or to the limited size of the training set. However, it should be kept in mind that sliding-midpoint was specially designed to produce large empty cells in the uncluttered regions outside the clusters (recall Fig. 1).
### 4.4 Construction Times
The results of the previous sections suggest that the minimum-ambiguity splitting method produces trees that can answer queries efficiently for a variety of point and data distributions. Its main drawback is the amount of time that it takes to build the tree. Both the standard and sliding-midpoint trees can be built quite efficiently, in time $`O(nh)`$, where $`n`$ is the number of data points and $`h`$ is the height of the tree. The standard kd-tree has $`O(\mathrm{log}n)`$ height, and while the sliding-midpoint tree need not have $`O(\mathrm{log}n)`$ height, this seems to be true for many point distributions. For the 4000-point data sets in dimension 20, both of these trees could be constructed in under 10 CPU seconds.
However, the construction time for the minimum-ambiguity tree is quite a bit higher. It can be argued that the time to construct the tree is roughly (within logarithmic factors) proportional to the time to compute the (approximate) nearest neighbors of all the training points. In order to construct the tree, first the nearest neighbors of each of the training points must be computed. This is done in an auxiliary nearest neighbor tree, e.g., one built using the standard or sliding-midpoint method. Then, determining the splitting hyperplane for each cell of the minimum-ambiguity tree requires consideration of all the nearest neighbor balls that overlap the current cell. However, in order to compute the nearest neighbors of the training points, each point whose nearest neighbor ball overlaps the cell would have to visit the cell in any case.
Since we used 9 times the number of data points as training points, it is easy to see that the minimum-ambiguity tree will take much longer to compute than the other two trees. Notice that when $`ϵ>0`$, we compute nearest neighbors approximately, and so this can offer an improvement in construction time. In Fig. 8 we present the construction time for the minimum-ambiguity tree for various combinations of data and training distributions. Observe that the construction times are considerably greater than those for the other two methods (which were under 10 CPU seconds), and that the construction time is significantly faster for higher values of $`ϵ`$.
## 5 Conclusions
In this paper we have presented an empirical analysis of two new splitting methods for kd-trees: sliding-midpoint and minimum-ambiguity. Both of these methods were designed to remedy some of the deficiencies of the standard kd-tree splitting method, with respect to data distributions that are highly clustered in low-dimensional subspaces. Both methods were shown to be considerably faster than the standard splitting method in answering queries when data points were drawn from a clustered distribution and query points were drawn from a uniform distribution. The minimum-ambiguity method performed better when both data and query points were drawn from a clustered distribution. But this method has a considerably higher construction time. The sliding-midpoint method, while easy to build, seems to perform sometimes better and sometimes worse than the standard kd-tree splitting method.
The enhanced performance of the minimum-ambiguity method suggests that even within the realm of kd-trees, there may be significant improvements to be made by fine-tuning the structure of the tree to the data and query distributions. However, because of its high construction cost, it would be nice to determine whether there are other heuristics that would lead to faster construction times. This suggests the intriguing possibility of search trees whose structure adapts dynamically to the structure of queries over time. The sliding-midpoint method raises the hope that it may be possible to devise a simple and efficiently computable splitting method that performs well across a wider variety of distributions than the standard splitting method.
## 6 Acknowledgements
We would like to thank Sunil Arya for helpful discussions on the performance of the sliding-midpoint method.
# The shape of the blue/UV continuum of B3-VLA radio quasars: Dependence on redshift, blue/UV luminosity and radio power
## 1 Introduction
The study of the shape of the optical/UV continuum of quasars is an essential tool to test the different emission models invoked to explain the observed radiation, and to understand their characteristic parameters. Whereas the overall quasar spectrum from the infrared to the X-ray has approximately a power-law form, $`S_\nu \propto \nu ^\alpha `$, with spectral index about $`-1`$, in the optical/UV region there is a bump on top of the power-law continuum known as the big blue bump (Elvis et al. 1987; Sanders et al. 1989). The big blue bump is generally interpreted with a two-component model, consisting of the underlying power law and black-body-like emission from an accretion disc (hereafter AD) around a massive black hole (Malkan 1983, Czerny & Elvis 1987). The determination of the shape and strength of the ionizing UV continuum is essential to constrain the accretion disc parameters, for the modelling of the broad line region of quasars, and for other aspects such as the determination of the $`k`$-corrections for quasars, which affect the derivation of their luminosity functions.
Empirically, the shape of the optical/UV continuum of quasars is generally parameterized by a power law, although this is a local approximation, the overall shape being more complicated. Measurements of the optical/UV continuum shape at fixed rest-frame wavelengths have been obtained by O’Brien et al. (1988), and more recently by Natali et al. (1998), on the basis of low-resolution spectroscopic data. O’Brien et al. selected their quasars from the availability of $`IUE`$ spectra and published X-ray fluxes, and found a mean spectral index of $`-0.67\pm 0.05`$ for the range 1900–1215 Å. The spectral index distributions of radio-loud and radio-quiet quasars appeared to be similar. The authors found a small hardening with redshift, with $`\alpha `$ ranging from $`-0.87\pm 0.07`$ for $`z<1.1`$ to $`-0.46\pm 0.05`$ for $`z>1.1`$, and a trend with luminosity at 1450 Å, in the sense that the more luminous quasars had harder spectra. From a joint regression analysis including spectral index, redshift and luminosity, the authors concluded that the dominant correlation was between $`\alpha `$ and $`z`$, and that the trend between $`\alpha `$ and luminosity was due to the correlation of both variables with redshift. Natali et al. (1998) used a complete sample of optically selected radio-quiet quasars, and found that the spectra in the range 5500–1200 Å showed an abrupt change around the so-called “3000 Å emission feature” (Wills, Netzer & Wills 1985), with $`\alpha \simeq -0.15`$ for $`\lambda >3000`$ Å and $`\alpha \simeq -0.65`$ for $`\lambda <3000`$ Å.
Francis et al. (1991) analysed the composite spectrum obtained from optical spectra of around 700 quasars from the LBQS (Hewett et al. 1991). The individual spectra yielded slopes ranging from 1 to –1.5 and a median value of $`\alpha `$ = –0.3 in the range 5800–1300 Å. Francis et al. fitted to the composite spectrum a curved underlying continuum (a cubic spline), corresponding to $`\alpha \simeq 0.0`$ for $`\lambda >3000`$ Å and $`\alpha \simeq `$ –0.6 below this limit. The authors noted that the high-luminosity quasars had harder spectra than the low-luminosity ones.
The samples studied by Francis et al. (1991) and Natali et al. (1998) select quasars on the basis of the presence of UV excess (although the LBQS includes additional independent selection criteria), and are therefore biased against quasars with red colours. A similar bias is likely present in the data of O’Brien et al. (1988), who selected the quasars by availability of UV and X-ray measurements. In this paper we present a study of the shape of the blue/UV spectrum of quasars from the B3-VLA Quasar Sample (Vigotti et al. 1997), selected in the radio at 408 MHz. Optical selection biases are significantly lessened in this sample, allowing for a thorough investigation of the shape of the blue/UV spectrum. Since the B3-VLA quasars were selected at a low frequency, where steep-spectrum, extended emission dominates, the sample minimizes the inclusion of core-dominated quasars, for which an additional component of relativistically beamed optical synchrotron emission is likely present (Browne & Wright 1985).
The analysis of the blue/UV continuum of the B3-VLA quasars is based on $`UBVR`$ photometry of around 70 sources, with redshifts in the range 0.4–2.8. The quasar SEDs are studied both individually and from created composite spectra, and the dependence of the blue/UV slope and luminosity on redshift and radio properties is analysed.
Near infrared $`K`$-band imaging of 52 quasars in this work was presented by Carballo et al. (1998). For sixteen sources the images revealed extended emission, most likely related to starlight emission from the host galaxy.
## 2 The sample
The B3 survey (Ficarra, Grueff & Tomasetti 1985) catalogues sources down to a radio flux-density limit $`S_{408\mathrm{MHz}}=0.1`$ Jy. From the B3, Vigotti et al. (1989) selected 1050 radio sources, the B3-VLA Sample, consisting of five complete subsamples in the flux density ranges $`0.1-0.2`$ Jy, $`0.2-0.4`$ Jy, $`0.4-0.8`$ Jy, $`0.8-1.6`$ Jy and $`S_{408}>1.6`$ Jy, all mapped at the VLA. Candidate quasar identifications (objects of any colour appearing starlike to the eye) were sought on the POSS-I red plates down to the plate limit, $`R\simeq 20`$, yielding a sample of 172 quasar candidates. Optical spectroscopy was obtained for all the candidates and 125 were confirmed as quasars, forming the B3-VLA Quasar Sample. The sample covers the redshift range $`z=0.4-2.8`$, with mean redshift $`z=1.16`$, and radio powers $`P_{408\mathrm{MHz}}=10^{33}-10^{36}\mathrm{erg}\,\mathrm{s}^{-1}\,\mathrm{Hz}^{-1}`$ (adopting $`H_0=50\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`\mathrm{\Omega }_0=1`$). The sample of quasar candidates and the final B3-VLA Quasar Sample are described in Vigotti et al. (1997).
The optical incompleteness of the radio quasar sample, i.e. the fraction of quasars fainter than the optical limit of $`R=20`$ mag, depends on the radio flux. From $`R`$-band photometry of the complete radio quasar sample presented by Willot et al. (1998), we infer that the fraction of radio quasars with $`R>20`$ is 35 per cent for the flux range $`0.4<S_{408}<1.0`$ Jy (mean flux density 0.66 Jy) and 45 per cent for 0.2–0.4 Jy (mean flux density 0.28 Jy). The average $`BR`$ colour of these quasars is 0.6 (similar to the value we found for the B3-VLA quasars in Section 4.2). For the quasars with $`S_{408}>1`$ Jy we can use the distribution of $`B`$-band magnitudes obtained by Serjeant et al. (1998) for the Molonglo-APM Quasar Survey (average flux density $`S_{408}\sim 1.7`$ Jy). Adopting as limit $`B=20.6`$, equivalent to $`R=20`$ for typical radio quasar colours, the fraction of quasars with $`S_{408}>1`$ Jy fainter than this limit is around 15 per cent.
The B3-VLA Quasar Sample contains 64 quasars with $`S>0.8`$ Jy and average flux density of 2.25 Jy, 32 with $`S=0.40.8`$ Jy and average flux density of 0.53 Jy and 29 quasars with $`S=0.10.4`$ Jy and average flux density of 0.22 Jy. Adopting for the three groups optical incompleteness of 15 per cent, 25 per cent and 45 per cent respectively, the estimated optical completeness for the total sample would be around 75 per cent, improving to 80 per cent for $`S>0.4`$ Jy.
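One simple reading of this estimate is a weighted average over the three flux-density groups; a minimal check of the arithmetic (ours, not the paper's):

```python
# Weight each flux-density group by its size and adopted incompleteness.
counts = [64, 32, 29]                 # S>0.8 Jy, 0.4-0.8 Jy, 0.1-0.4 Jy
incompleteness = [0.15, 0.25, 0.45]
total = sum(counts)
complete = sum(n * (1 - f) for n, f in zip(counts, incompleteness)) / total
print(f"total sample: {complete:.0%}")          # ~75 per cent

bright = sum(counts[:2])
complete_bright = sum(n * (1 - f) for n, f in
                      zip(counts[:2], incompleteness[:2])) / bright
print(f"S > 0.4 Jy: {complete_bright:.0%}")     # ~80 per cent
```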
The present work is based on $`UBVR`$ photometry of a representative group of 73 quasars from the B3-VLA Quasar Sample. The quasars were selected to have right ascensions in the range 7<sup>h</sup>-15<sup>h</sup> and comprise the 44 with $`S_{408}>0.8`$ Jy, the 23 with $`0.4<S_{408}<0.8`$ Jy, the 4 with $`0.3<S_{408}<0.4`$ Jy and 2 out of the 15 with $`0.1<S_{408}<0.3`$ Jy. The sample is equivalent to the Quasar Sample (except for the R.A. constraint) for $`S>0.4`$ Jy, and includes only a few quasars with fainter radio fluxes, although generally close to that limit. We therefore estimate the optical completeness of the studied sample to be about 80 per cent.
## 3 Observations, data reduction and photometric calibration
The optical images were obtained on 1997 February 5–8, using a 1024$`\times `$1024 TEK CCD at the Cassegrain focus of the 1.0-m JKT on La Palma (Spain), and on March 11–12, using a 2048$`\times `$2048 SITe CCD at the Cassegrain focus of the 2.2-m telescope on Calar Alto (Spain). $`U,B,V`$ and $`R`$ standard Johnson filters were used and the pixel scale was 0.33 arcsec pixel<sup>-1</sup> for the JKT and 0.533 arcsec pixel<sup>-1</sup> for the 2.2-m telescope. The field of view was $`\sim `$6’$`\times `$6’ for all the images. A standard observing procedure was used. Bias frames and twilight sky flat fields were obtained for each night and filter. Faint photometric standards (Landolt 1992 and references therein) were observed each night in order to obtain the flux calibration. The exposure time was different for each quasar and filter, ranging from 60 s to 1200 s, and was set according to the red and blue APM magnitudes from POSS-I. The seeing varied from $`\sim `$1.8 arcsec, on February 5, 6, 7 and March 11, to $`\sim `$2.2 arcsec on February 8 and March 12. The data were reduced using standard tasks in the iraf software package <sup>1</sup><sup>1</sup>1iraf is distributed by the NOAO, which is operated by AURA, Inc., under contract to the NSF. Flat-field correction was accurate to better than $`0.5`$ per cent. Long-exposure images were automatically cleaned of cosmic-ray hits.
Instrumental magnitudes for the standard stars were measured in circular apertures of $`13`$ arcsec diameter. The flux calibration was obtained, as a first step, assuming no colour effects in any of the bands. For all the nights at both telescopes this assumption proved correct for the $`B,V`$ and $`R`$ filters. In the $`U`$ band, for two of the nights, we took colour effects into account, introducing the colour term $`k^{\prime \prime }(UB)`$. Table 1 lists the results of the photometric calibration, showing the extinction and colour coefficients for each night/filter together with the rms of the fits.
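Schematically, the calibration relation has the familiar form below. This is a sketch with illustrative names and a common sign convention, not the paper's notation; the actual fitted coefficients are those of Table 1:

```python
# Schematic photometric calibration (illustrative names and a common
# sign convention, not the paper's notation):
#   m_cal = m_inst - k_ext * X + k_col * (U - B) + ZP
# where X is the airmass; the colour term k_col is only non-zero for
# the U band on two of the nights.
def calibrate(m_inst, airmass, zeropoint, k_ext, k_col=0.0, colour_ub=0.0):
    """Standard-system magnitude from an instrumental magnitude."""
    return m_inst - k_ext * airmass + k_col * colour_ub + zeropoint
```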
All nights were photometric except the second half of February 5th, which was affected by clouds. During this part of the night we only obtained data for 3 objects in two bands. The first part of the night was photometric and the calibration data listed in Table 1 correspond to this part of the night.
## 4 Optical photometry of the quasars
### 4.1 $`UBVR`$ magnitudes
Quasar magnitudes were measured on the images using the same apertures as for the photometric standards. In one case (B3 0724+396) a nearby star was included within the aperture and was subtracted by modelling it with a two-dimensional PSF. Measurement errors ranged from less than 0.01 mag to 0.9 mag, and the typical values were lower than 0.15 mag (85 per cent of the data). Apparent $`UBVR`$ magnitudes, not corrected for Galactic extinction, and their errors are listed in Table 2, along with the observing dates and the colour excess $`E(BV)`$ towards each object, obtained from the $`N`$(Hi) maps by Burstein & Heiles (1982). The redshift and radio power of the sources are also listed in the table.
Some of the quasars (11) were observed with the same filter twice, on the same night or in the two runs. In these cases the difference in magnitudes never exceeded 0.2 mag, and the quoted magnitudes correspond to the average value. For three quasars (B3 0836+426, B3 0859+470 and B3 0906+430) the $`U`$ and $`V`$-band data were obtained during the second, cloudy part of February 5th. We have made a crude estimate of these magnitudes on the basis of the photometric standards measured over the same period, and assigned them an error of $`\sim 0.3`$ mag. These objects, however, have not been used for any further analysis.
For the 70 quasars observed under good photometric conditions rest-frame SEDs were built, using the zero-magnitude flux densities from Johnson (1966; $`U`$, $`B`$ and $`R`$ bands) and Wamsteker (1981; $`V`$ band). For the Galactic extinction correction the Rieke & Lebofsky (1985) reddening law was used. The corrections were lower than 0.05 mag for 80 per cent of the sources. For the remaining sources the corrections for the $`U`$ band, which is the most affected, ranged from 0.05 to 0.56 mag, with a median value of 0.19. Figure 1 shows the SEDs of the 70 quasars, plotted in order of increasing redshift. The SEDs will always be referred to the log$`S_\nu `$–log$`\nu `$ plane.
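The construction of each SED point can be sketched as follows (our illustration, not the paper's code; the zero-magnitude fluxes to use are those of Johnson 1966 for $`U`$, $`B`$, $`R`$ and Wamsteker 1981 for $`V`$):

```python
# Minimal sketch of the SED construction: magnitude to dereddened flux
# density, and the rest-frame shift used for the log S_nu - log nu plane.
def band_flux(mag, s0_jy, a_lambda=0.0):
    """Dereddened flux density [Jy]: S = S0 * 10**(-0.4*(mag - A_lambda)),
    with A_lambda from the adopted reddening law and E(B-V)."""
    return s0_jy * 10.0 ** (-0.4 * (mag - a_lambda))

def rest_frame_nu(nu_obs_hz, z):
    """Rest-frame frequency of an observed band for a quasar at redshift z."""
    return nu_obs_hz * (1.0 + z)
```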
Three sources in Table 2 (B3 0955+387, B3 1312+393 and B3 1444+417; see also Fig. 1) have magnitude errors larger than 0.3 mag in several bands. In addition, three sources show abrupt changes in their SEDs (B3 1317+380, B3 0726+431 and B3 0922+425), probably related to intrinsic variability or to unexplained errors. These six sources will not be considered in the analysis presented in the following sections.
Histograms of the $`U`$, $`B`$, $`V`$ and $`R`$ magnitudes (corrected for Galactic extinction) for the 64 quasars with good photometry are presented in Figure 2. The $`R`$-band histogram shows that most of the quasars have $`R`$ magnitudes well below the POSS-I limit of $`R\sim 20`$ used for the quasar identifications, confirming that this limit guarantees a rather high optical completeness.
We have indicated in Fig. 1 the wavelengths of the strongest quasar emission lines in the studied range: H$`\beta `$, Mgii$`\lambda `$2798, Ciii\]$`\lambda `$1909, Civ$`\lambda `$1549 and Ly$`\alpha `$. For each spectral point in the figure we have plotted the covered FWHM, assuming the standard FWHMs (observer frame) for the $`U`$, $`B`$, $`V`$ and $`R`$ bands from Johnson (1966). We infer from the figure that about 45 per cent of the spectral points do not include, within half the filter bandpass, the central wavelength of any of the emission lines listed above. Broader emission features, such as the Feii and Feii+Balmer emission bumps in the ranges 2250–2650 Å and 3100–3800 Å, could also affect the measured broad-band fluxes.
Table 3 presents average equivalent width (EW) measurements of these emission lines for several quasar samples from the literature, which can be used to estimate their average contribution to the broad-band fluxes. The listed EWs include the values obtained by Baker & Hunstead (1995) for Molonglo radio quasars, a sample selected at 408 MHz, with $`S_{408}>0.95`$ Jy, and therefore appropriate for comparison with our sample (although it does not list the EWs for the Feii bumps). Also listed are the EW measurements for radio-loud quasars given by Zheng et al. (1997, from a UV composite which extends to 3000 Å) and the EWs for optically-selected LBQS quasars from Francis et al. (1991) and P.J. Green (1998). Francis et al. (1991) mention that the height of the bumps derived from their work would be decreased if the regions around 1700 Å and 2650 Å were taken as continuum, although at the cost of a strongly curved continuum. Green (1998) uses as continuum the regions 2645–2700 Å, 3020–3100 Å and other wavelengths around 4430 and 4750 Å, and finds that the Feii+Balmer bump in the range 3100–3800 Å is absent in his composite. The EW for Feii 2400 from Green (1998) is very similar to that measured by Zheng et al. (1997).
Baker & Hunstead (1995) obtain qualitative estimates of the contribution of the Feii bumps for different radio morphologies from the height of the composite spectra relative to a power-law continuum from $`\sim `$2000 to $`\sim `$5000 Å. The authors conclude that the Feii bumps are absent in lobe-dominated and compact-steep-spectrum quasars, but rather strong in core-dominated quasars, which resemble optically-selected quasars in various line properties. The continuum of optically selected quasars is known to curve at around 3000 Å (Francis et al. 1991, Natali et al. 1998), and the strength of the bumps obtained by Baker & Hunstead would be significantly decreased if the underlying continuum were allowed to curve. Boroson & R.F. Green (1992) had previously found that flat-spectrum quasars have stronger Feii 4434–4684 Å emission than steep-spectrum ones, and that, as a whole, radio-loud quasars have fainter Feii emission than radio-quiet ones. The latter result was also reported by Cristiani & Vio (1990); Feii 2400 is the only Feii bump revealed in their composite spectrum for radio-loud quasars, and the feature, almost absent, is less pronounced than in the composite for radio-quiet quasars. Zheng et al. (1997) derive however a lower EW for Feii 2400 for radio-quiet quasars (22 Å versus 38 Å), illustrating the uncertainties in the contribution of these bumps.
Adopting as EWs for the Civ, Ciii\] and Mgii lines the values from the Molonglo sample, we infer the largest contribution to the broad-band fluxes for the Civ line, amounting on average to 25 per cent when the line is included in the $`U`$, $`B`$ or $`V`$ band. For the Ciii\] line the average contribution is below 7 per cent, and for Mgii it is in the range 6–11 per cent. As EW for Feii 2400 Å we adopted the value for the radio quasar sample of Zheng et al. (1997). The EWs for Civ, Ciii\] and Mgii by Zheng et al. (1997) are in fact very similar to those from the Molonglo quasars, and the radio sample is also very similar to ours in terms of optical luminosity. From the average EW of 38 Å the inferred contribution to the broad-band fluxes is 5–9 per cent. The Feii+Balmer $`\sim `$3400 Å feature lies outside the spectral region covered by the composite of Zheng et al., and is absent in the composites of Green (1998) and Cristiani & Vio (1990). Adopting half the EW from Francis et al. (1991), i.e. around 38 Å, its expected contribution would be below 6 per cent when included in the $`V`$ or $`R`$ bands.
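These fractional contributions follow from a simple bandpass argument; a minimal sketch (ours), where the filter FWHMs are approximate Johnson (1966) values and the EW of 60 Å in the example is an assumed, illustrative value rather than one taken from Table 3:

```python
# The fractional contribution of a line of rest-frame equivalent width
# EW to a broad-band flux is roughly EW*(1+z) / FWHM_filter.
FWHM = {"U": 680.0, "B": 980.0, "V": 890.0, "R": 2200.0}  # [angstrom], approx.

def line_contribution(ew_rest, z, band):
    """Approximate fractional line contribution to the band flux."""
    return ew_rest * (1.0 + z) / FWHM[band]

# e.g. CIV with an assumed EW of 60 A entering the U band at z ~ 1.5:
print(f"{line_contribution(60.0, 1.5, 'U'):.0%}")   # of order 20 per cent
```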
### 4.2 Optical broad-band colours of the quasars
The distribution of the $`UR`$, $`BR`$ and $`UV`$ colours as a function of redshift is shown in Figure 3. The broad-band colours do not show a clear variation with redshift; if anything, there is a blueing of the $`UV`$ colour with redshift up to $`z`$ around 2.5. The mean colours derived for these quasars (indicated in the figure with dotted lines) are $`UR=0.08`$, $`BR=0.64`$ and $`UV=0.38`$, with dispersions 0.45, 0.27 and 0.42 respectively, and correspond to observed spectral indices $`\alpha _{\mathrm{obs},UR}=0.50`$, $`\alpha _{\mathrm{obs},BR}=0.39`$ and $`\alpha _{\mathrm{obs},UV}=0.75`$, with dispersions 0.64, 0.53 and 0.96. From the comparison of the $`BR`$ and $`UV`$ spectral indices a trend is found in the sense that the SED is steeper at higher frequencies, in agreement with the results of Natali et al. (1998).
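For completeness, the conversion from a broad-band colour to an observed spectral index can be sketched as follows (our illustration; the effective wavelengths and zero points in the example are representative Johnson-system values, not necessarily those adopted in this work):

```python
import numpy as np

def colour_to_alpha(colour, lam1_a, lam2_a, s01_jy, s02_jy):
    """Observed spectral index (S_nu ~ nu**alpha) implied by the colour
    m1 - m2 of two filters with effective wavelengths lam1 < lam2
    [angstrom] and zero-magnitude flux densities s01, s02 [Jy]."""
    log_nu_ratio = np.log10(lam2_a / lam1_a)        # nu1/nu2 = lam2/lam1
    return (2.5 * np.log10(s01_jy / s02_jy) - colour) / (2.5 * log_nu_ratio)

# Illustrative B-R call with representative zero points (assumed values):
alpha_br = colour_to_alpha(0.64, 4400.0, 7000.0, 4260.0, 3080.0)
```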
## 5 Spectral energy distribution of the individual sources
In this section we discuss the shape of the individual SEDs of the B3 quasars. The SEDs of the 61 sources with available photometry in the four bands were fitted through $`\chi ^2`$ minimization, using a power-law model and a quadratic model. The second model was chosen as a simple representation of SEDs curved in the log$`S_\nu `$–log$`\nu `$ plane. A fit was accepted if the probability $`Q`$ that the $`\chi ^2`$ should exceed the particular measured value by chance was higher than 1 per cent. The best-fit spectral indices and their errors are listed in Table 4. Forty-one quasars have acceptable fits as power laws. Thirty-nine of these also have acceptable fits as quadratics, and in 6 cases the quadratic model gives a significant improvement of the fit (higher than 85 per cent significance using an $`F`$-test). These six cases are labelled as p$`\rightarrow `$q in the last column of Table 4. There are cases where the shape of the SED is clearly curved, but a power-law model is acceptable due to large errors. Similarly, SEDs which resemble power laws to the eye and have small photometric errors may not have acceptable fits for this model. The two sources with acceptable fits as power laws but not as quadratics are B3 0849+424 and B3 1144+402, and both have poor power-law fits ($`Q<0.02`$). The SEDs of these sources could be contaminated by emission lines in the $`U`$ and $`V`$ bands.
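The fitting and model-comparison steps are standard; a minimal sketch (ours, not the original code) in the log$`S_\nu `$–log$`\nu `$ plane:

```python
# Weighted chi-square fits of a power law (deg=1) and a quadratic (deg=2),
# the fit probability Q, and an F-test for the quadratic improvement.
import numpy as np
from scipy import stats

def fit_sed(lognu, logs, sigma, deg):
    coeff = np.polyfit(lognu, logs, deg, w=1.0 / sigma)
    resid = (logs - np.polyval(coeff, lognu)) / sigma
    chi2 = float(np.sum(resid ** 2))
    dof = len(logs) - (deg + 1)
    q = stats.chi2.sf(chi2, dof)      # accept the fit if q > 0.01
    return coeff, chi2, dof, q

def f_test(chi2_pl, dof_pl, chi2_quad, dof_quad):
    """p-value of the F-test; the improvement is significant
    at the (1 - p) level."""
    f = ((chi2_pl - chi2_quad) / (dof_pl - dof_quad)) / (chi2_quad / dof_quad)
    return stats.f.sf(f, dof_pl - dof_quad, dof_quad)
```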
Ten additional quasars without acceptable power-law fits can be fitted with quadratics. The remaining 10 sources do not have acceptable fits with either model (indicated with a hyphen in the last column of Table 4). For some of these sources the lack of good fits could be due to the presence of emission peaks in their SEDs related to contamination by emission lines. We note for instance that contamination by Civ could be related to the maxima in the $`U`$ band for B3 1020+400, B3 1148+387 and B3 1240+381, and in the $`V`$ band for B3 0724+396.
Figure 4 shows the distribution of spectral indices versus redshift for the 61 sources. The slopes were obtained at fixed observed wavelengths, and therefore correspond to different rest frames (from 2600–5000 Å at $`z=0.4`$ to 1000–1900 Å at $`z=2.8`$). The slopes show a wide range, with values from 0.4 to –1.7, regardless of whether the fits are formally acceptable or not. The mean and dispersion for the total sample are –0.39 and 0.38 respectively (–0.41 and 0.40 for the sources with formally acceptable power-law fits). The comparison of these values with those obtained from two-band colours (Sect. 4.2) shows the best agreement for $`BR`$, with $`\alpha _{\mathrm{obs},BR}=0.39`$ and standard deviation 0.53. The reason is that the $`B`$ and $`R`$-band errors are low compared to those for the $`U`$ band, and the power-law fits weight the data by their errors.
Although a mean spectral index was obtained for the sample, the distribution of slopes is asymmetric, showing a tail towards steep indices (see Fig. 4 and the histogram in Figure 5). Considering the total sample of 61 sources, the distribution of spectral indices for $`\alpha >0.9`$ is well represented by a gaussian with a mean of –0.21 and a dispersion of 0.34.
## 6 Spectral energy distribution from the composite SEDs
### 6.1 Normalized composite SEDs
The shape of the SED of the quasars can be analysed through a “composite spectrum”, in which the individual SEDs are merged in the rest frame, adopting a specific criterion for the scaling of the fluxes. The coverage of the optical photometry and the range of redshifts of the quasars allow us to study their spectrum in the range $`\sim `$1300–4500 Å. We chose to normalize the individual SEDs to have the same flux density at a fixed wavelength $`\lambda _\mathrm{n}`$ within the observed range, and the flux density at $`\lambda _\mathrm{n}`$ was obtained by linear interpolation between the two nearest data points. With this procedure a single normalization cannot be applied to the whole sample (the coverage for the lowest redshift quasars is 2500–5000 Å and for the highest redshift ones 1000–2000 Å), and we therefore obtained composite SEDs for different normalization wavelengths. The selected $`\lambda _\mathrm{n}`$ were 3800, 3500, 3200, 2400, 2200 and 2000 Å, and the corresponding normalized SEDs are shown in Figure 6. The use of a broad range of values for $`\lambda _\mathrm{n}`$ is appropriate to analyse the possible dependence of the SED shape on $`\lambda _\mathrm{n}`$. The small separation between the $`\lambda _\mathrm{n}`$ values, typically around 300 Å, allows for a large overlap between the composites and for their comparison in a continuous way. A normalization around 2800 Å was not considered, to avoid the Mgii emission line. The fluxes at the normalization $`\lambda _\mathrm{n}`$=2000 Å could be contaminated by the weaker Ciii\]$`\lambda `$1909 line, and those at the normalizations 2400, 3200 and 3500 Å by the emission bumps at 2250–2650 and 3100–3800 Å. However, we have seen in Sect. 4 that the expected contributions of these features are weak, typically below 10 per cent. A 10 per cent contribution corresponds to a vertical shift in log$`S_\nu `$ (Figure 6) of 0.04, which is clearly lower than the dispersion of the spectral points in the composites.
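The normalization step can be sketched as follows (our illustration of the procedure just described):

```python
# Each rest-frame SED is scaled so that its linearly interpolated flux
# density at lambda_n equals unity; SEDs not covering lambda_n are skipped.
import numpy as np

def normalize_sed(lam_rest, s_nu, lam_n):
    """Return the SED scaled to S_nu(lambda_n) = 1, or None if lambda_n
    falls outside the covered wavelength range."""
    order = np.argsort(lam_rest)
    lam, s = np.asarray(lam_rest)[order], np.asarray(s_nu)[order]
    if not (lam[0] <= lam_n <= lam[-1]):
        return None
    s_at_n = np.interp(lam_n, lam, s)   # linear interpolation between
    return s / s_at_n                   # the two nearest data points
```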
Some of the spectral points clearly deviating in the composite SEDs correspond to the two reddest quasars, B3 0918+381 and B3 1339+472 ($`\alpha \sim 1.6`$). The extreme red colours of these two sources are uncommon in the sample, as can be seen from Fig. 3, where the symbols for the two sources appear underlined. Both sources show curved spectra in the log$`S_\nu `$–log$`\lambda `$ plane. B3 0918+381 has an acceptable power-law fit, but only because of the large magnitude error in the $`U`$ band. In Fig. 6 the normalized SEDs of these sources are indicated over the remaining spectral points, showing their striking discrepancy relative to the average composite SEDs. These two sources with peculiar SEDs will not be considered in the discussion in this and the next section.
The overall shape of the composite spectra is found to be very uniform for the different normalizations. The dispersion of the spectral points is artificially reduced near the wavelength $`\lambda _\mathrm{n}`$. The spectral points appear to trace predominantly the continuum; only the Civ$`\lambda `$1549 line appears prominent in the composites, as an “emission feature”. No other emission bumps/lines are apparent from the composites, in agreement with the low contributions expected (Sect. 4.1). The Civ feature is revealed in the three composites covering its wavelength and arises from the $`U`$-band data of various quasars, most of which were noted in Sect. 5 as likely contaminated by this line. The emission feature appears in the broad-band composite displaced to the red, peaking at around 1610 Å. The displacement is around five times smaller than the rest-frame width over which the line is detected.
A general trend in Fig. 6, apparent especially from panels (b) to (d), is a steepening of the spectrum from long to short wavelengths, occurring at around 3000 Å. Above this wavelength the SED appears to be rather flat. This trend is less evident in panel (a), probably because of the small number of spectral points at $`\lambda <3000`$ Å. The trend is weak in panel (e) and practically disappears in panel (f), but here the number of points with $`\lambda >3000`$ Å is small. Panels (d) and (e) appear to show that the SED flattens again below 2000 Å. This trend practically disappears in panel (f), in spite of the large number of points, so this flattening is in principle less secure than the steepening at 3000 Å.
### 6.2 Power-law fits
We have obtained power-law fits of the composite SEDs in the regions above and below the 3000 Å break. The selected ranges were 4500–3000 Å (referred to as “blue”) and 2600–1700 Å (referred to as “UV”), which roughly correspond to the regions limited by H$`\beta `$ and Mgii$`\lambda `$2798, and by Mgii$`\lambda `$2798 and Civ$`\lambda `$1549. These spectral points were taken as representing the continuum, since the contamination by emission lines/bumps in this range is expected to be very weak. The results of the fits, obtained by least-squares minimization, are listed in Table 5, including for each composite the slopes and their errors, the number of spectral points used and the redshift range of the quasars. Slopes with an asterisk correspond to lower quality fits, for which the spectral points do not cover the whole wavelength range. Figure 7 shows the spectral index values as a function of $`\lambda _\mathrm{n}`$, illustrating their variation between the different composites. The four composite SEDs for which power laws were fitted in both ranges show a steepening from low to high frequency. The differences $`\alpha _{\mathrm{blue}}\alpha _{\mathrm{UV}}`$ for these composites and their errors are listed in Table 5. In the following discussion only the good quality fits will be considered.
Concerning the 4500–3000 Å range, the spectral indices for all the normalizations are consistent within their errors. The first three normalizations, with $`\lambda _\mathrm{n}`$=3800, 3500 and 3200 Å, show a better agreement with each other. Averaging the spectral indices of the fits for $`\lambda _\mathrm{n}`$=3800, 3200 and 2400 Å (the fit for 3500 Å has a large overlap with those for 3200 and 3800 Å) we obtain $`\alpha _{\mathrm{blue}}=0.11\pm 0.16`$.
The high frequency fits again have best-fit slopes consistent with each other within the errors. For the high frequency normalizations at $`\lambda _\mathrm{n}`$=2400, 2200 and 2000 Å, the spectral indices are very similar, although the ranges of spectral points used do not overlap widely. The spectral index for the low frequency normalization at $`\lambda _\mathrm{n}`$=3200 Å is steeper. Averaging the slopes obtained for $`\lambda _\mathrm{n}`$=3200, 2400, 2200 and 2000 Å we find $`\alpha _{\mathrm{UV}}=0.66\pm 0.15`$. Considering the average measured values of $`\alpha _{\mathrm{blue}}`$ and $`\alpha _{\mathrm{UV}}`$ and their errors, we find a steepening towards high frequencies of $`\alpha _{\mathrm{blue}}\alpha _{\mathrm{UV}}=0.77\pm 0.22`$.
Although the expected contribution of emission lines/bumps in the ranges used for the power-law fits is low, it is interesting to analyse the possibility that the slope change is artificially produced by contamination from the Feii bumps at 3100–3400 and 2250–2650 Å, since these features would enhance the emission used for the power-law fits (4500–3000 and 2600–1700 Å) in the regions next to the break, producing a slope change in the observed sense. However, a power-law fit over the whole range from 1700 to 4500 Å for composites (b) to (d), where the break is most obviously detected, yields a clear excess emission only in the region 2650–3200 Å, i.e. between the two bumps. In fact some excess there could be due to Mgii, but if the Feii bumps were responsible for the slope change, the excess emission should extend over the Feii ranges 2250–2650 and 3100–3400 Å. Therefore, although some of the broad-band spectral points are expected to include contamination from emission lines and/or bumps, the break in the overall SED detected at $`\sim `$3000 Å is most likely related to an intrinsic change in the continuum of the quasars. A steepening of the quasar continuum at 3000 Å was reported by Natali et al. (1998) from spectra of optically selected quasars, and is also found in the composite spectrum of optically selected quasars by Francis et al. (1991). The slopes obtained by Natali et al. (1998) were $`\alpha _{\mathrm{blue}}\sim 0.15`$ for the range 2950–5500 Å and $`\alpha _{\mathrm{UV}}\sim 0.65`$ for the range 1400–3200 Å, in good agreement with our values.
### 6.3 Redshift dependence of the blue/UV continuum shape
A striking characteristic of the best-fit slopes in Table 5 and Fig. 7 (top panels) is that they remain roughly constant for the normalizations at either side of the break around 3000 Å, with the largest variation occurring when the normalization moves from one side of the break to the other. In particular, the change in $`\alpha _{\mathrm{UV}}`$ from $`\lambda _\mathrm{n}`$=3200 Å to lower values of $`\lambda _\mathrm{n}`$ is consistent with the description outlined in Section 6.1 of a possible flattening at around 2000 Å for the composites with $`\lambda _\mathrm{n}`$=2400 and 2200 Å. It is important to note that the change in normalization from $`\lambda _\mathrm{n}\sim 3200`$ Å to $`\lambda _\mathrm{n}\sim 2400`$ Å in our study implies a substantial change in the redshifts of the quasars whose spectral points enter the fits. For instance, whereas the high frequency fit for $`\lambda _\mathrm{n}`$=3200 Å comprises 47 spectral points with $`0.47<z<1.18`$, the next composite, with $`\lambda _\mathrm{n}`$=2400 Å, includes all but one of these points plus 50 additional spectral points with $`1.21<z<1.88`$. For the final composite, with $`\lambda _\mathrm{n}`$=2000 Å, only 23 out of the 80 spectral points used for the fit are in common with the fit for $`\lambda _\mathrm{n}`$=3200 Å. A similar variation of the spectral points/redshifts with $`\lambda _\mathrm{n}`$ occurs for the low frequency fits, although in this case the low redshift limit for $`\lambda _\mathrm{n}`$=3800, 3500 and 3200 Å is similar and the variations of spectral points/redshifts are smaller for all the normalizations (see Table 5). It is therefore interesting to analyse whether a dependence of the shape of the continuum spectrum on redshift is present.
From the results of the fits (Table 5, Fig. 7; see also Fig. 6) we decided to separate the quasar sample into a low redshift bin with $`z<1.2`$ and a high redshift bin with $`z>1.2`$, and to perform the power-law fitting for the two subsamples. With the selected limit, all the spectral points used for the normalizations with $`\lambda _\mathrm{n}>3000`$ Å correspond to the low redshift bin. The composites with $`\lambda _\mathrm{n}\le 2400`$ Å include spectral points from quasars in the two redshift bins. Figures 8 and 9 show the composite SEDs for the two quasar subsamples. The results of the power-law fits are given in Table 6 and the variation of the slopes as a function of $`\lambda _\mathrm{n}`$ is shown in Fig. 7.
Concerning the low redshift quasars, we now find a better agreement between the slopes of the low-frequency fits obtained for the normalizations on either side of the break. Averaging the values for the normalizations with $`\lambda _\mathrm{n}=3800`$, 3200 and 2400 Å we find $`\alpha _{\mathrm{blue}}=0.21\pm 0.16`$. For the high frequency fits there is also a better agreement in $`\alpha _{\mathrm{UV}}`$ between the normalizations at either side of the break. The agreement is due to the steepening of $`\alpha _{\mathrm{UV}}`$ for the normalizations with $`\lambda _\mathrm{n}\le 2400`$ Å when the high redshift quasars are excluded. The average value is now $`\alpha _{\mathrm{UV}}=0.87\pm 0.20`$. We also find a larger change from $`\alpha _{\mathrm{blue}}`$ to $`\alpha _{\mathrm{UV}}`$, with $`\alpha _{\mathrm{blue}}\alpha _{\mathrm{UV}}=1.08\pm 0.26`$. The larger steepening is evident from the comparison of the top and bottom panels of Fig. 7. The good agreement between the slopes obtained for the different normalizations indicates that the selection of $`\lambda _\mathrm{n}`$ does not affect the derived slopes.
For the high redshift quasars only the high frequency region can be studied (see Fig. 9). The average spectral index from the three normalizations is $`\alpha _{\mathrm{UV}}=0.48\pm 0.12`$. Comparing the values of $`\alpha _{\mathrm{UV}}`$ for low and high redshift quasars we find a difference $`\mathrm{\Delta }\alpha _{\mathrm{UV}}=0.39\pm 0.23`$. This difference, although weak, is larger than the error, and is clearly evident from the results in Table 6 and Fig. 7, and from the comparison of Figs. 8 and 9. The flattening below around 2000 Å noted in Sect. 6.1 for the high frequency normalizations is related to this hardening of $`\alpha _{\mathrm{UV}}`$ for high redshift quasars.
Our result of a dependence of the spectral index on redshift, in the sense that quasars at $`z>1.2`$ tend to be flatter, i.e. harder, than low redshift ones in the range 2600–1700 Å, is similar to that found by O’Brien et al. (1988) for an $`IUE`$-selected sample, with slopes in the region 1900–1215 Å ranging from –0.87 for $`z<1.1`$ to –0.46 for $`z>1.1`$. We note the remarkable agreement between these values and those obtained for our sample. This trend with redshift was not confirmed by Natali et al. (1998) for their sample of optically selected quasars, although their detection limit for a variation was around 0.4, comparable to the change in slope measured by O’Brien et al. and in this work.
A possible concern about our results is whether the 20 per cent incompleteness of the sample could produce a bias against the inclusion of red quasars at high redshifts. However, the distribution of $`R`$ magnitudes in the range $`z`$=0.5–2.5 is roughly constant, and does not suggest that the fraction of missed quasars ($`R>20`$) should be higher at the higher redshifts. On the other hand, 5 good B3-VLA quasar candidates excluded from the B3-VLA Quasar Sample were present on the blue POSS-I plate but not on the red POSS plate. Since the magnitude limit of the POSS blue plates is around 20.5–21, these quasars have $`BR`$ colours similar to or bluer than the average value for the quasars in the sample ($`BR=0.64`$, Sect. 4.2). Although this number of quasars is small, it illustrates the presence of blue or normal colours among the missed quasars. Since the incompleteness does not appear to be biased towards the exclusion of red quasars at high redshifts, the $`\alpha _{\mathrm{UV}}`$–$`z`$ trend is likely a real effect, not an artifact of incompleteness.
### 6.4 Relation of the blue/UV continuum shape to radio power
Figure 10 presents the $`P_{408}`$–$`z`$ diagram for the 73 quasars observed for this work. In agreement with the expectations for flux-limited samples, high redshift quasars tend to have higher radio powers than low redshift ones, although the B3-VLA Quasar Sample is not strictly flux-limited (see Sect. 2). Using Spearman’s correlation coefficient $`r_s`$, we find for the trend $`r_s`$=0.52, with a significance level $`P>99.999`$ per cent. In order to check whether the $`\alpha _{\mathrm{UV}}`$–$`z`$ trend could arise from an intrinsic $`\alpha _{\mathrm{UV}}`$–$`P_{408}`$ correlation combined with the $`P_{408}`$–$`z`$ trend, we performed an analysis similar to the one presented in sections 6.2 and 6.3, obtaining separate power-law fits for a “high-radio-power subsample” and a “low-radio-power subsample”. Instead of considering several composites, the dependence on radio power was analysed using only the composite with $`\lambda _\mathrm{n}`$=2400 Å. This composite includes the largest number of quasars, and its redshift distribution comprises low as well as high redshift ones, with a median $`z`$ of 1.15, similar to that of the whole sample.
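The rank correlations quoted in this and the following sections are standard Spearman tests; a minimal sketch (ours, with illustrative names):

```python
from scipy.stats import spearmanr

def spearman_trend(x, y):
    """Spearman r_s and the significance level in per cent, as quoted."""
    rs, p = spearmanr(x, y)
    return rs, 100.0 * (1.0 - p)
```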
The composite with $`\lambda _\mathrm{n}`$=2400 Å includes 54 quasars (excluding B3 0918+381). The median $`P_{408}`$ is $`10^{34.85}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> and this value was used as the dividing limit between low and high radio power. The low-power quasars have a mean redshift of 1.00 and a mean radio power of $`10^{34.48}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>; the same parameters for the high-power quasars are 1.35 and $`10^{35.19}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>. The derived spectral indices $`\alpha _{\mathrm{UV}}`$ for the two subsamples are listed in Table 7. The table includes for comparison the $`\alpha _{\mathrm{UV}}`$ values obtained for all the quasars and for the subsamples separated by redshift, for the same composite. The third column of the table gives $`\mathrm{\Delta }\alpha _{\mathrm{UV}}`$ and its error for each pair of subsamples with a varying parameter ($`z`$ or $`P_{408}`$), and the last column gives the probability that the difference in slope is statistically significant.
The largest difference in $`\alpha _{\mathrm{UV}}`$ and the highest significance are found for the subsamples separated by $`z`$, with $`\mathrm{\Delta }\alpha _{\mathrm{UV}}=0.54\pm 0.22`$ and 98.84 per cent significance ($`F`$-test). For the subsamples separated by $`P_{408}`$ the difference in $`\alpha _{\mathrm{UV}}`$ is smaller than the errors. The $`\alpha _{\mathrm{UV}}`$–$`z`$ trend is therefore unlikely to be a secondary correlation arising from an intrinsic $`\alpha _{\mathrm{UV}}`$–$`P_{408}`$ correlation in conjunction with the $`P_{408}`$–$`z`$ trend. Moreover, the latter trend is less significant for the group of 54 quasars in the $`\lambda _\mathrm{n}`$=2400 Å composite ($`r_s`$=0.44 and $`P`$=99.906 per cent). This weaker trend in fact explains the converse result, i.e. that the $`\alpha _{\mathrm{UV}}`$–$`z`$ trend in combination with the $`P_{408}`$–$`z`$ trend does not induce an $`\alpha _{\mathrm{UV}}`$–$`P_{408}`$ trend.
### 6.5 Relation of the blue/UV continuum shape to blue/UV luminosity
The possible relation between $`\alpha _{\mathrm{UV}}`$ and the blue/UV luminosity was also investigated. In order to construct each composite SED we had to interpolate the flux density at the normalization wavelength $`\lambda _\mathrm{n}`$, and therefore the monochromatic luminosity at $`\lambda _\mathrm{n}`$ could be obtained straightforwardly. The possible $`L_{\mathrm{blue}/\mathrm{UV}}`$–$`\alpha _{\mathrm{UV}}`$ trend was analysed in the same way as the $`P_{408}`$–$`\alpha _{\mathrm{UV}}`$ trend, measuring $`\alpha _{\mathrm{UV}}`$ in the composite with $`\lambda _\mathrm{n}=2400`$ Å for two subsamples separated by $`L_{2400}`$. The separation value was set at $`L_{2400}=10^{30.67}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>, the median value for the quasars in the composite. The values of $`\alpha _{\mathrm{UV}}`$ (Table 7) indicate a weak flattening with luminosity, only slightly larger than the errors, but significant according to the $`F`$-test, with $`P`$=89 per cent. The significance is lower than that found for the $`\alpha _{\mathrm{UV}}`$–$`z`$ trend.
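Converting the interpolated flux density at $`\lambda _\mathrm{n}`$ into $`L_{2400}`$ involves only the luminosity distance and the bandwidth factor; a minimal sketch (ours) for the adopted cosmology:

```python
# Monochromatic luminosity at the rest wavelength lambda_n, for
# H0 = 50 km/s/Mpc and Omega_0 = 1. Because S_nu is interpolated at a
# fixed rest-frame wavelength, only the 1/(1+z) bandwidth factor enters.
import numpy as np

C_KMS, H0, MPC_CM = 2.998e5, 50.0, 3.086e24

def lum_at_lambda_n(s_nu_jy, z):
    """L_nu [erg/s/Hz] from the interpolated flux density [Jy]."""
    dl = (2.0 * C_KMS / H0) * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z)) * MPC_CM
    return 4.0 * np.pi * dl**2 * s_nu_jy * 1.0e-23 / (1.0 + z)
```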
Figure 11 shows $`L_{2400}`$ versus $`z`$ for the quasars in the same composite. A clear correlation is found, in the sense that high redshift quasars are more luminous, similar to the trend with radio power. A Spearman test gives $`r_s`$=0.52 and a significance of 99.994 per cent. Therefore the three parameters $`\alpha _{\mathrm{UV}}`$, $`z`$ and $`L_{2400}`$ are correlated with each other, making it in principle impossible to determine which correlations are primary and which are secondary. The quasars with luminosities below the limit of $`10^{30.67}`$ erg s<sup>-1</sup> Hz<sup>-1</sup> have a mean redshift and luminosity of 1.01 and $`10^{30.27}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>; the same parameters for the luminous quasars are 1.33 and $`10^{31.02}`$ erg s<sup>-1</sup> Hz<sup>-1</sup>.
Wandel & Petrosian (1988) obtained predictions of the expected UV slope versus UV luminosity for various accretion disc (AD) models, spanning a range of values for the black hole mass, accretion rate and viscosity parameter. They found that the flattening of the spectrum for the more luminous quasars (or at higher redshifts) reported by O’Brien et al. (1988) could be explained as evolution along curves of constant black hole mass, with the accretion rate decreasing with time. The emitted spectrum of these models under the assumption of constant black hole mass becomes less luminous and steeper in the UV as the accretion rate decreases, and a decrease in accretion rate with time could explain the observed $`L_{\mathrm{UV}}`$–$`z`$ and $`\alpha _{\mathrm{UV}}`$–$`z`$ trends. Our data show the same flattening of $`\alpha _{\mathrm{UV}}`$ with $`z`$ and $`L_{\mathrm{UV}}`$, as well as an $`L_{\mathrm{UV}}`$–$`z`$ correlation, and therefore the same interpretation could apply. The $`\alpha _{\mathrm{UV}}`$–$`L_{\mathrm{UV}}`$ diagrams presented by Wandel & Petrosian (1988) correspond to a slope in the range 1500–1000 Å and the luminosity at 1450 Å. We have represented in these diagrams the average slope for low-redshift and high-redshift quasars (separated at $`z=1.2`$), using the composite with $`\lambda _n=2400`$ Å and the 2600–1700 Å slopes listed in Table 7. The luminosities were obtained by transforming the average values at 2400 Å to 1450 Å, adopting the measured slopes. Although we use a slope measured over a range different from that of the models, at least for the high redshift quasars the slope does not appear to change between the two ranges (Fig. 9). A straight line connecting the location of the two subsamples in the $`\alpha _{\mathrm{UV}}`$–$`L_{\mathrm{UV}}`$ plane would run approximately parallel to the curves of constant black hole mass, with $`\dot{m}`$ increasing in the same direction as the redshift. Therefore our results appear to be consistent with the interpretation outlined above, and the comparison with the models yields a roughly constant black hole mass, in the range $`M\sim 10^{8.28.5}M_{\odot }`$, and accretion rates (in units of the Eddington value) ranging from $`\dot{m}\sim 1`$ for the quasars with $`z`$=0.6–1.2 to $`\dot{m}\sim 10`$ for the quasars with $`z`$=1.2–1.9.
## 7 The correlation between blue/UV luminosity and radio power
Figure 12(a) shows $`L_{2400}`$ versus $`P_{408}`$ for the 54 quasars of the $`\lambda _\mathrm{n}`$=2400 Å composite. $`L_{2400}`$ and $`P_{408\mathrm{MHz}}`$ are strongly correlated, with $`r_s`$=0.62 and $`P>99.9999`$ per cent. The optical completeness of the sample is high and we do not expect important selection effects in the optical that could explain the correlation. The correlation could in principle be artificially induced by independent evolution of both parameters with redshift; in fact the quasars with high blue/UV luminosity tend to have large redshifts and the weaker ones tend to have low redshifts, and a similar behaviour occurs for the radio power. However, this alternative is ruled out by the redshift distribution in Figure 12(a), which shows that the half of the sample with lower redshifts follows the same $`L_{2400}`$–$`P_{408}`$ trend as the whole sample (with unrestricted $`z`$). The correlation is confirmed in the flux–flux plane in Figure 12(b), with $`r_s=0.47`$ and $`P=99.966`$ per cent, over two decades in both radio and blue/UV flux.
The most likely origin of the radio–optical correlation is therefore a direct link between the blue/UV emission of the quasar nucleus, related to the accretion process, and the radio synchrotron emission at 408 MHz. A similar correlation was reported by Serjeant et al. (1998) on the basis of complete samples of steep-spectrum radio quasars (from Molonglo, 3CR and the Bright Quasar Survey) in the redshift range 0.3–3. A linear fit to our data in the log$`L_{2400}`$–log$`P_{408\mathrm{MHz}}`$ plane yields $`L_{2400}P_{408\mathrm{MHz}}^{0.52\pm 0.10}`$, with a dispersion in blue/UV magnitudes of $`\sim `$0.9, for quasars in the redshift range 0.6–1.9. The best-fitting slope is very similar to that obtained by Serjeant et al. (1998), $`L_\mathrm{B}P_{408\mathrm{MHz}}^{0.6\pm 0.1}`$, although these authors give a larger dispersion, $`\sim `$1.6 mag.
In the light of an intrinsic correlation between radio power and blue/UV luminosity, we can investigate in more detail the relations between $`\alpha _{\mathrm{UV}}`$, $`z`$, $`P_{408}`$ and $`L_{2400}`$ reported in sections 6.4 and 6.5. The lack of an $`\alpha _{\mathrm{UV}}`$–$`P_{408}`$ trend is still acceptable, in spite of the $`\alpha _{\mathrm{UV}}`$–$`L_{2400}`$ correlation, since the latter is weak. The $`P_{408}`$–$`z`$ and $`L_{2400}`$–$`z`$ trends are likely related through the radio–optical correlation.
If the $`L_{2400}`$–$`z`$ correlation were predominantly intrinsic, rather than due to selection effects, the three correlations $`L_{2400}`$–$`z`$, $`L_{2400}`$–$`\alpha _{\mathrm{UV}}`$ and $`\alpha _{\mathrm{UV}}`$–$`z`$ could be explained with AD models with constant black hole mass and $`\dot{m}`$ increasing with redshift (see Section 6.5). The $`L_{2400}`$–$`P_{408}`$ correlation would then imply that the $`P_{408}`$–$`z`$ trend is, at least in part, intrinsic. If the $`P_{408}`$–$`z`$ trend were due only to a selection effect, preventing the detection of low power sources at high $`z`$, the $`L_{2400}`$–$`z`$ trend could also be the result of this effect, rather than of cosmic evolution. In this case one of the correlations $`\alpha _{\mathrm{UV}}`$–$`z`$ or $`\alpha _{\mathrm{UV}}`$–$`L_{2400}`$ could be induced by the other, in combination with the $`L_{2400}`$–$`z`$ trend. According to the models of Wandel & Petrosian (1988), an $`L_{2400}`$–$`\alpha _{\mathrm{UV}}`$ correlation in the observed sense would arise naturally if, at constant black hole mass, the parameter $`\dot{m}`$ varies. We cannot exclude, however, the reverse interpretation, i.e. that the intrinsic trend is $`\alpha _{\mathrm{UV}}`$–$`z`$ and $`\alpha _{\mathrm{UV}}`$ is not directly dependent on blue/UV luminosity or radio power. O’Brien et al. (1988) found from a joint regression analysis of the three parameters that the dominant correlation for their data was between $`\alpha _{\mathrm{UV}}`$ and $`z`$.
## 8 Conclusions
In this work we present optical photometry of a sample of radio quasars in the redshift range $`z`$=0.4–2.8, around 80 per cent complete, aimed at studying their spectral energy distribution in the blue/UV range. The $`UR`$, $`BR`$ and $`UV`$ colours do not vary substantially with redshift, and the average values are $`UR=0.08`$ with rms 0.45, $`BR=0.64`$ with rms 0.27 and $`UV=0.38`$ with rms 0.42. Two quasars, at $`z`$=0.50 and $`z`$=1.12, stand out as particularly red, with $`UR>1`$.
Power-law fits to the SEDs of the quasars with available photometry in the four bands yield spectral indices ranging from $`0.4`$ to $`1.7`$. The distribution of slopes is asymmetric, with a tail towards steep spectral indices. Excluding the sources in the tail, the distribution is well modelled as a gaussian with mean and dispersion of –0.21 and 0.34 respectively.
Composite SEDs normalized at various wavelengths were constructed from the sample (excluding the two red quasars), and the overall shape of the composites was found to be very similar for the different normalizations. The only emission feature revealed in the composites was the Civ$`\lambda `$1549 line. This result is in agreement with the expectations from the EW measurements of broad emission lines and Feii bumps of steep-spectrum radio quasars, which predict, for the bandwidths and redshifts in our work, the largest contribution for the Civ line. For other emission features, like Mgii, Ciii\] and the Feii bumps at 2250–2650 and 3100–3800 Å, the expected average contributions to the broad-band fluxes are below ten per cent. The composite SEDs show a clear steepening towards high frequencies at around 3000 Å, which cannot be explained by line contamination and most likely reflects a trend of the continuum. Parameterizing the SEDs as power laws we obtained an average $`\alpha _{\mathrm{blue}}=0.11\pm 0.16`$ for the range 4500–3000 Å and $`\alpha _{\mathrm{UV}}=0.66\pm 0.15`$ for 2600–1700 Å.
Separating the quasar sample into two redshift bins, with the cut at $`z=1.2`$, a better agreement was found between the values of $`\alpha _{\mathrm{blue}}`$ and $`\alpha _{\mathrm{UV}}`$ obtained for the different normalizations, and a hardening of $`\alpha _{\mathrm{UV}}`$ with redshift emerged, with $`\alpha _{\mathrm{UV}}=0.87\pm 0.20`$ for $`z<1.2`$ and $`\alpha _{\mathrm{UV}}=0.48\pm 0.12`$ for $`z>1.2`$. The average spectral index for the low redshift quasars is $`\alpha _{\mathrm{blue}}=0.21\pm 0.16`$; these quasars therefore show a steepening from the blue to the UV range of $`\alpha _{\mathrm{blue}}\alpha _{\mathrm{UV}}=1.08\pm 0.26`$. The composite SEDs of the high redshift quasars do not cover the region above 3000 Å, and the presence of a similar break for these quasars could therefore not be analysed from the present data.
Correlations were also found between the luminosity at 2400 Å and redshift, and between radio power and redshift. Separating the quasar sample into two bins of low and high luminosity at 2400 Å, a trend was found between $`\alpha _{\mathrm{UV}}`$ and $`L_{2400}`$ (89 per cent significant). The quasar sample also shows an intrinsic correlation between $`L_{2400}`$ and $`P_{408}`$ ($`r_s=0.62`$ and $`P>99.9999`$ per cent), similar to that recently reported by Serjeant et al. (1998) for a sample of steep-spectrum quasars.
The observed $`L_{2400}`$–$`\alpha _{\mathrm{UV}}`$, $`L_{2400}`$–$`z`$ and $`\alpha _{\mathrm{UV}}`$–$`z`$ trends appear to be consistent with the predictions of AD models for the case of constant black hole mass and accretion rate increasing with $`z`$ (Wandel & Petrosian 1988), with $`M_{\mathrm{BH}}\sim 10^{8.28.5}M_{\odot }`$, and accretion rates (in units of the Eddington value) ranging from $`\dot{m}\sim 1`$ for $`z`$=0.6–1.2 to $`\dot{m}\sim 10`$ for $`z`$=1.2–1.9. An alternative interpretation is that the $`P_{408}`$–$`z`$ and $`L_{2400}`$–$`z`$ trends arise predominantly from a selection effect, due to the radio flux limits of the sample. In this case one of the correlations, $`\alpha _{\mathrm{UV}}`$–$`z`$ or $`\alpha _{\mathrm{UV}}`$–$`L_{2400}`$, could be induced by the other, in combination with the $`L_{2400}`$–$`z`$ trend. The observed $`\alpha _{\mathrm{UV}}`$–$`L_{2400}`$ correlation is consistent with the predictions of the AD models of Wandel & Petrosian assuming a constant black hole mass and a range of accretion rates, although in this case the accretion rate need not be correlated with $`z`$. We cannot exclude, however, that the intrinsic correlation is $`\alpha _{\mathrm{UV}}`$–$`z`$, with $`\alpha _{\mathrm{UV}}`$ not being physically related to the blue/UV luminosity.
## 9 Acknowledgements
The 1.0-m JKT telescope is operated on the island of La Palma by the Isaac Newton Group at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The 2.2-m telescope, at the Centro Astronómico Hispano-Alemán, Calar Alto, is operated by the Max-Planck-Institute for Astronomy, Heidelberg, jointly with the Spanish Comisión Nacional de Astronomía. RC, JIGS and SFS acknowledge financial support from the DGES under project PB95-0122 and from the Comisión Mixta Caja Cantabria–Universidad de Cantabria. SFS thanks the FPU/FPI programme of the Spanish MEC for a fellowship.
# Backbending in Dy isotopes within the Projected Shell Model
## I Introduction
Backbending in the moment of inertia, which is a common phenomenon for many heavy and deformed nuclei, is understood as a consequence of the crossing between two rotational bands, one being the ground band and the other having a pair of aligned high-$`j`$ intruder particles. Its magnitude is related to the crossing angle between the crossing bands. A large crossing angle implies that the bands interact over a narrow angular momentum region, and a sharp backbending usually takes place. A small crossing angle implies an interaction which spreads its influence over a wide angular momentum region, therefore producing a smooth backbending; in some cases, only an upbending instead of a backbending is seen. The above physical picture of the band crossing can be clearly seen in the Projected Shell Model (PSM), which has been applied successfully to the description of the energy spectra and electromagnetic transitions in deformed nuclei.
High spin properties along the yrast line in the Dy isotope chain were studied by various cranked mean-field theories, with and without particle number projection. It was concluded in Ref. that, while the average behavior of the experimental data can be reproduced by cranking models, a microscopic interpretation of some observations, in particular those related to band crossings, requires investigations going beyond the mean field approximation. More recent theoretical work used the PSM to go beyond the mean field, and the results were compared with the spectrum, $`B(E2)`$ and gyromagnetic factor (g-factor) data existing at that time. In Ref., an improvement in the overall description of the Dy isotopes was found but, in many cases, the PSM seemed to exaggerate the backbending.
The main goal of the present article is to demonstrate that, by increasing the mean field deformation by 20% on average, both the backbending and the angular momentum dependence of the quadrupole moments in the Dy chain can be quantitatively described within the PSM. It will be shown that the increase in deformation is reflected in a rearrangement of the single particle states around the Fermi level that smoothes the backbending curves, in agreement with the experimental observations. Nevertheless, for the lighter isotopes, the oscillation in the recent g-factor data around angular momentum $`I=6\mathrm{\hbar }`$ and the reduction in $`B(E2)`$ values at high angular momenta cannot be reproduced by the PSM within the current model space.
The paper is arranged as follows: In section II, we outline the PSM. Interested readers are kindly referred to the review article and the corresponding computer codes in the literature. In section III, we introduce the mean field where our shell model space is truncated and discuss how it is related to our final results. Discussions on the spectra in terms of the backbending plots, the $`B(E2)`$’s and the g-factors for the <sup>154-164</sup>Dy isotopes are presented in sections IV, V and VI, respectively, where a large body of the experimental high spin data is compared systematically with the theory. Finally, a summary is given in section VII.
## II The Model
The PSM allows a many-body quantum mechanical description of atomic nuclei, while avoiding the extremely large Hilbert spaces commonly required in spherical shell model calculations. Assuming axial symmetry in the single particle potential, the PSM uses a multi-quasiparticle (qp) basis $`|\varphi _\kappa >`$ built on the Nilsson + BCS mean field. Its elegance and efficiency lie in the use of angular momentum projection, through the angular momentum projection operator $`\widehat{P}_{KK^{}}^I`$, to carry the multi-qp states from the intrinsic to the laboratory system.
The present type of PSM Hamiltonian is schematic, with three classes of residual interactions: quadrupole-quadrupole, monopole pairing and quadrupole pairing. This is basically the quadrupole plus pairing Hamiltonian and has been widely used in nuclear structure studies . The Hamiltonian can be expressed as
$$\widehat{H}=\widehat{H_0}\frac{\chi }{2}\underset{\mu }{}\widehat{Q}_\mu ^+\widehat{Q}_\mu G_M\widehat{P}^+\widehat{P}G_Q\widehat{P}_\mu ^+\widehat{P}_\mu ,$$
(1)
where $`\widehat{H_0}`$ is the spherical single particle Hamiltonian. The Hamiltonian (1) is diagonalized in the angular momentum projected multi-qp basis $`\{\widehat{P}_{MK}^I|\varphi _\kappa \}`$. The eigenvalue equation is:
$$\underset{\kappa ^{}K^{}}{}(H_{\kappa K\kappa ^{}K^{}}^IEN_{\kappa K\kappa ^{}K^{}}^I)F_{\kappa ^{}K^{}}^{IE}=0,$$
(2)
with the normalization condition
$$\underset{\kappa K\kappa ^{}K^{}}{}F_{\kappa K}^{IE}N_{\kappa K\kappa ^{}K^{}}^IF_{\kappa ^{}K^{}}^{IE^{}}=\delta _{EE^{}},$$
(3)
where
$$\begin{array}{cc}H_{\kappa K\kappa ^{}K^{}}^I=\hfill & \varphi _\kappa |\widehat{H}\widehat{P}_{KK^{}}^I|\varphi _\kappa ^{}\hfill \\ N_{\kappa K\kappa ^{}K^{}}^I=\hfill & \varphi _\kappa |\widehat{P}_{KK^{}}^I|\varphi _\kappa ^{}.\hfill \end{array}$$
(4)
After solving equation (2), we obtain the normalized eigenstates with energy $`E`$:
$$|\mathrm{\Psi }_{IM}^E=\underset{\kappa K}{}F_{\kappa K}^{IE}\widehat{P}_{MK}^I|\varphi _\kappa .$$
(5)
Electromagnetic properties of the nuclei are then computed by using the wave functions obtained above.
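Numerically, Eq. (2) with the overlap matrices of Eq. (4) is a generalized eigenvalue problem in a non-orthogonal basis. A minimal sketch of this step (ours, not the published codes), which also removes near-zero norm eigenvalues to avoid spurious states, is:

```python
# Solve H F = E N F for one spin I, given the Hamiltonian and norm
# matrices of Eq. (4) assembled as arrays.
import numpy as np
from scipy.linalg import eigh

def solve_projected(h_mat, n_mat, cutoff=1e-8):
    """Energies and wave-function amplitudes F, Eq. (5), normalized as
    in Eq. (3). States with vanishing norm eigenvalues are discarded."""
    w, v = np.linalg.eigh(n_mat)
    keep = w > cutoff
    t = v[:, keep] / np.sqrt(w[keep])   # maps to an orthonormal basis
    h_ortho = t.T @ h_mat @ t
    energies, g = eigh(h_ortho)
    amplitudes = t @ g                  # back to the projected basis
    return energies, amplitudes
```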
## III The Mean field and the residual interaction strengths
In the PSM, the multi-qp basis $`|\varphi _\kappa `$ is built on qp excitations from the Nilsson + BCS vacuum. The shell model basis (the angular momentum projected multi-qp states) is then truncated according to the excitation energy from the qp vacuum. Because for most quantities of interest in spectroscopy only states relatively near the Fermi surface are important, the typical dimension in which the diagonalization is carried out is about 100 in realistic calculations. Since this is a truncated theory, the quality of a calculation is related to the quality of the mean field, particularly of the qp states around the Fermi level if the interest in question is the yrast line. Given that the mean field is often used in describing heavy and deformed nuclei, considerable knowledge exists concerning the qp structure near the Fermi surface.
In the PSM, the quadrupole-quadrupole strength in Eq. (1) is obtained self-consistently from the mean field quadrupole deformation $`ϵ_2`$, while the monopole and quadrupole pairing strengths in Eq. (1) are taken from systematics. In a previous study of <sup>160</sup>Dy, it was shown that the low energy rotational bands can only be reproduced if these self-consistent values are used.
The monopole and quadrupole-pairing interaction strengths $`G_M`$ and $`G_Q`$ used in this work are
$`G_M=(20.12\mp 13.13{\displaystyle \frac{NZ}{A}})A^{1}`$ (6)
$`G_Q=0.18G_M,`$ (7)
where the minus (plus) sign applies to neutrons (protons). These are the same values used in Refs.
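For a given nucleus the strengths follow directly from Eqs. (6) and (7); a one-function sketch (ours, with illustrative names):

```python
# Pairing strengths of Eqs. (6)-(7), in MeV; the minus sign applies to
# neutrons, the plus sign to protons.
def pairing_strengths(n_neutrons, n_protons, is_neutron):
    a = n_neutrons + n_protons
    sign = -1.0 if is_neutron else +1.0
    g_m = (20.12 + sign * 13.13 * (n_neutrons - n_protons) / a) / a
    g_q = 0.18 * g_m
    return g_m, g_q
```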
The deformation $`ϵ_2`$ can be taken from tables which are extracted from experimental $`B(E2;2_1^+\rightarrow 0_{gs}^+)`$ values by using a geometrical model. These values are very similar to those calculated from mean field models. This is expected because in mean field models the relation between the deformation and the electric quadrupole moment is constructed so that the experimental $`B(E2)`$ is reproduced. However, it should be kept in mind that deformation is a model-dependent concept, while the $`B(E2)`$ is directly related to measured quantities.
We found that the use of an enlarged deformation $`ϵ_2^{}`$ strongly improves the description of the spectra and $`B(E2)`$ values within the PSM for the Dy chain. The same effect is operative in other rare earth isotope chains. A detailed analysis of the changes in the mean field energies and quasiparticle occupations introduced by this modified deformation, as well as of its effects on the backbending plots, is presented in the following sections.
Table 1 shows the deformations used in this article for the different Dy isotopes, which are listed in the first column. The second and third columns list the deformations reported in the literature and the enlarged ones used in our calculation. The latter were obtained by looking for the best available description of the energy spectra and backbending plots for each isotope. Once the deformation was fixed, the quadrupole-quadrupole interaction strength was obtained self-consistently. An overall deformation increase of around 20% was found necessary to best reproduce the experimental data of the Dy isotopes.
## IV The Backbending plots
The angular frequency $`\omega `$, defined as $`\mathrm{\hbar }\omega (I)=\frac{1}{2}[E(I+2)E(I)]`$, is plotted as a function of the spin $`I`$ for the yrast states in Fig. 1. Results for <sup>154,156,158,160,162,164</sup>Dy are presented in panels (a,b,c,d,e,f) respectively. In each plot the full line corresponds to the standard deformation $`ϵ_2`$ and the dotted line to the enlarged deformation $`ϵ_2^{}`$. Diamonds represent the experimental data, taken from Ref.
It can be seen that in each case a larger deformation is associated with a smoother curve, thus better reproducing the experimental data. For the first four Dy isotopes, the PSM calculation with the standard deformation predicts a second backbending, visible as a second minimum in Fig. 1, which has no experimental counterpart. This prediction disappears in the calculation with the enlarged deformation. Only in <sup>154</sup>Dy does the experimental yrast band exhibit a very sharp second backbending, at $`I=30\mathrm{\hbar }`$, which is predicted by neither the standard nor the enlarged deformation calculation.
The same conclusion is reinforced when the data are presented as twice the moment of inertia $`2\mathrm{\Theta }`$ vs. the square of the angular frequency (the backbending plots), as shown in Fig. 2. In these plots the backbending features are emphasized, and the improvement obtained in the description of the experimental data using the enlarged deformation is clear. While for the heavier Dy isotopes the moment of inertia changes very slowly as a function of the angular frequency, a strong stretching effect is apparent for the lighter ones. This implies that, in <sup>154,156</sup>Dy at small rotational frequencies, the moment of inertia changes drastically, and its proper description would require a richer mean field basis rather than a geometrically fixed deformation with axial symmetry. Due to the absence of this ingredient in the present calculation, the moment of inertia at low angular momenta in these two isotopes is not well described.
In order to obtain a deeper understanding of the results seen in the backbending plots with different deformations, in Fig. 3 we present the unperturbed band diagram of <sup>158</sup>Dy. In this figure the band energy, defined as
$$E_\kappa ^I=\frac{H_{\kappa K\kappa K}^I}{N_{\kappa K\kappa K}^I}=\frac{\langle \varphi _\kappa |\widehat{H}\widehat{P}_{KK}^I|\varphi _\kappa \rangle }{\langle \varphi _\kappa |\widehat{P}_{KK}^I|\varphi _\kappa \rangle },$$
(8)
is plotted as a function of angular momentum $`I`$. For the standard deformation (left panel of Fig. 3) the 2-qp band (labeled by number 2) that is the first one to cross the ground state band corresponds to a configuration with particles from smaller $`K`$-orbits ($`K=1/2`$ and 3/2). In the case of the enlarged deformation (right panel of Fig. 3) the relevant 2-qp band (again labeled as 2) has particles from larger $`K`$-orbits ($`K=3/2`$ and 5/2). In the first case the crossing angle between the unperturbed ground band and that 2-qp band at $`I=12\hbar `$, which we loosely define as the angle between the tangents of both curves at the crossing point , is large; this is reflected in Fig. 2c as an exaggerated zig-zag effect in the full line. On the other hand, a smaller crossing angle at $`I=16\hbar `$ for the enlarged deformation case is behind the smooth behavior of the dotted curve, which precisely reproduces the experimental information.
At the standard deformation there are three neutron single particle levels that contribute to the two quasiparticle bands active in the backbending region. They are the Nilsson states with important spherical components N = 6, l=i, j=13/2, m= 5/2, -3/2, 1/2, denoted \[6 i13/2 5/2\]n, \[6 i13/2 -3/2\]n and \[6 i13/2 1/2\]n, respectively. When the deformation is increased, only the first two levels are relevant for backbending. In Fig. 4, some neutron Nilsson single particle levels close to the Fermi level (indicated in each figure as a black dot) are plotted. The rearrangements of the relative positions of these levels explain the displacement to higher energies of many two-qp bands for the enlarged deformation, with the consequent changes in the band crossings and the backbending plots.
Notice that, for $`ϵ_2\approx 0.3`$ and N = 92, there is a large gap and only a sparse set of single particle states for 92 $`\le `$ N $`\le `$ 96. For this reason, the excitation energies and backbending plots for <sup>160,162,164</sup>Dy shown in Fig. 1 d,e,f and Fig. 2 d,e,f behave normally, with no sudden changes in their slopes.
Until now, we have argued for the necessity of changing the single particle distributions by shifting the Fermi level to that corresponding to a larger deformation in the standard Nilsson diagram. It should be noted that the parameters used to generate the Nilsson diagram in the present paper were fitted nearly 30 years ago , when few accurate and systematic high-spin data were available. An alternative way of improving the single particle distribution is to modify the standard Nilsson parameters, i.e., to change the local single particle distribution in the standard Nilsson diagram. It has been noticed recently that the standard Nilsson parameters for the proton $`N=5`$ shell need to be modified to reproduce the newest high spin data.
In an early PSM work , the modified set of Nilsson parameters was used for the lighter Er and Yb isotopes, while the standard Nilsson parameters were employed for the heavier ones. Namely, if the authors of Ref. insisted on assuming the deformations commonly found in the literature, they had to use different sets of Nilsson parameters for lighter and heavier isotopes. We have tested our prescription of increasing the deformation in Er and Yb nuclei, and found that one is able to use one unified (standard) set of Nilsson parameters to span a deformed basis for the PSM, with the lighter isotopes requiring a larger deformation increase.
## V Reduced quadrupole moment and B(E2) transitions
The $`B(E2;I_i\rightarrow I_f)`$ transition probabilities from the initial state $`(\sigma _i,I_i)`$ to the final state $`(\sigma _f,I_f)`$ are given by (writing $`I\equiv I_i`$ and $`I^{\prime }\equiv I_f`$)
$`B(E2;I_i\rightarrow I_f)={\displaystyle \frac{2I^{\prime }+1}{2I+1}}|\langle \mathrm{\Psi }_{I^{\prime }}||\widehat{Q}_2||\mathrm{\Psi }_I\rangle |^2,`$ (9)
where
$`\langle \mathrm{\Psi }_{I^{\prime }}||\widehat{Q}_2||\mathrm{\Psi }_I\rangle =\sum _\nu \left\{\sum _{\kappa \kappa ^{\prime }}(IK^{\prime }-\nu ,\lambda \nu |I^{\prime }K^{\prime })\langle \varphi _{\kappa ^{\prime }}|\widehat{Q}_{2\nu }\widehat{P}_{K^{\prime }-\nu K}^I|\varphi _\kappa \rangle F_{\kappa ^{\prime }}^{I^{\prime }}F_\kappa ^I\right\}.`$ (10)
Here $`F_{\kappa ^{\prime }}^{I^{\prime }}`$ and $`F_\kappa ^I`$ are, respectively, the PSM eigenvectors for spins $`I^{\prime }`$ and $`I`$, as calculated in Eq. (2), and the electric quadrupole operator $`Q_{2\nu }`$ is defined as
$`Q_{2\nu }=e_pQ_{2\nu }^p+e_nQ_{2\nu }^n,\qquad Q_{2\nu }^{p(n)}=e\sum _{i=1}^{Z(N)}r_i^2Y_{2\nu }(\theta _i,\varphi _i).`$ (11)
The effective charges $`e_p`$ and $`e_n`$ for protons and neutrons were taken as $`e_p=1.5`$ and $`e_n=0.5`$ in previous works . We have used the same values in the calculations involving the standard deformation in this paper. However, when using the enlarged deformation, it proved necessary to use slightly smaller effective charges, as prescribed for example in Ref. :
$$e_p=e(1+\frac{Z}{A}),e_n=e(\frac{Z}{A})$$
(12)
For <sup>158</sup>Dy their numerical values are $`e_p=1.41`$ and $`e_n=0.41`$.
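This is a one-line consequence of Eq. (12); a quick check for <sup>158</sup>Dy ($`Z=66`$, $`A=158`$):

```python
Z, A = 66, 158
e_p, e_n = 1 + Z / A, Z / A          # Eq. (12), in units of e
print(f"{e_p:.3f} {e_n:.3f}")        # 1.418 0.418; quoted in the text as 1.41 and 0.41
```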
Fig. 5 exhibits the $`B(E2;I_i\rightarrow I_f)`$ values for the six Dy isotopes under study. Again the full lines represent the results obtained with the standard deformation and the dotted lines those obtained with the enlarged one. Experimental data are presented with their error bars. There are strong reductions in the $`B(E2)`$ values predicted around $`I=14\hbar `$ using the standard deformation that clearly contradict the experimental data. This contradiction is removed in the calculation with the enlarged deformation.
Another quantity of interest is the reduced transition quadrupole moment. It is defined as
$$Q_t(I\rightarrow I^{\prime })=\frac{1}{(I0,20|I^{\prime }0)}\sqrt{\frac{2I+1}{2I^{\prime }+1}B(E2;I\rightarrow I^{\prime })}.$$
(13)
For a rigid rotor with axial symmetry, it is equivalent, up to a sign, to the static quadrupole moment. In this special case
$$Q_t(I\rightarrow I-2)/Q_t(2\rightarrow 0)=1.$$
An analysis of this quotient shows the extent to which the yrast band $`B(E2)`$ values correspond to an axially symmetric rotor. Cescato et al. showed that cranking models give rigid rotor behavior at any spin and cannot explain the experimental fluctuations of the quadrupole moment as a function of angular momentum.
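Evaluating Eq. (13) only requires a Clebsch-Gordan coefficient; a minimal sketch using sympy (the $`B(E2)`$ inputs below are hypothetical numbers, not our results):

```python
from math import sqrt
from sympy.physics.quantum.cg import CG

def Qt(I, Ip, BE2):
    """Eq. (13); with BE2 in e^2 b^2 the result comes out in e b."""
    cg = float(CG(I, 0, 2, 0, Ip, 0).doit())  # (I 0, 2 0 | I' 0)
    return sqrt((2*I + 1) / (2*Ip + 1) * BE2) / cg

print(Qt(16, 14, BE2=0.9) / Qt(2, 0, BE2=0.9))   # hypothetical B(E2) inputs
```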
Fig. 6(a,b,c,d,e,f) shows the behavior of the ratio $`Q_t(I\rightarrow I-2)/Q_t(2\rightarrow 0)`$ as a function of the total angular momentum. As seen in the figure, the theoretical predictions exhibit a sudden change in magnitude at those angular momenta where backbending takes place. As was mentioned in the discussion of the $`B(E2)`$ values, at standard deformations the PSM predicts more variation than observed, while with the enlarged deformation the agreement with the experimental data is improved.
The origin of the improved agreement in the transition quadrupole matrix elements is similar to that of the improved backbending behavior. A smoothing of the crossing interaction will generally tend to suppress sharp drops of the transition matrix elements at the first backbending.
The reduction of the $`B(E2)`$ values observed in <sup>156,158</sup>Dy for $`I>20\hbar `$ and the corresponding reduction in quadrupole moments cannot be reproduced in any of the present calculations, which include only a few states near the Fermi surface. Qualitatively, accounting for this phenomenon of gradual loss of collectivity would require a more extensive PSM description that includes in the Hilbert space more 2- and 4-qp states, and possibly even higher orders of qp states.
## VI The Gyromagnetic Factors
Gyromagnetic factors are very sensitive to the particle alignment processes, thus allowing us to disentangle the origin of the crossing band at spins around $`16`$–$`18\hbar `$. The magnetic moment $`\mu `$ of a state $`(\sigma ,I)`$ is defined by
$$\mu (\sigma ,I)=\sqrt{\frac{4\pi }{3}}<\sigma ,II|\widehat{\mathcal{M}}_{10}|\sigma ,II>=\frac{[4\pi I]^{1/2}}{[3(I+1)(2I+1)]^{1/2}}<\sigma ,I||\widehat{\mathcal{M}}_1||\sigma ,I>,$$
(14)
where the operator $`\widehat{\mathcal{M}}_{10}`$ is given by
$$\widehat{\mathcal{M}}_{10}=\mu _N\sqrt{\frac{3}{4\pi }}\sum _{i=1}^{A}\left(g_l^{(i)}l_z^{(i)}+g_s^{(i)}s_z^{(i)}\right)=\mu _N\sqrt{\frac{3}{4\pi }}\sum _{\tau =p,n}\left(g_l^\tau L_z^\tau +g_s^\tau S_z^\tau \right)$$
(15)
with $`\mu _N`$ the nuclear magneton, and $`g_l`$ and $`g_s`$ the orbital and the spin gyromagnetic factors, respectively.
The gyromagnetic factors $`g(\sigma ,I),g_p(\sigma ,I)`$ and $`g_n(\sigma ,I)`$ are defined by
$$g(\sigma ,I)=\frac{\mu (\sigma ,I)}{\mu _NI}=g_p(\sigma ,I)+g_n(\sigma ,I),$$
(16)
with $`g_\tau (\sigma ,I),\tau =p,n`$, given by
$$g_\tau (\sigma ,I)=\frac{1}{[I(I+1)(2I+1)]^{1/2}}\left(g_l^\tau <\sigma ,I||J^\tau ||\sigma ,I>+(g_s^\tau -g_l^\tau )<\sigma ,I||S^\tau ||\sigma ,I>\right).$$
(17)
In the calculations we use for $`g_l`$ the free values and for $`g_s`$ the free values damped by the usual 0.75 factor
$$g_l^p=1,\qquad g_l^n=0,\qquad g_s^p=5.586\times 0.75,\qquad g_s^n=-3.826\times 0.75.$$
(18)
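A minimal sketch of Eqs. (17) and (18); the reduced matrix elements are hypothetical placeholders that, in an actual calculation, come from the PSM wave functions:

```python
import math

G_L = {"p": 1.0, "n": 0.0}                    # free orbital g-factors
G_S = {"p": 5.586 * 0.75, "n": -3.826 * 0.75} # free spin values damped by 0.75

def g_tau(I, J_red, S_red, tau):
    """Eq. (17); J_red and S_red stand for <sigma,I||J^tau||sigma,I> and
    <sigma,I||S^tau||sigma,I>, to be taken from the PSM wave functions."""
    norm = math.sqrt(I * (I + 1) * (2*I + 1))
    return (G_L[tau] * J_red + (G_S[tau] - G_L[tau]) * S_red) / norm
```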
We emphasize that, unlike many other models, the g-factor is directly computed by using the many-body wave function (Eq. (17)). In particular, there is no need to introduce any core contribution, which is a model-dependent concept.
In general, for proton alignment the contribution $`g_l^p<\sigma ,I||J^p||\sigma ,I>`$ is large and positive and we expect an increasing $`g`$ factor. For neutron alignment $`g_s^n<\sigma ,I||S^n||\sigma ,I>`$ is negative and we therefore expect a decreasing $`g`$ factor.
In Fig. 7 we present the gyromagnetic factor for the six Dy isotopes along the yrast band, again with the full lines representing the results obtained with the standard deformation and the dotted lines those obtained with the enlarged one. Experimental data are presented with their error bars. Our results show a slight increase at low spins, followed by a clear reduction at the band crossing region and then a recovery. The decrease of the total $`g`$-factor at the band crossing confirms the character of the crossing band as a neutron aligned band. The smoothness of $`g`$ at high angular momentum indicates a proton alignment. The prediction of this trend is supported by the <sup>154</sup>Dy g-factor data which extended the measurement to high spins. For the low spin part, the PSM predictions are also confirmed by later experiments .
In a recent experiment, Alfer et al. measured g-factors for <sup>158,160,162</sup>Dy at low spins . In contrast to the behavior of the Er g-factors at low spins , Alfer et al. found a clear drop in <sup>158,160</sup>Dy at spin $`I=6\hbar `$, which was not predicted by the PSM calculation . One purpose of the present g-factor calculation is to look for a possible explanation of the Alfer data when the enlarged deformation studied in this paper is used. However, the result is negative: the basic features of the theoretical g-factor do not change much in our new calculations. In the calculation with enlarged deformation, one sees only a delayed and smaller decrease of the values at the band crossing, but nothing essential changes at low spins.
Bengtsson and Åberg suggested an increase of the g-factor at low spins due to changes in deformation and pairing. Our microscopic calculations in this paper show the same trend. The admixture of $`i_{13/2}`$ single neutrons into the ground band is found to be negligibly small around spin $`I=6\hbar `$ in our calculation, and thus cannot be the cause of the g-factor drop suggested by the authors of Ref. (if this argument were right, one would expect a larger drop of the g-factor for the state $`I=8\hbar `$). To our knowledge, there have been no microscopic calculations that can reproduce this variation of the g-factor around spin $`6\hbar `$. A similar situation is found in $`{}_{}{}^{50}Cr`$ , when studied using the spherical shell model (complete $`fp`$ shell) and the HFB method. Both predictions are nearly identical, but they do not agree with the data , noticeably for $`I=4\hbar `$, where the g-factor drops in a way similar to that found in the Dy isotopes . We notice that recent measurements do not find this g-factor drop at low spins in $`{}_{}{}^{50}Cr`$ .
## VII Conclusions
We have presented a study of the yrast band in the <sup>154-164</sup>Dy isotopes using the Projected Shell Model (PSM). We have shown that the use of an input deformation $`20\%`$ larger than the standard value, coupled with a slight reduction in effective charge, leads to an improved description of the yrast band energies, and of $`B(E2)`$ values and transition quadrupole moments in these isotopes. The dependence of the $`B(E2)`$ values on the angular momentum is also better described when the larger deformations are used.
We have discussed the changes in the distribution of single-particle occupation implied by increased deformations for the unperturbed rotational bands. The different Nilsson single particle energies at enlarged deformations and the associated changes in the Fermi level were shown to be the main source of changes in the yrast spectra and the wave functions. Appropriate modifications of the Nilsson parameters would have had a similar effect.
While general rotational features and the physics related to the band crossings are well described by the present study, limitations of the model are also seen from the discussion. The lighter isotopes <sup>154</sup>Dy and <sup>156</sup>Dy exhibit a softness against rotation that is not present in the calculations. A correct description of these nuclei would require a study with a richer Hilbert space than is contained in this simple PSM. The observed reduction of the g-factor around $`I=6\hbar `$ cannot be explained by the PSM.
After this manuscript was finished, we became aware of a recent extension of the PSM by Sheikh and Hara that includes $`\gamma `$ deformation in the basis states and performs three-dimensional angular momentum projection. Although their preliminary code works in a very limited model space, a great improvement in the description of the moment of inertia at low spins in rare earth nuclei with neutron number around 90 is obtained. This could remove the discrepancies found in our present paper for <sup>154</sup>Dy and <sup>156</sup>Dy and strongly extend the predictive power of the PSM.
We finally emphasize that a systematic study for a chain of nuclei is always a serious test for microscopic models, before they can be used to explain or predict an isolated event in specific isotopes.
###### Acknowledgements.
This work was supported in part by Conacyt (Mexico) and the National Science Foundation. Yang Sun acknowledges the hospitality of the Instituto de Ciencias Nucleares, UNAM, where the final version of this article was completed.
Figure Captions
Figure 1: Angular frequency $`\omega `$ vs. angular momentum $`I`$ for <sup>154,156,158,160,162,164</sup>Dy are shown in inserts (a,b,c,d,e,f) respectively. The PSM results using standard deformations are presented as solid lines; those with enlarged deformations as dotted lines. Experimental data are represented by diamonds.
Figure 2: Twice the moment of inertia, $`2\mathrm{\Theta }`$, vs. the square of the angular frequency $`\omega `$. The same convention as in Fig. 1 is used.
Figure 3: Unperturbed rotational bands for $`{}_{}{}^{158}Dy`$ at standard (a) and enlarged (b) deformation. For the 2-qp bands the notation \[N lj m\] is used for each qp component. 4qp means a four quasiparticle state.
Figure 4: Nilsson neutron single particle energies around the Fermi level (represented with a diamond) for standard (a) and enlarged (b) deformation.
Figure 5: B(E2) values, in $`e^2b^2`$, as a function of the angular momentum $`I`$. The same convention as Fig. 1 is used.
Figure 6: Reduced transition quadrupole moments $`Q_t(I\rightarrow I-2)`$ normalized with respect to $`Q_t(2\rightarrow 0)`$. The same convention as Fig. 1 is used.
Figure 7: G-factors vs. angular momentum I. The same convention as Fig. 1 is used.
# Effect of the Fermi surface destruction on transport properties in underdoped cuprates
## I INTRODUCTION
Normal state transport in high-$`T_c`$ cuprates continues to be a challenging subject. It has been known for a long time that the resistivity $`\rho (T)`$ in the normal state shows a linear temperature behavior down to the superconducting transition temperature $`T_c`$, while the inverse Hall angle $`\mathrm{cot}\theta _H(T)`$ and the Hall coefficient $`R_H(T)`$ have $`T^2`$ and $`T^{-1}`$ temperature dependences, respectively. However, the situation is different in underdoped systems, in which both $`\rho (T)`$ and $`R_H(T)`$ deviate from their high-$`T`$ behaviors below certain temperatures higher than $`T_c`$ .
Various models have been proposed to account for the temperature behaviors of the resistivity, the inverse Hall angle, and the Hall coefficient . Among them, of accumulating interest are the models based on the so-called hot spots and/or cold spots , which refer to small regions on the Fermi surface (FS) where the electron lifetime is unusually short or long, respectively. Fundamentally, in this kind of model, the anomalous temperature dependences of the transport coefficients are ascribed to the anisotropy of the scattering over different momentum regions. Some successes have been achieved based on these models. However, there are relatively few studies of the transport properties in underdoped cuprates. One of the striking features of underdoped high-$`T_c`$ cuprates is that there is a normal state gap (pseudogap), as measured by various experiments . A recent angle-resolved photoemission (ARPES) experiment further indicates that the pseudogap opens up at different momentum points at different temperatures; consequently, it leads to a FS composed of disconnected arcs in the pseudogap state. Because the electron lifetime varies over the Fermi surface, as assumed in the hot spot and/or cold spot models and also as suggested by ARPES experiments , it is expected that the loss of some parts of the FS will affect the temperature behavior of the transport coefficients. It is our aim in this paper to study the effects of the destruction of the FS on the transport properties in the pseudogap state. Our main results are: (1) Based on the standard Boltzmann transport theory, we demonstrate that the transport properties in the pseudogap state are well described by the cold spot model: the main contribution to transport comes from the cold spots while the hot spots contribute little, which is consistent with recent studies of transport in the normal state . (2) For realistic calculations using the nearly antiferromagnetic Fermi liquid (NAFL) interaction form , we find that the variation of the Fermi velocity along the FS is an essential ingredient in justifying the applicability of the cold spot model. (3) The bandstructure which has an extended flat band near $`(0,\pi )`$ gives a good account of the experimental observations. (4) By reducing the dispersion for optimally doped high-$`T_c`$ cuprates by a factor of 3, we can fit our result for the resistivity to experiments quantitatively. Moreover, using the same parameters, we find that the calculated temperature dependence of the inverse Hall angle is also consistent with the experimental data . As for the Hall coefficient, we get a weaker temperature dependence than experiments; however, its crossover behavior from the normal to the pseudogap state is in qualitative agreement with experiments.
The paper is organized as follows. In Section II, we discuss the effect of the variation of the Fermi velocity along the FS on the resistivity in the pseudogap state by comparing two kinds of tight-binding bandstructures which differ in the flatness of the dispersions near $`(0,\pm \pi )`$ points. In Section III, we present fits to experimental data of the resistivity and the inverse Hall angle and discuss qualitatively the crossover behavior of the Hall coefficient from the normal to the pseudogap state. Section IV contains a brief discussion and a conclusion.
## II EFFECT OF THE VARIATION OF FERMI VELOCITY ON RESISTIVITY
Currently, the most commonly used band structure for quasiparticles in high-$`T_c`$ cuprates is the two-dimensional tight-binding model including the nearest- and next-nearest-neighbour hopping terms, which is written as
$$\epsilon _k=-2t(\mathrm{cos}k_x+\mathrm{cos}k_y)-4t^{\prime }\mathrm{cos}k_x\mathrm{cos}k_y-\mu ,$$
(1)
where $`t=0.25`$ eV, $`t^{\prime }/t=-0.45`$, and $`\mu `$ is the chemical potential, which is determined by the hole concentration. As will be discussed below, we find that this dispersion fails to account for the temperature dependence of the resistivity in the pseudogap state as far as our model is concerned. Thus, another bandstructure is also considered, which is obtained by a tight-binding fit to the ARPES energy dispersion by Norman et al. . It reads
$$\epsilon _k=t_0+t_1(\mathrm{cos}k_x+\mathrm{cos}k_y)+t_2\mathrm{cos}k_x\mathrm{cos}k_y+t_3(\mathrm{cos}2k_x+\mathrm{cos}2k_y)+t_4(\mathrm{cos}2k_x\mathrm{cos}k_y+\mathrm{cos}2k_y\mathrm{cos}k_x)+t_5\mathrm{cos}2k_x\mathrm{cos}2k_y-\mu ,$$
(2)
with real space hopping matrix elements (in eV) \[$`t_0,\dots ,t_5`$\] = \[0.1305,-0.2976,0.1636,-0.026,-0.0559,0.051\]. Experimentally, the pseudogap opens at different temperatures for different hole doping concentrations. However, since the opening temperature $`T^{*}`$ is chosen by hand in our model, this change can be naturally accommodated. Thus, we will not consider the effect of different doping levels and fix the hole concentration at $`n=0.1`$. The FS’s for these two dispersions corresponding to $`n=0.1`$ are shown in Fig.1. The main difference between them is that the energy band of Eq.(2) is flatter near the crossing of the FS with the Brillouin zone boundary. This difference can be seen more clearly from the dispersions plotted in Fig.2. As shown, a very flat band exists near the $`\mathrm{M}`$ point along the direction from $`\mathrm{\Gamma }`$ to $`\mathrm{M}`$ for Eq.(2). Consequently, the Fermi velocity at $`k`$-point $`A`$ for the dispersion (2) is nearly 2.5 times smaller than at $`k`$-point $`B`$ ($`A`$ and $`B`$ are indicated in Fig.1), while it varies only slightly for the dispersion (1), as one can see from the inset of Fig.2. In the following, we will see that this seemingly slight difference qualitatively affects the temperature behavior of the transport coefficients in the pseudogap state. It is worth pointing out that both dispersions are obtained by fitting to ARPES experiments on optimally doped materials. So, applying them to underdoped systems by just adjusting the chemical potential is a rigid band approximation. Because no detailed bandstructure for underdoped materials is available to us now, we will use this approximation in this section for a qualitative discussion of the sensitivity of the resistivity in the pseudogap state with respect to the variation of the Fermi velocity along the FS. In Section III, we will demonstrate that this assumption fails to fit the experimental data quantitatively and that the best fit to experiments is obtained with the dispersion (2) reduced by a factor of 3.
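For readers who want to explore the two bands, a short sketch evaluating the dispersions and the Fermi velocity $`|\nabla _k\epsilon _k|`$ by finite differences (the chemical potentials and the two sample $`k`$-points are illustrative stand-ins, not the actual $`A`$ and $`B`$ of Fig.1):

```python
import numpy as np

# Dispersion (1): t = 0.25 eV, t'/t = -0.45
t, tp = 0.25, -0.45 * 0.25

def eps1(kx, ky, mu=-0.3):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu

# Dispersion (2): tight-binding fit of Norman et al., hoppings in eV
T = [0.1305, -0.2976, 0.1636, -0.026, -0.0559, 0.051]

def eps2(kx, ky, mu=0.0):
    return (T[0] + T[1]*(np.cos(kx) + np.cos(ky)) + T[2]*np.cos(kx)*np.cos(ky)
            + T[3]*(np.cos(2*kx) + np.cos(2*ky))
            + T[4]*(np.cos(2*kx)*np.cos(ky) + np.cos(2*ky)*np.cos(kx))
            + T[5]*np.cos(2*kx)*np.cos(2*ky) - mu)

def v_fermi(eps, kx, ky, h=1e-5):
    """|grad_k eps| by central finite differences (eV, lattice constant = 1)."""
    vx = (eps(kx + h, ky) - eps(kx - h, ky)) / (2*h)
    vy = (eps(kx, ky + h) - eps(kx, ky - h)) / (2*h)
    return np.hypot(vx, vy)

# Rough stand-ins for the hot point A (near the zone boundary) and cold point B:
print(v_fermi(eps2, np.pi, 0.15*np.pi), v_fermi(eps2, 0.4*np.pi, 0.4*np.pi))
```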
Though there are many studies of the origin of this pseudogap , no consensus seems to have been achieved. The ARPES experiments show that the gap has $`d_{x^2-y^2}`$ symmetry and that it first appears near the $`(0,\pm \pi )`$ and $`(\pm \pi ,0)`$ points, the gapped regions spreading laterally upon cooling the samples . In the presence of the gap, the transfer rates of electrons into and out of these regions, as well as the excitations of electrons within these regions, will drop rapidly. For simplicity, here we assume that the states in the gapped regions (shown schematically in Fig.1 as the regions enclosed by the four bold-line semi-circles) are unavailable to electrons. These regions first appear at the opening temperature of the pseudogap and extend gradually as the temperature decreases. Because only a few ARPES data on the destruction of the FS are available, we cannot deduce a precise form of its variation as a function of temperature, and we will assume that the $`T`$-dependence of the radius of the gapped regions is $`R(T)\propto (T^{*}-T)`$, $`R(T)\propto (T^{*}-T)^{1/2}`$ or $`R(T)\propto \mathrm{tanh}2\sqrt{(T^{*}/T)-1}`$ ($`T^{*}`$ being the opening temperature of the pseudogap), and that its maximum value at the superconducting transition temperature $`T_c`$ is $`R_{max}=0.3\pi `$ (the case of $`R_{max}=0.25\pi `$ is sometimes also included for comparison). We will choose $`T^{*}=150`$ K and $`T_c=64`$ K to fit the experimental data on YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub>. Our model is reminiscent of a recent proposal by Furukawa, Rice and Salmhofer . Based on a one-loop renormalization group investigation, they demonstrated that the FS with saddle points $`(\pi ,0)`$ and $`(0,\pi )`$ can be truncated by the formation of an insulating condensate due to umklapp scattering as the electron density increases, while the remaining FS stays metallic. It also bears a close similarity to the bosonic preformed pairs model of Geshkenbein, Ioffe and Larkin , in which the fermions lying inside the disks shown in Fig.1 are assumed to be paired into dispersionless bosons, and the interaction transferring electrons from the disks to other parts of the FS is weak, so that the bosons are in fact localized.
In order to proceed with the detailed calculations, a form of the effective interaction between electrons and spin fluctuations is required. We note that the anisotropy of the scattering along the Fermi line is naturally realized in the nearly antiferromagnetic Fermi liquid (NAFL) model , where the spin fluctuations are strongly peaked at the AF wave vector $`(\pi ,\pi )`$. So, we will adopt this model interaction, which reads ,
$`\chi (𝐪,\omega )=\sum _i\frac{1}{\omega _{q_i}-i\omega }`$
(3)
where $`\omega _{q_i}=T^c+\alpha T+\omega _D\psi _{q_i}`$, $`\psi _{q_i}=2+\mathrm{cos}(q_x+\delta Q_i)+\mathrm{cos}(q_y)`$ (or $`2+\mathrm{cos}(q_x)+\mathrm{cos}(q_y+\delta Q_i)`$), and $`T^c,\alpha ,`$ and $`\omega _D`$ are temperature-independent parameters. The sum over $`i`$ runs over the incommensurate wavevectors $`\delta Q_i=\pm 0.12\pi `$, which have recently been shown to exist in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.6</sub> . In our discussion, results for the commensurate case $`\delta Q_i=0`$ are also included for comparison.
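The dissipative part of Eq. (3), which enters the lifetime below, follows directly from $`\mathrm{Im}[1/(\omega _q-i\omega )]=\omega /(\omega _q^2+\omega ^2)`$; a sketch with the quoted parameter set (we measure the temperature in energy units and, for brevity, keep only the two $`q_x`$-shifted incommensurate branches):

```python
import numpy as np

Tc, alpha, omega_D = 0.0, 2.0, 0.077          # eV, parameter set of the text
dQs = (+0.12*np.pi, -0.12*np.pi)              # q_x-shifted branches only

def im_chi(qx, qy, w, T):
    """Im chi(q, w) = sum_i w / (w_{q_i}^2 + w^2), from Eq. (3)."""
    out = 0.0
    for dq in dQs:
        wq = Tc + alpha*T + omega_D*(2 + np.cos(qx + dq) + np.cos(qy))
        out += w / (wq**2 + w**2)
    return out

print(im_chi(np.pi, np.pi, w=0.01, T=0.01))   # sharply peaked near (pi, pi)
```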
We assume that a weak magnetic field is applied perpendicular to the CuO<sub>2</sub> plane and an electric field is applied along the $`x`$ direction, i.e., $`𝐄=E𝐞_𝐱`$ and $`𝐁=B𝐞_z`$. Within the conventional relaxation-time approximation, the longitudinal and Hall conductivities can be calculated according to
$$\sigma _{xx}=2e^2\sum _k[𝐯(k)\cdot 𝐞_𝐱]^2\tau _k\left[-\frac{\partial f(\epsilon _k)}{\partial \epsilon _k}\right],$$
(4)
$$\sigma _{xy}=2e^3\sum _k[𝐯(k)\cdot 𝐞_𝐱\tau (k)]\left[𝐯(k)\times 𝐁\right]\cdot \nabla _k[𝐯(k)\cdot 𝐞_𝐲\tau (k)]\left[-\frac{\partial f(\epsilon _k)}{\partial \epsilon _k}\right],$$
(5)
where $`𝐯(k)=\nabla _𝐤\epsilon _k`$ is the group velocity and $`\tau (k)`$ the relaxation time.
Following Stojkovic and Pines , we will approximate the relaxation rates by the electron lifetime. To second-order in the interaction constant $`g`$, it reads,
$$\frac{1}{\tau (k)}=2g^2\sum _{k^{\prime }}\mathrm{Im}\chi (k-k^{\prime },\epsilon _{k^{\prime }}-\epsilon _k)[n(\epsilon _{k^{\prime }}-\epsilon _k)+f(\epsilon _{k^{\prime }})],$$
(6)
where $`n(\epsilon )`$ and $`f(\epsilon )`$ are the Bose and Fermi distribution functions, respectively.
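Putting Eqs. (3) and (6) together, a self-contained sketch of the lifetime evaluated on a Brillouin-zone grid (the chemical potential, the temperature and the single commensurate branch are illustrative simplifications of the full calculation):

```python
import numpy as np

t, tp, mu, g = 0.25, -0.1125, -0.3, 0.64      # eV; mu is an illustrative value
alpha, omega_D = 2.0, 0.077                   # spin-fluctuation parameters (eV)

def eps(kx, ky):
    return -2*t*(np.cos(kx) + np.cos(ky)) - 4*tp*np.cos(kx)*np.cos(ky) - mu

def im_chi(qx, qy, w, T):
    """Im chi of Eq. (3), commensurate single branch for brevity."""
    wq = alpha*T + omega_D*(2 + np.cos(qx) + np.cos(qy))
    return w / (wq**2 + w**2)

def inv_tau(kx, ky, T, N=200):
    """Eq. (6) on an N x N grid, with the k'-sum normalized per site."""
    q = np.linspace(-np.pi, np.pi, N, endpoint=False)
    qx, qy = np.meshgrid(q, q)
    de = eps(qx, qy) - eps(kx, ky)
    de = np.where(np.abs(de) < 1e-9, 1e-9, de)          # regularize omega -> 0
    n_b = 1.0 / np.expm1(de / T)                        # Bose factor
    f = 1.0 / (np.exp(eps(qx, qy) / T) + 1.0)           # Fermi factor
    return 2 * g**2 * np.mean(im_chi(kx - qx, ky - qy, de, T) * (n_b + f))

print(inv_tau(np.pi, 0.2*np.pi, T=0.017))               # near a hot spot, T ~ 200 K
```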
We solve Eqs.(4), (5) and (6) numerically by dividing the Brillouin zone into a 200$`\times `$200 lattice. The parameters for the spin-fluctuation spectrum are chosen as $`T^c=0`$, $`\alpha =2.0`$ and $`\omega _D=77`$ meV. Another set of parameters, namely $`T^c=0`$, $`\alpha =2.0`$ and $`\omega _D=147`$ meV, has also been used, and no qualitative change has been found. The interaction constant is taken to be $`g=0.64`$ eV, as used before . In the pseudogap state, the sum over $`k(k^{\prime })`$ in Eqs.(4), (5) and (6) excludes those regions where the gap has formed.
The temperature dependences of the relaxation rates $`1/\tau _k`$ for both dispersions are presented in Fig.3. Due to the opening of the pseudogap, a decrease is observed in all cases at low temperatures. This result is expected because we have taken the density of states in the gapped regions to be zero once the temperature is lower than the opening temperature $`T^{*}=150`$ K. We note that the gapped regions first appear at the $`(0,\pm \pi )`$ points, and that the energy difference between the chemical potential and that at $`(0,\pm \pi )`$ is 1950 K and 650 K for the dispersions (1) and (2), respectively, at the same doping concentration $`n=0.1`$. Because transport involves the scattering of electrons situated within a few $`k_BT`$ of the FS, the destruction of the FS affects the transport properties only when the difference between the energies at the FS and in the gapped regions is comparable with $`k_BT`$. As a result, the decrease in $`1/\tau _k`$ starts at different temperatures for the dispersions (1) and (2): it starts at nearly $`T^{*}`$ for the dispersion (1) (shown in (c) and (d)) and at a lower temperature for the dispersion (2) (shown in (a) and (b)); in the latter case it also depends on the spreading rate of the gapped regions. Comparing $`1/\tau _k`$ at the different $`k`$-points $`A`$ and $`B`$, one finds that the relaxation rate is strongly anisotropic along the FS: it is larger near the hot spots, such as the $`k`$-point $`A`$, and smaller near the cold spots, such as the $`k`$-point $`B`$. This is due to the anisotropy of the interaction form Eq.(3). We would like to emphasize that the ratios of $`1/\tau _k`$ at the hot spots to that at the cold spots calculated using the dispersions (1) and (2) are comparable, namely about 4 for the dispersion (1) and 7 for the dispersion (2).
Now, we turn to the discussion of the dc resistivity. Before proceeding with a detailed analysis, one might speculate that the resistivity should decrease once the temperature falls below $`T^{*}`$ (or below the temperature at which $`1/\tau _k`$ starts to decrease, for the dispersion (2)), as inferred from the behavior of $`1/\tau _k`$ and the well-known Drude formula for the resistivity, $`\rho =m^{*}/(n_ee^2\tau )`$ (here $`\tau `$ is an effective relaxation time, $`m^{*}`$ the effective mass and $`n_e`$ the number of electrons). However, the numerical result for $`\rho (T)=1/\sigma _{xx}`$ calculated using Eq.(4) turns out to be not so trivial, since the quantity $`m^{*}/n_e`$ changes after parts of the FS are destroyed by the formation of the pseudogap. As shown in Fig.4(a), the dc resistivity calculated using the dispersion (1) goes up instead of going down after the gap opens, for all cases of $`R_{max}`$ and $`R(T)`$, which contradicts the experimental observation , while the results calculated using the dispersion (2) (shown in Fig.4(b)) show a decrease below $`T^{*}`$, though a small rise can also be observed below about 100 K and 80 K for the cases $`R_{max}=0.3\pi `$, $`R(T)\propto (T^{*}-T)`$, and $`R_{max}=0.25\pi `$, $`R(T)\propto \mathrm{tanh}2\sqrt{(T^{*}/T)-1}`$, respectively. Since the relaxation rate at the cold spots decreases once the system enters the pseudogap state, the contribution to the longitudinal conductivity from the cold spots, $`\sigma _{xx}^{(c)}`$, will increase. On the other hand, density of states near the hot spots is lost because of the opening of the pseudogap, which gives rise to a decrease in the conductivity coming from the hot spots, $`\sigma _{xx}^{(h)}`$. Therefore, whether the resistivity rises or drops in the pseudogap state depends on the competition between the increase in $`\sigma _{xx}^{(c)}`$ and the decrease in $`\sigma _{xx}^{(h)}`$. For the cold spot model, in which the relaxation rate at the hot spots is assumed to be unusually large compared with that at the cold spots , $`\sigma _{xx}^{(h)}`$ is short-circuited and the conductivity $`\sigma _{xx}`$ is determined by $`\sigma _{xx}^{(c)}`$. So, $`\sigma _{xx}`$ will increase, and in turn the resistivity $`\rho (T)=1/\sigma _{xx}`$ will decrease. On the contrary, for the hot spot model the contribution from the hot spots is comparable with or even larger than that from the cold spots; then the decrease in $`\sigma _{xx}^{(h)}`$ surpasses the increase in $`\sigma _{xx}^{(c)}`$, leading to a drop in the conductivity and a rise in the resistivity. Thus, in order to account for the temperature behavior of the resistivity observed in the pseudogap state, the resistivity should be dominated by the contribution from the cold spots, with that from the hot spots negligible. Now, we return to our realistic calculation using the interaction form Eq.(3). From Fig.3, one finds that the ratio of the relaxation rate at the hot spots to that at the cold spots is about 4 (a band calculation gives the same ratio, see Table II in Ref. ) and 7 for the dispersions (1) and (2), respectively. Thus, no overwhelming contribution from the cold spots can be expected from this ratio alone. In this case, the kinematical factor (the Fermi velocity $`v_F`$) must be considered, since the transport coefficients involve a $`k`$-sum over $`\tau _k`$ weighted by $`v_F^2`$. As noted above, for the dispersion (2) the Fermi velocity at the cold spots (near the $`k`$-point $`B`$) is 2.5 times larger than at the hot spots (near the $`k`$-point $`A`$).
This, along with the ratio 7 for the relaxation rate, makes $`\sigma _{xx}^{(c)}`$ at the $`k`$-point $`B`$ about 44 times larger than $`\sigma _{xx}^{(h)}`$ at the $`k`$-point $`A`$, and justifies the applicability of the cold spot model. However, for the dispersion (1) the Fermi velocity at the cold spots is nearly 1.15 times smaller than at the hot spots; consequently, $`\sigma _{xx}^{(c)}`$ is only 3 times larger than $`\sigma _{xx}^{(h)}`$. Thus, the loss in $`\sigma _{xx}^{(h)}`$ due to the opening of the pseudogap exceeds the increase in $`\sigma _{xx}^{(c)}`$ arising from the enhancement of $`\tau _k^{(c)}`$, and eventually the resistivity increases.
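The bookkeeping behind these numbers is elementary, since each region contributes to $`\sigma _{xx}`$ with a weight $`v_F^2\tau _k`$ in Eq. (4):

```python
# sigma_xx weights scale as v_F^2 * tau (Eq. (4)):
print(2.5**2 * 7.0)     # ~ 44: dispersion (2), cold spots short-circuit the hot ones
print(4.0 / 1.15**2)    # ~ 3:  dispersion (1), no clear cold-spot dominance
```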
From the above discussion, one can see that the crossover behavior of the resistivity is better described by the cold spot rather than the hot spot model, which is consistent with recent studies of the transport properties in the normal state . In a realistic calculation, we find that the variation of the Fermi velocity along the FS plays an important role in determining whether the cold spot or the hot spot picture applies. According to the ARPES experiments , an extended van Hove singularity (flat band) exists near $`(0,\pm \pi )`$, i.e., around the hot spots; it leads to a lower Fermi velocity in the hot spot region and justifies the applicability of the cold spot model. However, the energy dispersion of Eq.(1) is not flat enough, and at the same time its flat band lies far from the FS. Consequently, it has a larger Fermi velocity near the hot spots, as shown in the inset of Fig.2. So, as far as our model is concerned, the dispersion (1) is inadequate for the description of the transport properties in underdoped cuprates, though it has been the most widely used one before.
## III FITTING TO EXPERIMENTAL DATA
Although the agreement between the model calculation of the resistivity using the dispersion (2) and experiments is reasonable in view of its crossover behavior from the normal to the pseudogap state, there are two discrepancies when fitting it to experiments quantitatively. One is that a non-linear temperature dependence of the resistivity appears at high temperatures, and the other is that the resistivity ceases to decrease and even rises slightly with further decreasing temperature below about 100 K, which is higher than $`T_c`$, as can be seen in Fig.4(b). To resolve these discrepancies, we note that the bandstructure (2) is obtained from a fit to the photoemission data of Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> with hole doping 0.17, i.e., an optimally doped cuprate, so applying it to the underdoped regime by just adjusting its chemical potential is a rigid band assumption. In fact, the bandstructure changes as the doping varies. An important feature is that the band width becomes narrower, i.e., the quasiparticles become heavier, as the doping decreases. Of course, this mass renormalization is anisotropic in momentum space: it is larger near the FS and becomes progressively weaker away from it. A detailed treatment of this renormalization requires a complicated calculation and goes beyond our scope here; thus we simply take $`\epsilon _k\rightarrow \epsilon _k/3.0`$. This amounts to reducing the energy difference between the chemical potential and the flat band for hole doping $`n=0.1`$ from 54 meV to 18 meV, which is consistent with the ARPES value of 19 meV for the underdoped YBa<sub>2</sub>Cu<sub>4</sub>O<sub>8</sub> . The reduction of the whole energy band by a factor of 3 is justified approximately by the fact that only the electrons near the FS contribute to transport, while those far away from it have essentially no effect. Before making a quantitative comparison with experiments, we note that Wuyts et al. have developed a universal method of analyzing transport data in underdoped high-$`T_c`$ superconductors. They demonstrated that the transport data on YBa<sub>2</sub>Cu<sub>3</sub>O<sub>x</sub> can be scaled onto a universal curve using one scaling parameter $`T_0`$, with $`0.8T_0\approx T^{*}`$. We will adopt this method in the following analysis.
The results for the dc resistivity $`\rho (T)=1/\sigma _{xx}`$, calculated using the dispersion (2) reduced by a factor of 3 with $`\mu =0.02t`$ ($`n`$=0.1), and using the dispersion (2) but with $`\mu =0.016t`$ (the dashed-dotted line), are shown in Fig.5, where the residual resistivity is taken to be $`\rho _0=0.162\rho (T_0)`$, the same value as used before , and $`T^{*}=0.72T_0`$ with $`T^{*}=150`$ K. The result represented by the dashed-dotted line has the same energy difference of 18 meV as that calculated using the reduced bandwidth. The hollow squares indicate the experimental data of Ref. . An important effect of the reduction of the energy band is that the flat region becomes larger, so the slight rise in the resistivity below about 100 K observed in Fig.4(b) is removed and a continued decrease is obtained, which fits the experimental data well. On the other hand, although it has the same energy difference, the result obtained by adjusting the chemical potential (dashed-dotted line) still shows a rise below 100 K. This implies that it is the extent of the flat band, rather than the difference between the chemical potential and the flat band, that is responsible for the low temperature rise in the resistivity. This can be understood from the origin of the rise. As the gapped regions spread with decreasing temperature, more and more parts of the FS are destroyed. Since the Fermi velocity increases as the wavevector moves from the $`k`$-point $`A`$ to $`B`$, the ratio of the Fermi velocity at the cold spots to that at the crossing of the FS with the edge of the gapped regions decreases. Thus, the contribution to the conductivity from the hot regions grows gradually, and the resistivity ceases to decrease or even rises at low temperatures. If we reduce the width of the energy band, the extent of the flat band, and consequently of the region of low Fermi velocity, grows; this keeps the contribution from the hot regions negligible. One can also see from the figure that a linear-in-$`T`$ dependence is well reproduced in the normal state, although its slope is somewhat larger than in the experimental data. The results for different $`T`$-dependences of the radius of the gapped regions $`R(T)`$ are shown in the inset of Fig.5. We find that the best fit to the experimental data is obtained for $`R(T)\propto (T^{*}-T)`$, though the difference between $`R(T)\propto (T^{*}-T)^{1/2}`$ and $`R(T)\propto (T^{*}-T)`$ is minor. The comparison of the resistivities calculated for the commensurate and incommensurate cases is also shown in Fig.5, in which the dashed line represents the result for the commensurate case. The qualitative difference between them is that the slope is smaller for the commensurate case, which makes the fit worse. As noted above, the temperature dependence of the resistivity is determined by the weight of the contribution from the cold spots relative to that from the hot spots. The incommensurate wave vectors shift the hot spots away from their original positions (toward the cold spots for $`+\delta 𝐐`$ and toward the boundary of the Brillouin zone for $`-\delta 𝐐`$), and thus change the weight of the contribution from the cold spots relative to that from the hot spots. It is this change that gives rise to the different slopes in the temperature behavior of the resistivity.
Using the same parameters, we have calculated the $`T`$-dependences of the inverse Hall angle $`\mathrm{cot}\theta _H(T)=\sigma _{xx}/\sigma _{xy}`$ and the Hall coefficient $`R_H=\sigma _{xy}/B\sigma _{xx}\sigma _{yy}`$. The result for the inverse Hall angle is presented in Fig.6. The fit to the experimental data above $`T^{*}`$ is good below about 300 K. Below 150 K, a slight deviation from the $`T^2`$ behavior appears; in particular, the $`\mathrm{cot}\theta _H(T)`$ curve changes from convex to concave with nearly the same inflection point as the experimental data in the incommensurate case. Comparing the results for different $`T`$-dependences of the gapped regions, one finds no significant difference, as shown by the solid and dotted lines. The results for the commensurate case (dashed line) and those calculated using the dispersion (2) with $`\mu =0.016t`$ (dashed-dotted line) both disagree with the experimental data at high temperatures and also exhibit a deviation from the high-$`T`$ behavior at a much lower temperature than the experiment.
As for the Hall coefficient shown in Fig.7, we obtain a weaker temperature dependence than that seen experimentally . A similar result has been reported by Stojkovic and Pines based on the NAFL model . However, the trend is very similar to the experimental data and allows us to compare the crossover behavior with experiment qualitatively. A striking feature of the incommensurate cases, which are represented by the solid and dotted lines, is that the Hall coefficient decreases rapidly with decreasing temperature at low temperatures, producing a peak slightly below the opening temperature $`T^{*}`$. This is qualitatively consistent with experiments . We note that the result for the commensurate case (dashed line) shows an even weaker $`T`$ dependence than the incommensurate cases. Moreover, the results calculated using the dispersion (2) with $`\mu =0.016t`$ (dashed-dotted line) display a contrary temperature behavior, i.e., the Hall coefficient decreases with temperature. This strong discrepancy is related to the discrepancy in the resistivity discussed above. Because $`R_H\propto 1/\sigma _{xx}^2`$, any slight deviation from the $`T`$-linearity of $`\rho (T)`$ is amplified and leads to a worse result for the Hall coefficient. By the same argument as for the resistivity, we know that the main contribution to the transverse conductivity $`\sigma _{xy}`$ comes from the cold spots, while the hot spots contribute little . So, $`\sigma _{xy}`$ has the same trend as the longitudinal conductivity $`\sigma _{xx}`$; their effects cancel and lead to only a minor variation in the temperature dependence of the inverse Hall angle. On the other hand, the depression is reflected in the Hall coefficient, since $`R_H=\sigma _{xy}/B\sigma _{xx}\sigma _{yy}`$.
This agreement with experiments using the dispersion (2) reduced by a factor of 3 raises the question of whether the dispersion (1) also works after the same reduction. We have checked this, and the result turns out to be poor. The reason is that the dispersion (1) has another van Hove singularity besides that at $`(0,\pi )`$; it is located at the $`(0,0)`$ point. The part of the FS near the diagonal direction approaches this singularity when the energy band is reduced, which increases the density of states at the cold spots; eventually, this changes the weight of the cold spots relative to the hot spots drastically.
## IV DISCUSSION AND CONCLUSION
In summary, we have investigated the effect of the FS destruction on transport properties in the pseudogap state of underdoped high-$`T_c`$ cuprates based on the standard Boltzmann theory. Using the simple assumption of taking the density of states of the gapped regions to be zero, we calculate the temperature dependences of the longitudinal resistivity, the Hall angle and the Hall coefficient. The results indicate that the temperature dependence of the transport coefficients is strongly sensitive to the existence and the extent of the flat band near $`(0,\pm \pi )`$, and to the anisotropy of the electron scattering rates along the Fermi surface. We find that the temperature dependences of the transport coefficients in the pseudogap state are better described by the cold spot model, i.e., they are determined by the contribution from the cold spots while the hot spots contribute little. We can semi-quantitatively explain the temperature dependences of both the resistivity and the Hall angle, as well as qualitatively explain the crossover behavior of the Hall coefficient from the normal state to the underdoped state. However, the calculated Hall coefficient in the normal state shows a weaker temperature dependence than that observed in experiments.
It is worthwhile to point out that in the NAFL model the different magnetic properties of underdoped systems are ascribed to distinct scaling regimes of the spin-fluctuation spectrum Eq.(3) . From this point of view, the anomalous transport properties may arise from a different interaction form related to the opening of the “spin pseudogap” deduced from NMR and neutron scattering experiments , though there are no detailed calculations of this at present. Here, we focus on the effect of the Fermi surface topology and take the interaction form in the underdoped regime to be the same as in the optimally doped regime. What the relation between the two proposals is, and whether any other interaction form that gives a varying electron lifetime on the FS, such as Eq.(3), can give the same results presented here, deserve further investigation.
## ACKNOWLEDGMENTS
We would like to thank M.R.Norman for discussions and for pointing out the dispersion Eq.(2). We acknowledge support from the NSC of Taiwan under Grants No.88-2112-M-001-004 and No.88-2112-M-003-004. JXL is supported in part by the National Natural Science Foundation of China.
On leave from Department of Physics, Nanjing University, Nanjing 210093, People’s Republic of China
## FIGURE CAPTIONS
Fig.1 Fermi surfaces for the dispersions (1) (dashed line) and (2) (thin solid line) with hole doping $`n=0.1`$. The thick solid lines enclose the “disk” regions where a pseudogap is suggested by experiments. The density of states of electrons in those regions is assumed to be zero in our model calculations.
Fig.2 Energy dispersions for Eqs.(1) and (2) described in the text along the $`\mathrm{\Gamma }=(0,0)`$–$`\mathrm{M}=(0,\pi )`$–$`\mathrm{Y}=(\pi ,\pi )`$ direction. Note the very flat band existing near $`\mathrm{M}`$ for Eq.(2). The inset shows the Fermi velocities along the Fermi surface for the dispersions (1) (dotted line) and (2) (solid line). The $`k`$-points $`A^{\prime }`$, $`B`$ and $`A`$ correspond to those indicated in Fig.1.
Fig.3 Relaxation rates as a function of temperature at different $`k`$ points along the Fermi surface. (a) and (b) are the results calculated using Eq.(1); (c) and (d) are those using Eq.(2). The $`k`$-point symbols ($`A`$ and $`B`$) correspond to those indicated in Fig.1. The maximum value of the radius of the gapped regions is $`R_{max}=0.25\pi `$ (see text), and their $`T`$-dependences are $`R(T)\propto (T^{*}-T)`$ (solid lines), $`\propto (T^{*}-T)^{1/2}`$ (dashed lines) and $`\propto \mathrm{tanh}(2\sqrt{T^{*}/T-1.0})`$ (dotted lines). The results are for the commensurate magnetic interaction; those for the incommensurate case are qualitatively similar to the results shown here, except for larger values.
Fig.4 Sensitivity of the resistivity with respect to the dispersions (1) \[(a)\] and (2) \[(b)\]. The solid line indicates the result for a $`T`$-dependence of the radius of the gapped regions $`R(T)\propto \mathrm{tanh}(2\sqrt{T^{*}/T-1.0})`$ and the dashed line that for $`R(T)\propto (T^{*}-T)`$, both corresponding to the maximum value $`R_{max}=0.25\pi `$. The dotted line corresponds to the case $`R_{max}=0.3\pi `$ and $`R(T)\propto (T^{*}-T)`$. For comparison, we also show the result for the commensurate case, as indicated in the figure.
Fig.5 Scaled resistivity versus scaled temperature with the maximum value of the radius of the gapped regions $`R_{max}=0.3\pi `$. The solid, dashed and dotted lines, and those in the inset, are the results calculated using the dispersion (2) reduced by a factor of 3. The dashed-dotted line is the result calculated using the dispersion (2) with the chemical potential $`\mu =0.016t`$ (see text). Solid line: $`R(T)\propto (T^{*}-T)`$, incommensuration $`\delta Q=0.12\pi `$. Dashed line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0`$. Dotted line: $`R(T)\propto (T^{*}-T)^{1/2}`$, $`\delta Q=0.12\pi `$. Dashed-dotted line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0.12\pi `$. The open squares, both in the main panel and in the inset, are experimental data from Ref. . The inset shows the sensitivity of the resistivity with respect to the size and temperature dependence of the gapped regions. Solid line: the same parameters as the solid line in the main panel. Dashed line: $`R(T)\propto (T^{*}-T)`$, $`R_{max}=0.25\pi `$. Dotted line: $`R(T)\propto \mathrm{tanh}(2\sqrt{T^{*}/T-1.0})`$, $`R_{max}=0.25\pi `$.
Fig.6. Scaled inverse Hall angle versus scaled temperature with the maximum value of the radius of the gapped regions $`R_{max}=0.3\pi `$. The solid, dashed and dotted lines are the results calculated using the dispersion (2) reduced by a factor of 3. The dashed-dotted line is the result calculated using the dispersion (2) with the chemical potential $`\mu =0.016t`$ (see text). Solid line: $`R(T)\propto (T^{*}-T)`$, incommensuration $`\delta Q=0.12\pi `$. Dashed line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0`$. Dotted line: $`R(T)\propto (T^{*}-T)^{1/2}`$, $`\delta Q=0.12\pi `$. Dashed-dotted line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0.12\pi `$. The open squares are experimental data from Ref. .
Fig.7. Temperature dependence of the Hall coefficient with the maximum value of the radius of the gapped regions $`R_{max}=0.3\pi `$. The solid, dashed and dotted lines are the results calculated using the dispersion (2) reduced by a factor of 3. The dashed-dotted line is the result calculated using the dispersion (2) with the chemical potential $`\mu =0.016t`$ (see text). Solid line: $`R(T)\propto (T^{*}-T)`$, incommensuration $`\delta Q=0.12\pi `$. Dashed line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0`$. Dotted line: $`R(T)\propto (T^{*}-T)^{1/2}`$, $`\delta Q=0.12\pi `$. Dashed-dotted line: $`R(T)\propto (T^{*}-T)`$, $`\delta Q=0.12\pi `$.
# Magnetooptical sum rules close to the Mott transition
## Abstract
We derive new sum rules for the real and imaginary parts of the frequency-dependent Hall constant and Hall conductivity. As an example, we discuss their relevance to the doped Mott insulator that we describe within the dynamical mean-field theory of strongly correlated electron systems.
The ac Hall effect can provide valuable insights into the dynamics of an electronic medium. This has recently been demonstrated in the case of high-$`T_c`$ superconductors : Various theoretical models based on different scattering mechanisms agree that the anomalous frequency and temperature dependences of the Hall effect are closely intertwined, but they differ in their predictions about these dependences . So far, experiments cannot discriminate between these models, but they will possibly be able to do so in the future .
The magnetooptical response of charge carriers can be probed by the frequency-dependent Hall conductivity, Hall constant, or Hall angle. Recently, a sum rule for the Hall angle has been derived that is similar to the well-known $`f`$-sum rule for the optical conductivity . In this paper, we derive new sum rules for the real and imaginary parts of the two other magnetotransport probes. Such sum rules are useful: First, they help to elucidate how the corresponding spectral weight is redistributed upon changing the temperature or the doping level. Second, they provide exact constraints on the interdependence of Hall-effect-related quantities and thus help in interpreting experimental data. For example, the sum rules for the ac Hall constant relate its low-frequency behavior to its infinite-frequency limit. This can be useful because, experimentally, only the microwave domain and the far infrared are attainable sufficiently reliably , whereas the calculation of the Hall constant simplifies considerably in the high-frequency limit .
We shall first derive the sum rules for the Hall conductivity and Hall constant quite generally. Then, to illustrate their application, we shall discuss some aspects of the magnetooptical response of correlated electrons close to the density-driven Mott transition.
We start by considering the ac conductivities. In terms of the dissipative part of the current-current correlation function,
$$\chi _{\nu \mu }^{\prime \prime }(\omega )=\int _{-\infty }^{\infty }dt\frac{1}{2}\langle [\widehat{J}_\nu (t),\widehat{J}_\mu (0)]\rangle e^{i\omega t},$$
(1)
the conductivity tensor reads
$$\sigma _{\nu \mu }(\omega )=ie^2𝒫\int _{-\infty }^{\infty }\frac{d\stackrel{~}{\omega }}{\pi }\frac{\chi _{\nu \mu }^{\prime \prime }(\stackrel{~}{\omega })}{\stackrel{~}{\omega }(\omega -\stackrel{~}{\omega })}+e^2\frac{\chi _{\nu \mu }^{\prime \prime }(\omega )}{\omega }.$$
(2)
Here, $`𝒫`$ indicates principal-value integration. From time reversal invariance, homogeneity of time, and the Hermiticity of the current operators, we may deduce the following symmetry properties :
$`\chi _{xx}^{\prime \prime }(\omega )`$ $`=`$ odd & real (3)
$`\chi _{xy}^{\prime \prime }(\omega )`$ $`=`$ $`\text{even \& wholly imaginary},`$ (4)
where Eq. (4) holds to first order in the magnetic field. Eqs. (3) and (4) imply that the real parts of $`\sigma _{xx}(\omega )`$ and $`\sigma _{xy}(\omega )`$ are even while their imaginary ones are odd. We also see that the dc Hall conductivity is finite only if
$$\chi _{xy}^{\prime \prime }(0)=0,$$
(5)
and derive the first couple of sum rules:
$`{\displaystyle \int _0^{\infty }}d\omega \mathrm{Re}\sigma _{xy}(\omega )=0,`$ (6)
$`{\displaystyle \int _{-\infty }^{\infty }}d\omega {\displaystyle \frac{\omega \mathrm{Im}\sigma _{xy}(\omega )}{\pi e^2}}=\langle i[\widehat{J}_x,\widehat{J}_y]\rangle .`$ (7)
To prove Eq. (6), we close the path of integration along a semicircle at infinity in the upper-half complex-frequency ($`z`$) plane and apply Cauchy’s theorem. The integral on the semicircle does not contribute since the leading high-frequency behavior of $`\sigma _{xy}(z)`$ is $`1/z^2`$. The sum rule (7) is similar to the $`f`$-sum rule of the optical conductivity,
$$\int _{-\infty }^{\infty }d\omega \frac{\mathrm{Re}\sigma _{xx}(\omega )}{\pi e^2}=\langle i[\widehat{J}_x,\widehat{P}_x]\rangle =\chi ^0,$$
(8)
where $`\widehat{P}_x`$ is the polarization operator satisfying $`\widehat{J}_x(t)=\partial \widehat{P}_x(t)/\partial t`$, and $`\chi ^0=\int d\omega \chi _{xx}^{\prime \prime }(\omega )/\pi \omega `$ is the static current-current correlation function, which is positive definite. To interpret the right-hand sides of Eqs. (7) and (8), we first note that the Hall frequency $`\omega _H\equiv \langle i[\widehat{J}_x,\widehat{J}_y]\rangle /\chi ^0`$ is the generalization of the cyclotron frequency to the lattice . Its sign determines that of the infinite-frequency Hall constant,
$$R_H^{\infty }=\underset{H\to 0}{lim}\frac{N\omega _H}{e^2\chi ^0H},$$
(9)
which was considered by Shastry et al. . Here, $`N`$ denotes the total number of lattice sites. Second, the Drude-theory expression $`\sigma _{xx}(\omega )=\frac{\omega _p^2/4\pi }{1/\tau -i\omega }`$ yields $`\chi ^0=\omega _p^2/(4\pi e^2)`$, where $`e`$ is the charge of an electron and $`\omega _p`$ the plasma frequency. In general, however, $`\chi ^0`$ and $`\omega _H`$ depend on all external and model parameters such as temperature, band filling, and correlation strength.
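As a consistency check, the minimal sketch below integrates the Drude $`\sigma _{xx}`$ together with a textbook Drude Hall conductivity $`\sigma _{xy}(\omega )=\omega _c(\omega _p^2/4\pi )/(1/\tau -i\omega )^2`$ (an assumed form for illustration; the paper quotes its renormalized version in the Fermi-liquid discussion below) and recovers the right-hand sides of the sum rules (6)–(8) numerically. All parameter values are arbitrary and illustrative, not fitted to any material.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative check of sum rules (6)-(8) for Drude forms; wp, tau, wc
# are arbitrary values, and we set e = 1.
wp, tau, wc, e = 1.7, 3.0, 0.4, 1.0
chi0 = wp**2 / (4 * np.pi * e**2)   # static correlator for the Drude model
wH = wc                             # here the Hall frequency equals the cyclotron frequency

def sxx(w):
    return (wp**2 / (4 * np.pi)) / (1/tau - 1j*w)

def sxy(w):
    return wc * (wp**2 / (4 * np.pi)) / (1/tau - 1j*w)**2

s6, _ = quad(lambda w: sxy(w).real, 0, np.inf)                             # Eq. (6)
s7, _ = quad(lambda w: w * sxy(w).imag / (np.pi * e**2), -np.inf, np.inf)  # Eq. (7)
s8, _ = quad(lambda w: sxx(w).real / (np.pi * e**2), -np.inf, np.inf)      # Eq. (8)

print(f"(6): {s6:+.2e}  (expect 0)")
print(f"(7): {s7:.4f}  vs chi0*wH = {chi0 * wH:.4f}")
print(f"(8): {s8:.4f}  vs chi0    = {chi0:.4f}")
```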
Before proceeding, we compare the sum rules (7) and (8). In both cases, the contribution of a band $`ϵ(\stackrel{}{k})`$ to the right-hand side can be represented as a weighted average of the momentum-distribution function, $`n_{\stackrel{}{k}\sigma }`$, over the Brillouin zone (BZ), where the weight function is determined by the inverse mass tensor : $`i\langle [\widehat{J}_x,\widehat{J}_y]\rangle =He\sum _{\stackrel{}{k}\sigma }det(ϵ_\stackrel{}{k}^{\nu \mu })\,n_{\stackrel{}{k}\sigma }`$ and $`\chi ^0=\sum _{\stackrel{}{k}\sigma }ϵ_\stackrel{}{k}^{xx}\,n_{\stackrel{}{k}\sigma }`$. Here, upper indices indicate differentiation with respect to a component of the Bloch vector, as in $`ϵ_\stackrel{}{k}^x=\partial ϵ_\stackrel{}{k}/\partial k_x`$. $`H`$ is the magnetic field, assumed to point in the $`z`$ direction, and $`\nu ,\mu =x,y`$. In many semiconductors, only Bloch states close to the minima of the conduction band or the maxima of the valence band contribute. Then, one can replace the inverse mass tensor by its value at the respective band edge. Thus, the sum rules (7) and (8) are seen to relate hard-to-obtain experimental information to, first, the number of carriers and, second, the mass tensor at a band edge, which can be measured in a cyclotron-resonance experiment. In a strongly correlated system, on the other hand, the momentum-distribution function receives contributions from the entire BZ, and the above-mentioned BZ averages may no longer be easy to determine experimentally .
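To make these weighted BZ averages concrete, the sketch below evaluates them for a toy model: a square-lattice band $`ϵ_{\stackrel{}{k}}=-2t(\mathrm{cos}k_xa+\mathrm{cos}k_ya)`$ filled up to a chemical potential $`\mu `$ at $`T=0`$, in units $`H=e=a=1`$. The band is an assumption for illustration only; it is not the infinite-dimensional band used later in the paper.

```python
import numpy as np

# BZ averages entering the sum rules (7) and (8), for a 2D tight-binding
# band at T = 0 (toy model; units H = e = a = 1; averages per lattice site).
t, mu, L = 1.0, -0.5, 400
k = np.linspace(-np.pi, np.pi, L, endpoint=False)
kx, ky = np.meshgrid(k, k)

eps = -2*t*(np.cos(kx) + np.cos(ky))
exx = 2*t*np.cos(kx)                  # d^2 eps / dkx^2
eyy = 2*t*np.cos(ky)                  # d^2 eps / dky^2
exy = np.zeros_like(eps)              # mixed derivative vanishes for this band
n_k = (eps < mu).astype(float)        # T = 0 momentum distribution (per spin)

N = L * L
chi0 = 2 * np.sum(exx * n_k) / N                  # spin factor 2
comm = 2 * np.sum((exx*eyy - exy**2) * n_k) / N   # det of the inverse mass tensor

print("chi0          =", chi0)
print("i<[Jx,Jy]>/He =", comm)
print("omega_H/H     =", comm / chi0)   # positive (electron-like) at low filling
```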
Next, we investigate the ac Hall constant. In Ref. , it has been decomposed into its infinite-frequency limit (9) and a memory-function contribution which can be represented in terms of a spectral function $`k(\omega )`$:
$$R_H(\omega )=R_H^{\infty }\left(1+\mathcal{P}\int _{-\infty }^{\infty }d\tilde{\omega }\,\frac{k(\tilde{\omega })\,\tilde{\omega }}{\tilde{\omega }-\omega }\right)+i\pi R_H^{\infty }k(\omega )\,\omega .$$
(10)
$`k(\omega )`$ was shown to be even and real. Therefore, the real and imaginary parts of $`R_H(\omega )`$ are even and odd, respectively. We also establish nontrivial sum rules for the ac Hall constant:
$`{\displaystyle \int _0^{\infty }}d\omega \left[\mathrm{Re}\,R_H(\omega )-R_H^{\infty }\right]=0,`$ (11)
$`{\displaystyle \int _{-\infty }^{\infty }}{\displaystyle \frac{d\omega }{\pi }}{\displaystyle \frac{\mathrm{Im}\,R_H(\omega )}{\omega }}`$ $`=`$ $`R_H-R_H^{\infty },`$ (12)
where $`R_H`$ is the dc Hall constant. Eq. (11) holds because the leading high-frequency behavior of $`R_H(z)-R_H^{\infty }`$ is $`1/z^2`$ . Eq. (12) is a Kramers-Kronig relation. The sum rules (11) and (12) are interesting because they relate the Hall constant at low frequencies to its infinite-frequency limit. The low-frequency regime is attainable in experiments , whereas the high-frequency limit is much easier to handle theoretically. The sum rule (11) implies that $`\mathrm{Re}\,R_H(\omega )`$ cannot pass monotonically from its dc value to its infinite-frequency limit.
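The mechanics of Eqs. (10)–(12) can be illustrated with a toy causal model: take $`R_H(z)-R_H^{\infty }=c/(z+i\gamma )^2`$, which is analytic in the upper half-plane, decays as $`1/z^2`$, and has an even real and odd imaginary part. The parameters below are arbitrary; this is a sketch of the sum-rule structure, not of the correlated-electron result discussed later.

```python
import numpy as np
from scipy.integrate import quad

# Toy causal model R_H(w) = R_H^inf + c/(w + i*gamma)^2; arbitrary values.
c, gamma, RHinf = 0.8, 1.3, -0.25

def RH(w):
    return RHinf + c / (w + 1j*gamma)**2

# Sum rule (11): Re R_H - R_H^inf integrates to zero on (0, inf).
s11, _ = quad(lambda w: RH(w).real - RHinf, 0, np.inf)

# Sum rule (12): Kramers-Kronig; Im R_H(w)/w has a finite limit at w = 0.
def integrand12(w):
    return RH(w).imag / (np.pi * w) if w != 0.0 else -2*c / (np.pi * gamma**3)

s12, _ = quad(integrand12, -np.inf, np.inf)

print(f"(11): {s11:+.2e}  (expect 0)")
print(f"(12): {s12:.4f}  vs R_H - R_H^inf = {RH(0.0).real - RHinf:.4f}")
```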
Finally, we quote a sum rule for the Hall angle $`t_H(\omega )\equiv \mathrm{tan}\theta _H(\omega )=\sigma _{xy}(\omega )/\sigma _{xx}(\omega )`$ that was derived in Ref. :
$$\int _{-\infty }^{\infty }\frac{d\omega }{\pi }\,\mathrm{Re}\,t_H(\omega )=\omega _H.$$
(13)
In contrast to the $`f`$-sum rule, Eq. (8), none of our sum rules involves a positive-definite integrand. Consequently, we expect them to be most useful in conjunction with some theoretical understanding of the problem at hand.
We now apply the above-mentioned sum rules to the doped Mott insulator, which we describe by the single-band Hubbard model with bare bandwidth $`2D`$ and on-site repulsion $`U`$. We are primarily interested in the physics close to half filling, $`\delta \equiv 1-n\ll 1`$, where $`n`$ denotes the average occupancy per lattice site. In the limit of infinite spatial dimensions, all vertex corrections of the conductivity tensor vanish , which implies
$`\sigma _{xx}(i\omega _m)`$ $`=`$ $`{\displaystyle \frac{2ie^2}{N\beta }}{\displaystyle \sum _{\stackrel{}{k},n}}(ϵ_\stackrel{}{k}^x)^2\,G_{\stackrel{}{k}|n}\,{\displaystyle \frac{G_{\stackrel{}{k}|n+m}-G_{\stackrel{}{k}|n}}{i\omega _m}},`$ (14)
$`\sigma _{xy}(i\omega _m)`$ $`=`$ $`{\displaystyle \frac{e^3H}{N\beta }}{\displaystyle \sum _{\stackrel{}{k},n}}\left|\begin{array}{cc}ϵ_\stackrel{}{k}^xϵ_\stackrel{}{k}^x& ϵ_\stackrel{}{k}^{xy}\\ ϵ_\stackrel{}{k}^yϵ_\stackrel{}{k}^x& ϵ_\stackrel{}{k}^{yy}\end{array}\right|G_{\stackrel{}{k}|n}^2\,{\displaystyle \frac{G_{\stackrel{}{k}|n+m}-G_{\stackrel{}{k}|n-m}}{i\omega _m}}.`$ (17)
In each equation, a spin factor 2 has been taken into account and $`\beta =1/T`$ is the inverse temperature. $`i\omega _n`$ and $`i\omega _m`$ are fermionic and bosonic Matsubara frequencies, respectively. The single-particle Green’s function is given by $`G_{\stackrel{}{k}|n}^{-1}=i\omega _n+\mu -ϵ_\stackrel{}{k}-\mathrm{\Sigma }(i\omega _n)`$, where the local self-energy $`\mathrm{\Sigma }(i\omega _n)`$ must be calculated by solving a single-impurity Anderson model supplemented by a self-consistency condition . Earlier work on the Hall effect in infinite dimensions was carried out in Refs. . We compute the ac conductivities (14) and (17) numerically, using the tight-binding band $`ϵ_\stackrel{}{k}=(D/\sqrt{2d})\sum _j\mathrm{cos}(k_ja)`$ in $`d`$ dimensions, where $`a`$ is the lattice spacing. We use the iterated perturbation theory (IPT), which can be shown to obey our sum rules exactly.
Our main focus is on the frequency regime well below the Mott-Hubbard gap $`U`$. The relevant part of the single-particle spectrum then consists of two distinct features: an incoherent lower Hubbard band (LHB) and, provided the temperature is low enough, a quasiparticle resonance (QPR) at the Fermi level. As the doping level is increased, the QPR merges with the LHB from above.
Accordingly, there are two widely different energy scales close to the Mott transition: a coherence temperature $`T_{\text{coh}}`$ below which Fermi-liquid properties begin to be observed, and $`D`$, which sets the scale for incoherent excitations. The width of the QPR defines a second low-energy scale $`T^{*}`$. The ac conductivities (14) and (17) reflect the possible transitions within the single-particle spectrum. For $`T<T_{\text{coh}}`$, this means that the integrands of all sum rules roughly decompose into two features: First, a narrow one at zero frequency which is due to transitions within the QPR. Consequently, its width scales at most with $`T^{*}`$. We shall see below that this feature can be resolved in the Fermi-liquid regime, so its width does not exceed the smaller scale $`T_{\text{coh}}`$. Second, a feature around a frequency $`\omega _1`$ that measures the distance between the maxima of the LHB and the QPR, $`\omega _1\sim D`$. At high temperatures, on the other hand, the integrands are solely determined by transitions from occupied to unoccupied states within the LHB, and $`D`$ is the only energy scale.
In the Fermi-liquid regime, $`T,\omega <T_{\text{coh}}`$, the conductivities can be cast into the Drude forms $`\sigma _{xx}(\omega )=\frac{\omega _p^{*2}/4\pi }{1/\tau ^{*}-i\omega }`$ and $`\sigma _{xy}(\omega )=\frac{\omega _c^{*}\,\omega _p^{*2}/4\pi }{(1/\tau ^{*}-i\omega )^2}`$ with renormalized parameters. Here, the renormalized plasma frequency behaves as $`\omega _p^{*2}\propto D\delta `$ ; $`1/\tau ^{*}\propto \delta \,\text{Im}\mathrm{\Sigma }_R(\omega =0,T)`$, where $`\mathrm{\Sigma }_R`$ is the retarded self-energy in the absence of disorder; and $`\omega _c^{*}\propto \omega _c\delta `$, where $`\omega _c`$ is the cyclotron frequency of noninteracting electrons on the same lattice. The renormalized plasma and cyclotron frequencies must not be confused with the bare ones defined by the sum rules (8) and (13), respectively.
Expanding Eqs. (14) and (17) to leading order in $`1/T`$ as explained in Ref. shows that both conductivities are suppressed by a factor $`\delta `$ close to the Mott transition. Approximate expressions for the dissipative parts of the conductivities, which capture the doping and temperature dependences in the region $`T,\omega >T^{*}`$, $`\omega \lesssim 2D`$, are given by $`\mathrm{Re}\,\sigma _{xx}(\omega )\propto e^2\delta \frac{1-\mathrm{exp}(-|\omega |/T)}{|\omega |}`$ and $`\mathrm{Im}\,\sigma _{xy}(\omega )\propto e^3H\delta \,\text{sgn}(\omega )[1-\mathrm{exp}(-|\omega |/T)]/D`$. The last relation holds only for a generic band that does not have the bipartite-lattice property discussed in Ref. .
We now discuss the qualitative forms of the functions governing the sum rules (6), (7), (11), (12), and (13) more specifically. In all plots, we have chosen $`\delta =0.1`$ and $`U=4`$.
Real part of the Hall conductivity.–Its high-frequency behavior is given by $`\mathrm{Re}\,\sigma _{xy}(\omega )\simeq -e^2\chi ^0\omega _H/\omega ^2`$ and therefore has the sign opposite to that of $`R_H^{\infty }`$ in Eq. (9). On the other hand, its dc value has the same sign as $`R_H`$. Close to half filling, and at intermediate temperatures and above, $`R_H`$ and $`R_H^{\infty }`$ have the same sign . Since in this parameter regime the only energy scale is $`D`$, $`\mathrm{Re}\,\sigma _{xy}(\omega )`$ changes its sign once, at a scale of order $`D`$, to satisfy sum rule (6). For $`T<T_{\text{coh}}`$, $`R_H^{\infty }`$ remains hole-like while $`R_H`$ becomes electron-like . Then, the sum rule (6) requires at least one further sign change at a scale $`\omega \sim T_{\text{coh}}`$. This prediction is corroborated by our numerical investigation: Fig. 1 displays the frequency-dependent Hall conductivity for $`T=0.015D`$.
Imaginary part of the Hall conductivity.–Fig. 2 displays the integrand of the sum rule (7), which is proportional to the spectral function (4). We have normalized this function to 1 to facilitate the comparison between curves belonging to different temperatures. For $`T>T^{*}`$, this function hardly depends on temperature. Its “M-shaped” form is consistent with Eq. (5) and the fact that $`D`$ is the only energy scale. As the temperature is decreased to below $`T^{*}`$, the spectral weight is redistributed to comply with the emergence of the two energy scales $`T_{\text{coh}}`$ and $`\omega _1`$, the Drude form in the Fermi-liquid regime, and the fact that the overall weight is positive.
Hall angle.–The real part of the Hall angle defined before Eq. (13) closely resembles that of the previously considered function, except that it is not subject to a condition like Eq. (5).
Hall constant.–Close to half filling and for $`T>T^{*}`$, $`R_H`$ is greater than $`R_H^{\infty }`$ . Then, $`\mathrm{Re}\,R_H(\omega )`$ satisfies the sum rule (11) as follows: Starting from its dc value, $`\mathrm{Re}\,R_H(\omega )`$ first decreases monotonically as a function of frequency, drops below its infinite-frequency level at a frequency of order $`D`$, and finally rises to approach $`R_H^{\infty }`$ from below. In the opposite limit of very low temperatures, we show a curve for $`T=0.015D`$ in the main panel of Fig. 3, along with a better resolution of its low-frequency part in the left inset. $`R_H^{\infty }`$ (dotted line) is seen to be positive, while $`R_H<0`$ (not discernible), in agreement with Ref. . In addition, $`R_H(\omega )`$ hardly depends on frequency in the Fermi-liquid regime, as expected from the Drude parametrizations of $`\sigma _{xx}(\omega )`$ and $`\sigma _{xy}(\omega )`$. To counterbalance the drop of the dc value to below $`R_H^{\infty }`$, a peak-like structure has piled up above the $`R_H^{\infty }`$ level at the other energy scale, $`\omega \sim D`$ . The structure around $`\omega \approx 3D`$ arises from the upper Hubbard band. At very high frequencies, $`\mathrm{Re}\,R_H(\omega )`$ approaches its asymptotic value according to a $`1/\omega ^2`$ law. In the right inset of Fig. 3, we display a curve at the cross-over temperature $`T=0.15D`$. As in the high-temperature regime, $`R_H>R_H^{\infty }>0`$. But the sign change of $`\mathrm{Re}\,R_H(\omega )-R_H^{\infty }`$ is already shifted to higher frequencies, signalling the emergence of the peak at $`\omega \sim D`$ as the temperature is lowered.
Finally, Fig. 4 displays the function $`k(\omega )`$, which is proportional to the integrand of the sum rule (12) and which has not been normalized to 1. As the temperature is decreased from well above (not shown in Fig. 4) to well below $`T^{*}`$, a single peak of width $`D`$ decomposes into a narrow one at $`\omega =0`$, of width smaller than $`T_{\text{coh}}`$, and a feature around $`\omega \approx \omega _1`$ which involves a sign change. In the normal state of cuprates such as La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub>, $`k(\omega )`$ is resonance-like and has a width given by the anomalous relaxation rate $`1/\tau _H`$, which exhibits a $`T^2`$ law . Instead, we find a width of order $`D`$ for temperatures above $`T^{*}`$, where $`R_H>0`$. Similarly, the dynamical mean-field theory predicts that Kohler’s rule is replaced by $`\mathrm{\Delta }\rho /\rho \propto (\omega _c/D)^2`$ in the high-temperature regime , whereas experiments on cuprates are consistent with $`\mathrm{\Delta }\rho /\rho \propto (\omega _c\tau _H)^2`$ as suggested by Terasaki et al. . Here, $`\mathrm{\Delta }\rho `$ is the magnetoresistance.
In summary, we have derived sum rules for the real and imaginary parts of the Hall conductivity and Hall constant. We have applied them, along with another one for the Hall angle, to the doped Mott insulator.
This work was supported by NSF DMR 95-29138. E.L. is funded by the Deutsche Forschungsgemeinschaft.
# FORMATION OF QUASAR NUCLEI IN THE HEART OF ULTRALUMINOUS INFRARED GALAXIES
## 1. INTRODUCTION
Since the discovery of ultraluminous infrared galaxies (Soifer et al. 1984; Wright et al. 1984; see Sanders & Mirabel 1996 for a review), these galaxies (hereafter ULIGs) have often been considered as possible precursors of optically bright quasars (Sanders et al. 1988a, 1988b; Norman & Scoville 1988). This argument is based on the following observational properties of ULIGs. (a) Their bolometric luminosities amount to $`10^{12}L_{\odot }`$, comparable to those of quasars (Sanders et al. 1988a). (b) Their luminosity function is similar to that of quasars in the local universe (Soifer et al. 1987; Sanders et al. 1988b). (c) All the ULIGs are galaxy mergers or heavily interacting galaxies (Sanders et al. 1988a; Lawrence et al. 1989; Leech et al. 1994; Clements et al. 1996). Morphological evidence for galaxy mergers has also been obtained for a number of optically selected quasars in the local universe although the majority are giant elliptical or giant elliptical-like galaxies (McLeod & Rieke 1994a, 1994b; Disney et al. 1995; Bahcall et al. 1997; McLure et al. 1998). However, infrared-selected quasars tend to reside in morphologically disturbed hosts (e.g., Hutchings & Neff 1988; Boyce et al. 1996; Baker & Clements 1997). If most giant elliptical galaxies were formed by major mergers between/among disk galaxies (Toomre 1977; Barnes 1989; Ebisuzaki, Makino, & Okumura 1991), it is possible that the majority of quasar hosts are major merger remnants. (d) ULIGs in later merger phases tend to have active galactic nuclei (AGN) on the average (Sanders et al. 1988b; Majewski et al. 1993; Borne et al. 1997). Although all the above properties suggest an evolutionary link from ULIGs to optically bright quasars, its plausibility is still in question.
It is generally considered that the quasars are powered by the central engine of active galactic nuclei (AGN); i.e., disk-gas accretion onto a supermassive black hole (SMBH), and masses of SMBHs in quasar nuclei are estimated to be $`M_{\bullet }\gtrsim 10^8M_{\odot }`$ (e.g., Rees 1984; Blandford 1990). Therefore, if an evolutionary link exists between ULIGs and quasars, we have to explain either the presence or the formation of SMBHs with masses higher than $`10^8M_{\odot }`$ in the heart of ULIGs. This issue was already discussed by Norman & Scoville (1988). They investigated the fate of a coeval, massive-star cluster of $`4\times 10^9M_{\odot }`$ within the central 10 pc region (see also Weedman 1983) and found that a SMBH can be formed in the heart of ULIGs. However, recent high-resolution optical and near-infrared images of a number of ULIGs taken with the Hubble Space Telescope have shown that the intense star-forming regions are scattered over the circumnuclear regions up to $`\sim `$ a few kpc from the nucleus (Shaya et al. 1994; Scoville et al. 1998; Surace et al. 1998). Therefore, it still seems uncertain whether or not a SMBH with $`\sim 10^8M_{\odot }`$ can be made during the course of merger evolution in ULIGs. In this Letter, we investigate this issue taking the actual observational properties of ULIGs into account.
## 2. FORMATION OF QUASAR NUCLEI IN THE HEART OF ULIGs
Morphological features of ULIGs suggest that most ULIGs come from mergers between or among galaxies (Sanders et al. 1988a; Taniguchi & Shioya 1998). Another important property of the ULIGs is that they are very gas rich; e.g., $`M_{\mathrm{H}_2}\sim 10^{10}M_{\odot }`$ (Sanders et al. 1988a; Scoville et al. 1991; Downes & Solomon 1998). Therefore, the progenitor galaxies of ULIGs are gas-rich systems such as giant spiral galaxies. Since intense starbursts are observed in many ULIGs, the most probable formation mechanism of SMBHs is the collapse of compact remnants of massive stars (Weedman 1983; Norman & Scoville 1988). Another important issue is whether or not the progenitor galaxies originally had SMBHs in their nuclei. Although the masses of SMBHs in nearby spiral galaxies (i.e., progenitor galaxies of ULIGs) are of the order of $`10^{6-7}M_{\odot }`$ at most (e.g., Kormendy et al. 1998 and references therein), these seed SMBHs could grow by gas accretion in circumnuclear dense-gas regions during the course of the merger. Therefore, we consider two cases: (1) at least one progenitor had a SMBH with $`M_{\bullet }\sim 10^{6-7}M_{\odot }`$, and (2) no progenitor had a SMBH.
### 2.1. Mergers between/among Nucleated Galaxies
If a progenitor galaxy had a SMBH in its nucleus, this seed SMBH could grow in mass during the course of the merger because the central region of a ULIG is very gas-rich. Such gas accretion may be efficient within the central 1 kpc region because the gas density there is quite high; i.e., $`10^{3-4}`$ cm<sup>-3</sup> (Scoville, Yun, & Bryant 1997; Downes & Solomon 1998). We consider here classical Bondi-type (Bondi 1952) gas accretion onto the SMBH. The gas accretion rate is given by
$$\dot{M}=2\pi m_\mathrm{H}n_\mathrm{H}r_\mathrm{a}^2v_\mathrm{e},$$
(1)
where $`m_\mathrm{H}`$, $`n_\mathrm{H}`$, $`r_\mathrm{a}`$, and $`v_\mathrm{e}`$ are the mass of a hydrogen atom, the number density of hydrogen atoms, the accretion radius defined as $`r_\mathrm{a}=GM_{\bullet }/v_\mathrm{e}^2`$ ($`M_{\bullet }`$ is the mass of the seed SMBH), and the effective relative velocity between the seed SMBH and the ambient gas, respectively. A typical dynamical mass of the central 1 kpc region is of the order of $`10^9M_{\odot }`$. Suppose that a SMBH with a mass of $`10^6M_{\odot }`$ is sinking toward the dynamical center of the merger. Its orbital velocity is $`v_{\mathrm{orb}}\simeq 67M_{\mathrm{nuc},9}^{1/2}r_1^{-1/2}`$ km s<sup>-1</sup>, where $`M_{\mathrm{nuc},9}`$ is the dynamical mass of the central 1 kpc region of the merger in units of $`10^9M_{\odot }`$ and $`r_1`$ is the radius in units of 1 kpc. The crossing time of the SMBH is $`T_{\mathrm{cross}}\sim 10^7`$ years. Therefore, the merging time scale is estimated to be $`T_{\mathrm{merger}}\sim 10T_{\mathrm{cross}}\sim 10^8`$ years (e.g., Barnes 1989). Adopting an average gas density in the circumnuclear regions of ULIGs of $`n_\mathrm{H}\sim 10^3`$ cm<sup>-3</sup>, we obtain the accreted gas mass over the course of the merger;
$$M_{\mathrm{acc}}=\dot{M}T_{\mathrm{merger}}\simeq 3\times 10^5\,M_{\bullet ,6}^2\,n_{\mathrm{H},3}\,v_{\mathrm{e},100}^{-3}\,T_{\mathrm{merger},8}\ M_{\odot },$$
(2)
where $`M_{\bullet ,6}`$ is the mass of the SMBH in units of $`10^6M_{\odot }`$, $`n_{\mathrm{H},3}`$ is the average gas density in units of $`10^3`$ cm<sup>-3</sup>, $`v_{\mathrm{e},100}`$ is the orbital velocity with respect to the ambient gas in units of 100 km s<sup>-1</sup>, and $`T_{\mathrm{merger},8}`$ is the merger time scale in units of $`10^8`$ years. This estimate implies that the seed SMBH cannot grow up to $`M_{\bullet }\sim 10^8M_{\odot }`$ if it is less massive than $`\sim 10^7M_{\odot }`$. Since the ULIGs come from mergers between two galaxies or among several galaxies, their progenitor galaxies should be very giant spiral galaxies in order to pile up molecular gas up to $`10^{10}M_{\odot }`$ in their central regions (e.g., Sanders et al. 1988a). In fact, nearby spiral galaxies such as M31 and NGC 4258 have SMBHs with a few $`\times 10^7M_{\odot }`$ \[e.g., Kormendy et al. (1998) and references therein; see also Miyoshi et al. (1995) for the case of NGC 4258\]. Therefore, it seems quite likely that the seed SMBH may be more massive than that adopted in the above estimate. If the seed SMBH is more massive than a few $`\times 10^7M_{\odot }`$, it could grow up to $`\sim 10^8M_{\odot }`$. Although we have no knowledge of the seed SMBHs in the progenitors, the estimates given here suggest that gas accretion from the dense gas clouds onto a seed SMBH can lead to the formation of a quasar nucleus in the heart of ULIGs.
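As a rough numerical sketch of Eqs. (1) and (2) (constants in cgs units; the default arguments reproduce the fiducial values quoted above):

```python
import numpy as np

G, m_H, Msun, yr = 6.674e-8, 1.673e-24, 1.989e33, 3.156e7  # cgs units

def accreted_mass(M_seed=1e6, n_H=1e3, v_e=100e5, T_merger=1e8):
    """Bondi-type accreted mass [Msun]: seed mass M_seed [Msun], ambient
    density n_H [cm^-3], relative velocity v_e [cm/s], time T_merger [yr]."""
    r_a = G * (M_seed * Msun) / v_e**2            # accretion radius
    Mdot = 2 * np.pi * m_H * n_H * r_a**2 * v_e   # Eq. (1)
    return Mdot * T_merger * yr / Msun            # Eq. (2)

print(f"{accreted_mass():.1e}")      # ~3e5 Msun for a 1e6 Msun seed
print(f"{accreted_mass(3e7):.1e}")   # a few x 10^7 Msun seed can exceed 1e8 Msun
```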
### 2.2. Mergers between/among Non-nucleated Galaxies
Next we consider the case where there is no seed SMBH in the progenitor galaxies of ULIGs. In this case, a possible way to form quasar nuclei in ULIGs is to pile up circumnuclear star clusters of compact remnants of massive stars: black holes and neutron stars, each of which has a few $`M_{\odot }`$ at most. This issue was first discussed by Weedman (1983). Using both the starburst models of Gehrz, Sramek, & Weedman (1983) and optical spectroscopic observations (i.e., the H$`\alpha `$ luminosity), he suggested that starburst galaxies with $`L`$(H$`\alpha `$) $`\sim 10^{42}`$ erg s<sup>-1</sup> could produce compact starburst remnants up to $`10^9M_{\odot }`$. However, since the H$`\alpha `$ luminosity of starburst galaxies is dominated by the most massive stars in the starburst region, it seems hard to estimate the total mass of compact remnants solely from $`L`$(H$`\alpha `$). Thus, the estimate by Weedman (1983) may provide a rough upper limit to the compact remnant mass in starburst galaxies. Norman & Scoville (1988) also discussed the formation of quasar nuclei in ULIGs. However, their assumption (a coeval, massive-star cluster of $`4\times 10^9M_{\odot }`$ within the central 10 pc region) turns out to be unlikely because recent high-resolution optical and near-infrared imaging of the ULIGs with the Hubble Space Telescope has shown that blue star clusters are distributed in the circumnuclear regions up to $`r\sim `$ a few kpc (Shaya et al. 1994; Surace et al. 1998; Scoville et al. 1998). Although the central star cluster associated with the western nucleus of Arp 220 is very luminous and its mass is estimated to be $`\sim 10^8M_{\odot }`$, typical masses of the circumnuclear star clusters are of the order of $`M_{\mathrm{cl}}\sim 10^7M_{\odot }`$ at most (Shaya et al. 1994; Scoville et al. 1998; Shioya, Taniguchi, & Trentham 1998; see also Taniguchi, Trentham, & Shioya 1998). Although some star clusters may be hidden by heavy extinction (Scoville et al. 1991; Genzel et al. 1998), we first investigate whether or not the star clusters found in the optical and in the near infrared (NIR) (Shaya et al. 1994; Scoville et al. 1998) can be responsible for the formation of a SMBH with $`M_{\bullet }\sim 10^8M_{\odot }`$.
Shaya et al. (1994) discussed the fate of the circumnuclear star clusters: since these clusters lose their kinetic energy to individual stars during random encounters (i.e., dynamical friction), they will sink toward the merger center within $`\sim 10^9`$ years. Here, from the viewpoint of the dynamical relaxation of the star clusters, we examine whether or not compact remnants formed in the circumnuclear star-forming clusters can make a SMBH with $`M_{\bullet }\sim 10^8M_{\odot }`$. For simplicity, we consider a case where ten circumnuclear star-forming clusters, each with a total stellar mass of $`10^7M_{\odot }`$, are distributed within $`r\sim `$ a few kpc. It is known that stars with $`m_{\ast }\gtrsim 8M_{\odot }`$ produce compact remnants. We estimate how many such massive stars are formed in each cluster.
We assume that stars are formed with a Salpeter-like initial mass function (IMF);
$$\varphi (m)=\beta m^{-\mu }$$
(3)
where $`m`$ is the stellar mass in units of $`M_{}`$ and $`\beta `$ is a normalization constant determined by the relation
$$\int _{m_l}^{m_u}\varphi (m)\,dm=1,$$
(4)
which leads to
$$\beta =\frac{(\mu -1)\,m_l^{\mu -1}}{1-(m_l/m_u)^{\mu -1}}.$$
(5)
The number of stars per unit total stellar mass with masses in the range $`m_1\le m_{\ast }\le m_2`$ is estimated as
$$N(m_1\le m_{\ast }\le m_2)=\int _{m_1}^{m_2}\frac{\varphi (m)}{m}\,dm.$$
(6)
Using $`\beta `$ in equation (5), we re-write equation (6) as
$$N(m_1\le m_{\ast }\le m_2)=\left(\frac{\beta }{\mu }\right)\left(m_1^{-\mu }-m_2^{-\mu }\right)\ \mathrm{stars}\ M_{\odot }^{-1}.$$
(7)
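A short numerical check of Eqs. (5)–(7), using the top-heavy parameters adopted in the next paragraph ($`\mu =0.35`$, $`m_l=1M_{\odot }`$, $`m_u=30M_{\odot }`$) and a cluster stellar mass of $`10^7M_{\odot }`$:

```python
# IMF bookkeeping of Eqs. (5)-(7); masses in solar units.
mu, m_l, m_u = 0.35, 1.0, 30.0

beta = (mu - 1) * m_l**(mu - 1) / (1 - (m_l/m_u)**(mu - 1))   # Eq. (5)

def n_stars_per_msun(m1, m2):
    """Number of stars per Msun of cluster mass with m1 <= m <= m2, Eq. (7)."""
    return (beta / mu) * (m1**(-mu) - m2**(-mu))

M_cl = 1e7                                     # cluster stellar mass [Msun]
N_massive = n_stars_per_msun(8.0, m_u) * M_cl
print(f"{N_massive:.1e} stars with m >= 8 Msun per cluster")   # ~4e5
```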
There are three free parameters: the power-law index ($`\mu `$) and the upper and lower mass limits of the IMF ($`m_u`$ and $`m_l`$). Since stars with $`m_{\ast }\gtrsim 8M_{\odot }`$ produce compact remnants, we set $`m_1=8M_{\odot }`$ and $`m_2=m_u>m_1`$. In Table 1, we give results for some possible combinations of the parameters. Although $`\mu `$ = 1.35 is the canonical value for stars in the solar neighborhood, there are some lines of evidence that massive stars are preferentially formed in such violent star-forming regions; i.e., a top-heavy initial mass function with $`m_l\sim 1M_{\odot }`$ (e.g., Goldader et al. 1997 and references therein). Therefore, we adopt the case $`\mu =0.35`$, $`m_u=30M_{\odot }`$, and $`m_l=1M_{\odot }`$. In this case, there are $`\sim 4\times 10^5`$ massive stars with $`m_{\ast }\ge 8M_{\odot }`$ in each cluster. Each compact remnant has a mass from a few $`M_{\odot }`$ (for neutron stars) to several $`M_{\odot }`$ (for black holes). Therefore, the total mass of compact remnants in each cluster is $`\sim 10^6M_{\odot }`$. The ten clusters will relax dynamically on a time scale of
$$T_{\mathrm{dyn}}\sim n_{\mathrm{cl}}^{1/2}G^{-1/2}r_{\mathrm{cl}}^{3/2}M_{\mathrm{cl}}^{-1/2}\simeq 9.4\times 10^8\,r_{\mathrm{cl},1}^{3/2}M_{\mathrm{cl},7}^{-1/2}\ (\mathrm{years}),$$
(8)
where $`r_{\mathrm{cl},1}`$ is the typical size of the circumnuclear region containing the ten star clusters in units of 1 kpc and $`M_{\mathrm{cl},7}`$ is the mass of the clusters in units of $`10^7M_{\odot }`$. Therefore, we expect that a SMBH with a mass of $`\sim 10^7M_{\odot }`$ will be made $`\sim 10^9`$ years after the onset of circumnuclear starbursts in ULIGs. This mass is smaller than that necessary for quasar nuclei (i.e., $`\sim 10^8M_{\odot }`$). Each cluster of compact remnants would be able to gain mass by gas accretion, as discussed in section 2.1. However, the accreted mass is estimated to be $`\sim 3\times 10^7M_{\odot }`$ for the ten clusters in total, still less massive than $`10^8M_{\odot }`$.
Here it should again be remembered that not all star clusters in the central region of ULIGs can be observed in the optical and the NIR, because the inferred extinction toward the nuclei of ULIGs is very large; e.g., $`\sim `$ 50 mag (Genzel et al. 1998; Scoville et al. 1991). Therefore, it is quite likely that the majority of nuclear star clusters in ULIGs are hidden by a large amount of gas and dust. Recently, Shioya et al. (1998) analyzed the optical-NIR spectral energy distributions of nuclear star clusters in Arp 220. They found that these clusters are systematically more massive (i.e., $`\sim 10^8M_{\odot }`$) than the circumnuclear ones but can account for only about one-seventh of the total bolometric luminosity of Arp 220. Although the OH megamaser sources found in the central region of Arp 220 may provide possible evidence for a hidden AGN (Diamond et al. 1989; Lonsdale et al. 1998), recent mid-infrared spectroscopy of a sample of ULIGs has shown that the majority of ULIGs such as Arp 220 are powered by nuclear starbursts (Genzel et al. 1998; Lutz et al. 1998). Therefore, it is strongly suggested that some hidden star clusters are responsible for the remaining ($`\sim 6/7`$) bolometric luminosity of Arp 220. Since compact remnants produced in the hidden clusters will also join in forming a SMBH, it is expected that a SMBH with $`M_{\bullet }\sim 10^8M_{\odot }`$ will be formed in the heart of ULIGs.
## 3. DISCUSSION
We have shown that a SMBH with a mass of $`\sim 10^8M_{\odot }`$ can be made in a ULIG during the course of a merger between/among gas-rich galaxies regardless of the presence of a seed SMBH in the progenitor galaxies; i.e., (a) if one progenitor galaxy had a seed SMBH with a mass of $`\sim 10^7M_{\odot }`$, this seed SMBH can grow up to $`\sim 10^8M_{\odot }`$ because of efficient Bondi-type gas accretion during the course of the merger, given a gas density in the circumnuclear region of $`n_\mathrm{H}\sim 10^3`$ cm<sup>-3</sup>; and (b) even if there was no progenitor galaxy with a seed SMBH, the star clusters of compact remnants made in the circumnuclear starbursts can merge into the merger center within a dynamical time scale of $`\sim 10^9`$ years to form a SMBH with $`\sim 10^8M_{\odot }`$. Note, however, that the contribution of compact remnants supplied by hidden star clusters is necessary to explain the formation of such SMBHs. In conclusion, the ultraluminous infrared galaxies observed in the local universe can make a SMBH in their centers during the course of a merger, either by gas accretion onto a seed SMBH or by the dynamical relaxation of star clusters of compact remnants made in the violent circumnuclear starbursts.
The presence of a SMBH with $`M_{\bullet }\gtrsim 10^8M_{\odot }`$ is a crucially important necessary condition for the occurrence of quasar activity. In the local universe, the masses of SMBHs in giant spiral galaxies (e.g., our Milky Way Galaxy, M31, NGC 1068, and NGC 4258) are as low as $`M_{\bullet }\sim 10^{6-7}M_{\odot }`$ (Kormendy et al. 1998 and references therein). Therefore, it is suggested that isolated, typical spiral galaxies cannot harbor quasar nuclei. However, mergers between/among gas-rich galaxies can cause efficient gas fueling toward the nuclear regions of the merging systems and then trigger intense starbursts, either as a result of the piling up of a large amount of gas (Mihos & Hernquist 1994) or by the dynamical effect of SMBH binaries (Taniguchi & Wada 1996; Taniguchi & Shioya 1998). Furthermore, as demonstrated in the present work, these mergers provide a possible way to form SMBHs with $`M_{\bullet }\gtrsim 10^8M_{\odot }`$. In this respect, it is quite likely that ULIGs will finally evolve into optically luminous quasars, as suggested by Sanders et al. (1988a, 1988b).
Finally, it is worth noting that some elliptical galaxies could have formed by galaxy mergers (e.g., Toomre 1977; Barnes 1989; Ebisuzaki et al. 1991). We also note that some elliptical galaxies (e.g., M87, NGC 3115, NGC 3377, NGC 4261, and so on) have SMBHs with $`M_{\bullet }\gtrsim 10^8M_{\odot }`$ (Kormendy et al. 1998 and references therein). Indeed, investigating the physical conditions of ULIGs (i.e., mass density and velocity dispersion), Kormendy & Sanders (1992) found evidence that ULIGs are elliptical galaxies forming by merger-induced dissipative collapse (see also Wright et al. 1990; Baker & Clements 1997). Therefore, we suggest that almost all SMBHs with $`M_{\bullet }\gtrsim 10^8M_{\odot }`$ in the local universe were made by galaxy mergers.
We would like to thank an anonymous referee for his/her useful comments. This work was supported in part by the Ministry of Education, Science, Sports and Culture in Japan under Grant Nos. 07055044, 10044052, and 10304013.
# Stellar Population of Ellipticals in Different Environments: Near-infrared Spectroscopic Observations
P. A. James<sup>1</sup> and B. Mobasher<sup>2</sup>
<sup>1</sup> Astrophysics Research Institute, Liverpool John Moores University,
Twelve Quays House, Egerton Wharf, Birkenhead L41 1LD.
<sup>2</sup> Astrophysics Group, Blackett Laboratory, Imperial College,
Prince Consort Road, London SW7 2BZ.
Accepted for publication in the Monthly Notices of the Royal Astronomical Society
ABSTRACT
Near-infrared spectra of 50 elliptical galaxies in the Pisces, A2199 & A2634 clusters, and in the general field, have been obtained. The strength of the CO (2.3 $`\mu m`$) absorption feature in these galaxies is used to explore the presence of an intermediate-age population (e.g. Asymptotic Giant Branch stars) in ellipticals in different environments. We find that the strongest evidence for such a population comes from ellipticals in groups of a few members, which we interpret as the result of recent minor mergers of these galaxies with later-type galaxies. Field galaxies from very isolated environments, on the other hand, show no evidence for young or intermediate-age stars as revealed by H$`\beta `$ and CO absorptions, and appear to form a very uniform, old population with very little scatter in metallicity and star formation history.
Key words: galaxies: clusters - galaxies: elliptical - galaxies: fundamental parameters - galaxies: stellar content - infrared: galaxies.
1 INTRODUCTION
Elliptical galaxies are conventionally assumed to have stellar populations dominated by old stars, formed in a single burst some 12–16 Gyr ago. However, the degree to which this assumption is violated in observed systems is very uncertain. Indeed, some of the studies regarded as underpinning this view express distinct reservations about the accuracy of this picture. For example, Tinsley & Gunn (1976) point out that their modelling of optical and near-infrared colour and line indices of ellipticals cannot rule out the existence of ‘a considerable population of stars born after the rapid initial burst’ (see also Bruzual 1983). However, more recent studies show that less than 10 per cent of the stars in present-day ellipticals are likely to have been formed in the last 5 Gyr, imposing strong constraints on the time and duration of the last episode of star formation in ellipticals (Bower, Lucey & Ellis 1992). Moreover, using UV-optical colours of ellipticals in clusters at $`z\sim 0.5`$, Ellis et al. (1997) confirm that star formation in most cluster ellipticals was essentially completed by $`z\sim 3`$.
However, one outstanding question is whether field ellipticals, which are currently isolated or surrounded by at most a small number of neighbours, have the same star formation history as their cluster counterparts. Indeed, several observational studies have found strong evidence that they do not, with the star formation extending to significantly more recent times in the case of field ellipticals. For example, Larson, Tinsley & Caldwell (1980) found that bright field ellipticals are, on average, bluer than cluster ellipticals of the same luminosity, implying more recent star formation in the former. Furthermore, O’Connell (1980) showed that major star formation episodes in the nearby blue elliptical M 32 continued until $`\sim `$5 Gyr ago while, using optical spectroscopy, Bica & Alloin (1987) concluded that field ellipticals and lenticulars contain an intermediate-age component, not present in cluster ellipticals. These results were further confirmed by Bower et al. (1990) and Rose et al. (1994) who used a range of optical spectral indicators to infer that ‘a substantial intermediate-age population is present in the early-type galaxies in low-density environments, a population that is considerably reduced or altogether lacking in the early-type galaxies in dense clusters’. Similarly, Schweizer & Seitzer (1992) found that some field ellipticals have blue UV-optical colours and strong H$`\beta `$ absorption lines, indicating star formation within the last 3–5 Gyr. Also, the range in H$`\beta `$ strength observed in field ellipticals was interpreted by Gonzalez (1993) as evidence for an inhomogeneous stellar population. Finally, Kauffmann (1996) showed via simulations of the star formation and merging history of galaxies that star formation can naturally be expected to continue to more recent epochs in low-density environments. An implication of these results, explored by de Carvalho & Djorgovski (1992) and Guzmán & Lucey (1993), is that the younger stellar populations in ellipticals in low-density environments result in an increase in scatter about, and offsets from, the Fundamental Plane of elliptical galaxies (Dressler et al. 1987, Djorgovski & Davis 1987). This has strong implications regarding the formation of ellipticals and the determination of galaxy distances using the Fundamental Plane.
However, there has recently been an important dissenting opinion, in the work of Silva & Bothun (1998), who study the near-IR colours of a sample of field elliptical galaxies with signs of recent disturbance (i.e. possible post-mergers). They concluded that $`<`$10–15 per cent of the stellar mass in these systems is in the form of intermediate-age stars, with ages 1–3 Gyr. The fraction of light contributed by such a population can be somewhat larger, but is still $`<`$20–30 per cent. This is a surprising result, because this sample contains just those field galaxies which would be expected to show the strongest signatures of recent star formation.
To address these questions, we carried out a spectroscopic study of elliptical galaxies in different environments, using near-infrared measurements of the 2.3 $`\mu `$m CO 2–0 photospheric absorption feature to determine the contribution from the intermediate-age stellar population to the light of these galaxies (Mobasher & James 1996; henceforth paper I). We found marginally-significant evidence for deeper CO absorptions, and hence larger intermediate-age populations, in the field galaxies. However, the small sample size (9 field and 12 cluster galaxies) limits the statistical significance of these results. The present paper extends the sample to 50 ellipticals in total (20 field and 30 cluster) and looks in more detail at optimal techniques for extracting information on stellar populations from CO measurements. The near-IR spectral data presented here are for ellipticals in the Abell 2199 and Abell 2634 clusters, and for field galaxies from Faber et al. (1989).
Recent observations and data reduction are outlined in section 2. Section 3 presents results regarding the correlation between star formation history and environment. Section 4 explores the dependence of CO index on the physical parameters in galaxies and discusses some of the relevant peculiarities of individual galaxies. Section 5 contains a comparison of these results with previous studies, and section 6 summarises the main conclusions.
A Hubble constant of 75 km s<sup>-1</sup> Mpc<sup>-1</sup> is assumed in this paper.
2 OBSERVATIONS AND DATA REDUCTION
The observations presented here were carried out using the United Kingdom Infrared Telescope (UKIRT) on the 4 nights 1996 August 2–5. The instrument used was the long-slit near-IR spectrometer CGS4, with the 150 line mm<sup>-1</sup> grating and the short-focal-length (150 mm) camera. The 2-pixel-wide slit was chosen, corresponding to a projected width on the sky of 2.4 arcsec. The wavelength range in this configuration in first order was 0.33 $`\mu `$m, centred on the start of the CO absorption band at $`\sim `$2.33 $`\mu `$m for galaxies with recession velocities of a few thousand km s<sup>-1</sup> (the present sample has redshifts from 1500–12000 km s<sup>-1</sup>). The effective resolution, including the degradation caused by the wide slit, was about 900, and the array was moved by 1 pixel between integrations to enable bad pixel replacement in the final spectra.
For each observation, the galaxy was centred in the slit by maximising the IR signal, using an automatic peak-up facility. Total integration times were between 40 minutes and an hour per galaxy, depending on their central surface brightness. During this time, the galaxy was slid up and down the slit at one minute intervals by 22 arcsec, giving two offset spectra which were subtracted to remove most of the sky emission. Stars of spectral types A0–A6, suitable for monitoring telluric absorption, were observed in the same way before and after each galaxy, with airmasses matching those of the galaxy observations as closely as possible (root-mean-square difference $`<`$0.1 airmasses). Flat fields and argon arc spectra were taken using the CGS4 calibration lamps.
The spectra were reduced using the FIGARO package in the Starlink environment, as outlined in paper I. However, there was an initial complication, resulting from the CGS4 slit rotation mechanism having jammed at the time that these observations were made. This resulted in the slit being substantially misaligned with the columns of the array, which made sky-line subtraction and spectral extraction difficult. The problem was overcome by using the FIGARO routine SDIST to fit the orientations of arc lines in spectra taken during each of the 4 nights. This resulted in a correction which was applied to each spectrum taken using the CDIST routine. This worked very successfully, which was demonstrated by applying the correction obtained from one arc spectrum to another taken on the same night. Arc lines in the corrected spectra were perfectly aligned with the array columns, and the only side-effect was a slight loss of wavelength range as the ends of the corrected spectra subsequently had to be trimmed. This had no impact on the present programme, since the available wavelength range was in excess of that needed. The galaxy flux was then extracted from $``$5 pixels (i.e. 6 arcsec.) along the slit.
The result of the data reduction was a one-dimensional, wavelength-calibrated galaxy spectrum, which had been divided by an A-star spectrum to remove the effects of atmospheric absorptions. This was converted into a normalised, rectified spectrum by fitting a power-law to featureless sections of the continuum, and dividing the whole spectrum by this power-law, extrapolated over the full wavelength range (see Doyon, Joseph & Wright 1994 for a discussion of this procedure). This fitting process made use of code kindly written by Dr C. Davenhall under the Starlink QUICK facility. The resulting rectified spectrum then has a flat continuum level of unity across the whole wavelength range, simplifying the calculation of equivalent widths and spectral indices of the spectral features. The spectra were wavelength calibrated using the argon arc spectra, and redshift-corrected on the basis of their catalogued recession velocities.
To quantify the depth of the 2.3 $`\mu `$m CO absorption features, several different methods have been proposed. For example, Doyon et al. (1994) use a spectroscopic CO index, defined by
$$CO_{sp}=-2.5\mathrm{log}<R_{2.36}>$$
where $`<R_{2.36}>`$ is the average value of the rectified spectrum between 2.31 and 2.4 $`\mu `$m in the galaxy rest frame. Doyon et al. (1994) give conversions between this index and the photometric CO index used by earlier studies based on narrow-band filter observations, and also calibrate the index against effective temperature for dwarf, giant and supergiant stars. This was the definition we used to quantify CO depth in paper I. However, Puxley, Doyon & Ward (1997; PDW) have recently proposed a new definition, which they claim to be the most powerful discriminant between different stellar populations in galaxies. Considering various options, they conclude that the optimal wavelength range is 2.2931–2.32 $`\mu `$m, a substantially narrower range than for $`CO_{sp}`$, and they express this as an equivalent width in nm, rather than as an index in magnitudes. Whilst they adopt this measure for astrophysical reasons, there are practical advantages resulting from the narrower wavelength range. Errors resulting from the uncertainty in the power-law fit to the continuum are substantially reduced, because the degree of extrapolation needed is much smaller. Also, the new definition enables higher-redshift objects to be observed without the spectra becoming unduly noisy as the bandpass of interest moves to the end of the K window. On the Mauna Kea site, the window is typically usable to about 2.5 $`\mu `$m, which corresponds to a limit of only 12,500 km s<sup>-1</sup> in recession velocity before the end of the $`CO_{sp}`$ range is lost, compared to 23,000 km s<sup>-1</sup> for the range advocated by PDW.
However, there are arguments against going to the still shorter wavelength range used by, for example, Kleinmann & Hall (1986) and Origlia, Moorwood & Oliva (1993). Their EW measurements were found to be sensitive to velocity-dispersion smoothing, whereas PDW find their EW to be completely unaffected (we apply no velocity dispersion corrections to the EW in the present paper). In addition, shorter baselines result in reduced signal-to-noise, which would be a significant problem for the fainter galaxies here. Thus, we calculate both the CO EW defined by PDW, and the $`CO_{sp}`$ index for comparison with paper I and the calibrations of Doyon et al. (1994). Whilst the prescriptions given by Doyon et al. (1994) and PDW are simple and, it is to be hoped, unambiguous, the resulting indices and equivalent widths should be regarded as instrumental measures. It would be useful to obtain repeat measurements of the present sample with other telescopes and instruments to check for possible systematic differences, but this has not yet been done.
In calculating the errors in the CO estimates, a number of random and systematic sources of error were taken into account. Random errors consist of both photon counting statistics and weak unresolved spectral features, which effectively introduce similar errors. These were estimated from the standard deviation of the ‘featureless’ continuum used in the power-law fitting, after division by the power-law. This constitutes the dominant error in the CO EW values, due to the relatively small spectral range on the CO bandhead, whilst the random error on the continuum determination is small because most of the continuum between 2.15 and 2.28 $`\mu `$m can be used. This gives a 1–$`\sigma `$ error on the CO EW estimated at 0.2 nm. Determination of the continuum slope from the power-law fitting also contributes to the error, and was estimated by varying the wavelength ranges used in the power-law fitting, and by using different packages and algorithms for the fitting, with errors of 0.2 nm allocated to this cause. Finally, errors due to wavelength calibration and redshift uncertainty were estimated by shifting the spectral bandpass by an amount equivalent to $`\pm `$400 km s<sup>-1</sup>, and an error of 0.1 nm allocated to this for all measurements. Adding these three sources of error in quadrature, since they are most plausibly uncorrelated with one another, overall errors of 0.3 nm were assigned to the CO EW values.
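The quadrature combination of the three error terms quoted above is summarised by the following short check:

```python
import numpy as np

# Quadrature sum of the three uncorrelated CO EW error terms [nm].
errs = {"random": 0.2, "continuum slope": 0.2, "wavelength": 0.1}
total = np.sqrt(sum(e**2 for e in errs.values()))
print(f"total 1-sigma CO EW error: {total:.1f} nm")   # 0.3 nm
```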
The CO measurements for the sample of 29 elliptical galaxies in this study (both in field and clusters) are listed in Table 1. This also contains the 21 galaxies from paper I for which new CO measurements have been determined following the prescription of PDW. Column 1 gives the galaxy name. For the cluster galaxies without NGC, UGC or IC numbers, the designator given is either from Butcher & Oemler (1985) or Lucey et al. (1997), and these galaxies are listed in the format BO$`nnn`$ or L$`nnn`$ respectively. Column 2 gives the heliocentric recession velocity in km s<sup>-1</sup>, column 3 $`CO_{sp}`$, and column 4 the CO EW as defined by PDW, in nm. Column 5 gives the H$`\beta `$ equivalent width from Trager et al. (1998), in units of 0.1 nm. Column 6 gives the total B-band absolute magnitude, calculated from B<sub>T</sub> values in the NASA/IPAC Extragalactic Database (NED) and the heliocentric redshift. Column 7 gives the log of velocity dispersion in km s<sup>-1</sup> and column 8 the $`Mg_2`$ index in magnitudes, both of which were taken from Faber et al. (1989) or Lucey et al. (1997). Column 9 gives the cluster or group membership where HG and GH denote groups from Huchra & Geller (1982) and Geller & Huchra (1983) respectively, and ‘Isol’ denotes galaxies which pass our isolation criterion, which we describe in section 3.
As a check on the overall validity of our data and reduction methods, we calculated the stellar type of stars giving rise to the near-IR light in these galaxies. This involved converting the CO EW values to CO<sub>sp</sub> using the equation given by PDW, then using the CO<sub>sp</sub>–effective temperature calibration given by Doyon et al. (1994), and finally the effective temperature–spectral type conversion from Allen (1976). We find that the CO absorption strengths for most galaxies are consistent with those of K giants, with a range equivalent to stars of spectral types K3III–K8III. Photometric studies (e.g. Frogel et al. 1978) tend to show that the near-IR light of galaxies is equivalent to that of late K or early M giants, reasonably consistent with our findings.
3 ENVIRONMENTAL DEPENDENCE OF CO DEPTH
The distribution of CO EWs for the 50 elliptical galaxies in this study is shown in Fig. 1, where the cluster and field galaxy distributions are shown by solid and dashed lines respectively. The field galaxy distribution is also shaded for clarity. The field galaxies appear to cover a broader range of CO EWs compared to their counterparts in clusters. The field ellipticals also show a pronounced bimodal distribution. Overall, the mean CO strengths for the field and cluster galaxies are similar, corresponding to 3.28$`\pm `$0.12 nm and 3.29$`\pm `$0.06 nm respectively, with the errors being the standard deviations from the mean. However, the apparent difference in the distributions is moderately significant, with a K-S test determining that there is only an 8 per cent chance that the two distributions were drawn from the same parent population. In paper I, the field galaxy distribution was simply displaced towards stronger CO indices (or larger EW values) compared with that for clusters.
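The K-S comparison can be reproduced schematically as follows. The EW values here are synthetic, drawn to mimic a bimodal field sample and a unimodal cluster sample with similar means; the measured EWs are those tabulated in Table 1.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
field = np.concatenate([rng.normal(2.7, 0.2, 10),   # low-EW field peak
                        rng.normal(3.8, 0.2, 10)])  # high-EW field peak
cluster = rng.normal(3.3, 0.3, 30)                  # unimodal cluster sample

stat, p = ks_2samp(field, cluster)
print(f"K-S statistic = {stat:.2f}, p = {p:.3f}")
# similar means, but a small p flags the differing distribution shapes
```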
However, there is a potential problem, in that the two peaks found in the CO EW distribution for field galaxies correlate strongly, though not perfectly, with the observing run (there is no such correlation for cluster galaxies). This is in the sense that most of the galaxies in the high EW peak centred on $``$3.8 nm were observed in the 1994 run described in paper I, whereas the peak centred on 2.7 nm is comprised of galaxies observed in the 1996 run. The difference is significant, with the field galaxies in the 1994 run having a mean CO EW of 3.82$`\pm `$0.06 nm, whereas those observed in the 1996 run have a mean of 2.88$`\pm `$0.09 nm. This immediately raises the question of whether the offset is a systematic, due to some change in the instrument or the data reduction procedures. We take this possibility extremely seriously, and in the following discussion examine all the possibilities to explore the reality of this effect. We emphasize here that the CO EW values for the cluster galaxies observed in 1994 had an average value of 3.34$`\pm `$0.09 nm, compared with 3.25$`\pm `$0.08 nm for the cluster galaxies observed in 1996. Thus the cluster galaxy distributions show no offset between the two runs, as is confirmed by a K-S test, and any systematic would therefore have preferentially to affect the field galaxies.
We first checked whether the stars used to remove atmospheric absorptions have had any effect on the measured CO EW values. However, all of the stars were drawn from a very small range of spectral types, with no chance of any intrinsic CO absorption. They were bright enough that the chance of misidentification is very slight, and in any case, no correlation was found between the derived CO EW and the star used. There are many cases where the same star was used for galaxies which were found to have CO EW values drawn from opposite extremes of the measured range, and conversely using different stars for a given galaxy spectrum was found to have no effect on derived EW values within the errors. Airmass differences between the galaxy and star observations were also found to have no measurable effect on the CO EWs.
We checked for wavelength calibration differences between the two runs, but no systematic difference was found. The CO EW definition adopted by PDW is, in any case, only weakly sensitive to wavelength errors, and any shift large enough to give the observed offset between the two field galaxy groups in Fig. 1 would have been immediately apparent in the spectra. The same argument also excludes redshift errors as the source of this effect, with shifts of $`\pm `$400 km s<sup>-1</sup> only changing the derived EW by $`\pm `$0.1 nm in the most extreme cases.
The instrument configuration was quite different between the two runs, and indeed a different detector array had been installed. The main consequences of this were a longer wavelength range for the 1996 run, giving a longer continuum baseline, and a somewhat coarser wavelength resolution for the 1994 observations. This gives rise to a number of possible effects on the derived CO values. One concerns the power-law fitting and extrapolation, which is better constrained for the more recent data, and the difference in spectral resolution could also potentially cause an effect due to rounding error in the wavelength range over which the equivalent width was calculated. To check for any systematic effects on the derived CO strengths, we re-reduced the more recent spectra, reducing the wavelength range and rebinning into coarser pixels in the wavelength direction, to match the wavelength range and resolution of the 1994 data. CO strengths were then derived from these simulated spectra, and were found to differ by up to 0.25 nm from values originally derived. In every case, most of the change was due to fitting the power-law continuum over a smaller wavelength range, as was demonstrated by producing spectra where the wavelength range and the resolution were changed individually. However, even the largest differences found by changing both wavelength range and resolution are much smaller (by a factor of at least 4) compared with the offset between the two field galaxy groups shown in Fig. 1.
Finally, the Galactic latitudes of the field galaxies were checked to explore whether extinction effects might be significant. However, both the high-EW and low-EW groups have $`|b|`$ distributions which are approximately uniform between 20° and 60°, with no galaxies outside these limits.
Thus we have checked all the potential systematic effects of which we are aware, and none contributes significantly to the offset between the two groups of field galaxy CO strengths in Fig. 1. We therefore conclude that this represents a real difference between the galaxies, and continue now to discuss astrophysical explanations for this.
When selecting field galaxies for the 1994 run, we based our choice purely on information tabulated by Faber et al. (1989), checking only that ‘field’ galaxies were not members of major clusters. However, for the 1996 run, we selected field galaxies from highly isolated environments, thus maximising the environmental difference between field and cluster samples. Thus there is a strong correlation between degree of isolation and observing run, which we now propose explains the offset in field galaxy properties. Although these are all nominally field galaxies, there is in fact quite a wide range of environments sampled by these 19 galaxies, ranging from complete isolation up to membership of groups with $`\sim `$10 bright members. The criterion we adopt for complete isolation is that there be no companions within $`\sim `$5 magnitudes in apparent brightness, within a projected radius of 500 kpc. Of the 19 field galaxies, 6 meet this criterion (IC 5157, NGC 6020, NGC 6127, NGC 7391, NGC 7785 and ESO462–G015). At the other extreme, 4 galaxies are found in groups rich enough to have been identified by Huchra & Geller (1982) or Geller & Huchra (1983), who employed an algorithm based on position and redshift information only, to identify significant groupings in the CfA redshift survey and another magnitude-limited, all-sky sample. These galaxies are identified in Table 1. In addition, NGC 1600 is generally considered to lie at the centre of a group of at least 10 members. The other 8 members of our field subset have at least one apparent companion but are fairly isolated, lying in groups with at most a few members.
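For concreteness, the isolation criterion can be coded as below; the catalogue fields (RA/Dec in degrees, distance in Mpc, apparent B magnitude) are an assumed format for illustration, not the actual catalogues used here.

```python
import numpy as np

def is_isolated(gal, neighbours, r_proj_kpc=500.0, dmag=5.0):
    """True if no neighbour within dmag magnitudes lies inside the
    projected radius r_proj_kpc (small-angle approximation)."""
    for nb in neighbours:
        dra = np.deg2rad(nb["ra"] - gal["ra"]) * np.cos(np.deg2rad(gal["dec"]))
        ddec = np.deg2rad(nb["dec"] - gal["dec"])
        r_proj = np.hypot(dra, ddec) * gal["dist_mpc"] * 1e3   # [kpc]
        if r_proj < r_proj_kpc and nb["mB"] < gal["mB"] + dmag:
            return False
    return True

gal = {"ra": 150.0, "dec": 2.0, "dist_mpc": 40.0, "mB": 12.5}
nbs = [{"ra": 150.2, "dec": 2.1, "mB": 14.0}]   # ~160 kpc projected separation
print(is_isolated(gal, nbs))                    # False: bright companion nearby
```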
It is interesting to note that all 6 completely isolated galaxies lie in the low–CO peak while the 5 galaxies in moderately rich groups lie in the high–CO peak in Fig. 1. The remaining ‘field’ galaxies, with a small number of apparent companions, are split between the two peaks. An interpretation of this result will be discussed in section 5.
It is also of interest to investigate the possible correlation of CO EW with environment for the cluster galaxies. However, Fig. 2 shows no correlation between CO EW and projected distance of galaxies from cluster centres.
4 SPECTROSCOPIC PROPERTIES OF ELLIPTICALS
In this section, we briefly look at correlations between the spectroscopic CO EWs and other physical parameters of ellipticals in our sample.
A weak dependence is found between the CO EW and the metallicity of ellipticals, the latter measured by the $`Mg_2`$ index (Fig. 3). There is a hint of a correlation in the expected sense, but this is formally significant only at the 1–$`\sigma `$ level. This is unsurprising given that previous studies have shown the CO index to be a poor metallicity indicator in old and metal-rich stellar populations. For example, Frogel et al. (1978) found no gradient in the photometric CO index with radius, even though galaxies have strong metallicity gradients as shown by optical indicators. There is a correlation between $`Mg_2`$ and absolute B magnitude, in the sense that more luminous galaxies have higher metallicity (Fig. 4). However, this does not translate into any measurable correlation between CO EW and absolute B magnitude, as shown in Fig. 5, which also shows that the scatter in CO EW is constant with galaxy luminosity. Fig. 6 shows CO EW plotted against H$`\beta `$ equivalent width, for the 19 galaxies in the present sample which were observed in the optical by Trager et al. (1998). Since H$`\beta `$ is strong in stellar populations with relatively recent star formation, it is reassuring to see a trend, however weak, in the expected sense in this plot; the formal significance of the correlation is low, however, due to the small number of objects for which data are available and to the different ages of the populations probed by the two indicators.
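The formal significance of such a weak trend can be estimated as in the following sketch. The arrays are randomly generated stand-ins for the measured indices (not the real data), and the Fisher z-transform used to convert the Pearson coefficient into a Gaussian significance is one standard choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mg2 = rng.normal(0.30, 0.03, 44)                           # mock Mg2 (mag)
co_ew = 2.8 + 3.0 * (mg2 - 0.30) + rng.normal(0, 0.4, 44)  # mock CO EW (nm)

r, p = stats.pearsonr(mg2, co_ew)
sigma = abs(np.arctanh(r)) * np.sqrt(len(mg2) - 3)   # Fisher z-transform
print(f"Pearson r = {r:+.2f}, p = {p:.3f}, significance ~ {sigma:.1f} sigma")
```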
Some of the individual galaxies have known peculiarities which might affect their star formation histories and hence the measured CO EWs. For example, NGC 1052 hosts a LINER-type nucleus with broad H$`\alpha `$ emission and optical–near-IR line ratios inconsistent with stellar excitation (Alonso-Herrero et al. 1997), leading to the conclusion that it has a central nonstellar ionizing source. NGC 6051 is the radio source 4C+24.36 (Olsen 1970). NGC 6137 is identified as a head-tail radio source by Ekers (1978). NGC 6166, a multiple-nucleus cD galaxy in Abell 2199, is a strong radio source (3C338). Carollo et al. (1997) note that HST WFPC2 images of NGC 7626 show a warped, symmetrical dust lane crossing the centre, and this galaxy also has strong radio jets (Jenkins 1982). The compilation of radio surveys by Calvani, Fasano & Franceschini (1989) contains radio observations for 12 of the present sample; of these, 5 are detections (NGC 1052, NGC 1600, NGC 7385, NGC 7626 & NGC 7785) while the remainder (IC 5157, NGC 380, NGC 384, NGC 410, NGC 6020, NGC 7454 & NGC 7562) are all upper limits. Since any links between star formation and nuclear activity are highly uncertain at present, we make no comment on these data, other than to note that no strong correlation of radio properties with CO EW is evident.
5 DISCUSSION
The main issue confronted in this study is the effect of environment on the stellar population in elliptical galaxies (i.e. the epoch of star formation). Using the CO measurements for field and cluster ellipticals, we find no overall evidence for a stronger CO absorption feature in field galaxies compared to those in clusters, contrary to what was found in paper I. However, we find a clear difference between the CO strengths of ellipticals in small groups, with a few companions, and those which are truly isolated. Galaxies located in groups have CO EW values larger than either the isolated field subset or the cluster galaxies. Therefore, a simple monotonic correlation between star formation history and local galaxy number density seems untenable. This is the most important result from the present study.
However, it is unlikely to be a coincidence that the ellipticals appearing to contain young stars are preferentially located in environments most conducive to merging, due to the lower velocity dispersion of the groups compared to rich clusters, and indeed to merging with late-type companions, which are far more common in groups than in rich clusters. Thus we propose, along with Ellis et al. (1997), that all ellipticals formed most of their stars at an early epoch, and those containing younger stars have acquired them recently through minor mergers with gas-rich galaxies. This is also suggested by Silva & Bothun (1998) in a study using near-infrared photometry of ellipticals which show signs of interaction/mergers. It is, however, likely that our spectroscopy, which only samples the central few arcsec, is more sensitive to such a component if the merging is strongly dissipative, with the accreted material rapidly sinking to the centre. Thus, our data should be more sensitive to nuclear starbursts than would be the global colours measured by Silva & Bothun (1998).
The observed offset between the isolated field subsample and the main peak of cluster galaxies in Fig. 1 is not a metallicity effect, since the two have similar mean $`Mg_2`$ indices (which are measures of metallicity): 0.304$`\pm `$0.004 mag for the ‘isolated’ ellipticals and 0.295$`\pm `$0.005 mag for the cluster ellipticals. Therefore, given the weak correlation between $`Mg_2`$ and CO EW (Fig. 3), the offset between isolated and cluster galaxies must have some other cause. However, there is a striking difference between the ranges of $`Mg_2`$ indices for the different subsamples. The standard deviation of the $`Mg_2`$ indices for the isolated galaxy subsample is only 0.01 mag, cf. 0.03 mag for the cluster galaxies and 0.04 mag for the group galaxies. While we must beware of small-number statistics (only 6 of the isolated galaxies have $`Mg_2`$ values), this indicates that the isolated ellipticals form a very homogeneous group of galaxies, a point which is also indicated by the extreme narrowness of the corresponding peak in CO EW values shown in Fig. 1. Conversely, the large range in $`Mg_2`$ for the group galaxies is easily understood if these are prone to significant merging and accretion activity.
There have been several studies claiming to have found intermediate-age populations in field galaxies (e.g. Bica & Alloin 1987, Bower et al. 1990, Schweizer & Seitzer 1992, Rose et al. 1994). It is therefore of interest to see whether these are consistent with the interpretation presented above. Bica & Alloin (1987) find 5 individual galaxies which show clear evidence of intermediate-age or young stellar populations. Of these, 4 (NGC 2865, NGC 5018, NGC 5061 & NGC 5102) are listed by Faber et al. (1989) as being members of groups with group velocity dispersions between 120 and 430 km s<sup>-1</sup>. The only exception is NGC 4382, an interacting S0 on the outskirts of the Virgo cluster. Thus, all 5 galaxies are located in environments conducive to merging activity. The same is true for the ellipticals studied by Rose et al. (1994), where all of the field ellipticals lie in the outer regions of the Virgo cluster, or in groups identified by Faber et al. (1989) with group velocity dispersions between 65 and 210 km s<sup>-1</sup>. Since the majority of ellipticals lie in groups or clusters, this can hardly be taken as a striking confirmation of our model, but it is encouraging that there are no discrepant objects in these two independent studies.
Further observations are clearly required to test our present interpretation. CO spectroscopy should be obtained for field galaxies claimed by other studies (e.g. Bica & Alloin 1987; Rose et al. 1994) to have undergone recent star formation, and for the apparent post-merger objects of Silva & Bothun (1998). It is also important to extend the present sample of very isolated galaxies to confirm the very homogeneous old populations we have found in the present paper. Our present interpretation in terms of recent merging predicts a correlation between CO absorption strength and the velocity dispersion of the group or cluster in which the galaxy resides; it will be of interest to see whether such a correlation is confirmed by future observations of galaxies in a range of environments from the core of the Coma cluster to the field.
In order to calibrate the observed differences in CO strengths in terms of starburst ages and strengths, reliable evolutionary synthesis models are required. This is an area which is yet to be fully developed in terms of near-IR spectroscopic parameters, but some preliminary work has been done. Buzzoni (1995) tabulates CO indices for simulated stellar populations with ages between 4 and 15 Gyr, and in general predicts strengths similar to those in the isolated field galaxies in the present sample. For example, taking a Salpeter IMF and solar metallicities, the CO indices predicted by Buzzoni (1995) convert to equivalent widths of 2.4–3.0 nm, using the conversion formulae given by Doyon et al. (1994) and PDW. This agreement is encouraging, but it should be noted that these models cannot explain the strength of CO absorption observed in the ‘group’ ellipticals. Even if there is an overall offset between the model predictions and observations, it is interesting to note that Buzzoni (1995) finds only a ∼10% change in CO absorption strength between 4 Gyr and 15 Gyr populations (the extremes in the modelled ages), for any reasonable IMF and metallicity. The ∼30% difference observed between the ‘isolated’ and ‘group’ galaxies thus requires either a population of stars significantly younger than 4 Gyr in the ‘group’ galaxies, a combination of age and metallicity effects (but see the discussion earlier in this section), or may point to an additional stellar component which is not accurately reproduced in the existing models. The latter is quite possible, given the complexity of the evolution of giant and supergiant stars.
6 CONCLUSIONS
Contrary to the main conclusion of paper I, we do not find evidence for an overall offset in CO absorption strength between field and cluster ellipticals. However, the distribution of this parameter for the field galaxies alone is bimodal, and the two peaks correlate directly with the degree of isolation of the galaxies. Specifically, very isolated ellipticals appear to form a very homogeneous population, with no sign of recent star formation and a very small range of metallicity, as revealed by their $`Mg_2`$ absorption feature. On the other hand, ellipticals in groups frequently show evidence for intermediate-age stellar populations and have a wide range in metallicity. Ellipticals in rich clusters have intermediate properties in both parameters.
We interpret the observed differences in terms of recent minor mergers, which are most likely to occur in the moderate density environment of small groups consisting of only a few members. Dissipative mergers with gas-rich galaxies could then introduce a significant population of younger stars to the central regions of galaxies in these groups, giving the stronger CO absorptions we find. If this is the case, then our data are consistent with ellipticals in all environments being essentially old, in agreement with other recent studies.
Further work is clearly needed to extend the size of the isolated and group samples, to test further whether the somewhat a posteriori division of the field galaxies is actually justified. In addition, the whole area of near-IR spectroscopy of galaxies is ripe for detailed stellar spectral synthesis modelling, so that data of the type we present here can be fully understood.
ACKNOWLEDGMENTS
We thank the anonymous referee for several useful recommendations which significantly improved the content and presentation of this paper. PJ thanks Doug Burke for useful suggestions. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The United Kingdom Infrared Telescope is operated by the Joint Astronomy Centre on behalf of the U.K. Particle Physics and Astronomy Research Council.
REFERENCES
Allen C. W., 1976, Astrophysical Quantities (3rd ed.), Athlone Press, London
Alonso-Herrero A., Rieke M. J., Rieke G. H., Ruiz M., 1997, ApJ, 482, 747
Bica E., Alloin D., 1987, A&A, 181, 270
Bower R. G., Ellis R. S., Rose J. A., Sharples R. M., 1990, AJ, 99, 530
Bower R. G., Lucey J. R., Ellis R. S., 1992, MNRAS, 254, 601
Bruzual G., 1983, ApJ, 273, 105
Butcher H. R., Oemler A., 1985, ApJS, 65, 665
Buzzoni A., 1995, ApJS, 98, 69
Calvani M., Fasano G., Franceschini A., 1989, AJ, 97, 1319
Carollo C. M., Franx M., Illingworth G. D., Forbes D. A., 1997, ApJ, 481, 710
de Carvalho R. R., Djorgovski S., 1992, ApJ, 389, L49
Djorgovski S., Davis M., 1987, ApJ, 313, 59
Doyon R., Joseph R. D., Wright G. S., 1994, ApJ, 421, 101
Dressler A., Lynden-Bell D., Burstein D., Davies R. L., Faber S. M., Terlevich R. J., Wegner G., 1987, ApJ, 313, 42
Ekers R. D., 1978, A&A, 69, 253
Ellis R. S., Smail I., Dressler A., Couch W. J., Oemler A., Butcher H., Sharples R. M., 1997, ApJ, 483, 582
Faber S. M., Wegner G., Burstein D., Davies R. L., Dressler A., Lynden-Bell D., Terlevich R. J., 1989, ApJS, 69, 763
Frogel J. A., Persson S. E., Aaronson M., Matthews K., 1978, ApJ, 220, 75
Geller M. J., Huchra J. P., 1983, ApJS, 52, 61 (GH)
Gonzalez J. J., 1993, Ph.D. thesis, Univ. California, Santa Cruz
Guzmán R., Lucey J. R., 1993, MNRAS, 263, 47
Huchra J. P., Geller M. J., 1982, ApJ, 257, 423 (HG)
Jenkins C. R., 1982, MNRAS, 200, 705
Kauffmann G., 1996, MNRAS, 281, 487
Kleinmann S. G., Hall D. N. B., 1986, ApJS, 62, 501
Larson R. B., Tinsley B. M., Caldwell C. N., 1980, ApJ, 237, 692
Lucey J. R., Guzmán R., Steel J., Carter D., 1997, MNRAS, 287, 899
Mobasher B., James P. A., 1996, MNRAS, 280, 895 (paper I)
O’Connell R. W., 1980, ApJ, 236, 430
Olsen E. T., 1970, AJ, 75, 764
Origlia L., Moorwood A. F. M., Oliva E., 1993, A&A, 280, 536
Puxley P. J., Doyon R., Ward M. J., 1997, ApJ, 476, 120 (PDW)
Rose J. A., Bower R. G., Caldwell N., Ellis R. S., Sharples R. M., Teague P., 1994, AJ, 108, 2054
Schweizer F., Seitzer P., 1992, AJ, 104, 1039
Silva D. R., Bothun G. D., 1998, AJ, 116, 85
Tinsley B. M., Gunn J. E., 1976, ApJ, 203, 52
Trager S. C., Worthey G., Faber S. M., Burstein D., Gonzalez J., 1998, ApJS, 116, 1
Figure Captions
Figure 1. A histogram showing the distribution of CO EW values for cluster (solid line) and field (dashed line & shading) ellipticals.
Figure 2. CO EW as a function of projected distance from the cluster centre, for 30 ellipticals in rich clusters.
Figure 3. CO EW as a function of the metallicity index $`Mg_2`$ for the 44 elliptical galaxies. Plotted symbols are the same as for Fig. 2.
Figure 4. Metallicity index $`Mg_2`$ as a function of total B-band absolute magnitude for 44 elliptical galaxies. Plotted symbols are the same as for Fig. 2.
Figure 5. CO EW as a function of total B-band absolute magnitude for 50 elliptical galaxies. Plotted symbols are the same as for Fig. 2.
Figure 6. CO EW as a function of H$`\beta `$ EW for 19 elliptical galaxies. Plotted symbols are the same as for Fig. 2.
# Controlled Drift of Indirect Excitons in Coupled Quantum Wells: Toward Bose Condensation in a Two-Dimensional Trap
## Abstract
We have succeeded in trapping indirect excitons in coupled quantum wells in a harmonic potential minimum via inhomogeneous applied stress and electric field. These excitons exhibit a strong Stark shift (over 60 meV), long lifetime (100 ns), and high diffusivity (1000 cm<sup>2</sup>/s). This approach is very promising for obtaining the high exciton density needed for Bose condensation of excitons in two dimensions.
Over the past 15 years, several experiments have indicated evidence of Bose effects or Bose condensation of excitons in semiconductors. These experiments mostly fall into four basic categories: evidence based on spectral lineshape analysis following incoherent generation of the excitons, which shows narrowing of the exciton luminescence lines at high densities; evidence based on comparison of the total luminescence intensities of two different excitonic species, by which the relative populations can be deduced; evidence based on light emission following coherent generation of excitons in the ground state, which shows that excitons remain in regions of phase space near the ground state for time periods long compared to the exciton scattering time; and measurements of the transport of the excitons which show fast expansion out of the creation region. This body of evidence, while important, lacks the dramatic “smoking gun” that has been seen in alkali atoms in magneto-optical traps, namely, a spatial condensation into a two-component distribution, a clear prediction of the theory of the weakly interacting Bose gas which has no classical analog. It has long been known that excitons in a harmonic potential will also show this behavior if they undergo Bose condensation; a method of creating a harmonic potential for excitons in bulk semiconductors is well established, but so far, experimental attempts with bulk semiconductors have not succeeded in creating a density of excitons high enough for Bose condensation in this kind of trap.
Much recent attention has been given to indirect, or “dipole,” excitons in two-dimensional heterostructures. This system is appealing because (1) the excitons can have long lifetimes due to the spatial separation of the electron and hole, (2) the interaction between the dipole-aligned excitons is strongly repulsive, so that crossover to a Fermi liquid state is not expected at high density, and (3) the quality of semiconductor heterostructures has been steadily increasing, so that true two-dimensional physics can be studied. In a two-dimensional system, Bose-Einstein condensation is not expected, but rather a Kosterlitz-Thouless transition to a superfluid state, although J. Fernández-Rossier, C. Tejedor, and R. Merlin have recently argued that the coupling of the excitons to the photon states will allow them to undergo Bose condensation in two dimensions.
Early experiments with this type of structure showed evidence for Bose effects, but later work demonstrated that localization due to random variations in the structures significantly complicated the analysis of the luminescence lineshape. Recent studies of similar structures have shown quite promising results, including evidence for increased diffusion out of the excitation region at high density and low temperature. Other recent measurements of the diffusion of indirect excitons have also shown fast expansion at high densities. Enhanced diffusion is expected for superfluid excitons, but can also be attributed to other, classical effects which also occur at high density, such as phonon wind.
In order to overcome the complications of localization and of classical, pressure-driven expansion, X.J. Zhu, P.B. Littlewood, and T.M. Rice proposed a variation in which an inward force is exerted on the excitons, confining them to a potential minimum. As Nozières and others have pointed out, if a potential minimum exists, true Bose condensation can occur in two dimensions instead of a Kosterlitz-Thouless transition. Zhu, Littlewood, and Rice envisioned that a potential minimum could be created by a variation in the quantum well thickness. In this Letter, we report the experimental accomplishment of a potential minimum for indirect excitons in a two-dimensional plane via a different means. This method creates a harmonic potential minimum for the excitons, so that the telltale two-component spatial signature of Bose condensation can occur, and it allows us to vary the depth of the potential minimum via an external control.
The samples we use are GaAs/Al<sub>x</sub>Ga<sub>1-x</sub>As coupled quantum well structures fabricated via molecular-beam epitaxy (MBE) at the Max-Planck-Institute in Stuttgart; the substrate is heavily p-doped and the capping layer is heavily n-doped in order to allow an electric field to be applied perpendicular to the plane of the quantum wells. Fig. 1(a) illustrates the band structure when an electric field is applied; as seen in Fig. 1(b), as the electric field is increased, the energy of the indirect excitons undergoes a strong Stark shift to lower energy, as also seen in previous studies. The spatial separation of the electron and hole into two separate planes also increases the lifetime of the excitons; in our samples we measure lifetimes of the indirect excitons of around 100 ns.
We create a potential minimum for the excitons via externally applied, inhomogeneous stress and electric field. Fig. 2 shows the experimental geometry. The quantum well sample is clamped between two metal plates, each with a small hole, and a pin is pressed against the GaAs substrate, which has been polished on both surfaces prior to the MBE fabrication. The pin creates a shear strain maximum in the quantum wells, as well as a slight hydrostatic expansion; both of these strain effects lead to an energy minimum for the excitons via the Pikus and Bir deformation Hamiltonian, similar to the way in which inhomogeneous strain leads to a potential energy minimum for carriers in bulk semiconductors. Too much stress from the pin will cleave the sample, of course, but springs on the back of the sample help to prevent this, allowing a reproducible, controllable stress.
In addition, the pin is held at a fixed, negative voltage while the clamping plates are connected to ground. This causes a current to flow through the heavily-doped substrate, so that the voltage across the quantum wells drops to zero far away from the pin. As seen in Fig. 1, higher electric field corresponds to lower energy for the indirect excitons, so that this effect also contributes to a potential energy minimum for the excitons below the pin.
The entire assembly is placed in liquid or gaseous helium, and the quantum wells are excited by a laser through the window of an optical cryostat by means of a prism attached to the lower metal plate. The force on the pin is controlled by a micrometer at the top of the cryostat. Fig. 3 shows time-integrated luminescence from a coupled quantum well sample with 60 Å GaAs wells and a 42 Å Al<sub>0.3</sub>Ga<sub>0.7</sub>As barrier, taken with a CCD camera on the back of an imaging spectrometer as the laser spot is scanned across the surface of the sample. As seen in Fig. 3(a), a well depth of more than 10 meV can be created, compared to the inhomogeneous broadening in these samples of slightly less than 1 meV. When the voltage applied to the pin is set to zero, the same time-integrated scan gives Fig. 3(b), which shows that the effect of the variation in voltage is about the same as the effect of the shear strain maximum.
The fact that a potential minimum occurs is strongly connected to the geometry which leaves the lower surface of the sample unconstrained. When the sample is placed on a glass slide, a potential energy maximum is seen, since in this case the sample is compressed, and the positive shift in energy due to the hydrostatic deformation potential overwhelms the negative shift due to shear strain. We have solved the static field equations for strain in the sample via finite-element analysis, and the shifts in energy in both cases agree with our calculations.
In a harmonic potential minimum in two dimensions, the critical number for Bose condensation is given by
$`N_c=\underset{n}{\sum }\frac{n}{e^{n\hbar \omega _0/k_BT}-1}=\frac{(k_BT)^2}{(\hbar \omega _0)^2}\int \frac{\epsilon \,d\epsilon }{e^\epsilon -1}=1.8\,\frac{(k_BT)^2}{(\hbar \omega _0)^2},`$ (1)
where $`\omega _0=(\alpha /m)^{1/2}`$. The shape of the well shown in Fig. 3(a) corresponds to a force constant of $`\alpha =65`$ meV/mm<sup>2</sup>, approximating $`U=\alpha x^2/2`$ in the center of the trap. For a temperature of 2 Kelvin and exciton mass on the order of the electron mass, this critical number is approximately $`10^7`$. By comparison, a single laser pulse from our dye laser contains more than $`10^{11}`$ photons. We have not seen evidence for Bose effects in this well, however. Because we excite at $`\lambda =660`$ nm, the excess energy of the generated carriers is quite high, so that the carrier temperature is well above 100 K for most of their lifetime, as determined by fits to the band-edge luminescence from the substrate at the same times, even when the sample is immersed in liquid helium. At this temperature, the critical number is four orders of magnitude higher.
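As a check on these numbers, the following sketch evaluates Eq. (1) for the trap parameters quoted in the text; taking the exciton mass equal to the electron mass is the stated order-of-magnitude assumption.

```python
import numpy as np

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K
me = 9.109e-31      # kg; exciton mass taken ~ electron mass, as in the text
eV = 1.602e-19      # J

alpha = 65e-3 * eV / (1e-3)**2        # 65 meV/mm^2 -> J/m^2
omega0 = np.sqrt(alpha / me)          # trap frequency, ~1e8 rad/s

for T in (2.0, 100.0):
    Nc = 1.8 * (kB * T / (hbar * omega0))**2
    print(f"T = {T:5.1f} K : N_c ~ {Nc:.1e}")
# -> N_c ~ 1e7 at 2 K, rising as T^2 to >1e10 for carrier temperatures
#    above 100 K, consistent with the estimates in the text.
```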
The diffusion length of the excitons is also too short for thermalization in the well at this temperature. Fig. 4 shows the spatial profile of the indirect exciton luminescence at various times after a laser pulse has created them about 400 $`\mu `$m from the center of the well. The expansion at early times corresponds to a diffusion constant of over 1000 cm<sup>2</sup>/s, similar to previously reported values. (This fast expansion at early times is essentially the same even with zero applied stress.) At late times, as the exciton density drops, the expansion of the excitons slows down, although the effect of drift due to the gradient in potential energy is clearly seen.
For a lifetime of 100 ns and D = 1000 cm<sup>2</sup>/s, the diffusion length of the excitons is around 100 $`\mu `$m. By comparison, the equilibrium spatial width of a classical gas in a harmonic potential well with $`\alpha =65`$ meV/mm<sup>2</sup>, determined approximately by the condition $`\alpha x^2/2=3k_BT/2`$, is over 500 $`\mu `$m.
As the exciton gas gets colder, the equilibrium spatial width should become smaller, to less than 100 $`\mu `$m at 2 K. This also indicates the importance of lower effective exciton temperature. We believe that by creating the excitons with lower energy via near-resonant excitation, we can significantly reduce the exciton temperature in the future.
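Under the same assumptions as the previous snippet, the classical equilibrium width can be compared directly with the ∼100 $`\mu `$m diffusion length:

```python
import numpy as np

kB, eV = 1.3807e-23, 1.602e-19
alpha = 65e-3 * eV / (1e-3)**2          # 65 meV/mm^2 in J/m^2

for T in (100.0, 2.0):
    x = np.sqrt(3.0 * kB * T / alpha)   # from alpha x^2/2 = 3 kB T/2
    print(f"T = {T:5.1f} K : equilibrium half-width ~ {x*1e6:.0f} micron")
# -> ~630 micron at 100 K (much larger than the ~100 micron diffusion
#    length) but only ~90 micron at 2 K, so a colder exciton gas could
#    thermalize within the well.
```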
A further step which may also aid the approach to Bose condensation of excitons in this geometry is the addition of a strong magnetic field, which will reduce the spin degeneracy of the excitons, forcing higher numbers of particles into fewer states, and which will also create a more strongly repulsive interaction between the excitons. Several authors have argued that a magnetic field will enhance Bose effects of excitons; experimentally, a sharp increase of the diffusivity of indirect excitons above a critical threshold of magnetic field has been reported.
Recently, several authors have proposed optical tests for the phase coherence which should appear in the excitonic Bose condensate. Underlying all these approaches is the fact that Bose condensation implies spontaneous phase coherence, and since excitons couple to photon states, this phase coherence should transfer to the photons, even in the absence of lasing. The method proposed here of confining an exciton condensate to a trap is much more amenable to these kinds of tests than methods which allow free expansion of the exciton gas, since the ground state in this case is well defined.
Finally, we note that the method we have used here to trap the excitons may have other applications. Since excitons are charge neutral, they do not respond to a uniform electric field, and it is therefore difficult to control their motion. We have shown that the motion of excitons in heterostructures can be controlled over distances up to 100 $`\mu `$m via both inhomogeneous shear stress and inhomogeneous electric field. In particular, variation of the voltage across the quantum wells can be accomplished by depositing resistive patterns on the surface via photolithography. Small “wires” for excitons can therefore be created which carry them from place to place in response to electric fields.
Acknowledgements. This work has been supported by the National Science Foundation as part of Early Career award DMR-97-22239. One of the authors (D.S.) is a Cottrell Scholar of the Research Corporation. We thank I. Hancu for early contributions to these experiments, and L.M. Smith for helpful conversations.
# Critical phenomena and a new class of self-similar spherically symmetric perfect-fluid solutions
## I Introduction
One of the most exciting developments in general relativity in recent years has been the discovery of critical phenomena in gravitational collapse. For a variety of spherically symmetric imploding matter fields, there is a self-similar critical solution containing a naked singularity which separates models that collapse to black holes from those that disperse. Sometimes a discrete similarity is involved but, in other circumstances, the critical solution seems to be represented by a continuously self-similar model. This is one which has a homothetic Killing vector and contains no dimensional constants. Perfect-fluid models of this kind necessarily have an equation of state of the form $`p=\alpha \mu `$, and so only in this case could the critical solution be homothetic.
Self-similar spherically symmetric perfect-fluid solutions have been much studied in general relativity, and the attempt to understand critical phenomena has led to several further studies. However, their precise relationship with the critical solution has remained obscure. This is mainly because the full family of such solutions had not been identified when critical phenomena were first discovered. However, recently Carr & Coley have presented a complete asymptotic classification of such solutions. Furthermore, by reformulating the field equations for these models in terms of dynamical systems, Goliath et al. have obtained a compact three-dimensional state space representation of the solutions, and this leads to another complete picture of the solution space. These investigations have resulted in the discovery of a new class of ‘asymptotically Minkowski’ self-similar spacetimes.
In this paper we shall discuss these new solutions and show why they are intimately related to critical phenomena. We thereby demonstrate for the first time the global nature of the critical solution. Although the detailed derivation of these new solutions is given elsewhere, this is the first published announcement of their existence and the first attempt to link them to critical phenomena. The purpose of this paper is therefore to highlight this result in advance of the more extensive analyses (since these are in a much broader context). In particular, we will show how the global features relate to the equation of state parameter $`\alpha `$ and explain why there is only one such solution for each $`\alpha `$. It should be emphasized that numerical studies of the critical solution are always restricted to some finite range of the self-similar variable $`z`$. However, as the critical index is approached, the extent of the self-similar region grows, and one could in principle go to arbitrarily large values of $`z`$. This paper can therefore be regarded as predicting the characteristics of these solutions.
The discussion will mainly be in terms of the compact state space but, to extract important physical features, some of the quantities used by Carr & Coley will be plotted. No equations will be used because the discussion is intended to be purely qualitative and thereby accessible to the general reader. However, some technical terms will be used in the next section; background references exist for an introduction to dynamical systems theory in general relativity, for its particular application in the spherically symmetric context, and for more details of the other types of self-similar solutions (see Carr et al.). At the end of the paper, we emphasize our key predictions, so that “critical” workers can investigate these.
## II The solution space
We shall focus on spherically symmetric self-similar solutions in which the spacetime admits a homothetic Killing vector. This means that all dimensionless variables depend only on the self-similar variable $`z=r/t`$, where $`r`$ is the comoving radial coordinate and $`t`$ is the time coordinate. In a dynamical systems approach these solutions correspond to orbits in a three-dimensional compact state space. The state space of self-similar spherically symmetric perfect-fluid models (for $`\alpha >1/5`$) is presented in Fig. 1. A point in this space corresponds to a certain geometry and matter field configuration on a homothetic (constant $`z`$) slice, while an orbit in the state space represents an entire spacetime. All continuous orbits are future and past asymptotic to one of a few solutions with higher symmetry. These appear as equilibrium points on the boundary of the state space and these points are labelled in Fig. 1. A physical description of the solutions asymptotic to them is given in Table I.
The state space is divided into two halves, one corresponding to positive $`z`$, the other to negative $`z`$. This means that the solutions in one half are the time-reverse of solutions in the other half, so all equilibrium points appear twice in Fig. 1. The sonic surfaces are also depicted, and solutions generally develop a shock-wave there. However, in two of the sonic surfaces there is a sonic line, and solutions which pass through this line can be extended continuously through the sonic surface. Only these solutions will be considered to be physical, and the number of such solutions is strongly restricted.
The state space has the advantage that it gives a pictorial representation of the relationship between different solutions and of the connection between the initial and final states, thereby yielding insights into the global nature of the solutions. However, it has the disadvantage that it is rather abstract. To better understand the physical aspects of the solutions, it is useful to consider some of the physically interesting quantities which arise in the comoving approach. The dependence of these quantities on $`z`$ corresponds to two-dimensional projections of orbits in the full state space. Following Carr & Coley, we use: (1) the scale factor $`S`$, which fixes the relation between the comoving radial coordinate $`r`$ and the Schwarzschild radial coordinate $`R=Sr`$, indicating when a solution expands infinitely ($`S\to \infty `$) or encounters a singularity ($`S\to 0`$) at finite values of $`z`$; (2) the velocity $`V`$ of the spheres of constant $`z`$ relative to the fluid, which is important for the identification of event horizons ($`|V|=1`$) and naked singularities; (3) the density profile $`\mu t^2`$, which gives the matter distribution at a given comoving time $`t`$; and (4) the mass function $`2m/R`$, where $`m(r)`$ is the mass within radial coordinate $`r`$, indicating the presence of an apparent horizon ($`2m/R=1`$).
We shall first briefly review the previously known families of solutions, all of which are discussed in more detail elsewhere, where the dependence of the above functions on $`z`$ is shown explicitly. We shall then consider the new asymptotically Minkowski solutions. This is the first discussion of their physical features and the first detailed analysis of their relevance to critical phenomena.
Asymptotically Friedmann solutions
There are two one-parameter sets of solutions that are asymptotic to the flat Friedmann solution, all of which are connected with one of the Friedmann points F. One has positive $`z`$ and the other has negative $`z`$. Two qualitatively different families can be distinguished: (1) expanding-recollapsing solutions (F–K orbits); and (2) ever-expanding (or ever-contracting) solutions (F–C orbits), where C is to be interpreted as an infinitely dispersed state. The latter family contains the flat Friedmann solution itself. Thus the flat Friedmann solution appears both as an equilibrium point F and as an orbit in state space, corresponding to different slicings.
Asymptotically quasi-static solutions
For each value of $`\alpha `$, there is a unique static solution, originally found by Tolman. The corresponding (T–T) orbit traverses the entire state space and spans both positive and negative $`z`$. Furthermore, there is a two-parameter set of solutions with behavior resembling the static solution at large $`|z|`$ (i.e. at early times). They are all associated with K points, corresponding to non-isotropic singularities. As with the asymptotically Friedmann solutions, there are two different families within this class: (1) expanding-recollapsing solutions (K–K orbits); and (2) ever-expanding (or ever-contracting) solutions (K–C orbits). The latter contain the naked-singularity solutions discussed by Ori & Piran and Foglizzo & Henriksen. Unlike the asymptotically Friedmann solutions, the asymptotically quasi-static solutions necessarily span both positive and negative $`z`$.
Asymptotically Minkowski solutions
When $`\alpha >1/5`$, solutions exist that are ‘asymptotically Minkowski’, in the sense that the state-space orbits asymptote to equilibrium points that correspond to Minkowski space. There are actually two subclasses of such solutions, associated with different equilibrium points in state space: Class A solutions are connected with the M points and are described by two parameters. Class B solutions are connected with the $`\stackrel{~}{\mathrm{M}}`$ points and are described by one parameter. Both of these subclasses contain two different families of solutions: (1) singular solutions (K–M, K–$`\stackrel{~}{\mathrm{M}}`$ orbits); and (2) regular solutions (C–M, C–$`\stackrel{~}{\mathrm{M}}`$ orbits). Only the latter contain a sonic point. All these types of solutions are illustrated in Figs. 2 and 3, where the arrows indicate whether solutions are future asymptotic to M or $`\stackrel{~}{\mathrm{M}}`$.
Solutions in class A have $`V\to 1`$, $`S\to \infty `$, $`\mu t^2\to 0`$ and $`2m/R\to 0`$ at some finite value $`z=z_{*}`$. Although this limit is reached at finite $`z`$, it should be pointed out that most investigations of the critical solution use the physical distance $`R`$, which is infinite. Examples of such solutions are illustrated by the dotted curves in Figs. 2 and 3. Solutions in class B have $`V\to V_{*}>1`$, $`S\to \infty `$, $`\mu t^2\to 0`$ and $`2m/R\to 0`$ as $`z\to \infty `$. Examples of these solutions are represented by the dashed curves in Figs. 2 and 3. Both classes are asymptotically dispersive, and solutions asymptotic to K points also have $`S\to 0`$ at a finite value of $`z`$, indicating the formation of a singularity in the past (assuming the time direction indicated in the figures). Even though $`2m/R\to 0`$ for both classes, the mass $`m`$ need not vanish.
It should be emphasized that these solutions are not asymptotically flat in the usual global sense, in which there is a certain radial decay of the curvature towards spatial infinity. We are not considering an isolated system here but rather a fluid spacetime in which the Minkowski geometry is obtained asymptotically along certain coordinate lines. It can be shown that the curvature vanishes asymptotically as the M ($`\stackrel{~}{\mathrm{M}}`$) point is approached and $`r\to \infty `$ (and hence $`t\to \infty `$). Because the fluid becomes infinitely diluted, the situation is analogous to that of the open Friedmann solution, in which the Milne solution is approached asymptotically along certain time-lines at late times.
## III The critical solution
Critical phenomena in gravitational collapse were first studied by Choptuik and remain an active field of research. The solution at the threshold of black-hole formation in spherically symmetric radiation fluid collapse, corresponding to $`\alpha =\frac{1}{3}`$, was studied by Evans & Coleman. They found it to be a self-similar solution distinguished by the following criteria:
(1) It is everywhere analytic, or at least $`C^{\infty }`$. In particular it has a regular center, and also crosses the sonic surface in an analytic way.
(2) It has a collapsing interior surrounded by an expanding exterior. This means that the radial fluid three-velocity $`V_R`$ associated with a Schwarzschild foliation (which is different from the function $`V`$) has exactly one zero.
Subsequently, other authors have used these criteria to investigate the critical solution for other values of $`\alpha `$, as discussed in recent reviews.
The uniqueness of the critical solution can be understood as follows: for each value of the equation-of-state parameter $`\alpha `$, there exists a one-parameter set of solutions with a regular center and a one-parameter set of solutions analytic at the sonic line. Thus it is not surprising that the first condition leads to a discrete set of solutions. There is only one solution in this set that satisfies the second condition, and this is the critical solution.
We now examine the critical solution in terms of both the state space of the self-similar spherically symmetric perfect-fluid solutions and the behavior of the various physical quantities. The results are summarized in Figs. 4 and 5.
Starting from the regular center C, a numerical investigation shows that for all equations of state, the orbit of the critical solution passes through the sonic line and enters the spatially self-similar region (with $`|V|>1`$). It turns out that for $`\alpha `$ in the range $`0<\alpha \lesssim 0.28`$, it is of the asymptotically quasi-static kind: it passes through the spatially self-similar region and enters a second timelike self-similar region, finally reaching another sonic point (indicated by ‘x’ in the figures), which is generally irregular. However, this does not invalidate the solution as being the critical one, since the solution describing the inner collapsing region is usually matched to an asymptotically flat exterior geometry sufficiently far from the center. An example of a solution belonging to this class is given by the full curves in Figs. 4 and 5.
We find that for the limiting case $`\alpha \approx 0.28`$, the critical solution is an asymptotically Minkowski solution of class B, whose orbit ends at an equilibrium point $`\stackrel{~}{\mathrm{M}}`$. In Figs. 4 and 5, this solution corresponds to the dashed lines. For $`0.28\lesssim \alpha <1`$, we find that the critical solution belongs to the asymptotically Minkowski solutions of class A, whose orbit ends at an equilibrium point M. A solution representing this class is indicated by the dotted curves in Figs. 4 and 5.
For $`\alpha \gtrsim \alpha _{*}\approx 0.89`$, earlier investigations indicate that the critical solution is already irregular at the first sonic point. As the matching must be performed outside the sonic point, the solution would then be unphysical. However, Neilsen & Choptuik have recently demonstrated the existence of a regular critical solution for $`\alpha \gtrsim \alpha _{*}`$ as well. Our present investigation supports their analysis.
To understand what happens for $`\alpha =\alpha _{*}`$, we consider equations of state near this value. It turns out that the behavior can be understood in terms of the stability near the sonic line. In order for a solution to be regular at the sonic surface, the corresponding orbit must approach the sonic line along one of (at most) two possible directions. Each of these directions is associated with an eigenvalue – the direction corresponding to the smaller eigenvalue is called dominant and is associated with a one-parameter family of solutions (containing just one $`C^{\infty }`$ solution), the other is called secondary and is associated with an isolated solution. For $`\alpha <\alpha _{*}`$, the critical solution corresponds to the secondary direction. However, when $`\alpha `$ gets close to $`\alpha _{*}`$, the eigenvalue associated with the critical solution approaches that of the other direction. For $`\alpha =\alpha _{*}`$, the eigenvalues (and directions) are equal, corresponding to a degenerate node, and for $`\alpha >\alpha _{*}`$, the critical solution is associated with the dominant direction. Thus a transition from the secondary direction to the dominant direction occurs at $`\alpha =\alpha _{*}`$. This transition results in severe numerical difficulties, so very high numerical precision is required to investigate such solutions. A toy illustration of this node analysis is sketched below.
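The following toy calculation is not the actual similarity equations: it simply shows a 2×2 Jacobian whose eigenvalues cross as a control parameter, standing in for $`\alpha `$, is varied, with the node degenerating when they coincide.

```python
import numpy as np

for mu in (0.5, 1.0, 1.5):            # control parameter standing in for alpha
    J = np.array([[-1.0, 0.3],
                  [ 0.0, -mu]])       # toy Jacobian at the sonic line
    lam = np.sort(np.linalg.eigvals(J).real)
    if np.isclose(lam[0], lam[1]):
        print(f"mu = {mu}: degenerate node -- the two directions merge")
    else:
        # generic orbits approach along the eigendirection of smaller |lambda|
        print(f"mu = {mu}: node with eigenvalues {lam}, "
              f"dominant |lambda| = {min(abs(lam)):.1f}")
```

Near the degeneracy the two regular directions become nearly parallel, which is why an integrator needs very high precision to distinguish the secondary solution from the dominant family.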
## IV Conclusions
Our key predictions can be summarized as follows:
(1) When one gets sufficiently close to the critical solution that the large-$`z`$ behavior can be studied, this solution should have the various asymptotic features we predict for different values of $`\alpha `$.
(2) There should be a sudden transition in the nature of the critical solution as $`\alpha `$ passes through 0.28, with the solution going from the asymptotically quasi-static form to the asymptotically Minkowski form. However, it should be emphasized that the solution is only flat towards null infinity for $`\alpha >1/3`$, so one still needs to match to a non-self-similar region on a spacelike surface.
(3) Although the asymptotically flat limit is reached at finite $`z`$, most critical workers use the physical distance, which is infinite. However, it should be pointed out that in the stiff case ($`\alpha =1`$), the asymptotically flat state is reached at finite physical distance, which should lead to some anomalies.
(4) Although we have not explained why the critical solution is analytic at the sonic point (this presumably relates to the usual stability criterion), we have used the asymptotic features to explain why the analytic solution is unique for given $`\alpha `$. The relationship to solutions which are regular but not analytic at the sonic point is discussed in more detail by Carr & Henriksen.
# Quantum Mechanics of Geometry
## I Introduction
During his Göttingen inaugural address in 1854, Riemann suggested that geometry of space may be more than just a fiducial, mathematical entity serving as a passive stage for physical phenomena, and may in fact have direct physical meaning in its own right. General relativity provided a brilliant confirmation of this vision: curvature of space now encodes the physical gravitational field. This shift is profound. To bring out the contrast, let me recall the situation in Newtonian physics. There, space forms an inert arena on which the dynamics of physical systems –such as the solar system– unfolds. It is like a stage, an unchanging backdrop for all of physics. In general relativity, by contrast, the situation is very different. Einstein’s equations tell us that matter curves space. Geometry is no longer immune to change. It reacts to matter. It is dynamical. It has “physical degrees of freedom” in its own right. In general relativity, the stage disappears and joins the troupe of actors! Geometry is a physical entity, very much like matter.
Now, the physics of this century has shown us that matter has constituents and the 3-dimensional objects we perceive as solids are in fact made of atoms. The continuum description of matter is an approximation which succeeds brilliantly in the macroscopic regime but fails hopelessly at the atomic scale. It is therefore natural to ask: Is the same true of geometry? If so, what is the analog of the ‘atomic scale?’ We know that a quantum theory of geometry should contain three fundamental constants of Nature, $`c,G,\hbar `$, the speed of light, Newton’s gravitational constant and Planck’s constant. Now, as Planck pointed out in his celebrated paper that marks the beginning of quantum mechanics, there is a unique combination, $`\ell _P=\sqrt{\hbar G/c^3}`$, of these constants which has dimension of length. ($`\ell _P\approx 10^{-33}`$ cm.) It is now called the Planck length. Experience has taught us that the presence of a distinguished scale in a physical theory often marks a potential transition; physics below the scale can be very different from that above the scale. Now, all of our well-tested physics occurs at length scales much bigger than than $`\ell _P`$. In this regime, the continuum picture works well. A key question then is: Will it break down at the Planck length? Does geometry have constituents at this scale? If so, what are its atoms? Its elementary excitations? Is the space-time continuum only a ‘coarse-grained’ approximation? Is geometry quantized? If so, what is the nature of its quanta?
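As a quick numerical aside, the value quoted for the Planck length follows directly from the fundamental constants:

```python
import math

hbar = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
c = 2.998e8         # m/s

l_P = math.sqrt(hbar * G / c**3)
print(f"l_P = {l_P:.2e} m = {l_P * 100:.1e} cm")   # ~1.6e-33 cm
```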
To probe such issues, it is natural to look for hints in the procedures that have been successful in describing matter. Let us begin by asking what we mean by quantization of physical quantities. Take a simple example –the hydrogen atom. In this case, the answer is clear: while the basic observables –energy and angular momentum– take on a continuous range of values classically, in quantum mechanics their eigenvalues are discrete; they are quantized. So, we can ask if the same is true of geometry. Classical geometrical quantities such as lengths, areas and volumes can take on continuous values on the phase space of general relativity. Are the eigenvalues of corresponding quantum operators discrete? If so, we would say that geometry is quantized and the precise eigenvalues and eigenvectors of geometric operators would reveal its detailed microscopic properties.
Thus, it is rather easy to pose the basic questions in a precise fashion. Indeed, they could have been formulated soon after the advent of quantum mechanics. Answering them, on the other hand, has proved to be surprisingly difficult. The main reason, I believe, is the inadequacy of standard techniques. More precisely, to examine the microscopic structure of geometry, we must treat Einstein gravity quantum mechanically, i.e., construct at least the basics of a quantum theory of the gravitational field. Now, in the traditional approaches to quantum field theory, one begins with a continuum, background geometry. To probe the nature of quantum geometry, on the other hand, we should not begin by assuming the validity of this picture. We must let quantum gravity decide whether this picture is adequate; the theory itself should lead us to the correct microscopic model of geometry.
With this general philosophy, in this article I will summarize the picture of quantum geometry that has emerged from a specific approach to quantum gravity. This approach is non-perturbative. In perturbative approaches, one generally begins by assuming that space-time geometry is flat and incorporates gravity –and hence curvature– step by step by adding up small corrections. Discreteness is then hard to unravel. (The situation can be illustrated by a harmonic oscillator: while the exact energy levels of the oscillator are discrete, it would be very difficult to “see” this discreteness if one began with a free particle whose energy levels are continuous and then tried to incorporate the effects of the oscillator potential step by step via perturbation theory.) In the non-perturbative approach, by contrast, there is no background metric at all. All we have is a bare manifold to start with. All fields –matter as well as gravity/geometry– are treated as dynamical from the beginning. Consequently, the description can not refer to a background metric. Technically this means that the full diffeomorphism group of the manifold is respected; the theory is generally covariant.
As we will see, this fact leads one to Hilbert spaces of quantum states which are quite different from the familiar Fock spaces of particle physics. Now gravitons –the three dimensional wavy undulations on a flat metric– do not represent fundamental excitations. Rather, the fundamental excitations are one dimensional. Microscopically, geometry is rather like a polymer. Recall that, although polymers are intrinsically one dimensional, when densely packed in suitable configurations they can exhibit properties of a three dimensional system. Similarly, the familiar continuum picture of geometry arises as an approximation: one can regard the fundamental excitations as ‘quantum threads’ with which one can ‘weave’ continuum geometries. That is, the continuum picture arises upon coarse-graining of the semi-classical ‘weave states’. Gravitons are no longer the fundamental mediators of the gravitational interaction. They now arise only as approximate notions. They represent perturbations of weave states and mediate the gravitational force only in the semi-classical approximation. Because the non-perturbative states are polymer-like, geometrical observables turn out to have discrete spectra. They provide a rather detailed picture of quantum geometry from which physical predictions can be made.
The article is divided into two parts. In the first, I will indicate how one can reformulate general relativity so that it resembles gauge theories. This formulation provides the starting point for the quantum theory. In particular, the one-dimensional excitations of geometry arise as the analogs of ‘Wilson loops’ which are themselves analogs of the line integrals $`\mathrm{exp}(i\oint A\cdot d\ell )`$ of electro-magnetism. In the second part, I will indicate how this description leads us to a quantum theory of geometry. I will focus on area operators and show how the detailed information about the eigenvalues of these operators has interesting physical consequences, e.g., to the process of Hawking evaporation of black holes.
I should emphasize that this is not a technical review. Rather, it is written in the same spirit that drives Jayant’s educational initiatives. I thought this would be a fitting way to honor Jayant since these efforts have occupied so much of his time and energy in recent years. Thus my aim is to present to beginning researchers an overall, semi-quantitative picture of the main ideas. Therefore, the article is written at the level of colloquia in physics departments in the United States. I will also make some historic detours of general interest. At the end, however, I will list references where the details of the central results can be found.
## II From metrics to connections
### A Gravity versus other fundamental forces
General relativity is normally regarded as a dynamical theory of metrics —tensor fields that define distances and hence geometry. It is this fact that enabled Einstein to code the gravitational field in the Riemannian curvature of the metric. Let me amplify with an analogy. Just as position serves as the configuration variable in particle dynamics, the three dimensional metric of space can be taken to be the configuration variable of general relativity. Given the initial position and velocity of a particle, Newton’s laws provide us with its trajectory in the position space. Similarly, given a three dimensional metric and its time derivative at an initial instant, Einstein’s equations provide us with a four dimensional space-time which can be regarded as a trajectory in the space of 3-metrics. (Actually, only six of the ten Einstein equations provide the evolution equations; the other four do not involve time-derivatives at all and are thus constraints on the initial values of the metric and its time derivative. However, if the constraint equations are satisfied initially, they continue to be satisfied at all times.)
However, this emphasis on the metric sets general relativity apart from all other fundamental forces of Nature. Indeed, in the theory of electro-weak and strong interactions, the basic dynamical variable is a (matrix-valued) vector potential, or a connection. Like general relativity, these theories are also geometrical. The connection enables one to parallel-transport objects along curves. In electrodynamics, the object is a charged particle such as an electron; in chromodynamics, it is a particle with internal color, such as a quark. Generally, if we move the object around a closed loop, we find that its state does not return to the initial value; it is rotated by an unitary matrix. In this case, the connection is said to have curvature and the unitary matrix is a measure of the curvature in a region enclosed by the loop. In the case of electrodynamics, the connection is determined by the vector potential and the curvature by the electro-magnetic field strength.
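A toy numerical illustration of this statement (not taken from the article): transporting a two-component internal state around a small square loop with a constant matrix-valued connection built from Pauli matrices yields a holonomy that deviates from the identity by roughly the loop area times the curvature, here the commutator of the connection components.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli matrices
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
Ax, Ay = 0.3 * sx, 0.3 * sy        # constant connection components
a = 0.01                           # side length of the square loop

def transport(A, length):
    """Parallel-transport matrix along a straight segment."""
    return expm(1j * length * A)

# Path-ordered product around the square: +x, +y, -x, -y.
U = (transport(Ay, -a) @ transport(Ax, -a) @
     transport(Ay, a) @ transport(Ax, a))

deviation = np.linalg.norm(U - np.eye(2))
curvature = np.linalg.norm(a**2 * (Ax @ Ay - Ay @ Ax))
print(deviation, curvature)    # both ~2.5e-5: deviation ~ area x curvature
```

For an abelian connection the commutator vanishes and the loop returns the state unchanged, which is exactly why curvature is the non-trivial content of a non-abelian gauge theory.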
Since the metric also gives rise to curvature, it is natural to ask if there is a relation between metrics and connections. The answer is in the affirmative. Every metric defines a connection —called the Levi-Civita connection of the metric. The object that the connection enables one to parallel transport is a vector. (It is this connection that determines the geodesics, i.e. the trajectories of particles in absence of non-gravitational forces.) It is therefore natural to ask if one can not use this connection as the basic variable in general relativity. If so, general relativity would be cast in a language that is rather similar to gauge theories and the description of the (general relativistic) gravitational interaction would be very similar to that of the other fundamental interactions of Nature. It turns out that the answer is in the affirmative. Furthermore, both Einstein and Schrödinger gave such a reformulation of general relativity. Why is this fact then not generally known? Indeed, I know of no textbook on general relativity which even mentions it. One reason is that in their reformulation the basic equations are somewhat complicated —but not much more complicated, I think, than the standard ones in terms of the metric. A more important reason is that we tend to think of distances, light cones and causality as fundamental. These are directly determined by the metric and in a connection formulation, the metric is a ‘derived’ rather than a fundamental concept. But in the last few years, I have come to the conclusion that the real reason why the connection formulation of Einstein and Schrödinger has remained so obscure may lie in an interesting historical episode. I will return to this point at the end of this section.
### B Metrics versus connections
Modern day researchers re-discovered connection theories of gravity after the invention and successes of gauge theories for other interactions. Generally, however, these formulations lead one to theories which are quite distinct from general relativity, and the stringent experimental tests of general relativity often suffice to rule them out. There is, however, a reformulation of general relativity itself in which the basic equations are simpler than the standard ones: while Einstein’s equations are non-polynomial in terms of the metric and its conjugate momentum, they turn out to be low-order polynomials in terms of the new connection and its conjugate momentum. Furthermore, just as the simplest particle trajectories in space-time are given by geodesics, the ‘trajectory’ determined by the time evolution of this connection according to Einstein’s equation turns out to be a geodesic in the configuration space of connections.
In this formulation, the phase space of general relativity is identical to that of the Yang-Mills theory which governs weak interactions. Recall first that in electrodynamics, the (magnetic) vector potential constitutes the configuration variable and the electric field serves as the conjugate momentum. In weak interactions and general relativity, the configuration variable is a matrix-valued vector potential; it can be written as $`\vec{A}_i\tau _i`$ where $`\vec{A}_i`$ is a triplet of vector fields and $`\tau _i`$ are the Pauli matrices. The conjugate momenta are represented by $`\vec{E}_i\tau _i`$ where $`\vec{E}_i`$ is a triplet of vector fields. (As usual, summation over the repeated index $`i`$ is assumed. Also, technically each $`\vec{A}_i`$ is a 1-form rather than a vector field, and each $`\vec{E}_i`$ is a vector density of weight one, i.e., the natural dual of a 2-form.) Given a pair $`(\vec{A}_i,\vec{E}_i)`$ (satisfying appropriate conditions as noted above), the field equations of the two theories determine the complete time-evolution, i.e., a dynamical trajectory.
The field equations –and the Hamiltonians governing them– of the two theories are of course very different. In the case of weak interactions, we have a background space-time and we can use its metric to construct the Hamiltonian. In general relativity, we do not have a background metric. On the one hand this makes life very difficult since we do not have a fixed notion of distances or causal structures; these notions are to arise from the solution of the equations we are trying to write down! On the other hand, there is also tremendous simplification: Because there is no background metric, there are very few mathematically meaningful, gauge invariant expressions of the Hamiltonian that one can write down. (As we will see, this theme repeats itself in the quantum theory.) It is a pleasant surprise that the simplest non-trivial expression one can construct from the connection and its conjugate momentum is in fact the correct one, i.e., is the Hamiltonian of general relativity! The expression is at most quadratic in $`\vec{A}_i`$ and at most quadratic in $`\vec{E}_i`$. The similarity with gauge theories opens up new avenues for quantizing general relativity and the simplicity of the field equations makes the task considerably easier.
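To make this polynomial simplicity concrete, the constraint equations can be written out schematically. The transcription below suppresses constant factors and density weights and uses the original, complex-valued version of the new variables (real formulations add extrinsic-curvature terms); it is meant only to display the low polynomial order claimed above, not to serve as a working set of equations.

```latex
% Constraints of general relativity in connection variables (schematic;
% factors and density weights suppressed; complex "new variables" form).
\begin{align}
  \mathcal{G}_i &= D_a E^a_i \approx 0
      && \text{(Gauss: internal gauge rotations)} \\
  \mathcal{V}_b &= E^a_i F^i_{ab} \approx 0
      && \text{(vector: spatial diffeomorphisms)} \\
  \mathcal{S}   &= \epsilon^{ijk}\, E^a_i E^b_j F_{ab\,k} \approx 0
      && \text{(scalar: time evolution)}
\end{align}
% Here F^i_{ab} = 2\partial_{[a}A^i_{b]} + \epsilon^i{}_{jk} A^j_a A^k_b is the
% curvature of the connection, so each constraint is at most quadratic in the
% E's and at most quadratic in the A's.
```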
What is the physical meaning of these new basic variables of general relativity? As mentioned before, connections tell us how to parallel transport various physical entities around curves. The Levi-Civita connection tells us how to parallel transport vectors. The new connection, $`\vec{A}_i`$, on the other hand, determines the parallel transport of left handed spin-$`\frac{1}{2}`$ particles (such as the fermions in the standard model of particle physics) —the so called chiral fermions. These fermions are mathematically represented by spinors which, as we know from elementary quantum mechanics, can be roughly thought of as ‘square roots of vectors’. Not surprisingly, therefore, the new connection is not completely determined by the metric alone. It requires additional information which roughly is a square-root of the metric, or a tetrad. The conjugate momenta $`\vec{E}_i`$ represent restrictions of these tetrads to space. They can be interpreted as spatial triads, i.e., as ‘square-roots’ of the metric of the 3-dimensional space. Thus, information about the Riemannian geometry of space is coded directly in these momenta. The (space and) time-derivatives of the triads are coded in the connection.
To summarize, there is a formulation of general relativity which brings it closer to theories of other fundamental interactions. Furthermore, in this formulation, the field equations simplify greatly. Thus, it provides a natural point of departure for constructing a quantum theory of gravity and for probing the nature of quantum geometry non-perturbatively.
### C Historical detour
To conclude this section, let me return to the piece of history involving Einstein and Schrödinger that I mentioned earlier. In the forties, both men were working on unified field theories. They were intellectually very close. Indeed, Einstein wrote to Schrödinger saying that he was perhaps the only one who was not ‘wearing blinkers’ in regard to fundamental questions in science, and Schrödinger credited Einstein for the inspiration behind his own work that led to the Schrödinger equation. Einstein was in Princeton, Schrödinger in Dublin. During the years 1946-47, they frequently exchanged ideas on unified field theory and, in particular, on the issue of whether connections should be regarded as fundamental or metrics. In fact, the dates on their letters often show that the correspondence was going back and forth with astonishing speed. It reveals how quickly they understood the technical material the other had sent, how they hesitated, how they teased each other. Here are a few quotes:
The whole thing is going through my head like a millwheel: To take $`\mathrm{\Gamma }`$ \[the connection\] alone as the primitive variable or the $`g`$’s \[metrics\] and $`\mathrm{\Gamma }`$’s ? … —Schrödinger, May 1st, 1946.
How well I understand your hesitating attitude! I must confess to you that inwardly I am not so certain … We have squandered a lot of time on this thing, and the results look like a gift from devil’s grandmother. —Einstein, May 20th, 1946.
Einstein was expressing doubts about using the Levi-Civita connection alone as the starting point which he had advocated at one time. Schrödinger wrote back that he laughed very hard at the phrase ‘devil’s grandmother’. In another letter, Einstein called Schrödinger ‘a clever rascal’. Schrödinger was delighted and took it to be a high honor. This continued all through 1946. Then, in the beginning of 1947, Schrödinger thought he had made a breakthrough. He wrote to Einstein:
Today, I can report on a real advance. May be you will grumble frightfully for you have explained recently why you don’t approve of my method. But very soon, you will agree with me… —Schrödinger, January 26th, 1947
Schrödinger sincerely believed that his breakthrough was revolutionary. (The ‘breakthrough’ was to drop the requirement that the (Levi-Civita) connection be symmetric, i.e., to allow for torsion.) Privately, he spoke of a second Nobel prize. The very next day after he wrote to Einstein, he gave a seminar in the Dublin Institute of Advanced Studies. Both the Taoiseach (the Irish prime minister) and newspaper reporters were invited. The day after, the following headlines appeared:
Twenty persons heard and saw history being made in the world of physics. … The Taoiseach was in the group of professors and students. ..\[To a question from the reporter\] Professor Schrödinger replied “This is the generalization. Now the Einstein theory becomes simply a special case …” —Irish Press, January 28th, 1947
Not surprisingly, the headlines were picked up by the New York Times, which obtained photocopies of Schrödinger’s paper and sent them to prominent physicists –including of course Einstein– for comments. As Walter Moore, Schrödinger’s biographer, puts it, Einstein could hardly believe that such grandiose claims had been made based on what was at best a small advance in an area of work that they both had been pursuing for some time along parallel lines. He prepared a carefully worded response to the request from the New York Times:
It seems undesirable to me to present such preliminary attempts to the public. … Such communiqués given in sensational terms give the lay public misleading ideas about the character of research. The reader gets the impression that every five minutes there is a revolution in Science, somewhat like a coup d’état in some of the smaller unstable republics. …
Einstein’s comments were also carried by the international press. On seeing them, Schrödinger wrote a letter of apology to Einstein citing his desire to improve the financial conditions of physicists in the Dublin Institute as a reason for the exaggerated account. It seems likely that this ‘explanation’ only worsened the situation. Einstein never replied. He also stopped scientific communication with Schrödinger for three years.
The episode must have been shocking to those few who were exploring general relativity and unified field theories at the time. Could it be that this episode effectively buried the desire to follow up on connection formulations of general relativity until an entirely new generation of physicists who were blissfully unaware of this episode came on the scene?
## III Quantum Geometry
### A General Setting
Now that we have a connection formulation of general relativity, let us consider the problem of quantization. Recall first that in the quantum description of a particle, states are represented by suitable wave functions $`\mathrm{\Psi }(\vec{x})`$ on the classical configuration space of the particle. Similarly, quantum states of the gravitational field are represented by appropriate wave functions $`\mathrm{\Psi }(\vec{A}_i)`$ of connections. Just as the momentum operator in particle mechanics is represented by $`\widehat{P}_I\mathrm{\Psi }=-i\hbar \,\partial \mathrm{\Psi }/\partial x_I`$ (with $`I=1,2,3`$), the triad operators are represented by $`\widehat{\vec{E}}_i\mathrm{\Psi }=-i\hbar G\,\delta \mathrm{\Psi }/\delta \vec{A}_i`$. The task is to express geometric quantities, such as lengths of curves, areas of surfaces and volumes of regions, in terms of triads using ordinary differential geometry and then promote these expressions to well-defined operators on the Hilbert space of quantum states. In principle, the task is rather similar to that in quantum mechanics where we first express observables such as angular momentum or the Hamiltonian in terms of configuration and momentum variables $`\vec{x}`$ and $`\vec{p}`$ and then promote them to quantum theory as well-defined operators on the quantum Hilbert space.
In quantum mechanics, the task is relatively straightforward; the only potential problem is the choice of factor ordering. In the present case, by contrast, we are dealing with a field theory, i.e., a system with an infinite number of degrees of freedom. Consequently, in addition to factor ordering, we face the much more difficult problem of regularization. Let me explain qualitatively how this arises. A field operator, such as the triad mentioned above, excites infinitely many degrees of freedom. Technically, its expectation values are distributions rather than smooth fields; they do not take precise values at a given point in space. To obtain numbers, we have to integrate the distribution against a test function, which extracts from it a ‘bit’ of information. As we change our test or smearing field, we get more and more information. (Take the familiar Dirac $`\delta `$-distribution $`\delta (x)`$; it does not have a well-defined value at $`x=0`$. Yet, we can extract the full information contained in $`\delta (x)`$ through the formula $`\int \delta (x)f(x)\,dx=f(0)`$ for all test functions $`f(x)`$.) Thus, in a precise sense, field operators are distribution-valued. Now, as is well known, the product of distributions is not well-defined. If we attempt naively to give meaning to it, we obtain infinities, i.e., a senseless result. Unfortunately, all geometric operators involve rather complicated (in fact non-polynomial) functions of the triads. So, the naive expressions of the corresponding quantum operators are typically meaningless. The key problem is to regularize these expressions, i.e., to extract well-defined operators from the formal expressions in a coherent fashion.
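A toy numerical illustration of why products of distributions are dangerous, with all parameter choices invented for the example: model the δ-distribution by a Gaussian of width ε. Smearing against a test function converges to a finite number as ε shrinks, while the integral of the ‘square’ blows up.

```python
import numpy as np

def delta_eps(x, eps):
    # Gaussian regularization of the Dirac delta: -> delta(x) as eps -> 0
    return np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)

x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
f = np.cos(3.0 * x)                      # a smooth test function with f(0) = 1

for eps in (1e-1, 1e-2, 1e-3):
    d = delta_eps(x, eps)
    smeared = np.sum(d * f) * dx         # -> f(0): the smeared value is finite
    squared = np.sum(d * d) * dx         # ~ 1/(2 sqrt(pi) eps): diverges
    print(f"eps={eps:.0e}  <delta,f> = {smeared:.4f}  'int delta^2' = {squared:.1f}")
```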
This problem is not new; it arises in all physically interesting quantum field theories. However, as I mentioned in the Introduction, in other theories one has a background space-time metric and it is invariably used in a critical way in the process of regularization. For example, consider the electro-magnetic field. We know that the energy, i.e., the Hamiltonian of the theory, is given by $`H=\int (\vec{E}\cdot \vec{E}+\vec{B}\cdot \vec{B})\,d^3x`$. Now, in the quantum theory, $`\widehat{\vec{E}}`$ and $`\widehat{\vec{B}}`$ are both operator-valued distributions and so their square is ill-defined. But then, using the background flat metric, one Fourier decomposes these distributions, identifies creation and annihilation operators and extracts a well-defined Hamiltonian operator by normal ordering, i.e., by moving all the annihilation operators to the right of the creation operators. This procedure removes the unwanted and unphysical infinite zero point energy from the formal expression, and the subtraction makes the operator well-defined. In the present case, on the other hand, we are trying to construct a quantum theory of geometry/gravity and do not have a flat metric –or indeed, any metric– in the background. Therefore, many of the standard regularization techniques are no longer available.
### B Geometric operators
Fortunately, between 1992 and 1995, a new functional calculus was developed on the space of connections $`\vec{A}_i`$ —i.e., on the configuration space of the theory. This calculus is mathematically rigorous and makes no reference at all to a background space-time geometry; it is generally covariant. It provides a variety of new techniques which make the task of regularization feasible. First of all, there is a well-defined integration theory on this space. To actually evaluate integrals and define the Hilbert space of quantum states, one needs a measure: given a measure on the space of connections, we can consider the space of square-integrable functions, which can serve as the Hilbert space of quantum states. It turns out that there is a preferred measure, singled out by the physical requirement that the (gauge covariant versions of the) configuration and momentum operators be self-adjoint. This measure is diffeomorphism invariant and thus respects the underlying symmetries coming from general covariance. Thus, there is a natural Hilbert space of states to work with. (This is called the kinematical Hilbert space; it enables one to formulate the quantum Einstein’s (or supergravity) equations. The final, physical Hilbert space will consist of states which are solutions to these equations.) Let us denote it by $`\mathcal{H}`$. The differential calculus enables one to introduce physically interesting operators on this Hilbert space and regulate them in a generally covariant fashion. As in the classical theory, the absence of a background metric is both a curse and a blessing. On the one hand, because we have very little structure to work with, many of the standard techniques simply fail to carry over. On the other hand, at least for geometric operators, the choice of viable expressions is now severely limited, which greatly simplifies the task of regularization.
The general strategy is the following. The Hilbert space $`\mathcal{H}`$ is the space of square-integrable functions $`\mathrm{\Psi }(\vec{A}_i)`$ of connections $`\vec{A}_i`$. A key simplification arises because it can be obtained as the (projective) limit of Hilbert spaces associated with systems with only a finite number of degrees of freedom. More precisely, given any graph $`\gamma `$ (which one can intuitively think of as a ‘floating lattice’) in the physical space, using techniques which are very similar to those employed in lattice gauge theory, one can construct a Hilbert space $`\mathcal{H}_\gamma `$ for a quantum mechanical system with $`3N`$ degrees of freedom, where $`N`$ is the number of edges of the graph. (The factor $`3`$ comes from the dimension of the gauge group $`\mathrm{SU}(2)`$, which acts on chiral spinors. The mathematical structure of the gauge rotations induced by this $`\mathrm{SU}(2)`$ is exactly the same as that in the angular-momentum theory of spin-$`\frac{1}{2}`$ particles in elementary quantum mechanics.) Roughly, these Hilbert spaces know only about how the connection parallel transports chiral fermions along the edges of the graph and not elsewhere. That is, the graph is a mathematical device to extract $`3N`$ ‘bits of information’ from the full, infinite-dimensional information contained in the connection, and $`\mathcal{H}_\gamma `$ is the sub-space of $`\mathcal{H}`$ consisting of those functions of connections which depend only on these $`3N`$ bits. (Roughly, it is like focusing on only $`3N`$ components of a vector with an infinite number of components and considering functions which depend only on these $`3N`$ components, i.e., are constant along the orthogonal directions.) To get the full information, we need all possible graphs. Thus, a function of connections in $`\mathcal{H}`$ can be specified by fixing a function in $`\mathcal{H}_\gamma `$ for every graph $`\gamma `$ in the physical space. Of course, since two distinct graphs can share edges, the collection of functions on the $`\mathcal{H}_\gamma `$ must satisfy certain consistency conditions. These lie at the technical heart of various constructions and proofs.
The fact that $`\mathcal{H}`$ is the (projective) limit of the $`\mathcal{H}_\gamma `$ breaks up any given problem in quantum geometry into a set of problems in quantum mechanics. Thus, for example, to define operators on $`\mathcal{H}`$, it suffices to define a consistent family of operators on $`\mathcal{H}_\gamma `$ for each $`\gamma `$. This makes the task of defining geometric operators feasible. I want to emphasize, however, that the introduction of graphs is only for technical convenience. Unlike in lattice gauge theory, we are not defining the theory via a continuum limit (in which the lattice spacing goes to zero). Rather, the full Hilbert space $`\mathcal{H}`$ of the continuum theory is already well-defined. Graphs are introduced only for practical calculations. Nonetheless, they bring out the one-dimensional character of quantum states/excitations of geometry: It is because ‘most’ states in $`\mathcal{H}`$ can be realized as elements of $`\mathcal{H}_\gamma `$ for some $`\gamma `$ that quantum geometry has a ‘polymer-like’ character.
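A small sketch of the ‘$`3N`$ bits’ idea (the graph, the connection profile, and the step size are all invented for illustration): each edge carries one SU(2) holonomy, built by composing closed-form SU(2) exponentials along the edge, and a state in $`\mathcal{H}_\gamma `$ is just a function of those group elements; here, a Wilson-loop-like trace for a two-edge closed graph.

```python
import numpy as np

# Pauli matrices: basis of su(2), acting on chiral spinors.
TAU = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def su2_exp(v):
    """Closed form of exp(-(i/2) v.tau) for a real 3-vector v (exactly in SU(2))."""
    theta = np.linalg.norm(v)
    if theta < 1e-14:
        return np.eye(2, dtype=complex)
    n_tau = sum((vi / theta) * t for vi, t in zip(v, TAU))
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_tau

def holonomy(samples, ds):
    """Path-ordered product of SU(2) steps along one edge.
    samples: (n_steps, 3) array of connection components A^i along the edge."""
    g = np.eye(2, dtype=complex)
    for A in samples:
        g = su2_exp(A * ds) @ g
    return g

rng = np.random.default_rng(0)
# Two edges forming a closed loop, with an invented connection profile:
g1 = holonomy(rng.normal(size=(1000, 3)), ds=1e-2)
g2 = holonomy(rng.normal(size=(1000, 3)), ds=1e-2)

# A state in H_gamma depends on the connection only through (g1, g2), e.g.
psi = np.trace(g2 @ g1)     # gauge-invariant Wilson loop for the closed graph
print(psi)
```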
Let me now outline the result of applying this procedure for geometric operators. Suppose we are given a surface $`S`$, defined in local coordinates by $`x_3=\mathrm{const}`$. The classical formula for the area of the surface is $`A_S=\int d^2x\,\sqrt{E_i^3E_i^3}`$, where $`E_i^3`$ are the third components of the vectors $`\vec{E}_i`$. As is obvious, this expression is non-polynomial in the basic variables $`\vec{E}_i`$. Hence, off-hand, it would seem very difficult to write down the corresponding quantum operator. However, thanks to the background independent functional calculus, the operator can in fact be constructed rigorously.
To specify its action, let us consider a state which belongs to $`\mathcal{H}_\gamma `$ for some $`\gamma `$. Then, the action of the final, regularized operator $`\widehat{A}_S`$ is as follows. If the graph has no intersection with the surface, the operator simply annihilates the state. If there are intersections, it acts at each intersection via the familiar angular momentum operators associated with $`SU(2)`$. This simple form is a direct consequence of the fact that we do not have a background geometry: given a graph and a surface, the diffeomorphism invariant information one can extract lies in their intersections. To specify the action of the operator in detail, let me suppose that the graph $`\gamma `$ has $`N`$ edges. Then the state $`\mathrm{\Psi }`$ has the form $`\mathrm{\Psi }(\vec{A}_i)=\psi (g_1,\dots ,g_N)`$ for some function $`\psi `$ of the $`N`$ variables $`g_1,\dots ,g_N`$, where $`g_k\in SU(2)`$ denotes the spin-rotation that a chiral fermion undergoes if parallel transported along the $`k`$-th edge using the connection $`\vec{A}_i`$. Since the $`g_k`$ represent the possible rotations of spins, angular momentum operators have a natural action on them. In terms of these, we can introduce ‘vertex operators’ associated with each intersection point $`v`$ between $`S`$ and $`\gamma `$:
$$\widehat{O}_v\mathrm{\Psi }(A)=\underset{I,L}{\sum }k(I,L)\,\vec{J}_I\cdot \vec{J}_L\,\psi (g_1,\dots ,g_N)$$
(1)
where $`I,L`$ run over the edges of $`\gamma `$ at the vertex $`v`$, $`k(I,L)=0,\pm 1`$ depending on the orientation of the edges $`I,L`$ at $`v`$, and $`\vec{J}_I`$ are the three angular momentum operators associated with the $`I`$-th edge. (Thus, the $`\vec{J}_I`$ act only on the argument $`g_I`$ of $`\psi `$ and the action is via the three left invariant vector fields on $`SU(2)`$.) Note that the vertex operators resemble the Hamiltonian of a spin system, $`k(I,L)`$ playing the role of the coupling constant. The area operator is just a sum of the square-roots of the vertex operators:
$$\widehat{A}_S=\frac{G\hbar }{2c^3}\underset{v}{\sum }|O_v|^{\frac{1}{2}}$$
(2)
Thus, the area operator is constructed from angular momentum-like operators. Note that the coefficient in front of the sum is just $`\frac{1}{2}\ell _P^2`$, one half of the square of the Planck length $`\ell _P`$. This fact will be important later.
Because of the simplicity of these operators, their complete spectrum –i.e., full set of eigenvalues– is known explicitly: Possible eigenvalues $`a_S`$ are given by
$$a_S=\frac{\ell _P^2}{2}\underset{v}{\sum }\left[2j_v^{(d)}(j_v^{(d)}+1)+2j_v^{(u)}(j_v^{(u)}+1)-j_v^{(d+u)}(j_v^{(d+u)}+1)\right]^{\frac{1}{2}}$$
(3)
where $`v`$ labels a finite set of points in $`S`$ and $`j^{(d)},j^{(u)}`$ and $`j^{(d+u)}`$ are non-negative half-integers assigned to each $`v`$, subject to the usual inequality
$$j^{(d)}+j^{(u)}\ge j^{(d+u)}\ge |j^{(d)}-j^{(u)}|.$$
(4)
from the theory of addition of angular momentum in elementary quantum mechanics. Thus the entire spectrum is discrete; areas are indeed quantized! This discreteness holds also for the length and the volume operators. Thus the expectation that the continuum picture may break down at the Planck scale is borne out fully. Quantum geometry is very different from the continuum picture. This may be the fundamental reason for the failure of perturbative approaches to quantum gravity.
Let us now examine a few properties of the spectrum. The lowest eigenvalue is of course zero. The next lowest eigenvalue may be called the area gap. Interestingly, the area gap is sensitive to the topology of the surface $`S`$. If $`S`$ is open, it is $`\frac{\sqrt{3}}{4}\ell _P^2`$. If $`S`$ is a closed surface –such as a 2-torus in a 3-torus– which fails to divide the spatial 3-manifold into an ‘inside’ and an ‘outside’ region, the gap turns out to be larger, $`\frac{2}{4}\ell _P^2`$. If $`S`$ is a closed surface –such as a 2-sphere in $`R^3`$– which divides space into an ‘inside’ and an ‘outside’ region, the area gap turns out to be even larger; it is $`\frac{2\sqrt{2}}{4}\ell _P^2`$. Another interesting feature is that in the large area limit, the eigenvalues crowd together. This follows directly from the form of the eigenvalues given above. Indeed, one can show that for large eigenvalues $`a_S`$, the difference $`\mathrm{\Delta }a_S`$ between consecutive eigenvalues goes as $`\mathrm{\Delta }a_S\sim \mathrm{exp}(-\sqrt{a_S/\ell _P^2})\,\ell _P^2`$. Thus, $`\mathrm{\Delta }a_S`$ goes to zero very rapidly. (The crowding is noticeable already for low values of $`a_S`$. For example, if $`S`$ is open, there is only one non-zero eigenvalue with $`a_S<0.5\,\ell _P^2`$, seven with $`a_S<\ell _P^2`$ and 98 with $`a_S<2\ell _P^2`$.) Intuitively, this explains why the continuum limit works so well.
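These statements are easy to check by direct enumeration, since the eigenvalue formula only involves half-integer spins subject to the inequality (4). The minimal sketch below does this for single-vertex intersections; the spin cutoff is an arbitrary choice, and the integrality condition on $`j^{(d)}+j^{(u)}+j^{(d+u)}`$ (implicit in the ‘usual’ rules of angular-momentum addition) is an assumption of the sketch.

```python
import numpy as np

def vertex_eigenvalue(jd, ju, jdu):
    """One-vertex contribution to a_S, in units of l_P^2, from Eq. (3)."""
    return 0.5 * np.sqrt(2*jd*(jd + 1) + 2*ju*(ju + 1) - jdu*(jdu + 1))

spins = [k / 2.0 for k in range(9)]          # j = 0, 1/2, ..., 4 (cutoff)
values = set()
for jd in spins:
    for ju in spins:
        if jd == ju == 0.0:
            continue
        for jdu in spins:
            # Eq. (4) plus integrality of jd + ju + jdu, as in ordinary
            # addition of angular momenta.
            if abs(jd - ju) <= jdu <= jd + ju and (jd + ju + jdu) % 1 == 0:
                values.add(round(vertex_eigenvalue(jd, ju, jdu), 10))

spectrum = sorted(values)
print("area gap (open surface):", spectrum[0])    # sqrt(3)/4 ~ 0.433 l_P^2
print("lowest one-vertex eigenvalues:", spectrum[:8])
# Multi-vertex eigenvalues are sums of these; already the one-vertex values
# visibly crowd together as a_S grows.
```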
### C Physical consequences: details matter!
However, one might wonder if such detailed properties of geometric operators can have any ‘real’ effect. After all, since the Planck length is so small, one would think that the classical and semi-classical limits should work irrespective of, e.g., whether or not the eigenvalues crowd. For example, let us consider not the most general eigenstates of the area operator $`\widehat{A}_S`$ but –as was first done in the development of the subject– the simplest ones. These correspond to graphs which have the simplest intersections with $`S`$. For example, $`n`$ edges of the graph may just pierce $`S`$, each one separately, so that at each of the $`n`$ vertices there is just a straight line passing through. For these states, the eigenvalues are $`a_S=(\sqrt{3}/2)\,n\,\ell _P^2`$. Thus, here, the level spacing $`\mathrm{\Delta }a_S`$ is uniform, like that of the Hamiltonian of a simple harmonic oscillator. If we restrict ourselves to these simplest eigenstates, even for large eigenvalues, the level spacing does not go to zero. Suppose for a moment that this were the full spectrum of the area operator. Wouldn’t the semi-classical approximation still work, since, although uniform, the level-spacing is so small?
Surprisingly, the answer is in the negative! What is perhaps even more surprising is that the evidence comes from unexpected quarters: the Hawking evaporation of large black holes. More precisely, we will see that if $`\mathrm{\Delta }a_S`$ had failed to vanish sufficiently fast, the semi-classical approximation to quantum gravity, used in the derivation of the Hawking process, would have to fail in an important way. The effects coming from area quantization would have implied that even for large macroscopic black holes of, say, a thousand solar masses, we could not trust semi-classical arguments.
Let me explain this point in some detail. Hawking’s original derivation was carried out in the framework of quantum field theory in curved space-times, which assumes that there is a specific underlying continuum space-time and explores the effects of curvature of this space-time on quantum matter fields. In this approximation, Hawking found that classical black hole geometries are such that there is a spontaneous emission which has a Planckian spectrum at infinity. Thus, black holes, seen from far away, resemble black bodies, and the associated temperature turns out to be inversely related to the mass of the hole. Now, physically one expects that, as it evaporates, the black hole must lose mass. Since the radius of the horizon is proportional to the mass, the area of the horizon must decrease. Thus, to describe the evaporation process adequately, we must go beyond the external field approximation and take into account the fact that the underlying space-time geometry is in fact dynamical. Now, if one treated this geometry classically, one would conclude that the process is continuous. However, since we found that the area is in fact quantized, we would expect that the black hole evaporates in discrete steps, by making a transition from one area eigenvalue to another, smaller one. The process would be very similar to the way an excited atom descends to its ground state through a series of discrete transitions.
Let us look at this process in some detail. For simplicity, let us use units with $`c=1`$. Suppose, to begin with, that the level spacing of the eigenvalues of the area operator is the naive one, i.e., with $`\mathrm{\Delta }a_S=(\sqrt{3}/2)\ell _P^2`$. Then the fundamental theory would have predicted a smallest frequency $`\omega _o`$ for the emitted particles, fixed by the smallest possible change $`\mathrm{\Delta }M`$ in the mass of the black hole through $`\mathrm{\Delta }M=\hbar \omega _o`$. Now, since the area of the horizon goes as $`A_H\sim G^2M^2`$, we have $`\mathrm{\Delta }M\sim \mathrm{\Delta }a_H/(2G^2M)\sim \ell _P^2/(G^2M)`$. Hence, $`\hbar \omega _o\sim \hbar /(GM)`$. Thus, the ‘true’ spectrum would have emission lines only at frequencies $`\omega =N\omega _o`$, for $`N=1,2,\dots `$, corresponding to transitions of the black hole through $`N`$ area levels. How does this compare with the Hawking prediction? As I mentioned above, according to Hawking’s semi-classical analysis, the spectrum would be the same as that of a black body at temperature $`T`$ given by $`kT\sim \hbar /(GM)`$, where $`k`$ is the Boltzmann constant. Hence, the peak of this spectrum would appear at $`\omega _p`$ given by $`\hbar \omega _p\sim kT\sim \hbar /(GM)`$. But this is precisely the order of magnitude of the minimum frequency $`\omega _o`$ that would be allowed if the area spectrum were the naive one. Thus, in this case, a more fundamental theory would have predicted that the spectrum would not resemble a black body spectrum. The most probable transition would be for $`N=1`$ and so the spectrum would still be peaked at $`\omega _p`$, as in the case of a black body. However, there would be no emission lines at frequencies low compared with $`\omega _p`$; this part of the black body spectrum would simply be absent. The part of the spectrum for $`\omega >\omega _p`$ would also not be faithfully reproduced, since the discrete lines with frequencies $`N\omega _o`$, $`N=1,2,\dots `$, would not be sufficiently near each other –i.e., crowded– to yield an approximation to the continuous black-body spectrum.
The situation is completely different for the correct, full spectrum of the area operator if the black hole is macroscopic, i.e., large. Then, as I noted earlier, the area eigenvalues crowd and the level spacing goes as $`\mathrm{\Delta }a_H\sim \mathrm{exp}(-\sqrt{a_H/\ell _P^2})\,\ell _P^2`$. As a consequence, as the black hole makes a transition from one area eigenvalue to another, it would emit particles at frequencies equal to or larger than $`\omega _p\,\mathrm{exp}(-\sqrt{a_H/\ell _P^2})`$. Since for a macroscopic black hole the exponent $`\sqrt{a_H/\ell _P^2}`$ is very large (for a solar-mass black hole it is $`\sim 10^{38}`$!), the spectrum would be well approximated by a continuous spectrum and would extend well below the peak frequency. Thus, the precise form of the area spectrum ensures that, for large black holes, the potential problem with Hawking’s semi-classical picture disappears. Note however that as the black hole evaporates, its area decreases, it gets hotter and evaporates faster. Therefore, a stage comes when the area is of the order of $`\ell _P^2`$. Then, there would be deviations from the black body spectrum. But this is to be expected, since in this extreme regime one does not expect the semi-classical picture to remain meaningful.
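The orders of magnitude in the last two paragraphs are easy to reproduce. Here is a sketch in CGS units; the numerical factors of order unity are conventions, not part of the argument.

```python
import math

G, hbar, c, k_B = 6.67e-8, 1.05e-27, 3.0e10, 1.38e-16   # CGS values
M = 2.0e33                                               # g: ~ one solar mass

l_P2 = G * hbar / c**3                  # Planck area, ~2.6e-66 cm^2
r_s = 2 * G * M / c**2                  # Schwarzschild radius
a_H = 4 * math.pi * r_s**2              # horizon area

# Naive uniformly spaced spectrum: Delta(a_H) ~ l_P^2 gives the minimum quantum
dM = c**4 * l_P2 / (32 * math.pi * G**2 * M)   # from a_H = 16 pi G^2 M^2 / c^4
omega_o = dM * c**2 / hbar

# Hawking peak: hbar*omega_p ~ 2.8 k_B T_H with k_B T_H = hbar c^3/(8 pi G M)
omega_p = 2.8 * c**3 / (8 * math.pi * G * M)

print(f"omega_o/omega_p ~ {omega_o / omega_p:.2f}")       # same order of magnitude
print(f"sqrt(a_H/l_P^2) ~ {math.sqrt(a_H / l_P2):.1e}")   # ~1e38 for one solar mass
```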
This argument brings out an interesting fact. There are several iconoclastic approaches to quantum geometry in which one simply begins by postulating that geometric quantities should be quantized. Then, having no recourse to first principles from which to derive the eigenvalues of these operators, one simply postulates them to be multiples of appropriate powers of the Planck length. For area, then, one would say that the eigenvalues are integral multiples of $`\ell _P^2`$. The above argument shows how this innocent-looking assumption can contradict semi-classical results even for large black holes. In the present approach, we did not begin by postulating the nature of quantum geometry. Rather, we derived the spectrum of the area operator from first principles. As we see, the form of these eigenvalues is rather complicated and could not have been guessed a priori. More importantly, the detailed form does carry rich information and in particular removes the conflict with semi-classical results in macroscopic situations.
### D Current and future directions
Exploration of quantum Riemannian geometry continues. Last year, it was found that geometric operators exhibit certain unexpected non-commutativity. This reminds one of the features explored by Alain Connes in his non-commutative geometry. Indeed, there are several points of contact between these two approaches. For instance, the Dirac operator that features prominently in Connes’ theory is closely related to the connection $`\vec{A}_i`$ used here. However, at a fundamental level, the two approaches are rather different. In Connes’ approach, one constructs a non-commutative analog of the entire differential geometry. Here, by contrast, one focuses only on Riemannian geometry; the underlying manifold structure remains classical. In three space-time dimensions, it is possible to get rid of this feature in the final picture and express the theory in a purely combinatorial fashion. Whether the same will be possible in four dimensions remains unclear. However, combinatorial methods continue to dominate the theory and it is quite possible that one would again be able to present the final picture without any reference to an underlying smooth manifold.
Perhaps the most striking application of quantum geometry has been to black hole thermodynamics. We saw in the last subsection that the Hawking process provides a non-trivial check on the level spacing of the eigenvalues of the area operator. Conversely, the discrete nature of these eigenvalues provides a statistical mechanical explanation of black hole entropy. To see this, first recall that for familiar physical systems —such as a gas, a magnet, or a black body— one can arrive at the expression of the entropy by counting the number of micro-states. The counting in turn requires one to identify the building blocks that make up the system. For a gas, these are atoms; for a magnet, electron spins; and for the radiation field in a black body, photons. What are the analogous building blocks for a large black hole? They can not be gravitons, because the gravitational fields under consideration are static rather than radiative. Therefore, the elementary constituents must be non-perturbative in nature. In our approach they turn out to be precisely the quantum excitations of the geometry of the black hole horizon. The polymer-like one-dimensional excitations of geometry in the bulk pierce the horizon and endow it with its area. It turns out that, for a given area, there is a specific number of permissible bulk states and, for each such bulk state, a precise number of permissible surface states of the intrinsic quantum geometry of the horizon. Heuristically, the horizon resembles a pinned balloon —pinned by the polymer geometry in the bulk— and the surface states describe the permissible oscillations of the horizon subject to the given pinning. A count of all these quantum states provides, in the usual way, the expression of the black hole entropy.
Another promising direction for further work is the construction of better candidates for ‘weave states’, the non-linear analogs of coherent states approximating smooth, macroscopic geometries. Once one has an ‘optimum’ candidate to represent Minkowski space, one would develop quantum field theory on these weave quantum geometries. Because the underlying basic excitations are one-dimensional, the ‘effective dimension of space’ for these field theories would be less than three. Now, in the standard continuum approach, we know that quantum field theories in low dimensions tend to be better behaved because their ultra-violet problems are softer. Hence, there is hope that these theories will be free of infinities. If they are renormalizable in the continuum, their predictions at large scales can not depend on the details of the behavior at very small scales. Therefore, one might hope that quantum field theories on weaves would not only be finite but would also agree with the renormalizable theories in their predictions at the laboratory scale.
A major effort is being devoted to the task of formulating and solving quantum Einstein’s equations using the new functional calculus. Over the past two years, there have been some exciting developments in this area. The methods developed there seem to be applicable also to supergravity theories. In the coming years, therefore, there should be much further work in this area. More generally, since quantum geometry does not depend on a background metric, it may well have other applications. For example, it may provide a natural arena for other problem such as that of obtaining a background independent formulation of string theory.
So far, I have focussed on theoretical ideas, and the checks on them have come from considerations of consistency with other theoretical ideas, e.g., those in black hole thermodynamics. What about experimental tests of the predictions of quantum geometry? An astonishing recent development suggests that direct experimental tests may become feasible in the near future. I will conclude with a summary of the underlying ideas. The approach one takes is rather analogous to the one used in proton decay experiments. Processes potentially responsible for the decay come from grand unified theories and the corresponding energy scales are very large, $`10^{15}`$ GeV —only four orders of magnitude below the Planck energy. There is no hope of achieving these energies in particle accelerators to actually create in large numbers the particles responsible for the decay; the decays are therefore very rare. The strategy adopted was to carefully watch a very large number of protons to see if one of them decays. These experiments were carried out and the (negative) results actually ruled out some of the leading candidate grand unified theories. Let us return to quantum geometry. The naive strategy of accelerating particles to the Planck energy to directly ‘see’ the Planck scale geometry is hopeless. However, as in the proton decay experiments, one can let these minutest of effects accumulate till they become measurable. The laboratory is provided by the universe itself and the signals are generated by the so-called $`\gamma `$-ray bursts. These are believed to be of cosmological origin; by the time they arrive on earth, they have traveled extremely large distances. Now, if the geometry is truly quantum mechanical, as I suggested, the propagation of these rays would be slightly different from that on a continuum geometry. The difference would be minute, but it could accumulate over cosmological distances. Following this strategy, astronomers have already put some interesting limits on the possible ‘graininess’ of geometry. Now the challenge for theorists is to construct realistic weave states corresponding to the geometry we observe on cosmological scales, study in detail the propagation of photons on them and come up with specific predictions for astronomers. The next decade should indeed be very exciting!
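The arithmetic behind this time-of-flight strategy fits in a few lines. The sketch below uses invented but representative numbers (a GeV burst photon, a source ten billion light years away) and the simplest assumption of a dispersion linear in $`E/E_{Planck}`$; it is meant only to show why the accumulated delay is measurable at all.

```python
E_photon = 1.0          # GeV: representative burst photon energy (assumed)
E_planck = 1.2e19       # GeV
distance_lyr = 1.0e10   # light years: a cosmological source (assumed)

travel_time_s = distance_lyr * 3.15e7        # seconds (1 yr ~ 3.15e7 s)
# simplest graininess ansatz: v/c ~ 1 - E/E_QG with E_QG ~ E_planck
delay_s = (E_photon / E_planck) * travel_time_s
print(f"accumulated delay ~ {delay_s:.1e} s")
# ~3e-2 s: tiny, but comparable to the millisecond time structure of the
# bursts, which is what makes them sensitive probes of Planck-scale physics.
```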
Acknowledgments The work summarized here is based on contributions from many researchers especially John Baez, Alejandro Corichi, Roberto DePitri, Rodolfo Gambini, Chris Isham, Junichi Iwasaki, Jerzy Lewandowski, Renate Loll, Don Marolf, Jose Mourao, Jorge Pullin, Thomas Thiemann, Carlo Rovelli, Steven Sawin, Lee Smolin and José-Antonio Zapata. Special thanks are due to Jerzy Lewandowski for long range collaboration. This work was supported in part by the NSF Grant PHY95-14240 and by the Eberly fund of the Pennsylvania State University.
# AMS-Shuttle test for antimatter stars in our Galaxy
(Invited talk at the 3rd Int. Conf. on Cosmoparticle Physics ”COSMION-97”, Moscow, Russia, December 8–14, 1997.)
(<sup>a</sup>Center for CosmoParticle Physics ”Cosmion”, Miusskaya Pl., 4, 125047 Moscow, Russia
<sup>b</sup>Institute of Applied Mathematics, Miusskaya Pl., 4, 125047 Moscow, Russia
<sup>c</sup>Moscow State Engineering Physics Institute (Technical University), Kashirskoe Sh., 31, 115409 Moscow, Russia
<sup>d</sup>Moscow State University, Institute of Nuclear Physics, Vorobjevy Gory, 119899 Moscow, Russia)
## Abstract
The AMS–Shuttle experiment is shown to be sensitive enough to test the hypothesis of the existence of an antimatter globular cluster in our Galaxy. The hypothesis follows from the analysis of possible tests of the mechanisms of baryosynthesis and uses antimatter domains in the matter dominated Universe as a probe of the physics underlying the origin of the matter. The total mass of the antimatter objects in our Galaxy is bounded from below by the condition of antimatter domain survival in the matter dominated Universe, and from above by the observed gamma ray flux. For this mass interval the expected fluxes of antinuclei can lead to up to ten antihelium events in the AMS-Shuttle experiment.
The modern big bang theory is based on inflationary models with baryosynthesis and nonbaryonic dark matter. The physical basis for all three phenomena lies outside the experimentally proven theory of elementary particles; it follows from extensions of the standard model. Particle theory considers such extensions aesthetically appealing (grand unification), necessary to remove internal inconsistencies in the standard model (supersymmetry, axion) or simply theoretically possible (neutrino mass, lepton and baryon number violation). Most of these theoretical ideas cannot be tested directly, and particle theory considers cosmological relevance an important component of their indirect test. In the absence of direct methods of study one should analyse the set of indirect effects which specify the models of particles and cosmology. The AMS experiment Ref. turns out to be an important tool in such analysis. The expected progress in the measurement of cosmic ray fluxes and the gamma background, and in the search for antinuclei and exotic charged particles, makes this experiment an important source of information on the possible cosmological effects of particle theory. Its operation on the Alpha Station will shed light on WIMP annihilation in the Galaxy, on primordial black hole evaporation, on the possible existence of exotic charged particles and on many other important clues to the hidden parameters of modern cosmology following from the hidden sector of particle theory. The first step in this direction may be taken on the basis of the AMS-Shuttle experiment.
The COSMION-ETHZ programme assumes a joint systematic study of the AMS experiment as the basis for experimental cosmoparticle physics. The specifics of the AMS–Shuttle experimental programme put stringent restrictions on the possible choice of cosmic signatures for new physics. At this stage no clear detection of positrons, gamma rays or multi-GeV antiprotons will be possible. This forces us to reduce the analysis to the antinuclear signal as the profound signature of new physics and cosmology.
The generally accepted motivation for the baryon asymmetric Universe is the observed absence of macroscopic amounts of antimatter up to the scale of clusters of galaxies. According to the big bang theory, a baryon symmetric homogeneous mixture of matter and antimatter cannot survive the local annihilation taking place in the first millisecond of cosmological evolution. Spatial separation of matter and antimatter can provide their survival in a baryon symmetric Universe, but should satisfy severe constraints on the effects of annihilation at the borders of domains. The most recent analysis finds that the size of domains should be only a few times smaller than the modern cosmological horizon to escape contradiction with the observed gamma ray background Ref. . In the baryon asymmetric Universe the big bang theory predicts an exponentially small fraction of primordial antimatter and practically excludes the existence of primordial antinuclei. Secondary antiprotons may appear as a result of cosmic ray interaction with matter, once galaxies are formed. In such interactions it is impossible to produce any sizeable amount of secondary antinuclei. Thus a non-exponentially-small amount of antiprotons in the Universe in the period from $`10^{-3}`$ s to $`10^{16}`$ s, and antinuclei in the modern Universe, are profound signatures of new phenomena related to the cosmological consequences of particle theory. The inhomogeneity of baryon excess generation, and antibaryon excess generation as a reflection of this inhomogeneity, represents one of the most important examples of such consequences. It turned out Refs. that practically all existing mechanisms of baryogenesis can lead to the generation of antibaryon excess in some places, the baryon excess, averaged over the whole space, being positive. So domains of antimatter in the baryon asymmetric Universe provide a probe for the physical mechanism of matter generation.
The original Sakharov scenario of baryosynthesis Ref. found physical grounds in GUT models. It assumes CP violating effects in out-of-equilibrium B-non-conserving processes, which generate a baryon excess proportional to the CP violating phase. If the sign and magnitude of this phase vary in space, the same out-of-equilibrium B-non-conserving processes leading to baryon asymmetry result in $`B<0`$ in the regions where the phase is negative. The same argument applies to models of baryosynthesis based on electroweak baryon charge nonconservation at high temperatures, as well as on its combination with lepton number violating processes related to the physics of the Majorana mass of the neutrino. In all these approaches to baryogenesis, independent of the physical nature of B-nonconservation, the inhomogeneity of the baryon excess and the generation of antibaryon excess are determined by the spatial dependence of the CP violating phase.
Spatial dependence of this phase is predicted in models of spontaneous CP violation, modified to escape the supermassive domain wall problem (see Refs. and references therein).
In this type of model the CP violating phase acquires the discrete values $`\varphi _{+}=\varphi _0+\varphi _{sp}`$ and $`\varphi _{-}=\varphi _0-\varphi _{sp}`$, where $`\varphi _0`$ and $`\varphi _{sp}`$ are, respectively, the constant and the spontaneously broken CP phase, and antibaryon domains appear in the regions with $`\varphi _{-}<0`$, provided that $`\varphi _{sp}>\varphi _0`$.
In models where the CP violating phase is associated with the amplitude of the invisible axion field, the spatially variable phase $`\varphi _{vr}`$ changes continuously from $`-\pi `$ to $`+\pi `$. The amplitude of the axion field plays the role of $`\varphi _{vr}`$ in the period starting from the Peccei–Quinn symmetry breaking phase transition until the axion mass is switched on at $`T\sim 1`$ GeV. The net phase changes continuously and, if baryosynthesis takes place in the considered period, axion induced baryosynthesis implies continuous spatial variation of the baryon excess, given by Ref.
$$b(x)=A+b\mathrm{sin}(\theta (x)).$$
(1)
Here $`A`$ is the baryon excess induced by the constant CP-violating phase, which provides the global baryon asymmetry of the Universe, and $`b`$ is the measure of the axion induced asymmetry. If $`b>A`$, antibaryon excess is generated along the direction $`\theta =3\pi /2`$. The stronger the inequality $`b>A`$, the larger the interval of $`\theta `$ around the semisurface $`\theta =3\pi /2`$ that provides generation of antibaryon excess Ref. . In the case $`b-A=\delta A`$ the antibaryon excess is proportional to $`\delta ^2`$ and the relative volume occupied by it is proportional to $`\delta `$.
The axion induced antibaryon excess forms a Brownian structure looking like an infinite ribbon along the infinite axion string (see Ref. ). The minimal width of the ribbon is of the order of the horizon in the period of baryosynthesis and is equal to $`m_{Pl}/T_{BS}^2`$ at $`T\sim T_{BS}`$. At $`T<T_{BS}`$ this size experiences redshift and is equal to
$$l_h(T)=\frac{m_{Pl}}{T_{BS}T}$$
(2)
This structure is smoothed by annihilation at the border of matter and antimatter domains. When the antibaryon diffusion scale exceeds $`l_h(T)`$, the infinite structure decays into separated domains.
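For orientation, Eq. (2) is easy to evaluate. The sketch below assumes a GUT-scale baryosynthesis temperature $`T_{BS}\sim 10^{15}`$ GeV (an illustrative choice only, since the text leaves $`T_{BS}`$ model dependent) and converts the natural-unit expression to centimeters.

```python
hbar_c = 1.97e-14       # GeV*cm: conversion factor between GeV^-1 and cm
m_Pl = 1.2e19           # GeV: Planck mass
T_BS = 1.0e15           # GeV: assumed (GUT-scale) baryosynthesis temperature

def l_h_cm(T):
    """Minimal ribbon width l_h(T) = m_Pl/(T_BS*T) of Eq. (2), in cm."""
    return m_Pl / (T_BS * T) * hbar_c

for T, label in [(1.0e15, "T = T_BS (baryosynthesis)"),
                 (1.0e-3, "T ~ 1 MeV (nucleosynthesis)"),
                 (1.0e-9, "T ~ 1 eV (near recombination)")]:
    print(f"{label:30s}: l_h ~ {l_h_cm(T):.1e} cm")
```

Under this assumption a ribbon of minimal (horizon) width at $`T_{BS}`$ remains sub-centimeter even after redshifting to recombination, which illustrates why only domains much larger than this minimal width are of interest as candidates for astronomical antimatter objects.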
The size and amount of antimatter in domains generated as the result of local baryon-non-conserving out-of-equilibrium processes are related to the parameters of the models of CP violation and/or the invisible axion (see Refs. ). SUSY GUT motivated mechanisms of baryon asymmetry imply flatness of the superpotential relative to the existence of a squark condensate. Such a condensate, being formed with $`B>0`$, induces baryon asymmetry after the squarks decay into quarks and gluinos. The mechanism does not fix the value and sign of B in the condensate, opening possibilities for inhomogeneous baryon charge distribution and antibaryon domains Refs. . The size and amount of antimatter in such domains are determined by the initial distribution of the squark condensate.
Thus antimatter domains in the baryon asymmetric Universe are related to practically all mechanisms of baryosynthesis, and serve as a probe of the mechanisms of CP violation and primordial baryon charge inhomogeneity. The size of the domains depends on the parameters of these mechanisms Refs. .
General parameters of the averaged effect of the domain structure are the relative amount of antimatter $`\omega _a=\rho _a/\rho _{crit}`$, where $`\rho _a`$ is the antimatter density averaged over domains and $`\rho _{crit}`$ is the critical density, and the mean size of domains (the characteristic scale in their size distribution) or, for small domain sizes, the time scale of their annihilation with the matter.
To compare the effect of antimatter domain annihilation with the observational data one should introduce the amount of annihilated antimatter relative to the total amount of matter. One easily finds (see Ref. for details) that this ratio $`r`$ is given by:
$$r\sim \frac{b\,f(l\le l_a)}{A},$$
(3)
where $`l_a`$ is the maximal size of domains annihilated by the considered period and $`f(l)`$ is the volume fraction of domains with size $`l`$. In the case of the discrete spontaneous CP violation discussed above, $`b=A`$.
One of the features expected for antimatter domains in the baryon asymmetrical Universe is the possibility of a diffused antiworld. It corresponds to an antibaryon matter density much smaller than the baryon matter density. One interesting consequence of the diffused antiworld hypothesis is the possibility of an unusual light antinuclei abundance. At antibaryon densities much smaller than the baryon density, anti-deuterium and anti-helium-3 may be more abundant than anti-helium-4. However, a diffused antiworld with very low antibaryon density cannot lead to the formation of antimatter objects, and the gamma ray search for annihilation in diffused antimatter clouds is the most promising in this case. The possibility of an antibaryon density in domains comparable to or even higher than the mean baryon density is much more interesting for the AMS-Shuttle programme in cosmoparticle physics.
As was recently shown Ref. , in the case when axion induced CP violation dominates in the process of baryosynthesis, the antimatter density within the surviving domains should be larger than the mean baryon density. On the other hand, the SUSY GUT squark condensate may induce large scale modulation of this distribution. Since both the axion and SUSY are considered necessary extensions of the standard model, one should consider at least the combination of axion- and squark-condensate-induced inhomogeneous baryosynthesis as the minimally realistic case. With account for the other possible mechanisms of inhomogeneous baryosynthesis, predicted on the basis of various and generally independent extensions of the standard model, the general analysis of possible domain distributions is rather complicated. Fortunately, the test for the possibility of the existence of antistars in our Galaxy, offered in Ref. , turns out to be practically model independent and, as we show here, may be accessible to the AMS-Shuttle experiment. Let us assume some distribution of antimatter domains which satisfies the constraints on antimatter annihilation in the early Universe. Domains surviving such annihilation should have a mass exceeding
$$M_{min}\sim (b/A)\,\rho _b\,l_a^3,$$
(4)
where $`\rho _b`$ is the mean cosmological baryon density. The mass fraction $`f`$ of such domains relative to the total baryon mass is strongly model dependent. Note that since the diffusion to the border of an antimatter domain is determined on the RD stage by radiation friction, the surviving scale fixes the size of the surviving domain. On the other hand, the constraints on the effects of annihilation put an upper limit on the mass of annihilated antimatter.
The modern antimatter domain distribution should be cut at the masses given by Eq. (4), due to annihilation of smaller domains; this is a general feature of any model of antibaryosynthesis in the baryon asymmetrical Universe. The specific form of the domain distribution is model dependent. At scales smaller than those given by Eq. (4) the spectrum should satisfy the constraints on the relative amount of annihilating antimatter. Provided these constraints are satisfied, one may consider the conditions for antimatter object formation. One should take into account that the estimation of the annihilation scale after recombination (see Ref. ) gives for this scale a value close to the Jeans mass in the neutral baryon gas after recombination. So the development of gravitational instability may take place in antimatter domains, resulting in the formation of astronomical objects of antimatter.
Formation of an antimatter object has a time scale of the order of $`t_f\sim (\pi G\rho )^{-1/2}`$. The object is formed provided that this time scale is smaller than the time scale of its collision with matter clouds. The latter is smallest at the beginning of object formation, when the clouds forming objects have large size.
Note that an isolated domain cannot form an astronomical object smaller than a globular cluster Ref. . An isolated anti-star cannot be formed in matter surroundings, since its formation implies the development of a thermal instability, during which cold clouds are compressed by hot gas. Pressure of the hot matter gas on an antimatter cloud is accompanied by annihilation of the antimatter. Thus anti-stars can be formed in antimatter surroundings only, which may take place when such surroundings have at least the scale of a globular cluster.
One should expect to find antimatter objects among the oldest population of the Galaxy Ref. . They should be in the halo, since owing to the strong annihilation of antimatter and matter gas the formation of secondary antimatter objects in the disk component of our Galaxy is impossible. So in the estimation of antimatter effects we can use the data on the spherical component of our Galaxy, as well as the analogy with the properties of the old population stars in globular clusters and elliptical galaxies.
In the spherical component of our Galaxy an antimatter globular cluster should move with high velocity (which follows from the velocity dispersion in the halo, $`v\sim 150`$ km/s) through matter gas with very low number density ($`n\sim 3\cdot 10^{-4}\,\mathrm{cm}^{-3}`$). Owing to the small density of antimatter gas, the effects of annihilation with the matter gas within the antimatter globular cluster are small. These effects, however, deserve special analysis for a future search for the antimatter cluster as a gamma source.
The integral effects of the antimatter cluster may be estimated by analysing the antimatter pollution of the Galaxy by the globular cluster of antistars.
There are two main sources of such pollution: the antistellar wind (the mass flow from antistars) and antimatter Supernova explosions. The first source provides a stationary in-flow of antimatter particles with velocity $`10^7\div 10^8`$ cm/s into the Galaxy. From the analogy with elliptical galaxies, for which one has a mass loss of $`10^{-12}M_{\odot }`$ per Solar mass per year, one can estimate the stationary admixture of antimatter gas in the Galaxy and the contribution of its annihilation to the gamma ray background. The estimation strongly depends on the distribution of magnetic fields in the Galaxy, which trap charged antiparticles. A crude estimation of the gamma flux from the annihilation of this antimatter flux is compatible with the observed gamma background for a total mass of the antimatter cluster less than $`10^5M_{\odot }`$. This estimation puts an upper limit on the total mass fraction of antimatter clusters in our Galaxy: their integral effect should not contradict the observed gamma ray background.
The uncertainty in the distribution of magnetic fields causes even more problems for a reliable estimation of the expected flux of antinuclei in cosmic rays. It is compounded by the uncertainty in the mechanism of cosmic ray acceleration. The relative contribution of disc and halo particles to the cosmic ray spectrum is also unknown. To get some feeling for the expected effect, we may assume that the mechanisms of acceleration of matter and antimatter cosmic rays are similar and that the contribution of antinuclei to the cosmic ray fluxes is proportional to the mass ratio of the globular cluster and the Galaxy. Putting together the lower limit on the mass of the antimatter globular cluster from the condition of survival of the antimatter domain and the upper limit on this mass following from the observed gamma ray background, one obtains (Ref. ) the expected flux of antihelium nuclei in cosmic rays with energy exceeding $`0.5`$ GeV/nucleon to be $`10^{-8}÷10^{-6}`$ of the helium nuclei observed in the cosmic rays. The results of a numerical calculation of the expected antihelium flux in Ref. , together with the expected sensitivity of the AMS experiment (Ref. ), are given in Fig. 1.
Such an estimation assumes that annihilation does not influence the antinuclei composition of cosmic rays, which may be the case if the cosmic ray antinuclei are initially relativistic. If the process of acceleration takes place outside the antimatter globular cluster, one should take into account the Coulomb effects in the annihilation cross section of non-relativistic antinuclei, which may lead to a suppression of their expected flux.
On the other side, antinuclei annihilation introduces a new factor into the problem of their acceleration, a factor which is evidently absent in the case of cosmic ray nuclei. It may play a very important role in the account of antimatter Supernovae as a possible source of cosmic ray antinuclei. From the analogy with elliptical galaxies one may expect (Ref. ) that in an antimatter globular cluster Supernovae of type I should explode with a frequency of about $`2\cdot 10^{-13}/M_{\odot }`$ per year. On the basis of theoretical models and observational data on SNI (cf. Ref. ), one expects in such an explosion the expansion of a shell with a mass of about $`1.4M_{\odot }`$ and a velocity distribution extending up to $`2\cdot 10^9`$ cm/s. The internal layers with velocities $`v<8\cdot 10^8`$ cm/s contain anti-iron $`{}^{56}Fe`$, and the outer layers with higher velocity contain lighter elements such as anti-calcium or anti-silicon. Another important property of type I Supernovae is the absence of hydrogen lines in their spectra. Theoretically this is explained by the absence of a hydrogen mantle in the Presupernova. In the case of an antimatter Supernova it may lead to a strong relative enhancement of antinuclei over antiprotons in the cosmic rays. Note that the similar effect is suppressed in the nuclear component of cosmic rays, since Supernovae of type II, in which massive hydrogen mantles (with masses up to a few solar masses) are accelerated, are also related to the origin of matter cosmic rays in our Galaxy.
In contrast with an ordinary Supernova, the expanding antimatter shell is not decelerated by sweeping up the interstellar matter gas and is not stopped by its pressure, but annihilates with it (Ref. ). As a result of annihilation with hydrogen, of which the matter gas is dominantly composed, semi-relativistic antinuclei fragments are produced. A reliable analysis of such a cascade of antinuclei annihilation may be based on theoretical models and experimental data on antiproton–nucleus interactions. This programme is now under way. The important qualitative result is a possible nontrivial contribution to the fluxes of cosmic ray antinuclei with $`Z\le 14`$ and an enhancement of the antihelium flux.
Another important qualitative effect of annihilation on the expected composition of cosmic ray antinuclei is the possible presence of a significant fraction of anti–helium–3. One can expect (Ref. ) this fraction to be of the order of $`0.2`$ of the expected flux of anti–helium–4. This estimation follows from the experimental data on antiproton–helium interactions measured in the experiment PS 179 at LEAR, CERN (Ref. ).
The estimations of Ref. assumed a stationary in-flow of antimatter into the cosmic rays. In case Supernovae play the dominant role in the cosmic ray origin, the in-flow is defined by their frequency. One may find from Ref. that the interval of possible masses of the antimatter cluster, $`(3\cdot 10^3÷10^5)M_{\odot }`$, gives a time scale of the antimatter in-flow of $`1.6\cdot 10^9÷5\cdot 10^7`$ years, which exceeds the generally estimated lifetime of cosmic rays in the Galaxy. The succession of antinuclear annihilations may in this case result in the dominant contribution of antihelium–3 to the expected antinuclear flux.
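These in-flow time scales follow directly from the Supernova rate per unit mass quoted earlier; a quick consistency check:

```python
# Time between antimatter type I Supernovae in the cluster, using the
# rate of 2e-13 explosions per solar mass per year quoted above, for
# the limiting cluster masses.
sn_rate = 2e-13                     # explosions per M_sun per year

for M in (3e3, 1e5):                # M_sun, lower and upper mass limits
    t_between = 1.0 / (sn_rate * M)
    print(f"M = {M:.0e} M_sun -> one SN every {t_between:.1e} yr")
# -> 1.7e9 yr and 5.0e7 yr, consistent with the interval quoted above
```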
To conclude, with all the reservations mentioned above, on the basis of the hypothesis of an antimatter globular cluster in our Galaxy one may predict, at the level of the expected 600 antiprotons, up to ten antihelium events in the AMS–Shuttle experiment. Their detection would be an exciting indication favouring this hypothesis. Even an upper limit on the antihelium flux will lead to important constraints on the fundamental parameters of particle theory and cosmology, to be discussed in our successive publications.
## Acknowledgments
The Russian side of the COSMION–ETHZ collaboration expresses its gratitude to ETHZ for the permanent support of the studies undertaken in the framework of the International projects ”Astrodamus” and ”COSMION–ETHZ”.
# DESY 99–004 ISSN 0418–9833 MPI–PhT 99–01 hep–ph/9901314 January 1999 Forward Jet Production at small $`x`$ in Next-to-Leading Order QCD
## 1 Introduction
The cross section for forward jet production in deep inelastic scattering (DIS) has been proposed as a particularly sensitive means to investigate the parton dynamics at small $`x`$ . Analytic calculations based on the BFKL equation in the leading-logarithmic approximation show a strong rise of this cross section with decreasing $`x`$ and were found in reasonable agreement with the first data from the H1 collaboration at HERA . More recent measurements of the forward cross section, based on an order of magnitude higher statistics compared to , have been presented by the ZEUS and the H1 collaborations , confirming the earlier findings . Monte Carlo generators based on direct photon interactions (DIR) calculated from leading order (LO) $`O(\alpha _s)`$ matrix elements together with leading-logarithm parton showers disagree with the measured jet cross section by an appreciable factor. Also next-to-leading order (NLO), i.e. $`O(\alpha _s^2)`$, calculations predict too small forward cross sections at small $`x`$, as already shown by Mirkes and Zeppenfeld using their MEPJET program when comparing to the data in . This has been confirmed also with the new data in .
A similar deficiency between NLO calculations and measured data occurs for the dijet rate in the region $`E_T^2>Q^2`$ , where $`E_T`$ is the transverse energy of the produced jets and $`Q^2`$ is the usual squared lepton momentum transfer. This kinematic range is also relevant for forward jet production, as will be seen later. The region of small enough $`Q^2`$ is the photoproduction regime, where the virtual photon resolves into partons. Indeed, by introducing a resolved photon contribution, the measured dijet rate and the forward jet cross section can be described satisfactorily, both in the shape of the cross section as a function of $`x`$ and in the absolute normalization. This description is based on the Monte Carlo program RAPGAP , which includes a resolved photon contribution in addition to the direct process; both are evaluated with LO matrix elements, with additional emissions in the initial and final state generated by parton showers, together with subsequent hadronization.
The dijet rate has also been calculated in NLO including direct and resolved photon contributions . In order to avoid double counting in the full NLO calculation, the contribution from the virtual photon splitting into $`q\overline{q}`$ pairs, where either the quark or the antiquark subsequently interacts with a parton originating from the proton, had to be subtracted , similar to what is done in the NLO theory for the photoproduction of jets . The subtracted terms in the NLO direct contribution are part of the parton distribution functions (PDF’s) of the virtual photon and appear in the resolved contribution in an evolved form. With this procedure the whole cross section for two-jet production, which is a superposition of the direct and resolved contributions minus the photon splitting piece, becomes to a large extent independent of the factorization scale at the photon vertex. This full NLO calculation of the dijet rate agreed well with the H1 data over the full $`Q^2`$ domain, $`5\le Q^2\le 100`$ GeV$`^2`$, the full $`x`$ domain, $`10^{-4}\le x\le 10^{-2}`$, and for jet transverse momenta $`E_T^2\approx Q^2`$ .
In this work we want to present the results of a calculation of the forward jet cross section on the basis of the NLO theory used for the dijet rate. Although the kinematic constraints, very low $`x`$ and $`E_T^2/Q^2`$ of order one, are rather similar, it is not obvious that the calculated cross sections will agree with the recent ZEUS and H1 experimental results.
After some comparisons with the MEPJET results to make sure that our DIS jet program, called JetViP , gives the same results under identical kinematical conditions we shall give our results with the experimental cuts of the ZEUS and H1 analysis. We close with a short summary and an outlook to future studies.
## 2 Comparisons and Results
### 2.1 Comparison with MEPJET
Before we present our results with the ZEUS and H1 kinematical constraints for selecting the forward jets we performed a check of our program JetViP with the forward jet kinematics by comparing with the NLO results of Mirkes and Zeppenfeld , who have produced their results with the fixed order program MEPJET , which only includes direct photon contributions. We have chosen the same kinematical cuts, which differ somewhat from the cuts used in the ZEUS and H1 analyses.
The $`O(\alpha _s)`$ results are obtained taking the Glück, Reya and Vogt (GRV) LO proton PDF’s together with the one-loop formula for $`\alpha _s`$. For the $`O(\alpha _s^2)`$ results we employ the GRV higher order PDF’s together with the two-loop running $`\alpha _s`$ formula. We take $`N_f=5`$ and match the strong coupling at the charm and bottom thresholds $`\mu _R=m_c,m_b`$, respectively.
Jets are defined in the laboratory frame using the cone algorithm with the opening angle $`\sqrt{(\mathrm{\Delta }\eta )^2+(\mathrm{\Delta }\varphi )^2}=\mathrm{\Delta }R\le 1`$ in the so-called E-scheme. In this scheme the four-vector of the combined jet is given as the sum of the four-vectors of the two partons. $`\mathrm{\Delta }\eta `$ and $`\mathrm{\Delta }\varphi `$ are the differences of pseudorapidities and azimuthal angles with respect to the jet direction. All jets have to fulfill $`|\eta |<3.5`$ and $`E_T,E_T^B>4`$ GeV, where the index B refers to quantities in the Breit frame. $`\eta `$ and $`E_T`$ are measured in the HERA laboratory frame. Additional cuts are made for events which contain a forward jet. These requirements are $`1.735<\eta <2.90`$ and $`E_T>5`$ GeV with
$$p_z/E_P>0.05,0.5<E_T^2/Q^2<4.$$
(1)
The $`x`$ variable is restricted to the small-$`x`$ region $`x<0.004`$. The cuts on the electron variables are $`Q^2>8`$ GeV$`^2`$, $`y>0.1`$, $`E^{\prime }>11`$ GeV and $`\theta _e^{\prime }\in [160^{\circ },173.5^{\circ }]`$. The electron and proton energies are $`E_e=27.5`$ GeV and $`E_P=820`$ GeV, respectively. The positive $`z`$-direction is the direction of the incoming proton momentum.
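The jet definition used here can be made concrete with a short sketch; this is a generic illustration of the E-scheme recombination and the cone condition, not the actual JetViP implementation:

```python
# Generic sketch of E-scheme recombination with the cone condition
# sqrt(d_eta^2 + d_phi^2) <= 1 (an illustration, not the JetViP code).
# Four-vectors are (E, px, py, pz).
import math

def eta_phi(p):
    """Pseudorapidity and azimuth of a four-vector."""
    _, px, py, pz = p
    pabs = math.sqrt(px * px + py * py + pz * pz)
    eta = 0.5 * math.log((pabs + pz) / (pabs - pz))
    return eta, math.atan2(py, px)

def combine(p1, p2, dR_max=1.0):
    """Return the summed four-vector if the partons fall inside one
    cone, else None (E-scheme: jet four-vector = sum of parton ones)."""
    eta1, phi1 = eta_phi(p1)
    eta2, phi2 = eta_phi(p2)
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    if math.hypot(eta1 - eta2, dphi) <= dR_max:
        return tuple(a + b for a, b in zip(p1, p2))
    return None

# two nearby massless partons (energies in GeV, values illustrative)
p1 = (10.0, 3.0, 1.0, math.sqrt(100.0 - 9.0 - 1.0))
p2 = (8.0, 2.5, 0.5, math.sqrt(64.0 - 6.25 - 0.25))
print(combine(p1, p2))   # combined jet four-vector
```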
The renormalization ($`\mu _R`$) and factorization ($`\mu _F`$) scales are taken equal and are identified with the sum $`\mu _R=\mu _F=\frac{1}{2}\sum _ik_T^B(i)`$, where $`k_T^B(i)`$ and $`p_T^B`$, the parton’s transverse momentum, are related in the Breit frame by
$$[k_T^B(i)]^2=2E_i^2(1-\mathrm{cos}\theta _{ip})=\frac{2}{1+\mathrm{cos}\theta _{ip}}[p_T^B(i)]^2,$$
(2)
where $`\theta _{ip}`$ is the angle between the parton and the proton direction in the Breit system. In LO one-jet production, i.e., in the naive parton model limit, $`k_T^B(i)=Q`$. With these constraints we obtain the cross sections in Tab. 1, where our JetViP results are compared to the results from , referenced as MEPJET in the table. In addition to the forward jet cross sections we also list the full 2-jet, inclusive 2-jet and exclusive 3-jet cross sections. Compared to these cross sections, the kinematic constraints defining the forward cross section lead to considerable reductions. Furthermore we notice that the NLO corrections to the forward jet cross section are large, in agreement with . The O($`\alpha _s^0`$) single-jet cross section is not considered since it vanishes if a forward jet is required, due to the kinematical restrictions of the phase space.
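The two forms of the scale variable in Eq. (2) are related by a simple trigonometric identity, which can be checked numerically (the parton energy and angle below are arbitrary test values):

```python
# Numerical check of Eq. (2): with p_T = E*sin(theta), the expressions
# 2*E^2*(1 - cos(theta)) and 2*p_T^2/(1 + cos(theta)) coincide.
import math

E, theta = 20.0, 0.7               # arbitrary test values
pT = E * math.sin(theta)

kT2_a = 2.0 * E**2 * (1.0 - math.cos(theta))
kT2_b = 2.0 * pT**2 / (1.0 + math.cos(theta))
print(kT2_a, kT2_b)                # equal up to rounding (~188.1)
```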
The numbers for MEPJET are taken from Ref. . They differ by a few percent from our JetViP results, which is due to a different implementation. In JetViP the azimuthal ($`\varphi `$) dependence of the jet with respect to the electron plane is integrated out in the hadronic center-of-mass system, whereas in MEPJET this $`\varphi `$ dependence is kept in this frame and then integrated in the HERA laboratory system. These terms, which originate from the interference of the longitudinal and transverse virtual photon polarization ($`\mathrm{cos}\varphi `$) and from the transverse linear photon polarization ($`\mathrm{cos}2\varphi `$), vanish for $`Q^2\to 0`$ . Since in our case the virtuality $`Q^2`$ is not very large, the contribution of the azimuthally dependent terms is small, which leads to small deviations between the MEPJET and JetViP results. In addition, when integrating over the full phase space it does not matter in which system the $`\varphi `$ integration is performed, so that the observed difference is essentially due to the phase space restrictions in the forward jet selection.
Next we have evaluated the forward jet cross section including a resolved virtual photon contribution. Since $`Q^2`$ is fairly large, one might think that introducing a resolved virtual photon component is not necessary, because it is equivalent to a contribution generated in the NLO correction to the direct cross section. However, the NLO resolved cross section introduces additional higher order terms that are not contained in the NLO direct cross section, as we will see below. The kinematical cuts for selecting the forward jet are the same as for the comparison with MEPJET. The resolved cross section in LO and up to NLO is calculated as described in our earlier work , and is incorporated in the JetViP program . The PDF’s of the virtual photon are taken from ; specifically we took the version SaS1D, which was transformed to the $`\overline{\text{MS}}`$ scheme (see ). For the LO resolved cross section it would be more appropriate to choose SaS1D without the transformation to the $`\overline{\text{MS}}`$ scheme. This would increase the total LO cross section in Tab. 2 by $`10\%`$. As in we subtracted a term originating from the $`\gamma ^{*}\to q\overline{q}`$ splitting in the NLO direct photon matrix elements in order to avoid double counting at the NLO level. The SaS parametrizations of the virtual photon PDF vanish if the virtuality $`Q^2`$ is larger than the factorization scale $`\mu _F^2`$. Therefore we have chosen $`\mu _R^2=\mu _F^2=Q^2+(E_T^B)^2`$. This enforces the presence of the resolved virtual photon, since now always $`Q^2<\mu _F^2`$. The results for the various components of the forward cross section are summarized in Tab. 2. From these results we observe the following. First, the sum of the LO direct and resolved contributions coincides to within $`10\%`$ with the NLO direct cross section. Second, adding the subtracted NLO direct contribution (the NLO direct cross section minus the photon splitting term, given with a minus sign in Tab. 2) to the NLO resolved cross section leads to a large correction of about $`60\%`$ compared to the full NLO direct cross section. This increase has two sources.
First, the LO resolved cross section is $`15`$% larger than the subtracted photon splitting term. Part of this increase is due to the evolution of the PDF’s of the photon; the other part comes from the gluon component of the photon PDF, which amounts to 7.3 pb. Second, the NLO corrections to the resolved cross section give a further increase of approximately $`80\%`$ compared to the LO result, which originates from the NLO corrections to the resolved matrix elements.
We conclude, that the NLO resolved contribution supplies higher order terms in two ways, first through the NLO corrections in the hard scattering cross section and second in the leading logarithmic approximation by evolving the PDF’s of the virtual photon to the chosen factorization scale. This way we sum the logarithms in $`E_T^2/Q^2`$, which, however, in the considered kinematical region is not an important effect numerically, as we have seen in Tab. 2. Therefore, the enhancement of the NLO direct cross section through inclusion of resolved processes in NLO is mainly due to the convolution of the point-like term in the photon PDF with the NLO resolved matrix elements, which gives an approximation to the NNLO direct cross section without resolved contributions. One of the dominant contributions to the forward cross section is shown in Fig. 1 (left part), where the photon splitting term is convoluted with a matrix element that provides two gluons in the final state. This way one gluon rung is added to the gluon ladder as compared to the corresponding NLO direct cross section, shown on the right of Fig. 1. The additional gluon in the NLO resolved term is in our approach calculated from perturbative QCD, producing an additional term which makes a contribution to the forward jet. In the BFKL approach for the forward jet cross section this extra gluon is part of the BFKL evolution. In contrast, the NLO direct term in Fig. 1 (right part) contains also additional gluons in the DGLAP evolution of the proton PDF, which, however, are not resolved, i.e., go to the proton remnant. Another way to generate a larger forward jet cross section would be to go to the NNLO corrections of the direct cross section, which has not been done yet. We think that including the NLO resolved component produces a reasonable approximation to this NNLO cross section. Such a correspondence is present at one order lower. As already remarked above, the superposition of the LO direct and resolved cross section is almost equal to the NLO direct cross section.
### 2.2 Comparison with ZEUS and H1 Data
For the comparison with the 1995 ZEUS and the 1994 H1 forward jet cross section data we calculated the NLO cross section with slightly different input and in particular with the exact kinematical constraints for the forward jet selection as used in the two experiments.
As proton PDF’s we now apply the CTEQ4M parametrization with the two-loop $`\alpha _s`$. We take $`N_f=5`$ as before and match the value of $`\alpha _s`$ at the thresholds $`\mu _R=m_c,m_b`$ with the $`\mathrm{\Lambda }_{\overline{MS}}`$ used in CTEQ4M. Jets are defined with the cone algorithm in the HERA frame as described above, except that the axis of the jet is now calculated as the transverse-energy weighted mean of $`\eta `$ and $`\varphi `$ of the two partons or jets belonging to the combined jet. This kind of jet definition was also applied in the experimental jet analysis. As scales we choose $`\mu ^2=\mu _R^2=\mu _F^2=M^2+Q^2`$ with a fixed $`M^2=50`$ GeV$`^2`$ related to the mean $`E_T^2`$ of the forward jet. We take this fixed value of $`M`$ instead of $`E_T`$ for technical reasons, since the calculations in JetViP start from the hadronic c.m.s. The choice $`\mu _F^2>Q^2`$ is mandatory if we want to include a resolved contribution. Another choice of scale would be $`\mu _F=E_T`$ or $`\mu _F=M`$. In that case $`\mu _F^2/Q^2>1`$ holds only for $`E_T^2/Q^2>1`$, which covers only part of the ZEUS kinematical range. To have a resolved cross section in all $`E_T^2/Q^2`$ bins we consider the choice $`\mu _F^2=M^2+Q^2`$ more appropriate.
In the two experiments the forward jet selection criteria are different. In the ZEUS experiment the kinematical constraints are: $`E_e^{\prime }>10`$ GeV, $`y>0.1`$, $`\eta >2.6`$ and $`E_T>5`$ GeV, with the forward jet constraints $`x_{jet}=E_{jet}/E_P>0.036`$, $`0.5<E_T^2/Q^2<2`$, $`p_{z,jet}^B>0`$ and $`4.5\times 10^{-4}<x<4.5\times 10^{-2}`$. Our results for the forward jet cross section under these ZEUS kinematical conditions are shown in Fig. 2 a,b. In Fig. 2 a we plot the full O($`\alpha _s^2`$) inclusive two-jet cross section (DIR) as a function of $`x`$ for three different scales, $`\mu ^2=3M^2+Q^2,M^2+Q^2`$ and $`M^2/3+Q^2`$, and compare it with the measured points from ZEUS . As is to be expected, the calculated NLO direct cross section is a factor of 2 to 4 too small compared to the data. The variation inside the assumed range of scales is small, so that even with a reasonable change of scales we cannot get agreement with the data. In Fig. 2 b we show the corresponding forward jet cross sections with the NLO resolved contribution included, as described in the previous subsection, again for the same three scales $`\mu `$ as in Fig. 2 a. Now we find good agreement with the ZEUS data. The scale variation of the calculated cross section is larger than in Fig. 2 a. In particular, the largest scale now gives the largest cross section, opposite to what is observed in Fig. 2 a. This different scale variation comes primarily from the scale dependence of the virtual photon PDF. This factorization scale variation is supposed to be compensated between the LO resolved and the NLO direct contribution, but not for the NLO resolved contribution. Such a compensation could only occur if we could include the NNLO direct contribution, which, however, is not available. Since the NLO resolved contribution to the forward jet cross section is rather large, as discussed above, this scale dependence cannot be avoided. On the other hand, the scale dependence is not so large that the results should not be trustworthy. In Fig. 2 b the cross section is labeled DIR$`_S`$+RES, where DIR$`_S`$ stands for NLO direct minus the photon-quark-antiquark splitting term. We have calculated the forward jet cross section also with the scale $`\mu _F^2=M^2`$. For this choice we obtain a somewhat smaller cross section, approximately equal to the cross section with the scale $`\mu _F^2=M^2/3+Q^2`$.
In the H1 experiment the forward jets are selected with the kinematical cuts $`E_e^{\prime }>11`$ GeV, $`y>0.1`$, $`160^{\circ }<\theta _e^{\prime }<173^{\circ }`$, $`1.735<\eta <2.794`$ (corresponding to $`7^{\circ }<\theta _{jet}<20^{\circ }`$), $`E_T>3.5(5.0)`$ GeV, $`x_{jet}>0.035`$ and $`0.5<E_T^2/Q^2<2`$. This corresponds approximately to the $`Q^2`$ range $`5<Q^2<100`$ GeV$`^2`$. The forward jet cross section is measured in various $`x`$ bins ranging from $`1.0\times 10^{-4}`$ to $`4.0\times 10^{-3}`$. Otherwise the calculated forward cross section is obtained under the same assumptions as for the ZEUS selection cuts. In Fig. 3 a,b,c,d we show the results compared to the H1 data obtained with two $`E_T`$ cuts in the HERA system, $`E_T>3.5`$ GeV (Fig. 3 a,b) and $`E_T>5.0`$ GeV (Fig. 3 c,d). In the plots on the left (Fig. 3 a,c) the data are compared with the pure NLO direct prediction, which turns out to be too small by a similar factor as observed in the comparison with the ZEUS data. In Fig. 3 b,d the forward jet cross section is plotted with the NLO resolved contribution included in the way described above. For both $`E_T`$ cuts, $`E_T>3.5`$ GeV (Fig. 3 b) and $`E_T>5.0`$ GeV (Fig. 3 d), we find good agreement with the 1994 H1 data inside the scale variation window $`M^2/3+Q^2<\mu ^2<3M^2+Q^2`$. Compared to the ZEUS data, the H1 measurements extend down to smaller $`x`$. In the lowest $`x`$ bin, $`1.0\times 10^{-4}<x<5.0\times 10^{-4}`$, the forward jet cross section has a dip, which is also reproduced by the theoretical calculation and which is due to the kinematical constraints for selecting forward jets.
We conclude that the NLO theory with a resolved virtual photon contribution gives a good description of both the ZEUS and the H1 forward jet data. It is important that both components, the direct and the resolved one, are calculated up to NLO. A LO calculation of both components would fall short of the experimental data, as is clear from the results presented in Tab. 2, where we compared LO and NLO results.
We remark that the forward jet cross sections as measured by ZEUS and H1 are obtained at the hadron level, i.e., the jets are constructed from measured hadron momenta using the same cone algorithm. In our NLO calculation the jets are combined from partons with the same jet algorithm. The size of the corrections from hadron to parton level has been studied by the ZEUS collaboration using several Monte Carlo simulation programs, with the result that for the models which account very well for the ZEUS forward jet cross sections the correction factors are close to unity for all $`x`$ values considered in the analysis. Similar results are reported in the H1 work .
In , the H1 forward jet data are also successfully described with the RAPGAP model, which includes a direct and a resolved component, both in LO, and uses a scale $`\mu `$ similar to ours. In addition this model has leading-logarithm parton showers in the initial and final state built in. We think that these parton shower contributions produce the higher order effects which we found necessary to account for the correct normalization and the $`x`$ dependence of the forward jet cross section data.
## 3 Concluding Remarks
We conclude that the measurements of the forward jet cross section presented recently by the ZEUS and H1 collaborations can be described very well by the NLO theory with direct and resolved virtual photon contributions added in a consistent way. The theory shows good agreement with the data with respect to the normalization as well as the functional dependence with decreasing $`x`$. Whereas this variation of the cross section with $`x`$ is also compatible with the NLO predictions based on direct photons alone, the resolved component up to NLO is needed for the correct normalization. To avoid double counting, the $`\gamma ^{*}\to q\overline{q}`$ splitting term is removed from the NLO direct contribution.
In contrast, LO BFKL predictions yield much larger forward cross sections than the data . These calculations suffer, however, from several deficiencies. They are asymptotic and do not contain the correct kinematic constraints on the produced jets . Furthermore they do not allow the implementation of a jet algorithm as used in the experimental analysis. Also, NLO $`\mathrm{ln}(1/x)`$ terms in the BFKL kernel predict large negative corrections, which are expected to reduce the forward cross section as well. When all these points are taken into account the BFKL approach may give an equally good description of the forward jet data. Even if this is the case, it is clear from this work that the BFKL theory is not the only theoretical approach that describes the forward jet cross sections.
### Acknowledgements
We are grateful to D. Graudenz, G. Grindhammer and H. Jung for interesting discussions and to D. Zeppenfeld for correspondence about his work with E. Mirkes.
# Conformal gravity and a naturally small cosmological constant (astro-ph/9901219 v2, February 26, 2001)
## Abstract
With attempts to quench the cosmological constant $`\mathrm{\Lambda }`$ having so far failed, we instead investigate what could be done if $`\mathrm{\Lambda }`$ is not quenched and actually gets to be as big as elementary particle physics suggests. Since the quantity relevant to cosmology is actually $`\mathrm{\Omega }_\mathrm{\Lambda }`$, quenching it to its small measured value is equally achievable by quenching not $`\mathrm{\Lambda }`$ but $`G`$ instead, with the $`G`$ relevant to cosmology then being much smaller than that measured in a low energy Cavendish experiment. A gravitational model in which this explicitly takes place, viz. conformal gravity, is presented, with the model being found to provide for a completely natural, non fine tuned accounting of the recent high $`z`$ accelerating universe supernovae data, no matter how big $`\mathrm{\Lambda }`$ itself actually gets to be. Thus to solve the cosmological constant problem we do not need to change or quench the energy content of the universe, but rather only its effect on cosmic evolution.
The recent discovery Riess1998 ; Perlmutter1998 that the current era deceleration parameter $`q(t_0)`$ is close to $`-1/2`$ has made the already extremely disturbing cosmological constant problem even more vexing than before. Specifically, with $`q(t_0)`$ being given in standard gravity by $`q(t_0)=(n/2-1)\mathrm{\Omega }_M(t_0)-\mathrm{\Omega }_\mathrm{\Lambda }(t_0)`$ \[where $`\mathrm{\Omega }_M(t)=8\pi G\rho _M(t)/3c^2H^2(t)`$ is due to ordinary matter (i.e. matter for which $`\rho _M(t)=A/R^n(t)`$ with $`A>0`$ and $`3\le n\le 4`$), and where $`\mathrm{\Omega }_\mathrm{\Lambda }(t)=8\pi G\mathrm{\Lambda }/3cH^2(t)`$ is due to a cosmological constant\], we see that not only must $`c\mathrm{\Lambda }`$ be non-zero, it must be of order $`3c^2H^2(t_0)/8\pi G=\rho _C(t_0)`$ in magnitude, i.e. it must be quenched by no less than 60 orders of magnitude below its natural value as expected from fundamental particle physics. Additionally, since such a quenched $`c\mathrm{\Lambda }`$ would then be of order $`\rho _M(t_0)`$ as well (the so-called cosmic coincidence), our particular cosmological epoch would then only be achievable in standard gravity if the macroscopic Friedmann evolution equation were fine-tuned at very early times to incredible precision. Any still to be found fundamental microscopic physics mechanism which might in fact quench $`c\mathrm{\Lambda }`$ by the requisite sixty orders of magnitude would thus still leave standard gravity with an additional macroscopic coincidence to explain.
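For orientation, the size of the required $`\mathrm{\Omega }_\mathrm{\Lambda }(t_0)`$ can be read off directly from this formula; in the sketch below the density parameters are illustrative fiducial values, not quantities taken from the text:

```python
# Deceleration parameter q(t0) = (n/2 - 1)*Omega_M - Omega_Lambda for
# ordinary matter with n = 3; the density parameters are illustrative
# fiducial values, not fitted ones.
n = 3
Omega_M, Omega_L = 0.3, 0.7
q0 = (n / 2 - 1) * Omega_M - Omega_L
print(q0)   # -0.55, close to the measured q(t0) ~ -1/2
```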
Since no mechanism has yet been found which might actually quench $`c\mathrm{\Lambda }`$, and since its quenching might not necessarily work macroscopically anyway, we shall instead turn the problem around and ask what can be done if $`c\mathrm{\Lambda }`$ is not in fact quenched and is actually as big as elementary particle physics suggests. To this end we note immediately that it would still be possible to have $`q(t_0)`$ be of order one today (the measurable consequence of $`c\mathrm{\Lambda }`$) if, instead of quenching $`c\mathrm{\Lambda }`$, we quench $`G`$, with the cosmological $`G`$ then being replaced by an altogether smaller $`G_{eff}`$. Since observationally $`\rho _M(t_0)`$ is known to be no bigger than $`\rho _C(t_0)`$, any successful such cosmological quenching of $`G`$ (successful in the sense that such relativistic quenching does not modify standard non-relativistic physics) would immediately leave us with a non-quenched $`c\mathrm{\Lambda }`$ which would then not suffer from any cosmic coincidence problem.
Given these remarks it is thus of interest to note that precisely such a situation obtains in the conformal gravity theory which has recently been advanced M1990 ; M1992 ; M1994 ; M1997 ; M1998 ; M1999 ; M2000 as a candidate alternative to the standard gravitational theory. Conformal gravity is a fully covariant gravitational theory which, unlike standard gravity, possesses an additional local scale invariance, a symmetry which when unbroken sets any fundamental cosmological constant and any fundamental $`G`$ to zero M1990 . Unlike standard gravity, conformal gravity thus has a great deal of control over the cosmological constant, a control which is found to be of relevance even after the conformal symmetry is spontaneously broken by the non-vanishing of a scalar field vacuum expectation value $`S_0`$ below a typical critical temperature $`T_V`$. In fact, in the presence of such breaking the standard attractive $`G`$ phenomenology is found to still emerge at low energies M1994 , while cosmology is found M1992 to instead be controlled by the effective $`G_{eff}=-3c^3/4\pi \hbar S_0^2`$, a quantity which, by being negative, immediately entails cosmic repulsion M1998 , and which, due to its behaving as $`1/S_0^2`$, is made small by the very same mechanism which serves to make $`\mathrm{\Lambda }`$ itself large.
Other than the use of a changed $`G`$, the cosmic evolution of conformal gravity is the same as that of the standard theory, viz. M1998 ; M1999 ; M2000
$`\dot{R}^2(t)+kc^2=-3c^3\dot{R}^2(t)(\mathrm{\Omega }_M(t)+\mathrm{\Omega }_\mathrm{\Lambda }(t))/4\pi \hbar S_0^2G\equiv \dot{R}^2(t)(\overline{\mathrm{\Omega }}_M(t)+\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t))`$
$`q(t)=(n/2-1)\overline{\mathrm{\Omega }}_M(t)-\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)`$ (1)
(Eq. (1) serves to define $`\overline{\mathrm{\Omega }}_M(t)`$ and $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)`$). Moreover, unlike the situation in the standard theory, where values for the relevant evolution parameters (such as the sign of $`\mathrm{\Lambda }`$) are only determined phenomenologically, in conformal gravity essentially everything is already a priori known. With conformal gravity not needing dark matter to account for non-relativistic issues such as galactic rotation curve systematics M1997 , $`\rho _M(t_0)`$ can be determined directly from luminous matter alone, with galaxy luminosity accounts giving a value for it of order $`0.01\rho _C(t_0)`$ or so. Further, with $`c\mathrm{\Lambda }`$ being generated by vacuum breaking in an otherwise scaleless theory, and since such breaking lowers the energy density, $`c\mathrm{\Lambda }`$ is unambiguously negative, with it thus being typically given by $`-\sigma T_V^4`$. Then, with $`G_{eff}`$ also being negative, $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)`$ is necessarily positive, just as needed to give cosmic acceleration. Similarly, the sign of the spatial 3-curvature $`k`$ is known from theory M2000 to be negative (at the highest temperatures the zero energy density required of a then completely conformal invariant universe is maintained by a cancellation between the positive energy density of ordinary matter and the negative energy density due to the negative curvature of the gravitational field), something which has been independently confirmed from a study of galactic rotation curves M1997 . Finally, since $`G_{eff}`$ is negative, the cosmology is singularity free and thus expands from a finite maximum temperature $`T_{max}`$, a temperature which for $`k<0`$ is necessarily greater than $`T_V`$ M1998 ; M1999 ; M2000 (so that a large $`T_V`$ entails an even larger $`T_{max}`$).
Given only that $`\mathrm{\Lambda }`$, $`k`$ and $`G_{eff}`$ are all negative, the temperature evolution of the theory is then completely determined for arbitrary $`T_{max}`$ and $`T_V`$, to yield M1998 ; M1999 ; M2000
$$\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)=(1-T^2/T_{max}^2)^{-1}(1+T^2T_{max}^2/T_V^4)^{-1},\overline{\mathrm{\Omega }}_M(t)=-(T^4/T_V^4)\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)$$
(2)
at any $`T`$. Thus, from Eq. (2) we see that simply because $`T_{max}\gg T(t_0)`$, i.e. simply because the universe is as old as it is, it immediately follows that $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t_0)`$ has to lie somewhere between zero and one today, no matter how big (or small) $`T_V`$ might be. Then, since $`T_V\gg T(t_0)`$, $`\overline{\mathrm{\Omega }}_M(t_0)`$ has to be completely negligible ($`\overline{\mathrm{\Omega }}_M(t_0)`$ is suppressed by $`G_{eff}`$ being small, and not by $`\rho _M(t_0)`$ itself being small), so that $`q(t_0)`$ must thus necessarily lie between zero and minus one today, notwithstanding that $`T_V`$ is huge. Moreover, the larger $`T_V`$ gets to be, the more $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t_0)`$ will be reduced below one, with it taking a value close to one half should $`T(t_0)T_{max}/T_V^2`$ be close to one. With $`\overline{\mathrm{\Omega }}_M(t_0)`$ being negligible today, $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t_0)`$ is therefore given as $`1+kc^2/\dot{R}^2(t_0)`$, a quantity which necessarily lies below one if $`k`$ is negative. Thus in a $`k<0`$ conformal gravity universe, once the universe has cooled enough, $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t)`$ will be forced to lie between zero and one no matter how big $`\mathrm{\Lambda }`$ may or may not be. The contribution of $`\mathrm{\Lambda }`$ to cosmology is thus seen to be completely under control in conformal gravity, with the theory thus leading us right into the $`\overline{\mathrm{\Omega }}_\mathrm{\Lambda }(t_0)\simeq 1/2`$, $`\overline{\mathrm{\Omega }}_M(t_0)=0`$ region, a region which, while foreign to standard gravity, is nonetheless still fully compatible with the reported supernovae data fits. Hence to solve the cosmological constant problem we do not need to change or quench the energy content of the universe, but rather only its effect on cosmic evolution. This work has been supported in part by the Department of Energy under grant No. DE-FG02-92ER40716.00.
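Equation (2) makes this pinning easy to verify numerically; in the sketch below the temperature ratios are invented purely for illustration (only the hierarchy $`T_{max}>T_V\gg T(t_0)`$ matters), and the signs follow the conventions above:

```python
# Evaluate Eq. (2) at the current temperature T(t0).  Temperatures are
# expressed in units of T(t0); the values of T_max and T_V below are
# invented for illustration.
def omega_lambda(T, T_max, T_V):
    return 1.0 / ((1.0 - (T / T_max)**2) * (1.0 + (T * T_max / T_V**2)**2))

def omega_matter(T, T_max, T_V):
    return -(T / T_V)**4 * omega_lambda(T, T_max, T_V)

T0 = 1.0
for T_max, T_V in [(1e8, 1e4), (1e12, 1e6), (1e10, 1e4)]:
    oL = omega_lambda(T0, T_max, T_V)
    oM = omega_matter(T0, T_max, T_V)
    print(f"T_max={T_max:.0e}, T_V={T_V:.0e}: "
          f"Omega_L={oL:.3g}, Omega_M={oM:.1e}, q(t0)~{-oL:.3g}")
# Omega_L(t0) always lands between 0 and 1, and is ~1/2 whenever
# T(t0)*T_max/T_V**2 ~ 1; Omega_M(t0) is utterly negligible, so
# q(t0) ~ -Omega_L(t0) lies between 0 and -1.
```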
# Velocity statistics in excited granular media
## I Introduction
Granular systems are often treated statistically, since the large number of degrees of freedom and the complexity of interparticle forces limit analyses on the particle level. What are the statistical properties of particles in an excited granular medium? In many respects the dynamics of granular media strongly resemble the dynamics of ordinary fluids and solids. Yet there are fundamental differences between granular solids and fluids and their molecular counterparts that make the answer to this question intriguing . Ordinary temperature does not lead to measurable velocity fluctuations due to the large mass of granular particles. When granular particles are excited by an external energy source, their thermal energies are much smaller than their kinetic energies. During collisions between particles, frictional forces and deformation near the points of contact lead to dissipation of kinetic energy. Therefore it is not surprising to find that kinetic energy is transformed into thermal energy when particles interact in an excited granular medium .
For strongly excited granular media, assumptions similar to those in kinetic theory are often made . Among these assumptions are that velocity distributions are Gaussian and that the mean energy is shared equally among the various degrees of freedom. However, recent numerical and theoretical research indicates that excited inelastic hard spheres can exhibit non-Gaussian velocity distributions , though the predicted velocity distributions differ considerably from each other. Some thermodynamic descriptions of granular media make use of the concept of entropy , or separate the dissipative degrees of freedom from conservative ones .
Vibrated granular media have played a special role in efforts to understand the dynamics of granular materials, in part because vibration is a convenient method of replacing the energy lost to friction and inelasticity. A variety of novel phenomena have been discovered in the past fifteen years : heaping and convection rolls , standing and traveling waves , oscillons , and fluidization .
In this paper, we study the simple case of a single layer of particles with vertical excitation, varying the fractional coverage of this layer and the amplitude of the driving waveform. We determine the velocity distributions precisely and compare them to various functional forms, an issue that previous experiments have left unresolved for granular particles that are free to move in three dimensions. Quantitatively studying particles in three dimensions has only recently been achieved in rotated cylinders using NMR techniques; it yielded important information about the segregation process . By observing a three dimensional excited granular medium from above, we are able to focus on the shapes of the distributions of the horizontal velocity components. A combination of otherwise identical white and black particles allows us to track individual particles even for high fractional coverages at high excitations where particles frequently move over each other. We measure the dependence of the variances of the distributions on the excitation amplitude, frequency and the fractional coverage. Sharing of energy in mixtures of granular materials is also investigated.
In Sec. II, we discuss the background for our experiments. In Sec. III we present the experimental setup. In Sec. IV, we show that, in most cases we treated experimentally, the velocity distribution $`P(v)`$ deviates measurably from a Gaussian and can be described by $`P(v)\sim \mathrm{exp}(-|v/v_c|^{1.5})`$. We show precisely how the variance increases with driving amplitude and changes with the fractional coverage $`c`$, where $`c=1`$ is the coverage for a compact crystal of beads. The energy sharing between particles of different types is also described here.
## II Background
A number of studies on excited granular media have addressed the issues of clustering and energy sharing. Numerical simulations and theoretical derivations have shown that the presence of inelasticity in granular flows can lead to the formation of clusters; as a consequence, equipartition of energy fails . Experiments performed in a horizontal two-dimensional layer were consistent with this predicted clustering effect . Vertical one-dimensional experiments and simulations were performed earlier ; a crossover from a condensed (clustered) to a fluidized state was found as a function of the driving acceleration, the number of beads, and the coefficient of restitution.
Deviations from equipartition due to clustering are straightforward to understand on physical grounds. Inelastic collisions imply a loss of energy each time a collision occurs. When particles begin to gather in a certain region of space, the rate of their collisions increases. The rate of energy loss for this group of clustered particles is thus greater, and the distribution of their velocities becomes narrower than that of the particles in less dense regions.
Clustering can affect the velocity statistics. Assuming a Gaussian velocity distribution for a nearly homogeneous granular medium, Puglisi et al. predict a non-Gaussian velocity distribution due to clustering: a superposition of Gaussian velocity distributions with different widths. For inelastic particles this leads to high velocity tails in the velocity distribution which decrease more slowly than a Gaussian function but faster than an exponential. The velocity distributions obtained in a simulation by Taguchi and Takayasu are power laws, resulting from clustering. These distributions have diverging variance; this calls into question the notion of a granular temperature, which is generally derived from a variance. In a two-dimensional simulation, Peng and Ohta found that the velocity distributions deviate from Gaussian behavior under the influence of gravity unless $`g\delta h\ll T`$, where $`\delta h`$ is the height of the region of observation. $`T`$ is the granular temperature (see e.g. Ref. ), defined as the variance of the velocity about the mean velocity
$$T=<(v^2-<v>^2)>.$$
(1)
The high energy tails of velocity distributions in a homogeneous granular fluid were investigated theoretically by Esipov and Pöschel for the unforced case, and by Noije and Ernst for both the unforced and the heated case, based on the Enskog-Boltzmann equation. In the free case, i.e. without energy input into the system, the velocity distribution approaches an exponential at high velocities. When energy is added to the granular medium randomly and uniformly throughout the system, the high velocity approximation becomes $`P(v)\sim \mathrm{exp}(-|v|^{1.5})`$. In order to compare our experimental results to this prediction we have to assume that energy input into the measured horizontal motion occurs randomly.
While measurements of velocity statistics in an excited granular medium have been carried out , few measurements precise enough to distinguish between different functional forms of the velocity distribution exist to our knowledge. One exception is a recent study of clustering and ordering near a peak acceleration of $`a=1`$ g by Olafsen and Urbach , who found significant deviations from Gaussian distributions, at low and especially at high velocities, where the distributions become exponential.
Deviations from equipartition can occur for reasons other than clustering. For example, Knight and Woodcock studied a vibrationally excited granular system theoretically (without gravity) and concluded that equipartition need not be observed at high amplitude of excitation due to the anisotropy of the energy source. A two dimensional system (a vertical Hele Shaw cell) was studied experimentally by Warr, Huntley, and Jacques . Velocity distributions, though roughly Gaussian, were checked and found to exhibit anisotropy between the vertical and horizontal motion: the horizontal velocity distribution was narrower than the vertical one. Grossman, Zhou, and Ben-Naim considered a two-dimensional granular gas (without gravity) from a theoretical quasi-continuum point of view and found the density to be nonuniform and the velocity distributions to be asymmetric for “thermal” energy input from one side. McNamara and Luding considered the sharing of energy between rotational and translational motion of the particles, and found a significant violation of equipartition.
The scaling of the granular kinetic energy with vibration amplitude is also of interest in connection with the experiments to be discussed in the present paper. It has been considered experimentally by Warr et al., numerically by Luding, Hermann, and Blumen , and theoretically by Kumaran in the nearly elastic limit of weak dissipation and also by Huntley in a simple model. The results of these different studies do not seem to be mutually consistent, perhaps because different regimes were explored; the situation remains unclear.
## III Experimental setup and methods
The experiments are conducted in a circular container of diameter 32 cm made of delrin. It is driven vertically with sinusoidal acceleration at a single frequency using a VTS500 vibrator from Vibration Test Systems Inc. A computer controlled feedback loop keeps the vibration amplitude constant and reproducible. The frequency $`f`$ used in most experiments is $`100`$ Hz and the peak acceleration $`a`$ of the plate is in the range $`3`$–$`8`$ g. The peak plate velocity $`v_p=a/2\pi f`$ lies between $`4.7`$ cm/s and $`14.1`$ cm/s. The particles are glass beads $`4`$ mm in diameter (from Jaygo Inc.) with fractional coverage $`c`$. A glass cover at $`2`$ cm height prevents the beads from escaping from the container. Collisions with the cover are rare for most experimental parameters but lead to measurable changes in the velocity distributions at the largest accelerations if the coverage is low. Some charging of the glass beads is noticeable when the beads are at rest, but electrostatic forces are negligible in the range of accelerations investigated here.
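As a consistency check on the driving parameters, the lower end of the quoted velocity range follows directly from the relation $`v_p=a/2\pi f`$:

```python
# Peak plate velocity v_p = a/(2*pi*f) for sinusoidal driving.
import math

g = 9.81                      # m/s^2
f = 100.0                     # Hz, the frequency used in most runs
a = 3.0 * g                   # lower end of the acceleration range
v_p = a / (2.0 * math.pi * f)
print(f"v_p = {100 * v_p:.1f} cm/s")   # ~4.7 cm/s, as quoted above
```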
Our objective is to measure the horizontal velocity distributions in an excited three dimensional granular medium over a large range of particle densities. Particle tracking becomes increasingly difficult as particle tracks approach each other and cross frequently at large $`c`$, thus hindering reliable identification of the horizontal positions of individual particles. If, however, only a modest number of particles are reflective, the frequency of collisions between them is small and nearly independent of coverage or excitation. By tracking these test particles we can also directly compare the velocity distributions for different coverage.
Tracking a subset of the particles is accomplished by using some white glass beads among black glass beads. Except for the color, the black and white beads have identical physical properties. Stainless steel beads of three different diameters replaced the white glass beads in some experiments. The material properties of all particles we used are listed in table I.
Images of an area $`16.74\times 15.70`$ cm at the center of the container are taken at a resolution of $`512\times 480`$ pixels using a fast camera (SR-500, Kodak Inc.) operated at $`250`$ or $`500`$ frames/s. At a vibration frequency of 100 Hz this ensures that images are taken at 5 different phases relative to the phase of the plate vibration, yielding the average energy throughout the cycle. The images are analyzed using IDL (Research Systems Inc.) software. First, each image is enhanced using a bandpass filter and thresholding to eliminate noise. The positions of all bright particles are found from the enhanced image by calculation of their centroid; this defines the particle positions reproducibly to within less than $`0.1`$ pixels. Small effects due to the finite pixel size are sometimes noticeable as slightly increased probabilities of particle displacements that are multiples of the pixel width ($`0.0327`$ cm in the physical system). This displacement corresponds to $`v=8.18`$ cm/s in most experiments. The high frame rate ensures that even the fastest beads move less than one particle diameter between images. This allows accurate tracking of all bright beads for all $`546`$ sequential frames (the maximum available with our camera) with a typical precision of $`\pm 2\%`$ for the velocity measurements. We project each step onto two perpendicular directions, and study the statistics of each velocity component. One concern was that as a black bead moves over a bright bead, the centroid position could move away from the center of the bead and possibly alter the measured velocity distribution. As a test we thus eliminated particles whose integrated greyscale intensity changed rapidly, and found that eliminating these points does not measurably alter the velocity distribution. However, if a collision occurs between frames, our measurements indicate the average of the velocity prior to and after the collision. The measured distribution of velocities will therefore probably be slightly closer to a Gaussian than the real distribution of velocities. To improve the data in this regard, it would be necessary to sample faster while retaining the same relative accuracy in the velocity measurement. It would therefore be necessary to use a faster camera, to zoom in closer to the sample, and to extend the measurement over a significantly larger number of frames to obtain the same statistics.
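A minimal version of this detection step can be sketched as follows; it is a generic thresholded, intensity-weighted centroid computation, not the actual IDL pipeline used here:

```python
# Generic sketch of bright-bead detection: threshold a greyscale frame,
# label connected bright regions, and return intensity-weighted
# centroids (an illustration, not the IDL code used in the experiment).
import numpy as np
from scipy import ndimage

def bead_centroids(frame, threshold):
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))

rng = np.random.default_rng(0)
frame = rng.normal(10.0, 1.0, size=(480, 512))   # noisy background
frame[100:106, 200:206] += 100.0                 # one synthetic bead
print(bead_centroids(frame, threshold=40.0))     # ~[(102.5, 202.5)]
```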
## IV Experimental Results
### A Granular Temperature
Extracted particle tracks are shown in Fig. 1 for $`c=0.28`$ (half of the beads are white) and $`a=5`$g. For clarity, only tracks longer than $`200`$ images are shown, which eliminates some tracks close to the edge. A total of $`546`$ frames is acquired at $`250`$ frames/s (i.e. for $`2.18`$ seconds) with approximately $`200`$ tracked particles in each frame.
The particle velocities are determined from the particle displacement between consecutive frames. This does not always represent the true velocity of the particle though. If a collision occurs between frames, the apparent velocity will be lower than the true velocity. We can define an apparent temperature based on displacements along either horizontal coordinate denoted here as $`x`$; it depends on the time between frames $`\mathrm{\Delta }t`$:
$$T(\mathrm{\Delta }t)=<(x_j(t_k+\mathrm{\Delta }t)-x_j(t_k))^2>_{j,k}/\mathrm{\Delta }t^2,$$
(2)
where the average is taken over all particles ($`j`$) and frames ($`k`$). The particle tracks obtained at $`250`$ frames/s during an interval of $`2.18`$ s allow us to determine $`T(\mathrm{\Delta }t)`$ for $`1/250\mathrm{s}\le \mathrm{\Delta }t\ll 2.18\mathrm{s}`$. It is often useful to express the velocities in units of the peak plate velocity $`v_p`$, which yields a dimensionless temperature $`\stackrel{~}{T}(\mathrm{\Delta }t)=T(\mathrm{\Delta }t)/v_p^2`$. Fig. 2 shows the dimensionless temperature $`\stackrel{~}{T}(\mathrm{\Delta }t)`$ vs. frame rate ($`1/\mathrm{\Delta }t`$) at $`c=0.42`$ and $`f=100`$ Hz for different accelerations. For small frame rates, i.e. large $`\mathrm{\Delta }t`$, $`\stackrel{~}{T}(\mathrm{\Delta }t)`$ increases approximately linearly with the frame rate, $`\stackrel{~}{T}(\mathrm{\Delta }t)\propto 1/\mathrm{\Delta }t`$. This indicates that the particle motion may be described by an ordinary diffusion law when many collisions occur in the sampling interval $`\mathrm{\Delta }t`$. Assuming such a diffusion process, one expects that
$$\stackrel{~}{T}(\mathrm{\Delta }t)\approx \frac{\stackrel{~}{T}\tau _c}{\mathrm{\Delta }t}(\mathrm{for}\mathrm{\Delta }t\gg \tau _c).$$
(3)
The dashed and the solid lines in Fig. 2 are linear in $`1/\mathrm{\Delta }t`$ and give upper and lower limits to $`\stackrel{~}{T}\tau _c`$. We estimate that $`\stackrel{~}{T}\approx 1.0`$ based on the high frame rate limit. We can now estimate the collision time to be $`0.02\mathrm{s}<\tau _c<0.04\mathrm{s}`$ for $`f=100`$ Hz and $`c=0.42`$, which corresponds to one collision every $`2`$–$`4`$ oscillations of the vibrator. On the other hand, for large frame rates (small $`\mathrm{\Delta }t`$), $`\stackrel{~}{T}(\mathrm{\Delta }t)`$ approaches a constant. This occurs when $`\mathrm{\Delta }t`$ is much smaller than $`\tau _c`$; in this limit $`T(\mathrm{\Delta }t)\to T`$, so the granular temperature $`T`$ (for displacements along one axis) can be defined as
$$T=\underset{\mathrm{\Delta }t\to 0}{lim}T(\mathrm{\Delta }t)=<v^2>.$$
(4)
The approach of the measured granular temperature to a constant cannot be fitted by a simple exponential or power law. For an excited granular material, this shape could be influenced by correlations in velocity between neighboring particles and correlations between the local density and particle velocity. Starting from the fact that the sampling time is roughly $`10\%`$ of the mean collision time at an intermediate coverage, we estimate that the true granular temperature $`T`$ might be up to $`10\%`$ higher than the measured value.
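The estimator of Eq. (2), its $`\mathrm{\Delta }t\to 0`$ limit of Eq. (4), and the diffusive behaviour of Eq. (3) can be reproduced on synthetic data; in the sketch below the track is an artificial random walk whose velocity decorrelates on an assumed collision time, with all parameter values invented for illustration:

```python
# Schematic check of Eqs. (2)-(4) on a synthetic track: a velocity that
# decorrelates on a "collision time" tau_c (AR(1) process).  All
# parameter values are illustrative, not measured ones.
import numpy as np

def apparent_T(x, dt, lag):
    """Eq. (2): mean squared displacement over 'lag' frames divided by
    (lag*dt)^2, averaged over starting frames (and, in the experiment,
    over particles as well)."""
    dx = x[lag:] - x[:-lag]
    return np.mean(dx**2) / (lag * dt) ** 2

rng = np.random.default_rng(1)
dt = 1.0 / 250.0               # s, frame interval as in the experiment
v_rms, tau_c = 7.0, 0.03       # cm/s and s, illustrative values

a = np.exp(-dt / tau_c)        # frame-to-frame velocity correlation
v = np.empty(50000)
v[0] = rng.normal(0.0, v_rms)
for i in range(1, v.size):
    v[i] = a * v[i - 1] + (1.0 - a * a) ** 0.5 * rng.normal(0.0, v_rms)
x = np.cumsum(v * dt)

print("T(dt -> 0):", apparent_T(x, dt, 1))   # ~ v_rms**2, Eq. (4)
print("large lag :", apparent_T(x, dt, 200) * 200 * dt / tau_c)
# the last number is T(dt)*dt/tau_c at large lag and is of order T,
# as in Eq. (3) (up to an O(1) constant from velocity correlations)
```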
All values of $`T`$ and $`\stackrel{~}{T}`$ presented in this paper are the measured temperatures obtained at the highest frame rate (usually $`250`$ frames/s), where the limit $`\mathrm{\Delta }t\to 0`$ is justified. This limit is approached in a very similar way for different accelerations (see Fig. 2). This allows us to determine the acceleration dependence of $`\stackrel{~}{T}`$. Note that the collision time can be roughly independent of acceleration, since the height of the average bounce increases with increasing peak plate acceleration. For the fixed number of particles in our system this leads to an increase in the mean free path with increasing acceleration. When the number of particles is changed, the collision time also changes. As shown in Fig. 3, the limit of Eqn. (4) is approached fastest at the lowest coverage, $`c=0.14`$, and more slowly at $`c=0.42`$ and at $`c=0.98`$. However, we are close enough to the limit at all coverages (since the mean time between collisions is always significantly longer than the time between frames) to observe qualitatively how $`\stackrel{~}{T}`$ changes with $`c`$.
We can therefore determine the dependence of the temperature on the plate acceleration $`a`$ and the coverage $`c`$, shown in Fig. 4. The temperature increases with $`a`$ for all coverages. As a function of $`c`$, $`T`$ increases at low coverage, exhibits a maximum around $`c=0.30`$, and decreases with $`c`$ at high coverage. This trend reflects changes in the true granular temperature, as shown in Fig. 3. No measurable change in the temperature dependence occurs around $`c=1`$. A fractional coverage above unity is meant to indicate that more than one close packed layer of beads is used. The highest granular temperature indicates that the average potential energy of the particles corresponds to a mean height above the plate comparable to one particle diameter $`d`$. All experimental results are therefore limited to the regime of particle energies smaller than or comparable to the only characteristic energy of a granular material, the potential energy $`mgd`$ of raising one particle by one diameter. When scaled by the peak plate velocity $`v_p`$ as in Fig. 5, the granular temperature becomes independent of acceleration to within $`\pm 10\%`$ for most data points. Remarkably, the dependence on coverage follows approximately the same behavior at all accelerations.
The scaling of velocities by $`v_p`$ implies that $`T\propto 1/f^2`$. Fig. 6 shows, on a log-log plot, that $`T`$ does indeed decrease approximately as $`1/f^2`$ for the two accelerations and the three coverages shown. The plot covers two orders of magnitude in granular temperature, ranging from conditions where beads rarely hit the container lid to conditions where frequent collisions with the lid occur. We conclude that the scaling of the bead velocity by the plate velocity is very robust and is not significantly influenced by additional contacts with the container lid.
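A quick consistency check of this scaling is to divide each measured temperature by the squared peak plate velocity (for sinusoidal driving, $`v_p=a/2\pi f`$). The sketch below uses made-up sample values, chosen so that the ratio comes out near the $`\stackrel{~}{T}\approx 1`$ estimated above:

```python
# Sketch with made-up sample values: rescale T by v_p^2 = (a / 2 pi f)^2.
import numpy as np

g = 981.0                                    # cm/s^2
runs = [(3.0, 100.0, 22.0), (5.0, 100.0, 61.0), (8.0, 100.0, 155.0),
        (5.0, 40.0, 390.0), (5.0, 140.0, 31.0)]   # (a/g, f in Hz, T in cm^2/s^2)
for a_g, f, T in runs:
    v_p = a_g * g / (2.0 * np.pi * f)        # peak plate velocity for sinusoidal driving
    print(f"a = {a_g:.0f} g, f = {f:5.1f} Hz:  T / v_p^2 = {T / v_p**2:.2f}")
# If T ~ v_p^2, the last column is roughly constant -- equivalently T ~ 1/f^2 at fixed a.
```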
### B Velocity Distributions
The velocity distribution along one axis, obtained from particle tracks of white beads, is shown in Fig. 7. Fits to a Gaussian distribution $`F_g(v)=A_g\mathrm{exp}(-|v/\sqrt{2}v_c|^2)`$ are shown as dashed lines, and fits to the prediction of Ref.
$$F_2(v)=A_2\mathrm{exp}(-|v/1.164v_c|^{1.5})$$
(5)
are shown as solid lines. The characteristic velocity $`v_c`$ is defined as the square root of the variance, so that for both $`F_g`$ and $`F_2`$:
$$v_c=\sqrt{<v^2>}=\sqrt{T}.$$
(6)
The data points for the fits are weighted equally on a linear scale in Fig. 7(a,b) and equally on a logarithmic scale in Fig. 7(c,d). The characteristic velocity $`v_c`$ in (a,b) is $`v_c=6.19`$ cm/s for $`F_g`$ and $`v_c=6.99`$ cm/s for $`F_2`$. The Gaussian fit underestimates the probability of both low and high ($`v>3v_c`$) velocities, while the fit to $`F_2`$ describes the probability distribution quite well over three orders of magnitude in probability. The increased weight of the high velocity experimental data in (c,d) leads to $`v_c=7.66`$ cm/s for $`F_g`$ and $`v_c=6.81`$ cm/s for $`F_2`$. The Gaussian fit again underestimates high and low velocity probabilities, while the fit to $`F_2`$ proves to be insensitive to the weighting of data points, indicating a robust fit. The fit to $`F_2`$ is also insensitive to the choice of the fitted range of velocities, while the characteristic velocity decreases with decreasing fitting range for $`F_g`$. We conclude that $`F_2`$ provides a better fit than $`F_g`$.
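The fitting procedure itself is straightforward; a minimal sketch (with synthetic stand-in velocities, since the measured data are not reproduced here) is:

```python
# Sketch: fit a velocity histogram to F_g and F_2 (synthetic stand-in data).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
v = rng.normal(scale=7.0, size=200_000)      # placeholder velocities, cm/s
hist, edges = np.histogram(v, bins=81, density=True)
vm = 0.5 * (edges[1:] + edges[:-1])

def F_g(v, A, vc):                           # Gaussian with variance vc^2
    return A * np.exp(-np.abs(v / (np.sqrt(2.0) * vc)) ** 2)

def F_2(v, A, vc):                           # stretched exponential, Eqn. (5)
    return A * np.exp(-np.abs(v / (1.164 * vc)) ** 1.5)

for name, F in (("F_g", F_g), ("F_2", F_2)):
    (A, vc), _ = curve_fit(F, vm, hist, p0=(hist.max(), 5.0))
    print(f"{name}: v_c = {vc:.2f} cm/s")
# Fitting log(hist) instead weights the points equally on a logarithmic scale,
# emphasizing the tails as in Fig. 7(c,d).
```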
The velocity distribution for a large range of accelerations, $`3\mathrm{g}\le a\le 8\mathrm{g}`$, can be described accurately by $`F_2`$, as shown in Fig. 8(a). The data and the best-fitting lines are shifted vertically as needed for clarity; this amounts to multiplication by a constant on a log-linear plot. The probabilities are plotted against $`|\stackrel{~}{v}|^{1.5}`$, where $`\stackrel{~}{v}=v/v_p`$. On this log-linear scale, $`F_2`$ is a straight line, in good agreement with the experimental data over a large range of accelerations. The dependence of the velocity distribution on coverage is more complex and is shown in Fig. 8(b). At relatively high coverage, above $`c=0.28`$, the experimental data can be described well by $`F_2`$ (solid lines). However, at lower coverage (for approximately the same range of $`c`$ where $`T`$ increases with $`c`$) the probability of high velocities is underestimated by $`F_2`$ (dashed line).
While the granular temperature is proportional to $`1/f^2`$ to a good approximation, measurable differences are apparent in the distribution of non-dimensional velocities $`\stackrel{~}{v}`$ as a function of frequency. Fig. 9 compares the distributions for $`40`$ Hz and $`140`$ Hz. The distribution falls more slowly with velocity as $`f`$ is increased. The likely cause of this behavior is that the (unscaled) velocities are smaller at higher frequency, which decreases the collision rate (i.e. the rate of energy loss through inelastic collisions), while the frequency of vibration (i.e. roughly the rate of energy input) is higher. For the highest unscaled particle velocities and lowest excitation frequency ($`a=5`$ g and $`f=40`$ Hz) deviations from fits to $`F_2`$ become observable.
At low accelerations ($`a=2`$ g) we observe that the velocity distribution has exponential tails and an approximately Gaussian central component. The crossover from a Gaussian to an exponential distribution is shown in Fig. 10. These results are similar to the velocity distributions around $`a=1`$ g obtained by Olafsen and Urbach , and are possibly related to clustering effects observed at low accelerations. In addition, the system is nearly two-dimensional in this regime, since most beads bounce lower than one particle diameter.
It is possible to investigate the case of a freely cooling granular medium to some extent by looking at a time-averaged velocity distribution. We shut off the vibrator abruptly and simultaneously trigger the camera at $`250`$ frames/s. We then extract particle tracks from the image sequence in the same way as for image sequences of continuously excited granular media. In order to analyze the functional form of the velocity distribution, we need to accumulate velocities over $`150`$ frames, i.e. $`0.6`$ s. Fig. 11 shows such a velocity distribution for $`c=0.84`$, where the vibrator was operated at $`f=100`$ Hz and $`a=5`$ g prior to shut-off. The velocity distribution is exponential, in agreement with calculations for the free cooling case . However, the measured velocity distribution represents an average over almost the entire free cooling process, since the instantaneous velocities indicate that the granular temperature decreases by more than one order of magnitude during the averaging time. We have also made measurements over $`0.1`$ s; although the statistics are not as good, the distributions still appear to be exponential.
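A sketch of the corresponding analysis step, again with stand-in data, pools speeds over the cooling window and tests whether $`\mathrm{log}P(v)`$ is linear in $`|v|`$:

```python
# Sketch (stand-in data): accumulate speeds from the cooling window and check
# for an exponential distribution, i.e. log P(v) linear in |v|.
import numpy as np

rng = np.random.default_rng(2)
v = rng.laplace(scale=2.0, size=150 * 300)   # stand-in for ~150 frames of tracked velocities
hist, edges = np.histogram(np.abs(v), bins=40, density=True)
vm = 0.5 * (edges[1:] + edges[:-1])
ok = hist > 0
slope, intercept = np.polyfit(vm[ok], np.log(hist[ok]), 1)
print(f"log P(v) vs |v| slope = {slope:.2f}; a good linear fit signals an exponential tail")
```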
### C Equipartition
In a final set of experiments, the white glass beads were replaced by steel beads of different sizes (grade 100 stainless steel 316 for the two smaller sizes, grade 100 stainless steel 302 for the largest beads). At a total coverage of $`c=0.42`$, $`14\%`$ of the glass beads were replaced by steel beads (for the smallest bead size, only $`5\%`$ of the glass beads were replaced). In all experiments most collisions of steel beads therefore occur with glass beads. The distributions of $`\stackrel{~}{v}`$ for the tracked particles are shown in Fig. 12. Larger beads have smaller non-dimensional characteristic velocities $`\stackrel{~}{v_c}`$ and thus a smaller granular temperature $`T`$ than smaller beads. The velocity distributions for the two smaller steel bead sizes are described well by $`F_2`$, while that for the largest beads is better described by a Gaussian (dashed line). This could indicate that the large beads effectively prevent clustering, since they act as a source of momentum for the other beads. While the granular temperature of the largest steel beads is lower than the temperature of the glass beads, their energy $`mT`$ is larger, whereas the energy of the smaller steel bead sizes is smaller, as shown in Fig. 13.
## V Summary and conclusion
We have reported experimental studies of velocity statistics for a (fractional) layer of glass beads subjected to vertical vibration. The horizontal motion of a small subset of beads was measured using a high speed camera at a frame rate sufficiently high to measure instantaneous velocities accurately. The measurements were acquired over an interval substantially longer than the time between interparticle collisions. These capabilities allowed us to determine particle statistics of the horizontal motion in detail. We analyzed granular temperatures and velocity distributions for a large range of excitation frequencies, amplitudes, and coverages.
The variance of the particle velocity distribution (or the granular temperature $`T`$ of horizontal motion) varies approximately in proportion to the square of the plate velocity (Fig. 5). It increases with increasing coverage $`c`$ at low $`c`$ and decreases at higher $`c>0.42`$. On the other hand, the mean energy associated with the vertical motion probably declines with increasing $`c`$ for all $`c`$, due to additional dissipation from collisions. In this interpretation the decrease in $`T`$ with $`c`$ at high $`c`$ mirrors the decrease in energy associated with vertical motion. In contrast, the smaller values of $`T`$ found at low $`c`$ likely indicate that the energy transfer from vertical to horizontal motion becomes less efficient for $`c\lesssim 0.42`$.
We have shown that particles of different mass do not have the same kinetic energy or the same granular temperature when both are present simultaneously (Fig. 13), an apparent violation of equipartition. The most reasonable explanation is that all particles acquire similar vertical velocity fluctuations from the container. The more massive particles therefore obtain a larger vertical kinetic energy. This excess vertical energy is transferred to the horizontal motion, leading to a violation of equipartition.
An important result of this investigation is that in the steady state the velocity distributions deviate measurably from a Gaussian (Fig. 7), but can be described well by $`P(v)\propto \mathrm{exp}(-|v/v_c|^{1.5})`$ for broad ranges of frequencies $`f`$, accelerations $`a`$ and coverages $`c`$, in agreement with the theory of Ref. . In most of our experiments the time between interparticle collisions $`\tau _c`$ is comparable to the time between contacts with the plate. However, if forcing collisions become significantly less frequent than interparticle collisions (as for free cooling, Fig. 11), then the distribution approaches an exponential. This experimental observation is consistent with the numerical results of Puglisi et al. . This quantitative correspondence with experiment indicates that the theoretical and numerical approaches of Refs. to describing the statistical properties of granular particles are promising.
Puglisi et al. suggest that non-Gaussian behavior and clustering are indications of essentially the same particle dynamics. If so, our experimental results indicate that clustering must occur for a very large range of excitation amplitudes and frequencies.
Acknowledgments: This work was supported in part by the National Science Foundation under Grant No. DMR-9704301. We thank Eric Weeks and John Crocker for providing particle tracking software. We thank Yuhai Tu for valuable discussions. Technical support was provided by Bruce Boyes.
# Orbital ordering, Jahn-Teller distortion, and anomalous X-Ray scattering in Manganates
## Abstract
We demonstrate with LSDA+U calculations that x-ray scattering at the K edge of Mn is sensitive to orbital ordering in one energy range and to the Jahn-Teller distortion in another. Contrary to what is suggested by the atomic or cluster models used to date, we show that band structure effects, rather than local Coulomb interactions, dominate the polarization dependence of the K edge scattering, which is therefore sensitive to nearest-neighbour bond-length distortions and to next-nearest-neighbour orbital occupation. Based on this we propose a new mechanism for K edge x-ray scattering in the manganates, which we suggest is also applicable to transition metal compounds in general.
A very popular topic in strongly correlated 3$`d`$ transition metal systems has to do with the influence of orbital degeneracy and orbital ordering on the magnetic and magneto-electronic properties. It is well established in the early works of Khomskii and Kugel that the superexchange interactions between neighboring transition metal ions are strongly dependent on the spatial orientation of the occupied $`d`$ orbitals, leading even to sign changes in this interaction for different orbital occupations. Much-studied recent examples are the so-called colossal magnetoresistance materials containing trivalent Mn ions with four 3$`d`$ electrons, which in a cubic crystal field, together with the usual strong atomic Hund's rule coupling, result in an electronic configuration with three electrons with parallel spins in the threefold degenerate t<sub>2g</sub> level and the fourth electron in a twofold orbitally degenerate e<sub>g</sub> level. The remaining twofold orbital degeneracy is, in accordance with the Jahn-Teller theorem, lifted either by local lattice distortions or by ordering of the e<sub>g</sub> level occupation on neighboring ions, thereby strongly affecting the magnetic structure, which in turn stabilizes a particular orbital ordering. The electrical transport properties are also strongly affected, because of so-called double-exchange-like mechanisms in which the electron band widths are strongly influenced by the spin structure and, of course, by the relative orientation of occupied orbitals on neighboring ions. Similar situations can and do occur in Fe and Co oxides, which have also recently been subjects of intense investigation. A somewhat different influence of orbital ordering has been invoked to explain the very strange magnetic behavior of various V compounds, like LiVO<sub>2</sub> with its transition from a paramagnetic state to a nonmagnetic state at 500K, and YVO<sub>3</sub> with its magnetization reversal transitions at 90K and 77K. The problem of orbital ordering is thus quickly becoming a central issue in a broad range of materials. A challenge is to find an accurate and direct experimental method to determine the nature of the orbital ordering and also to study the changes which occur at magnetic and crystallographic transitions.
Orbital ordering manifests itself in the site-dependent orientation of the quadrupole moment resulting from the spatial distribution of the outermost valence $`d`$ electrons. Unfortunately, x-ray scattering under normal conditions is primarily controlled by the core electrons of atoms, and the sensitivity to the valence electron distribution is usually very low. However, as has been demonstrated recently for the manganates , the use of x-ray energies corresponding to K absorption edges can greatly enhance the sensitivity of x-ray scattering cross sections to the valence electron distribution, making a direct observation of orbital ordering possible. Since we really want to study the 3$`d`$ electron distribution, the use of an absorption edge corresponding to a transition directly into the empty $`d`$ states would have the highest sensitivity, but for the 3$`d`$ transition metals these involve 2$`p`$- or 3$`p`$-like core levels which are relatively shallow and therefore involve long wavelengths, restricting the information obtainable to systems with large lattice parameters. In the above mentioned K edge experiments the direct transitions to the $`d`$ states are not dipole allowed and therefore involve weak quadrupole transitions. So where does the sensitivity come from? Several models have been proposed, one indeed involving the quadrupole transitions and the other involving transitions to the empty 4$`p`$ band states, which will be influenced by the $`d`$ electron occupation because of the $`d`$-$`p`$ Coulomb interactions. Ishihara et al. recently used a nearest-neighbor MeO<sub>6</sub> octahedron to demonstrate the sensitivity of the Mn 4$`p`$ states to the $`d`$ electron orbital occupation, establishing a basis for this effect. However, in this model the 4$`p`$ states are atomic-like levels, whereas in the real solid the 4$`p`$ states form very broad bands, which at first glance would tend to wash out the influence of the local $`d`$-$`p`$ Coulomb interactions. For the analysis of the data it is important to have a good understanding of the origin of the effect in detail.
In this paper we present the results of a band structure study of the effect of orbital ordering on the 4$`p`$ density of states, and especially on the local symmetry-projected density of states. We find indeed that the 4$`p`$ bands are much broader than the $`d`$-$`p`$ Coulomb interactions, so that these interactions are unlikely to be the dominant effect. However, because the extended 4$`p`$ states hybridize so strongly, either directly or via the O states, with the 3$`d`$ states of the neighboring atoms, the local $`p_x`$-, $`p_y`$-, and $`p_z`$-projected density of states is extremely sensitive to the local distortion of the oxygen octahedron and to the $`d`$ orbital occupation of the neighboring Mn atoms. So by using the polarization and energy dependence of the scattering, as was done in the experiment , one can probe the orbital orientation of the occupied $`d`$ states on the neighboring Mn atoms at energies corresponding to the empty $`d`$ bands, and the Jahn-Teller distortion of the oxygen octahedron at energies corresponding to the 4$`p`$ bands. This mechanism is quite different from that previously reported, since it is not the core-hole parent atom's own orbital orientation which we claim is measured, but rather that of the neighboring atoms. In addition we present calculations both with and without the local Jahn-Teller distortions, so that the effects of a lattice distortion and of pure orbital ordering can be separated.
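The band-structure mechanism can be caricatured by a two-level toy model (our illustration, not part of the LSDA+U calculation): a Mn 4$`p`$ level that hybridizes with an occupied neighboring 3$`d`$ level acquires a direction-dependent energy shift simply because the hopping matrix element depends on the orientation of the occupied $`d`$ orbital. All numbers below are illustrative assumptions.

```python
# Toy two-level model (illustrative only): a Mn 4p level coupled by hopping t
# to an occupied neighbor 3d level. A larger overlap along x than along y
# pushes the p_x-derived level up more than p_y -- a purely band (hybridization)
# effect, with no on-site d-p Coulomb term.
import numpy as np

def levels(eps_p, eps_d, t):
    """Eigenvalues of the 2x2 p-d hybridization Hamiltonian."""
    return np.linalg.eigvalsh(np.array([[eps_p, t], [t, eps_d]]))

eps_p, eps_d = 4.0, -1.0                     # assumed level positions, eV
for label, t in (("p_x (strong d overlap)", 1.5), ("p_y (weak d overlap)", 0.3)):
    lo, hi = levels(eps_p, eps_d, t)
    print(f"{label}: bonding {lo:+.2f} eV, antibonding {hi:+.2f} eV")
```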
In order to address these problems we performed LSDA+U calculations on the prototypical orbitally ordered compound LaMnO<sub>3</sub> and especially studied the Mn 4$`p`$ DOS corresponding to the p<sub>x</sub> and p<sub>y</sub> states. It is necessary to realize that in standard LSDA, orbital polarization can be obtained only if the MeO<sub>6</sub> octahedra are distorted due to the Jahn-Teller effect. However, if one takes into account the on-site $`d`$-$`d`$ Coulomb interaction, the resulting ground state is an orbitally polarized insulator even for the crystal structure without Jahn-Teller distortion .
LaMnO<sub>3</sub> has an orthorhombic $`Pbnm`$ crystal structure which can be considered as a cubic perovskite with two types of distortions: the first is a tilting of the MnO<sub>6</sub> octahedra, so that the Mn-O-Mn angles become less than 180°, and the second is a Jahn-Teller distortion (shortening of two Mn-O bonds and elongation of a third one). The latter is usually considered responsible for the insulating ground state in LaMnO<sub>3</sub>. The configuration of the Mn<sup>+3</sup> ion in this compound is a high-spin $`d^4`$ state, represented by three $`t_{2g}`$ electrons and one $`e_g`$ electron. Because of the weak hybridization of the $`t_{2g}`$ orbitals with O2p, these states can be regarded as forming a localized spin 3/2. In contrast, the $`e_g`$ orbitals, which hybridize much more strongly with O2p, in the end produce rather broad bands. The strong exchange interaction with the $`t_{2g}`$ subshell leads to the splitting of the $`e_g`$ band into an unoccupied and a half-occupied subband.
The electronic structure of undoped LaMnO<sub>3</sub> was calculated by the LSDA+U method in the LMTO calculation scheme (based on the Stuttgart TBLMTO-47 computer code), with the values of U and J for the Mn 3$`d`$ electrons equal to 8 and 0.88 eV, respectively. For the Mn atoms, a basis of 4$`s`$, 4$`p`$, and 3$`d`$ orbitals was used, while for the La and O atoms it was 6$`s`$, 6$`p`$, 5$`d`$, 4$`f`$ and 2$`s`$, 2$`p`$, 3$`d`$ states, respectively.
As a first step we used the real crystal structure of LaMnO<sub>3</sub> (with Jahn-Teller distortions). The result of the self-consistent calculation was an orbitally ordered antiferromagnetic insulator with a band gap of 1.41 eV and a magnetic moment of 3.84$`\mu _B`$ per Mn atom. The $`e_g`$ band is split by this gap into two subbands: the occupied one with predominantly $`\varphi _1`$ character, and the empty one with $`\varphi _2`$ character (Fig. 1). Here, $`\varphi _1`$ is the $`3x^2-r^2`$ orbital of the first type of Mn atoms in the basal plane and the $`3y^2-r^2`$ orbital of the second type (in the same plane), in a cubic coordinate system which differs from the orthorhombic one by a 45° rotation around the $`c`$ axis. $`\varphi _2`$ denotes their $`y^2-z^2`$ and $`z^2-x^2`$ orbitals, respectively. The 4$`p`$ partial DOS of the first Mn type is shown in Fig. 2 (the coordinate system is the same as in Fig. 1). For the second Mn type the 4$`p`$ DOS has the same character but with the $`p_x`$ and $`p_y`$ states interchanged.
Before we discuss these results in more detail we note that there are two interesting regions in the 4$`p`$ partial density of states, and therefore in the K edge x-ray absorption spectra: the first about 1.7 eV above the Fermi energy, corresponding to the Mn 3$`d`$ e<sub>g</sub> empty states with some 4$`p`$ character mixed in, and the second between 12 and 32 eV, corresponding to states of mainly 4$`p`$ character. We note the very strong difference between the p<sub>x</sub> and p<sub>y</sub> densities of states, which is the reason for the strong anomalous polarization-dependent scattering at the K edge. In order to understand why these two regions are interesting, we performed a calculation of the LaMnO<sub>3</sub> electronic structure in the crystal structure of Pr<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub>, which does not have the Jahn-Teller distortion . This allows us to separate the contribution of the Jahn-Teller distortion to the polarization of the 4$`p`$ states from the influence of the Mn 3$`d`$ shell. This calculation was made in the same way as the previous one, and the result was also an antiferromagnetic insulator with orbital ordering of the Mn $`e_g`$ electrons, but with a smaller band gap (0.39 eV) and a magnetic moment of 3.81$`\mu _B`$ per Mn atom. The calculated $`d`$-orbital polarization has the same character for the real and the Pr<sub>1/2</sub>Sr<sub>1/2</sub>MnO<sub>3</sub> crystal structures (Fig. 1), so it is not shown here. The comparison of these two calculations (Fig. 2 and Fig. 3) allows us to conclude that the Mn 4$`p`$ orbital polarization in the second energy region is caused mainly by the Jahn-Teller distortion of the oxygen octahedra, and in the first one by pure orbital ordering of the Mn 3$`d`$ electrons. In order to confirm the latter supposition we performed an LSDA calculation of LaMnO<sub>3</sub> in the crystal structure without Jahn-Teller distortions and without the local Coulomb correction, resulting in a metal without any orbital ordering and also almost without polarization of the Mn 4$`p`$ states (Fig. 4). The magnetic moment is 3.17$`\mu _B`$ per Mn atom.
Recently, Murakami et al. have applied anomalous x-ray scattering techniques to LaMnO<sub>3</sub>. They observed a resonant-like and polarization-dependent character of the scattering intensity near the K edge of the Mn ion in this compound. The comparison of the calculated 4$`p`$ densities of states (Fig. 2) with the measured fluorescence (Fig. 2 in Ref. ) shows that they are very similar: the broad total 4$`p`$ DOS also has two peaks (in the region 12-32 eV), and the distance between them is about 13 eV. Next, according to this experiment, LaMnO<sub>3</sub> has the same type of orbital ordering as was proposed by Goodenough , and which we found in our calculations as well. Based on the splitting of the Mn 4$`p`$ states Murakami et al. proposed a theoretical description of the scattering mechanism, but they did not specify the origin of this splitting. From MeO<sub>6</sub> cluster calculations it was argued that the main reason for the Mn 4$`p`$ polarization is the Coulomb interaction between the Mn 3$`d`$ and 4$`p`$ electrons. This should lead to a raising of the energy of the $`p_x`$ orbital which is oriented along the direction of the occupied 3$`x^2`$-$`r^2`$ orbital (in the cubic coordinate system), but according to our calculation the picture is completely different. First of all, we note again that the polarization of the Mn 4$`p`$ orbitals in the region of 12-32 eV above the Fermi energy originates mainly from the Jahn-Teller distortion of the oxygen octahedra, which is only indirectly connected with the orbital ordering. Second, in contrast to the cluster calculation, the $`p_y`$ Mn orbital, which is perpendicular to the occupied 3$`x^2`$-$`r^2`$ orbital (in the cubic coordinate system and for one particular Mn atom), is almost absent in the region about 1.7 eV above the Fermi energy. This means that the hybridization of this orbital with the neighboring Mn 3$`d`$ orbitals has a much stronger influence on the band structure of the 4$`p`$ states than the $`d`$-$`p`$ Coulomb interaction on the same site.
Summarizing, the LSDA+U calculation for undoped LaMnO<sub>3</sub> demonstrates that there are two main contributions to the polarization dependence of the K edge scattering, both of which involve the 4$`p`$ orbitals and band structure effects. The first is caused by the hybridization of the Mn 4$`p`$ orbitals with the ordered Mn 3$`d`$ orbitals on neighboring Mn ions, either directly or via the intervening O orbitals. This effect is therefore sensitive to the $`d`$ orbital occupation not of the central Mn ion with the core hole but rather of the neighboring Mn ions, and occurs at energies corresponding to the empty Mn 3$`d`$ bands. We note that if the Mn atoms are not in lattice sites with inversion symmetry, the 4$`p`$ states can mix directly with the 3$`d`$ states of the same atom. In this case the orbital ordering of the core-hole parent atom will be measured directly. This would be the case, for example, in corundum-structure compounds like V<sub>2</sub>O<sub>3</sub>. The second effect originates from the hybridization of the central Mn 4$`p`$ states with states centered on the neighboring O ions and is therefore very sensitive to the local Jahn-Teller distortion, but at most weakly affected by the occupation of the Mn 3$`d`$ states. This effect is visible at about 10 eV higher energy, corresponding to the threshold of the predominantly 4$`p`$ band. We note that although we have demonstrated this for Mn, the physics described is very general and will be applicable to transition metal compounds in general, although details such as the energy scales will depend on the system.
This investigation was supported by the Russian Foundation for Fundamental Investigations (RFFI grants 96-15-96598 98-02-17275), and by the Netherlands Organization for Fundamental Research on Matter (FOM) with financial support by the Netherlands Organization for the Advance of Pure Science (NWO). The Groningen group also acknowledges the financial support of the EU via the TMR OXSEN network.
# Quantum Lifshitz Point
## I Introduction
Magnetic metals close to a zero-temperature phase transition have been a subject of active ongoing study , mainly due to their apparent defiance of the Fermi liquid paradigm. Near a quantum critical point, one usually finds neither a nearly constant linear specific heat coefficient $`\gamma =\delta C/T\approx const.`$, nor a virtually temperature-independent magnetic susceptibility $`\chi \approx const.`$, nor a $`T^2`$ resistivity $`\rho \approx const.+T^2`$, all normally expected of a Landau Fermi liquid . Instead, experiments reveal a host of anomalous thermodynamic, magnetic and transport properties calling for an adequate theoretical description.
In its current form, the scaling theory of quantum criticality in itinerant magnets is due to Hertz and Millis . It describes incipient magnetic ordering in an isotropic metal solely in terms of a boson order parameter whose correlation range in both space and time diverges at a quantum critical point. This divergence naturally leads to low-temperature anomalies in thermodynamic, magnetic and transport properties. Some of the early work on the subject was done by Mathon , Makoshi and Moriya , Ueda and Dzyaloshinskii and Kondratenko , who described magnetic transitions at $`T=0`$ using diagrammatic methods. These and other studies of nearly ferro- and antiferromagnetic metals, as well as of magnetic fluctuations in general, done in the spirit of self-consistent theory, were summarized in the book by Moriya . A phenomenological approach aiming at establishing relations between various critical exponents at a quantum critical point has been developed by Continentino .
The Hertz-Millis theory and its implications describe rather well some itinerant ferromagnets such as ZrZn<sub>2</sub> and, to a lesser extent, MnSi. In ZrZn<sub>2</sub>, moderate external pressure $`p`$ 8 kbar is enough to reduce the Curie temperature $`T_C`$ to zero. Near the critical pressure $`p_c`$, $`T_C`$ scales as $`(p_cp)^{3/4}`$ while the resistivity obeys $`\rho T^{1.6\pm 0.1}`$, in excellent agreement with the theory .
At the same time, several Ce-based antiferromagnets, where the Néel temperature can be tuned to zero by pressure or doping, appear to disregard the conclusions of the available theories. One of the most studied examples is CeCu<sub>6-x</sub>Au<sub>x</sub>, where at the quantum critical point the resistivity behaves as $`const.+T`$, and where over more than two decades in temperature the specific heat coefficient diverges as $`\mathrm{ln}(1/T)`$ while the inverse susceptibility fits $`\chi ^{-1}\approx const.+T^\alpha `$ with $`\alpha \approx 0.8\pm 0.1`$.
Another interesting case is CePd<sub>2</sub>Si<sub>2</sub>, a metal which orders antiferromagnetically below 10 K. Applied pressure reduces the Néel temperature $`T_N`$ from 10 K at ambient pressure to about 0.4 K at 28 kbar. Between 15 kbar and 28 kbar, $`T_N`$ falls linearly with pressure, and at 28 kbar CePd<sub>2</sub>Si<sub>2</sub> becomes superconducting below 0.4 K, in a narrow strip between 23 and 32 kbar . At 28 kbar, the resistivity of CePd<sub>2</sub>Si<sub>2</sub> exhibits a striking $`T^{1.2\pm 0.1}`$ behavior over about two decades in temperature .
Neither CeCu<sub>6-x</sub>Au<sub>x</sub> nor CePd<sub>2</sub>Si<sub>2</sub> appears to respect the available theoretical results, which, for a bulk antiferromagnet, would imply the Néel temperature scaling as $`T_N\propto (p_c-p)^{2/3}`$, a $`T^{1/2}`$ behavior of the specific heat coefficient $`\gamma `$, and the resistivity $`\rho \propto T^{3/2}`$. At the same time, theoretical results for two dimensions are much closer to what was observed in CeCu<sub>6-x</sub>Au<sub>x</sub>: in this case the theory does yield a $`const.+T`$ resistivity , a logarithmically divergent linear specific heat coefficient and essentially linear scaling of the Néel temperature with pressure . This led to an interpretation of the data on CeCu<sub>6-x</sub>Au<sub>x</sub> in terms of purely two-dimensional magnetic ordering in an otherwise perfectly three-dimensional metal with only moderate anisotropy .
An alternative interpretation , based on the neutron scattering data, pointed to a possible explanation in terms of highly anisotropic critical fluctuations, with the stiffness in one of the directions vanishing at or very close to the quantum critical point. The conspiracy of this anisotropy with the anomalous frequency exponent $`\alpha \approx 0.8\pm 0.1`$ of the critical fluctuations made it possible to fit together the large body of experimental data obtained at the quantum critical point in CeCu<sub>6-x</sub>Au<sub>x</sub>, including the specific heat, the uniform susceptibility and the neutron scattering scans.
This latter scenario would imply a residual quartic dispersion ($`q^4`$) of the critical fluctuations in the “soft” direction, thus also leading to a dimensionality reduction (by $`1/2`$, as opposed to the purely two-dimensional order) and hence to a qualitative modification of the Hertz-Millis theory.
Finally, it was noticed that the resistivity $`\rho \propto T^{1.2\pm 0.1}`$ of CePd<sub>2</sub>Si<sub>2</sub>, as well as $`\rho \propto T^{1.25\pm 0.1}`$ of CeNi<sub>2</sub>Ge<sub>2</sub> (which has the structure of CePd<sub>2</sub>Si<sub>2</sub> with a smaller unit cell), may also be explained by assuming anisotropic critical fluctuations with a residual quartic dispersion in one of the directions.
All these experimental findings suggest that vanishing stiffness may be the aspect of physics needed to describe the quantum criticality in Ce-based antiferromagnets by a Hertz-Millis type of theory. A critical point which, in addition to the onset of ordering, is characterized by the disappearance of stiffness in one or several directions is called a Lifshitz point . In this paper, I study a quantum Lifshitz point, a curious yet possibly experimentally relevant coincidence of a quantum critical point (onset of ordering at $`T=0`$) with a point where the stiffness vanishes in one or several directions in momentum space. I develop a scaling theory of the classical Gaussian region of the disordered phase near a quantum Lifshitz point in an itinerant three-dimensional magnet. I study a particular case of a Lifshitz point, where the incipient ordering exhibits anisotropic dispersion which is quartic in only one direction and quadratic in the remaining two.
The restriction to only the classical Gaussian region is due to the fact that in the other regions the low-temperature behavior is dominated by crossovers between various regimes and thus comparison to the experiment is rather hard to make. Moreover, the full phase diagram is sensitive to the relative strength of the coupling constants, whereas the results for the classical Gaussian region of a disordered phase are independent of this uncertainty and can be tested experimentally.
It is important to note that a Lifshitz point is a multicritical point and that, generally, one shall expect to find three phases in its vicinity, corresponding to one disordered and two different ordered states (see Section V). At the same time, none of the experiments on CeCu<sub>6-x</sub>Au<sub>x</sub> or CePd<sub>2</sub>Si<sub>2</sub> which I am aware of indicated the presence of more than one magnetically ordered phase. With this in mind, I show that, near a quantum Lifshitz point, one may indeed find only one ordered phase rather than two, which makes the model potentially relevant to CeCu<sub>6-x</sub>Au<sub>x</sub> , CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub> .
Bearing in mind that both CePd<sub>2</sub>Si<sub>2</sub> and CeCu<sub>6-x</sub>Au<sub>x</sub> are antiferromagnetic metals, one is led to assume that the critical mode is non-conserved and over-damped. Therefore this work may be viewed as an experiment-motivated extension of the Hertz-Millis theory to a particular case of a quantum Lifshitz point.
In Section II, I present the results and discuss them. I determine the equation of the critical line, the behavior of the order parameter susceptibility and the correlation length close to the quantum Lifshitz point. I also estimate the low-temperature behavior of the conductivity and the anomaly of the linear specific heat coefficient. Then I compare these theoretical results with the experimental findings .
In Section III, I derive the scaling equations, solve them in Section IV and obtain the results outlined in Section II. In Section V, I discuss the phase diagram near the quantum Lifshitz point and set the conditions for appearance of only one ordered phase.
To demonstrate that the quantum Lifshitz point I study is well defined, in Section VI I examine how interactions generate stiffness in the “soft” direction, and show that this effect can be neglected at all relevant momenta and frequencies. Finally, in Section VII, I summarize the results and comment on them. The appendix provides the calculation details.
Self-consistent treatment of a model with the stiffness vanishing in one of the directions in the momentum space has been given recently by C. Lacroix et al. with reference to the experimental data on YMn<sub>2</sub>-based materials, where the transition is first-order. The present work amounts to a renormalization group derivation of some of the results obtained in (staggered susceptibility), plus new results (specific heat coefficient, possible phase diagrams and the transition line equation) along with comparison to the experimental data for Ce-based metallic antiferromagnets undergoing second-order phase transition. It is also shown that generation of stiffness in the “soft” direction by short-distance fluctuations is negligible at all relevant momenta, and thus the quantum Lifshitz point studied in this paper is well-defined.
## II The results
The main results of this Section amount to establishing the leading low-temperature behavior of various quantities near a quantum Lifshitz point. This task is facilitated by the fact that the theory falls above its upper critical dimension. Moreover, since the stiffness in the “soft” direction, generated by the short-range fluctuations, turns out to be negligible (see Section VI), the thermodynamic and transport properties in the classical Gaussian region ($`T_N=0`$) may be obtained from the Gaussian action with this stiffness set equal to zero:
$$S[\varphi ]=\int _0^\beta d\tau \int dx\varphi _\alpha \left[\delta +|\partial _\tau |-\partial _{\perp }^2+\partial _{\parallel }^4\right]\varphi _\alpha ,$$
(1)
where $`\delta `$ is defined by the tuning parameter $`p`$ and by the feedback of the quartic interaction in the Ginzburg-Landau action (2-4) as per $`\delta =(p-p_c)/p_c+const.T^{5/4}`$. Thus the Néel temperature $`T_N`$ in this theory scales as $`T_N\propto (p_c-p)^{4/5}`$. At $`p=p_c`$, the correlation length $`\xi _{\perp }`$ scales as $`T^{-5/8}`$, whereas the correlation length $`\xi _{\parallel }`$ in the “soft” direction scales as $`T^{-5/16}`$.
The leading exponent of the specific heat coefficient is given by the Gaussian contribution to the free energy $`F`$:
$$F=Tr\mathrm{log}G^{-1}(q,\omega )=\int ^1dz\int ^1d^2q_{\perp }\int ^1dq_{\parallel }\mathrm{coth}\frac{z}{2T}\mathrm{arctan}\left[\frac{z}{q_{\perp }^2+q_{\parallel }^4}\right]$$
and yields the specific heat coefficient
$$\delta C/T\propto T^{1/4}.$$
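This exponent can be verified by brute force (my own check, not from the paper): the thermal part of the free energy above should scale as $`T^{9/4}`$ at low $`T`$, so that $`C=-T\partial ^2F/\partial T^2`$ yields $`\delta C/T\propto T^{1/4}`$. A minimal numerical sketch, with unit cutoffs as in the text:

```python
# Brute-force check (illustrative, unit cutoffs): the thermal part of F scales
# as T^{9/4} at low T, hence C ~ T^{5/4} and dC/T ~ T^{1/4}.
import numpy as np
from scipy.integrate import tplquad

def f_thermal(T):
    # coth(z/2T) - 1 = 2 n_B(z/T) isolates the T-dependent part of F
    integrand = lambda qpar, qperp, z: (2.0 / np.expm1(z / T)) * 2.0 * np.pi * qperp \
        * np.arctan(z / (qperp**2 + qpar**4))
    val, _ = tplquad(integrand, 1e-6, 1.0,                # z (outer)
                     lambda z: 0.0, lambda z: 1.0,        # q_perp
                     lambda z, q: 0.0, lambda z, q: 1.0)  # q_par (inner)
    return val

Ts = np.array([0.01, 0.02, 0.04])
Fs = np.array([f_thermal(T) for T in Ts])
slope = np.polyfit(np.log(Ts), np.log(Fs), 1)[0]
print(f"d ln F_thermal / d ln T = {slope:.2f}  (tends to 9/4 = 2.25 as T -> 0)")
```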
Resistivity due to scattering off anisotropic fluctuations can be estimated via the characteristic transport time $`\tau _{tr}`$ set by the fluctuations of the magnetic order parameter near the transition. Since the critical fluctuations are antiferromagnetic and thus characterized by a finite wave vector, the $`(1-\mathrm{cos}\theta )`$ factor in the transport relaxation rate may be omitted in a qualitative estimate, which leads to
$$1/\tau _{tr}\propto T\sum _{q,\omega }\langle \varphi ^2\rangle _{q,\omega }\propto T^{5/4},$$
as indeed realized by the authors of Ref. . Recently, it has been argued that the observed resistivity $`\rho (T)\approx const.+T^{1.2\pm 0.1}`$ may be a crossover phenomenon due to the interference between impurity scattering and scattering by the critical fluctuations, conspiring to mimic a power-law behavior. At the moment it remains to be seen how sensitive this mechanism may be to material-dependent factors such as the shape of the Fermi surface. At the same time, for a system with a true quantum Lifshitz point, the $`T^{5/4}`$ resistivity shall be accompanied by a $`C/T\propto T^{1/4}`$ scaling of the specific heat coefficient.
In CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>, the resistivity does exhibit a temperature exponent close to $`5/4`$. However, the rest of the available data is less encouraging for the present theory. The Néel temperature scales linearly with pressure instead of obeying $`T_N\propto (p_c-p)^{4/5}`$. To my knowledge, the specific heat data on CePd<sub>2</sub>Si<sub>2</sub> at the critical pressure is not yet available. The specific heat data on CeNi<sub>2</sub>Ge<sub>2</sub> is still ambiguous, as both a $`T^{1/2}`$ dependence and a $`\mathrm{log}T`$ dependence of $`C/T`$ have been reported .
One can indeed find formal reasons (such as the presence of the superconducting phase in CePd<sub>2</sub>Si<sub>2</sub>, or disorder in CeCu<sub>5.9</sub>Au<sub>0.1</sub>) why the Lifshitz point theory is not applicable to these materials. However, the $`T^{1.2}`$ scaling of the resistivity in CePd<sub>2</sub>Si<sub>2</sub> is seen at temperatures up to 60 K, which is more than two orders of magnitude greater than the superconducting transition temperature $`T_c\approx 0.4`$ K. At the same time, virtually linear scaling of $`T_N`$ with pressure persists up to about 5 K, which is again much greater than $`T_c`$. These arguments (as well as the recent finding of $`\omega /T`$ scaling and the inverse uniform susceptibility behaving as $`\chi ^{-1}\approx const.+T^{0.8\pm 0.1}`$ in CeCu<sub>5.9</sub>Au<sub>0.1</sub>) strongly suggest that the physics of these materials amounts to more than just a refined Gaussian theory above its upper critical dimension.
## III The scaling equations
The assumption of an overdamped critical mode with the stiffness vanishing in one of the directions leads to the following effective action close to the quantum Lifshitz point:
$`S_{eff}[\varphi ]`$ $`=`$ $`S^{(2)}+S^{(4)}`$ (2)
$`S^{(2)}`$ $`=`$ $`{\displaystyle \int _0^\beta }d\tau {\displaystyle \int dx\varphi _\alpha \left[\delta +|\partial _\tau |-\partial _{\perp }^2-D\partial _{\parallel }^2+\partial _{\parallel }^4\right]\varphi _\alpha }`$ (3)
$`S^{(4)}`$ $`=`$ $`{\displaystyle \int _0^\beta }d\tau {\displaystyle \int dx\left[u(\varphi _\alpha \varphi _\alpha )^2+v_1(\partial _{\parallel }\varphi _\alpha )^2\varphi _\beta \varphi _\beta +v_2(\varphi _\alpha \partial _{\parallel }\varphi _\alpha )^2\right]}.`$ (4)
with the frequency and momentum cut-offs set equal to unity. Here the mass term $`\delta `$ and the stiffness $`D`$ in the “soft” direction describe the deviation from the quantum Lifshitz point, $`\beta =1/T`$ is the inverse temperature, $`\partial _{\perp }`$ corresponds to the two “normal” directions, whereas $`\partial _{\parallel }`$ corresponds to the third, “soft” direction. The coupling constants $`v_1`$ and $`v_2`$ describe the dispersion of the quartic coupling constant $`u`$; they correspond to the only two possible linearly independent terms quadratic in $`\partial _{\parallel }`$ and quartic in $`\varphi _\alpha `$, and describe the generation of stiffness in the “soft” direction, as shown in Fig. 3.
The scaling variables in the theory are the mass term $`\delta `$, the stiffness $`D`$ in the “soft” direction, the temperature $`T`$ and coupling constants $`u`$, $`v_1`$ and $`v_2`$. To the lowest order in the latter three, the scaling equations can be derived e.g. by expanding the partition function $`Z`$ to first order in $`S^{(4)}`$, then integrating out a thin shell near the cut-off, rescaling the variables and the fields, and then comparing the result with the original action :
$`{\displaystyle \frac{dT(b)}{d\mathrm{ln}b}}`$ $`=`$ $`2T(b)`$ (5)
$`{\displaystyle \frac{du(b)}{d\mathrm{ln}b}}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}u(b)`$ (6)
$`{\displaystyle \frac{dv_{1,2}(b)}{d\mathrm{ln}b}}`$ $`=`$ $`-{\displaystyle \frac{3}{2}}v_{1,2}(b)`$ (7)
$`{\displaystyle \frac{d\delta (b)}{d\mathrm{ln}b}}`$ $`=`$ $`2\delta (b)+2(n+2)uf_1[T(b),\delta (b),D(b)]`$ (8)
$`+`$ $`(nv_1+v_2)f_2[T(b),\delta (b),D(b)]`$ (9)
$`{\displaystyle \frac{dD(b)}{d\mathrm{ln}b}}`$ $`=`$ $`D(b)+(nv_1+v_2)f_1[T(b),\delta (b),D(b)].`$ (10)
Here $`n`$ is the number of components of field $`\varphi _\alpha `$. The definitions of $`f_1[T(b),\delta (b),D(b)]`$ and $`f_2[T(b),\delta (b),D(b)]`$, as well as the details of the derivation, are given in the Appendix. Note that, following Millis , I denote the running value of a scaling variable (e.g. $`T(b)`$) by indicating explicit dependence on the rescaling parameter $`b`$, whereas for the bare (physical) quantities the $`b`$ dependence is omitted. Also note that truncating the equations at the first order in $`u`$, $`v_1`$ and $`v_2`$ is equivalent to the assumption that the bare value of these couplings is small.
An extra scaling variable $`D(b)`$ in equations (5-10), compared with those of Millis , leads to the appearance of two different ordered phases, to be described in Section V. This sets the main distinction between a quantum Lifshitz point and a simple quantum critical point of Hertz and Millis, where there is only one ordered phase.
## IV Solution of the scaling equations
In this Section, I obtain the qualitative solution of the scaling equations (5-10), following Millis and noticing that $`f_1[T(b),\delta (b),D(b)]`$ and $`f_2[T(b),\delta (b),D(b)]`$ virtually do not depend on $`\delta (b)`$ or $`D(b)`$ for $`\delta (b),D(b)\ll 1`$, and that both of them fall off rapidly as $`\delta (b)`$ exceeds unity. Thus one can neglect their dependence on $`\delta (b)`$ and $`D(b)`$ for $`\delta (b)<1`$ and stop the scaling at $`\delta (b)=1`$.
With these provisos, the formal solution of (5-10) reads
$`T(b)`$ $`=`$ $`Tb^2`$ (11)
$`u(b)`$ $`=`$ $`ub^{-1/2}`$ (12)
$`v_{1,2}(b)`$ $`=`$ $`v_{1,2}b^{-3/2}`$ (13)
$`\delta (b)`$ $`=`$ $`\delta b^2+2b^2(n+2)u{\displaystyle \int _0^{\mathrm{ln}b}}d\tau e^{-\frac{5}{2}\tau }f_1[Te^{2\tau }]`$ (14)
$`+`$ $`b^2(nv_1+v_2){\displaystyle \int _0^{\mathrm{ln}b}}d\tau e^{-\frac{7}{2}\tau }f_2[Te^{2\tau }]`$ (15)
$`D(b)`$ $`=`$ $`b\left[D+(nv_1+v_2){\displaystyle \int _0^{\mathrm{ln}b}}d\tau e^{-\frac{5}{2}\tau }f_1[Te^{2\tau }]\right].`$ (16)
As in the case of a simple quantum critical point , two major regimes exist, depending on whether the scaling stops when the running value of the temperature is much smaller or much greater than one. The first case is usually referred to as quantum while the second case is called classical.
To obtain the condition for the quantum regime, set $`T=0`$ on the right-hand side of (15), then integrate over $`\tau `$ up to $`\mathrm{ln}(b^{*})`$ such that $`\delta (b^{*})=1`$, solve for $`b^{*}`$, substitute $`b^{*}`$ into (11) and require $`T(b^{*})\ll 1`$. The resulting condition is
$$T\ll r_1,r_1\equiv \delta +\frac{4}{5}(n+2)uf_1[0]+\frac{2}{7}(nv_1+v_2)f_2[0].$$
(17)
If reversed, the inequality (17) corresponds to the classical regime, where it is convenient to divide the scaling trajectory into two parts, corresponding to $`T(b)<1`$ (quantum) and $`T(b)>1`$ (classical). For $`T(b)\gg 1`$, one can estimate $`f_{1,2}[T]`$ as $`f_1[T]\approx B_1T`$, $`f_2[T]\approx B_2T`$. Then, for $`T(b)\gg 1`$, the equations (6-10) may be recast in terms of the new variables $`U(b)\equiv u(b)T(b)`$, $`V_{1,2}(b)\equiv v_{1,2}(b)T(b)`$:
$`{\displaystyle \frac{dT(b)}{d\mathrm{ln}b}}`$ $`=`$ $`2T(b)`$ (18)
$`{\displaystyle \frac{dU(b)}{d\mathrm{ln}b}}`$ $`=`$ $`{\displaystyle \frac{3}{2}}U(b)`$ (19)
$`{\displaystyle \frac{dV_{1,2}(b)}{d\mathrm{ln}b}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}V_{1,2}(b)`$ (20)
$`{\displaystyle \frac{d\delta (b)}{d\mathrm{ln}b}}`$ $`=`$ $`2\delta (b)+2B_1(n+2)U(b)+B_2(nV_1(b)+V_2(b))`$ (21)
$`{\displaystyle \frac{dD(b)}{d\mathrm{ln}b}}`$ $`=`$ $`D(b)+B_1(nV_1(b)+V_2(b)).`$ (22)
The initial conditions correspond to $`b=\overline{b}`$ such that $`T(\overline{b})=1`$ and read
$`T(\overline{b})`$ $`=`$ $`1`$ (23)
$`U(\overline{b})`$ $`=`$ $`u(\overline{b})=uT^{1/4}`$ (24)
$`V_{1,2}(\overline{b})`$ $`=`$ $`v_{1,2}T^{3/4}`$ (25)
$`\delta (\overline{b})`$ $`=`$ $`{\displaystyle \frac{1}{T}}\left[r_1+A(n+2)uT^{5/4}\right]`$ (26)
$`D(\overline{b})`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{T}}}\left[r_2+{\displaystyle \frac{A}{2}}(nv_1+v_2)T^{5/4}\right],`$ (27)
where $`r_2`$ is defined by
$$r_2\equiv D+\frac{2}{5}(nv_1+v_2)f_1[0].$$
(28)
For $`T\gg r_1`$, both $`U(\overline{b})`$ and $`V_{1,2}(\overline{b})`$ are small, which justifies using linearized equations near $`b=\overline{b}`$. Neglecting the higher powers of $`T`$ and assuming that $`u`$ and $`v_{1,2}`$ are of the same order of magnitude, the solution of (18-22) is
$`T(b)`$ $`=`$ $`T(\overline{b})b^2`$ (29)
$`U(b)`$ $`=`$ $`U(\overline{b})b^{3/2}`$ (30)
$`V_{1,2}(b)`$ $`=`$ $`V_{1,2}(\overline{b})b^{1/2}`$ (31)
$`\delta (b)`$ $`=`$ $`b^2\left[\delta (\overline{b})+4B_1u(n+2)T^{1/4}\right]`$ (32)
$`D(b)`$ $`=`$ $`b\left[D(\overline{b})+2B_1(nv_1+v_2)T^{3/4}\right].`$ (33)
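As a consistency check (not in the original), the linearized flow (18-22) can be integrated numerically and compared with the asymptotic solution (32); all constants below are arbitrary test values:

```python
# Sketch: integrate the classical flow (18)-(22) and compare delta(b)/b^2 with
# the asymptotic estimate of Eqn. (32). All constants are arbitrary test values.
import numpy as np
from scipy.integrate import solve_ivp

n, B1, B2 = 3, 0.4, 0.7
y0 = [1.0, 0.05, 0.01, 0.01, -0.2, 0.1]     # T, U, V1, V2, delta, D at b = b_bar

def flow(s, y):                              # s = ln(b / b_bar)
    T, U, V1, V2, d, D = y
    return [2*T, 1.5*U, 0.5*V1, 0.5*V2,
            2*d + 2*B1*(n + 2)*U + B2*(n*V1 + V2),
            D + B1*(n*V1 + V2)]

s_end = 6.0
sol = solve_ivp(flow, (0.0, s_end), y0, rtol=1e-10, atol=1e-12)
print("numerical delta(b)/b^2 :", sol.y[4, -1] / np.exp(2 * s_end))
print("Eqn. (32) estimate     :", y0[4] + 4*B1*(n + 2)*y0[1])   # V-terms neglected, as in the text
```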
Now one can ensure the consistency of using the linearized equations by demanding that, when the scaling stops at $`\delta (b^{*})=1`$, the coupling constant $`U`$ is small:
$$U(b^{*})\ll 1.$$
(34)
This condition corresponds to the Ginzburg criterion and is violated only very close to the line $`\delta (b^{*})=0`$, which defines the Néel temperature $`T_N`$ as a function of $`u`$ and $`r_1`$:
$$r_1+(A+4B_1)(n+2)uT_c^{5/4}=0.$$
(35)
The Ginzburg criterion (34) is violated only in a narrow window $`\delta T_N/T_N\sim u^{1/3}T_c^{1/12}\ll 1`$ of strong classical fluctuations.
To establish connection with experiment, it is assumed that $`r_1`$ and $`r_2`$ are both proportional to the deviation $`(p-p_c)`$ of the control parameter $`p`$ from its critical value $`p_c`$. This is a reasonable assumption given that the theory at hand is above the upper critical dimension. Thus $`T_N`$ scales as per
$$T_N\propto (p_c-p)^{4/5}.$$
(36)
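As a simple illustration (with arbitrary coefficients), the line (35) can be solved for $`T_N`$ once $`r_1`$ is taken proportional to $`p-p_c`$:

```python
# Sketch: T_N from Eqn. (35) with r1 proportional to (p - p_c); coefficients
# are arbitrary illustrative numbers (p_c = 1 in these units).
n, u, A, B1 = 3, 0.05, 1.0, 0.4
c = (A + 4 * B1) * (n + 2) * u               # coefficient of T_N^{5/4} in Eqn. (35)
for p in (0.90, 0.95, 0.99):
    r1 = p - 1.0
    T_N = (-r1 / c) ** 0.8                   # solves r1 + c * T_N^{5/4} = 0
    print(f"p_c - p = {1.0 - p:.2f}  ->  T_N = {T_N:.3f}")
# T_N falls off as (p_c - p)^{4/5}, reproducing Eqn. (36).
```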
However, which two phases does the line (36) separate?
## V The phase diagram
To answer this question, one has to recall that, as mentioned above, a Lifshitz point is a multicritical point. Thus, generally, one shall expect to see two different ordered phases and one disordered phase in its vicinity, as illustrated by the following toy-model expression for the free energy $`F`$ at a finite-temperature Lifshitz point:
$$F\sim \varphi \left[\delta +Dq^2+q^4\right]\varphi .$$
(37)
As follows from (37), the region $`(D>0,\delta <0)`$ corresponds to the phase ordered at $`q=0`$, whereas in the region $`(D<0,\delta <(D/2)^2)`$ one finds ordering at wave vectors $`q=\pm \sqrt{-D/2}`$. This phase diagram is shown in Fig. 1.
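This classification is straightforward to reproduce numerically; the sketch below simply minimizes (37) over $`q`$:

```python
# Sketch: classify the toy free energy (37) by minimizing delta + D q^2 + q^4 over q.
import numpy as np

def phase(D, delta):
    q2 = max(-D / 2.0, 0.0)                  # minimizing wave vector squared
    f_min = delta + D * q2 + q2**2           # equals delta - (D/2)^2 when D < 0
    if f_min >= 0.0:
        return "disordered"
    return "ordered at q = 0" if q2 == 0.0 else f"ordered at q = +/-{np.sqrt(q2):.2f}"

for D in (0.5, -0.5):
    for delta in (0.1, -0.1):
        print(f"D = {D:+.1f}, delta = {delta:+.2f}: {phase(D, delta)}")
```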
To establish the phase diagram near a quantum Lifshitz point, one shall draw the curves $`\delta (b^{*})=0`$ and $`D(b^{*})=0`$ in the $`(T,p)`$ plane and compare with Fig. 1. The mutual position of the two curves depends on the relative magnitudes of $`u`$, $`v_1`$ and $`v_2`$, and on the signs of the proportionality coefficients between $`r_{1,2}`$ and $`(p-p_c)`$. Six possibilities arise, as shown in Fig. 2. As in Fig. 1, horizontal shading denotes “commensurate” ordering (at $`q=0`$), whereas zig-zag shading corresponds to “incommensurate” order (at $`q=\pm \sqrt{-D/2}`$). Case (d) can be ruled out on physical grounds, as it corresponds to ordering at high temperatures. The possibilities illustrated in Fig. 2 (a) and (b) are irrelevant to the results obtained for CePd<sub>2</sub>Si<sub>2</sub> or for CeCu<sub>6-x</sub>Au<sub>x</sub>, as they lead to the existence of two ordered states, whereas no trace of a second transition has been found in either of the materials of interest. The remaining cases (c), (e) and (f) are the most interesting for us, as they show how a quantum Lifshitz point can mimic a “regular” quantum critical point with only one ordered phase. Case (f) corresponds to “commensurate” ordering, while (c) and (e) describe “incommensurate” order. In the latter two cases, the transition line between the “incommensurate” and the disordered phases is described by the equation $`\delta (b)=(D(b)/2)^2`$, shown in Fig. 2 by a dashed line, with $`\delta (b)`$ and $`D(b)`$ given by (32-33). At low temperatures, this line asymptotically coincides with the line $`\delta (b)=0`$ given by (36).
Currently, the structure of magnetic order in CeCu<sub>6-x</sub>Au<sub>x</sub> is being mapped out experimentally by several groups . However, if a Lifshitz point described above is realized in CeCu<sub>6-x</sub>Au<sub>x</sub>, the exact character of ordering is irrelevant for the physical properties in the “classical Gaussian” region, roughly corresponding to the vicinity of the line $`(T>0,T_c=0)`$.
## VI Generation of stiffness
As shown in Section III (see Fig. 3), near the quantum Lifshitz point the short-range fluctuations do generate stiffness in the “soft” direction, even though it is absent exactly at the critical point. To check the importance of this effect at finite temperature, one has to compare the generated quadratic term $`Dq_{\parallel }^2`$ with the quartic term $`q_{\parallel }^4`$. The comparison can be done easily using the solution (11-16) of the scaling equations (5-10).
At each step of the renormalization procedure, one focuses on momenta and frequencies near the current value of the cut-off. Recalling the agreement to set the cut-off equal to unity, one finds that a rescaling factor $`b`$ corresponds to frequency and momenta $`q_{\perp }(b)=1/b`$, $`q_{\parallel }(b)=1/\sqrt{b}`$ and $`\omega (b)=1/b^2`$.
Thus the relative importance of the generated term $`D(b)q_{\parallel }^2(b)`$ is given by its comparison with the mass term $`\delta (b)`$ and with the quartic term $`q_{\parallel }^4(b)`$. To do the comparison in the classical Gaussian region which we are studying, one shall set $`r_1=r_2=0`$ and then, using (11-16), estimate $`\delta (b)`$ and $`D(b)`$ at a running value of $`b`$.
In the regime of “quantum” renormalization ($`T(b)\ll 1`$), the sought estimate is given by
$$\delta (b)\sim A(n+2)ub^{-1/2},$$
$$D(b)\sim A(nv_1+v_2)b^{-3/2},$$
$$D(b)q_{\parallel }^2(b)\sim A(nv_1+v_2)b^{-5/2}.$$
$$q_{\parallel }^4(b)\sim q_{\perp }^2(b)\sim |\omega |(b)\sim b^{-2}.$$
Applicability of our scaling equations requires weak coupling ($`u,v_{1,2}\ll 1`$). Thus, $`D(b)q_{\parallel }^2(b)`$ is negligible compared with $`q_{\parallel }^4(b)`$ at any value of $`b`$, up to the point where $`\delta (b)`$ and $`q_{\parallel }^4(b)`$ become of the same order of magnitude. Therefore, the stiffness generated by short-distance fluctuations under the renormalization flow can be safely neglected in our study, which in turn means that the quantum Lifshitz point is well-defined.
## VII Conclusions
In this paper, I studied the classical Gaussian region near a quantum Lifshitz point. This corresponds to defining the low-temperature behavior at the point where the Néel temperature and the stiffness in the “soft” direction simultaneously become equal to zero. The Néel temperature was found to scale as $`T_N\propto (p_c-p)^{4/5}`$. The specific heat coefficient was found to have a $`T^{1/4}`$ anomaly, whereas the resistivity was shown to exhibit $`T^{5/4}`$ scaling. Of these results, only the resistivity exponent finds experimental support (in CePd<sub>2</sub>Si<sub>2</sub> and CeNi<sub>2</sub>Ge<sub>2</sub>), while the prediction for the shape of the transition line appears to fail in fitting the experimental data. The situation with the specific heat data is not yet entirely certain.
Regardless of possible reasons, discussed briefly in Section II, the failure of simple phenomenology based on the assumption of a quantum Lifshitz point appears to be unambiguous for CePd<sub>2</sub>Si<sub>2</sub> and CeCu<sub>6-x</sub>Au<sub>x</sub>. More generally, it appears that the correct theory of the transition shall fall below the upper critical dimension, while the Hertz-Millis theory and its present refinement are all essentially Gaussian.
At the moment, the source of the puzzling behavior of these Ce-based materials near a zero-temperature transition remains unclear. A comprehensive experimental study (specific heat, neutron scattering, thermal transport and NMR/NQR/$`\mu `$SR) might help to resolve some of the pressing issues. An intriguing possible direction of theoretical research would be to study the yet poorly understood interplay between the incipient correlations in the conduction sea and the development of the Kondo effect .
I am indebted to G. Aeppli, P. Coleman, A. Millis, D. Morr, A. Rosch and A. Schröder for discussions related to this article, and to Y. Aoki and T. Fukuhara for discussions of their data. The work was started at Rutgers University, where it was supported by the National Science Foundation (grant number DMR-96-14999). Work at the University of Illinois was supported in part by the MacArthur Chair endowed by the John D. and Catherine T. MacArthur Foundation at the University of Illinois.
## Appendix
In this Appendix, I derive the scaling equations (5-10). The first term on the right hand side of each equation corresponds to rescaling of the variable under an infinitesimal time- and length-scale transformation. It can be obtained e.g. by following Hertz . Rewrite the action (2-4) in the momentum and frequency domain, replacing the sum over the Matsubara frequencies by an integral up to the cut-off (set equal to 1):
$`S_{eff}[\varphi ]`$ $`=`$ $`S^{(2)}+S^{(4)}`$ (38)
$`S^{(2)}`$ $`=`$ $`{\displaystyle \int ^1}{\displaystyle \frac{d\omega }{2\pi }}{\displaystyle \int ^1}{\displaystyle \frac{d^2q_{\perp }}{(2\pi )^2}}{\displaystyle \int ^1}{\displaystyle \frac{dq_{\parallel }}{2\pi }}\varphi _\alpha \left[\delta +|\omega |+q_{\perp }^2+Dq_{\parallel }^2+q_{\parallel }^4\right]\varphi _\alpha `$ (39)
$`S^{(4)}`$ $`=`$ $`\left[{\displaystyle \int ^1}{\displaystyle \frac{d\omega }{2\pi }}{\displaystyle \int ^1}{\displaystyle \frac{d^2q_{\perp }}{(2\pi )^2}}{\displaystyle \int ^1}{\displaystyle \frac{dq_{\parallel }}{2\pi }}\right]_{1,2,3}^3`$ (41)
$`\left[u(\varphi _\alpha \varphi _\alpha )^2+v_1(q_{\parallel }\varphi _\alpha )^2\varphi _\beta \varphi _\beta +v_2(\varphi _\alpha q_{\parallel }\varphi _\alpha )^2\right].`$
Then integrate out a thin shell between the original cut-off (equal to 1) and the new cut-off (equal to $`1/b`$) in the $`q_{\perp }`$ space. Now define thin shells in the $`\omega `$ and $`q_{\parallel }`$ spaces in such a way that upon being integrated out, they would admit rescaling of all the variables ($`\omega ,q_{\perp },q_{\parallel },\delta ,D,u,v_1,v_2`$ and $`\varphi _\alpha `$) so as to bring the remaining part of $`S^{(2)}`$ back exactly to the form (39) with the new values of $`\delta ,D,u,v_1`$ and $`v_2`$. The only choice of rescaling factors which allows this corresponds to rescaling $`q_{\perp }`$ by $`1/b`$, integrating out $`\omega `$ between $`1`$ and $`1/b^2`$ and rescaling it by $`1/b^2`$, integrating out $`q_{\parallel }`$ between $`1`$ and $`1/\sqrt{b}`$ and rescaling it by $`1/\sqrt{b}`$ – at the expense of rescaling $`\delta `$ and $`T`$ by $`b^2`$, $`D`$ by $`b`$, $`u`$ by $`1/\sqrt{b}`$ and $`v_{1,2}`$ by $`1/b^{3/2}`$, which corresponds precisely to the first terms on the right-hand side of (5-10).
Now, I will obtain the remaining terms in (9-10). First, expand the partition function $`Z`$ to first order in $`S^{(4)}`$:
$$Z=Z_0\left[1-\langle S^{(4)}\rangle \right].$$
Using Wick’s theorem, the average $`\langle S^{(4)}\rangle `$ can be conveniently rewritten as
$$\langle S^{(4)}\rangle =n(n+2)\,u\,\langle \varphi ^2\rangle ^2+n(nv_1+v_2)\,\langle \varphi ^2\rangle \,\langle q_{\parallel }^2\varphi ^2\rangle ,$$
where $`\langle \varphi ^2\rangle `$ and $`\langle q_{\parallel }^2\varphi ^2\rangle `$ are defined as per
$$\langle \varphi ^2\rangle \equiv \int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{z}{z^2+[\delta (b)+q_{\perp }^2+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}\qquad (42)$$

$$\langle q_{\parallel }^2\varphi ^2\rangle \equiv \int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{zq_{\parallel }^2}{z^2+[\delta (b)+q_{\perp }^2+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}.\qquad (43)$$
The next step amounts to integrating out a thin shell near the cut-off in the expression for $`\langle S^{(4)}\rangle `$. As described above, such a shell has width $`1-1/b`$ in the $`q_{\perp }`$ direction, $`1-1/b^2`$ in the $`z`$ direction and $`1-1/\sqrt{b}`$ in the $`q_{\parallel }`$ direction. This integration generates the sought terms in (9-10), with
$$f_1[T(b),\delta (b),D(b)]=\frac{1}{2\pi }\int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{z}{z^2+[\delta (b)+1+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}\qquad (44)$$
$$\qquad +\frac{2}{\pi }\,\mathrm{coth}\left(\frac{1}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{1}{1+[\delta (b)+q_{\perp }^2+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}\qquad (45)$$
$$\qquad +\frac{1}{4\pi }\int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\,\frac{z}{z^2+[\delta (b)+q_{\perp }^2+D(b)+1]^2},\qquad (46)$$
$$f_2[T(b),\delta (b),D(b)]=\frac{1}{2\pi }\int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{z}{z^2+[\delta (b)+1+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}\qquad (48)$$
$$\qquad +\frac{2}{\pi }\,\mathrm{coth}\left(\frac{1}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\int ^1\frac{dq_{\parallel }}{2\pi }\,\frac{q_{\parallel }^2}{1+[\delta (b)+q_{\perp }^2+D(b)q_{\parallel }^2+q_{\parallel }^4]^2}\qquad (49)$$
$$\qquad +\frac{1}{4\pi }\int _0^1\frac{dz}{\pi }\mathrm{coth}\left(\frac{z}{2T(b)}\right)\int ^1\frac{d^2q_{\perp }}{(2\pi )^2}\,\frac{zq_{\parallel }^2}{z^2+[\delta (b)+q_{\perp }^2+D(b)+1]^2},\qquad (50)$$
thus completing the derivation of the scaling equations (9-10).
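For a reader who wants to iterate the flow numerically, the shell function $`f_1`$ of Eqs. (44-46) can be evaluated by direct quadrature. The sketch below is an illustration rather than code from the original work; it assumes every integral up to the cut-off runs from 0 to 1, which the notation above leaves implicit:

```python
import numpy as np
from scipy.integrate import dblquad

EPS = 1e-9  # keep z away from 0, where coth(z/2T) diverges (z*coth -> 2T)

def coth(x):
    return 1.0 / np.tanh(x)

def f1(T, delta, D):
    """Quadrature evaluation of f1[T(b), delta(b), D(b)], Eqs. (44-46).

    Assumes all integrals run from 0 to the cut-off (= 1); the transverse
    measure d^2 q_perp/(2 pi)^2 is reduced to q_perp dq_perp/(2 pi) by
    the angular integration.
    """
    # q_perp shell (q_perp = 1): remaining z and q_par integrals
    t1 = dblquad(
        lambda qpar, z: coth(z / (2 * T)) * z
        / (z**2 + (delta + 1 + D * qpar**2 + qpar**4) ** 2),
        EPS, 1, lambda z: 0, lambda z: 1,
    )[0] / (2 * np.pi) / np.pi / (2 * np.pi)

    # frequency shell (z = 1): remaining q_perp and q_par integrals
    t2 = (2 / np.pi) * coth(1 / (2 * T)) * dblquad(
        lambda qpar, qperp: qperp
        / (1 + (delta + qperp**2 + D * qpar**2 + qpar**4) ** 2),
        0, 1, lambda q: 0, lambda q: 1,
    )[0] / (2 * np.pi) / (2 * np.pi)

    # q_par shell (q_par = 1): remaining z and q_perp integrals
    t3 = dblquad(
        lambda qperp, z: coth(z / (2 * T)) * z * qperp
        / (z**2 + (delta + qperp**2 + D + 1) ** 2),
        EPS, 1, lambda z: 0, lambda z: 1,
    )[0] / (4 * np.pi) / np.pi / (2 * np.pi)

    return t1 + t2 + t3

print(f1(T=0.1, delta=0.0, D=1.0))
```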
# Fusion versus Breakup: Observation of Large Fusion Suppression for 9Be + 208Pb
## Abstract
Complete fusion excitation functions for <sup>9</sup>Be + <sup>208</sup>Pb have been measured to high precision at near barrier energies. The experimental fusion barrier distribution extracted from these data allows reliable prediction of the expected complete fusion cross-sections. However, the measured cross-sections are only 68% of those predicted. The large cross-sections observed for incomplete fusion products support the interpretation that this suppression of fusion is caused by <sup>9</sup>Be breaking up into charged fragments before reaching the fusion barrier. Implications for the fusion of radioactive nuclei are discussed.
The recent availability of radioactive beams has made possible the study of the interactions and structure of exotic nuclei far from the line of stability. Unstable neutron–rich nuclei having very weakly bound neutrons exhibit characteristic features such as a neutron halo extending to large radii, associated low–lying dipole modes, and a low energy threshold for breakup. These features may dramatically affect fusion and other reaction processes. For fusion to occur, the system must overcome the barrier resulting from the sum of the repulsive Coulomb potential and the attractive nuclear potential. Experiments with stable beams have shown, however, that fusion near the barrier is strongly affected by intrinsic degrees of freedom (such as rotation, vibration) of the interacting nuclei, whose coupling with the relative motion effectively causes a splitting in energy of the single, uncoupled fusion barrier. This gives rise to a distribution of barrier heights, some higher and some lower in energy than the uncoupled barrier, and is manifested most obviously as an enhancement of the fusion cross–sections at energies near and below the average barrier.
In the case of halo nuclei, it is well accepted that the extended nuclear matter distribution will lead to a lowering of the average fusion barrier, and thus to an enhancement in fusion cross–sections over those for tightly bound nuclei. The effect of couplings to channels which act as doorways to breakup is, however, controversial. Any coupling will enhance the sub–barrier cross-sections, whereas breakup may result in capture of only a part of the projectile, thus suppressing complete fusion. Model predictions, however, differ in the relative magnitudes of enhancement and suppression. To investigate the effect of the loosely bound neutrons, fusion excitation functions in the barrier region were measured for <sup>9,11</sup>Be + <sup>238</sup>U and <sup>9,10,11</sup>Be + <sup>209</sup>Bi, each study including the reaction with the stable <sup>9</sup>Be for comparison. The fusion cross-sections for <sup>10,11</sup>Be + <sup>209</sup>Bi at energies near and below the barrier were found to be similar to those for <sup>9</sup>Be, while above the barrier the <sup>9</sup>Be induced reaction gave the lowest fusion yield. It is not obvious whether this is due to differing enhancement or suppression for the stable and unstable projectiles.
To investigate the effect on fusion of couplings specific to unstable neutron-rich nuclei, it is necessary to reliably predict the cross–sections expected in their absence. Thus, in the above cases, definitive conclusions are difficult unless fusion with <sup>9</sup>Be is well understood. This requires knowledge of the energy of the average fusion barrier, and ideally a measurement of the distribution of fusion barriers to obtain information on the couplings. All this information can be obtained from precisely measured fusion cross-sections $`\sigma _{\text{fus}}`$, by taking the second derivative of the quantity E$`\sigma _{\text{fus}}`$ with respect to energy E. This function, within certain limits, represents the distribution of barrier probability with energy. The shape of the experimental barrier distribution is indicative of the couplings present and its centroid gives the average barrier position. This information places severe constraints on the theoretical models. For this reason, precise measurements, permitting extraction of barrier distributions, have resulted in a quantitative and self–consistent description of the fusion cross–sections and barrier distributions for a wide range of reactions in which couplings to single–, double–phonon and rotational states are present.
This Letter reports on precisely measured complete and incomplete fusion cross–sections for the reaction of <sup>9</sup>Be + <sup>208</sup>Pb, and utilises the barrier distribution for complete fusion extracted from these data to determine quantitatively the suppression of fusion due to breakup of <sup>9</sup>Be.
The experiments were performed with pulsed <sup>9</sup>Be beams (1ns on, 1$`\mu `$s off) in the energy range 35.0 – 51.0 MeV, from the 14UD tandem accelerator at the Australian National University. Targets were of <sup>208</sup>PbS ($`>`$99% enrichment), 340 – 400 $`\mu `$g.cm<sup>-2</sup> in thickness, evaporated onto 15 $`\mu `$g.cm<sup>-2</sup> C foils. For normalisation, two monitor detectors, placed at angles of 22.5° above and below the beam axis, measured the elastically scattered beam particles. Recoiling heavy reaction products were stopped in aluminium catcher foils of thickness 360 $`\mu `$g.cm<sup>-2</sup>, placed immediately behind the target; the mean range of the fusion evaporation residues at the maximum bombarding energy is 130 $`\mu `$g.cm<sup>-2</sup>. The reaction products were identified by their distinctive $`\alpha `$–energies and half–lives (270 ns to 138 days). Alpha particles from short–lived activity (half–life $`T_{1/2}\le 26`$ sec) were measured in-situ during the 1$`\mu `$s periods between the beam bursts, using an annular silicon surface barrier detector placed 8 cm from the target, at a mean angle of 174° to the beam direction. These were measured at all beam energies using the same target. An un–irradiated target and catcher was used at each energy for determining the cross–section of long–lived products ($`T_{1/2}\ge 24`$ min). Alpha particles from these products were measured using a silicon surface barrier detector situated below the annular counter, such that the target and catcher could be placed 0.8 cm from the detector after the irradiation. The relative solid angles of the two detectors were determined using the $`T_{1/2}=`$24 minute <sup>212</sup>Rn activity.
Fission following fusion was measured during the irradiations using two position sensitive multi–wire proportional counters (MWPCs), each with active area 28.4$`\times `$35.7 cm<sup>2</sup>, centred at 45° and −135° to the beam direction, and located 18.0 cm from the target. Absolute cross–sections for evaporation residues and fission were determined by performing calibrations at sub-barrier energies in which elastically-scattered projectiles were detected in the two monitor detectors, the annular detector and the backward–angle MWPC.
The compound nucleus <sup>217</sup>Rn formed following complete fusion of <sup>9</sup>Be with <sup>208</sup>Pb cools dominantly by neutron evaporation; the measured cross–sections for 2n, 3n, 4n and 5n evaporation residues are shown in Fig. 1(a). No proton evaporation residues were observed. In addition to the $`\alpha `$–particles from the decay of Rn nuclei, $`\alpha `$–particles from the decay of Po nuclei, which are formed as daughters of the Rn nuclei following their $`\alpha `$ decay, were also observed. The yields were, however, much greater than expected from the Rn yields, indicating that there is also a direct population mechanism. Correcting for the Rn daughter yields, the cross-sections for the direct production of <sup>210,211,212</sup>Po nuclei are shown in Fig. 1(b). In principle the Po nuclei could originate from complete fusion followed by $`\alpha xn`$ evaporation. However the shapes of the excitation functions for these nuclei are distinctly different from those in Fig. 1(a), and are not typical of fusion–evaporation. For <sup>9</sup>Be + <sup>208</sup>Pb, prompt $`\alpha `$–particles, measured in coincidence with $`\gamma `$–ray transitions in Po nuclei, showed angular distributions inconsistent with fusion–evaporation, and production by an incomplete fusion mechanism was inferred. To investigate the origin of the Po yield by the $`\alpha `$–decay technique, the same compound nucleus <sup>217</sup>Rn was formed at similar excitation energies in the reaction <sup>13</sup>C + <sup>204</sup>Hg. The $`\alpha `$ spectra were measured between 1.1 and 1.3 times the average barrier energy. The <sup>211,212</sup>Po $`\alpha `$–decays, to which the measurement was most sensitive, had cross-sections of $`<`$5 mb, compared with a total of $`\sim `$160 mb for the <sup>9</sup>Be + <sup>208</sup>Pb reaction. Furthermore, the fusion cross-sections determined from the sum of the $`xn`$ evaporation and fission cross-sections agreed with the predictions of a coupled channels calculation and the Bass model, indicating that the $`xn`$ evaporation yield essentially exhausts the total evaporation residue cross-section. Combined, all these observations show that the direct Po production observed in the <sup>9</sup>Be reaction cannot be due to complete fusion. It is attributed to incomplete fusion, and will be discussed later. The observed fission cross-sections were attributed to complete fusion of <sup>9</sup>Be + <sup>208</sup>Pb, since fission following incomplete fusion should be negligible due to the lower angular momentum and excitation energy brought in, and the higher fission barriers of the resulting compound nuclei.
Defining complete fusion experimentally as the capture of all the charge of the <sup>9</sup>Be projectile, the complete fusion cross-section at each energy was obtained by summing the Rn $`xn`$ evaporation residue cross-sections and the fission cross-section. The excitation function for complete fusion is shown by the filled circles in Fig. 2(a), whilst Fig. 2(b) shows the experimental barrier distribution $`d^2(E\sigma _{\text{fus}})/dE^2`$, evaluated from these data using a point difference formula with a c.m. energy step of 1.92 MeV. The average barrier position obtained from the experimental barrier distribution is 38.3$`\pm `$0.6 MeV. The uncertainty was determined by randomly scattering the measured cross-sections, with Gaussian distributions of standard deviation equal to those of the experimental uncertainties, and re-determining the centroid. By repeating this process many times, a frequency distribution for the centroid position was obtained, allowing determination of the variance, and thus the uncertainty.
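Both the point-difference extraction and the Monte Carlo error estimate are straightforward to reproduce; the sketch below illustrates them (the uniform grid, the arrays and the positive-part weighting of the centroid are illustrative assumptions, not the actual analysis code):

```python
import numpy as np

def barrier_distribution(E, Esig, dE):
    """d^2(E sigma)/dE^2 by the central point-difference formula,
    for E*sigma sampled on a uniform c.m. energy grid of step dE (MeV).
    The real data need not be uniformly spaced; this assumes they are."""
    d2 = (Esig[2:] - 2.0 * Esig[1:-1] + Esig[:-2]) / dE**2
    return E[1:-1], d2

def centroid_and_error(E, sigma, err, dE=1.92, n_trials=2000, seed=1):
    """Monte Carlo estimate of the centroid of the barrier distribution
    and its uncertainty: scatter the cross-sections within their Gaussian
    errors and re-derive the centroid each time, as described in the text."""
    rng = np.random.default_rng(seed)
    cents = np.empty(n_trials)
    for i in range(n_trials):
        trial = sigma + rng.normal(0.0, err)
        Ec, D = barrier_distribution(E, Ec_sig := E * trial, dE)
        w = np.clip(D, 0.0, None)   # one simple choice: weight by positive part
        cents[i] = np.sum(Ec * w) / np.sum(w)
    return cents.mean(), cents.std()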
To predict the fusion cross-sections expected from the measured barrier distribution, realistic coupled channels calculations were performed using a Woods-Saxon form for the nuclear potential with a diffuseness 0.63 fm, depth $`\sim `$76 MeV and radius parameter adjusted such that the average barrier energy of these calculations matched that measured. Couplings to the 5/2<sup>-</sup> and 7/2<sup>-</sup> states of the K<sup>π</sup>= 3/2<sup>-</sup> ground–state rotational band in <sup>9</sup>Be, and to the 3<sup>-</sup>, 5<sup>-</sup> and the double octupole–phonon states in <sup>208</sup>Pb were included. The coupling strengths were obtained from experimental data, except for the double octupole–phonon states in <sup>208</sup>Pb, which were calculated in the harmonic limit.
The results of these calculations are shown in Fig. 2(a) and (b) by the dashed lines. They reproduce satisfactorily the asymmetric shape of the measured barrier distribution, but the area under the calculated distribution is much greater than that measured. The disagreement is necessarily reflected in the cross–sections as well, where the calculated values are considerably larger than those measured. In contrast, for fusion with tightly bound projectiles, calculations which correctly reproduce the average barrier position and the shape of the barrier distribution give an extremely good fit to the cross-sections, as expected. The disagreement for <sup>9</sup>Be + <sup>208</sup>Pb, even though the barrier energies are correctly reproduced, suggests the presence of a mechanism hindering fusion. Agreement can be achieved only if the calculated fusion cross-sections are scaled by 0.68, resulting in the full lines in Fig. 2(a) and (b). This scaling factor will be model dependent at the lowest energies, as the calculations are sensitive to the types of coupling and their strength. However, at energies around and above the average barrier, the calculation and hence the scaling factor is more robust, since changes in couplings or potential, within the constraints of the measured barrier distribution, do not change the suppression factor significantly. The suppression factor of 0.68 has an uncertainty of $`\pm `$0.07 arising from the uncertainty in the mean barrier energy. At above barrier energies, there is no evidence, within experimental uncertainty, for an energy dependence of the suppression factor, but a weak dependence cannot be excluded.
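The quoted suppression factor is a one-parameter fit of the calculation to the data. The closed form below is the standard weighted least-squares solution for a single multiplicative scale; it is a sketch with placeholder arrays, not the fitting code used here:

```python
import numpy as np

def best_scale(sigma_meas, err, sigma_calc):
    """Weighted least-squares scale s minimizing
    chi^2 = sum ((sigma_meas - s*sigma_calc)/err)^2, and its 1-sigma error."""
    s = np.sum(sigma_meas * sigma_calc / err**2) / np.sum(sigma_calc**2 / err**2)
    ds = 1.0 / np.sqrt(np.sum(sigma_calc**2 / err**2))
    return s, ds
```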
The observed suppression of fusion may be related to the large yields of <sup>212,211,210</sup>Po which, as shown above, do not result from complete fusion. They can be formed through the breakup of <sup>9</sup>Be, probably into <sup>4,5</sup>He or two $`\alpha `$ particles and a neutron, with subsequent absorption of one of the charged fragments by the <sup>208</sup>Pb. The capture of all fragments after breakup cannot be distinguished experimentally from fusion without breakup, and is included in the complete fusion yield. Incomplete fusion products following the breakup of <sup>9</sup>Be giving <sup>6,7,8</sup>Li were not observed; they are unfavoured due to large negative $`Q`$ values. The large cross–sections for incomplete fusion, approximately half of those for complete fusion, demonstrate that <sup>9</sup>Be has a substantial probability of breaking up into charged fragments. The sum of the complete and incomplete fusion cross-sections is indicated by the hollow circles in Fig. 2(a). They match the predictions of the coupled channels fusion calculation, suggesting a direct relationship between the flux lost from fusion and the incomplete fusion yields. However, such a simple direct comparison is not strictly possible, since the cross–sections for incomplete fusion may include contributions from higher partial waves which may not have led to complete fusion.
The suppression of fusion observed in this experiment is attributed to a reduction of flux at the fusion barrier radius due to breakup of the <sup>9</sup>Be projectiles. Depending on whether the breakup is dominated by the long range Coulomb or the short range nuclear interaction, different distributions of partial waves for complete fusion should result. Experimental investigations of the partial wave distributions are in progress. Comparison of the present results with those for lighter targets may give additional insights. Measurements have shown fusion suppression for such reactions at energies well above the fusion barrier, although contrary results also exist. Further measurements for lighter targets would be valuable.
In studies of breakup effects for neutron–rich unstable nuclei, the focus has been on the neutron separation energy , which for <sup>11</sup>Be is 0.50 MeV, compared with 1.67 MeV for <sup>9</sup>Be. This led to the expectation that <sup>11</sup>Be induced fusion cross–sections would be suppressed compared with those induced by <sup>9</sup>Be. However this was not borne out by measurement. The present experiment demonstrates that breakup into charged fragments affects fusion very significantly. The two most favourable charged fragmentation channels for <sup>9,11</sup>Be are:
$${}^{9}\text{Be}\rightarrow n+2\alpha ;\qquad Q=-1.57\text{ MeV}$$
$${}^{9}\text{Be}\rightarrow \alpha +{}^{5}\text{He};\qquad Q=-2.47\text{ MeV}$$
$${}^{11}\text{Be}\rightarrow \alpha +{}^{6}\text{He}+n;\qquad Q=-7.91\text{ MeV}$$
$${}^{11}\text{Be}\rightarrow \alpha +\alpha +3n;\qquad Q=-8.89\text{ MeV},$$
making <sup>9</sup>Be more unstable in this regard than <sup>11</sup>Be. Reactions with <sup>9</sup>Be therefore offer an excellent opportunity to study breakup and its effect on fusion, but they should not be taken as a stable standard against which to judge the breakup effects of their radioactive cousins.
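These $`Q`$ values follow directly from tabulated atomic mass excesses; the following quick check reproduces them (mass excesses in MeV from standard compilations, with the unbound <sup>5</sup>He entered at its resonance energy):

```python
# Mass excesses (MeV); 5He is unbound, so its value includes the
# ~0.89 MeV resonance energy above the n + alpha threshold.
delta = {'n': 8.071, '4He': 2.425, '5He': 11.39,
         '6He': 17.592, '9Be': 11.348, '11Be': 20.174}

def Q(parent, fragments):
    """Breakup Q value = Delta(parent) - sum of fragment Deltas."""
    return delta[parent] - sum(delta[f] for f in fragments)

print(Q('9Be',  ['n', '4He', '4He']))            # -1.57 MeV
print(Q('9Be',  ['4He', '5He']))                 # -2.47 MeV
print(Q('11Be', ['4He', '6He', 'n']))            # -7.91 MeV
print(Q('11Be', ['4He', '4He', 'n', 'n', 'n']))  # -8.89 MeV
```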
In summary, the precisely measured fusion excitation function for <sup>9</sup>Be + <sup>208</sup>Pb, allowing determination of the fusion barrier distribution, shows conclusively that complete fusion of <sup>9</sup>Be is suppressed compared with the fusion of more tightly bound nuclei. The calculated fusion cross-sections need to be scaled by a factor 0.68$`\pm `$0.07 in order to obtain a consistent representation of the measured fusion excitation function and barrier distribution. The loss of flux at the fusion barrier implied by this result can be related to the observed large cross–sections for Po nuclei, demonstrating that <sup>9</sup>Be has a large probability of breaking up into two helium nuclei, which would suppress the complete fusion yield. These measurements, in conjunction with breakup cross-sections and elastic scattering data, should encourage a complete theoretical description of fusion and breakup. Paradoxically, breakup of the stable <sup>9</sup>Be appears to be more significant than breakup of the unstable <sup>10,11</sup>Be in influencing the fusion product yields. This conclusion is favourable for using fusion with radioactive beams at near–barrier energies to form new, neutron-rich nuclei.
One of the authors (M.D.) acknowledges the support of a Queen Elizabeth II Fellowship. The work of K.H. was supported by the Japan Society for the Promotion of Science for Young Scientists; R.M.A., N.C., P.R.S.G. and A.S. de T. acknowledge partial support from CNPq, Brazil.
# Second Low Temperature Phase Transition in Frustrated UNi4B
Submitted to Phys. Rev. Lett. LA-UR-99-143
## Abstract
Hexagonal UNi<sub>4</sub>B is magnetically frustrated, yet it orders antiferromagnetically at T<sub>N</sub> = 20 K. However, one third of the U-spins remain paramagnetic below this temperature. In order to track these spins to lower temperature, we measured the specific heat $`C`$ of UNi<sub>4</sub>B between 100 mK and 2 K, and in applied fields up to 9 T. For zero field there is a sharp kink in $`C`$ at $`T^{*}\approx 330`$ mK, which we interpret as an indication of a second phase transition involving paramagnetic U. The rise in $`\gamma =C/T`$ between 7 K and 330 mK and the absence of a large entropy liberated at $`T^{*}`$ may be due to a combination of Kondo screening effects and frustration that strongly modifies the low $`T`$ transition.
UNi<sub>4</sub>B has been an object of intense experimental and theoretical study in the last several years. The main reason for such interest is a highly unconventional antiferromagnetically ordered state that this compound attains below the phase transition temperature of $`T_N=20`$ K. Only 2/3 of the U atoms order, with the rest remaining paramagnetic below $`T_N`$. The origin of such behavior must be sought in the frustrating nature of the triangular crystallographic lattice in which UNi<sub>4</sub>B forms.
The crystal structure of UNi<sub>4</sub>B corresponds to the hexagonal CeCo<sub>4</sub>B-type. The U- and Ni- (or B-) containing triangular planes are shown in Fig. 1. Within these planes both nearest-neighbor (nn) and next-nearest-neighbor (nnn) interactions are antiferromagnetic, with $`ab`$ an easy magnetization plane. Below $`T_N`$ this highly frustrated system partially orders, with a magnetic unit cell containing nine U atoms. Six of them form an in-plane vortex-like pattern, with neighboring U spins rotated by $`60^{\circ }`$. The remaining three U atoms remain paramagnetic and occupy two distinct positions: one is in the center of the vortex; the other two are between the vortices and are surrounded by three pairs of antiparallel ordered U spins. The U atoms are coupled ferromagnetically along the $`c`$-axis, creating in 3D an ordered array of ferromagnetic and paramagnetic chains.
A number of transport and thermodynamic properties were measured to investigate the ordered phase of UNi<sub>4</sub>B, both in zero and applied magnetic field. Resistivity in the $`ab`$ plane continues to rise below $`T_N`$, peaks at 5 K, and then drops rather sharply. The total range of variation in resistance over this temperature range is small, about 4%. The specific heat divided by temperature $`C/T=\gamma `$ initially drops below $`T_N=20`$ K, and starts rising again below 7 K. $`\gamma `$ continues to rise down to the lowest previously measured temperature of $`T=0.35`$ K, reaching $`0.5`$ $`\mathrm{J}/\mathrm{molK}^2`$. Application of magnetic field up to 16 T suppressed $`\gamma `$ by about a factor of 3. These results were taken as an indication that the Kondo effect plays an important role in determining the low temperature properties of UNi<sub>4</sub>B.
Several theoretical attempts were made to reproduce the unique partially ordered state below $`T_N`$ and interpret the low temperature specific heat. Initially, ferromagnetic fluctuations in the paramagnetic 1D chains were suggested to explain the low temperature upturn in $`\gamma `$. The specific heat calculated for a 1D Heisenberg ferromagnetic chain with $`S=1/2`$ and $`J_c=35`$ K gave a rather good representation of the measured low temperature tail in specific heat. An alternative viewpoint was taken by Lacroix et al. (Ref. ), where a model was developed to treat both geometric frustration and a possible Kondo interaction between the paramagnetic U spins and conduction electrons. The starting point of this model postulates that the 1D U chains along the $`c`$-axis are close to a magnetic-nonmagnetic instability between the ferromagnetic alignment of the U-spins and a 1D lattice of Kondo-screened zero spin U atoms. Within this model several ground states are possible depending on the strength of the nn and nnn exchange interactions ($`J_1`$ and $`J_2`$, respectively) as well as the energy $`\mathrm{\Delta }`$ (taken positive in the model) to create a magnetic chain and overcome Kondo screening. For sufficiently small values of $`J_1`$ and $`J_2`$ the Kondo effect dominates and results in a non-magnetic (NM) phase, with all U spins Kondo-compensated. In the intermediate range of $`J_1`$ and $`J_2`$, with the slight experimentally observed distortion taken into account, the stable structure is the observed mixed phase described above. To stabilize a different ground state, an additional interaction, perhaps due to a crystallographic distortion that makes $`J_1`$ and $`J_2`$ anisotropic, is required within this model.
Another approach treats the U’s as classical Heisenberg spins in the $`ab`$ plane. Again, nn and nnn interactions are taken into account, as well as interhexagon exchange coupling. For the appropriate choices of parameters, quantum fluctuations can destabilize the standard 120° (3-sublattice) Néel order, and minimization of the total energy gives the experimentally observed ground state. The calculated $`\gamma `$ has a broad maximum at 2 K, smoothly decreasing to zero as $`T\to 0`$, due to the dominant contribution of spin waves. Therefore, this model seems unable to reproduce the experimentally observed specific heat. One must, however, consider the possibility that the low temperature anomaly, due to the disordered 1/3 of U atoms, is superimposed upon such spin-wave behavior.
It should be possible to distinguish between these scenarios by performing specific heat measurements at lower temperatures and comparing the data with the detailed predictions of the 1D ferromagnetic chain and the Kondo models. This was the original motivation behind the measurements that are the subject of this Letter. Our zero-field heat capacity data collected down to 100 mK showed a sharp kink at a temperature of $`330`$ mK, not predicted by any of the theories discussed above. Further measurements in magnetic field up to 9 Tesla parallel to the $`a`$- or $`b`$-axis revealed a very unusual evolution of this feature with field. We propose that this anomaly is an indication of a phase transition involving the U spins that remain paramagnetic below $`T_N=20`$ K. We suggest that the results described below are qualitatively consistent with the model of Kondo screening of the U spins in the non-magnetic chains, which severely affects their transition at $`T^{*}`$.
The single crystal of UNi$`{}_{4}{}^{}\mathrm{B}`$ used in this experiment (with a mass of $`173`$ mg) was grown with Czochralski technique. Similarly produced samples were evaluated with microprobe analysis and neutron diffraction, and were found to be of high quality (no second phase and without disorder). Specific heat data were collected with a quasiadiabatic technique, where ruthenium oxide thick film resistors were used for thermometry. These resistors were previously calibrated as a function of temperature in a magnetic field against a thermometer placed in a field-free region of the apparatus.
Fig. 2 shows the specific heat data collected with magnetic field parallel to the $`a`$-axis (along the line connecting nearest in-plane U neighbors), where we plot both specific heat (a) and $`\gamma =C/T`$ (b) for magnetic field up to 9 Tesla. Not all available field data are shown in the figure for the sake of clarity. The anomaly in zero field appears as a clear kink in the specific heat at a temperature of $`330`$ mK. Just below this temperature the rise in $`\gamma `$ is interrupted, as shown in Fig. 2(b). Application of magnetic field initially moves the anomaly to higher temperature, with the temperature $`T^{*}`$ of the peak in $`\gamma `$ reaching a maximum at about 3 T. For still larger fields the anomaly first broadens, and then the broad hump in $`\gamma `$ shifts to higher temperatures for fields above 4 T. The field of 9 T completely destroys the anomaly, yet a pronounced low temperature tail emerges, which is present at all measured fields. The apparent insensitivity of the lowest temperature specific heat to magnetic field of up to 9 T indicates that the low temperature tail is likely due to the nuclear Schottky anomaly in large internal fields produced by the ordered U spins. Fig. 3 shows specific heat data collected with the field parallel to the $`b`$-axis (along the line connecting the U next nearest neighbors), where we plot only $`\gamma `$ vs. temperature. The peak initially moves slightly to higher temperature for fields up to 2 T, before turning around, and is shifted to zero by the field of 6 T. As in the case of $`\vec{H}\parallel \vec{a}`$, the field of 9 T completely suppresses the anomaly, and displays a low temperature Schottky-like tail.
To compare the data for different field orientations, we plotted $`T^{*}`$ as a function of magnetic field along both the $`a`$- and $`b`$-axis in Fig. 4. For the field along the $`a`$-axis the dependence is not monotonic, with a break at 4 T. Low temperature magnetic susceptibility measurements performed at 200 mK as a function of magnetic field in the same orientation ($`\vec{H}\parallel \vec{a}`$) show a change in slope (a kink) at a field of 4 T, perhaps indicating spin reorientation, or a crossover as observed in magnetoresistance. It is likely that the break in the behavior of $`T^{*}`$ vs. field at 4 T for $`\vec{H}\parallel \vec{a}`$ is related to the same phenomenon. For both $`\vec{H}\parallel \vec{a}`$ and $`\vec{H}\parallel \vec{b}`$, $`T^{*}`$ initially rises with field, though this feature is much more pronounced for $`\vec{H}\parallel \vec{a}`$. For the $`\vec{H}\parallel \vec{b}`$ orientation $`T^{*}`$ is suppressed smoothly to zero by the field of 6 T, indicating the absence of spin reorientation phenomena for this range of field and its orientation. Spin reorientation (metamagnetic) transitions in UNi<sub>4</sub>B have been observed in the past via both magnetization and resistivity measurements, with the zero-field structure more resilient to the field applied in the $`b`$- than in the $`a`$-direction. One of the very surprising features of the ordered phase of UNi<sub>4</sub>B below $`T_N=20`$ K was the absence of subsequent ordering of the U spins in the paramagnetic chains. These chains are coupled by the $`J_2`$ exchange interaction which appears to be dominant in the $`ab`$ plane. This strong interaction would be expected to drive the ordering of the paramagnetic chains as the temperature is lowered further below $`T_N`$. There are other examples of magnetic systems that display a cascade of ordering transitions, both insulating and itinerant. The insulating Ising triangular system CsCoBr<sub>3</sub> undergoes the first phase transition at 28 K, where, just as in the case of UNi<sub>4</sub>B, only 2/3 of the spins participate, with the remaining 1/3 of the spins ordering antiferromagnetically at 12 K, a temperature three times lower. In the case of UNi<sub>4</sub>B we can say now that the ordering does take place as well. However, the difference between the temperatures of the two observed phase transitions in UNi<sub>4</sub>B is much greater, a factor of 60. Yet, we expect the ferromagnetic coupling $`J_c`$ along the chains and the antiferromagnetic exchange interaction $`J_2`$ in the $`a`$-$`b`$ planes to drive both the high and low temperature phase transitions. We believe that the origin of the large difference between the ratios of the phase transition temperatures in the two systems lies in the fact that CsCoBr<sub>3</sub> is an insulator and UNi<sub>4</sub>B is a metal. Kondo screening of the paramagnetic U spins by conduction electrons in UNi<sub>4</sub>B plays a crucial role in suppressing the second antiferromagnetic ordering temperature $`T^{*}`$.
Within this scenario we can understand several features of the specific heat data. First is the size of the anomaly in specific heat associated with the low temperature phase transition. We do not see a step that would indicate a second order mean-field magnetic phase transition. Instead, it is manifested by a kink in the specific heat. The amount of entropy released at the low phase transition temperature $`T^{*}`$ is $`0.1`$ $`\mathrm{J}/\mathrm{molK}`$, forty times lower than the $`0.72R\mathrm{ln}2=4.15`$ $`\mathrm{J}/\mathrm{molK}`$ of magnetic entropy recovered at 25 K. At 2 K the entropy grows to $`0.57`$ $`\mathrm{J}/\mathrm{molK}`$, close to 30% of the $`1/3R\mathrm{ln}2`$ of the total entropy one can expect for the U spins in the paramagnetic chains (if the ground state is a doublet). There are two mechanisms at work that result in the small amount of entropy liberated by the lower transition. (i) Frustration affects both the low and high temperature ordering transitions. The strongest interaction that couples U spins is the ferromagnetic exchange $`J_c`$ along the $`c`$-axis. The frustration in the $`ab`$ plane prevents the ferromagnetic ordering from taking place at the mean-field temperature corresponding to $`J_c`$. As a result, $`T_N`$ is depressed and substantial entropy is released above $`T_N`$ via ferromagnetic fluctuations along the $`c`$-axis. Below $`T_N`$ the paramagnetic sites are not equivalent, some being in the center of the ordered vortices, position (1), and some being between the vortices, position (2). However, the magnetic field from the ordered U spins cancels at both sites, and the remaining U atoms can be viewed as a triangular lattice with nn exchange interaction $`J_2`$. Therefore, as the temperature is lowered further, U spins in the paramagnetic chains experience the frustration inherent to the triangular lattice. (ii) Kondo screening is present with characteristic temperature $`T_K\approx 9`$ K. Such screening with this $`T_K`$ alone would be expected to effectively reduce the U spins at $`T^{*}`$, absorbing most of the spin entropy. Frustration (or ferromagnetic fluctuations along the $`c`$-axis) in addition absorbs more entropy above $`T^{*}`$. As a result, most of the entropy associated with the paramagnetic U spins is liberated well above $`T^{*}`$, resulting in a small specific heat feature at the transition.
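The entropy figures quoted above come from integrating $`C/T`$; the following sketch illustrates that bookkeeping (the specific heat array is a placeholder, not the measured data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def entropy(T, C):
    """S(T) = integral of C/T' dT' from the lowest measured point
    (trapezoidal rule); T in K, C in J/(mol K)."""
    integrand = C / T
    dS = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(dS)))

# placeholder C(T): gamma = 0.5 J/(mol K^2) below 2 K, purely for illustration
T = np.linspace(0.1, 2.0, 400)
C = 0.5 * T
S = entropy(T, C)

print(S[-1], "J/(mol K) recovered up to 2 K")
print(S[-1] / (R * np.log(2) / 3.0), "as a fraction of (1/3) R ln 2")
```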
Secondly, the very unusual evolution of $`T^{*}`$ with magnetic field, displayed in Fig. 4, may also have its explanation in the Kondo screening of the paramagnetic U spins. Magnetic field is expected to break Kondo singlets. The increased spins on the paramagnetic sites would tend to order at a higher temperature. The reversal of this trend at higher magnetic field (especially pronounced for the $`\vec{H}\parallel \vec{b}`$ orientation) is most likely due to the usual tendency of the magnetic field to suppress the antiferromagnetic order.
Other scenarios may be invoked to explain the low temperature anomaly in specific heat at 330 mK. One possibility is a spin reorientation transition involving the spins that are ordered below $`T_N`$. This scenario may be at odds with the calculations of Ref. , which give only one configuration, with 2/3 of the U spins participating in antiferromagnetic ordering, as the stable ground state. Perhaps under some conditions other phases can be stabilized, involving spin reorientation. Another possibility is that frustration drives a spin-glass-like freezing of the paramagnetic U spins. This particular scenario agrees with the observed initial effect of the magnetic field (up to 3 T, see Fig. 4), since applied field tends to increase the temperature $`T_f`$ of the spin-glass freezing. Finally, there is a possibility that the low temperature anomaly represents a cross-over into a new quantum state of spins, resembling a quantum liquid, since AC-susceptibility and preliminary $`\mu `$SR experiments show a featureless magnetic response and no onset of a static, non-random internal magnetic field. These results would also be consistent with a small moment on almost perfectly screened and ordered U spins. Sensitive neutron scattering and additional $`\mu `$SR experiments could be very useful in testing our suggestion for the origin of the observed specific heat anomaly.
In conclusion, we have discovered a second low temperature phase transition in the magnetically frustrated triangular UNi<sub>4</sub>B. The low temperature $`T^{*}=330`$ mK of the second transition, the very large ratio $`T_N/T^{*}=60`$, and the non-monotonic evolution of the specific heat anomaly with magnetic field can be qualitatively explained with a combination of the effects of Kondo screening and geometric frustration.
We acknowledge helpful conversations with M. Meisel and G. J. Nieuwenhuys, and thank them for making available to us their unpublished data.
Work at Los Alamos was performed under the auspices of the Department of Energy. Part of this research was supported by the Dutch Foundation FOM.
# Constraining dark energy with SNe Ia and large-scale structure
## I Introduction
Two groups have presented strong evidence that the expansion of the Universe is speeding up, rather than slowing down. It comes in the form of distance measurements to some fifty supernovae of type Ia (SNe Ia), with redshifts between 0 and 1. The results are fully consistent with the existence of a cosmological constant (vacuum energy) whose contribution to the energy density is around 70% of the critical density ($`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.7`$). Other measurements indicate that matter alone contributes $`\mathrm{\Omega }_M=0.4\pm 0.1`$. Taken together, matter and vacuum energy account for an amount close to the critical density, consistent with measurements of the anisotropy of the cosmic microwave background (CMB).
In spite of the apparent success of the cosmological constant explanation, other possibilities have been suggested for the “dark energy.” This is in part because of the checkered history of the cosmological constant: It was advocated by Einstein to construct a static universe and discarded after the discovery of the expansion; it was revived by Hoyle and Bondi and Gold to solve an age crisis, later resolved by a smaller Hubble constant, and it was put forth to explain the abundance of quasars at $`z\sim 2`$, now known to be due to galactic evolution. Further, all attempts to compute the value of the cosmological constant, which in modern terms corresponds to the energy associated with the quantum vacuum, have been wildly unsuccessful. Finally, the presence of a cosmological constant makes the present epoch special: at earlier times matter (or radiation) dominated the energy density and at later times vacuum energy will dominate (the “why now?” problem).
The key features of an alternative form for the dark energy are: bulk pressure that is significantly negative, $`w<-1/3`$, where $`w\equiv p/\rho `$, and the inability to clump effectively. The first property is needed to ensure accelerated expansion and to avoid interfering with a long matter-dominated era during which structure forms; the second property is needed so that the dark energy escapes detection in gravitationally bound systems such as clusters of galaxies. Candidates for the dark energy include: a frustrated network of topological defects (such as strings or walls), here $`w=-\frac{n}{3}`$ ($`n`$ is the dimension of the defect) and an evolving scalar field, where $`\rho =\frac{1}{2}\dot{\varphi }^2+V(\varphi )`$ and $`p=\frac{1}{2}\dot{\varphi }^2-V(\varphi )`$ (referred to by some as quintessence).
The SN Ia data alone do not yet discriminate well against these different possibilities. As shown in Fig. 1, the maximum likelihood region in the $`\mathrm{\Omega }_M`$–$`w`$ plane runs roughly diagonally: less negative pressure is permitted if the fraction of critical density contributed by dark energy is larger. Following earlier work, this led us to consider other cosmological constraints: large-scale structure, anisotropy of the CMB, the age of the Universe, gravitational lensing, and measurements of the Hubble constant and of the matter density. As we shall show, some of the additional constraints, especially large-scale structure, complement the SN Ia constraint, and serve to sharpen the limits to $`\mathrm{\Omega }_M`$ and $`w`$; others primarily illustrate the consistency of these measurements with the SN Ia result. In the end, we find $`\mathrm{\Omega }_X\in (0.6,0.7)`$ and $`w<-0.6`$ (95% cl).
## II Method
Our underlying cosmological paradigm is a flat, cold dark matter model with a dark-energy component, though as we will discuss later our results are more general. We restrict ourselves to flat models both because they are preferred by the CMB anisotropy data and because a flat Universe is strongly favored by inflation. We restrict ourselves to cold dark matter models because of the success of the cold dark matter paradigm and the lack of a viable alternative. For our space of models we construct marginalized likelihood functions based upon SNe Ia, large-scale structure, and other cosmological measurements, as described below.
Our model parameter space includes the usual cosmological parameters ($`\mathrm{\Omega }_M`$, $`\mathrm{\Omega }_Bh^2`$, and $`h`$) and the amplitude and spectral index of the spectrum of Gaussian curvature fluctuations ($`\sigma _8`$ and $`n`$). For the dark-energy component, we choose to focus on the dynamical scalar-field models, because the frustrated defect models are at best marginally consistent with the SN Ia data alone.
In the dynamical scalar-field models the equation of state $`w\equiv p/\rho `$ varies with time. However, for most of our purposes, only one additional free parameter needs to be specified, an “effective” equation of state. We choose $`\widehat{w}_{\mathrm{eff}}`$ to be that value $`w`$ which, if the Universe had $`w`$ constant, would reproduce the conformal age today. We choose this definition because the CMB anisotropy spectrum and the COBE normalization of the matter power spectrum remain constant (to within 5–10%) for different scalar field models with the same $`\widehat{w}_{\mathrm{eff}}`$.
$$w_{\mathrm{eff}}\equiv \int da\,\mathrm{\Omega }_\varphi (a)\,w(a)\bigg/\int da\,\mathrm{\Omega }_\varphi (a).$$
(1)
and, since it is simpler to compute, we have used $`w_{\mathrm{eff}}`$ throughout. Obviously, our results also apply to constant $`w`$ models (e.g., frustrated defects), by taking $`w=w_{\mathrm{eff}}`$.
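For a concrete sense of Eq. (1), the sketch below evaluates $`w_{\mathrm{eff}}`$ by simple quadrature; both input functions are toy placeholders rather than the scalar-field solutions considered here:

```python
from scipy.integrate import quad

def w_eff(w, Omega_phi, a_min=1e-3):
    """Eq. (1): w_eff = int da Omega_phi(a) w(a) / int da Omega_phi(a)."""
    num = quad(lambda a: Omega_phi(a) * w(a), a_min, 1.0)[0]
    den = quad(lambda a: Omega_phi(a), a_min, 1.0)[0]
    return num / den

# toy inputs: w grows less negative at late times; Omega_phi(a) is the
# dark-energy fraction of a flat Omega_M = 0.3 model with w near -1
w = lambda a: -0.9 + 0.3 * a
Omega_phi = lambda a: 0.7 * a**3 / (0.3 + 0.7 * a**3)

print(w_eff(w, Omega_phi))  # dominated by late times, where Omega_phi peaks
```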
While $`w_{\mathrm{eff}}`$ neatly parameterizes the scalar-field models from the standpoint of large-scale structure and the CMB anisotropy, it does not do as well when it comes to the SN Ia data. Recall that $`w_{\mathrm{eff}}`$ as defined in Eq. (1) receives a contribution from a wide range of redshifts. The SN Ia data, however, are sensitive mostly to $`z\lesssim 1/2`$. Since $`w`$ becomes less negative with time in the models we are considering, the SN Ia data “see” a less negative $`w`$ than the CMB by a model-dependent amount. We shall return to this point later.
We normalize our models to the COBE 4-year data using the method of Ref. . Beyond the COBE measurements, the small-scale anisotropy of the microwave background tells us that the Universe is close to being spatially flat (position of the first acoustic peak) and that $`\mathrm{\Omega }_M`$ is less than one and/or the baryon density is high (height of the first acoustic peak). We have not included a detailed fit to the current data (see e.g. Ref. ), but rather impose flatness. The additional facts that might be gleaned from present CMB measurements, $`\mathrm{\Omega }_M<1`$ and high baryon density, are in fact much more strongly imposed by the large-scale structure data and the Burles–Tytler deuterium measurement.
We require that the power-spectrum shape fit the redshift-survey data as compiled by Ref. (excluding the 4 smallest scale points which are most sensitive to the effects of bias and nonlinear effects). On smaller scales we require that all of our models reproduce the observed abundance of rich clusters of galaxies. This is accomplished by requiring $`\sigma _8=\left(0.55\pm 0.1\right)\mathrm{\Omega }_M^{-0.5}`$, where $`\sigma _8`$ is the rms mass fluctuation in spheres of $`8h^{-1}`$Mpc computed in linear theory. The baryon density is fixed at the central value indicated by the Burles–Tytler deuterium measurements, $`\mathrm{\Omega }_Bh^2=0.019\pm 0.001`$. We assume that clusters are a fair sample of the matter in the Universe so that the cluster baryon fraction $`f_B=(0.07\pm 0.007)h^{-3/2}`$ reflects the universal ratio of baryons to matter ($`\mathrm{\Omega }_B/\mathrm{\Omega }_M`$). We marginalize over the spectral index and Hubble constant, assuming Gaussian priors with $`n=0.95\pm 0.05`$, which encompasses most inflationary models, and $`h=0.65\pm 0.05`$, which is consistent with current measurements.
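Schematically, the marginalization works as in the following sketch; the $`\chi ^2`$ here is a stand-in built only from the two analytic constraints just quoted, not the full COBE-normalized power-spectrum likelihood:

```python
import numpy as np

def chi2_demo(Om, h, sigma8_model=0.9):
    """Stand-in chi^2 from two constraints used in the text:
    sigma_8 = (0.55 +/- 0.1) Om^{-0.5} and f_B = (0.07 +/- 0.007) h^{-3/2},
    with Omega_B h^2 fixed at 0.019; sigma8_model plays the role of the
    COBE-normalized amplitude."""
    fB_model = 0.019 / (h**2 * Om)
    c1 = (sigma8_model - 0.55 * Om**-0.5) / (0.1 * Om**-0.5)
    c2 = (fB_model - 0.07 * h**-1.5) / (0.007 * h**-1.5)
    return c1**2 + c2**2

def marginalized_like(Om):
    """Likelihood at fixed Om after marginalizing over h (and, in the
    full analysis, n) with the Gaussian priors of the text."""
    hs = np.linspace(0.5, 0.8, 61)
    prior = np.exp(-0.5 * ((hs - 0.65) / 0.05) ** 2)
    like = np.exp(-0.5 * chi2_demo(Om, hs))
    return np.trapz(prior * like, hs) / np.trapz(prior, hs)

print(marginalized_like(0.35) / marginalized_like(0.6))
```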
There are three other cosmological constraints that we did not impose: the age of the Universe, $`t_0=(14\pm 2)`$ Gyr; direct measurements of the matter density, $`\mathrm{\Omega }_M=0.4\pm 0.1`$, and the frequency of multiply imaged quasars. While important, these constraints serve to prove consistency, rather than to provide complementary information. For example, the SN Ia data together with our Hubble constant constraint lead to an almost identical age constraint. The lensing constraint, recently studied in detail for dynamical scalar-field models, excludes the region of large $`\mathrm{\Omega }_X`$ and very negative $`w`$ (at 95% cl, below the line $`w_{\mathrm{eff}}=-0.55-1.8\mathrm{\Omega }_M`$), which is disfavored by the SN Ia data. The matter density determined by direct measurements, $`\mathrm{\Omega }_M=0.4\pm 0.1`$, is consistent with that imposed by the LSS and Hubble constant constraints.
## III Results
As can be seen in Fig. 1, our large-scale structure and CMB constraints neatly complement the SN Ia data. LSS tightly constrains $`\mathrm{\Omega }_M`$, but is less restrictive along the $`w_{\mathrm{eff}}`$ axis. This is easy to understand: in order to fit the power spectrum data, a COBE-normalized CDM model must have “shape parameter” $`\mathrm{\Gamma }=\mathrm{\Omega }_Mh\approx 0.25`$ (with a slight dependence on $`n`$). Together with the constraint $`h=0.65\pm 0.05`$ (and our $`f_B`$ constraint) this leads to $`\mathrm{\Omega }_M\approx 0.35`$. As discussed in Ref. , the $`\sigma _8`$ constraint can discriminate against $`w_{\mathrm{eff}}`$; however, allowing the spectral index to differ significantly from unity diminishes its power to do so.
Note that the SN Ia likelihood contours for the dynamical scalar-field model and the constant-$`w`$ models are not the same while the LSS contours are identical. With the Fit C supernovae of Ref. and the dynamical scalar-field models considered here (quadratic, quartic and exponential scalar potentials), the contours are displaced by about 0.1 in $`w_{\mathrm{eff}}`$: the 95% cl upper limit to $`w_{\mathrm{eff}}`$ for the constant $`w`$ models is $`-0.62`$, while for the quartic, quadratic and exponential potentials for $`V(\varphi )`$ it is $`-0.75`$, $`-0.76`$ and $`-0.73`$ respectively. The reason for this shift is simple: the $`w`$ dependence of LSS is almost completely contained in the distance to the last-scattering surface and $`w_{\mathrm{eff}}`$ is constructed to hold that constant. On the other hand, the $`w`$ dependence of the SN Ia results is more heavily weighted by the recent value of $`w`$; said another way, there is a different effective $`w`$ for the SN Ia data. This fact could ultimately prove to be very important in discriminating between different models.
Additionally there are a class of dynamical scalar-field models that have attracted much interest recently. For these potentials (here we consider $`V(\varphi )=c/\varphi ^p`$ and $`V(\varphi )=c[e^{1/\varphi }-1]`$), and a wide range of initial conditions, the scalar field settles into a “tracking solution” that depends only upon one parameter (here $`c`$) and the evolution of the cosmic scale factor, suggesting that they might help to address the “why now?” problem.
For our purposes, the most interesting fact is that each tracker potential picks out a curve in $`\mathrm{\Omega }_M`$–$`w_{\mathrm{eff}}`$ space. Typically the lower values of $`\mathrm{\Omega }_M`$ go with the most negative values of $`w_{\mathrm{eff}}`$ and vice versa (see Fig. 2). This fact puts the tracker solutions in jeopardy, as shown in the same figure. For the tracker models shown here ($`p=2,4`$ and exponential), the 95% cl intervals for the SN Ia and LSS data barely overlap. The situation is even worse for larger values of $`p`$. A similar problem was noted in Ref. .
Finally, we comment on the robustness of our results. While we have restricted ourselves to flat models, as preferred by the CMB data, our constraints do not depend strongly on this assumption. This is because the LSS constraints are insensitive to the flatness assumption, and curvature, which corresponds to a $`w_{\mathrm{eff}}=-\frac{1}{3}`$ component, is strongly disfavored by the SN Ia results. We have not explicitly allowed for the possibility that inflation-produced gravity waves account for a significant part of the CMB anisotropy on large-angular scales (i.e., $`T/S>0.1`$), which would have the effect of decreasing the overall amplitude of the COBE normalized power spectrum. In fact, allowing for gravity waves would not change our results, as this degree of freedom is implicitly accounted for by a combination of $`n`$, the normalization freedom in the power spectrum and the uncertainty in the COBE normalization.
Our model space does not explore more radical possibilities, for example, that neutrinos contribute significantly to the mass density or a nonpower-law or isocurvature spectrum of density perturbations. Even allowing for these possibilities (or others) would not change our results significantly if one still adopted the mass density constraint, $`\mathrm{\Omega }_M=0.4\pm 0.1`$. As discussed earlier, it is almost as powerful as the CDM-based LSS constraint.
## IV Conclusions
The evidence provided by SNe Ia that the Universe is accelerating rather than slowing solves one mystery – the discrepancy between direct measurements of the matter density and measurements of the spatial curvature based upon CMB anisotropy – and introduces another – the nature of the dark energy that together with matter accounts for the critical density. SNe Ia alone do not yet strongly constrain the nature of the dark energy.
In this Letter we have shown that consideration of other important cosmological data both complements and reinforces the SN Ia results. In particular, as illustrated in Fig. 1, consideration of large-scale structure leads to a constraint that nicely complements the SN Ia constraint and strengthens the conclusions that one can draw. Other cosmological constraints – age of the Universe, frequency of gravitational lensing and direct measures of the matter density – provide information that is consistent with the SN Ia constraint (lensing and age) and the LSS constraint (matter density), and thereby reinforces the self-consistency of the whole picture of a flat Universe with cold dark matter and dark energy.
Finally, what have we learned about the properties of the dark-energy component? The suite of cosmological constraints that we have applied indicate that $`\mathrm{\Omega }_X\in (0.6,0.7)`$ and $`w_{\mathrm{eff}}<-0.6`$ (95% cl), with the most likely value of $`w_{\mathrm{eff}}`$ close to $`-1`$ (see Fig. 1). The frustrated network of light cosmic string ($`w_{\mathrm{eff}}=-\frac{1}{3}`$) is strongly disfavored, and a network of frustrated walls ($`w_{\mathrm{eff}}=-\frac{2}{3}`$) is only slightly more acceptable. Also in the disfavored category are tracker models with $`V(\varphi )=c/\varphi ^p`$ and $`p=2,4,6,8,\ldots `$. Dynamical scalar-field models can be made acceptable provided $`w_{\mathrm{eff}}`$ is tuned to be more negative than $`-0.7`$. The current data definitely prefer the most economical, if not the most perplexing, solution: Einstein’s cosmological constant.
###### Acknowledgements.
This work was supported by the DoE (at Chicago, Fermilab, and Lawrence Berkeley National Laboratory) and by the NASA (at Fermilab by grant NAG 5-7092). MW is supported by the NSF.
# The Detection of Two Distinct High Ionization States in a QSO Lyman Limit Absorption System: Evidence for Hierarchical Galaxy Formation at 𝑧∼3?
## 1 Introduction
Absorption systems with enough neutral Hydrogen to be optically thick to Lyman continuum radiation are commonly found in the spectra of high redshift QSOs. These Lyman limit (LLS) and damped Lyman alpha (DLA) systems often show line absorption from multiple ionization states of several different elements, most commonly Carbon and Silicon. It is often assumed that optically thick absorption systems contain gas in two separate phases – gas that is shielded from UV background Lyman continuum radiation and thus contains low ions such as CII and SiII, and gas that is not completely shielded from the UV background and thus shows high ionization species such as CIV and SiIV. The low ions are believed to be in a separate phase from the high ions because in DLA systems all low ions have a similar velocity structure that appears to have no relation to the velocity structure of the high ions, all of which also have a similar velocity structure (Prochaska and Wolfe 1997).
In recent years, many groups have begun to use numerical simulations to simulate the formation of structure in variants of cold dark matter (CDM) Universes (Cen et al. 1994; Zhang, Anninos, & Norman 1995; Hernquist et al. 1996). These simulations naturally produce absorbers that at least superficially resemble LLS and DLA systems as a natural result of the hierarchical structure formation that takes place in CDM models (Katz et al. 1996; Gardner et al. 1997). In most simulations LLS and DLAs have very similar physical structures and tend to be protogalaxies in the process of formation via the merging of several smaller structures.
There is not complete agreement within the community that the structures identified in the CDM simulations accurately represent the absorbers seen in actual QSO spectra. In particular, Prochaska and Wolfe (1997) find that the kinematics of DLA low ionization gas is consistent with the gas being located in a thick rotating disk – a picture that is quite different from the simulation results. Nonetheless, the advent of numerical simulations has sparked a renaissance in the study of QSO absorption systems, and the simulations are now making detailed predictions about the observed properties of absorption systems, including the metal lines in DLAs and LLS.
Rauch et al. (1997) calculated the expected metal absorption from DLA and LLS systems found in a CDM simulation with the assumption that the absorbing gas had a uniform metallicity. They found that while the low ions are all found in the same gas, the gas producing most of the SiIV absorption is not the same gas producing CIV or OVI absorption. As expected, they found that absorption from low ions occurs in only the highest density gas near the center of the protogalactic structures. The high ions were found in lower density gas surrounding the protogalactic clump, in a structure resembling an onion. SiIV was found only very close to the center, much like the low ions. CIV was found primarily in low density gas further out, and OVI was found mainly in even lower density gas further yet from the center of the clump. SiIV and CIV were found in gas inside the shock front of a collapsing object, whereas OVI was found in gas still falling into the structure. This resulted in OVI lines that were systematically wider than the CIV or SiIV lines.
In this letter we present the first multiple-component OVI absorption features to be detected in a high $`z`$ intervening absorption system. The only other OVI detection at high $`z`$ (Kirkman and Tytler, 1997) contained only one component, and only the $`\lambda `$ 1032 line of the $`\lambda `$ 1032, 1039 doublet was detected because of blending. Here we report the detection of at least 5 OVI components in both members of the $`\lambda `$ 1032, 1039 doublet in the $`z\approx 2.77`$ LLS towards 1157+3143. We also see SiIV, CIV, CII, SiII, and SiIII. This system is remarkable because, as we will show, all high ions have a very similar velocity structure, yet we are able to show on observational grounds alone that the OVI absorption cannot all be occurring in the same gas producing the CIV and SiIV absorption.
## 2 The $`z\approx 2.77`$ LLS
We observed QSO 1157+3143 for a total of 7 hours in March 1996 and January 1997 with the HIRES spectrograph (Vogt 1992) on the W.M. Keck Telescope during our search for deuterium. The resulting spectrum has a resolution of 7.9 km s<sup>-1</sup> and a SNR of $`\sim `$50 per 0.03 Å pixel. The spectrum was reduced and extracted in the standard fashion (e.g. Kirkman and Tytler, 1997).
There are two LLS in this spectrum, one at $`z\approx 2.94`$ and another at $`z\approx 2.77`$. The $`z\approx 2.94`$ LLS in this spectrum shows neither OVI nor CIV and will not be discussed. We do not have a good estimate of the HI column for the $`z\approx 2.77`$ LLS because the $`z\approx 2.94`$ LLS blots out the spectrum below 3600 Å, which prevents us from seeing any Lyman series line higher than Ly$`\gamma `$. Nonetheless, we believe the $`z\approx 2.77`$ system is a Lyman limit system because it shows both CII and SiII, neither of which are expected to be present in an absorption system that is not optically thick to Lyman continuum radiation. The lines associated with the $`z\approx 2.77`$ partial LLS found in this spectrum are shown in Figure 1.
Figure 2 shows the OVI and CIV lines of this system in more detail. We used VPFIT (Webb, 1987) to fit Voigt profiles to the lines shown in Figure 2; the line parameters appear above each line in the figure. While fitting, we did not in any way tie line parameters between different elements – the OVI fit is completely independent of the CIV fit. Although there is not a 1-to-1 correspondence between the OVI and CIV lines in Figure 2, it is clear that the velocity structure of the OVI lines does trace the velocity structure of the CIV lines. Figure 2 also gives the impression that the OVI absorption is a smeared out version of the CIV absorption. This impression is confirmed by noting that the $`b`$ values are larger for OVI than for CIV for the absorbers centered near $`-140`$ km s<sup>-1</sup> and 40 km s<sup>-1</sup>, which are well defined and lightly blended lines in all three species.
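For readers wishing to experiment with such decompositions, the following is a minimal sketch (not VPFIT itself, whose internals we do not reproduce) of a multi-component fit in which each component contributes a Gaussian optical-depth profile; the variable names, starting guesses and the reduction to normalised flux are illustrative assumptions only:

```python
import numpy as np
from scipy.optimize import curve_fit

def tau_component(v, v0, b, tau0):
    # Gaussian optical-depth profile of one absorption component;
    # v and v0 are velocities and b the Doppler parameter, all in km/s.
    return tau0 * np.exp(-((v - v0) / b) ** 2)

def flux_model(v, *p):
    # Sum the optical depths of all components, then convert to
    # normalised flux, F = exp(-tau_total).
    tau = np.zeros_like(v, dtype=float)
    for v0, b, tau0 in zip(p[0::3], p[1::3], p[2::3]):
        tau += tau_component(v, v0, b, tau0)
    return np.exp(-tau)

# Illustrative starting guesses (v0, b, tau0) for two components:
p0 = [-140.0, 20.0, 0.5, 40.0, 20.0, 0.5]
# popt, pcov = curve_fit(flux_model, velocity, norm_flux, p0=p0,
#                        sigma=flux_err, absolute_sigma=True)
```

Fitting OVI and CIV separately in this way leaves their velocities untied, mirroring the independent fits described above.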
There is more than apparent similarity between the OVI and CIV lines in this absorption system. In Figure 3, we show that the OVI lines can be accurately fit using only profiles restricted to be at the same velocity as the identified CIV lines. While the velocities of the OVI lines in Figure 3 were fixed, the column densities and $`b`$ values of each line were allowed to vary to produce the best match to the observed spectrum. Note that in the final fit the $`b`$ value in each OVI component is $`\sim `$2 times as large as in the corresponding CIV component, and that the column density ratio N(OVI)/N(CIV) varies only by $`\sim `$5 between the different components.
## 3 Ionization State of the $`z2.77`$ LLS
The ionization state of this system is not well constrained by the available data. This is because the OVI and CIV line widths in this system prevent us from using the commonly made assumption that all of the lines at the same velocity arise in the same gas. Thus we cannot use the column density ratios of the observed ions to work out the ionization state of the system assuming either photo or collisional ionization equilibrium. As demonstrated in Figure 3, the OVI line is wider than the CIV line in each velocity component of this LLS. Since Oxygen is heavier than Carbon, this means that some or all of the OVI absorption is arising in gas different from that producing the CIV absorption – there are at least two phases of gas in this system. The phase which produces the OVI absorption is either warmer or more turbulent than the phase producing the CIV absorption.
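The mass argument can be made explicit (an added step, using the standard decomposition of the Doppler parameter into thermal and turbulent parts):

$$b^2=\frac{2kT}{m}+\xi ^2,$$

so that two ions tracing the same gas share $`T`$ and $`\xi `$, and the heavier ion can never have the broader line; for purely thermal widths the ratio is at most $`b(\mathrm{OVI})/b(\mathrm{CIV})=\sqrt{m_\mathrm{C}/m_\mathrm{O}}=\sqrt{12/16}\approx 0.87`$. The observed $`b(\mathrm{OVI})>b(\mathrm{CIV})`$ therefore cannot be accommodated in a single phase.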
All of the OVI lines associated with this system have $`b>17`$ km s<sup>-1</sup>, which is equivalent to T $`>2.8\times 10^5`$ K if the line widths are thermal. An ionization parameter of $`U>1`$ is required for the UV background (assuming the spectrum is a QSO-like $`-1.5`$ power law) to photoheat gas to this temperature (Donahue and Shull, 1991). Since it is unlikely that gas associated with a LLS is this rarefied, the large OVI $`b`$ values probably mean that the lines are widened by non-thermal motions, are collisionally ionized, or are actually several lines so close to each other in velocity space that their individual profiles cannot be resolved.
Except for the components at $`-50`$ and $`+65`$ km s<sup>-1</sup>, all of the CIV lines have $`6<b<8`$ km s<sup>-1</sup>. This corresponds to a maximum temperature of $`2.6`$–$`4.6\times 10^4`$ K. This is too cold for collisionally ionized gas to show CIV absorption, so the CIV lines must be photoionized. It is tempting to use the observed column densities of CIV, SiIV, SiII and CII to constrain the ionization states of the velocity components of this system by assuming they are in photoionization equilibrium with the UV background. However, we feel that this would be a mistake. The main result we draw from this data is that each velocity component of the LLS contains at least two phases of gas. We feel that this result warrants re-examination of the commonly made assumption that all of the absorption seen at the same velocity arises in the same gas. In particular, if the OVI and CIV absorption do not arise in the same gas, there is no good reason to assume that the CIV and SiIV absorption comes from the same gas, and good reason to suspect that the CIV and CII absorption does not arise in the same gas.
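The temperatures quoted above follow from the thermal-width relation $`T=mb^2/2k`$; a minimal numerical check (constants in SI units; masses taken as 16 and 12 amu for O and C):

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg

def thermal_temperature(b_kms, mass_amu):
    # Temperature implied by a purely thermal Doppler parameter:
    # b = sqrt(2 k T / m)  =>  T = m b^2 / (2 k).
    b = b_kms * 1.0e3  # km/s -> m/s
    return mass_amu * AMU * b ** 2 / (2.0 * K_B)

print(thermal_temperature(17.0, 16.0))  # OVI b = 17 km/s -> ~2.8e5 K
print(thermal_temperature(6.0, 12.0))   # CIV b = 6 km/s  -> ~2.6e4 K
print(thermal_temperature(8.0, 12.0))   # CIV b = 8 km/s  -> ~4.6e4 K
```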
## 4 Discussion
There are several scenarios that can give rise to multiple high ionization phases of gas in an individual component of an absorption line system. The absorbing gas may contain pockets of cool or warm (CIV, SiIV) gas intermixed with pockets of hot gas (OVI). In this scenario a method must be found to widen the OVI lines; presumably they would arise in small pockets of shock heated gas and/or expanding gas around collapsing substructure within each component (stars?). The similar metal column ratios imply the structure within each of the components is similar as well.
In a more likely scenario, the multiple phases are explained by density gradients in the absorbing components at each velocity. Moving along the gradient, the photoionization parameter of the gas will change, giving a large number of effective gas phases. As discussed in the introduction, this sort of situation was produced in the simulations run by Rauch et al. In this scenario, the wide OVI lines are produced primarily because the OVI absorbing gas is falling into the collapsing structure, whereas the CIV absorbing gas is at rest behind the shock front. The observed wide OVI lines and the similar component structure of this LLS suggest we may be observing the formation of a protogalaxy by the merging of several smaller structures. There is, however, at least one major difference between the data and the Rauch et al. simulation. In this system, the SiIV absorption is strong and has the same velocity structure as the other high ions. In the simulations this will only occur if each velocity component is centered on the line of sight to the QSO because of the small effective cross section of SiIV absorbing gas from the center of a collapsed object. This seems improbable. It will be interesting to see if the OVI lines observed at high resolution in other LLS agree with the physical picture of the LLS suggested by these observations.
We thank Tom Bida and Barbara Schafer with the W.M. Keck Observatory for assistance with our observations of 1157+3143 , and Tom Barlow for providing a copy of his extraction software which allowed us to rapidly do science with our data. We thank Bob Carswell for making his VPFIT software package available to us. This work was supported in part by NSF grant 31217A and by NAGW4497 from NASA.
# Einstein-Podolsky-Rosen Paradox and Antiparticle
## Acknowledgments
We thank Profs. M-l Du, H-z Li, J-q Liang, R-k Su, B-w Xu, S-q Ying, X-y Zeng and Drs. G-h Yang, J-f Yang and Z Zhang for discussions. This work was supported in part by the NSF of China.
# Pseudogaps and Extrinsic Losses in Photoemission Experiments on Poorly Conducting Solids
## Abstract
A photoelectron, on being emitted from a conducting solid, may suffer a substantial energy change through ohmic losses that can drastically alter the lineshape on the meV scale which is now observable due to improved resolution. Almost all of this energy loss takes place after the electron leaves the solid. These losses are expected to be important in isotropic materials with relatively low conductivity, such as certain colossal magnetoresistance manganates, and in very electrically anisotropic materials, such as one-dimensional conductors. Such effects may also be important in the interpretation of photoemission in high-T<sub>c</sub> superconductors. In all these materials, the electric field of the photoelectron can penetrate the system. In particular, extrinsic losses of this type can mimic pseudogap effects and other peculiar features of photoemission in cubic manganates. This is illustrated with the case of La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>.
In the past few years, the resolution of photoemission (PE) experiments has improved to the range of 10 meV or less, and this has allowed finer details of electronic structure to be observed, including the “pseudogap” - a depression of intensity at the chemical potential $`\mu `$. Pseudogaps have been observed in a wide variety of materials: quasi-one-dimensional (1D) systems, both inorganic (TaSe<sub>4</sub>)<sub>2</sub>I and organic (TTF-TCNQ), quasi-2D systems such as the underdoped high-T<sub>c</sub> materials, and most recently 3D systems: the colossal magnetoresistance (CMR) manganates. In many cases, interesting temperature dependences of these pseudogaps have been observed. The origin of pseudogaps is among the most fundamental problems of present-day condensed matter physics. Because the most direct way to see them is with PE, it is well to understand this measurement very thoroughly.
A somewhat disturbing aspect of the current situation is that, although the resolution has greatly improved, isolated resolution-limited peaks are not the rule in angle-resolved photoemission (ARPES) data that detect pseudogaps. There is a suggestion here that some extrinsic broadening mechanism is at work or that a large unexplained background is present .
The conventional wisdom interpretation of ARPES data is that at a given wavevector $`\stackrel{}{k}`$, the ideal intensity $`I(\omega )`$ is proportional to $`A(\stackrel{}{k},\omega )`$, the spectral function for a single hole. The observed intensity, at least near $`\mu `$, is broadened only because of the finite instrumental resolution. $`A(\stackrel{}{k},\omega )`$ is, in this context, an “intrinsic” quantity. The outgoing electron either suffers a large energy loss due, for example, to plasmon emission or ionization, or suffers no loss. In the former case, the electron is not detected or its energy is sufficiently far from threshold that it is ignored; in the latter case, the electron is detected and its measured distribution is a faithful reflection of the intrinsic distribution in the solid.
This conventional picture of the photoemission process is reconsidered in this report for certain important classes of materials, namely those which are ‘poor conductors’. The working definition of this phrase is a DC resistivity $`\rho _0`$ which exceeds the Mott value of $`100\mu \mathrm{\Omega }`$-cm. I will argue that electrons emitted from such materials are subject to losses of the order of a few tens of meV after they leave the surface. At low resolutions, these processes are usually not important, but for high-resolution experiments they cannot be ignored.
If an electron is emitted normally at speed $`v`$ from very near a clean surface and leaves the sample without undergoing significant energy loss, then the Fourier transform of the electric field inside the material is
$$\vec{E}(\vec{r},\omega )=\frac{e}{2\pi v}\,\frac{2}{1+\epsilon (\omega )}\int _0^{\infty }dz^{\prime }\,e^{i\omega z^{\prime }/v}\,\frac{\vec{r}-z^{\prime }\hat{z}}{|\vec{r}-z^{\prime }\hat{z}|^3}.$$
(1)
The surface is the $`xy`$ plane, $`e`$ is the charge on the electron, and $`ϵ(\omega )`$ is the bulk dielectric function. This electric field can set up currents in the bulk.
To arrive at this expression certain approximations have been made. The expression for $`\stackrel{}{E}`$ does not hold when the charge is within a few atomic layers of the surface: to model the short-time, high-frequency losses, a proper treatment using the surface dielectric function would be required. I do not attempt this here, as only the low-frequency loss is of interest. I assume the normal skin effect - the wavevector dependence of $`ϵ(\omega )`$ has been neglected. At high frequencies or for very low temperatures for clean systems, the anomalous skin effect should be taken into account. The factor $`2/(1+ϵ)`$ in Eq. 1 gives image charge and screening effects and proves critical.
These formulas depend on the assumption that the material is cubic. The important special case of emission along the z-axis of a tetragonal material may be treated by the same method, and the image charge factor becomes $`2/(1+\sqrt{ϵ_{xx}ϵ_{zz}})`$. Thus the absorption is strongly enhanced in a layered conducting material where we expect $`|ϵ_{xx}|>>|ϵ_{zz}|`$ at the relevant frequencies. Similar remarks apply to the orthorhombic 1D conductor case, but the calculations become far more complicated and no simple expression comparable to Eq. 1 could be derived.
The currents set in motion by the field will produce ohmic loss. These losses are reflected in the observed energy of the electron. Classically, the total energy loss is given by
$$Q=\frac{1}{2}\int _{-\infty }^{\infty }d\omega \int d^3r\,\mathrm{Re}\,\sigma (\omega )\,|\vec{E}(\vec{r},\omega )|^2=\frac{2e^2}{\pi v}\,\mathcal{C}\int _0^{\infty }d\omega \,\frac{L(\omega )}{\omega }$$
(2)
where $`\mathcal{C}\approx 2.57`$ and $`L(\omega )=\mathrm{Re}\,\sigma (\omega )/|1+\epsilon (\omega )|^2`$.
This classical calculation corresponds to a quantum-mechanical one. In fact, as constant electron velocity was assumed, it is the Born approximation. Because the field is appropriately screened by the dielectric function, I term it the screened Born approximation. This approximation should be valid for those electrons whose energy loss is small compared with their total energy. This ratio is of order 50 meV / 20 eV $`\sim 2.5\times 10^{-3}`$ for experimental parameters of interest. The relative differential probability is obtained by setting
$$Q=\hbar \int _0^{\infty }\omega \,P(\omega )\,d\omega $$
(3)
where $`P(\omega )`$ is the relative differential probability of losing energy $`\mathrm{}\omega `$. Hence
$$P(\omega )=\frac{2e^2\,\mathcal{C}\,L(\omega )}{\pi \hbar v\,\omega ^2}$$
(4)
This expression is general, and is of course related to well-known formulas in electron-energy-loss spectroscopy. Its relevance to PE has been noted before; earlier work was concerned with plasmon and other losses in the electron-volt range. Recent work on processes occurring when the electron is still inside the material has also clarified the losses in this energy range, while highlighting the lack of explanation of background in the millielectron-volt range.
Note at this point that $`P(\omega )`$ at low frequencies is greater for systems with low conductivity. Because $`\epsilon (\omega )\approx 4\pi i\sigma /\omega `$ we have $`P(\omega )\propto \sigma /(\omega ^2|\epsilon |^2)\propto 1/\sigma `$.
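Spelling out the scaling (an added step, using Eq. 4 in the limit $`\omega \tau \ll 1`$ and $`|\epsilon |\gg 1`$):

$$L(\omega )\approx \frac{\sigma _0}{(4\pi \sigma _0/\omega )^2}=\frac{\omega ^2}{16\pi ^2\sigma _0}\qquad \Rightarrow \qquad P(\omega )\approx \frac{2e^2\mathcal{C}}{\pi \hbar v}\,\frac{1}{16\pi ^2\sigma _0}=\frac{e^2\mathcal{C}}{8\pi ^3\hbar v\sigma _0},$$

so the low-frequency loss probability is flat in frequency and inversely proportional to the DC conductivity.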
Quantum mechanics requires some probability for forward scattering $`P_0`$, or that the electron loses zero energy. Thus, the total normalization is given by the equation
$$1=P_0+\int _0^{\infty }P(\omega )\,d\omega $$
(5)
$`P_0`$ depends on an integration over all energies. Because the dielectric function is usually not known quantitatively over the entire range of energies, $`P_0`$ is difficult to evaluate. For interpretation of data it is best treated as a fit parameter.
I now apply these ideas to angle-integrated PE, saving ARPES for later work. I assume that $`P(\omega )`$ is independent of emission angle, which should be true for the near-normal emissions typical for the incident photon energies used in most cases. The observed intensity $`I(\omega ,T)`$, if electrons are emitted from a material with a temperature-independent density of states $`N(\omega )`$, is
$$I(\omega ,T)=P_0(T)\,N(\omega )f(\omega )+\int _0^{\infty }P(\omega -\omega ^{\prime },T)\,N(\omega ^{\prime })f(\omega ^{\prime })\,d\omega ^{\prime }$$
(6)
which must then be convolved with an instrumental resolution function. The ‘intrinsic’ temperature dependence comes entirely from the Fermi function $`f(\omega )`$, but this dependence is very minor; I restrict the argument to relatively low T.
I first consider a model system for illustrative purposes. For emission at a given $`\stackrel{}{k}`$ (ARPES), the observed intensity should consist of a main peak at $`\omega =ϵ_\stackrel{}{k}`$ and an asymmetric tail below this, a rather common observation. For the angle-integrated quantity $`I(\omega ,T)`$, we obtain a two-component result according to Eq. 6: the actual density of states $`N(\omega )`$ and a downshifted loss curve. Can this mimic a pseudogap? Let $`N(\omega )=N_0`$ over some wide energy range ($`\sim `$eV) below $`\mu `$, so that there is no actual pseudogap. Let the model system be a Drude conductor:
$$\sigma (\omega )=\frac{\sigma _0}{1-i\omega \tau (T)}$$
(7)
This expression is then substituted in Eq. 6 to produce Fig. 1 for two conductivities. The parameters for the dashed curve are: $`\rho _0=1/\sigma _0=110\mu \mathrm{\Omega }`$-cm and $`\tau =4\times 10^{-14}`$ s. The parameters for the solid curve are: $`\rho _0=1/\sigma _0=44.5m\mathrm{\Omega }`$-cm and $`\tau =10^{-16}`$ s. Both curves have $`P_0=0.01`$ and T = 38 K (k<sub>B</sub> T = 3.3 meV). $`\sigma _0/\tau `$ is held fixed in the figure. The point is very simple: in the Drude model $`\sigma _0/\tau `$ is just $`ne^2/m^{*}`$, where $`n`$ is the carrier concentration and $`m^{*}`$ is the effective mass, so that all of the temperature dependence in the conductivity occurs in the relaxation time, as in a conventional metal with no gap or pseudogap. The changes in the observed intensity arise entirely from extrinsic effects. The other parameters are held fixed as well. Note that these DC resistances are very high by the standards of ordinary metal physics, but quite typical of the CMR systems at temperatures comparable to or below the metal-insulator (M-I) transition. In the highly resistive state, the fields penetrate into the material and losses are high, whereas the loss is relatively low for the high conductivity state that screens the field. The plots have been normalized in the conventional manner by setting the intensities equal at a binding energy where they have leveled out - here at $`350`$ meV. The results are not very sensitive to this number. The popular midpoint method used to determine a “pseudogap” would give a value of about 50 meV. The dashed curve represents a system at the Mott conductivity - the borderline at which the loss effects become important. In good metals with $`\rho _0<100\mu \mathrm{\Omega }`$-cm losses become negligible, and the observed spectra reflect the actual density of states faithfully.
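A minimal sketch of the computation behind such curves follows; the prefactor $`2e^2\mathcal{C}/\pi \hbar v`$ of Eq. 4 is absorbed into a single free amplitude, and the energy grid, temperature and parameter values are illustrative rather than those used for the figure:

```python
import numpy as np

def drude_sigma(w, sigma0, tau):
    # Drude conductivity, Eq. (7): sigma_0 / (1 - i w tau).
    return sigma0 / (1.0 - 1j * w * tau)

def loss_probability(w, sigma0, tau, amp):
    # Relative loss probability, Eqs. (2)-(4):
    # P(w) = amp * L(w) / w^2 with L(w) = Re sigma / |1 + eps|^2
    # and eps = 1 + 4 pi i sigma / w.
    sigma = drude_sigma(w, sigma0, tau)
    eps = 1.0 + 4.0 * np.pi * 1j * sigma / w
    L = sigma.real / np.abs(1.0 + eps) ** 2
    return amp * L / w ** 2

# Binding-energy grid (arbitrary units) and a flat, gapless intrinsic
# spectrum N0 * f(w), with the Fermi step at w = 0.
w = np.linspace(1e-3, 10.0, 4000)
kT = 0.05
intrinsic = 1.0 / (np.exp(-w / kT) + 1.0)

P = loss_probability(w, sigma0=1.0, tau=0.5, amp=0.05)
P0, dw = 0.01, w[1] - w[0]
# Eq. (6): direct emission plus the energy-loss convolution.
observed = P0 * intrinsic + dw * np.convolve(P, intrinsic)[:w.size]
```

Lowering `sigma0` at fixed `sigma0 / tau`, as in the figure, enhances the downshifted loss component relative to the direct term.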
The curves demonstrate that in a system with a M-I transition, the observed intensity will change due to ”extrinsic” effects. In general, there will be a motion of weight away from the Fermi energy as one approaches the insulating state. If such motion is observed in experiment it may not have anything to do with an actual pseudogap in the density of states.
Considering now actual spectra, angle-integrated PE on the CMR material La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub> shows a number of unusual and striking features, represented by the points in Fig. 2, taken from Park et al. The material has a M-I transition at 260 K. In the metallic state at 80 K, there is a strong negative slope in $`I(\omega )`$ for at least 0.6 eV below $`\mu `$. There is a sharp break in slope at $`\mu `$, presumably indicative of a nonzero density of states at $`\mu `$. In the insulating state at 280 K there appears to be no Fermi edge at all - the observed intensity is flat at $`\mu `$ and weight has moved back from $`\mu `$. There is even upward curvature in the data, as opposed to the downward curvature of the Fermi function. There is nothing in the usual theory of metals to account for any of these observations, and they certainly do not agree with band calculations. These features have been taken to indicate a pseudogap, but they can be produced by extrinsic effects.
In Fig. 2, I plot the data points at two temperatures against the theory (Eq. 6). It is necessary to take a model for the frequency-dependent conductivity, which is not entirely Drude-like in the manganates. I have adopted a simplified version of the model of Okimoto et al. in which there is a frequency-independent part $`\sigma _{01}`$ and a Drude part $`\sigma _d(\omega )`$ which is as in Eq. 7. This introduces an additional parameter $`r=\sigma _{01}/\sigma _d(0)`$ which measures the relative strength of the two components. The authors of that work base their model on the analysis of their data on optical conductivity of La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> which is isoelectronic to the calcium-doped system. Some sharp structure in the optical data, presumably due to phonons, is neglected in the model. If included, it might account for some of the additional small structure observed in PE.
The parameters for the upper curve are: $`\tau =5\times 10^{-14}`$ s, $`\rho _0=1/\sigma _d(0)=0.296m\mathrm{\Omega }`$-cm, T = 80 K, $`P_0=0.0025`$ and $`r=0.2`$. The parameters for the lower curve are: $`\tau =10^{-14}`$ s, $`\rho _0=1/\sigma _d(0)=1.48m\mathrm{\Omega }`$-cm, T = 280 K, $`P_0=0`$ and $`r=0.25`$. The curves are normalized to agree at a binding energy of 600 meV.
Again in this case, there is no change in the underlying density of states and the change in the theoretical intensity is entirely due to extrinsic effects.
To make a convincing case for a pseudogap from PE on a poorly conducting material, a careful analysis of the data using Eq. 6 is required. This suggests that meaningful investigation of electronic structure in poorly conducting materials requires a combination of PE with optical conductivity and electron energy loss measurements. This allows us to apply Eq. 6 and back out the density of states. A simple check can always be made. The inelastic part of the spectrum is inversely proportional to the speed of the outgoing electron, as may be seen from Eq. 1. Hence, to be genuine, a pseudogap must be present in the observed intensity at all incoming photon energies.
One may make some qualitative statements about the current situation in some of the more important classes of materials beyond the CMR manganates.
In good-quality high-T<sub>c</sub> superconductors, the conductivity in the $`ab`$ plane typically exceeds the Mott value. However, the conductivity along the $`c`$-axis is often less. Thus, these materials form a marginal case for the loss mechanism described here. There are other very strong indications that the pseudogap in the underdoped materials is quite real. ARPES itself shows that the pseudogap is momentum-dependent, which the loss is not. There is also corroboration from other experiments, tunneling being perhaps the most persuasive because it is also a direct measure of the DOS. On the other hand, details of lineshapes may still be affected by extrinsic processes in high-T<sub>c</sub> materials. A distinct sharpening of quasi-particle-like peaks is often observed as the temperature is lowered and the DC conductivity increases, suggesting a decrease in energy loss.
In 1D systems, conductivity in two directions is very low, and one might expect the losses to be substantial. Intriguingly, it often appears to be the case that the gap or pseudogap measured in PE is greater than that given by other experiments. In (TaSe<sub>4</sub>)<sub>2</sub>I, for example, the PE gap at low temperatures is about 500 meV, whereas other experiments give values near 250 meV. Another well-known example is TTF-TCNQ. At room temperature, DC transport data may be interpreted as that of a highly anisotropic gapless metal, but a pseudogap of 120 meV is observed in ARPES. These are only two of numerous examples of this puzzling mismatch which can be cited in 1D conductors. Such results are a strong indication that extrinsic processes are influencing the photoelectron spectrum in these systems.
# Solving the kilo-second QPO problem of the intermediate polar GK Persei
## 1 Introduction
GK Per (Nova Per 1901; Campbell 1903) belongs to a subgroup of cataclysmic variables (CVs) called Intermediate Polars (IPs). In these systems an asynchronously-rotating, magnetic white dwarf accretes material from a less-massive, late-type companion filling its Roche lobe. Gas leaving the companion star attempts to form an accretion disc around the primary star, but the white dwarf's magnetic field either prevents the formation of the disc or truncates it near the white dwarf.
GK Per was identified with the X-ray source A0327+43 by King, Ricketts & Warwick (1979) and confirmed as an IP by the detection of a 351 s X-ray spin pulse by Watson, King & Osborne (1985; hereafter WKO) and Norton, Watson & King (1988). The same period was subsequently found in optical photometry by Patterson (1991). GK Per has the longest orbital period among known CVs, P$`_{\text{orb}}`$ = 2 d (Crampton, Cowley & Fisher 1986; hereafter CCF). The wide binary separation combined with a relatively weak magnetic field ($`\sim `$1 MG) means that a truncated accretion disc must be present if current theories of disc formation are correct (Hameury, King & Lasota 1986). The presence of a disc has yet to be confirmed by direct observation, although the system does undergo dwarf nova outbursts every 2–3 years, during which its optical brightness increases from 13th to 10th magnitude (Sabbadin & Bianchini 1983). The most-likely mechanism for dwarf nova outbursts is a thermal instability within an accretion disc (Osaki 1974). GK Per outbursts have been modelled as such by Cannizzo & Kenyon (1986) and Kim, Wheeler & Mineshige (1992).
This paper is a continuation of paper i (Morales-Rueda, Still & Roche 1996), in which we presented spectrophotometric observations of GK Per taken on the rise to its 1996 outburst (Mattei et al. 1996). We reported the detection of quasi-periodic oscillations (QPOs) within the Doppler-broadened emission lines of H i and He ii. This provides an opportunity to map the velocity structure of the oscillations. QPOs are defined as low-coherence brightness oscillations thought to be associated with material within the inner accretion flows of CVs. Theoretical models developed to explain QPOs consider the presence of dense blobs of material orbiting in the inner regions of the accretion disc (Bath 1973), or non-radial pulsations over the surface of the white dwarf (Papaloizou & Pringle 1978), or radially-oscillating acoustic waves in the inner disc (Okuda et al. 1992; Godon 1995). In these models, the QPO timescales match observations of dwarf novae and are of the order of a few hundred seconds. However the QPO periods detected in GK Per are an order of magnitude longer than this. Prior to this paper they had only been detected in X-ray data taken during outbursts; WKO discovered them in 1.5–8.5 keV EXOSAT data at the peak of the 1983 outburst, while Ishida et al. (1996) report a second detection at 0.7–10 keV with ASCA during the rise to the 1996 outburst discussed in this paper.
To explain the long timescales WKO suggested the QPO mechanism is caused by beating between the 351 s white dwarf spin period and inhomogeneous gas orbiting at the inner edge of the accretion disc. Hellier & Livio (1994; hereafter HL) noted that the X-ray hardness ratio varies over the QPO cycle as expected from photoelectric absorption by cool gas and that a period of a few thousand seconds is consistent with the orbital frequency of gas if it is deposited onto the disc by a gas stream which has partially avoided impacting the outer disc rim and follows a ballistic trajectory. They propose that the QPO mechanism is X-ray absorption by vertically-extended blobs of gas orbiting at this preferred inner impact radius. In paper i we determined that the characteristic velocity structure of the optical counterpart to the QPOs observed by Ishida et al. (1996) is consistent with blobs in the inner disc. In the current paper we present further analysis which indicates that the optical QPO is also driven by absorption, but favours strongly a beat model over the disc-overflow interpretation.
## 2 Observations
Between 1996 February 26 and 28, 6–8 days before the ASCA pointings of Ishida et al. (1996), we obtained spectrophotometry of GK Per using the Intermediate Dispersion Spectrograph mounted on the 2.5 m Isaac Newton Telescope (INT) on La Palma. Table 1 gives a journal of observations. In Fig. 1 we show a visual light curve obtained by the Variable Star Network during the 1996 outburst, with arrows indicating the days on which we made observations. The quick readout mode was used in conjunction with a Tektronix CCD windowed to 1024 $`\times `$ 150 pixels to reduce dead time and obtain good sampling of the spin cycle. The exposure times and resolution of the data were already described in paper i.
After debiasing and flat-fielding the frames by tungsten lamp exposures, spectral extraction proceeded according to the optimal algorithm of Horne (1986). The data were wavelength calibrated using a CuAr arc lamp and corrected for instrumental response and extinction using the flux standard HZ 15 (Stone 1977). The spectrograph slit orientation of PA $`249.1^{\circ }`$ allowed a 15th magnitude nearby star approximately 0.5 arcsec ENE of GK Per to be employed as calibration for light losses on the slit.
We also have available to us spectroscopy of various K-type stars from 1995 October 11 to 13 obtained from the INT and from 1995 October 30 to November 2 with the 2.1 m telescope in the McDonald Observatory in Texas. The INT instrumental setup was identical to the one used for the 1996 observations described above. For the McDonald data, the low-to-moderate resolution spectrometer ES2 was employed in conjunction with the TI1 CCD and a grating ruled at 1200 lines mm<sup>-1</sup> covering the wavelength region $`\lambda `$4196 Å–$`\lambda `$4894 Å giving a resolution of 200 km s<sup>-1</sup> at $`\text{H}\beta `$.
The spectra were flat-fielded, optimally extracted and wavelength calibrated also in the standard manner. Flux calibrations were applied using observations of the standards HD19445 (Oke & Gunn 1983) and Feige 110 (Stone 1977) for the October and November data respectively. Table 2 gives a list of the K-type templates observed over both runs.
## 3 Results
### 3.1 Average spectra
Fig. 2 presents the average of all the data collected on 1996 Feb 28. It is characterised by a flat continuum, broad Balmer and He i lines in emission, high excitation lines of He ii, N iii and C iii and numerous faint, narrow absorption features of Fe i, Ca i, Ti ii and Sr ii that had been identified as signatures of the K-type secondary star by Kraft (1964), Gallagher & Oinas (1974), CCF and Reinsch (1994).
We employed K star spectral templates to determine which luminosity class best matched the secondary star in this system during outburst and search for signatures of increased X-ray irradiation. Using CCF’s fit to the orbital radial velocity of the secondary star we shifted out the orbital motion of the absorption lines with a quadratic rebinning algorithm. We binned in velocity the spectra of GK Per and the K-type templates to ensure that they all had identical wavelength ranges and dispersions. We employed the optimal subtraction algorithm of Marsh, Robinson & Wood (1994) to determine the K star spectral type – we multiply the template by a monochromatic constant which represents the contribution to the spectrum from non-stellar sources of light and subtract the resulting spectrum from the GK Per data. The residual was smoothed using a high-pass band filter (FWHM of gaussian = 13 Å), and a $`\chi ^2`$ test performed between the original and smoothed residual. This is an iterative procedure to determine the optimum value of the monochromatic constant which continues until $`\chi ^2`$ is minimised. Table 2 lists the templates, their spectral classes, and the reduced $`\chi ^2`$ obtained after applying optimal subtraction. The best fit template is the K1iv star HD197964, which provided a reduced $`\chi ^2`$ of 2.5. The secondary star contributes 13 per cent of the total light in this spectral region on the third night of observations. This compares to 33 per cent found by CCF and Gallagher & Oinas (1974) during quiescence, indicating that the accretion flow has increased in brightness.
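A compact sketch of the optimal-subtraction step (assuming both spectra are already on a common velocity grid and normalised, with the Gaussian width set by the 13 Å FWHM converted to pixels):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize_scalar

def optimal_subtract(target, template, errors, fwhm_pix):
    # Find the template scaling 'a' that minimises chi^2 between the
    # residual (target - a*template) and a smoothed version of itself,
    # following Marsh, Robinson & Wood (1994).
    sigma_pix = fwhm_pix / 2.3548  # FWHM -> Gaussian sigma
    def chi2(a):
        resid = target - a * template
        smooth = gaussian_filter1d(resid, sigma_pix)
        return np.sum(((resid - smooth) / errors) ** 2)
    fit = minimize_scalar(chi2, bounds=(0.0, 1.0), method='bounded')
    return fit.x, chi2(fit.x)  # best-fit fraction and its chi^2
```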
The best-fit luminosity classes are consistent with the quiescent classifications of K2ivp by Kraft (1964), K2ivp–K2v by Gallagher & Oinas (1974), K0iii by CCF, and K3v by Reinsch (1994). We find the spectral type to be constant across our two phase samples - one during which a large area of the white dwarf-facing surface is visible and the other when it is mostly limb-occulted. Consequently there is no observational evidence for an increase in irradiating flux from the accretion regions over the inner face of the companion star, although we are limited by a small range of spectral templates and poor orbital sampling.
In order to measure integrated emission line fluxes from each of the three nights, we fitted a third order polynomial through wavelength bands relatively free of line features ($`\lambda \lambda `$4147-4212Å, $`\lambda \lambda `$4278-4306Å, $`\lambda \lambda `$4560-4608Å, $`\lambda \lambda `$4770-4838Å) and subtracted the fit from the data. Fluxes were measured by summing under each line profile and these are provided in Table 3. The intensity of the continuum, the lines and the relative intensity of He ii$`\lambda `$4686 Å with respect to the Balmer lines, increases from the first night to the last as the system approaches the outburst maximum. We fit the emission lines during the three nights with a power law function of time $`F\propto t^\alpha `$, and provide the index $`\alpha `$ for each line in Table 4.
Power-law fits of the form $`f_\nu \propto \nu ^\alpha `$ on each consecutive night provide $`\alpha `$ = $`-1.61\pm 0.03`$, $`-1.43\pm 0.03`$ and $`-1.39\pm 0.11`$. The continuum slope changes slightly, within statistical uncertainties, during the observing run, with the spectra becoming bluer with time, consistent with a rise in temperature through the accretion flow. These indices are inconsistent with an accretion disc emitting as a discrete set of blackbodies (Pringle 1981).
A comparison of the Feb 28 averaged spectrum with the spectra presented by Reinsch (1994) reveals that the Balmer line fluxes are $`\sim `$1.7 times larger than during quiescence and the He ii$`\lambda `$4686 Å feature and the C iii/N iii$`\lambda \lambda `$4640–50 Å Bowen blend are $`\sim `$5.3 times brighter. He i$`\lambda `$4471.7Å and He i$`\lambda `$4921.9Å are 1.4 and 2.3 times brighter during this outburst stage, respectively. In quiescence the Balmer lines are the brightest emission lines, whereas the strongest line in the current data is He ii$`\lambda `$4686 Å. Szkody, Mattei & Mateo (1985) and CCF present spectra of GK Per taken during the 1983 outburst maximum and 20 days after outburst respectively in which this behaviour is also clear.
### 3.2 Radial velocities
In paper i we provided an analysis of the emission line velocities. To complete the radial velocity analysis we now consider the absorption lines. In Sec. 3.1 we determined that our best secondary star template has a spectral type of K1iv. By masking out the emission lines in individual GK Per data and subtracting fits to the continua from all spectra, we were able to cross-correlate the absorption spectrum of GK Per with our template (Tonry & Davis 1979). We corrected the resulting radial velocities by the systemic velocity of the template star (-6.5 $`\text{km}\text{s}^1`$; Evans 1979) and fitted them with a circular function:
$$V=\gamma +K\mathrm{sin}2\pi \left[\varphi -\varphi _0\right]$$
(1)
Orbital phases were adopted relative to the corrected CCF ephemeris, where $`\varphi _0`$ corresponds to superior conjunction of the white dwarf. $`\gamma `$ represents the systemic velocity of the binary, $`K`$ is the radial velocity semiamplitude of the companion star and $`\varphi `$ is the orbital phase.
We combined the radial velocities measured by previous authors (Kraft 1964; CCF; Reinsch 1994) with our own values and plot them together in Fig. 3. We assume that the errors on all individual measurements previous to this study are equal to the mean error of 20 $`\text{km}\text{s}^{-1}`$. The solid curve is the fit to all data, providing $`\gamma =30\pm 1`$ $`\text{km}\text{s}^{-1}`$, $`K=119\pm 2`$ $`\text{km}\text{s}^{-1}`$ and $`\varphi _0=0.998\pm 0.003`$. The dot-dashed curve is a fit to all the data excluding the current set, for which $`\gamma =22\pm 2`$ $`\text{km}\text{s}^{-1}`$, $`K=128\pm 2`$ $`\text{km}\text{s}^{-1}`$ and $`\varphi _0=0.009\pm 0.003`$, providing reasonable agreement although the fits are not consistent within the given errors. Martin (1988) showed that an elliptical fit can account approximately for irradiation processes over the inner face of the secondary star. Elliptical fits to the quiescent data have already been produced by CCF and Reinsch (1994). We do not have suitable phase sampling to produce a significant elliptical fit with the current data. Therefore although we find no evidence for secondary star irradiation in the absorption line radial velocities during outburst, our phase coverage prevents us from ruling it out.
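For completeness, a minimal sketch of fitting Eq. 1 to the measured velocities (the array names are placeholders for the orbital phases, velocities and their uncertainties):

```python
import numpy as np
from scipy.optimize import curve_fit

def circular_orbit(phi, gamma, K, phi0):
    # Eq. (1): V = gamma + K sin 2*pi*(phi - phi0)
    return gamma + K * np.sin(2.0 * np.pi * (phi - phi0))

# popt, pcov = curve_fit(circular_orbit, phases, velocities,
#                        p0=[0.0, 120.0, 0.0], sigma=errors,
#                        absolute_sigma=True)
# gamma, K, phi0 = popt; 1-sigma errors from np.sqrt(np.diag(pcov))
```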
### 3.3 Emission line profiles
The continuum-subtracted data are presented as time-series of selected line profiles over each night of observation in Fig. 4. At times these lines display double-peaked profiles. This is often considered a signature of accretion disc emission (Smak 1981), however the velocity structure across the accretion flow is so complex in IPs that this cannot be considered a conclusive detection of the accretion disc. The profiles are asymmetric where the peak apparently shifts from the blue to the red and back again in the Balmer lines over the observations. This behaviour is reminiscent of emission from a localised region in the system such as the bright spot where the accretion stream strikes the outer rim of the disc, or an irradiated region on the secondary star, but we note that the orbital phasing of the observed $`\text{H}\beta `$ feature is inconsistent with both interpretations. Moreover, the orbital phases at which we see these variations are not those at which the hot spot and the irradiated face of the secondary are best observed, i.e. phases 0.8 and 0.5 respectively. The profile variations of the He i and He ii$`\lambda `$4686 Å lines are different to those of $`\text{H}\beta `$ either because they originate from different locations or are more sensitive to intervening absorption regions.
The most interesting variation occurs in the blue wings of all these profiles. First we note that the profiles are asymmetric about their rest velocities, regardless of orbital phase, where each line has a red bias. This shift is much larger than the systemic velocity of the binary, measured from secondary star photospheric lines. Secondly we note that this asymmetry is periodic, at least in $`\text{H}\beta `$ and He ii$`\lambda `$4686 Å, and this period corresponds to the kilo-second QPOs we reported in paper i. The QPOs manifest in blue-shifted material and appear to be the result of absorption either of the line source or the underlying continuum. We have presented trails of $`\text{H}\beta `$ and He ii$`\lambda `$4686 Å against QPO phase after subtracting the nightly average from each spectrum in paper i.
After the orbital period, the third likely signal present in these trails is the 351 s spin period of the white dwarf. We attempted to remove the orbital variations in the line profile by shifting out the motion of the white dwarf according to the ephemeris and radial velocity fit of CCF. The QPO contribution was accounted for approximately by combining the resulting spectra into 40 bins phased over the QPO cycle and subtracting the spectrum in the bin nearest in time from each individual spectrum. Fig. 5 shows the resulting trails of $`\text{H}\beta `$, He ii$`\lambda `$4686 Å and the sum of the He i lines binned into 30 bins, over the spin period using the ephemeris of Ishida et al. (1992). The modulated signal is faint during Feb 26 and 27, but clearly present in the $`\text{H}\beta `$ and He ii$`\lambda `$4686 Å profiles on Feb 28, extending out as far as $`\sim `$1 000 $`\text{km}\text{s}^{-1}`$ in the $`\text{H}\beta `$ profile. We see one modulation per cycle with signal moving from the red peak to the blue. This is reminiscent of the spin signal found in the trails of the IP RX J0558+5353 (Still, Duck & Marsh 1998), although in that case most of the power occurred on the 1st harmonic of the spin period, indicating accretion onto two poles of the primary star. In the current trails of GK Per we see no evidence for power on the 1st harmonic. Harlaftis & Horne (1998) postulate that the origin of this spin pulsed emission is the region where the disc material is threaded onto the magnetic field. Similar trails but for spectra obtained during quiescence are presented by Reinsch (1994), where the spin signal is also clear on the fundamental frequency in the Balmer and He ii$`\lambda `$4686 Å lines but not very strong in He i.
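The binning and nearest-bin subtraction described above can be sketched as follows (a simplified outline; `spectra` is assumed to be an array of shape (n_times, n_pixels), every phase bin is assumed non-empty, and the orbital-motion correction is taken to have been applied already):

```python
import numpy as np

def nearest_bin_filter(times, spectra, qpo_period, nbins=40):
    # Remove the slow QPO modulation: average the spectra in QPO-phase
    # bins, then subtract from each spectrum the mean spectrum of the
    # bin it falls in (the bin nearest to it in time).
    phase = (times / qpo_period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    means = np.array([spectra[idx == b].mean(axis=0) for b in range(nbins)])
    return spectra - means[idx]

def fold(times, spectra, period, t0=0.0, nbins=30):
    # Co-add the filtered spectra into phase bins of the spin period.
    phase = ((times - t0) / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.array([spectra[idx == b].mean(axis=0) for b in range(nbins)])
```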
### 3.4 V/R ratios
By measuring the fluxes under the continuum-subtracted blue wings (from -1100 to 0 $`\text{km}\text{s}^{-1}`$) of the $`\text{H}\beta `$ and He ii$`\lambda `$4686 Å lines and dividing these by the flux under the red wings (from 0 to 1100 $`\text{km}\text{s}^{-1}`$) we produce a time-series of V/R ratios. These are plotted in Fig. 6 for the three nights of observation. Kilo-second oscillations are observed over all three nights. A power search over the ratios was performed using the Lomb-Scargle algorithm (Scargle 1982) and the QPO periods found are listed in Table 5. The errors quoted are only an estimate of the minimum error and depend on the frequency sampling. A significance test (a variant of the randomisation Monte Carlo technique, Linnell Nemec & Nemec 1985) was run by iteratively searching for periods after small shifts of the data had been performed. After 1000 permutations we found that the periods present in the V/R ratios are within the quoted errors with 95 per cent confidence. Note that the QPOs tend to shorter periods over the three nights.
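A minimal sketch of such a search with a randomisation test (using astropy's Lomb-Scargle implementation; the frequency grid and permutation count are illustrative, and this is a simplified stand-in for the exact Linnell Nemec & Nemec variant):

```python
import numpy as np
from astropy.timeseries import LombScargle

def peak_period(t, vr, fmin, fmax, nfreq=5000):
    # Return the period and power of the strongest peak in [fmin, fmax].
    freq = np.linspace(fmin, fmax, nfreq)
    power = LombScargle(t, vr).power(freq)
    return 1.0 / freq[np.argmax(power)], power.max()

def randomisation_confidence(t, vr, fmin, fmax, n_perm=1000, seed=None):
    # Shuffle the V/R values over the fixed time stamps and count how
    # often the shuffled peak power reaches the observed one.
    rng = np.random.default_rng(seed)
    _, p_obs = peak_period(t, vr, fmin, fmax)
    n_hits = sum(peak_period(t, rng.permutation(vr), fmin, fmax)[1] >= p_obs
                 for _ in range(n_perm))
    return 1.0 - n_hits / n_perm  # e.g. 0.95 for 95 per cent confidence
```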
In order to determine the nature of higher-frequency variations we created a coarse version of each V/R curve over 40 time bins on each night, each bin approximating to one white dwarf spin cycle. The QPO signal was filtered out by subtracting the bin nearest in time from each V/R measurement. We searched for power in the modified V/R ratios using the Lomb-Scargle algorithm and obtained the power spectra plotted in Fig. 7.
Four of the spectra clearly show power at the white dwarf spin period (246 cycles day<sup>-1</sup>), but we cannot determine whether there is any power on the 1st harmonic which occurs beyond the Nyquist limit. We folded the modified V/R ratio data using the spin ephemeris from Ishida et al. (1992) using 40 bins and plot them in Fig. 8. The V/R ratios show sinusoidal behaviour but not as clearly as in Garlick et al. (1994) and Reinsch’s (1994) quiescence data. The maximum in these curves has previously been observed in quiescence at spin phase 0 rather than phase 0.25.
### 3.5 Emission line and continuum fluxes
We conducted a power search across the same regions of continuum listed in Sec. 3.1 and the integrated flux over each emission line. Power is present at kilo-second periods whose values decrease on consecutive nights (see Table 6); in Fig. 9 we present the power spectra for the integrated fluxes of $`\text{H}\beta `$, He ii$`\lambda `$4686 Å and the continuum. No significant power was found at the spin period during any of the nights. Garlick et al. (1994) and Reinsch (1994) find clear modulations in the intensity of the Balmer lines with the spin period during quiescence but no significant contributions at the spin period in the continuum.
By means of a power search across radial velocity data, paper i determined that the characteristic velocities of the QPO signal were intermediate between the orbital regime and the spin regime. This is consistent with a signal origin in the inner disc or the threading region between disc and magnetic curtain. However the search had no means to discriminate between blue- and red-shifted material and therefore could say little more about the QPO mechanism. In this paper we conduct a similar power search but in the line fluxes across discrete velocity bins of the emission profiles. The resulting power maps for $`\text{H}\beta `$ and He ii$`\lambda `$4686 Å are plotted in velocity–frequency space and provided in Fig. 10, and Table 7 lists the QPO period on different nights sampled at $`-400`$ $`\text{km}\text{s}^{-1}`$ and $`-600`$ $`\text{km}\text{s}^{-1}`$ in the line profile.
We find that power associated with the QPO is not symmetrically distributed about the rest wavelength, but biased towards the blue wing of each line, as we have previously noted in Sec. 3.3. QPO power extends from $`-500`$ to $`-1000`$ $`\text{km}\text{s}^{-1}`$ depending on the line and night but the QPO does not appear to be a strong function of velocity, consistent with our results from paper i. We also find the QPO tending to higher frequencies with time, as we have already determined from paper i and the V/R ratio analysis in the current paper. We discuss the significance of this result in Sec. 4.
We also find power on the white dwarf spin frequency extending to $`\sim `$1000 $`\text{km}\text{s}^{-1}`$, although weaker than we have found in the radial velocity analysis of paper i.
## 4 Discussion: QPO mechanisms
In paper i we found that the optical QPOs observed during the 1996 outburst have a velocity structure that is consistent with a mechanism where dense blobs of gas orbit in the inner disc at a radius determined by the impact of an overflowing gas stream (HL). We determined that the QPO is an approximately constant function of velocity ruling out mechanisms involving radial or vertical oscillations in the inner disc flow (Carroll et al. 1985), and that the QPO tends to higher frequencies with time. In this paper we have shown that the QPO is biased towards blue-shifted material and we discuss this result in terms of the disc-overflow accretion model and the alternative beat model proposed by WKO.
### 4.1 The Disc-overflow accretion model
This model was proposed by HL on the basis that a 5000 s period is consistent with the Keplerian period of blobs of material deposited in the inner disc by the overflowing gas stream (Lubow & Shu 1975; Lubow 1989; Hellier 1993; Armitage & Livio 1996, 1998), and that the X-ray hardness ratio measured as a function of QPO phase from the data collected by WKO is consistent with the photo-electric absorption of soft X-rays by cool intervening gas. The optical counterpart to the X-ray QPOs could either be direct reprocessing off the blobs or the periodic reprocessing off material in the outer disc as the blobs intermittently absorb the X-rays from the central object.
We have determined that, in velocity, the QPO ranges between the expected rotational velocity of the outer disc and the overflow impact site (paper i). Although this distribution of QPO power is consistent with the proposed mechanism, it is more difficult to reconcile the blue-shifted bias of power in terms of the overflow model (Fig. 10).
It is unlikely that the optical QPOs can be the result of reprocessing off the disc unless there is an emission mechanism within the disc which is extremely anisotropic. Similarly direct emission from the blobs must also be anisotropic. In this case the cooling of shock-heated or viscously-heated gas within the blobs could cause the anisotropy provided the blobs are orbiting faster than the surrounding disc material. It is not clear why this should be the case.
### 4.2 A disc-curtain beat mechanism
WKO proposed a QPO mechanism where the observed periods are the beats between the spin frequency of the white dwarf and the Keplerian frequency of dense blobs of gas orbiting at the inner rim of the disc (Alpar & Shaham 1985a, b). Each time an accretion curtain sweeps over a blob we observe an increase in column density across the curtain, providing a cool absorbing body for the X-ray emission. The prediction follows that the orbital period of the inner disc is either $`\sim `$ 320 s or $`\sim `$ 380 s. We investigate whether this model can explain the observed bias in the QPOs across the optical emission lines. Two schematics of the accretion flow in the binary are depicted in Fig. 11. We take the orbital inclination to be consistent with $`46^{\circ }<i<72^{\circ }`$ (Reinsch 1994). We have assumed that the magnetic axis of the white dwarf is misaligned with the rotational axis of the system by $`45^{\circ }`$, although its true inclination is unknown.
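The beat arithmetic, worked through with a representative QPO period of 5000 s (an illustrative value; the measured periods vary from night to night), is

$$\frac{1}{P_{\mathrm{K}}}=\frac{1}{P_{\mathrm{spin}}}\pm \frac{1}{P_{\mathrm{QPO}}}=\frac{1}{351\,\mathrm{s}}\pm \frac{1}{5000\,\mathrm{s}}\quad \Rightarrow \quad P_{\mathrm{K}}\approx 328\,\mathrm{s}\;\;\mathrm{or}\;\;378\,\mathrm{s},$$

consistent with the quoted values of $`\sim `$ 320 s and $`\sim `$ 380 s.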
In Sec. 3.3 we determined that, similar to the X-ray QPOs, the optical counterpart in the emission lines is the result of absorption. We consider two mechanisms which modulate the line emission by absorption over the WKO beat cycle. First we consider self-absorption of line flux from the curtain just above each threading region. Our observation that QPO signal does not extend to as large velocities as the spin signal provides some justification for this assumption. We require the absorption profile to be saturated such that line strength is a function of both column density through the curtain and the velocity gradient across the flow (see e.g. Horne & Marsh 1986). At spin phase zero, $`\varphi _{spin}`$ = 0, the upper accretion curtain lies behind the white dwarf and material flowing along that curtain is blue-shifted. Conversely material in the curtain feeding the lower pole is red-shifted. We should observe a blue shifted QPO signal when the blob sweeps through the first threading region every few thousand seconds, increasing the column density along the approaching curtain, but not a red-shifted signal because the second threading region is obscured by the inner accretion disc. The result is a blue bias in the QPO signal across the emission lines at this spin phase. However at $`\varphi _{spin}`$ = 0.5 the curtain geometry has rotated by 180$`^{\circ }`$ and both curtains are equally visible. But in this configuration the velocity gradient across the line forming regions is small compared to the $`\varphi _{spin}`$ = 0 case and consequently the amount of absorption across the line profile is smaller. In this way the blue bias in the signal is conserved over the beat cycle.
An equally plausible alternative is that the absorption is of continuum light from the accretion disc behind the white dwarf. As before, the column density along the accretion curtains is modulated on the beat cycle as a blob sweeps through the threading regions. This provides a kilo-second QPO by periodic absorption which occurs when the upper curtain is back-illuminated by the disc at $`\varphi _{spin}`$ = 0. Since there is no back-illuminating source for the lower curtain, or for the upper curtain when it is red-shifted, this mechanism provides a natural blue bias to the QPO signal.
The beat model explains the long-timescale QPOs from GK Per using the pre-existing models of QPO generation. In these models the driving mechanism has a timescale of a few hundred seconds, as observed in the rest of the dwarf nova class of objects. The extra ingredient for GK Per is provided by its properties as both a dwarf nova and an intermediate polar, where the QPO beats with the accretion curtains which thread the disc onto the rapidly spinning white dwarf to provide the observed kilo-second periods. In the above discussion we have considered the QPO in terms of a blob mechanism but the alternative models of radially-oscillating acoustic waves in the inner accretion disc work equally well. (Okuda et al. 1992; Godon 1995). Consequently we do not require a new physical explanation of QPOs to explain the phenomenon in GK Per.
### 4.3 QPOs or DNOs?
QPOs are present in dwarf novae during quiescence and outburst. However the kilo-second QPOs in GK Per have to date only been found when the system is in outburst (Reinsch (1994) claims a tentative kilo-second detection in optical photometry but provides no evidence). This behaviour is more typical of another class of oscillations – the dwarf nova oscillations (DNOs), which occur on timescales of tens rather than a few hundred seconds (Robinson and Nather 1979). A characteristic of DNOs is that they tend to shorter periods as a system approaches the peak of its outburst (Patterson 1981), similar to what we have found in the current observations. Given that the beat model is correct, the oscillations found in GK Per show characteristics of both QPOs and DNOs. The timescales are consistent with QPOs, whereas their behaviour is more comparable to DNOs.
The timescales of DNOs suggest they are driven at the inner edge of an accretion disc (see Warner 1995 for a review of DNOs), and since GK Per is an unusual dwarf nova in that the inner disc is truncated by the white dwarf field, it seems plausible that a DNO mechanism in this system would work on a larger timescale. If we are observing DNOs, the implication of the period decrease over the three nights for the beat model is that the inner disc must be orbiting at 320 s rather than 380 s. In the latter case a decrease in the period of the driving DNO from the disc will result in an increase in the observed beat period. We do not find modulations in our data with either a 320 s or 380 s period. Tentative detections of signal at 390 $`\pm `$ 20 s and 410 $`\pm `$ 13 s have been claimed by Mazeh et al. (1985) from optical photometry during the 1983 outburst, while Patterson (1981) reports a detection at 380 $`\pm `$ 20 s but never presented the result. If present during the quiescent state, a 380 s oscillation would suggest a QPO classification. There have been no reports of a 320 s period in the literature.
## 5 Conclusions
We obtain new values for the systemic velocity and the velocity semi-amplitude of the secondary in agreement with previous authors. We conclude that there is no evidence for increased heating over the inner face of the donor star during this stage of the outburst. We find spin modulations in the V/R ratios of the lines but only tentatively in their integrated fluxes or the continuum. Spin power resolved across the line profiles extends to velocities of 1000 $`\text{km}\text{s}^{-1}`$, a large fraction of the freefall velocity of the central object.
The detection of kilo-second QPOs across the optical emission line profiles of GK Per has provided an opportunity to test the mechanism behind the unique long-timescale QPOs in this object, which are an order of magnitude longer than QPOs normally observed in disc-accreting cataclysmic variables. We have rejected the model of HL which considers the direct effects of blobs orbiting at the Keplerian frequency of the annulus associated with a disc-overflow impact site. Our favoured models consider the long QPO period to be the consequence of beating between more typical timescale QPOs or DNOs of $`\sim `$ 300–400 s with the magnetic accretion curtain spinning with the white dwarf. Therefore we do not require a new model to explain these long timescales – the long oscillations are merely a consequence of the magnetic nature of the binary.
## ACKNOWLEDGEMENTS
We thank Janet Wood and John Lockley for obtaining the spectra of the K-type templates at the McDonald Observatory. MDS was supported by PPARC grant K46019. PDR acknowledges the support of the Nuffield Foundation via a grant to newly qualified lecturers in science to assist collaborative research. The reduction and analysis of the data were carried out on the Sussex node of the STARLINK network. We thank Tom Marsh for providing his reduction software. LM also wishes to thank R. I. Hynes for useful discussions. The Isaac Newton Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
# The surfactant effect in semiconductor thin film growth
## I. Introduction
Progress in the fields of electronic and optical devices relies on the ability of the semiconductor industry to fabricate components of ever increasing complexity and decreasing size. The drive for miniaturization has actually provided the impetus for much fundamental and applied research in recent years. Nanoscale structures in 2, 1 or 0 dimensions, referred to as quantum wells, wires and dots respectively, are at the forefront of exploratory work for next generation devices. In most cases these structures must be fabricated through epitaxial growth of semiconductor thin films, in either homoepitaxial (material A on substrate A) or heteroepitaxial (material A on substrate B) mode. There are usually two important requirements in this process: First, the film must be of high quality crystalline material, and second, a relatively low temperature must be maintained during growth. The need for the first requirement is self evident, since highly defected crystals typically perform poorly in electronic applications; the imperfections, usually in the form of dislocations, grain boundaries or point defects, act as electronic traps and degrade the electronic properties to an unacceptable level. The second requirement arises from the need to preserve the characteristics of the substrate during growth, such as doping profiles and sharp interfaces between layers, which can be degraded due to atomic diffusion when the growth temperature is high. These two requirements seem to be incompatible: In order to improve crystal quality, atoms need to have sufficient surface mobility so that they can find the proper crystalline sites to be incorporated into a defect-free crystal. On the other hand, excessive atomic mobility in the bulk must be avoided, which can only be achieved by maintaining lower than typical growth temperatures. These problems are exacerbated in the case of heteroepitaxial growth, where the presence of strain makes smooth, layer-by-layer growth problematic even when the temperature is high, since the equilibrium structure involves strain relieving defects.
Early studies of the effects of contaminants and impurities on growth modes had indicated that it is indeed possible to alter the mode of growth by the inclusion of certain elements, different from either the growing film or the substrate. For a discussion of epitaxial growth modes and the effect of contaminants see the authoritative review by Kern et al. A breakthrough in the quest for controlled growth of semiconductor films was reported in 1989 when Copel et al. demonstrated that the use of a single layer of As can improve the heteroepitaxial growth of Ge on Si, which is otherwise difficult due to the presence of strain. Growth in this system typically proceeds in the Stranski-Krastanov mode, that is, it begins with a few (approximately 3) wetting layers but quickly reverts to three-dimensional (3D) island growth. The eventual coalescence of the islands unavoidably produces highly defective material. In the experiments of Copel et al., the monolayer of As was first deposited on the Si substrate, and continued to float during the growth of the Ge overlayers. The presence of the As monolayer in the system led to a drastic change in both the thermodynamics (balance of surface and interface energies) and the kinetics (surface and bulk mobility of deposited atoms), making it possible to grow a Ge film in a layer-by-layer fashion, to unprecedented thicknesses for this system (several tens of layers). This remarkable behavior was termed the “surfactant effect” in semiconductor growth. Since then, a large number of experiments has confirmed this behavior in a variety of semiconductor systems.
We digress momentarily to justify the terminology. The typical meaning of the word surfactant is different from its use here, which has led to some debate about the appropriateness of the term in the context of semiconductor thin film growth. The word surfactant, as defined in scientific dictionaries, is used commonly in chemistry to describe “a substance that lowers the surface or interfacial tension of the medium in which it is dissolved”, or “a material that improves the emulsifying, dispersing, wetting, or other surface modifying properties of liquids”. While these physical situations and the effect itself differ from the systems considered in the present review, we adopt here the term “surfactant” to describe the effect of adsorbate layers in semiconductor thin film growth for two reasons. First, for a reason of substance: there are indeed some similarities between adsorbate layers in semiconductor thin film growth and the classical systems to which the term applies; namely, in both cases the presence of this extra layer reduces the surface tension and changes the kinetics of atoms or clusters of atoms (small islands on semiconductor surfaces, molecules in classical systems) at the surface. Second, for a practical reason: the term surfactant has essentially become the accepted term by virtue of its wide use in semiconductor growth, and the need for consistency with existing literature forces it upon us. These reasons, we feel, justify and legitimize the use of the term surfactant in the present context.
Surfactants have been used to modify the growth mode of several systems, including growth of metal layers in homoepitaxy and heteroepitaxy. In our view, the physics relevant to such systems is significantly different from that in the case of semiconductors. Supporting this view is the fact that typically a small fraction of a monolayer is needed to produce the surfactant effect in metal growth, whereas in semiconductors typically a full monolayer of the adsorbate species (or what is equivalent to full substrate coverage, depending on the surface reconstruction) is required. Presumably, in the case of metals, a small amount of the adsorbate is sufficient to induce the required changes in surface kinetics, by altering nucleation rates or step-edge barriers (for some representative examples of metal-on-metal growth mediated by surfactants see Table 1). In semiconductors, on the other hand, the entire surface must be covered by the adsorbate in order for the required changes in energetics and kinetics to be obtained. Due to this fundamental difference, in the present article we will concentrate on the surfactant effect in semiconductor systems.
In the following we first review the available information on the subject, both from the experimental (section II) and the theoretical (section III) point of view. We then present some theoretical arguments that we have advanced in an effort to create a comprehensive picture of the phenomenon (section IV). Finally, we discuss our views on remaining important issues for future research on surfactants and comment on prospects for their use in the fabrication of electronic and optical semiconductor devices (section V).
## II. Experimental Observations
A wide range of systems have been studied where the surfactant effect was demonstrated. We classify these in three categories: the first consists of growth of group-IV layers on group-IV substrates, the second of growth of III-V compounds on III-V substrates, and the third of mixed systems, including growth of elemental and compound systems on various substrates. This categorization has been inspired by the substrate features and the nature of the deposited species, which together determine the growth processes.
### 1. Group-IV films on group-IV substrates
In the first category, the substrate is either Si or Ge (in different crystallographic orientations), on which combinations of different group-IV elements are deposited (Si, Ge, C). In these systems the deposited species are mostly in the form of single group-IV atoms. The adsorbate layers consist of monovalent (H), trivalent (Ga, In), tetravalent (Sn, Pb), pentavalent (As, Sb, Bi), and hexavalent (Te) elements, or noble metals (Au). These adsorbates remove the usual reconstruction of the surface (the different versions of the $`(2\times 1)`$ reconstruction for the Si and Ge(100) surfaces, the $`(7\times 7)`$ and the c$`(2\times 8)`$ for the Si and Ge(111) surfaces, respectively), and produce simpler reconstructions which are chemically passivated. Characteristic examples are the $`(1\times 1)`$ reconstruction (induced by H or As on the (111) surfaces), the $`(\sqrt{3}\times \sqrt{3})`$ reconstruction on the (111) surfaces with either one adsorbate atom per unit cell (induced by Ga or In) or three adsorbate atoms per unit cell (induced by Sn, Sb, Pb, Au), the $`(2\times 1)`$ reconstruction of the (100) surfaces (induced by the trivalent and pentavalent elements or by H in the monohydride phase), and the $`(1\times 1)`$ reconstruction of the (100) surfaces (induced by Te or by H in the dihydride phase). In these reconstructions the dangling bonds of the substrate atoms are saturated by the additional electrons of the adsorbate atoms, producing low-energy, chemically unreactive surfaces.
One important issue in these systems is the strain induced by the deposition of atoms with different covalent radius than the substrate atoms. The normal growth mode in strained systems involves the formation of 3D islands which relieve the strain by relaxation at the island edges, either right from the initial stages of deposition (the so called Volmer-Weber or 3D island growth), or after the formation of a wetting layer (the Stranski-Krastanov growth). When surfactants are employed, it is possible to induce layer-by-layer growth in strained systems by avoiding the formation of 3D islands for film thicknesses much beyond what is obtained under normal conditions. The reduction of strain-induced islanding was in fact one of the early intended results of surfactant use, and remains a goal pursued in several experimental studies. Even when surfactants are used, however, and the 3D island mode of growth is suppressed, the strain in the heteroepitaxial film is still present and is usually relieved by the introduction of a network of misfit dislocations. The mechanism by which this happens is not known and remains to be analyzed by atomistic models.
It is natural to expect that diffusion of group-IV adatoms on the adsorbate-covered surfaces will be relatively easy due to the chemical passivation by the surfactant layer. Such a situation may lead to a substantial increase of the diffusion length of adatoms on top of the surfactant layer. Indeed, it has been reported experimentally that in Ge on Si heteroepitaxy certain elements, like Ga, In, Sn and Pb, lead to an increase in the width of the depleted zone around islands. At the same time, however, it has also been found that other elements, like As, Sb, Bi and Te, lead to a decrease in the width of the depleted zone. These observations were interpreted as indicating that the former type of surfactants (group-III and group-IV atoms) enhances the diffusion length while the latter type of surfactants (group-V and group-VI atoms) reduces it. Moreover, this interpretation has been frequently invoked as an explanation of the suppression of 3D islanding in heteroepitaxy by group-V and group-VI surfactants.
Since it is generally easier for group-V and group-VI elements to provide a chemically passive surface, we argue that the above interpretation may not be unique. In fact, we show in section IV that even if it were true, it would explain the surfactant effect neither in homoepitaxy nor in heteroepitaxy. We propose an alternative interpretation of the experimental results according to which the diffusion length is mostly irrelevant. Instead, the essential question is whether the surfactant layer passivates island edges or not. Some surfactants (group-III and group-IV elements) cannot passivate island edges, which then act as strong sinks of newly deposited atoms, while other surfactants (group-V and group-VI elements) passivate island edges as well as terraces, so that island edges do not act as adatom sinks, and the width of the depleted zone is reduced. We show that this interpretation is consistent with the experimentally observed surface morphologies and island densities in the presence of surfactants. It also explains why group-V and group-VI adsorbates suppress 3D islanding in heteroepitaxy. The systems studied experimentally that belong to this category are listed in Table 2. The relative simplicity of the surface reconstruction induced by the surfactant and the fact that the deposited species is mostly single group-IV atoms make the systems in this category the easiest to analyze from a microscopic point of view. Indeed, most atomistic-scale models of the surfactant effect address systems in this category.
### 2. III-V films on III-V substrates
The second category consists of III-V substrates on which combinations of other III-V systems are deposited. The deposited species in this case are more complicated, since at least two types of atoms with different chemical identities have to be supplied. Under usual conditions the group-III species is deposited as single atoms, whereas the group-V species is deposited as molecules (dimers or tetramers), which have to react with the group-III atoms and become incorporated in the growing film. This is already a significant complication in growth dynamics, and makes the construction of detailed atomistic growth models considerably more difficult. Moreover, the usual surface reconstructions of these substrates are more complicated and depend on deposition conditions (temperature and relative flux of group-III to group-V atoms). In the presence of surfactants both the surface reconstructions and the atomic motion are altered, but much less is known about the atomic-level details. The surfactants used in these systems include H, Be, B, In, Sn, Pb, As, Sb, Te. In certain cases, the surfactant species is the same as one of the atoms in the growing film (such as In in InAs growth on GaAs), or one of the atoms in the substrate (such as Sb in growth of InAs on AlSb). Strain effects are important in these systems as well. Building high-quality III-V heterostructures has been one of the goals of many technologically oriented studies, and the use of surfactants has been beneficial in reducing the problems associated with strain. However, the more complex nature of these systems has prevented detailed analysis of the type afforded in group-IV systems. A compilation of experimental results for this category is given in Table 3.
### 3. Mixed film and substrate systems
The final category consists of mixed systems in which group-IV films are grown on III-V substrates (for example Si on GaAs) or vice versa (for example GaN on Si). In these systems, in addition to the usual strain effects one has to consider also polarity effects, which arise from the fact that at the interface different types of atoms are brought together and their dangling bonds contain different amounts of electronic charge which do not add up to the proper value for the formation of covalent bonds. It is possible that the surfactant layer plays an important role in reducing polarity problems, as well as modifying the energetics and suppressing strain effects, as it does in the previous two categories.
Since the substrate and the thin film are rather different for systems in this category, we include in the same category a number of odd systems which involve the presence of insulating buffer layers (like CaF<sub>2</sub> in the growth of Ge on Si substrates) and the growth of metal layers (like In on Si, In on GaAs, Sn on Ge and Ag on Si) or silicide layers (like CoSi<sub>2</sub> on Si), as well as the growth of technologically important semiconductors on insulators (like GaN on sapphire). All these cases are important for device applications and it is interesting to study how surfactants can be employed to improve the quality of growth. However, the complexity of the structures involved and the several different species of atoms present make the detailed analysis of systems in this category rather difficult. We tabulate the experimental information for systems in this category in Table 4.
## III. Theoretical Models
Following the experimental observations, a number of theoretical models have been studied in order to understand and explain the surfactant effect in semiconductor growth. We divide these models into three categories. In the first category we place models that have concentrated on the microscopic aspects, attempting to understand the atomic-scale features and processes involved in this phenomenon; models of this type typically employ sophisticated quantum-mechanical calculations of the total energy in order to evaluate the relative importance of the various structures, and in order to determine the relevant activation energies involved in the kinetic processes. In the second category we place models that are more concerned with the macroscopic aspects of the surfactant effect, such as island morphologies and distributions as well as the effects of strain, without attempting to explain the details of the atomistic processes, although these may be taken into account in a heuristic manner. Finally, in the third category we place models that attempt to combine both aspects, that is, they try to use realistic descriptions of the atomistic processes as the basis for macroscopic models. Evidently, this last type of model is the most desirable, but also the most difficult to construct. We review models in these three categories in turn.
### 1. Microscopic models
Initial attempts at understanding the microscopic aspects of surfactant mediated growth focused on the thermodynamic aspects, that is, strived to justify why it is reasonable to expect the surfactant layer to float on top of the growing film. This was investigated by calculating energy differences between configurations with the surfactant layer buried below layers of the newly deposited atoms and configurations with the surfactant on top of the newly deposited atoms. These energetic comparisons, based on first-principles calculations employing density functional theory, established that there exists a strong thermodynamic incentive for keeping the surfactant layer on top of the growing film.
In a similar vein, calculations by Kaxiras established that certain surfactants are more likely to lead to layer-by-layer growth than others, while a simplistic analysis of their chemical nature would not reveal such differences. These calculations were done using different types of surfactants on the same substrate and considering the relative energies of the various surface reconstructions induced by the surfactant layer. Specifically, three different types of group-V atoms were considered as surfactants, P, As and Sb, on the Si(111) surface for Ge or Si growth. The similar chemical nature of the three elements would argue for very similar surfactant behavior. However, the total energy calculations indicated that the three elements give significantly different reconstructions, some of which would lead to relatively easy floating of the surfactant layer, while others would hamper this process. This is related to the manner in which, in a given reconstruction, the surfactant atoms are bonded to the substrate. For instance, in the energetically preferred reconstructions, the P and As atoms are bonded to the substrate with three strong covalent bonds each, while Sb atoms are bonded to the substrate with only one strong covalent bond per adsorbate atom. Based on these comparisons, Kaxiras proposed that Sb would work well on this substrate as a surfactant, while P and As would not, a fact that was subsequently verified experimentally.
A study by Nakamura et al. of the same system (the Si(111) substrate with Sb as surfactant for Ge growth), based on the discrete variational approach and the cluster method to model the surface, reported that the presence of the surfactant strengthens the bonds between the Ge atoms on the surface. This effect, it was argued, leads to nucleation of stress-relieving dislocations at the surface, which is beneficial for layered growth of defect-free films. In this analysis neither the defects themselves nor any type of exchange and nucleation mechanisms was considered explicitly. Moreover, the bond-strengthening arguments are of a chemical nature; they may be useful in a local description of chemical stability, but they shed little light on the dynamics of atoms during surfactant mediated growth. The chemical nature of Sb bonding on the Si(100) and the Ge(100) substrates was also investigated by Jenkins and Srivastava. In this work, first-principles density functional theory calculations were employed to determine the structure and the nature of bonding of Sb dimers in the $`(2\times 1)`$ reconstruction, which, though interesting in itself, provides little direct insight into the process of surfactant mediated growth.
The theoretical models considered so far addressed the problem of surfactant mediated growth by considering what happens at the microscopic level, but for entire monolayers, that is, by imposing the periodicity of the reconstructed surface in the presence of the surfactant. Subsequent atomistic models studied the equilibrium configurations and dynamics of individual adsorbed atoms or dimers, which is more appropriate for understanding the nature of growth on the surfactant-covered surface. A first example was an attempt by Yu et al. to justify how newly deposited Ge atoms on the As-covered Si(100) surface exchange places with the surfactant atoms in order to become embedded below the surfactant layer. In these calculations, based on total energy comparisons obtained from density functional theory, the metastable and stable positions of Ge dimers are established, indicating possible paths through which the newly deposited Ge atoms can be incorporated under the surfactant As layer. However, no explicit pathways were determined, and therefore no activation energies that might be relevant to growth kinetics were established. Furthermore, even though specific mechanisms for growth of needle-like islands by appending Ge dimers to a seed are discussed, the lack of calculated energy barriers for the exchange process and a large number of unproven assumptions involved in the proposed mechanisms mean they are of little help in understanding the surfactant effect. For instance, the work of Yu et al. claims that Ge dimers are actually situated between As dimer rows instead of on top of the As dimer rows, while in their proposed island growth mechanism they employ configurations that involve Ge dimers on top of the As dimer rows.
A similar type of analysis by Ohno, also using density functional theory calculations of the total energy, was reported for Si-on-Si(100) homoepitaxy using As as surfactant. Ways of incorporating the newly deposited Si dimers below the As layer were considered by studying stable and metastable positions, and the rebonding that follows the exchange process. Again, though, actual exchange pathways and the corresponding activation energies relevant to growth kinetics were not considered. In this study it is shown explicitly how exchange of isolated Si dimers on top of the As layer is not exothermic, while the presence of two Si dimers leads to an energetically preferred configuration after exchange. This fact is used to argue that the Si dimer interactions are responsible for both their mutual repulsion and the initiation of the exchange. It appears, however, that these two effects, that is, the strong repulsion of ad-dimers and the requirement of their presence at neighboring sites for the initiation of exchange, would be incompatible as far as growth is concerned. Both the work of Yu et al. and the work of Ohno deal with mechanisms in which the basic unit involved in the exchange process is a deposited dimer, as was originally suggested by Tromp and Reuter.
An interesting microscopic study of surfactant mechanisms was reported by Kim et al. In this work, first-principles molecular dynamics simulations were employed to investigate the effect of Sb atoms at step edges on the Si(100) surface for Si homoepitaxy. This study examined the effect of Sb dimers on the step-edge barriers (also referred to as Schwoebel-Ehrlich barriers, for which we adopt here the acronym SEB, which is both descriptive and referential). These are extra barriers to adatom attachment to the step-edge when the adatom arrives from the upper terrace, compared to the barriers for diffusion on the flat terraces. The authors find that the presence of Sb at the step edge gives a significant SEB for the attachment of a single Si atom, but a much smaller SEB for attachment of a Si dimer by the push-over mechanism (in which the Si dimer at the upper terrace pushes the Sb dimer at the step edge over by one lattice constant, and thus becomes incorporated in the bulk). This relative suppression of the SEB for dimer attachment leads to layer-by-layer growth as opposed to 3D island growth, and consequently, Kim et al. argue, the presence of the surfactant Sb dimer at the step edge would lead to layered growth. This is an interesting suggestion, but it remains to be proven that it is the correct view for the system under consideration. Specifically, it is not clear whether a configuration with Sb dimers only at step edges of the Si(100) surface is stable. Typically, an entire monolayer is needed for the surfactant effect in similar systems, and the precise coverage is a crucial aspect of the effect. If the surfactant coverage is different from that assumed in the model of Kim et al., then the atomic processes at the step edge could be very different, leading to a different picture of the effect. Furthermore, kinetic Monte Carlo studies are required to establish that the calculated energy barriers can actually lead to the predicted mode of growth, since the density of Si adatoms (determined by the flux, the diffusion rate and the attachment-detachment rates) will also influence the growth process.
Another detailed study of activation energies for diffusion and exchange processes in surfactant mediated epitaxy was reported by Ko et al. This study was also based on first-principles calculations of the energetics and addressed Si epitaxy on Si(100) with As acting as surfactant. In this work it was established that the exchange of a Si adatom with a sublayer As site involves an energy barrier of 0.1 eV, which is considerably lower than the energy barrier for diffusion (of order 0.5 eV) or the energy barrier for dimer exchange (of order 1.0 eV) which had been invoked as a possible mechanism in earlier studies of the same system. This is a very interesting suggestion, but falls short of providing a complete picture of the surfactant effect. Specifically, it is not clear how a single exchange step of the type investigated in the study of Ko et al. can lead to a configuration that will nucleate the next layer of the crystal. This process may well involve additional important steps with different activation barriers, so that the barrier calculated, though important and interesting, may not correspond to the rate-determining step in the growth process. In fact, Ko et al. find that exchange of two individual Si atoms at neighboring sites leads to the formation of a protruding As dimer, which acts as a seed for further growth. This protruding As dimer binds additional Si adatoms and leads to the formation of a Si dimer, which eventually undergoes site exchange with a neighboring As dimer with an energy barrier of 1.1 eV. It would then appear that it is this last step that is the determining step in growth, which leads back to the dimer-exchange picture discussed earlier, albeit now with a more detailed picture of how this process may be initiated by the low-barrier exchange of single Si adatoms.
Two separate studies of growth on III-V surfaces addressed the surfactant effect in these systems. In the first study, by Miwa et al., the dimer exchange mechanism on III-V surfaces, using Te as the surfactant, was investigated using first-principles calculations. These authors find that InAs growth on GaAs(100) proceeds by complete dimer exchange between the In and Te layers on the As-terminated surface, while on the In-terminated surface the exchange between the Te layer and an overlayer of As is only partial. The second study, by Shiraishi and Ito, examined the equilibrium configurations of adatoms on the GaAs(100) surface with different As coverages, using first-principles total energy calculations. This study concluded that preadsorbed Ga atoms play a “self-surfactant” role by significantly influencing the adsorption energy of As dimers at various sites on the surface. In this analysis, energy barriers for diffusion and exchange mechanisms are not taken into account, and consequently the interpretation of actual growth processes is limited.
The most detailed study of actual atomistic mechanisms for diffusion and exchange was reported by Schroeder et al. This work examined the motion of Si adatoms on the As-covered Si(111) surface, using first-principles total-energy calculations. The authors report a very interesting pathway for exchange between the additional Si atom and an As surfactant atom with an energy barrier of only 0.27 eV. This is comparable to the diffusion barrier for the Si atom on top of the As layer, calculated to be 0.25 eV. The Si atom can undergo the reverse of the exchange process, and by so doing it can get on top of the As surfactant layer, a process that involves an energy barrier of 1.1 eV, according to the results of Schroeder et al. This leads to a rather complex sequence of events, with Si adatoms arriving at the surfactant-covered substrate, diffusing, exchanging, undergoing the reverse process and diffusing again, with the relevant energy barriers. The possibility of the reverse of the exchange process was first explicitly introduced in the work of Kandel and Kaxiras, where it was called “de-exchange”. We adopt this term in the following as more descriptive of the reverse of the exchange process, since this undoes the effect of an exchange step rather than repeating it, as the term “re-exchange” (used in the work of Schroeder et al.) might suggest. The de-exchange process was shown by Kandel and Kaxiras to be a crucial process in maintaining the layer-by-layer growth mode in the presence of the surfactant (see more details below). This de-exchange process had been found to have a higher activation energy (1.6 eV) than either exchange (0.8 eV) or diffusion (0.5 eV) by Kandel and Kaxiras, although this was established by considering the exchange or de-exchange of entire monolayers of newly deposited atoms on top of the surfactant layer. Schroeder et al. show that the same energy ordering is valid also for individual adatoms on top of the surfactant layer, but the actual barriers for individual adatoms are lower (1.1 eV, 0.27 eV and 0.25 eV for de-exchange, exchange and diffusion, respectively). This establishes unequivocally the importance of the de-exchange process. What is lacking from the work of Schroeder et al. is a sequence of steps that can actually lead to the formation of the next layer of deposited material. Specifically, even after the single Si adatom has exchanged positions with a surfactant As atom, the system is not in a configuration from which the repeated sequence of similar steps could lead to the formation of a new layer. In the system studied by Schroeder et al. this process may be quite complicated, since the Si(111) surface consists of double layers, the formation of which may involve additional energy barriers which supersede the one determined for the exchange of a single adatom.
### 2. Macroscopic models
There is a debate in the literature on whether the suppression of 3D islanding by surfactants in heteroepitaxy is an equilibrium effect or a kinetic one. While most researchers in the field take the kinetic approach, there has been some effort to try and explain the surfactant effect using thermodynamic considerations. According to the thermodynamic approach, the equilibrium state of the newly deposited material in the presence of a surfactant layer is a smooth flat film. The underlying assumption behind kinetic models is that even with surfactants, the true equilibrium state of the system is that of 3D islands. The role of surfactants, in this case, is to induce layer-by-layer growth kinetically and to make the approach to equilibrium longer than realistic time scales. We will first give examples of the thermodynamic approach to the surfactant effect and then elaborate on some kinetic models.
Kern and Müller calculated the free energy of formation of a crystal of material A stretched to be coherent with a substrate of material B. They took into account effects of surface energy as well as surface stress and obtained the equilibrium shape of the crystal by minimizing its free energy with respect to its height and width. In their view, surfactants may reduce surface stress and surface energy, and hence lead to flatter islands and maybe even to wetting of the substrate by the deposited material (which happens when the equilibrium island height vanishes). They view such surfactant induced wetting as a transition from 3D growth to 2D layer-by-layer growth. This Kern-Müller criterion may serve as an indication of whether the effect of a certain surfactant is in the right direction to suppress 3D islanding. However, they do not consider the possibility of 3D growth when the deposited material wets the substrate (Stranski-Krastanov growth mode). They also ignore strain relaxation, which reduces the cost of 3D island formation. Thus, the question of whether the surfactant effect could be a purely thermodynamic one is left unanswered.
A different equilibrium argument was proposed by Eaglesham et al. These authors argue that surfactants change the surface energy anisotropy and this leads to the suppression of 3D islanding. They examine experimentally islands of Ge on Si(100) films with and without surfactants, and find that their equilibrium shape changes radically in the presence of surfactants and depends strongly on the specific surfactant used. For example, Sb as a surfactant favors (100) facets, whereas In favors (311) facets. They advance the idea that if the surfactant favors facets in the same orientation as the substrate, the equilibrium shape of the islands generated will be flat. This will lead to earlier coalescence of islands and will enhance layer-by-layer growth. The mechanism proposed by Eaglesham et al. may have a significant impact on the growth mode. But it cannot be the main explanation of the surfactant effect, since the equilibrium morphologies observed in their experiments include 3D islands. Therefore, in their explanation of the surfactant effect they supplement the equilibrium consideration with a kinetic one, i.e., the reduction of the diffusion length induced by surfactants.
It seems quite difficult to explain the surfactant effect relying on thermodynamics alone. For this reason most researchers in the field make the assumption that surfactants suppress 3D islanding kinetically. Markov’s work is an example of such a kinetic model. He developed an atomistic theory of nucleation in the presence of surfactants. The main results of this work are expressions for the nucleation rate and saturation density of islands. These quantities depend crucially on the difference between the energy barrier for adatom diffusion on top of the surfactant layer and the barrier for diffusion on a clean surface. If this difference is positive, surfactants decrease the diffusion length for adatoms and the saturation density of islands rises sharply. Such an anomalously high island density in the presence of surfactants has been seen experimentally in various systems, and is viewed by many researchers as the main mechanism by which surfactants change the growth mode of the film and suppress 3D islanding in heteroepitaxy. We will show in section IV that this mechanism does not explain the surfactant effect.
An entirely different approach was taken by Barabasi. Rather than looking at the kinetics of the system on the atomic length scale, he viewed the growing film on a much coarser scale. He represented the local height of the film and the local width of the surfactant layers as continuous fluctuating fields, in the spirit of the KPZ model of kinetic roughening. Based on the relevant symmetries of the system he wrote down a set of coupled differential equations which describe the dynamics of these two fields. The quantity of interest in this approach is the width of the film surface and its dependence on system size. Typically, such a theory would predict a rough surface where the width diverges with system size. Barabasi found that surfactants can induce a flat phase where the surface width does not diverge with system size. He associated this phase with a layer-by-layer growth mode. It is interesting to see that a theory on such a macroscopic length scale can capture effects which depend critically on processes that occur on an atomic scale. The drawback of this theory is that it is not clear what role the lattice mismatch and strain play in the kinetics of the system. Also, in the rough phase the model predicts a self-similar structure for the surface. Experimentally, however, the morphology of a surface with 3D islands is not self-similar, and it is not clear whether this continuum theory can describe the experimental morphologies. Barabasi and Kaxiras extended this model to include two different dynamical fields, one representing the surfactant layer, the other the surface film layer. This allowed an investigation of whether subsurface diffusion, which had been neglected in the previous model, could change the behavior. It was found that subsurface diffusion essentially always leads to roughening, and if it were operative in real systems it would prevent layer-by-layer growth.
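To give a flavor of this coarse-grained approach, the sketch below integrates the plain single-field KPZ equation in one dimension and measures the resulting interface width. It is only a schematic stand-in for Barabasi's coupled film-surfactant equations, whose explicit form is not reproduced here; the lattice size, coefficients and integration time are all illustrative choices.

```python
import math, random

# 1D KPZ equation, dh/dt = nu*d2h/dx2 + (lam/2)*(dh/dx)^2 + noise,
# integrated with a simple Euler scheme on a periodic lattice (dx = 1).
L, nu, lam, D, dt, steps = 256, 1.0, 1.0, 0.1, 0.05, 5000
h = [0.0] * L
amp = math.sqrt(2.0 * D * dt)  # noise amplitude per time step
for _ in range(steps):
    new = [0.0] * L
    for i in range(L):
        hp, hm = h[(i + 1) % L], h[(i - 1) % L]
        lap = hp - 2.0 * h[i] + hm   # discrete Laplacian (surface relaxation)
        slope = 0.5 * (hp - hm)      # centered first derivative
        new[i] = h[i] + dt * (nu * lap + 0.5 * lam * slope ** 2) \
                 + amp * random.gauss(0.0, 1.0)
    h = new
mean = sum(h) / L
width = math.sqrt(sum((x - mean) ** 2 for x in h) / L)
print(f"interface width after {steps * dt:.0f} time units: {width:.2f}")
```

In the rough phase this width keeps growing and scales with the system size, whereas a surfactant-induced flat phase would correspond to a width that saturates independently of $`L`$.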
Most models of surfactant mediated epitaxial growth emphasize the significance of adatom diffusion for the determination of the growth mode. Another atomic process of importance is attachment and detachment of adatoms from island edges. In fact, in section IV we develop a model according to which surfactants suppress 3D islanding by passivating island edges, thus suppressing adatom detachment. It is therefore of interest to investigate the influence of island-edge passivation on surface morphology. Kaxiras first introduced the idea of island-edge passivation by the surfactant, and carried out kinetic Monte Carlo simulations on a very simple model to show that it can lead to morphologies compatible with experimental observations. Kandel also carried out a study of island-edge passivation effects. He investigated a simple model of submonolayer homoepitaxial growth in the framework of rate equation theory using the critical island approximation (only islands of more than $`i^{*}`$ atoms are stable, while smaller islands decay). The main result of this work is that the island density scales with flux, $`F`$, as $`F^\chi `$ with $`\chi =2i^{*}/(i^{*}+3)`$ when island edges are passivated, while $`\chi =i^{*}/(i^{*}+2)`$ without island-edge passivation. This conclusion is important because the exponent $`\chi `$ can be measured experimentally and one can learn from its value whether island-edge passivation is operative or not in the experimental system at hand. For example, a value of $`\chi >1`$ can occur only if the surfactants passivate island edges. Kandel’s theory relies on a somewhat oversimplified picture of submonolayer growth, and the conclusions are yet to be verified with a more rigorous theory or by detailed simulations of the growth process.
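The two scaling predictions are easy to tabulate. The following minimal sketch (our own illustration) evaluates both exponents for several values of the critical island size $`i^{*}`$:

```python
# Flux-scaling exponents for the island density N ~ F^chi from Kandel's
# rate-equation analysis, with and without island-edge passivation.
for i_star in (1, 2, 3, 5):
    chi_passivated = 2 * i_star / (i_star + 3)
    chi_bare = i_star / (i_star + 2)
    print(f"i* = {i_star}: chi = {chi_passivated:.2f} (passivated edges), "
          f"{chi_bare:.2f} (bare edges)")
# chi_bare stays below 1 for any i*, while chi_passivated exceeds 1 once
# i* > 3, so a measured exponent above 1 implies island-edge passivation.
```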
## IV. The Diffusion–De-exchange–Passivation Model
To our knowledge the only attempt to construct a comprehensive model that includes both the microscopic aspects of atomic motion and a realistic description of the large length-scale evolution of the surface morphology has been reported by the present authors. The work of Zhang and Lagally is another attempt to link the microscopics and the macroscopics of the effect of surfactants on thin film growth. However, their work discusses homoepitaxial growth of metals, a subject which is interesting in its own right, but is beyond the scope of the present review article.
### 1. General considerations
Before we embark on the construction of the theoretical model , we briefly review the relevant experimental information since we do not claim that a single model captures every type of surfactant mediated growth mode. We focus here on growth of elemental or compound semiconductors, in which a single species of atoms controls the diffusion and exchange (or de-exchange) processes, and the surfactant produces a chemically passivated surface. We take as the canonical case a group-IV substrate (examples are Si(100) or Si(111)) and a group-III or group-V surfactant (these are actually the systems that have been studied most extensively experimentally, as is evident from Tables 2-4).
It appears that a full monolayer of surfactant coverage is required for growth of high-quality semiconductor crystals. This is different from the case of surfactant effects in the growth of metals, where a small amount of surfactant (typically a few percent of a monolayer coverage) is sufficient. The most direct evidence on this issue was provided by the experiments of Wilk et al., who studied homoepitaxial growth of Si on Si(111) using Au as a surfactant. These authors report that the density of defects in the film correlates well with the surfactant coverage, with the minimum defect density corresponding to full monolayer coverage by the surfactant. This is a physically appealing result, and can be interpreted as evidence that the better the passivation of the surface by the surfactant, the more effective the surfactant is in promoting high-quality growth. In the following we will assume that full monolayer coverage of the substrate is the standard condition for successful surfactant mediated growth of semiconductors.
The model we will now describe assumes the surfactant effect is kinetic in nature. As with all other kinetic models of surfactant mediated growth, the underlying idea is that at equilibrium the heteroepitaxial system generates 3D islands even in the presence of surfactants. The role of surfactants is to make the approach to equilibrium very slow, so that 3D islands are not generated during the growth of the film. This means that surfactants kinetically suppress one or more microscopic processes, which are essential for the growth of 3D islands. The most important ingredient of any explanation of the surfactant effect is the identification of these processes. Almost all the explanations found in the literature identify the relevant process as adatom diffusion. The idea is that the energy barrier for exchange of an adatom with a surfactant atom, $`E_{ex}`$, is smaller than the barrier for diffusion of an adatom on top of the surfactant layer, $`E_d`$. An adatom therefore diffuses a very short distance before it exchanges, and after exchange it cannot diffuse (once it is underneath the surfactant layer). This suppressed diffusion mechanism explains the surfactant effect in the following way: the reduced diffusion length makes the density of islands nucleating on the surface very high. As a result, island coalescence occurs before any second-layer islands nucleate on top of existing first-layer islands. This is how, according to this mechanism, 3D islanding is suppressed.
As mentioned in section II, the support for this hypothesis comes from various experiments and particularly those of Voigtländer et al. In these experiments, they studied the effects of various surfactants on submonolayer homoepitaxial growth of Si on Si(111). The results were correlated with studies of the effect of the same surfactants on heteroepitaxial growth of Ge on Si(111). Voigtländer et al. found that generally there are two types of surfactants. Group-III and group-IV elements tend to significantly decrease the island density in submonolayer homoepitaxy and lead to 3D islanding in heteroepitaxy. On the other hand, group-V and group-VI elements drastically increase the island density in submonolayer homoepitaxy and suppress 3D islanding in heteroepitaxy. If one interprets an increase in the island density as an indication of suppression of diffusion, these results confirm the mechanism discussed above.
Despite the appealing nature of the suppressed diffusion hypothesis, we have proposed that it may not be the entire story. Our concerns arose from the fact that group-V and group-VI elements chemically passivate the surface more efficiently than group-III and group-IV elements. Intuitively, this should lead to faster diffusion on surfaces covered by the former elements. But the experimental results are consistent with the latter elements enhancing diffusion and the former ones suppressing it. To clarify this issue, we decided to examine more carefully the microscopic processes involved in the kinetics of surfactant mediated epitaxy. Our investigation led to an entirely different explanation of the influence of surfactants on epitaxial growth modes.
A schematic representation of the possible atomic processes is shown in Fig. 1. The simplest process is of course diffusion of adatoms on top of the surfactant layer \[Fig. 1(a)\]. A second important process is the exchange of adsorbed atoms with the surfactant atoms, so that the former can be buried under the surfactant layer and become part of the bulk. This process can take place either on a terrace or at a step \[Fig. 1(b)\]. From thermodynamic considerations, we must also consider the process by which atoms de-exchange and become adatoms which can diffuse on top of the surfactant layer \[Fig. 1(c)\]. Again, this process can take place on terraces or at surface steps. Finally, we have to consider separately the case of surfactants that cannot passivate step edges, in which case both the exchange \[Fig. 1(d)\] and de-exchange processes \[Fig. 1(e)\] will be different than at passivated steps, since they no longer involve actual exchange events between adatoms and surfactant atoms. We refer to our model as the Diffusion–De-Exchange–Passivation (DDP) model, since these are the three processes that determine the behavior in surfactant mediated epitaxy: diffusion is always present; de-exchange obviously implies also the presence of exchange; and passivation (always present on terraces) may or may not be present at island edges, but either its presence or its absence is a crucial element.
### 2. First-principles calculations
In order to evaluate the relative contributions of these processes and their influence on the growth mode, the corresponding activation energies must be calculated. This is a difficult task because very little is known about the atomic configurations involved. We therefore begin by considering two idealized processes that involve entire monolayers, discuss how the corresponding activation energies could be representative and relevant for growth mechanisms, and obtain their values from first-principles calculations.
The first process we consider is diffusion on a surface covered by a surfactant monolayer. The representative system we chose to study consists of a Si(111) substrate, covered by a bilayer of Ge, with Sb as the surfactant. In this case, it is known that the structure of the Sb layer is a chain geometry with a periodicity of $`(2\times 1)`$ as shown in Fig. 2. An additional Ge atom is then placed on top of the Sb layer and the energy is optimized for a fixed position of the Ge atom along the direction parallel to the Sb chains. All other atomic coordinates, including those of the Ge atom perpendicular to the Sb chain and vertical with respect to the surface, were allowed to relax in order to obtain the minimum-energy configuration. The energy and forces were computed in the framework of Density Functional Theory and the Local Density Approximation (DFT/LDA), a methodology that is known to provide accurate energetic comparisons for this type of system (see in particular the reviews by Kaxiras on the application of such calculations to semiconductor growth phenomena). By considering several positions of the extra Ge atom along the chain direction and calculating the corresponding total energy of the system, we obtained a measure of the activation energy for diffusion in this direction. We found that the activation energy for diffusion along this path is 0.5 eV.
We next considered a possible exchange mechanism in the same system, through which the newly deposited Ge atoms can interchange positions with the surfactant atoms and become buried under them. To this end, we modeled the system by a full monolayer of Ge deposited on top of the surfactant layer \[Fig. 3(a)\]. We studied a concerted exchange type of motion for the Ge-Sb interchange. In the final configuration \[Fig. 3(e)\] the Ge layer is below the Sb layer, and the system is now ready for the deposition of the next Ge layer on top of the surfactant. The middle configuration, Fig. 3(c), corresponds to a metastable structure, in which half of the newly deposited Ge layer has interchanged position with the Sb surfactant layer. The configurations between the initial and middle geometries and the middle and final geometries, Fig. 3(b) and Fig. 3(d) respectively, correspond to the saddle point geometries which determine the activation energy for the exchange. From our DFT/LDA calculations we found that the energy difference between structures 3(a) and 3(b) is 0.8 eV, and the energy difference between structures 3(c) and 3(d) is the same to within the accuracy of the results. Similarly, the energy difference between structures 3(c) and 3(b) and structures 3(e) and 3(d) is 1.6 eV. These two numbers correspond to the exchange activation energy \[0.8 eV, going from 3(a) to 3(c) through 3(b), or going from 3(c) to 3(e) through 3(d)\], and the de-exchange activation energy \[1.6 eV, going from 3(c) to 3(a) through 3(b), or going from 3(e) to 3(c) through 3(d)\], for this hypothetical process.
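The quoted energy differences fix the entire profile along this exchange path up to an overall constant. The short sketch below (a bookkeeping exercise, with energies in eV measured from configuration 3(a)) reconstructs it:

```python
# Energy profile along the concerted-exchange path of Fig. 3, reconstructed
# from the calculated differences: saddles b and d lie 0.8 eV above the
# minima that precede them and 1.6 eV above the minima that follow them.
E = {"a": 0.0}
E["b"] = E["a"] + 0.8   # first saddle: exchange barrier of 0.8 eV
E["c"] = E["b"] - 1.6   # intermediate minimum; de-exchange back over b costs 1.6 eV
E["d"] = E["c"] + 0.8   # second saddle: exchange barrier of 0.8 eV again
E["e"] = E["d"] - 1.6   # final state, with the surfactant back on top
for config in "abcde":
    print(config, f"{E[config]:+.1f} eV")
# a: +0.0, b: +0.8, c: -0.8, d: +0.0, e: -1.6
# Each half of the path is downhill by 0.8 eV overall, which is why the
# exchanged configuration is preferred and de-exchange costs twice the
# exchange barrier.
```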
We discuss next why these calculations give reasonable estimates for the activation energies involved in surfactant mediated growth. As far as the diffusion process is concerned, it is typical for semiconductor surfaces to exhibit anisotropic diffusion constants depending on the surface reconstruction, with the fast diffusion direction along channels of atoms that are bonded strongly among themselves. This is precisely the pathway we examined in Fig. 2. As far as the exchange process is concerned, it is believed that the only way in which atoms can exchange positions in the bulk is through a concerted exchange type of motion, as first proposed by Pandey for self-diffusion in bulk Si. This motion involves the breaking of the smallest possible number of covalent bonds during the exchange, which keeps the activation energy relatively low. In the case of bulk Si, the activation energy for concerted exchange is 4.5 eV. In the present case the activation energy is only 0.8 eV, because, unlike in bulk Si, the initial configuration \[Fig. 3(a)\] is not optimal, with the pentavalent Sb atoms four-fold coordinated (they would prefer three-fold coordination) and the newly deposited Ge atoms three-fold coordinated (they would prefer four-fold coordination). In the final configuration \[Fig. 3(e)\], which has lower energy than the initial one, all atoms are coordinated properly (three-fold for Sb, four-fold for Ge).
While we have argued that the above-described atomic processes are physically plausible, we have established neither their uniqueness nor their supremacy over other possible atomic motions. In fact, the calculations of Schroeder et al. discussed in section III.1 are much more realistic as far as the exchange of single adatoms with surfactant atoms on terraces is concerned. However, those calculations refer to a single event, and the formation of an additional substrate layer could (and probably does) involve additional steps in the exchange process due to the double-layer nature of the Si(111) substrate. In our calculations, the structure of the layer below the surfactant is compatible with the lower half of the substrate double layer, so that the process of exchange can proceed with very similar steps to complete the double-layer growth. In this sense, we feel that the barriers we obtained are not too far from realistic values. To keep our discussion general we will consider the two sets of energy barriers as corresponding to a range of physical systems: the first set is suggested by our results ($`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV, $`E_{deex}=1.6`$ eV) and the second by the results of Schroeder et al. ($`E_d=0.25`$ eV, $`E_{ex}=0.27`$ eV, $`E_{deex}=1.1`$ eV).
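A quick estimate shows how differently these two sets of barriers play out for an adatom on top of the surfactant. The sketch below (our own illustration) computes the mean number of hops an adatom makes before exchanging, given by the ratio of the Arrhenius times for exchange and diffusion, at a representative temperature of 623 K (the $`350^{\circ}`$C used as an example in the next subsection):

```python
import math

kB = 8.617e-5  # Boltzmann constant, eV/K
T = 623.0      # K, roughly 350 C

# Mean number of diffusion hops before an exchange event,
# n = exp[(E_ex - E_d)/kT]; a random walk of n hops covers about
# sqrt(n) lattice constants.
for label, E_d, E_ex in [("this work", 0.5, 0.8),
                         ("Schroeder et al.", 0.25, 0.27)]:
    n = math.exp((E_ex - E_d) / (kB * T))
    print(f"{label}: ~{n:.0f} hops, range ~{math.sqrt(n):.1f} lattice constants")
# this work: ~267 hops, range ~16 lattice constants
# Schroeder et al.: ~1 hop, range ~1 lattice constant
```

In either case the adatom exchanges after at most a modest excursion, which already hints that the bare diffusion length on top of the surfactant cannot be the whole story.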
### 3. De-exchange and generalized diffusion
It is clear that diffusion and exchange processes affect the morphology of the growing film. What about de-exchange processes? The energy barrier for de-exchange is significantly larger than the other two energy barriers. Are de-exchange events frequent enough to have any effect on the growth mode? Or, to be more quantitative, suppose an adatom exchanged with a surfactant atom; will it de-exchange before another adatom exchanges in its vicinity? To answer this question we assume that the time scale associated with a process with energy barrier $`E`$ is $`\nu ^{-1}\mathrm{exp}(E/kT)`$, where $`\nu =10^{13}`$ sec<sup>-1</sup> is the basic attempt rate, $`k`$ is the Boltzmann constant and $`T`$ is the temperature. Even at the fairly low temperature of $`350^{\circ}`$C and with the large de-exchange barrier of 1.6 eV, the time it takes an atom to de-exchange is only 0.9 seconds. The time it takes to grow a layer at a typical flux of 0.3 layers/minute is 200 seconds. Therefore an atom will de-exchange quite a few times before it interacts with additional atoms in the same layer. We conclude that de-exchange processes can influence the growth mode and should not be ignored.
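These numbers follow directly from the Arrhenius estimate, as the minimal check below shows (using only the values stated in the text):

```python
import math

kB = 8.617e-5   # Boltzmann constant, eV/K
nu = 1.0e13     # attempt rate, 1/s

def arrhenius_time(E, T):
    """Time scale of a thermally activated process with barrier E (eV) at T (K)."""
    return math.exp(E / (kB * T)) / nu

T = 350.0 + 273.15                 # 350 C in kelvin
t_deex = arrhenius_time(1.6, T)
print(f"de-exchange time: {t_deex:.2f} s")       # ~0.9 s, as quoted
layer_time = 60.0 / 0.3                          # 0.3 layers/minute -> 200 s
print(f"de-exchange events per deposited layer: ~{layer_time / t_deex:.0f}")
```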
The above discussion changes our view of diffusion in surfactant mediated epitaxy. The effect of diffusion cannot be simply understood by comparing $`E_d`$ with $`E_{ex}`$, because after an adatom has exchanged it may still continue to diffuse on top of the surfactant layer by de-exchanging with a surfactant atom. It is instructive to compare the effective diffusion constant, $`D_{eff}`$, which corresponds to this complex diffusion process, with the bare diffusion constant of an adatom on a surface (without surfactants), $`D=\nu a^2\mathrm{exp}(-E_d^{(b)}/kT)`$. Here $`a`$ is the lattice constant and $`E_d^{(b)}`$ is the energy barrier for bare diffusion.
To calculate $`D_{eff}`$, we consider the case $`E_{ex}>E_d`$ (a similar calculation can be done for the opposite case and yields an identical result). An effective diffusion hop consists of a de-exchange event followed by several microscopic diffusion hops and finally an exchange event. We calculate $`D_{eff}`$ from the expression $`D_{eff}=A^2/\tau _{eff}`$, where $`\tau _{eff}`$ is the average time it takes to carry out an effective diffusion hop. $`A`$ is the average distance an atom travels during such a hop, and obeys the relation $`A=a\sqrt{n}`$, where $`n`$ is the average number of microscopic diffusion hops the atom carries out between the de-exchange and exchange events. $`n`$ is easily calculated as the ratio between the time for an exchange event and the time for a microscopic diffusion hop. This leads to the result $`n=\mathrm{exp}[(E_{ex}-E_d)/kT]`$. $`\tau _{eff}`$ is the time it takes to carry out a de-exchange event followed by an exchange event. Therefore, $`\tau _{eff}=\nu ^{-1}[\mathrm{exp}(E_{deex}/kT)+\mathrm{exp}(E_{ex}/kT)]`$. The final expression for the effective diffusion constant is
$$D_{eff}=D\frac{\mathrm{exp}[(E_d^{(b)}-E_d)/kT]}{1+\mathrm{exp}[(E_{deex}-E_{ex})/kT]}.$$
(1)
Clearly, a comparison of $`E_d`$ with $`E_{ex}`$ does not tell us much about the magnitude of $`D_{eff}`$. Passivation of the surface by the surfactant implies $`E_d^{(b)}>E_d`$ and $`E_{deex}>E_{ex}`$. Thus both the numerator and the denominator are larger than 1, and the question is which one is larger. For the values of energy barriers calculated from first principles (both ours and those of Schroeder et al.) $`E_{deex}-E_{ex}\approx 0.8`$ eV. The denominator of Eq. (1) is therefore a very large number (between $`10^3`$ and $`10^6`$ for typical temperatures). The numerator is much smaller, and therefore $`D_{eff}\ll D`$, i.e. diffusion is suppressed. This is not necessarily the case for all surfactants. For example, for surfactants which are less efficient in passivating the surface, $`E_{deex}-E_{ex}`$ may be comparable to or even smaller than $`E_d^{(b)}-E_d`$, which would lead to diffusion enhancement. This may be the case for group-III and group-IV surfactants, which enhance diffusion according to experiments. It would be interesting to check this possibility with DFT/LDA calculations. An interesting conclusion is that effective diffusion can be suppressed by surfactants even if $`E_{ex}>E_d`$; i.e., a surfactant can enhance diffusion on top of the surfactant layer, and at the same time suppress effective diffusion, which takes into account de-exchange processes.
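Plugging numbers into Eq. (1) makes this concrete. The short sketch below evaluates the denominator and the full ratio for the two first-principles barrier sets quoted above; since $`E_d^{(b)}`$ is not quoted numerically in the text, the value used for it here is a placeholder assumption:

```python
import math

k_B = 8.617e-5  # Boltzmann constant, eV/K

def deff_over_d(E_d_bare, E_d, E_deex, E_ex, T_kelvin):
    """Ratio D_eff / D from Eq. (1); all energies in eV."""
    kT = k_B * T_kelvin
    return math.exp((E_d_bare - E_d) / kT) / (1.0 + math.exp((E_deex - E_ex) / kT))

T = 600.0 + 273.15
E_d_bare = 1.0  # hypothetical bare-diffusion barrier, NOT quoted in the text
for label, E_d, E_ex, E_deex in [("this work", 0.50, 0.80, 1.6),
                                 ("Schroeder et al.", 0.25, 0.27, 1.1)]:
    denom = 1.0 + math.exp((E_deex - E_ex) / (k_B * T))
    print("%-17s denominator ~ %.1e   D_eff/D ~ %.1e"
          % (label, denom, deff_over_d(E_d_bare, E_d, E_deex, E_ex, T)))
```

Both sets share $`E_{deex}-E_{ex}\approx 0.8`$ eV, so at 600°C the denominator comes out of order $`10^4`$ in either case, squarely inside the quoted range.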
### 4. Island-edge passivation
At this stage, it is tempting to claim that we have reached a much better understanding of the surfactant effect. However, as we show next, this is not so. In fact, a more careful analysis shows that suppression of diffusion has nothing to do with the explanation of the surfactant effect, and that two surfactants which lead to the same value of $`D_{eff}`$ may induce very different growth modes. This happens because different surfactants may vary drastically in their ability to passivate steps or island edges. We will see that the issue of island-edge passivation is crucially important to the morphology of the growing film and its surface. It is especially important for the ability of the surfactant to suppress 3D islanding in heteroepitaxy.
To understand the role of island edges in the determination of growth modes, we have to understand the reason for the formation of 3D islands in heteroepitaxial growth. They form because their presence facilitates strain relaxation in a much more efficient way than in flat layers. For example, Tersoff and Tromp calculated the elastic energy per unit volume, $`E_{el}`$, of a strained rectangular island of lateral dimensions $`s`$ and $`t`$ (measured in units of the lattice constant). They showed that
$$E_{el}\propto -\left(\frac{\mathrm{ln}s}{s}+\frac{\mathrm{ln}t}{t}\right).$$
(2)
The energy of a narrow island is thus smaller than the energy of a wide one. Therefore, after a monolayer-high island has grown beyond a certain width, it is beneficial to grow another layer on top of it rather than make it wider. The film then tends to grow in narrow and fairly tall 3D islands. The kinetic process which prevents the island from growing farther laterally is the detachment of atoms from the island edges. If such detachment processes are suppressed, the island will not reach its equilibrium shape. It will tend to be too wide and flat. It is quite obvious that surfactants which passivate island edges will also suppress detachment events. Hence, they may change the growth mode from 3D islanding to layer-by-layer growth. Suppression of diffusion may not be sufficient to suppress 3D islanding, since detachment of atoms from island edges may lead to islanding even with very little diffusion. Passivation of island edges, on the other hand, can change the growth mode even if diffusion is not enhanced.
We now use our knowledge of the chemical nature of different surfactants to speculate about their ability to passivate island edges: group-V atoms (especially As and Sb) should be effective in passivating steps and island edges on the (111) and (100) surfaces of tetravalent semiconductors such as Si and Ge. This is because group-V atoms prefer to have three-fold coordination, in which they form three strong covalent bonds with their neighbors using three of their valence electrons, while the other two valence electrons remain in a low-energy lone-pair state. This is precisely what is needed for passivation of both terrace and step geometries on the (111) and (100) surfaces of the diamond lattice, which are characterized by three-fold coordinated atoms. On the other hand, it is expected that elements with the same valence as the substrate, or noble metals, will not be effective in passivating step edges. In the case of the tetravalent semiconductors Si and Ge, for example, the elements Sn and Pb have the same valence, and while they can form full passivating layers on top of the substrate, they clearly cannot passivate the step geometries since they have exactly the same valence as the substrate atoms and hence can only form similar structures. Analogously, certain noble metals can form a passivating monolayer on the semiconductor surface, but their lack of strong covalent bonding cannot affect the step structure. We note that not all noble metals behave in a similar manner, with some of them forming complex structures in which they intermix with the surface atoms of the substrate (such as Ag on the Si(111) surface), in which case it is doubtful that they will exhibit good surfactant behavior.
### 5. Kinetic Monte Carlo simulations
We have given a plausibility argument that surfactants suppress 3D islanding in heteroepitaxy by limiting atom detachment from island edges and not by suppressing diffusion. The complexity of the growth process does not allow us to give a more rigorous argument. However, our ansatz can be tested quite easily by carrying out kinetic Monte Carlo (KMC) simulations of homoepitaxial and heteroepitaxial growth, in which all the relevant microscopic processes occur randomly with rates determined by the corresponding activation energies. Accordingly, we consider a system in which the processes examined above are operative, and the activation energies corresponding to them are the ones obtained from the DFT/LDA calculations for the hypothetical cases illustrated in Fig. 2 and Fig. 3.
For simplicity, our simulation was carried out on a cubic lattice. Atoms land on the surfactant covered surface with a flux of 0.3 layers/minute (a typical value of the flux in experiments), and diffuse on top of the surfactant. They can exchange with surfactant atoms and become buried underneath the surfactant layer. A buried atom can de-exchange with a surfactant atom and float on top of the surfactant layer again. This can happen provided the buried atom does not have lateral bonds with other atoms underneath the surfactant layer. If it is bonded laterally, we consider this atom as being part of an island edge. An atom attached to an island underneath the surfactant layer can detach from the island edge and float on top of the surfactant layer. This detachment process is of major importance, as discussed above. However, it involves breaking of lateral bonds between the detaching atom and the island edge. This will be taken into account in the activation barrier for detachment. Also, we did not allow simultaneous breaking of two or more lateral bonds, so an atom attached to an island edge by more than a single lateral bond cannot detach. A diffusing atom can attach to a step or an island edge. The activation barriers for attachment and detachment processes depend on whether the surfactant passivates steps and island edges or not. Barriers for detachment from an island edge also depend on whether the island is strained or not.
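To make the event bookkeeping concrete, the sketch below implements rejection-free (Gillespie/BKL) event selection for a stripped-down 1+1-dimensional version of this model, with the barriers quoted above. It is a toy illustration, not the simulation code used here (which is two-dimensional and includes attachment and detachment), but it already exposes the separation of time scales: the event counts show a buried atom de-exchanging and re-exchanging thousands of times between successive deposition events.

```python
import math, random
from collections import Counter

k_B, nu = 8.617e-5, 1e13
kT = k_B * (600.0 + 273.15)
rate = {p: nu * math.exp(-E / kT)          # Arrhenius rates in 1/s
        for p, E in {"diff": 0.5, "ex": 0.8, "deex": 1.6}.items()}
F = 0.3 / 60.0                             # deposition flux per site, 1/s

L = 50
height = [0] * L        # atoms buried below the surfactant, per column
floating = [L // 2]     # adatom positions on top of the surfactant layer
counts, t = Counter(), 0.0

def can_deexchange(i):
    """The topmost buried atom in column i resurfaces only if it has no
    lateral bonds, i.e. both neighbouring columns are strictly lower."""
    h = height[i]
    return h > 0 and height[(i - 1) % L] < h and height[(i + 1) % L] < h

random.seed(0)
for _ in range(300_000):
    deex = [i for i in range(L) if can_deexchange(i)]
    R = [F * L,                              # deposition anywhere
         rate["diff"] * len(floating),       # adatom hops on top
         rate["ex"] * len(floating),         # adatom gets buried
         rate["deex"] * len(deex)]           # buried atom resurfaces
    t += -math.log(random.random()) / sum(R)  # Gillespie time increment
    x = random.random() * sum(R)
    if x < R[0]:
        floating.append(random.randrange(L)); counts["dep"] += 1
    elif x < R[0] + R[1]:
        j = random.randrange(len(floating))
        floating[j] = (floating[j] + random.choice((-1, 1))) % L
        counts["diff"] += 1
    elif x < R[0] + R[1] + R[2]:
        height[floating.pop(random.randrange(len(floating)))] += 1
        counts["ex"] += 1
    else:
        i = random.choice(deex); height[i] -= 1
        floating.append(i); counts["deex"] += 1

print(counts, "simulated time: %.2g s" % t)
```

With these barriers the hop-to-exchange ratio comes out near $`\mathrm{exp}[(E_{ex}-E_d)/kT]\approx 54`$, reproducing the estimate for $`n`$ given earlier.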
We now describe the results of the simulations we performed under various conditions. In each case we give a detailed list of the activation energy values. First, we studied homoepitaxial growth, i.e., we considered a system without lattice mismatch and hence no strain effects. We investigated the influence of island-edge passivation (IEP) on surface morphology. To that end we carried out KMC simulations of a surface of size $`100\times 100`$ at a temperature of 600°C, and deposited on it 0.15 of a layer. The values of the activation energies used were $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. The energy barrier for detachment from an island edge (provided only one lateral bond is broken) was $`E_{det}=3`$ eV for a surfactant which passivates island edges and $`E_{det}=1.6`$ eV for a surfactant that does not. Typical morphologies are shown in Fig. 4(a) (with IEP) and Fig. 4(b) (without IEP). Evidently, there is a marked difference between the growth process with and without IEP. First, surfactants that passivate island edges lead to a significantly higher island density in submonolayer growth. Secondly, with IEP the island edges are very rough, while without passivation the islands are faceted. As discussed above, experimental results indicate that surfactants that suppress 3D islanding also increase the island density in homoepitaxy. The rough island edges are also observed experimentally. This gives strong support to the IEP ansatz. We note that the high density of islands induced by surfactants which passivate island edges is not a result of suppression of diffusion. It arises from the fact that adatoms can cross passivated island edges without attaching to them and then nucleate on a flat part of the surface, thus generating more islands.
In Figs. 4(c) and 4(d) we present results from similar simulations with another set of activation barriers: $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV, $`E_{deex}=1.1`$ eV, $`E_{det}=2.5`$ eV with IEP and $`E_{det}=1.6`$ eV without. Although these barriers are very different from the ones we used to produce Figs. 4(a) and 4(b), the morphologies are very similar. The change in the energy barriers has not influenced the island densities, nor has it affected the shape of the islands significantly. The only noticeable effect is that the shape of the islands in Fig. 4(c) is more fractal-like than the shape of the ones in Fig. 4(d). Note that in the first set of energy barriers $`E_d<E_{ex}`$, whereas in the second one the opposite holds. Thus the relation between $`E_d`$ and $`E_{ex}`$ does not have a significant influence on the growth morphology. Based on these results, we expect the energy barriers we calculated using DFT/LDA and those of Schroeder et al. (see above) to lead to similar growth morphologies; i.e. the difference between these two sets of activation barriers is irrelevant for the determination of the growth mode. The results of Fig. 4 support our generalized diffusion analysis. The two sets of energy barriers we used give the same value for the effective diffusion constant, $`D_{eff}`$, according to Eq. (1). This is the reason the final surface morphologies are so similar.
It is also important to check the temperature dependence of the growth process in the case of surfactants with IEP. To this end we performed KMC simulations of a $`300\times 300`$ lattice with activation energies identical to the ones used for Fig. 4(a). The resulting surface morphologies after deposition of 0.15 of a layer are shown in Fig. 5 for three different temperatures: 600°C, 700°C and 850°C. At all three temperatures IEP leads to a high density of compact islands with rough edges. The island density decreases with temperature. All of these observations are consistent with experimental results.
Finally, we consider the effects of strain in surfactant mediated heteroepitaxial growth. Strain is difficult to include in an atomistic calculation in a self-consistent manner. Here we will rely on the theory developed by Tersoff and Tromp for the elastic energy of strained islands on a substrate (see Eq. (2)). In analogy with this theory, we postulate that the effect of strain is to alter the strength of the bonds in elastically strained islands according to the expression of Eq. (2), which depends on the island size through the values of $`s`$ and $`t`$. The most important consequence of this effect is a change in the activation energy for detachment of atoms from island edges, $`E_{det}`$, since this process involves breaking of a lateral bond which is strongly affected by strain. $`E_{det}`$ will now depend on the island size. The other barriers, having to do with processes that take place on top of the surfactant (diffusion and exchange on terraces and island edges), will be unaffected to lowest order by the presence of strain. Therefore, the only important change in the kinetics comes from an island-size dependent detachment barrier, given by
$$E_{det}=ϵ_0+ϵ_1\left(\frac{\mathrm{ln}s}{s}+\frac{\mathrm{ln}t}{t}\right),$$
(3)
where $`ϵ_0=E_{deex}`$ for surfactants which passivate island edges, and $`ϵ_0=0`$ when there is no IEP.
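With the value $`ϵ_1=3.0`$ eV adopted below, the size dependence of Eq. (3) is easy to tabulate; the short sketch that follows (square islands with $`s=t`$ are our simplifying choice) shows the detachment barrier dropping as a strained island widens, which is what drives atoms off large islands:

```python
import math

def E_det(s, t, passivating, E_deex=1.6, eps1=3.0):
    """Island-size dependent detachment barrier of Eq. (3), in eV."""
    eps0 = E_deex if passivating else 0.0
    return eps0 + eps1 * (math.log(s) / s + math.log(t) / t)

for s in (3, 10, 30, 100):   # island width in lattice constants
    print("s = %3d:  with IEP %.2f eV,  without IEP %.2f eV"
          % (s, E_det(s, s, True), E_det(s, s, False)))
```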
In our simulations we take the value $`ϵ_1=3.0`$ eV, which is a reasonable number for the typical strength of bonds and the amount of strain involved in the systems of interest (4% for the case of Ge on Si). As was done in the case of homoepitaxial growth, we first study the effects of passivation of island edges on surface morphology. We simulated a system of size $`100\times 100`$ at a temperature of 300°C, and deposited on it one layer. The values of the activation energies used were $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. The results are shown in Figs. 6(a) and 6(b) for the cases with and without IEP, respectively. Different surface heights are represented by different colors; white is the initially flat substrate. The system without IEP shows clear 3D islanding \[Fig. 6(b)\]. Most of the substrate is exposed even after deposition of a full layer, and the deposited material is assembled in faceted tall (up to 7 layers high) islands. The surfactant which passivates island edges, on the other hand, suppressed 3D islanding completely \[Fig. 6(a)\]. Most of the surface is covered by one layer (the blue color), with some small one-layer-high islands and holes in it.
We have also checked the influence of changes in the values of the activation barriers, by repeating the simulations with $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV and $`E_{deex}=1.1`$ eV. The results are presented in Figs. 6(c) and 6(d) for the cases with and without IEP, respectively. The system without IEP does not show any change. In the case with IEP the densities of islands and holes decreased and their sizes increased accordingly. But the growth mode remained layer-by-layer, and 3D islanding was entirely suppressed. These results together with the results on homoepitaxy demonstrate convincingly that island-edge passivation, and not suppression of diffusion, is responsible for the surfactant effect.
To study the effect of temperature on the growth mode in heteroepitaxial growth with IEP, we simulated a system of size $`300\times 300`$, with the activation barriers $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. The resulting morphologies at the temperatures of 300°C, 350°C, 400°C and 450°C are shown in Fig. 7. In the first two cases, growth is essentially indistinguishable from the case of homoepitaxy, with a high density of small islands. However, at $`T=400`$°C, despite the small rise of only 50°C, a dramatically different growth mode is evident, with a large number of tall 3D islands and a substantial amount of the substrate left uncovered. This trend is even more evident at the higher temperature of 450°C. We also carried out simulations of heteroepitaxial growth on vicinal surfaces, with exactly the same parameters as those of Fig. 7, but starting from a system with atomic steps present on the substrate. Fig. 8 shows the results of these KMC simulations for the same temperatures as in Fig. 7. Again, the surfactant suppressed 3D islanding at low temperatures, but not at high temperatures. This is precisely the type of abrupt transition from layer-by-layer growth at low temperature to 3D island growth at higher temperature observed experimentally for strained heteroepitaxial systems, such as Ge/Si with Sb as a surfactant.
## V. Discussion
We have provided a critical review of the literature on surfactant mediated semiconductor epitaxy with emphasis on comparisons between experimental observations and model calculations. Our main goal was to arrive at a consistent explanation of the mechanism by which surfactants suppress 3D islanding in heteroepitaxial growth.
There is a vast number of experimental articles on the subject, and we gathered most of them in tables according to the relevant combination of deposit-surfactant-substrate materials. The most important message one can take from these experimental studies is that in semiconductor epitaxy surfactants can be divided into two categories. In the first category we have surfactants which lead to an anomalously high island density in submonolayer homoepitaxy, and also suppress 3D islanding in heteroepitaxy. The second category consists of surfactants which lead to step flow growth in homoepitaxy and are inefficient in suppressing 3D islanding in heteroepitaxy.
Explanations of the surfactant effect have focused on the relation between the activation energy for adatom diffusion on top of the surfactant layer and the barrier for exchange of an adatom with a surfactant atom. The rationale was that surfactants of the first category suppress 3D islanding and increase the island density because they suppress diffusion. Suppression of diffusion was associated with relatively easy exchange processes. Surfactants of the second category, on the other hand, are thought to enhance diffusion, and exchange processes were expected to be relatively difficult.
Several first-principles calculations of the diffusion and exchange barriers have been carried out for various systems. In a typical calculation, specific paths for diffusion and exchange were proposed and total energies of the system in relaxed configurations along these paths were calculated. This allows fairly accurate estimates of the relevant energy barriers. Different studies arrived at different conclusions about the barriers mainly because the paths proposed were different. Thus the main deficiency of these microscopic calculations is their inability to predict the correct kinetic path for the process under consideration.
We proposed a new scenario for the explanation of the surfactant effect. According to our ansatz, neither the relation between diffusion and exchange nor suppression of diffusion are relevant for the explanation of the surfactant effect. Instead, we argued that the efficiency of a surfactant is determined by its ability to passivate island edges. Surfactants which passivate island edges also lead to an anomalously high density of islands and suppress 3D islanding. We supplied ample evidence for this scenario. The most convincing evidence comes from kinetic Monte Carlo simulations of the growth process and from the comparison of the results with experimental observations. Using realistic activation energies we showed that a surfactant that suppresses diffusion, but does not passivate island edges, does not suppress 3D islanding. It also does not lead to a very high density of islands. Moreover, the islands generated in the growth process mediated by such a surfactant are faceted and do not have the rough edges observed in experiments. By contrast, island-edge passivation does lead to suppression of 3D islanding and to islands of a shape consistent with the experimentally observed shape. The temperature dependence of the island density, as well as the abrupt transition from layer-by-layer to 3D growth as the temperature is raised, are predicted correctly by simulations with IEP.
The evidence we provided for the validity of our scenario, although convincing, is far from being a rigorous proof. In fact, it is based on a very simplified model, which fails to take into account various aspects of the experimental system that may be important. For example, the use of an isotropic square lattice is not appropriate for all cases (the substrates with diamond lattice and (111) orientation have a hexagonal surface lattice, whereas those with (100) orientation have a square lattice but exhibit strong anisotropy in the directions parallel and perpendicular to the surface dimers); nor do we account for the fact that in some cases the film grows in bilayers (as in (111) substrates) rather than in monolayers (as in (100) substrates). We do not properly treat the issue of the critical island size, which may be very large in Si homoepitaxy or in heteroepitaxy of Ge on Si. There is therefore room for further discussion and more detailed modeling of surfactant mediated thin film growth.
There are various unresolved issues in surfactant mediated epitaxial growth, which we have not discussed. Perhaps the most important among them is the issue of strain relaxation. Heteroepitaxial films grown in the layer-by-layer growth mode are initially highly strained. This strain energy must somehow relax after growth of a few layers. Indeed, dislocations appear in the film during surfactant mediated heteroepitaxy. In some cases these dislocations do not thread the film and hence do not harm its epitaxial quality, but in other cases they do. It is therefore very important to study strain relaxation in the films. Some experimental studies have been carried out, but their description is beyond the scope of the present article. To the best of our knowledge, there has not been any detailed theoretical work on the problem.
Another important issue is related to the fact that inevitably some of the surfactant layer gets trapped in the growing film. This leads to unintended doping if the surfactant is not isoelectronic with the deposited material. This could be beneficial, if high levels of doping are desired, or detrimental, if a film of high purity is desired. In any case, controlling the amount of the incorporated surfactant by carefully adjusting external conditions (such as flux rate, temperature, surface preparation) is highly desirable . A better understanding of the surfactant effect, along the lines proposed here for the DDP model, will probably go a long way toward controlling the electronic properties of the film, which are strongly influenced by surfactant incorporation, strain relaxation defects and surface morphology. Further research in these directions is necessary and essential before surfactant mediated growth can become useful in practical applications.
This work was supported by the Office of Naval Research Grant \# N00014-95-1-0350, and by THE ISRAELI SCIENCE FOUNDATION founded by The Israeli Academy of Sciences and Humanities. D.K. is the incumbent of the Ruth Epstein Recu Career Development Chair.
Figure Captions
Figure 1: Schematic illustration of important mechanisms in surfactant mediated growth on a substrate (represented by white circles) with a full monolayer surfactant coverage (represented by continuous shaded area): (a) diffusion on terraces and steps for surfactant that passivates step edges; (b) exchange at terraces and passivated steps; (c) de-exchange at terraces and passivated steps; (d) diffusion on terrace and exchange at non-passivated steps; (e) de-exchange at terrace and at non-passivated steps.
Figure 2: Representative surface diffusion pathway, top and side views. The dark circles represent the substrate atoms, the light circles the surfactant atoms. The smaller gray circle represents an extra atom deposited on top of the surfactant layer, at different positions. The geometries correspond to a Ge adatom on a Si(111) surface (the substrate) covered by a monolayer of Sb (the surfactant) in a ($`2\times 1`$) chain reconstruction.
Figure 3: Representative exchange pathway. The color scheme is the same as in Fig. 2. (a) Structure with one layer of newly deposited atoms on top of the surfactant layer. The geometries depicted in (b), (c), (d) are the intermediate structures during a concerted exchange that brings the surfactant layer on top of the newly deposited layer, shown as the final configuration in (e). Structure (c) is metastable, while structures (b) and (d) are saddle-point configurations. Solid lines linking the atoms correspond to covalent bonds, while dashed lines correspond to broken bonds. The geometries correspond to the same physical system as in Fig. 2.
Figure 4: Kinetic Monte Carlo simulations of surfactant mediated homoepitaxy in the DDP model, on a substrate of size $`100\times 100`$ at a temperature of 600°C. A total of 0.15 monolayer of new material has been deposited: (a) Simulations with IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV, $`E_{deex}=1.6`$ eV and $`E_{det}=3`$ eV. (b) Simulations without IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV, $`E_{deex}=1.6`$ eV and $`E_{det}=1.6`$ eV. (c) Simulations with IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV, $`E_{deex}=1.1`$ eV and $`E_{det}=2.5`$ eV. (d) Simulations without IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV, $`E_{deex}=1.1`$ eV and $`E_{det}=1.6`$ eV. IEP clearly increases the island density and significantly affects the island shape.
Figure 5: Kinetic Monte Carlo simulations of homoepitaxial surfactant mediated growth in the DDP model, with IEP on a substrate of size $`300\times 300`$. The activation energies were $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV, $`E_{deex}=1.6`$ eV and $`E_{det}=3`$ eV. A total of 0.15 monolayer of new material has been deposited at 600°C, 700°C and 850°C. The high density of small islands at low temperature is evident, as well as the decrease of the island density with increasing temperature.
Figure 6: Kinetic Monte Carlo simulations of surfactant mediated heteroepitaxy in the DDP model, on a substrate of size $`100\times 100`$ at a temperature of 300°C. A total of one monolayer of new material has been deposited. (a) Simulations with IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. (b) Simulations without IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. (c) Simulations with IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV, $`E_{deex}=1.1`$ eV. (d) Simulations without IEP, with the activation energies $`E_d=0.5`$ eV, $`E_{ex}=0.3`$ eV and $`E_{deex}=1.1`$ eV. Different colors indicate different surface heights. The surfactant which passivates island edges suppresses 3D islanding completely and induces layer-by-layer growth. Without IEP 3D islands form on the film. They reach a height of 7 layers after deposition of one layer of material.
Figure 7: Kinetic Monte Carlo simulations of heteroepitaxial surfactant mediated growth in the DDP model, on a substrate of size $`300\times 300`$ with IEP. The activation energies were $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. A total of one monolayer of new material has been deposited at 300°C, 350°C, 400°C and 450°C. The different colors indicate surface heights. The transition between layer-by-layer growth and 3D island growth takes place somewhere between 350°C and 400°C.
Figure 8: Kinetic Monte Carlo simulations of heteroepitaxial surfactant mediated growth in the DDP model, on a vicinal substrate of size $`300\times 300`$ with IEP. The activation energies were $`E_d=0.5`$ eV, $`E_{ex}=0.8`$ eV and $`E_{deex}=1.6`$ eV. A total of one monolayer of new material has been deposited at 300°C, 350°C, 400°C and 450°C. The different colors indicate surface heights.
# Vita: Friedrich Wilhelm Wiener
## 1. Introduction
In a recent note , we proved a multi-dimensional analogue of the following classical theorem of Harald Bohr . (For subsequent developments in the multi-dimensional theory, see .)
###### Theorem 1 (Bohr).
Suppose that a power series $`\sum _{k=0}^{\infty }c_kz^k`$ converges for $`z`$ in the unit disk, and $`|\sum _{k=0}^{\infty }c_kz^k|<1`$ when $`|z|<1`$. Then $`\sum _{k=0}^{\infty }|c_kz^k|<1`$ when $`|z|<1/3`$. Moreover, the radius $`1/3`$ is the best possible.
In one part of the proof, we adapted to higher dimensions an elegant argument that Bohr attributed to Wiener. Since Bohr mentioned this name in the same sentence with the names of Riesz and Schur, we assumed it to be the famous Norbert Wiener, and we added the initial “N” in our attribution. Our assumption was false. Lawrence Zalcman brought to our attention that Edmund Landau mentioned the name of one F. Wiener in connection with Bohr’s theorem \[9, §4\].
## 2. Wiener’s life
Having never heard of a mathematician F. Wiener, we investigated. We report here on what information we have discovered about the life and work of F. Wiener, hoping that his name may be preserved in mathematical history for another generation.
According to the curriculum vitae accompanying his dissertation, Friedrich Wilhelm Wiener was born in 1884 in Meseritz, then part of the Prussian province of Posen and now part of Poland. After completing high school (gymnasium), he pursued studies in Göttingen. After a year of compulsory military service in 1904–1905, he resumed studies in Berlin. He returned to Göttingen in 1909, the same year that Landau was called there as Minkowski’s successor. Wiener attended lectures of such famous mathematicians as Frobenius, Hilbert, Landau, Schottky, Schur, and Schwarz. He completed his doctoral dissertation under the supervision of Landau in 1911.
Wiener published one journal article in 1910, which is cited in standard books. After a promising beginning, he seems to have published nothing further, not even his dissertation. There is no evidence that Wiener was ever a member of the Deutsche Mathematiker-Vereinigung (DMV); no obituary notice for Wiener appeared in the DMV *Jahresbericht*. Although we do not know the circumstances of Wiener’s death, this must have occurred no later than 1921, as the index published that year to volumes 51–80 of *Mathematische Annalen* lists Wiener as deceased. We conjecture that Wiener may have been a casualty of the war.
## 3. Wiener’s work
The focus of Wiener’s mathematical work was to discover simple proofs of known theorems. Both of his papers have the word “elementary” in the title.
### 3.1. Hilbert’s inequality
Wiener’s 1910 paper concerns Hilbert’s double series theorem stating the boundedness in $`\ell _2`$ of the quadratic form $`\sum _{m=1}^{\infty }\sum _{n=1}^{\infty }x_mx_n/(m+n)`$.
###### Theorem 2 (Hilbert).
$$\left|\sum _{m=1}^{\infty }\sum _{n=1}^{\infty }\frac{x_mx_n}{m+n}\right|\le C\sum _{n=1}^{\infty }|x_n|^2.$$
Moreover, the inequality holds with $`C=\pi `$, and no smaller value of the constant $`C`$ will do.
Hilbert’s proof was first published in the dissertation of his student Hermann Weyl in 1908. The theorem attracted a great deal of attention, and numerous proofs and generalizations were published subsequently. The classical book by Hardy, Littlewood, and Pólya devotes a whole chapter to this inequality. At the time of Wiener’s work, it was not known that the sharp value of the constant $`C`$ is $`\pi `$: Schur proved this the following year.
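Though no substitute for a proof, the sharpness of the constant is easy to probe numerically; in this small sketch (assuming NumPy is available) the operator norm of the $`N\times N`$ truncation of the matrix $`1/(m+n)`$ stays below $`\pi `$ and creeps toward it very slowly as $`N`$ grows:

```python
import numpy as np

for N in (10, 100, 1000):
    idx = np.arange(1, N + 1)
    M = 1.0 / (idx[:, None] + idx[None, :])   # the matrix 1/(m+n), 1 <= m,n <= N
    # M is a symmetric positive (Gram) matrix, so its operator norm is its
    # largest eigenvalue; Hilbert's theorem bounds it by pi.
    print(N, np.linalg.eigvalsh(M)[-1], "  pi =", np.pi)
```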
What Wiener meant by an “elementary” proof of Hilbert’s inequality was a proof that used no integration and no function theory. His proof consists of the following elementary steps.
1. Reduce to the case that $`\{x_n\}_{n=1}^{\mathrm{}}`$ is a decreasing sequence of positive real numbers.
2. Group the terms in the inner sum into blocks whose terms have indices running between consecutive squares.
3. Apply the Cauchy-Schwarz inequality to both the inner sum and the outer sum.
4. Interchange the order of summation.
5. Invoke Cauchy’s condensation test for convergence of series.
### 3.2. Wiener’s dissertation
In his dissertation, Wiener addresses two questions in the theory of entire functions of one complex variable.
The first part of the dissertation concerns the minimum modulus of an entire function $`f`$. Let $`m(r)=\mathrm{min}\{|f(re^{i\theta })|:0\le \theta \le 2\pi \}`$. Since $`m(r)`$ is zero when $`f`$ has a zero of modulus $`r`$, the natural question to ask about a lower bound for $`m(r)`$ is whether $`m(r)`$ is frequently large: is there some reasonable comparison function $`c(r)`$ such that $`lim\; sup_{r\to \infty }m(r)/c(r)>0`$?
If $`f`$ is an entire function of finite order at most $`\rho `$, meaning that $`lim_{|z|\to \infty }|f(z)|e^{-|z|^{\rho +ϵ}}=0`$ for every positive $`ϵ`$, then Hadamard’s factorization theorem implies that $`lim\; sup_{r\to \infty }m(r)e^{r^{\rho +ϵ}}=\infty `$ for every positive $`ϵ`$. In other words, $`m(r)`$ cannot tend to zero too fast. This weak estimate cannot be improved in general. For example, the exponential function $`e^z`$ has order $`1`$ and $`m(r)=e^{-r}`$. On the other hand, if $`f`$ is a non-constant polynomial, then $`m(r)`$ tends to infinity like a power of $`r`$. The question arises of whether an entire function of sufficiently small order is enough like a polynomial that its minimum modulus must be unbounded.
In 1905, A. Wiman confirmed that the minimum modulus of every non-constant entire function of order $`\rho `$ strictly less than $`1/2`$ is indeed unbounded. Moreover, $`lim\; sup_{r\to \infty }m(r)e^{-r^{\rho -ϵ}}=\infty `$ when $`0<ϵ<\rho <1/2`$. The cutoff at $`1/2`$ is sharp, for the convergent infinite product $`\prod _{n=1}^{\infty }(1-\frac{z}{n^2})`$, which equals $`(\mathrm{sin}\pi \sqrt{z})/(\pi \sqrt{z})`$, has order $`1/2`$ and $`m(r)\le r^{-1/2}/\pi `$. (See \[5, Chapter 3\] for more about the minimum modulus of entire functions of small order.)
Wiener’s dissertation gives a new proof of Wiman’s theorem. The proof is elementary in the sense that it uses only arguments about series and products of real numbers; it avoids using theorems from function theory.
Wiener’s proof even supplements Wiman’s theorem by giving some information in the endpoint cases $`\rho =0`$ and $`\rho =1/2`$. Namely, Wiener shows that if $`f(z)=\prod _{n=1}^{\infty }(1-\frac{z}{a_n})`$, where $`\{a_n\}_{n=1}^{\infty }`$ is a sequence of non-zero complex numbers of increasing modulus, and if $`lim_{n\to \infty }n^2/|a_n|=0`$, then $`lim\; sup_{r\to \infty }m(r)r^{-k}=\infty `$ for every positive $`k`$. This result applies to all transcendental entire functions of order $`0`$ (for example, to $`\prod _{n=1}^{\infty }(1-\frac{z}{n^n})`$) and to some entire functions of order $`1/2`$ (for example, to $`\prod _{n=2}^{\infty }(1-\frac{z}{n^2\mathrm{log}n})`$).
The second part of Wiener’s dissertation is motivated by a theorem of Landau that generalizes Picard’s little theorem.
###### Theorem 3 (Landau).
There is a positive function $`R`$ such that every polynomial of the form $`a_0+z+a_2z^2+\cdots +a_nz^n`$ assumes at least one of the values $`0`$ and $`1`$ in the disk $`\{z:|z|\le R(a_0)\}`$. The function $`R`$ is independent of the degree $`n`$ and the higher coefficients $`a_2`$, …, $`a_n`$.
One might hope that a theorem about polynomials would have an elementary proof, which would then yield an elementary proof of Picard’s theorem. Wiener was able to find an elementary proof (using Rouché’s theorem, but nothing else from function theory) of Landau’s theorem under an additional hypothesis about the location of the zeroes of the polynomial. Namely, he assumed that the zeroes are located within the two equal acute angles determined by two lines intersecting at the origin. If the radian measure of the acute angle is $`\frac{1}{2}\pi -\beta `$, then one can take $`R(a_0)=28|a_0\mathrm{log}a_0|/\mathrm{sin}\beta `$. (The cases $`a_0=0`$ and $`a_0=1`$ are of no concern, because then the polynomial takes the value $`0`$ or $`1`$ at the origin.)
## 4. Acknowledgments
For assistance in this project of identifying and tracing F. Wiener, we thank Samuel J. Patterson (Georg-August-Universität Göttingen), Constance Reid, Heinrich Wefelscheid (Gerhard-Mercator-Universität Gesamthochschule Duisburg), and Lawrence A. Zalcman (Bar Ilan University). We are especially indebted to Professor Wefelscheid for locating and sending to us a copy of Wiener’s dissertation. We thank Heidemarie Wörmann Boas for help with German translation.
## 1 Model description
The initial state of the particle is given by a wave function $`\psi `$ associated with a preparation procedure. In operational terms, an ensemble of noninteracting particles, represented symbolically as $`\{E_0\}`$, is sent towards the barrier (one particle at a time) from the left with identical specifications. (In our calculations the initial state at $`t=0`$ is a minimum-uncertainty-product Gaussian centered at position $`x=20`$, momentum $`p=8`$ and spatial variance $`9/4`$, all quantities in atomic units. The potential barrier is a square barrier with “height” $`V_0=50`$ from $`x=80`$ to $`x=80+d`$ and the particle has mass $`m=1`$.)
Two particle detectors $`A`$ and $`B`$ are located on either side of a barrier potential, at $`x=a`$ and $`x=b`$. The first one is a passage detector that does not destroy the particle. The second one is an arrival detector. The translational degree of freedom of the particle, $`x`$, is the only one represented explicitly. A simplifying assumption is that only one of the two detectors is working at a time: when the particle is sent to the barrier only $`A`$ is active. Detection of the particle at $`A`$ disconnects this detector and activates the second, $`B`$.
### 1.1 First detector: Probability of detection
It can be proved using multichannel scattering theory techniques that the incident channel amplitude (corresponding to translational motion of the particle and the detector $`A`$ in its lower state) can be represented by an effective Schrödinger equation with a complex potential. (In “Event Enhanced Quantum Theory” the imaginary part of the potential is deduced rigorously from the Lindblad form of the Liouville equation that describes the coupling of the quantum system to a classical detector.) Here the effective Schrödinger equation is written as
$$H\psi (x,t)=-\frac{\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2}\psi (x,t)+[V(x)+\mathrm{\Lambda }(x)]\psi (x,t),$$
(3)
where $`V(x)`$ represents the potential barrier and the complex potential, $`\mathrm{\Lambda }`$, is written as
$$\mathrm{\Lambda }(x)=-\frac{i}{2}g^2(x;a),$$
(4)
with
$$g(x;a)=se^{-(x-a)^2/2\sigma ^2}.$$
(5)
The “intensity”, $`s`$, and “width”, $`\sigma `$, of the detector are adjustable parameters.
The norm of the incident channel,
$$N(t)=\int _{-\infty }^{\infty }\psi ^{\ast }(x,t)\psi (x,t)\,dx,$$
(6)
decreases, due to the presence of the detector, from the initial value $`N(0)=1`$. The total absorption $`1-N(\infty )`$ is the efficiency of the detector. It is not necessarily equal to one, so the ensemble of particles detected at $`A`$, $`\{E_a\}`$, is generally smaller than $`\{E_0\}`$. The normalized probability density for triggering the detector at time $`t_a`$ is proportional to the absorption rate $`-dN/dt|_{t_a}`$. Normalizing with respect to the ensemble $`\{E_a\}`$, it is given by
$$P(t_a|E_a)=\frac{dN(t_a)/dt_a}{\int _0^{\infty }(dN(t)/dt)\,dt}.$$
(7)
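For concreteness, a minimal split-operator propagation of Eqs. (3)-(5) can be sketched as follows. The packet, barrier and detector parameters are the values quoted in the text (the detector here is the wide, weak one of Fig. 2), while the box, grid, time step and the barrier width $`d=2`$ are our own choices, and the split-step Fourier method is just one standard way to integrate such an equation:

```python
import numpy as np

N_pts, dt, nsteps = 4096, 0.05, 400            # grid and time step (our choices)
x = np.linspace(-100.0, 300.0, N_pts, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N_pts, d=dx)  # momentum grid (hbar = 1)

m, p0, x0, var = 1.0, 8.0, 20.0, 9.0 / 4.0     # packet parameters from the text
psi = (2 * np.pi * var) ** -0.25 * np.exp(-(x - x0) ** 2 / (4 * var) + 1j * p0 * x)

V0, d = 50.0, 2.0                              # square barrier; d is illustrative
s_det, sigma, a = 1.0, 4.5, 50.0               # weak, wide detector A
g = s_det * np.exp(-(x - a) ** 2 / (2 * sigma ** 2))
W = np.where((x >= 80.0) & (x <= 80.0 + d), V0, 0.0) - 0.5j * g ** 2  # V + Lambda

expV = np.exp(-1j * W * dt / 2.0)              # half step in the complex potential
expT = np.exp(-1j * k ** 2 * dt / (2.0 * m))   # full kinetic step

N_t = np.empty(nsteps)
for n in range(nsteps):                        # Strang-split propagation
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
    N_t[n] = np.sum(np.abs(psi) ** 2) * dx     # surviving norm, Eq. (6)

P_ta = -np.gradient(N_t, dt)                   # absorption rate
P_ta /= P_ta.sum() * dt                        # normalised click density, Eq. (7)
print("absorbed fraction so far:", 1.0 - N_t[-1])
```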
### 1.2 Effect of detection on the wave functions
It will be assumed, within the spirit of a simplified phenomenological model, that after each detection (a “click”) the state of the particle can be effectively represented by a modified wavefunction. The true final states should be determined by a detailed analysis of the interaction between the system and the detector. Instead we shall later assume a physically motivated functional form. The ensemble of detected particles can be represented by a statistical mixture of such states. This is of course reminiscent of von Neumann’s projection postulate. However, an important feature of a bubble chamber track is that it does not look like a random walk. This cannot be explained with a naive projection localizing the particle position by means of position eigenstates, since a position eigenstate is equally likely to spread in any direction (erasing the memory of the state previous to the measurement), so there would be no tendency to ionize atoms in the direction of the dominant incident momentum. A modified projection postulate correcting this fact has been derived by Jadczyk and Blanchard. The wave function resulting from a click at time $`t_a`$ and consistent with track formation has a memory of the previous state and reflects also the detector properties. A simple expression satisfying these two conditions is
$$\psi _{t_a}(x)=\frac{g(x)\psi (x,t_a)}{[\int _{-\infty }^{\infty }g^2(x)|\psi (x,t_a)|^2\,dx]^{1/2}},$$
(8)
where $`\psi (x,t_a)`$ is the wave function evolved with the Schrödinger equation (3).
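Continuing the numerical sketch above, the collapse of Eq. (8) after a click is a one-line operation:

```python
import numpy as np

def click_state(psi, g, dx):
    """Eq. (8): weight the pre-click wave function by the detector profile
    g(x) and renormalize; the result keeps a memory of psi(x, t_a)."""
    phi = g * psi
    return phi / np.sqrt(np.sum(np.abs(phi) ** 2) * dx)
```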
To determine the effect of the detector we have examined the momentum average and its variance for the ensembles $`\{E_0\}`$ and $`\{E_a\}`$. Averages over $`\{E_a\}`$ require some care since they imply a double average: the first one (represented as $`Q`$) is a quantum mechanical average using each wave packet $`\psi _{t_a}`$; the second ($`D`$) is an average over the times of detection $`t_a`$ weighted by $`P(t_a|E_a)`$,
$$\langle p\rangle _{E_a}=DQ\langle p\rangle \equiv \int P(t_a|E_a)\,\langle \psi _{t_a}|\widehat{p}|\psi _{t_a}\rangle \,dt_a.$$
(9)
Since there are two types of average, different “variances” are possible. For the ensemble $`\{E_a\}`$ the important one is $`\mathrm{\Delta }_{DQ}^2\equiv DQ[\langle p^2\rangle -(DQ\langle p\rangle )^2]`$. (This is a variance computed over detected particles regardless of their detection time.) The average momentum is conserved well (especially by weak detectors) except for very narrow detector widths. For all detectors used in this work $`DQ\langle p\rangle \simeq \langle p\rangle _{E_0}`$ to better than 0.2$`\%`$. However the “momentum widths” $`\mathrm{\Delta }_{DQ}`$ (square root of the variance) may change drastically with respect to the momentum width $`\mathrm{\Delta }_p`$ of the original packet. Fig. 1 shows that wider detectors tend to keep the variance of the original state while narrow detectors give very large variances. Weak detectors (small $`s`$) conserve the variance better than strong detectors (large $`s`$). In summary, in our model weak and wide detectors are the best as far as conservation of the momentum distribution of the original packet is concerned. They are, however, not very efficient: for $`s=1`$ the absorbed norm goes from $`0.05`$ to $`0.6`$ over the $`\sigma `$-interval of Fig. 1. In comparison, the full norm is absorbed for $`s=10`$.
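The double average of Eq. (9) and the variance $`\mathrm{\Delta }_{DQ}^2`$ can be assembled from a set of collapsed states $`\psi _{t_a}`$ and their weights; in this sketch the function name and discretization conventions are our own:

```python
import numpy as np

def dq_momentum_stats(states, weights, k):
    """DQ average of p and the variance Delta_DQ^2 of Eq. (9): a quantum (Q)
    expectation in each collapsed state psi_{t_a}, then a detection-time (D)
    average with weights P(t_a|E_a) dt_a, assumed to sum to one."""
    p1 = p2 = 0.0
    for psi_ta, w in zip(states, weights):
        prob_k = np.abs(np.fft.fft(psi_ta)) ** 2
        prob_k /= prob_k.sum()
        p1 += w * np.sum(k * prob_k)        # Q average of p (p = k, hbar = 1)
        p2 += w * np.sum(k ** 2 * prob_k)   # Q average of p^2
    return p1, p2 - p1 ** 2                 # DQ<p> and Delta_DQ^2
```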
### 1.3 The second (arrival) detector
The second detector is assumed to be a perfect one, so that the full transmitted packet is absorbed. It is located at the right edge of the barrier. Let $`\{E_b\}`$ be the ensemble of particles that produce two clicks, at times $`t_a`$ and $`t_b`$, and $`P(E_b|t_a)`$ the transmittance of $`\psi _{t_a}`$, i.e., the fraction of the norm of $`\psi _{t_a}`$ that will be transmitted and therefore detected at $`B`$. The probability for being detected at $`B`$ conditioned on having been detected at $`A`$ is
$$P(E_b|E_a)=\int P(E_b|t_a)P(t_a|E_a)\,dt_a.$$
(10)
Instead of using an expression similar to (7), the distribution of arrival times $`t_b`$ for a perfect absorber can be approximated accurately by the (normalized) flux without absorber. In particular, for a wave packet $`\psi _{t_a}(x)`$, the detection probability density at $`t_b`$ in $`B`$, conditioned on having been detected at $`t_a`$ in $`A`$ and restricted to the ensemble $`\{E_b\}`$, is given by
$$P(t_b|E_b,t_a)=\frac{J_{t_a}(b,t_b)}{\int J_{t_a}(b,t_b)\,dt_b},$$
(11)
where $`J_{t_a}`$ is the flux for the state $`\psi _{t_a}`$. Using Bayes’ rule the joint probability density for detection at $`t_a`$ in $`A`$ and $`t_b`$ at $`B`$ restricted to the ensemble $`\{E_b\}`$ is given by
$$P(t_b,t_a|E_b)=\frac{P(t_b|E_b,t_a)P(E_b|t_a)P(t_a|E_a)}{\int P(E_b|t_a)P(t_a|E_a)\,dt_a}.$$
(12)
Finally, the probability distribution of $`\tau \equiv t_b-t_a`$ is computed, for the ensemble $`\{E_b\}`$, by integrating over $`t_b`$ and $`t_a`$ with the delta function $`\delta (t_b-t_a-\tau )`$,
$$P(\tau |E_b)=\frac{\int P(t_a+\tau |E_b,t_a)P(E_b|t_a)P(t_a|E_a)\,dt_a}{\int P(E_b|t_a)P(t_a|E_a)\,dt_a}.$$
(13)
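Numerically, Eq. (13) is a straightforward discretized composition of the pieces computed above; in this sketch all function and variable names are ours:

```python
import numpy as np

def traversal_distribution(t_a, P_ta, T_b, P_tb, tau):
    """Discretized Eq. (13).  t_a: detection-time grid; P_ta: P(t_a|E_a);
    T_b: transmittances P(E_b|t_a); P_tb(i, t): the flux-based density of
    Eq. (11) for the i-th collapsed packet; tau: grid of traversal times."""
    dt = t_a[1] - t_a[0]
    num = np.array([sum(P_tb(i, t_a[i] + u) * T_b[i] * P_ta[i] * dt
                        for i in range(len(t_a))) for u in tau])
    return num / (np.sum(T_b * P_ta) * dt)    # denominator of Eq. (13)
```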
We have calculated average traversal times $`\langle \tau \rangle _{E_b}\equiv \int P(\tau |E_b)\,\tau \,d\tau `$ versus the barrier width $`d`$ for two different weak detectors at $`a`$, both with $`s=1`$. One of them, $`A_1`$, is a wide one and conserves well the momentum distribution of $`\{E_0\}`$. The other one, $`A_2`$, is a narrow detector, and produces a momentum variance which is approximately ten times the initial one. The detector before the barrier is always put far from the barrier ($`a=50`$) to compare with the type of Gedanken experiment performed in ref. , so that the initial packet may pass through $`a`$ before interacting significantly with the barrier, and $`b`$ is located at the right barrier edge. Let $`\tau _1`$ and $`\tau _2`$ be the averages corresponding to using the two initial detectors $`A_1`$ and $`A_2`$. Figure 2 shows that the Hartman effect, i.e. the fact that the average traversal time does not grow with $`d`$ (actually it decreases slowly), can still be seen with $`A_1`$ until a critical barrier width $`d_c`$ where the “classical passage” of momenta “above” the barrier starts to dominate. When the narrow detector $`A_2`$ is used, the momentum variance is so large that the transmission is always dominated by fast momenta well above the barrier (we have independently checked this fact by calculating the ratio between the transmission due to energies above and below the barrier energy), so that the behaviour is the one expected classically, i.e., a linear growth of $`\tau _2`$ with $`d`$. Figure 2 also shows $`\tau _T`$, which is qualitatively very similar to $`\tau _1`$. The relation $`\tau _T<\tau _1`$ is due to the two different ways the average is performed in the initial detector and can also be understood on classical grounds. The right front of the incident packet is dominated by faster momenta and it contributes more particles to the transmitted ensemble. For computing the latter, no distinction is made at $`a`$ between particles to be transmitted or not. (The effect grows with $`d`$ until it saturates when the transmission is purely above the barrier.) Note that $`\tau _T`$ could be negative, while the times defined in the present work are always, by construction, strictly positive. The “displacement” of the curve $`\tau _1`$ with respect to $`\tau _T`$ (note the difference in the value of the critical barrier width) is due to the slight difference in the momentum variances.
In summary, a two-detector measurement of a particle traversal time has been modelled. Passage detectors conserving the initial wave packet momentum distribution still show the Hartman effect.
Support by Gobierno Autónomo de Canarias (Spain) (Grant PI2/95) and by Ministerio de Educación y Ciencia (Spain) (PB 93-0578) is acknowledged. J.P. Palao acknowledges an FPI fellowship from Ministerio de Educación y Ciencia.
References
1. L. A. MacColl, Phys. Rev. 40 (1932) 621.
2. M. Büttiker and R. Landauer, Phys. Rev. Lett. 49 (1982) 1739.
3. E. H. Hauge and J. A. Stovneng, Rev. Mod. Phys. 61 (1989) 917; M. Büttiker in Electronic Properties of Multilayers and Low-Dimensional Semiconductor structures, ed. by Chamberlain J. M. et al (Plenum Press, New York, 1990) p. 297; R. Landauer, Ber. Bunsenges. Phys. Chem. 95 (1991) 404; C. R. Leavens and G. C. Aers, in Scanning Tunneling Microscopy and Related Techniques, ed. by R. J. Behm, N. García and H. Rohrer (Kluwer, Dordrecht, 1990); V. S. Olkhovsky and E. Recami, Phys. Rep. 214 (1992) 339; R. Landauer and T. Martin, Rev. Mod. Phys. 66 (1994) 217; J. T. Cushing, Quantum Mechanics (The University of Chicago Press, Chicago, 1994).
4. D. Sokolovski and J. N. L. Connor, Phys. Rev. A 44 (1991) 1500.
5. S. Brouard, R. Sala and J. G. Muga, Phys. Rev. A 49 (1994) 4312.
6. J. G. Muga, S. Brouard and R. Sala, Phys. Lett. A 167 (1992) 24.
7. V. Delgado, S. Brouard and J. G. Muga, Solid State Commun. 94 (1995) 979.
8. J. G. Muga, S. Brouard and D. Macías, Annals of Physics (NY) 240 (1995) 351.
9. V. Delgado and J. G. Muga, Annals of Physics (NY) 248 (1996) 122.
10. S. Brouard and J. G. Muga, Phys. Rev. A 54 (1996) 3055.
11. Ph. Balcou and L. Dutriaux, Phys. Rev. Lett. 78 (1997) 851.
12. C. R. Leavens, Solid State Commun. 85 (1993) 115; 89 (1993) 37.
13. A. Jadczyk, Prog. Theor. Phys. 93 (1995) 631; Ph. Blanchard and A. Jadczyk, Ann. d. Phys. 4 (1995) 583; Ph. Blanchard and A. Jadczyk, Helv. Phys. Acta 69 (1996) 613.
14. The propagation of the wave packets is performed with the discretization algorithm described by S. E. Koonin in Computational Physics (Benjamin, Menlo Park, CA, 1985).
15. J. R. Taylor, Scattering Theory (John Wiley, New York, 1972).
16. L. E. Ballentine, Found. Phys. 20 (1990) 1329; Phys. Rev. A 43 (1991) 9.
17. J. G. Muga and R. D. Levine, Mol. Phys. 67 (1989) 1209.
18. J. G. Muga and R. D. Levine, Mol. Phys. 67 (1989) 1225.
19. J. G. Muga, S. Brouard and R. F. Snider, Phys. Rev. A 46 (1992) 6075.
Figure Captions
Figure 1. Square root of the momentum variance after detection, $`\mathrm{\Delta }_{DQ}`$, for $`s=1`$ (solid line) and $`s=10`$ (dashed line). The dashed-dotted line is the reference value of the momentum variance for the original ensemble $`\{E_0\}`$.
Figure 2. Average traversal times versus barrier width $`d`$ evaluated for (a) $`s=1`$, $`\sigma =4.5`$ (dashed line); (b) $`s=1`$, $`\sigma =0.2`$ (dashed-dotted line). The average time $`\tau _T`$ is also represented (solid line).
# Delocalisation transition of a rough adsorption-reaction interface
## Abstract
We introduce a new kinetic interface model suitable for simulating adsorption-reaction processes which take place preferentially at surface defects such as steps and vacancies. As the average interface velocity is taken to zero, the self-affine interface with Kardar-Parisi-Zhang-like scaling behaviour undergoes a delocalization transition with critical exponents that fall into a novel universality class. As the critical point is approached, the interface becomes a multi-valued, multiply connected self-similar fractal set. The scaling behaviour and critical exponents of the relevant correlation functions are determined from Monte Carlo simulations and scaling arguments.
PACS Numbers: 68.35.Ct, 82.65.Jv, 05.70.Ln, 68.35.Rh
Kinetically roughened interfaces display a rich phenomenology, have deep connections with fields as diverse as self-organized criticality, spin-glasses and complex pattern formation, and lend themselves to modelling various systems with practical applications, ranging from heterogeneous catalysis to geomorphology. The huge amount of numerical and analytical effort that has recently been invested in them has revealed that they obey universal scaling relations, which fall into one of a few universality classes. In this letter we would like to present a kinetic interface model which exhibits an anisotropic to isotropic phase transition with novel scaling behaviour at the delocalisation critical point.
Reaction fronts formed by $`A+B\to \varnothing `$ reactions in heterogeneous systems where the reaction takes place on a two-dimensional substrate are often confined to a narrow “reactive zone”, especially if the reactants are either initially segregated or become segregated due to reaction kinetics. Our present model is motivated by recent findings of high reaction rates and strong bonding at surface defects like steps and vacancies in studies of heterogeneous catalysis, a burgeoning new field in surface science.
We consider an idealised surface with only one step, terminating a terrace made up of $`A`$ particles (Fig.1). The surface is exposed to two kinds of incoming particles, $`A`$ and $`B`$, which are allowed to adsorb at first contact, and only on sites adjacent to the step, which we will call “interface sites.” The adsorption of $`A`$ particles makes the interface advance. The adsorbing $`B`$ particles, on the other hand, immediately react with an $`A`$ neighbor to form a product which leaves the surface. This eats into the step, making the interface recede. We investigate the effect of changing the rate of injection of the two reactants. We do not allow any reactions to take place with the substrate atoms. We assume, for simplicity, that the temperature is low enough so that no surface restructuring occurs; the bonding to the interface sites is sufficiently strong for diffusion along the interface to be prohibited. The kinetics is therefore driven by the adsorption and reaction steps and not by the transport of the reactants.
The model is defined on an infinite strip of width $`L`$, on which we impose periodic boundary conditions. The interface is initially a perfectly straight line located at $`h=0`$. The system is driven weakly so that at any instant only one particle of either $`A`$ or $`B`$ type, with probabilities $`p_A`$ or $`p_B=1-p_A`$, impinges on the interface. As the interface moves with a mean velocity equal to $`ϵ\equiv p_A-0.5`$, it roughens, and becomes multiply connected, shedding “islands” or “lakes” in its wake. As $`|ϵ|\to 0`$, the growth direction is completely delocalised, the width of the interfacial region keeps on growing indefinitely, and the interface breaks up into an isotropic fractal (see Fig. 2). For finite $`L`$, there may exist more than one spanning string of interface sites at $`ϵ=0`$; this phenomenon is similar to the formation of Liesegang bands. It is the purpose of this letter to understand the nature of this delocalization transition, to describe the crossover behaviour and to characterise the self-similar reactive region formed as $`|ϵ|\to 0`$.
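As a concrete illustration of these rules, the following minimal Monte Carlo sketch grows such an interface on a finite slab. The lattice bookkeeping (a brute-force rebuild of the perimeter at every step, and a slab of finite depth standing in for the semi-infinite terrace) is our own simplification, not the simulation code used for the results below:

```python
import random

L, depth, steps, p_A = 64, 16, 3000, 0.6   # epsilon = p_A - 0.5 = 0.1
occupied = {(x, h) for x in range(L) for h in range(-depth, 1)}  # A terrace

def neighbours(site):
    x, h = site
    return [((x - 1) % L, h), ((x + 1) % L, h), (x, h - 1), (x, h + 1)]

random.seed(1)
for _ in range(steps):
    # The O(N) perimeter rebuild keeps the sketch short; real code would
    # maintain these site lists incrementally.
    if random.random() < p_A:   # an A adsorbs on an empty interface site
        front = {n for s in occupied for n in neighbours(s) if n not in occupied}
        occupied.add(random.choice(list(front)))
    else:                       # a B reacts away an exposed A atom
        # the slab bottom (h = -depth) is excluded so the finite depth
        # never matters on the time scale of the run
        edge = [s for s in occupied if s[1] > -depth
                and any(n not in occupied for n in neighbours(s))]
        occupied.remove(random.choice(edge))

interface = [s for s in occupied if any(n not in occupied for n in neighbours(s))]
print("interface sites:", len(interface))
```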
For many interface problems with a well defined growth direction, such as the Eden model or the Edwards-Wilkinson model, the interface can be described by a single-valued, self-affine curve. The scaling behaviour of the interface width may be conveniently summarized by the scaling form
$$w\sim t^\beta g(\ell /t^{1/z}),$$
(1)
where $`g(u)\sim u^\chi `$ for $`u<1`$ and $`g(u)\to \mathrm{const}`$ for $`u\gg 1`$; $`z`$ is the dynamical critical exponent, and $`\beta =\chi /z`$. Kardar, Parisi and Zhang have found the values $`z=3/2`$, $`\chi =1/2`$ and $`\beta =1/3`$ for the stochastic differential equations describing Eden growth in $`d=1+1`$. This set of critical exponents characterizes a wide range of anisotropic growth phenomena with annealed noise, where the local velocity of the interface increases with the slope. In the limit that the velocity goes to zero or is independent of the slope, one gets the Edwards-Wilkinson model, which is exactly solvable in $`d=1+1`$ dimensions and falls into another universality class, characterized by $`z=2`$, $`\chi =1/2`$ and $`\beta =1/4`$.
Since our interface is typically multivalued, we define the width function within an interval of size $`\ell `$ as,
$$w(\ell )^2=\frac{1}{N(\ell )}\sum _{i=1}^{N(\ell )}(h_i-\overline{h(\ell )})^2$$
(2)
where $`h_i`$ is the height of the $`i`$th interfacial site, $`i=1,\dots ,N(\ell )`$, and $`\overline{h(\ell )}`$ the mean position of the interface. We find $`\beta =1/3`$ for early times. In the limit of $`p_A=0`$ or $`p_B=0`$, i.e., for $`|ϵ|=0.5`$, our model is equivalent to Eden growth, and indeed, along the singly connected part of the interface $`\chi =1/2`$ in the steady state. However, the effective roughness exponent ($`\chi _{\mathrm{eff}}`$) goes continuously to zero as $`|ϵ|\to 0`$, as shown in Fig. 3.
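In practice, Eq. (2) can be evaluated directly on the set of interface sites produced by a simulation like the sketch above; here is a minimal version (the window placement and averaging conventions are our choices):

```python
import numpy as np

def width(interface, ell, L):
    """Eq. (2): rms height of the (generally multivalued) interface over
    windows of horizontal size ell, averaged over window positions."""
    w2 = []
    for x0 in range(0, L, ell):
        hs = np.array([h for (x, h) in interface if x0 <= x < x0 + ell])
        if hs.size:
            w2.append(np.mean((hs - hs.mean()) ** 2))
    return np.sqrt(np.mean(w2))
```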
The reason for this is that as we decrease $`|ϵ|`$, the surface becomes highly convoluted, with islands (or lakes) of all sizes, and therefore increasingly multivalued. A competing length scale emerges in the system, the “thickness” of the interface, which we can measure by the variance of the height,
$$y(i)=\left(\frac{1}{n_i}\sum _{j=1}^{n_i}(h_{ij}-\overline{h_i})^2\right)^{1/2},$$
with $`n_i`$ being the number of interfacial sites $`\{h_{ij}\}`$ above any point $`i`$ along the horizontal axis. The thickness obeys a skewed-Gaussian distribution. The average $`y_L\equiv \langle y\rangle _L`$ and the second moment of this distribution both diverge as $`|ϵ|\to 0`$ with a critical exponent $`\nu =0.55\pm 0.05\simeq 1/2`$, as $`|ϵ|^{-\nu }`$. The scaling form for $`y_L`$ is,
$$y_L\sim t^{\stackrel{~}{\beta }}G(|ϵ|^{-\nu }/t^{1/\zeta })$$
(3)
where $`G(v)\sim v`$ for $`v<1`$ while for $`v>1`$, $`G(v)\to \mathrm{const}.`$, with $`\stackrel{~}{\beta }=1/2`$ and the (longitudinal) dynamical critical exponent $`\zeta `$ obeying $`\zeta =1/\stackrel{~}{\beta }`$. The thickness of just the singly connected part does not diverge as $`|ϵ|\to 0`$, so that the disconnected parts make up almost all of the interfacial region at the delocalization transition.
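The column-wise thickness is equally simple to extract; a small sketch consistent with the definition of $`y(i)`$ above:

```python
import numpy as np
from collections import defaultdict

def mean_thickness(interface):
    """y(i) for every column i and its lateral average y_L = <y>_L."""
    cols = defaultdict(list)
    for x, h in interface:
        cols[x].append(h)
    y = [np.std(hs) for hs in cols.values()]   # rms spread of each column
    return float(np.mean(y))                   # y_L
```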
Normalizing $`w(\ell )`$ in Eq. (2) by $`y_L^{1.1}`$ yields a collapse of the data for all $`ϵ`$, as can be seen from Fig. 4. We believe that the small deviation of the power of $`y_L`$ from unity is due to insufficient statistics for $`y_L`$, which converges extremely slowly as $`|ϵ|\to 0`$; this deviation may safely be neglected, and we conclude
$$w(\ell )\simeq y_L\{\begin{array}{cc}\ell ^{1/2}/y_L\hfill & \ell ^{1/2}\gg y_L\hfill \\ \mathrm{const}.\hfill & \ell ^{1/2}\ll y_L\text{.}\hfill \end{array}$$
(4)
Thus, $`\chi _{\mathrm{eff}}`$ goes to zero because the self-affine excursions of the interface are blurred by its thickness once $`y_L`$ becomes greater than $`\ell ^{1/2}`$.
If one considers coarse-grained width functions, obtained either by taking the average height at any given point or the maximum height for $`ϵ>0`$ (minimum for $`ϵ<0`$), one finds that they obey the scaling form (1) with KPZ exponents for windows $`\ell `$ up to the system size $`L`$.
To get the local scaling picture, we focus on a single spanning string in the interface and consider
$`C_x(l)`$ $`=`$ $`\langle (x(r+l)-x(r))^2\rangle ^{1/2},`$ (5)
$`C_h(l)`$ $`=`$ $`\langle (h(r+l)-h(r))^2\rangle ^{1/2}.`$ (6)
where both $`r`$ and $`l`$ are (“chemical”) lengths measured along the string, and $`x`$ and $`h`$ are Cartesian coordinates of the interface site. The scaling relations we have found from Fig. 5 for these quantities are given below. In the transient regime ($`l\gg t^{1/2}`$),
$$C_x\sim \{\begin{array}{cc}l\hfill & \text{ }y_L\ll l^{1/2}\hfill \\ l/t^\psi \hfill & \text{ }y_L\gg l^{1/2}\hfill \end{array}$$
(7)
and
$$C_h\sim t^\beta $$
(8)
where $`\psi =1/6`$.
Note that the horizontal projection of a segment of fixed “chemical length” decreases with $`t`$, as the surface crumples with time in the critical ($`|ϵ|\to 0`$) region. In the steady state ($`l\ll t^{1/2}`$),
$$C_x\sim \{\begin{array}{cc}l\hfill & (y_L\ll l^{1/2})\hfill \\ l^{\chi _{\mathrm{isot}}}\hfill & (y_L\gg l^{1/2})\hfill \end{array}$$
(9)
and
$$C_h\sim \{\begin{array}{cc}l^{1/2}\hfill & (y_L\ll l^{1/2})\hfill \\ l^{\chi _{\mathrm{isot}}}\hfill & (y_L\gg l^{1/2})\text{.}\hfill \end{array}$$
(10)
We see that for $`y_L\gg l^{1/2}`$, the interfacial region becomes isotropic, with $`C_x\sim C_h`$. This regime is characterized by an “isotropic” roughness exponent $`\chi _{\mathrm{isot}}=2/3`$. In the opposite limit, $`C_h\sim C_x^{1/2}`$, as expected for the self-affine Eden surface. This crossover is clearly seen in Fig. 6. In accordance with the above observations we propose the following scaling functions for the whole range of $`ϵ`$. Defining $`u=l/t^{1/z}`$ and $`s=y_L/l^\chi `$, we have,
$`C_h`$ $`\simeq `$ $`\alpha _h(ϵ)t^\beta f(u,s)`$ (11)
$`C_x`$ $`\simeq `$ $`\alpha _x(ϵ)t^{1/z}g(u,s)`$ (12)
where
$$f(u,s)\sim \{\begin{array}{cc}\mathrm{const}.\hfill & u\gg 1\hfill \\ u^\chi \hfill & u\ll 1,s\ll 1\hfill \\ us^{1/z}\hfill & u\ll 1,s\gg 1\hfill \end{array}$$
(13)
and
$$g(u,s)\sim \{\begin{array}{cc}u\hfill & s\ll 1\hfill \\ u^\chi /s\hfill & u\gg 1,s\gg 1\hfill \\ s^{-1/\stackrel{~}{\beta }z}\hfill & u\ll 1,s\gg 1.\hfill \end{array}$$
(14)
where $`\chi ,z,`$ and $`\beta `$ have their KPZ values. The amplitudes are defined as $`\alpha _x(ϵ)=(a+|ϵ|^{1/6})`$ and $`\alpha _h=1/(a^1+|ϵ|^{1/6})`$, where $`a`$ is some constant.
From these scaling forms and Eq.(3) we see that the new critical exponents obey the relationships,
$$\psi =\stackrel{~}{\beta }-\frac{1}{z^{\mathrm{KPZ}}}+\frac{\chi ^{\mathrm{KPZ}}}{z^{\mathrm{KPZ}}}$$
(15)
and
$$\chi ^{\mathrm{isot}}=\zeta \chi ^{\mathrm{KPZ}}/z^{\mathrm{KPZ}}$$
(16)
which yields,
$$\chi ^{\mathrm{isot}}=\beta ^{\mathrm{KPZ}}/\stackrel{~}{\beta }.$$
(17)
From Eqs. (9,10) we see that the graph dimension of a singly connected part in the isotropic regime is $`D_g=1/\chi ^{\mathrm{isot}}`$. Since in two dimensions $`D_g`$ is related to the roughness exponent via $`D_g=2-\chi `$, for $`\chi =1/2`$ we get $`\chi ^{\mathrm{isot}}=2/3`$. The scaling relation (17) yields $`\stackrel{~}{\beta }=1/2`$; from (15) and (16) it follows that $`\psi =1/6`$ and $`\zeta =2`$. The fractal dimension of the self-similar set of interface sites within a band of width $`y_L\sim |ϵ|^{-\nu }`$ is found from box counting to be $`D_I=1.85\pm 0.05`$ for length scales $`\ell <y_L^2`$.
In conclusion, we have presented an absorption-reaction model whose interface undergoes a delocalisation transition at the point where the mean velocity of the interface goes to zero. Although it might be conjectured that, as the velocity of the interface vanishes, the scaling behaviour should cross over to the Edwards-Wilkinson universality class, this is not the case here. It has previously been observed that the presence of overhangs, islands and inclusions may cause the small-scale structure of the interface to cross over from being self-affine to self-similar while the large scale behaviour remains self-affine. In the present model, this crossover is driven by a competing length scale, the thickness of the interface, which diverges at the critical point as $`|ϵ|^{-\nu }`$ with $`\nu \simeq 1/2`$. In the critical region, the interface is characterized by a new set of exponents $`\chi ^{\mathrm{isot}}=2/3`$, $`\stackrel{~}{\beta }=1/2`$, $`\zeta =2`$, $`\psi =1/6`$, and the fractal dimension $`D_I=1.85`$. Except for $`\nu `$ and $`D_I`$, these exponents may be obtained from the KPZ exponents via scaling relations.
It should finally be mentioned that the reaction region can be described by stochastic differential equations of the multiplicative noise type with a single component field, since in our model interface sites cannot be created spontaneously either in the bulk or in the vacant region. A field-theoretic renormalization group computation à la Tu, Grinstein and Muñoz is presently under way to obtain the values of the critical exponents.
Acknowledgements A.K. would like to thank the Gürsey Institute, where this work was initiated, for their hospitality and gratefully acknowledges support by the Scientific and Technical Research Council of Turkey (TÜBITAK) and by the U.S. National Science Foundation under Grant No. DMR-94-00334. A.E. thanks Mustansir Barma, Satya Majumdar, Deepak Dhar, and Sondan Durukanoğlu Feyiz, for a number of useful conversations and acknowledges partial support from the Turkish Academy of Sciences.
# Spin accumulation induced resistance in mesoscopic ferromagnet/superconductor junctions
## Abstract
We present a description of spin-polarized transport in mesoscopic ferromagnet-superconductor (F/S) systems, where the transport is diffusive, and the interfaces are transparent. It is shown that the spin reversal associated with Andreev reflection generates an excess spin density close to the F/S interface, which leads to a spin contact resistance. Expressions for the contact resistance are given for two terminal and four terminal geometries. In the latter the sign depends on the relative magnetization of the ferromagnetic electrodes.
Andreev reflection ($`AR`$) is the elementary process which enables electron transport across a normal metal-superconductor (N/S) interface, for energies below the superconducting energy gap $`\mathrm{\Delta }`$. The incoming electron with spin-up takes another electron with spin-down to enter the superconductor as a Cooper pair with zero spin. This corresponds to a reflection of a positively charged hole with a reversed spin direction.
The spin reversal has important consequences for the resistance of a ferromagnet-superconductor (F/S) interface. A suppression of the transmission coefficient has been reported in F/S multilayers, and in transparent ballistic F/S point contacts a reduction of the conductance has been predicted and observed. In F/S point contacts the Andreev reflection process is limited by the smaller of the numbers of available spin-up and spin-down conductance channels, which are unequal due to the separation of the spin bands in the ferromagnet caused by the exchange interaction. However, in most experiments the dimensions of the sample exceed the electron mean free path $`l_e`$, and therefore the electron transport cannot be described ballistically.
We present a description for spin-polarized transport in diffusive F/S systems, in the presence of Andreev reflection for temperatures and energies below $`\mathrm{\Delta }`$. We will show that the $`AR`$ process at the F/S interface causes a spin accumulation close to the interface, due to the different spin-up and spin-down conductivities $`\sigma _{}`$ and $`\sigma _{}`$ in the ferromagnet.
To a first approximation we will ignore the effects of phase coherence in the ferromagnet, which in the presence of a superconductor can give rise to the proximity effect. The spin-flip length ($`\lambda _{sf}^F`$) of the electrons in the ferromagnet, which is the distance an electron can diffuse before its spin direction is randomized, is much larger than the exchange interaction length. This means that all coherent correlations in the ferromagnet are expected to be lost beyond the exchange length, but the spin of the electron is still conserved.
Transport in a diffusive metallic ferromagnet is usually described in terms of its spin-dependent conductivities $`\sigma _{\uparrow ,\downarrow }=e^2N_{\uparrow ,\downarrow }D_{\uparrow ,\downarrow }`$, where $`N_{\uparrow ,\downarrow }`$ are the spin-up and spin-down densities of states at the Fermi energy and $`D_{\uparrow ,\downarrow }`$ the spin-up and spin-down diffusion constants . In a homogeneous 1D-ferromagnet the current carried by both spin directions ($`j_{\uparrow ,\downarrow }`$) is distributed according to their conductivities:
$$j_{\uparrow ,\downarrow }=\left(\frac{\sigma _{\uparrow ,\downarrow }}{e}\right)\frac{\partial \mu _{\uparrow ,\downarrow }}{\partial x}$$
(1)
where $`\mu _{\uparrow ,\downarrow }`$ are the electrochemical potentials of the spin-up and spin-down electrons, which are equal in a homogeneous system. In a non-homogeneous system, however, where current is injected into, or extracted from, a material with different spin-dependent conductivities, the electrochemical potentials can be unequal. This is a consequence of the finite spin-flip scattering time $`\tau _{sf}`$, which is usually considerably longer than the elastic scattering time $`\tau _e`$. The transport equations therefore have to be supplemented by:
$$D\frac{\partial ^2(\mu _{\uparrow }-\mu _{\downarrow })}{\partial x^2}=\frac{\mu _{\uparrow }-\mu _{\downarrow }}{\tau _{sf}}$$
(2)
where $`D=(\frac{N_{\uparrow }}{(N_{\uparrow }+N_{\downarrow })D_{\downarrow }}+\frac{N_{\downarrow }}{(N_{\uparrow }+N_{\downarrow })D_{\uparrow }})^{-1}`$ is the spin-averaged diffusion constant. Eq. 2 expresses the fact that the difference in $`\mu `$ decays over a length scale $`\lambda _{sf}=\sqrt{D\tau _{sf}}`$ , the spin-flip length.
To describe the $`F/S`$ system the role of the superconductor has to be incorporated. We assume that the interface resistance itself can be ignored, which is justified in metallic diffusive systems with transparent interfaces. The Andreev reflection can then be taken into account by the following boundary conditions at the F/S interface ($`x=0`$):
$`\mu _{\uparrow }|_{x=0}`$ $`=`$ $`-\mu _{\downarrow }|_{x=0}`$ (3)
$`j_{\uparrow }|_{x=0}`$ $`=`$ $`j_{\downarrow }|_{x=0}.`$ (4)
Here the electrochemical potential of the superconductor S is set to zero. Eq. 3 is a direct consequence of $`AR`$, where an excess of electrons with spin-up corresponds to an excess of holes and therefore a deficit of electrons with spin-down and vice versa. Eq. 4 arises due to the fact that the total Cooper pair spin in the superconductor is zero, so there can be no net spin current across the interface. Note that for Eqs. 3 and 4 to be valid, no spin-flip processes are assumed to occur at the interface as well as in the superconductor.
Eqs. 1 ,2, 3 and 4 now allow the calculation of the spatial dependence of the electrochemical potentials of both spin directions, which have the general forms:
$$\mu _{\uparrow }=A+Bx+\frac{C}{\sigma _{\uparrow }}e^{x/\lambda _{sf}^F}+\frac{D}{\sigma _{\uparrow }}e^{-x/\lambda _{sf}^F}$$
(5)
$$\mu _{\downarrow }=A+Bx-\frac{C}{\sigma _{\downarrow }}e^{x/\lambda _{sf}^F}-\frac{D}{\sigma _{\downarrow }}e^{-x/\lambda _{sf}^F}$$
(6)
where A, B, C and D are constants determined by the boundary conditions. For simplicity we first calculate the contact resistance at the F/S interface in a two-terminal configuration, denoted by $`V_{2T}`$ in Fig. 1(a), ignoring the presence of the second ferromagnetic electrode F2. In this configuration we find:
$$\mu _{\uparrow }|_{x=0}=-\mu _{\downarrow }|_{x=0}=\frac{\alpha _F\lambda _{sf}^FeI}{\sigma _F(1-\alpha _F^2)A}$$
(7)
where $`\alpha _F=(\sigma _{\uparrow }-\sigma _{\downarrow })/(\sigma _{\uparrow }+\sigma _{\downarrow })`$ is the spin polarization of the current in the bulk ferromagnet and $`\lambda _{sf}^F`$, $`\sigma _F=\sigma _{\uparrow }+\sigma _{\downarrow }`$, $`A`$ are the spin-flip length, the conductivity and the cross-sectional area of the ferromagnetic strip, respectively. Note that at the interface the electrochemical potentials are finite, despite the presence of the superconductor. This is illustrated in the left part of Fig. 2, where the spin-up and spin-down electrochemical potentials are plotted as a function of $`x`$ in units of $`\lambda _{sf}^F`$. Defining a contact resistance as $`R_{FS}=\mathrm{\Delta }\mu /eI`$ at the F/S interface yields:
$$R_{FS}=\frac{\alpha _F^2\lambda _{sf}^F}{\sigma _F(1-\alpha _F^2)A}.$$
(8)
Note that this is exactly half the resistance which would be measured in a two terminal geometry of one ferromagnetic electrode directly coupled to another ferromagnetic electrode with anti-parallel magnetization. One may therefore consider the F/S interface as an ’ideal’ domain wall (which does not change the spin direction), the superconductor acting as a magnetization mirror.
The presence of the contact resistance at a F/S boundary clearly brings out the difference between a superconductor and a normal conductor with infinite conductivity. In the latter case the boundary condition Eq. 3 at the interface is replaced by $`\mu _{\uparrow }=\mu _{\downarrow }=0`$, and no contact resistance would be generated. An interesting feature to be noticed from Fig. 2 is that the electrochemical potential of the minority spin at the interface is *negative*.
The second observation to be made here is that the excess charge density $`n_c\propto \mu _{\uparrow }+\mu _{\downarrow }`$ is zero, whereas the spin density $`n_s\propto \mu _{\uparrow }-\mu _{\downarrow }`$ has a maximum close to the interface. This is a direct consequence of the $`AR`$ process, where a net spin current is not allowed to enter the superconductor. Continuity of the spin currents at the F/S interface results in a spin accumulation in the ferromagnet, built up over a distance of the spin-flip length $`\lambda _{sf}^F`$.
The contact resistance is small ($`R_{FS}\simeq 20\mathrm{m\Omega }`$ for a nickel strip with a thickness of 20 nm, a width of 100 nm, a resistivity of $`80\times 10^{-9}\mathrm{\Omega }m`$, a spin polarization $`\alpha _F\simeq 0.2`$ and a spin-flip length of 20 nm) compared to the total resistance of the ferromagnetic strip F1.
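As a quick sanity check of Eq. (8), one can evaluate it directly for the parameters just quoted (a rough sketch; the variable names are ours, and the exact figure depends on the precise value assumed for $`\alpha _F`$):

```python
rho_F   = 80e-9            # resistivity of the nickel strip [Ohm m]
sigma_F = 1.0 / rho_F      # total conductivity sigma_up + sigma_down [S/m]
alpha_F = 0.2              # bulk spin polarization of the current (assumed)
lam_sf  = 20e-9            # spin-flip length lambda_sf^F [m]
A       = 20e-9 * 100e-9   # cross section, 20 nm x 100 nm [m^2]

# Eq. (8): R_FS = alpha^2 * lambda / (sigma * (1 - alpha^2) * A)
R_FS = alpha_F**2 * lam_sf / (sigma_F * (1.0 - alpha_F**2) * A)
print(f"R_FS = {1e3 * R_FS:.0f} mOhm")
```

This gives a few tens of $`\mathrm{m\Omega }`$, the same order of magnitude as the estimate quoted above.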
To identify the small contact resistance it is necessary to use a multi-terminal geometry. The four-terminal resistance is measured by sending a current through terminals 1 and 3, and measuring the voltage between terminals 2 and 4, as illustrated by $`V_{4T}`$ in Fig. 1(a). We assume that all current flows into the superconductor at $`x=0`$, which is reasonable when the thickness $`d_F`$ of the ferromagnetic strip is small compared to the width $`W`$ of the superconductor (cf. Fig. 1(b)). The width $`W`$ of the superconductor is assumed to be smaller than the spin-flip length of the ferromagnetic strip, $`W<\lambda _{sf}^F`$. Now the second ferromagnetic electrode (F2) has to be included in the calculation. This is done by requiring Eqs. 3 and 4 to include the spin currents of both ferromagnetic electrodes and requiring their spin-up and spin-down electrochemical potentials to be continuous. For the resistance in the four-terminal geometry of Fig. 1 the calculation yields:
$$R_{FS^{}}=\pm \frac{1}{2}\frac{\alpha _F^2\lambda _{sf}^F}{\sigma _F(1-\alpha _F^2)A}$$
(9)
where the sign refers to the parallel (+) or anti-parallel (-) relative orientation of the magnetization of the two ferromagnetic electrodes. In the case of anti-parallel arrangement one therefore has the rather unique situation that the voltage measured can be outside the range of source and drain contacts.
The above holds as long as the spin-flip length $`\lambda _{sf}^F`$ exceeds the width $`W`$ of the superconductor. The complication of this experiment is that it requires the width of the superconductor to be shorter than the spin-flip length in the ferromagnet, which is expected to be around 20 nm. To remedy this, we consider an alternative geometry.
The geometry (F/N/S) of Fig. 3 consists of two superconducting strips S, which are coupled by a thin layer of normal metal N, which has a larger spin-flip length ($`\lambda _{sf}^N`$) than the spin-flip length of the ferromagnet ($`\lambda _{sf}^F`$). On top of the normal metal two ferromagnetic strips F1 and F2 are placed. Current is injected by F1 through the normal metal, into the superconductor, whereas the voltage is detected by F2.
In the absence of a spin-polarized current $`I`$, the measured resistance $`R=V/I`$ will decay exponentially as $`R\simeq R_0\mathrm{exp}(-CL/d_N)`$, where $`R_0\simeq \rho _Nd_N/A_C`$ is the resistance of the normal metal between the superconductor and the current injector F1. Here $`\rho _N`$ is the resistivity of the normal metal, $`A_C`$ the contact area between F1 and S, $`d_N`$ the thickness of the normal metal, $`C`$ a constant of order unity and $`L`$ the distance between the two ferromagnetic strips. This resistance will therefore vanish in the regime $`L\gg d_N`$. However, in the presence of a spin-polarized current $`I`$ a spin density is created at the current injector F1, stretching out towards the voltage probe F2.
To calculate the signal at F2 we have to include the normal region. First, we assume that the superconductor in the region S$`^{^{}}`$ in Fig. 3 is absent. We take the non-equilibrium spin density to be uniform in the normal metal in the region under F1, which is allowed as the thickness of the normal metal is small compared to the spin-flip length ($`\lambda _{sf}^N`$) in the normal metal, $`d_N\ll \lambda _{sf}^N`$. The electrochemical potentials in the normal region between the two ferromagnetic strips are described by solutions of Eqs. 5 and 6, with the constants $`A=B=0`$. We then calculate the resistance in the relevant limit that the distance $`L`$ does not exceed the spin-flip length of the normal region, $`L\lesssim \lambda _{sf}^N`$. The expression for the resistance in this limit is given by:
$$R_{FNS}=\pm \frac{\alpha _F^2\lambda _{sf}^F}{2\sigma _FA(1-\alpha _F^2)+\frac{L\sigma _F^2A}{\sigma _N\lambda _{sf}^F}(1+\alpha _F)^2(1-\alpha _F)^2}$$
(10)
where $`\sigma _N`$ is the conductivity of the normal metal and $`L`$ is the distance between the two ferromagnetic electrodes. When $`L>\lambda _{sf}^N`$ the signal will decay exponentially.
Eq. 10 and Fig. 4 show that, even though no charge current flows in the N layer, nevertheless a signal is generated at the ferromagnetic electrode F2. In addition, Eq. 10 shows that the signal changes sign when the polarization of F2 is reversed. A reduction of the thickness of the N film will reduce the signal. This is a consequence of the fact that although no charge current flows, the spin-up and spin-down currents are non-zero, and their magnitude (and the associated voltage) depends on the resistance of the N layer.
The above analysis is based on classical assumptions, where the superconducting proximity effect has been ignored in the normal metal. However, it is known that a superconductor modifies the electronic states in the N layer, which would be the case when a superconductor is present in the region S$`^{^{}}`$ (cf. Fig. 3).
In this situation Eq. 10 would still hold, since the electrochemical potentials in the normal metal satisfy the boundary condition of Eq. 3. When the thickness $`d_N`$ of the normal layer is of the order of the superconducting coherence length $`\xi `$, a gap $`\mathrm{\Delta }_N`$ will develop in the normal metal. This will prohibit the opposite spin currents in the normal metal from flowing, and therefore no signal will be detected at the ferromagnetic electrode F2. One could control and eliminate the induced gap $`\mathrm{\Delta }_N`$ by applying a magnetic field parallel to the ferromagnetic electrodes.
To conclude, we have shown that the spin reversal associated with Andreev reflection in a diffusive ferromagnet-superconductor junction leads to a spin contact resistance. The contact resistance is due to an excess spin density, which exists close to the F/S interface, on a length scale of the spin-flip length in the ferromagnet. In a multi-terminal geometry the contact resistance can have a positive or negative sign, depending on the relative orientation of the ferromagnetic electrodes.
The authors wish to thank the Stichting Fundamenteel Onderzoek der Materie and the EU ESPRIT project no 23307 SPIDER for financial support.
# A remark about the Lie algebra of infinitesimal conformal transformations of the Euclidian space
## 1 Introduction
Interest has been shown in recent works about equivariant quantizations (see e.g. ) for some particular Lie subalgebras of vector fields over the $`n`$-dimensional Euclidian space $`\mathrm{I}\mathrm{R}^n`$. These are embeddings of $`\mathrm{𝑠𝑙}(n+1,\mathrm{I}\mathrm{R})`$ and $`\mathrm{𝑠𝑜}(p+1,q+1,\mathrm{I}\mathrm{R})`$, $`(p+q=n)`$, into the Lie algebra of polynomial vector fields over $`\mathrm{I}\mathrm{R}^n`$.
Because of its interpretation as the infinitesimal counterpart to the action of the group $`\mathrm{𝑆𝐿}(n+1,\mathrm{I}\mathrm{R})`$ on the $`n`$-dimensional real projective space, the embedding of $`\mathrm{𝑠𝑙}(n+1,\mathrm{I}\mathrm{R})`$ into these fields is called the *projective Lie algebra*. It is quite a well-known fact that it is maximal among polynomial vector fields; see for a proof.
In this paper, we focus our interest on the other subalgebras, related to infinitesimal conformal transformations.
We will in particular generalize and refine a theorem by V. I. Ogievetsky: in , a mix of projective and conformal vector fields is said to generate the Lie algebra of polynomial vector fields, and an explicit proof is given in dimension $`4`$.
When dimension $`n`$ is greater than 2, infinitesimal conformal transformations constitute a finite-dimensional Lie algebra, made up of polynomial vector fields. This is the considered embedding of $`\mathrm{𝑠𝑜}(p+1,q+1,\mathrm{I}\mathrm{R})`$ into vector fields. When $`n=2`$, this embedding is only a finite-dimensional subalgebra of the (infinite-dimensional) Lie algebra of conformal infinitesimal transformations.
In both cases, we prove that the subalgebra of such polynomial conformal transformations is maximal among polynomial vector fields. For the sake of completeness, we examine the special position of the introduced finite-dimensional subalgebras in dimension $`2`$.
We will denote by $`\mathrm{Vect}(\mathrm{I}\mathrm{R}^n)`$ the Lie algebra of vector fields over $`\mathrm{I}\mathrm{R}^n`$, and respectively by $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^n)`$, $`\mathrm{Vect}_i(\mathrm{I}\mathrm{R}^n)`$ and $`\mathrm{Vect}^i(\mathrm{I}\mathrm{R}^n)`$ the Lie algebra of polynomial vector fields over $`\mathrm{I}\mathrm{R}^n`$, the space of polynomial fields of degree not greater than $`i`$ and the space of homogeneous fields of degree $`i`$.
We will always assume that dimension $`n`$ is greater than $`1`$.
## 2 The algebra $`\mathrm{𝐶𝑜𝑛𝑓}(p,q)`$
Let us denote by $`\mathrm{𝐶𝑜𝑛𝑓}(p,q)`$ the Lie algebra of vector fields over $`\mathrm{I}\mathrm{R}^n`$, $`n=p+q`$, conformal with respect to the metric
$$g=\sum _{i=1}^na_i(\mathrm{𝑑𝑥}^i)^2,$$
where $`a_1=\mathrm{\dots }=a_p=1`$ and $`a_{p+1}=\mathrm{\dots }=a_n=-1`$.
These are the fields $`X`$ which satisfy
$$L_Xg=\alpha _Xg$$
for some smooth function $`\alpha _X`$. Denote by $`_i`$ both the partial derivative along the $`i`$-th axis and the $`i`$-th natural basis vector of $`\mathrm{I}\mathrm{R}^n`$. It is equivalent for the components of $`X=_iX^i_i`$ to satisfy
$$\{\begin{array}{ccc}\partial _i(a_jX^j)+\partial _j(a_iX^i)\hfill & =\hfill & 0\hfill \\ \partial _i(a_iX^i)-\partial _j(a_jX^j)\hfill & =\hfill & 0\hfill \end{array}$$
(1)
when $`ij`$.
For $`h\in \mathrm{I}\mathrm{R}^n`$, $`A\in \mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})`$ and $`\alpha \in \mathrm{I}\mathrm{R}^n`$, define
$$\begin{array}{ccc}h^{*}\hfill & =\hfill & \sum _ih^i\partial _i\hfill \\ A^{*}\hfill & =\hfill & \sum _{i,j}A_j^ix^j\partial _i\hfill \\ \alpha ^{*}\hfill & =\hfill & \alpha (x)\sum _ix^i\partial _i-\frac{1}{2}(\sum _ia_i(x^i)^2)\alpha ^{\mathrm{\sharp }}\hfill \end{array}$$
where $`\alpha ^{\mathrm{\sharp }}=\sum _ia_i\alpha _i\partial _i`$.
The space
$$(\mathrm{I}\mathrm{R}^n\oplus \mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})\oplus \mathrm{I}\mathrm{R1}\mathrm{I}\oplus \mathrm{I}\mathrm{R}^n)^{*}$$
is a Lie subalgebra of $`\mathrm{Vect}(\mathrm{I}\mathrm{R}^n)`$, isomorphic to $`\mathrm{𝑠𝑜}(p+1,q+1,\mathrm{I}\mathrm{R})`$. We will denote it by $`\mathrm{𝑠𝑜}(p+1,q+1)`$.
If $`n>2`$, it is known that $`\mathrm{𝐶𝑜𝑛𝑓}(p,q)=\mathrm{𝑠𝑜}(p+1,q+1)`$. This follows for instance from .
When $`p=2`$, $`q=0`$, condition $`(\text{1})`$ precisely means $`X^1+iX^2`$ is holomorphic in $`\mathrm{C}\mathrm{I}`$: $`\mathrm{𝐶𝑜𝑛𝑓}(2,0)`$ is then isomorphic to the Lie subalgebra
$$\{f\frac{\mathrm{d}}{\mathrm{d}z}:f\text{ is holomorphic in }\mathrm{C}\mathrm{I}\}$$
of $`\mathrm{Vect}(\mathrm{C}\mathrm{I})`$. Through this isomorphism, $`\mathrm{𝑠𝑜}(3,1)`$ is mapped onto $`\mathrm{Vect}_2(\mathrm{C}\mathrm{I})`$ and is isomorphic to $`\mathrm{𝑠𝑙}(2,\mathrm{C}\mathrm{I})`$ considered as a real Lie algebra.
When $`p=q=1`$, the classical change of coordinates
$$\{\begin{array}{ccc}x^1+x^2\hfill & =\hfill & 2u^1\hfill \\ x^1-x^2\hfill & =\hfill & 2u^2\hfill \end{array}$$
transforms $`\mathrm{𝐶𝑜𝑛𝑓}(1,1)`$ into the space of smooth vector fields
$$U^1(x^1)_1+U^2(x^2)_2.$$
In other words, $`\mathrm{𝐶𝑜𝑛𝑓}(1,1)`$ is isomorphic to $`\mathrm{Vect}(\mathrm{I}\mathrm{R})\times \mathrm{Vect}(\mathrm{I}\mathrm{R})`$. This time, $`\mathrm{𝑠𝑜}(2,2)`$ is mapped onto $`\mathrm{Vect}_2(\mathrm{I}\mathrm{R})^2`$.
The particular form of condition $`(\text{1})`$ implies the following result, which turns out to be immediate but useful.
###### Lemma 1
If $`\partial _1X,\mathrm{},\partial _nX\in \mathrm{𝐶𝑜𝑛𝑓}(p,q)`$ then $`X\in \mathrm{𝐶𝑜𝑛𝑓}(p,q)+\mathrm{Vect}_1(\mathrm{I}\mathrm{R}^n)`$.
Proof. The first order derivatives of $`X`$ are conformal if and only if the left-hand sides of $`(\text{1})`$ are constant. Subtracting a suitable linear vector field from $`X`$, one can force them to vanish and thus $`X`$ to be conformal.
## 3 Maximality of conformal algebras of polynomial vector fields
Denote by $`\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$ the Lie subalgebra of $`\mathrm{𝐶𝑜𝑛𝑓}(p,q)`$ made up of polynomial vector fields. Here is the announced result.
###### Theorem 2
$`\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$ is maximal in $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^n)`$.
The word maximal is to be taken in its usual algebraic sense: the only subalgebra of polynomial vector fields strictly larger than $`\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$ is $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^n)`$ itself.
In order to prove the theorem, it suffices to show that any larger subalgebra than $`\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$ contains every constant, linear or quadratic vector field, as implied by the following straightforward lemma.
###### Lemma 3
The smallest Lie subalgebra containing $`\mathrm{Vect}_2(\mathrm{I}\mathrm{R}^n)`$ is $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^n)`$.
It is of course not true when $`n=1`$, as $`\mathrm{Vect}_2(\mathrm{I}\mathrm{R})`$ is a subalgebra of $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R})`$.
Proof of theorem 2. Let $`X`$ be a polynomial vector field not in $`\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$. We may suppose that $`X\in \mathrm{Vect}^1(\mathrm{I}\mathrm{R}^n)\setminus (\mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})\oplus \mathrm{I}\mathrm{R1}\mathrm{I})^{*}`$. Indeed, if it is not the case, repeatedly applying lemma 1, we replace $`X`$ by some $`\partial _iX=[\partial _i,X]\notin \mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)`$ as long as possible and then subtract from the last built field its homogeneous parts of degree different from $`1`$.
Besides, as a module of $`\mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})`$, $`\mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})`$ is split into three irreducible components:
$$\mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})=\mathrm{I}\mathrm{R1}\mathrm{I}\oplus \mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})\oplus \mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})^+,$$
where $`\mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})^+`$ denotes the space of traceless self-conjugate matrices with respect to the metric. The linear map
$$A\in \mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})\mapsto A^{*}\in \mathrm{Vect}^1(\mathrm{I}\mathrm{R}^n)$$
being an isomorphism of Lie algebras, it follows that the iteration of brackets of fields of $`\mathrm{𝑠𝑜}(p,q,\mathrm{I}\mathrm{R})^{}`$ with $`X`$ allows to generate every linear vector field, i.e.
$$\mathrm{Vect}^1(\mathrm{I}\mathrm{R}^n)\subset >\mathrm{𝐶𝑜𝑛𝑓}_{}(p,q)\cup \{X\}<,$$
if $`>S<`$ denotes the smallest Lie algebra containing a set $`S`$ of vector fields.
We still need to show that the latter algebra contains every homogeneous quadratic vector field.
It is known that the action $`[A^{*},\cdot ]`$ of $`A\in \mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})`$ endows $`\mathrm{Vect}^2`$ with a structure of $`\mathrm{𝑔𝑙}(n,\mathrm{I}\mathrm{R})`$-module for which the subspace of divergence-free vector fields is irreducible. Now, writing a quadratic vector field
$$X=\frac{1}{n}\alpha ^{*}+(X-\frac{1}{n}\alpha ^{*})$$
with $`\alpha \in \mathrm{I}\mathrm{R}^n`$ such that $`\alpha (x)=\mathrm{div}X`$, one easily sees that
$$\mathrm{Vect}^2(\mathrm{I}\mathrm{R}^n)=(\mathrm{Vect}^2(\mathrm{I}\mathrm{R}^n)\cap \mathrm{𝐶𝑜𝑛𝑓}_{}(p,q))\oplus \mathrm{ker}\mathrm{div}.$$
Therefore, to generate $`\mathrm{Vect}^2`$, it suffices to find $`Y`$, $`Z`$ among the fields generated so far such that $`[Y,Z]\ne 0`$ and $`\mathrm{div}[Y,Z]=0`$. The fields $`Y=x^1\partial _1`$ and $`Z=(\mathrm{𝑑𝑥}^2)^{*}`$ do the job.
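This can be checked mechanically; the following sympy sketch (ours, assuming the definition of $`\alpha ^{*}`$ reconstructed above, in the Euclidean case $`a_1=a_2=1`$, $`n=2`$) verifies that $`[Y,Z]`$ is nonzero and divergence-free:

```python
# Symbolic check that Y = x^1 d_1 and Z = (dx^2)^* satisfy [Y,Z] != 0 and
# div [Y,Z] = 0.  The formula used for alpha^* is our reading of the one above.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]

Y = [x1, 0]                                  # Y = x^1 d_1
r2 = sp.Rational(1, 2) * (x1**2 + x2**2)
# alpha = dx^2: alpha(x) = x2 and alpha^sharp = d_2, so
Z = [x2 * x1, x2 * x2 - r2]                  # Z = x2 (x1 d_1 + x2 d_2) - r2 d_2

def bracket(U, V):
    """Lie bracket of vector fields: [U,V]^i = U(V^i) - V(U^i)."""
    return [sum(U[j] * sp.diff(V[i], X[j]) - V[j] * sp.diff(U[i], X[j])
                for j in range(2)) for i in range(2)]

W = [sp.expand(c) for c in bracket(Y, Z)]
div = sp.simplify(sum(sp.diff(W[i], X[i]) for i in range(2)))
print(W, div)    # -> [0, -x1**2] and 0: nonzero, divergence-free
```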
Hence the result.
As a consequence, $`\mathrm{𝑠𝑜}(p+1,q+1)`$ can fail to be maximal in $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^n)`$ only when $`n=2`$. It is easy to check that $`\mathrm{𝑠𝑜}(3,1)`$ is maximal in the Lie subalgebra
$$\{X\in \mathrm{Vect}_{}(\mathrm{I}\mathrm{R}^2):X^1+iX^2\text{ is a polynomial in }z=x+iy\}$$
and that $`\mathrm{𝑠𝑜}(2,2)`$ is maximal in two subalgebras isomorphic to
$$\mathrm{Vect}_{}(\mathrm{I}\mathrm{R})\times \mathrm{𝑠𝑙}(2,\mathrm{I}\mathrm{R}),$$
in turn maximal in a copy of $`\mathrm{Vect}_{}(\mathrm{I}\mathrm{R})\times \mathrm{Vect}_{}(\mathrm{I}\mathrm{R})`$.
Institut de mathématique, B37
Grande Traverse, 12
B-4000 Sart Tilman (Liège)
Belgium
mailto:f.boniver@ulg.ac.be
mailto:plecomte@ulg.ac.be
# Algorithm for normal random numbers
## Abstract
We propose a simple algorithm for generating normally distributed pseudo random numbers. The algorithm simulates $`N`$ molecules that exchange energy among themselves following a simple stochastic rule. We prove that the system is ergodic, and that a Maxwell like distribution that may be used as a source of normally distributed random deviates follows in the $`N\mathrm{}`$ limit. The algorithm passes various performance tests, including Monte Carlo simulation of a finite 2D Ising model using Wolff’s algorithm. It only requires four simple lines of computer code, and is approximately ten times faster than the Box-Muller algorithm.
Pseudo random number (PRN) generation is a subject of considerable current interest. Deterministic algorithms lead to undesirable correlations, and some of them have been shown to give rise to erroneous results for random walk simulations , Monte Carlo (MC) calculations , and growth models . Most of the interest has been focused on PRN’s with uniform distributions. Less attention has been paid to non-uniform PRN generation.
Sequences of random numbers with Gaussian probability distribution functions (pdf’s) are needed to simulate on computers the gaussian noise that is inherent to a wide variety of natural phenomena . Their usefulness transcends physics. For instance, numerical simulations of economic systems that make use of so-called geometric Brownian models (in which noise is multiplicative) also need a source of normally distributed PRN’s . There are several algorithms available for PRN’s with Gaussian pdf’s. Some, such as Box-Muller’s algorithm, require an input of uniform PRN’s, and their output often suffers from the pitfalls of the latter . Robustness is therefore a relevant issue. In addition, Box-Muller’s algorithm is slow and can consequently consume significant fractions of computer simulation times . The comparison method demands several uniform PRN’s per normal PRN, and is therefore also slow . Use of tables is not a very accurate method. Algorithms that are related, but not equivalent, to the one we propose here have been published , but they are somewhat cumbersome to use. In addition, no proof of their validity has been given.
We propose here a new algorithm for the generation of normally distributed PRN’s that is quite simple and fast. It is a stochastic caricature of a closed classical system of $`N`$ particles. Their velocities provide a source of PRN’s. We prove that, for any initial state, their pdf becomes Maxwellian in the $`N\to \mathrm{\infty }`$ limit, after an infinite number of two-particle “collisions” take place. To this end, we first prove that our system is ergodic . The proof is not exceedingly difficult because our system is not deterministic. We also study its output as a function of $`N`$, and establish useful criteria for its implementation. Correlation test results are also reported.
For the motivation, consider numbers $`v_1,v_2\mathrm{}v_N`$, placed in $`N`$ computer registers, analogous to velocities of $`N`$ particles that make up a closed classical system in 1D. Pairs of registers $`ı`$ and $`ȷ`$, say, selected at random without bias, are to “interact” somehow, conserving the quantity $`v_i^2+v_j^2`$. By analogy with the approach to equilibrium (i.e., to Maxwell’s velocity distribution) that is believed to take place in Statistical Physics, we expect that a sufficient number of iterations will lead to an approximately Gaussian pdf of register values, from which the desired PRN’s may be drawn. (See also Ref. .) We define below the simplest interaction we can think of in order that (1) implementation on a computer be very fast, and (2) we may be able to prove that a Gaussian pdf does indeed ensue.
Before the algorithm is implemented, all $`N`$ registers must be initialized to, say, $`v_ı=1`$ for all $`ı`$ satisfying $`1\le ı\le N`$, or all $`v_ı`$ may be read from a set of $`N`$ register values saved from a previous computer run, which we assume to fulfill $`\sum _ıv_ı^2=N`$. Let $`U(1,N)`$, $`U_ı(1,N)`$ be unbiased integer random variables, both in the interval $`[1,N]`$, except that $`U_ı`$ cannot equal $`ı`$. The algorithm follows:
$`ı=U(1,N);ȷ=U_ı(1,N);`$ (1)
$`v_ı\leftarrow (v_ı+v_ȷ)/\sqrt{2};`$ (2)
$`v_ȷ\leftarrow -v_ı+\sqrt{2}v_ȷ`$ (3)
The updated value of $`v_ı`$, from Eq. (2), is used in Eq. (3). After an initial warm up phase (see below), $`v_ı`$ and $`v_ȷ`$ may be drawn each time transformation (1-3) is applied. They are two independent PRN’s, each one with an approximately Gaussian pdf, with $`\langle v_ı\rangle =0`$ and $`\langle v_ı^2\rangle =1`$ for all $`ı`$, if $`N`$ is sufficiently large (see below). Transformation (1-3) may be thought of as a rotation of $`\pm \pi /4`$ with respect to a randomly chosen $`ıȷ`$ plane ($`+`$ and $`-`$ signs are for the two possible index orderings, $`ıȷ`$ and $`ȷı`$). Thus, the quantity $`\sum _{\ell }v_{\ell }^2`$ is conserved. Frequencies of events from sequences of $`10^6`$, $`10^8`$ and $`10^{10}`$ PRN’s generated with transformation (1-3), with $`N=1024`$, are exhibited in Fig. 1.
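A direct transliteration of Eqs. (1)-(3) into Python may read as follows (a sketch with our own variable names; the published implementation was in Fortran, and the warm-up length anticipates the $`n_p=8`$ recommendation made below):

```python
import random
from math import sqrt

N = 1024
v = [1.0] * N                    # initialize all registers; sum of v_i^2 = N

def step():
    """One application of transformation (1)-(3); returns two normal PRNs."""
    i = random.randrange(N)                  # Eq. (1): unbiased i
    j = random.randrange(N - 1)
    if j >= i:
        j += 1                               # unbiased j with j != i
    v[i] = (v[i] + v[j]) / sqrt(2.0)         # Eq. (2)
    v[j] = -v[i] + sqrt(2.0) * v[j]          # Eq. (3), uses the updated v[i]
    return v[i], v[j]

for _ in range(8 * N // 2):                  # warm up: ~8 interactions/register
    step()

sample = [x for _ in range(500000) for x in step()]   # 10^6 normal deviates
```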
We first explain why PRN’s generated by transformation (1-3) are expected to be normally distributed. Let $`𝐏_n(𝐯)`$ be the probability density at $`𝐯=(v_1,v_2,\mathrm{},v_N)`$, after transformation (1-3) has been applied $`n`$ times, on the $`(N-1)`$-dimensional spherical surface $`𝒮_{N-1}`$ of radius $`\sqrt{N}`$ given by $`N=\sum _{\ell =1}^Nv_{\ell }^2`$. Let the single register pdf $`p(v)`$ be the $`n\to \mathrm{\infty }`$ limit of $`p_n(v)`$, where $`p_n(v_1)=\int 𝐏_n(𝐯)𝑑v_2𝑑v_3\mathrm{}𝑑v_N`$. We show further below that $`𝐏_n(𝐯)\to \mathrm{constant}`$ over $`𝒮_{N-1}`$ as $`n\to \mathrm{\infty }`$. It then follows by integration that,
$$p(v)\propto \left(1-\frac{v^2}{N}\right)^{(N-3)/2}.$$
(4)
Clearly, $`p(v)\to C\mathrm{exp}(-v^2/2)`$ in the $`N\to \mathrm{\infty }`$ limit, which is the desired result.
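A short numerical check of this limit (ours) compares the unnormalized densities of Eq. (4) and of the Gaussian; their ratio stays close to one for $`v^2\ll N`$:

```python
from math import exp

N = 1024
for vv in (0.5, 1.0, 2.0, 3.0):
    ratio = (1.0 - vv * vv / N)**((N - 3) / 2.0) / exp(-vv * vv / 2.0)
    print(vv, ratio)     # ratio = exp(g_N(v)/N), close to 1; cf. Eq. (8) below
```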
We prove below, in three stages, that $`P_n(𝐯)`$ does indeed become homogeneous over the spherical surface $`𝒮_{N-1}`$, if $`N\ge 3`$, in the $`n\to \mathrm{\infty }`$ limit. We first prove that $`P_n(𝐯)-P_n(𝐮)\to 0`$ as $`n\to \mathrm{\infty }`$ if $`𝐯`$ and $`𝐮`$ are related. \[From here on, we say that points $`𝐯`$ and $`𝐮`$ are related if successive transformations (1-3) of $`𝐯`$ can lead to $`𝐮`$.\] We then prove that the system’s “orbit” covers $`𝒮_{N-1}`$ densely \[that is, that any point $`𝐯\in 𝒮_{N-1}`$ can be brought arbitrarily close to any other point $`𝐮\in 𝒮_{N-1}`$ by applying transformations (1-3) to $`𝐯`$ a sufficient number of times\]. Then, the desired result follows easily. It may help to place the significance of the proof that follows into proper perspective to note that if in Eq. (1) $`ȷ=U_ı(1,N)`$ is replaced by $`ȷ=ı+1\mathrm{mod}N`$, the system then becomes non-ergodic, as can be easily checked numerically.
To start the proof, let the kernel $`K(𝐯,𝐯^{})`$ be defined by $`P_{n+1}(𝐯)=\int K(𝐯,𝐯^{})P_n(𝐯^{})𝑑𝐯^{}`$, and let
$$F_n\equiv \int \{P_{n+1}^2(𝐯)-P_n^2(𝐯)\}𝑑𝐯.$$
(5)
Note first that $`F_n<0`$ implies that $`P_{n+1}(𝐯)`$ is more uniform than $`P_n(𝐯)`$, in the sense that $`\int 𝑑𝐯[P_{n+1}(𝐯)-\overline{P}]^2<\int 𝑑𝐯[P_n(𝐯)-\overline{P}]^2`$, where $`\overline{P}=1/\int 𝑑𝐯`$. It follows from the definition of $`K(𝐯,𝐯^{})`$ that
$$F_n=\int 𝑑𝐯\{[\int 𝑑𝐯_\mathrm{𝟏}K(𝐯,𝐯_1)P_n(𝐯_1)]^2-P_n^2(𝐯)\}.$$
(6)
Making use of the detailed balance condition, $`K(𝐯,𝐯^{})=K(𝐯^{},𝐯)`$, which our system satisfies, and the relation $`\int 𝑑𝐯K(𝐯,𝐯^{})=1`$, Eq. (6) can be cast into,
$$F_n=-\frac{1}{2}\int 𝑑𝐯\int 𝑑𝐯_\mathrm{𝟏}\int 𝑑𝐯_\mathrm{𝟐}Q(𝐯,𝐯_1,𝐯_2),$$
(7)
where $`Q=K(𝐯,𝐯_1)K(𝐯,𝐯_2)[P_n(𝐯_1)-P_n(𝐯_2)]^2`$. Therefore, in the $`n\to \mathrm{\infty }`$ limit, $`P_n(𝐯)`$ becomes constant over each set in $`𝒮_{N-1}`$ within which any two points $`𝐯,𝐮`$ are related.
We now prove that the system’s orbit covers $`𝒮_{N-1}`$ densely. Let $`H_N`$ be the group of transformations in $`N`$ dimensions defined by Eqs. (1-3). We first show that any rotation in $`3`$D \[that is, any element of $`SO(3)`$\] can be approximated arbitrarily closely by elements of $`H_3`$. The proof is extended to higher dimensions by induction. Note first that $`H_3`$ does not belong to the set of finite rotation groups in $`3`$D, and is therefore an infinite group. Let the group $`SO(3)`$ be covered by spheres of radius $`ϵ/2`$ each. A finite number of them is sufficient, since the volume of $`SO(3)`$ is finite . It follows that there must be at least one sphere with two elements of $`H_3`$ in it, since $`H_3`$ has an infinite number of elements. Let these two elements be $`r`$ and $`s`$, and let $`g(𝐮,ϵ)`$ be the element $`rs^{-1}`$ of $`H_3`$, which is a rotation by angle $`ϵ`$ about some undetermined axis $`𝐮`$. We will build elements of $`H_3`$ that are as near as desired to any given rotation. To this end, it is sufficient to show that it can be done for a set of infinitesimal generators of rotations . One such set is made up of infinitesimal rotations about three linearly independent axes. Consider axes $`𝐮_1`$, $`𝐮_2`$, and $`𝐮_3`$ that are obtained from $`𝐮`$ by rotations $`g(1,\pi /2)`$, $`g(2,\pi /2)`$, and $`g(3,\pi /2)`$ about each one of the coordinate axes by angle $`\pi /2`$. The corresponding infinitesimal rotations are given by $`g(𝐮_ı,ϵ)=g(ı,\pi /2)g(𝐮,ϵ)g^{-1}(ı,\pi /2).`$ This concludes the proof for 3 dimensions.
We now prove by induction that any element $`g(ıȷ,\alpha )`$, the rotation about plane $`ıȷ`$ by angle $`\alpha `$, of the rotation group $`SO(N)`$ can be approximated as nearly as desired by an element $`g`$ of the group $`H_N`$, for $`N>3`$. By hypothesis, any $`g(ıȷ,\alpha )`$, for $`ı,ȷ=1,2,\mathrm{},N`$, can be approximated by an element $`g`$ of $`H_N`$. We show now that $`g(ıN+1,\alpha )`$, for $`ı=1,2,\mathrm{},N`$, can also be approximated by elements of $`H_{N+1}`$. We take $`g\in H_N`$ within distance $`ϵ`$ of $`g_{ıȷ}(\alpha )`$. Now, since rotations preserve distances, it follows that $`g(ıN+1,\alpha )\in SO(N+1)`$, given by $`g(ıN+1,\alpha )=g(ıN+1,\pi /2)g(ıȷ,\alpha )g^{-1}(ıN+1,\pi /2)`$, is within distance $`ϵ`$ of $`g^{}\in H_{N+1}`$, given by $`g^{}=g(ıN+1,\pi /2)gg^{-1}(ıN+1,\pi /2)`$. This proves dense coverage in $`N\ge 3`$ dimensions.
To conclude the proof that $`P_n(𝐯)\to \mathrm{constant}`$ in the $`n\to \mathrm{\infty }`$ limit, consider any two points $`𝐕`$ and $`𝐔^{}`$ as centers of disks $`𝒟_𝐕`$ and $`𝒟_𝐔^{}`$, both of radius $`r`$, in $`𝒮_{N-1}`$. Since the system’s orbit covers $`𝒮_{N-1}`$ densely for $`N\ge 3`$, it follows that a point $`𝐔`$ that is related to $`𝐕`$ exists arbitrarily close to $`𝐔^{}`$. Consider now the disks $`𝒟_𝐕`$ and $`𝒟_𝐔`$. The fact that there exists at least one sequence of rotations in $`H_N`$ that takes $`𝐕`$ into $`𝐔`$ implies that there exists at least one single rotation $`g`$ in $`H_N`$ that transforms $`𝐕`$ into $`𝐔`$. Since $`g`$ is a rotation, it transforms $`𝒟_𝐕`$ rigidly into $`𝒟_𝐔`$. It follows that $`\int 𝑑𝐯P_N(𝐯)`$ over $`𝒟_𝐕`$ equals $`\int 𝑑𝐮P_N(𝐮)`$ over $`𝒟_𝐔`$. Since $`r`$ is arbitrary, and $`𝐕`$ and $`𝐔^{}`$ are any two points in $`𝒮_{N-1}`$, it follows that $`P(𝐯)`$ is constant over $`𝒮_{N-1}`$ (except, perhaps, on a set of measure zero). This is the desired result. Ergodicity follows .
We next address the following practical issues: (1) how good an approximation to a Gaussian pdf of PRN’s is achieved with a necessarily finite set of $`N`$ registers; (2) how long must the warm up phase be.
It is convenient to rewrite Eq. (4) as follows,
$$p(v)\propto e^{-v^2/2}e^{g_N(v)/N}.$$
(8)
where $`g_N(v)=v^2(3-v^2/2)/2+𝒪(1/N)`$. $`N^{-1}g_N(v)`$ is approximately the fractional deviation, $`\delta p(v)/p(v)`$, from the Gaussian form if $`\delta p(v)/p(v)\ll 1`$. We have checked this behavior numerically. Clearly, the number of registers $`N`$ that must be used increases with the number $`M`$ of PRN’s one intends to generate. This is because the value of the largest PRN generated increases, on the average, with $`M`$. More precisely, the value of $`v`$ beyond which PRN’s are only generated with probability $`q`$ is approximately given by $`v^2\simeq 2\mathrm{ln}(M/vq)`$. Now, it follows from Eq. (8) that the fractional error $`\delta P/P`$ in the probability density at $`v`$ is approximately $`N^{-1}v^2(3-v^2/2)/2`$ for very large $`N`$. (It is pointless to require this error to be too small since a PRN is expected to be generated beyond $`v`$ only with a small probability $`q`$.) It then follows that $`[\mathrm{ln}(M/qv)]^2\lesssim N\delta P/P`$ must be satisfied by $`N`$. Thus, approximately $`10^4`$ registers are sufficient in order to generate as many as $`10^{15}`$ PRN’s, with a roughly $`10\%`$ error in the probability for the largest PRN in the sequence. For results obtained from a sequence of $`10^{10}`$ PRN’s generated with $`1024`$ registers, see Fig. 1.
Our algorithm must be applied a number $`n_pN`$ of times before it is ready for use, unless all $`v_ı`$ are initialized with “equilibrium” values (stored from some previous computer run). The distribution of all register values then evolves towards equilibrium, as illustrated in Fig. 2. Deviations from equilibrium are statistically insignificant for $`n_p\ge 2`$ and $`N=1024`$, and for $`n_p\ge 4`$ and $`N=\mathrm{1\hspace{0.25em}048\hspace{0.25em}576}`$. Since $`n_p`$ is expected to increase as $`\mathrm{ln}N`$, $`n_p=8`$ should provide ample warm up for any foreseeable applications.
The number of PRN’s that must be generated before each PRN in the sequence $`v_1,v_2,\mathrm{},v_N`$ returns within distance $`r`$ from its initial value is exponential in $`N`$. More specifically, we estimate it to be $`(\tau /\sqrt{N})(1/r)^N`$ for $`N\gg 1`$, where $`\tau `$ is the period of the algorithm used to select $`ı`$ and $`ȷ`$ in Eq. (1). The estimation is based on $`P_n(𝐯)\to \mathrm{constant}`$ over $`𝒮_{N-1}`$ as $`n\to \mathrm{\infty }`$. Thus, an effectively infinite recurrence time follows for any reasonable value of $`N`$.
Correlations between a finite number of PRN’s clearly vanish as $`N\to \mathrm{\infty }`$, since $`ı`$ and $`ȷ`$ in Eq. (1) are supposedly independent PRN’s. We have searched for correlations in $`m`$ successively generated PRN’s $`v_1,v_2,\mathrm{}v_m`$, for $`m=3,4,\mathrm{},6`$, performing a chi-square isotropy test over the corresponding $`m`$-dimensional space. An $`m`$-tuple $`𝐯=v_1,v_2,\mathrm{},v_m`$ was said to belong to the $`i`$th cone, of $`1024`$ randomly oriented cones with axes $`𝐰_1,𝐰_2,\mathrm{},𝐰_{1024}`$, if $`0.99\le 𝐯.𝐰_\mathbf{ı}\le 1`$. No significant deviations from isotropy were observed for $`10^6`$ generated $`m`$-tuples.
Implementation of Wolff’s algorithm in MC calculations of the Ising model’s critical behavior is a demanding test that some well known uniform PRN generators have failed . Large clusters are then flipped as a whole, and this tests correlations in very long sequences. We have used normal PRN’s generated by our algorithm as input into a MC simulation of an Ising system of $`16\times 16`$ spins at the critical temperature. \[For that, we note that $`v_ı^2+v_ȷ^2>2x`$ occurs as often as $`u<\mathrm{exp}(-x)`$ if $`v_ı`$ and $`v_ȷ`$ ($`u`$) are PRN’s with Gaussian (uniform) pdf’s, respectively.\] The energy obtained is shown in Fig. 3 as a function of the number of registers $`N`$. The following uniform PRN algorithms were used to select $`ı`$ and $`ȷ`$ in Eq. (1): ggl , R(250,103,xor) , and RAN3 . We tried the latter two algorithms, which have been shown to lead by themselves to unacceptable results for the Ising model , in order to test our algorithm’s robustness. The results shown in Fig. 3 are gratifying.
Similarly, the specific heat $`c`$ and the magnetization fluctuation $`(\delta m)^2`$ data points obtained follow approximately the relations $`c\simeq c_0+8.4/N`$ and $`\langle (\delta m)^2\rangle \simeq \chi _0+33/N`$, respectively, where $`c_0=1.497(1)`$ and $`\chi _0=0.5454(2)`$, in agreement with the known exact values.
Double precision is recommended. It prevents excessive drift of the sum $`\sum _ıv_ı^2`$ away from its assigned value. Even then, single precision accuracy is to be expected at the end of a sequence of some $`10^{16}`$ PRN’s, unless the sum is normalized several times during the run.
In summary, we have shown that implementation of Eqs. (1-3) provides a source of PRN’s with an approximately Gaussian pdf. Some $`10^4`$ registers (molecules) are sufficient for some purposes, but up to $`10^5`$ or more may be necessary for more demanding tasks. (Having to make a decision about the number of registers to be used may sometimes be an unwelcome task. On the other hand, it is a virtue of the algorithm that one can control, through the value of $`N`$, how close the output is to be to sequences of truly independent random numbers with Gaussian pdf’s.) Initial warm ups for arbitrary initial conditions are necessary; it is sufficient to let each register initially interact an average of, say, 8 times. The system’s recurrence time was shown to be exponential in $`N`$, and therefore effectively infinite. Its behavior appears to be robust. The proposed algorithm runs an order of magnitude faster on computers than the most often used Box-Muller method . For a fortran code of our algorithm or other questions, please write JFF@Pipe.Unizar.Es.
Continuous help from Dr. Pedro Martínez with computer systems is deeply appreciated by JFF. We are indebted to Prof. P. Grassberger for an important suggestion. JFF and CC are grateful for partial financial support from DGICYT of Spain, through grants No. PB95-0797 and PB97-1080, respectively.
# Nature’s Way of Optimizing
## Abstract
We propose a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organizing processes often found in nature. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions. Drawing upon models used to simulate far-from-equilibrium dynamics, it complements approximation methods inspired by equilibrium statistical physics, such as Simulated Annealing. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
In nature, highly specialized, complex structures often emerge when their most inefficient variables are selectively driven to extinction. Evolution, for example, progresses by selecting against the few most poorly adapted species, rather than by expressly breeding those species best adapted to their environment Darwin . To describe the dynamics of systems with emergent complexity, the concept of “self-organized criticality” (SOC) has been proposed BTW ; bakbook . Models of SOC often rely on “extremal” processes PMB , where the least fit variables are progressively eliminated. This principle has been applied successfully in the Bak-Sneppen model of evolution BS ; SBFJ , where a species $`i`$ is characterized by a “fitness” value $`\lambda _i\in [0,1]`$, and the “weakest” species (smallest $`\lambda `$) and its closest dependent species are successively selected for adaptive changes, getting assigned new (random) fitness values. Despite its simplicity, the Bak-Sneppen model reproduces nontrivial features of paleontological data, including broadly distributed lifetimes of species, large extinction events and punctuated equilibrium, without the need for control parameters. The extremal optimization (EO) method we propose draws upon the Bak-Sneppen mechanism, yielding a dynamic optimization procedure free of selection parameters GECCO . Here we report on the success of this procedure for two generic optimization problems, graph partitioning and the traveling salesman problem.
In graph (bi-)partitioning, we are given a set of $`N`$ points, where $`N`$ is even, and “edges” connecting certain pairs of points. The problem is to find a way of partitioning the points in two equal subsets, each of size $`N/2`$, with a minimal number of edges cutting across the partition (minimum “cutsize”). These points, for instance, could be positioned randomly in the unit square. A “geometric” graph of average connectivity $`C`$ would then be formed by connecting any two points within Euclidean distance $`d`$, where $`N\pi d^2=C`$ (see Fig. 1). Constraining the partitioned subsets to be of fixed (equal) size makes the solution to this problem particularly difficult. This geometric problem resembles those found in VLSI design, concerning the optimal partitioning of gates between integrated circuits VLSI .
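A minimal construction of such a geometric graph might look as follows (a sketch; the helper names are ours, and boundary effects make the realized connectivity slightly below $`C`$):

```python
import math, random

def geometric_graph(N, C, seed=0):
    """N random points in the unit square, joined whenever their distance
    is below d, with N * pi * d^2 = C."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(N)]
    d2 = C / (math.pi * N)                                # d squared
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)
             if (pts[i][0] - pts[j][0])**2 + (pts[i][1] - pts[j][1])**2 < d2]
    return pts, edges

pts, edges = geometric_graph(500, 5.0)
print(2.0 * len(edges) / 500)     # average connectivity, close to C = 5
```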
Graph partitioning is an NP-hard optimization problem GareyJohnson : it is believed that for large $`N`$ the number of steps necessary for an algorithm to find the exact optimum must, in general, grow faster than any polynomial in $`N`$. In practice, however, the goal is usually to find near-optimal solutions quickly. Special-purpose heuristics to find approximate solutions to specific NP-hard problems abound AK ; JohnsonTSP . Alternatively, general-purpose optimization approaches based on stochastic procedures have been proposed Reeves ; Osman . The most widely applied of these have been physically motivated methods such as simulated annealing SA1 ; SA2 and genetic algorithms GA ; Bounds . These procedures, although slower, are applicable to problems for which no specialized heuristic exists. EO falls into the latter category, adaptable to a wide range of combinatorial optimization problems rather than crafted for a specific application.
Let us illustrate the general form of the EO algorithm by way of the explicit case of graph bi-partitioning. In close analogy to the Bak-Sneppen model of SOC BS , the EO algorithm proceeds as follows:
1. Choose an initial state of the system at will. In the case of graph partitioning, this means we choose an initial partition of the $`N`$ points into two equal subsets.
2. Rank each variable $`i`$ of the system according to its fitness value $`\lambda _i`$. For graph partitioning, the variables are the $`N`$ points, and we define $`\lambda _i`$ as follows: $`\lambda _i=g_i/(g_i+b_i)`$, where $`g_i`$ is the number of (good) edges connecting $`i`$ to points within the same subset, and $`b_i`$ is the number of (bad) edges connecting $`i`$ to the other subset. \[If point $`i`$ has no connections at all ($`g_i=b_i=0`$), let $`\lambda _i=1`$.\]
3. Pick the least fit variable, i.e. the variable with the smallest $`\lambda _i\in [0,1]`$, and update it according to some move class. For graph partitioning, the move class is as follows: the least fit point (from either subset) is interchanged with a random point from the other subset, so that each point ends up in the opposite subset from where it started.
4. Repeat from (2) a preset number of times. For graph partitioning we require $`O(N)`$ updates.
The result of an EO run is defined as the best (minimum cutsize) configuration seen during the run. All that is necessary to keep track of, then, is the current configuration and the best configuration found so far.
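A compact sketch of these four steps for graph bipartitioning is given below (our own Python rendering, not the authors' code; a serious implementation would update fitnesses and the cutsize incrementally rather than recomputing them at every step):

```python
import random

def eo_bipartition(N, edges, n_updates):
    side = [i % 2 for i in range(N)]            # step (1): arbitrary equal split
    nbrs = [[] for _ in range(N)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def fitness(i):                             # lambda_i = g_i / (g_i + b_i)
        g = sum(side[k] == side[i] for k in nbrs[i])
        return g / len(nbrs[i]) if nbrs[i] else 1.0

    def cutsize():
        return sum(side[i] != side[j] for i, j in edges)

    best = cutsize()
    for _ in range(n_updates):
        worst = min(range(N), key=fitness)      # steps (2)-(3): least fit point
        other = random.choice([k for k in range(N) if side[k] != side[worst]])
        side[worst], side[other] = side[other], side[worst]   # swap subsets
        best = min(best, cutsize())             # keep best cutsize seen so far
    return best
```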
EO, like simulated annealing (SA) and genetic algorithms (GA), is inspired by observations of systems in nature. However, SA emulates the behavior of frustrated physical systems in thermal equilibrium: if one couples such a system to a heat bath of adjustable temperature, by cooling the system slowly one may come close to attaining a state of minimal energy. SA accepts or rejects local changes to a configuration according to the Metropolis algorithm MRRTT at a given temperature, enforcing equilibrium dynamics (“detailed balance”) and requiring a carefully tuned “temperature schedule”. In contrast, EO takes the system far from equilibrium: it applies no decision criteria, and all new configurations are accepted indiscriminately. It may appear that EO’s results would resemble an ineffective random search. But in fact, by persistent selection against the worst fitnesses, one quickly approaches near-optimal solutions. The contrast between EO and genetic algorithms (GA) is equally pronounced. GAs keep track of entire “gene pools” of states from which to select and “breed” an improved generation of solutions. EO, on the other hand, operates only with local updates on a single copy of the system, with improvements achieved instead by elimination of the bad.
Another important contrast to note is between EO and more conventional “greedy” update strategies. Methods such as greedy local search Osman successively update variables so that at each step, the solution is improved. This inevitably results in the system getting stuck in a local optimum, where no further improvements are possible. EO, while registering its greatest improvements towards the beginning of the run, nevertheless exhibits significant fluctuations throughout, as shown in Fig. 2. The result is that, even at late run-times, EO is able to cross sizable barriers and access new regions in configuration space.
There is a closer resemblance between EO and algorithms such as GSAT (for satisfiability) that choose, at each update step, the move resulting in the best subsequent outcome — whether or not that outcome is an improvement over the current solution GSAT . Also, versions of SA have been proposed Greene ; Reeves that enforce equilibrium dynamics by ranking local moves according to anticipated outcome, and then choosing them probabilistically. Similarly, Tabu Search Glover ; Reeves uses a greedy mechanism based on a ranking of the anticipated outcome of moves. But EO, significantly, makes moves using a fitness that is based not on anticipated outcome but purely on the current state of each variable.
Figs. 3a-b show that the results of EO rival those of a sophisticated SA algorithm developed for graph partitioning Johnson . Further improvements may be obtained from a slight modification to the EO procedure. Step (2) of the algorithm establishes a fitness rank for all points, going from rank $`n=1`$ for the worst to rank $`n=N`$ for the best fitness $`\lambda `$. (For points with degenerate values of $`\lambda `$, the ranks may be assigned in random order.) Now relax step (3) so that the points to be interchanged are both chosen stochastically, from a probability distribution over the rank order. This is done in the following way. Pick a point having rank $`n`$ with probability $`P(n)\propto n^{-\tau }`$, $`1\le n\le N`$. Then pick a second point using the same process, though restricting ourselves this time to candidates from the opposite subset. The choice of a power-law distribution for $`P(n)`$ ensures that no regime of fitness gets excluded from further evolution, since $`P(n)`$ varies in a gradual, scale-free manner over rank. Universally, for a wide range of graphs, we obtain best results for $`\tau `$ between 1.2 and 1.6. Fig. 3c shows these results for $`\tau =1.5`$, demonstrating its superior performance over both SA and the basic EO method.
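The power-law selection over ranks is straightforward to implement; a hedged sketch (ours) using inverse transform sampling:

```python
# Pick rank n in {1,...,N} with probability P(n) proportional to n^(-tau).
import bisect, random
from itertools import accumulate

def make_rank_sampler(N, tau=1.4):
    weights = [n ** (-tau) for n in range(1, N + 1)]
    cdf = list(accumulate(weights))             # cumulative weights
    total = cdf[-1]
    def sample():
        return bisect.bisect_left(cdf, random.random() * total) + 1
    return sample                               # rank 1 = worst fitness

pick = make_rank_sampler(1024)
ranks = [pick() for _ in range(10)]             # mostly small (bad) ranks
```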
What is the physical meaning of an optimal value for $`\tau `$? If $`\tau `$ is too small, we often dislodge already well-adapted points of high rank: “good” results get destroyed too frequently and the progress of the search becomes undirected. On the other hand, if $`\tau `$ is too large, the process approaches a deterministic local search (only swapping the lowest-ranked point from each subset) and gets stuck near a local optimum of poor quality. At the optimal value of $`\tau `$, the more fit variables of the solution are allowed to survive, without the search being too narrow. Our numerical studies have indicated that the best choice for $`\tau `$ is closely related to a transition from ergodic to non-ergodic behavior, with optimal performance of EO obtained near the edge of ergodicity. This will be the subject of future investigation.
To evaluate EO, we applied the algorithm to a testbed of graphs<sup>1</sup><sup>1</sup>1These instances are available via http://userwww.service.emory.edu/~sboettc/graphs.html discussed in Refs. Johnson ; HL ; BM ; MF1 ; MF2 . The first set of graphs, originally introduced in Ref. Johnson , consists of eight geometric and eight “random” graphs. The geometric graphs in the testbed, labeled “U$`N.C`$”, are of sizes $`N=500`$ and 1000 and connectivities $`C=5`$, 10, 20 and 40. In a random graph, points are not related by a metric. Instead, any two points are connected with probability $`p`$, leading to an average connectivity $`C\approx pN`$. The random graphs in the testbed, labeled “G$`Np`$”, are of sizes $`N=500`$ and 1000 and connectivities $`pN=2.5`$, 5, 10 and 20. The best results reported to date on these graphs have been obtained from finely-tuned GA implementations BM ; MF1 ; MF2 . EO reproduces most of these cutsizes, and often at a fraction of the runtime, using $`\tau =1.4`$ and 30 runs of $`200N`$ update steps each. Comparative results are given in the upper half of Table 1.
The next set of graphs in our testbed are of larger size (up to $`N=143,437`$). The lower half of Table 1 summarizes EO’s results on these graphs, again using $`\tau =1.4`$ and 30 runs. On each graph, we used as many update steps as appeared productive for EO to reliably obtain stable results. This varied with the particularities of each graph, from $`2N`$ to $`200N`$ (further discussed below), and the reported runtimes are of course influenced by this. On the first four of the large graphs, the best results to date are once again due to GAs MF2 . EO reproduces all of these cutsizes, displaying an increasing runtime advantage as $`N`$ increases. SA’s performance on the graphs is extremely poor (comparable to its performance on Stufe10, shown later); we therefore substitute more competitive results given in Ref. HL using a variety of specialized heuristics. EO significantly improves upon these heuristics’ results, though at longer runtimes. On the final four graphs, for which no GA results were available, EO matches or dramatically improves upon SA’s cutsizes. And although the results from the U$`N.C`$ and G$`Np`$ graphs suggest that increasing $`C`$ slows down EO and speeds up SA, these results demonstrate that EO’s runtime is still nearly competitive with SA’s on the high-connectivity Nasa graphs.
Several factors account for EO’s speed. First of all, we employ a simple “greedy” start to construct the initial partition in step (1), as follows: pick a point at random, assigning it to one partition, then take all the points to which it connects, all the points to which those new points connect, and so on, assigning them all to the same partition. When no more connected points are available, construct the opposite partition by the same means, starting from a new random (unassigned) point. Alternate in this way, assigning new points to one or the other partition, until either one contains $`N/2`$ points. This clustering of connected points helps EO converge rapidly, and instantly eliminates from the running many trivial cases with zero cutsize. The procedure is most advantageous for smaller graphs, where it provides a significant speed-up; that speed-up becomes less relevant for larger graphs, but can still be productive if the graph has a distinct non-random structure (this was notably the case for Brack2). By contrast, greedy initialization does little to improve SA: unless the starting temperature is carefully fine-tuned, any initial advantage is quickly lost in randomization.
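A sketch of this clustered start (our rendering, assuming an adjacency-list input `adj` and even $`N`$, as equal bipartitioning requires):

```python
import random
from collections import deque

def greedy_start(adj):
    """Clustered initial bipartition (a sketch; assumes N even).

    Whole connected clusters are grown breadth-first and assigned
    alternately to the two subsets; a subset stops accepting points
    once it holds N/2 of them.
    """
    N = len(adj)
    side, counts, current = {}, [0, 0], 0
    unassigned = set(adj)
    while unassigned:
        if counts[current] >= N // 2:     # this side is full
            current = 1 - current
        queue = deque([random.choice(tuple(unassigned))])
        while queue and counts[current] < N // 2:
            p = queue.popleft()
            if p not in unassigned:
                continue
            unassigned.discard(p)
            side[p] = current
            counts[current] += 1
            queue.extend(q for q in adj[p] if q in unassigned)
        current = 1 - current             # next cluster: other side
    return side
```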
Second of all, we use an approximate sorting process in step (2) to accelerate the algorithm. At each update step, instead of perfectly ordering the fitnesses $`\lambda _i`$ (with runtime factor $`\sim CN\mathrm{log}N`$), we arrange them on an ordered binary tree called a “heap”. The highest level, $`l=0`$, of this heap is the root of the tree and consists solely of the poorest fitness. All other fitnesses are placed below the root such that a fitness value at the level $`l`$ is connected in the tree to a single poorer fitness at level $`l-1`$, and to two better fitnesses at level $`l+1`$. Due to the binary nature of the tree, each level has exactly $`2^l`$ entries, except for the lowest level $`l=[\mathrm{log}_2N]`$. We select a level $`l`$, $`0\le l\le [\mathrm{log}_2N]`$, according to a probability distribution $`Q(l)\propto 2^{-(\tau -1)l}`$ and choose one of its $`2^l`$ entries with equal probability. The rank $`n`$ distribution of fitnesses thus chosen from the heap roughly approximates the desired function $`P(n)\propto n^{-\tau }`$ for a perfectly ordered list. The process of resorting the fitnesses in the heap introduces a runtime factor of only $`\sim C\mathrm{log}N`$ per update step.
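The level-selection trick might be coded as follows (a sketch; the array-heap layout is an assumption on our part):

```python
import math
import random

def pick_heap_entry(n_entries, tau):
    """Pick an index into an array-stored heap, level by level (sketch).

    Level l occupies indices 2**l - 1 .. 2**(l+1) - 2 and holds ranks
    of roughly 2**l .. 2**(l+1), so drawing levels with
    Q(l) ~ 2**(-(tau - 1) * l) and an entry uniformly within the level
    approximates P(n) ~ n**(-tau) at O(log N) cost per draw.
    """
    l_max = int(math.log2(n_entries))
    weights = [2.0 ** (-(tau - 1.0) * l) for l in range(l_max + 1)]
    r = random.random() * sum(weights)
    for l, w in enumerate(weights):
        r -= w
        if r <= 0.0:
            break
    lo = 2 ** l - 1                      # first index of level l
    hi = min(2 ** (l + 1) - 1, n_entries)  # one past the last index
    return random.randrange(lo, hi)
```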
A further contributor to EO’s speed is the significantly smaller number of update steps (Fig. 2) that EO requires compared to, say, a complete SA temperature schedule. The quality of our large $`N`$ results confirms that $`O(N)`$ update steps are indeed sufficient for convergence. Generally, $`200N`$ steps were used per run, though in the case of the Nasa graphs only $`30N`$ steps were required for EO to reach its best results, and in the case of the Brack2 graph no more than $`2N`$ steps were necessary.
In summary, EO appears to be quite successful over a large variety of graphs. By comparison, GAs must be finely tuned for each type of graph in order to be successful, and SA is only useful for highly-connected graphs; Ref. EOperc demonstrates the dramatic advantage of EO over SA for sparse graphs. It is worth noting, though, that EO’s average performance has been varied. While on every graph, the best-found result was obtained at least twice in the 30 runs, the cutsizes obtained in other runs ranged from a 1% excess over the best (on the random graphs) to a 100% excess or far more (on the others). For instance, half of the Brack2 runs returned cutsizes near 731, but the other half returned cutsizes of above $`2000`$. This may be a product of an unusual structure in this particular graph, as noted in the discussion above on the initial partition construction. However, we hope that further insights into EO’s performance will be able to explain these wide fluctuations.
It is also clear that the EO algorithm is applicable to a wide range of combinatorial optimization problems involving a cost function. An example well known to computer scientists is the problem of maximum satisfiability. Since one must assign Boolean variables so as to maximize the number of satisfied clauses, a logical definition of fitness $`\lambda _i`$ for a variable $`i`$ is simply the satisfied fraction of clauses in which that variable appears. Another related problem of great physical interest is the spin-glass MPV , where spin variables $`\sigma _i=\pm 1`$ on a lattice are connected via a fixed (“quenched”) network of bonds $`J_{ij}`$ randomly assigned values of $`+1`$ or $`-1`$ when $`i`$ and $`j`$ are nearest neighbors (and 0 otherwise). In this system the variables $`\sigma _i`$ try to minimize the energy represented by the Hamiltonian $`H=-\sum _{i,j}J_{ij}\sigma _i\sigma _j`$. It is intuitive that the fitness associated with each lattice site here is the local energy contribution, $`\lambda _i=\frac{1}{2}\sigma _i\sum _jJ_{ij}\sigma _j`$. These applications of EO have the conceptual advantage that no global constraint needs to be satisfied, so that on each update a single variable can be chosen according to $`P(n)\propto n^{-\tau }`$; that variable undergoes an unconditional flip, affecting the fitnesses of all its neighbors. We are currently investigating these problems.
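To illustrate how little machinery such an application needs, one EO update for the spin glass could be sketched as follows (dense couplings and ranking by a full sort are our simplifications):

```python
import numpy as np

rng = np.random.default_rng(0)

def eo_spin_step(sigma, J, tau):
    """One tau-EO update for H = -sum_ij J_ij s_i s_j (a sketch).

    Fitness lambda_i = 0.5 * s_i * sum_j J_ij s_j; the chosen spin is
    flipped unconditionally, with no accept/reject criterion.
    """
    N = len(sigma)
    lam = 0.5 * sigma * (J @ sigma)        # local fitnesses
    order = np.argsort(lam)                # worst fitness first
    w = np.arange(1, N + 1, dtype=float) ** -tau
    n = rng.choice(N, p=w / w.sum())       # rank drawn with P(n) ~ n^-tau
    sigma[order[n]] *= -1                  # unconditional flip
    return sigma
```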
In such cases, where the cost can be phrased in terms of a spin Hamiltonian MPV , the implementation of EO is particularly straightforward. The concept of fitness, however, is equally meaningful in any discrete optimization problem whose cost function can be decomposed into $`N`$ equivalent degrees of freedom. Thus, EO may be applied to many other NP-hard problems, even those where the choice of quantities for the fitness function, as well as the choice of elementary move, is less than obvious. One good example of this is the traveling salesman problem. Even there, we find that EO presents a challenge to more finely tuned methods.
In the traveling salesman problem (TSP), $`N`$ points (“cities”) are given, and every pair of cities $`i`$ and $`j`$ is separated by a distance $`d_{ij}`$. The problem is to connect the cities using the shortest closed “tour”, passing through each city exactly once. For our purposes, take the $`N\times N`$ distance matrix $`d_{ij}`$ to be symmetric. Its entries could be the Euclidean distances between cities in a plane — or alternatively, random numbers drawn from some distribution, making the problem non-Euclidean. (The former case might correspond to a business traveler trying to minimize driving time; the latter to a traveler trying to minimize expenses on a string of airline flights, whose prices certainly do not obey triangle inequalities!)
For the TSP, we implement EO in the following way. Consider each city $`i`$ as a degree of freedom, with a fitness based on the two links emerging from it. Ideally, a city would want to be connected to its first and second nearest neighbor, but is often “frustrated” by the competition of other cities, causing it to be connected instead to (say) its $`\alpha `$th and $`\beta `$th neighbors, $`1\le \alpha ,\beta \le N-1`$. Let us define the fitness of city $`i`$ to be $`\lambda _i=3/(\alpha _i+\beta _i)`$, so that $`\lambda _i=1`$ in the ideal case.
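Given a tour and a precomputed table of neighbor ranks, these fitnesses are cheap to evaluate; a sketch with assumed data structures:

```python
def tsp_fitness(tour, rank):
    """Fitness lambda_i = 3/(alpha_i + beta_i) for each city (sketch).

    tour: cyclic list of city indices; rank[i][j]: neighbor rank of j
    as seen from i (rank 1 = nearest neighbor), assumed precomputed.
    """
    N = len(tour)
    lam = {}
    for k, city in enumerate(tour):
        left, right = tour[k - 1], tour[(k + 1) % N]
        alpha, beta = rank[city][left], rank[city][right]
        lam[city] = 3.0 / (alpha + beta)
    return lam
```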
Defining a move class (step (3) in EO’s algorithm) is more difficult for the TSP than for graph partitioning, since the constraint of a closed tour requires an update procedure that changes several links at once. One possibility, used by SA among other local search methods, is a “two-change” rearrangement of a pair of non-adjacent segments in an existing tour. There are $`O(N^2)`$ possible choices for a two-change. Most of these, however, lead to even worse results. For EO, it would not be sufficient to select two independent cities of poor fitness from the rank list, as the resulting two-change would destroy more good links than it creates. Instead, let us select one city $`i`$ according to its fitness rank $`n_i`$, using the distribution $`P(n)n^\tau `$ as before, and eliminate the longer of the two links emerging from it. Then, reconnect $`i`$ to a close neighbor, using the same distribution function $`P(n)`$ as for the rank list of fitnesses, but now applied instead to a rank list of $`i`$’s neighbors ($`n=1`$ for nearest neighbor, $`n=2`$ for second-nearest neighbor, and so on). Finally, to form a valid closed tour, one link from the new city must be replaced; there is a unique way of doing so. For the optimal choice of $`\tau `$, this move class allows us the opportunity to produce many good neighborhood connections, while maintaining enough fluctuations to explore the configuration space.
We performed simulations at $`N=16`$, 32, 64, 128 and 256, in each case generating ten random instances for both the Euclidean and non-Euclidean TSP. The Euclidean case consisted of $`N`$ points placed at random in the unit square with periodic boundary conditions; the non-Euclidean case consisted of a symmetric $`N\times N`$ distance matrix with elements drawn randomly from a uniform distribution on the unit interval. On each instance we ran both EO and SA from random initial conditions, selecting for both methods the best of 10 runs. EO used $`\tau =4`$ (Eucl.) and $`\tau =4.4`$ (non-Eucl.), with $`16N^2`$ update steps<sup>2</sup><sup>2</sup>2Given these large values of $`\tau `$ and consequently low ranks $`n`$ chosen, an exact linear sorting of the fitness list was sufficient, rather than the approximate heap sorting used for graph partitioning.. SA used an annealing schedule with $`\mathrm{\Delta }T/T=0.9`$ and temperature length $`32N^2`$. These parameters were chosen to give EO and SA virtually equal runtimes. The results of the runs are given in Table 2, along with baseline results using an exact algorithm exact .
While the EO results trail those of SA by up to about 1% in the Euclidean case, EO significantly outperforms SA for the non-Euclidean (random distance) TSP. This may be due to the substantial configuration space energy barriers exhibited in non-Euclidean instances; equilibrium methods such as SA get trapped by these barriers, whereas non-equilibrium methods such as EO do not. (Interestingly, SA’s performance here diminishes rather than improves when runtimes are increased by using longer temperature schedules!) For Euclidean instances, the tour lengths found by EO on single runs were at worst 1% over the best-of-ten, and the tour lengths found by SA were at worst 4% over the best-of-ten; for non-Euclidean instances, these worst excesses were 5% (EO) and 10% (SA). Finally, note that one would not expect a general method such as EO to be competitive here with the more specialized optimization algorithms, such as Iterated Lin-Kernighan CLO ; JohnsonILK , designed particularly with the TSP in mind. But remarkably, EO’s performance in both the Euclidean and non-Euclidean cases — within several percent of optimality for $`N\le 256`$ — places it not far behind the leading specially-crafted TSP heuristics JohnsonTSP .
Our results therefore indicate that a simple extremal optimization approach based on self-organizing dynamics can often outperform state-of-the-art (and far more complicated or finely tuned) general-purpose algorithms, such as simulated annealing or genetic algorithms, on hard optimization problems. Based on its success on the generic and broadly applicable graph partitioning problem, as well as on the TSP, we believe the concept will be applicable to numerous other NP-hard problems. It is worth stressing that the rank ordering approach employed by EO is inherently non-equilibrium. Such an approach could not, for instance, be used to enhance SA, whose temperature schedule requires equilibrium conditions. This rank ordering serves as a sort of “memory”, allowing EO to retain well-adapted pieces of a solution. In this respect it mirrors one of the crucial properties noted in the Bak-Sneppen model PRE96 ; PRL97 . At the same time, EO maintains enough flexibility to explore further reaches of the configuration space and to “change its mind”. Its success at this complex task provides motivation for the use of extremal dynamics to model mechanisms such as learning, as has been suggested recently to explain the high degree of adaptation observed in the brain bakbrain .
Thanks to D. S. Johnson and O. Martin for their helpful remarks.
# Direct simulation of ion beam induced stressing and amorphization of silicon
## I Introduction
Mechanical response of materials to ion irradiation has implications for many materials applications. Ion processing of silicon is of widespread fundamental and technological importance due to its central role in the semiconductor industry. The high dose, large atomic number wafer implants used today lead to a significant amount of amorphization, which must be removed by subsequent annealing. Ion implant induced stresses can lead to such problems as substrate bending, delamination and cracking, and anomalous diffusion of dopants. The creation and modification of stress in silicon by ion irradiation has been examined experimentally by several research groups. Experimental measurements have shown that Si expands by approximately 1.8% upon ion beam induced amorphization. Little atomic scale modeling of the amorphization process has been conducted, although many attempts have been made to simulate the structure of amorphous Si (a-Si). These simulations usually involve a rapid quench of liquid Si and suffer from drawbacks such as limited system size and short simulation times. The vast majority of these simulations result in a structure with similar properties to experimental a-Si, except for the fact that the density is typically around 4% greater than crystalline Si (c-Si). Here, we study the formation and structure of a-Si by direct MD simulation of the ion beam amorphization process, rather than by a simulation of quenching.
## II Molecular Dynamics Simulation Scheme
We employ a minimal atomistic model of the radiation damage process and investigate stressing and deformation of the substrate. A full MD simulation incorporating a large enough section of the target material to completely contain the paths of many implanted ions, and to run for a realistic time is neither feasible, nor particularly desirable. What we wish to study is the response of a section of the material to the damage induced by the irradiation. In order to achieve this, we simulate a model system, that incorporates all the necessary features of the complete system.
At the ion energies used, the material response is a bulk, rather than a surface effect, so it is not necessary to explicitly include the surface in simulations. Hence, periodic boundary conditions are applied in all directions to minimize finite-size effects. Energy dissipation and equilibration of pressure are controlled by coupling the system to an external bath. This involves two parameters, $`\tau _T`$ and $`\tau _P`$, that are time constants controlling the rate at which temperature and pressure tend towards set values. This non-physical coupling gives us control over quenching and expansion of the system, but does limit the inferences we can draw about dynamic processes. We have analyzed the dependence of simulation results on the strength of these couplings in order to ensure that we do not introduce any systematic errors into our simulations. We assume that the ion dose is sufficiently low that the presence of ions in the material has a negligible effect and may be ignored. The effect of the irradiation is modeled by choosing primary knock-on atoms (PKAs) within the simulated region and assigning velocities to them from a distribution that can be dependent upon ion type, ion energy, and the depth in the substrate. PKAs are produced by collisions with the implanted ion, but the majority are due to collisions of knock-on atoms in sub-cascades. Hence our PKA velocity distribution must take account of all knock-on atoms that are created by ion implantation. In order to account for the free surface, the material is allowed to expand or contract in the direction normal to the surface; expansion is not allowed in the in-plane directions. The passage of high energy ions through the simulated region are rare events, between which the material will relax as energy is dissipated, then enter an essentially constant state for a long period of time. As this constant state contributes nothing to the response of the material, we omit it and start another PKA at this time. By taking this approach, we can reduce the system size to the extent that a representative set of simulations can be conducted on a few workstations in a matter of days.
## III Simulation Details
MD based ion implant simulations were used to determine realistic PKA velocity distributions for various ion types and energies, at differing substrate depths; the resulting distributions were similar. The magnitude of PKA velocities could be approximately described by a $`\chi ^2`$ distribution. For an incident ion beam perpendicular to the surface, the in-plane distribution of PKA velocity vectors was uniform. The angle between the PKA velocity vector and the ion beam could be approximated by a Gaussian distribution with mean and standard deviation of $`85^{\circ }`$ and $`10^{\circ }`$ respectively.
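A PKA velocity consistent with these distributions might be sampled as in the following sketch (the $`\chi ^2`$ degrees of freedom and the overall speed scale are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng()

def sample_pka_velocity(speed_scale, dof=3):
    """Draw one PKA velocity vector from the distributions above.

    Speed magnitude: chi-squared (dof is an assumed, illustrative choice);
    polar angle from the beam (z axis): Gaussian, mean 85 deg, sigma 10 deg;
    azimuthal (in-plane) direction: uniform.
    """
    speed = speed_scale * rng.chisquare(dof)
    theta = np.radians(rng.normal(85.0, 10.0))
    phi = rng.uniform(0.0, 2.0 * np.pi)
    return speed * np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])
```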
PKA energies were chosen such that the size of displacement cascades was less than the system dimensions, to ensure that parts of the same damage cascade could not interact through the periodic boundaries. The selection of coupling constants for pressure and temperature control is critical; too long coupling times lead to excessive cpu requirements, whilst too short coupling times lead to unphysical system behavior, e.g., rapid quenching of hot disordered material will produce an excessive number of defects. Therefore, preliminary simulations were conducted with various combinations of PKA energy, system size and temperature and pressure coupling constants to investigate and minimize the sensitivity of the simulation on these parameters.
The final simulations were conducted using PKA energies with a mean of 1 keV. All silicon atoms were coupled to a 300 K heatbath with a coupling constant, $`\tau _T`$, of 1.0 ps; this was determined to be large enough that increasing it had no effect on simulation results. A new PKA was initialized as soon as the system temperature was quenched to below 320 K. Stress in the direction normal to the surface was relaxed to zero with a time constant, $`\tau _P`$, of 0.1, 1.0, or 10.0 ps. No single value of this parameter could be used as it effectively represents the ability of the material to locally deform, both elastically and through plastic flow. As such it is affected by the size of the ion path, ion mass and energy, the ion dose rate and the total ion dose. We therefore ran three sets of simulations spanning two orders of magnitude range in this parameter. This range is expected to contain most experimentally accessible conditions. In order to relate stress to change in atomic positions, a Young’s modulus of 113.0 GPa was used for both crystalline and damaged Si; the value of this parameter is expected to vary during amorphization, but its exact value is not critical as variations will only affect the value of the non-critical time constant, $`\tau _P`$. The initial system was crystalline Si, with dimensions of approximately 50.0 Å $`\times `$ 50.0 Å $`\times `$ 50.0 Å, containing 6240 atoms, at zero pressure. In order to be able to simulate a sufficiently large sample, we use a many-body empirical potential for silicon. To our knowledge, the derivation of the pressure virial for this potential has not been previously published. A method of obtaining the virial, which may be applied to any many-body potential, is given in Appendix A.
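The two couplings act like Berendsen-style relaxation toward set values; the sketch below is our reading of such a scheme rather than the code actually used, with the stress-to-strain conversion based on the quoted Young's modulus:

```python
import numpy as np

def couple_heat_bath(vel, T_now, T_set, dt, tau_T):
    """Berendsen-style velocity rescaling toward the bath temperature."""
    lam = np.sqrt(1.0 + (dt / tau_T) * (T_set / T_now - 1.0))
    return vel * lam

def relax_normal_stress(Lz, sigma_zz, dt, tau_P, youngs=113.0e9):
    """Relax the surface-normal box length so the normal stress decays.

    Linear elasticity converts stress to a strain increment,
    strain = (dt / tau_P) * sigma_zz / E, so compressive (positive)
    stress expands the box and tensile stress contracts it.
    """
    return Lz * (1.0 + (dt / tau_P) * sigma_zz / youngs)
```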
## IV Amorphization of silicon during ion irradiation
Three sets of simulations are presented, the only difference between them being the magnitude of the pressure control coupling constant. In each case 9 separate simulations were conducted for each parameter set and the final results averaged. Data was recorded at the end of each quench, i.e., immediately prior to initializing a PKA, to generate radial distribution functions (RDFs), and distributions of bond angles, bond lengths, and atomic coordination. The simulation parameters chosen resulted in a mean time between PKA initialization of approximately 6.25 ps.
During irradiation, damage accumulates within the sample, resulting in the development of stress. As the simulation progresses, the material expands or contracts to relieve the normal stress, as shown in Figs. 1 to 3. Anisotropic deformation occurs by flow of material to relieve the in-plane stress. The normal stress is relaxed to zero in the case of the two shorter $`\tau _P`$ values and remains negative for the longer $`\tau _P`$. The stress would be expected to reach an equilibrium state for any value of $`\tau _P`$, given a sufficiently long simulation time, but cpu constraints prevented us from achieving this for the largest value of $`\tau _P`$. The in-plane stress is initially positive (compression) due to damage accumulation. Once a sufficient density of damage is present the sample is able to relax via flow of material and the stress becomes negative (tension). The in-plane negative stress is due to the outflow of material during the ‘thermal spike’ phase of radiation damage. As the sample cools during expansion a tensile state is reached, but material is not able to flow back to relieve the stress. A similar response to ion irradiation has been previously observed in wafer curvature experiments. At long times, the damage appears to saturate and the material maintains a dynamic equilibrium, with a constant in-plane stress of around $`-0.3`$ GPa. The percentage expansion of the system is dependent upon the value of $`\tau _P`$; smaller values (faster relaxation) result in expansion of the material, whilst the largest value gives an initial expansion followed by contraction. Due to cpu time constraints only the intermediate case was followed until the system reached an equilibrium condition. In this case the material reaches a density around 1.9% less than c-Si, approximately the same as the experimental value of 1.8%.
The distributions shown in Figs. 4 to 7 were calculated for the final state of each amorphization simulation, i.e., they correspond to the final data points in Figs. 1 to 3. The distribution of bond angles is shown in Fig. 4. To calculate the distribution, angles were weighted by the product of the Tersoff cut-off functions of the two bonds involved. The small peak at $`60^{\circ }`$ corresponds to three-fold rings and five-fold coordinated atoms, the shoulder at around $`80^{\circ }`$ is due to four fold rings, and the main peak is due to five and six fold rings. The distribution due to c-Si is also shown to illustrate the effect of damage and amorphization; the frequency is scaled by $`1/3`$ in order to allow the sharp peak to appear on the same plot as the a-Si results. The mean bond angle in the amorphized samples is around $`2^{\circ }`$ less than the tetrahedral angle, with a standard deviation of approximately $`19^{\circ }`$.
The distribution of bond lengths is shown in Fig. 5. When interpreting this it is useful to recall that the Tersoff switching function acts between 2.7 and 3.0 Å. The small peak between 2.9 and 3.0 Å is therefore due to non-bonding (repulsive) interactions. Bond lengths become about 0.09 Å greater than those in c-Si. The angle and length distributions due to the different relaxation rates are so similar that they cannot be easily distinguished. In order to count the coordination of atoms, we have to specify a criterion to decide if any pair of atoms are bonded. Based on the bond-length distribution we count all atoms with interaction distances within the mid point of the switching function (2.85 Å) to be bonded. The resulting coordination distribution is shown in Fig. 6. Very few ($`\sim `$3%) undercoordinated atoms are produced during amorphization, but approximately 26% become five-fold coordinated, and occasional ($`\sim `$2%) six-fold coordination occurs. Even though the average coordination of atoms increases to approximately 4.27 during amorphization, expansion is possible as bond-lengths also increase.
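The coordination count with the 2.85 Å criterion is straightforward; a brute-force sketch (a production code would use neighbor lists):

```python
import numpy as np

def coordination(pos, box, r_bond=2.85):
    """Neighbor count per atom using the 2.85 A midpoint criterion.

    pos: (N, 3) positions; box: (3,) cell lengths. Minimum-image
    convention handles the periodic boundaries; this O(N^2) loop is
    for illustration only.
    """
    coord = np.zeros(len(pos), dtype=int)
    for i in range(len(pos)):
        d = pos - pos[i]
        d -= box * np.round(d / box)                     # minimum image
        r2 = (d * d).sum(axis=1)
        coord[i] = np.count_nonzero(r2 < r_bond**2) - 1  # exclude self
    return coord
```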
Table I contains structural information on the samples produced by these simulations. The mean and standard deviation of the structural parameters are given, and the percentage expansion of the irradiated samples and potential energy per atom are given relative to c-Si at 300 K. The Tersoff potential gives a cohesive energy for Si of 4.63 eV/atom at zero K and 4.59 eV/atom at 300 K (this is in agreement with equi-partition of kinetic and potential energy, as 300 K corresponds to a kinetic energy of 0.04 eV/atom). In the final damaged structures, the potential energy per atom is approximately 0.41 eV higher than that of c-Si at 300 K. This stored energy is approximately 8.9% of c-Si cohesive energy and is primarily due to bond angle distortions of around $`19^{\circ }`$ as shown in Fig. 4 and Table I.
The resulting radial distribution functions, shown in Fig. 7 are indistinguishable between the various simulation sets, but differ from the experimental data obtained by neutron diffraction experiments. The broader peaks in the simulated RDFs indicate that the structures contain a higher degree of disorder than the experimental samples. There is also a feature at around 3 Å, where the artifact in the shape of the first minimum reflects the abrupt cutoff in the empirical potential. The first two peaks of the a-Si RDF correspond to those for c-Si showing the existence of short-range order and a tetrahedral like environment for each atom. There is little correlation between the positions of the other peaks, except for the third peak in the experimental data of Kugler et al.; this may be an indication of the presence of micro-crystallites in their sample.
## V Annealing
The damage simulations were carried out at room temperature with extremely limited times for structural relaxation. Therefore, the amorphized samples are expected to contain a large number of high energy defects; this is demonstrated by the difference between the experimental and simulated RDFs. In order to directly compare experimental and simulated samples, we have therefore carried out annealing simulations. The final structures from each set of damage simulations were taken, and subjected to four separate anneals. Anneals were done at room temperature, between room temperature and the a-Si to c-Si transition temperature, between the transition temperature and the c-Si melting point, and at the melting point. The Tersoff potential over-predicts the c-Si melting temperature, giving a value of approximately 3000 K. Therefore we applied the following scaling between actual and simulated temperatures:
$`T_{Tersoff}=\{\begin{array}{cc}T_{Experiment}\hfill & T_{Experiment}\le 300\mathrm{K}\hfill \\ & \\ 1.947T_{Experiment}-283.994\mathrm{K}\hfill & T_{Experiment}>300\mathrm{K}\text{.}\hfill \end{array}`$ (1)
This resulted in annealing temperatures of 300 K, 1121 K, 2194 K, and 3000 K, which approximately correspond to experimental temperatures of 300 K, 722 K, 1273 K, and 1687 K respectively. Anneals were continued until no further change in any of the measured structural parameters was observed. All simulation parameters were kept the same as those used for the damage simulations apart from the anneal temperature. After annealing, the samples were quenched to 300 K over a time of 40 ps before the final structural properties were measured.
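In code, the mapping of Eq. (1) is simply:

```python
def tersoff_temperature(T_exp):
    """Map an experimental temperature (K) to the Tersoff scale, Eq. (1)."""
    return T_exp if T_exp <= 300.0 else 1.947 * T_exp - 283.994

# e.g. tersoff_temperature(1687.0) -> ~3000 K, the Tersoff melting point
```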
The structural changes during annealing are shown in the following plots for the $`\tau _P`$=1 ps damage simulation. The high temperature anneals resulted in large density changes as shown in Fig. 8. After cooling to room temperature to remove the effect of thermal expansion, the final densities of the annealed samples corresponded to 2.1% and 1.9% expansion, and 0.4% and 1.0% contraction relative to c-Si respectively for the 300 K, 1121 K, 2194 K, and 3000 K anneals. The room temperature anneal gives very little structural modification in the time simulated; all distributions are indistinguishable from the unannealed sample.
The 1121 K anneal results in modifications to bond angle, bond length, and coordination distributions and to the RDF, whilst the density is almost unchanged. This indicates that local defects are being removed without any global structural reorganization. This is in agreement with experimental studies which indicate that defects in a-Si are similar to defects in c-Si, i.e., they behave as interstitials and vacancies within the amorphous network. Annealing therefore occurs through diffusion and annihilation of defects, rather than by a reorganization of the network. Fig. 9 shows the final distribution of bond angles; the mean increases by approximately $`1^{\circ }`$, to around $`1^{\circ }`$ less than the c-Si angle, while the standard deviation of bond angles is reduced to $`16^{\circ }`$. Bond lengths are reduced by an average of 0.03 Å, to 0.06 Å longer than those in c-Si, as shown in Fig. 10. The anneal results in a reduction in 3-fold and 4-fold rings as evidenced by the reduction in the frequency of bond angles of $`60^{\circ }`$ and $`80^{\circ }`$, and the reduction in 5-fold coordinated atoms. After annealing and quenching, approximately 1% of atoms are 3-fold coordinated, 83% of atoms are 4-fold coordinated, and 16% are 5-fold coordinated, as shown in Fig. 11. The calculated radial distribution function, shown in Fig. 12, is in very good agreement with the experimental data, with the only discrepancy due to the short range of the potential cutoff. Annealing leads to an energy gain of approximately 0.13 eV per atom, to give an energy in the final relaxed structure 0.28 eV higher than that of c-Si. This stored energy is approximately 6.1% of c-Si cohesive energy and is primarily due to bond angle distortions of around $`16^{\circ }`$ as shown in Fig. 9.
The 2194 K anneal causes partial melting of the amorphous material and subsequent reconstruction into a higher density structure. The final quenched structure contains more defects than the 1121 K annealed sample, but this may be in part due to the rapid quench from a semi-liquid state. We observe no indication of any recrystallization in the time simulated. The 3000 K anneal clearly involves complete melting of the sample, as the density increases to above that of crystalline Si. Again, the annealed sample contains more defects than the 1121 K annealed sample, but less than the un-annealed sample.
## VI Conclusions
We have investigated the structural response of crystalline silicon to ion irradiation. We are able to simulate stress generation and structural relaxation with a relatively simple model. Simulation of continual radiation followed by annealing generates amorphous silicon with a low level of defects. Amorphous silicon prepared by various experimental methods usually has a density between 2% and 10% less than that of crystalline material, whereas atomistic simulations usually produce samples with a density around 4% higher than c-Si. The fact that a-Si prepared by simulations of quenching liquid Si has a higher density than c-Si is not surprising. Amorphous silicon has a structure similar to that of the liquid, which is 5% denser than the crystal form at the melting point. Therefore a defect free continuous random network (CRN) structure for a-Si would be expected to result in a density greater than c-Si.
We conclude that by conducting a direct simulation of radiation induced amorphization, we have produced the experimentally observed structure of ion beam amorphized Si. The structure differs from the proposed defect free CRN models, and differs from a-Si structures formed by simulations of quenching. Although this metastable structure is at a higher energy than CRN a-Si, it cannot be annealed to produce that structure, as transformation to c-Si, or melting will occur at a lower temperature. This leads us to question what is meant by ‘amorphous’ Si, as an experimental sample termed amorphous may contain micro-crystals, CRN structures, and defects such as vacancies, interstitials, c-a boundaries, etc. In some sense the never experimentally observed high density structure can be regarded as true amorphous material, whereas other less dense materials must be regarded as containing defects at the very least, with a detailed structure that is preparation dependent.
## VII Acknowledgments
We thank M. Nastasi for illuminating discussions during initial parts of this work. This work was performed under the auspices of the United States Department of Energy.
## A Derivation of the Virial for Many-Body Potentials
### 1 Introduction
While the pressure virial is easy to calculate for a thermodynamic system described by pair potentials, the extension to systems modeled by many body potentials is not obvious. Here we derive the virial for Tersoff type potentials, but note that the method is readily modified to other many body, or three body potentials so long as the configuration energy can be written down as a function of atomic coordinates.
Following Smith we write the pressure with an explicit dependence on volume. This is achieved by relating absolute atomic coordinates, $`𝐫`$, to scaled coordinates, $`𝝆`$ through the volume, $`V`$, so that under isotropic expansion, or contraction of the system, the scaled coordinates of atoms are unchanged:
$$𝐫=V^{1/3}𝝆.$$
(A1)
The pressure, $`P`$, is then given by:
$$P=NkT/V-\mathrm{\Phi }/3V,$$
(A2)
where the virial, $`\mathrm{\Phi }`$, is:
$$\mathrm{\Phi }=3V\partial U([V^{1/3}𝝆]^N)/\partial V.$$
(A3)
### 2 Application to the Tersoff Potential
The configuration energy, $`U`$, is given by:
$$U=\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }u_{ij}(𝐫^N),$$
(A4)
where $`u_{ij}`$ is the energy of the bond between atoms $`i`$ and $`j`$, as a function of $`𝐫^N`$, the set of atomic coordinates. The functional form is:
$$u_{ij}=f(r_{ij})[V_R(r_{ij})-b_{ij}V_A(r_{ij})],$$
(A5)
where $`f(r_{ij})`$ is a cutoff function that makes the interactions short ranged, $`V_R(r_{ij})`$ and $`V_A(r_{ij})`$ are pair terms, $`r_{ij}`$ is the bond length, and $`b_{ij}`$ is a many body term. As $`b_{ij}`$ is a function of the positions of neighboring atoms of $`i`$ and $`j`$, these atoms experience a force due to the $`i`$-$`j`$ bond. Let the set of atoms involved in the calculation of $`u_{ij}`$, i.e. atoms $`i`$, $`j`$ and neighbors, be denoted by $`S_{ij}`$. Substituting Eq. (A4) into Eq. (A3) gives:
$$\mathrm{\Phi }=3V\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\partial u_{ij}([V^{1/3}𝝆]^N)/\partial V.$$
(A6)
Since from Eq. (A1):
$$\partial 𝐫=\frac{1}{3}V^{-2/3}𝝆\partial V,$$
(A7)
this can be rewritten as:
$$\mathrm{\Phi }=3V\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\frac{\partial u_{ij}(𝐫^N)}{\partial 𝐫^N}\cdot \frac{V^{1/3}𝝆^N}{3V}.$$
(A8)
Explicitly writing the dependence on interacting atoms gives:
$$\mathrm{\Phi }=3V\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\underset{k\in S_{ij}}{\sum }\frac{\partial u_{ij}(𝐫^N)}{\partial 𝐫_k}\cdot \frac{V^{1/3}𝝆_k}{3V},$$
(A9)
or, in terms of atomic positions:
$$\mathrm{\Phi }=-\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\underset{k\in S_{ij}}{\sum }𝐟_k^{ij}\cdot 𝐫_k,$$
(A10)
where $`𝐟_k^{ij}`$ is the force on atom $`k`$ due to the bond between atoms $`i`$ and $`j`$. The atomic positions, $`𝐫_k`$, are local to the bond being considered, e.g., periodic boundary conditions are accounted for before the potential and force calculations. All local atomic coordinates can in fact be translated to be relative to the position of atom $`i`$ for each bond, so the force on atom $`i`$ can be neglected:
$$\mathrm{\Phi }=-\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\underset{k\in S_{ij}^{\prime }}{\sum }𝐟_k^{ij}\cdot 𝐫_{ik}.$$
(A11)
If the interaction potential is pairwise, i.e., $`b_{ij}`$ depends only on atoms $`i`$ and $`j`$, Eq. (A11) reduces to the pair virial:
$$\mathrm{\Phi }=-\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }𝐟_j^{ij}\cdot 𝐫_{ij}.$$
(A12)
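In a simulation code, Eqs. (A11) and (A2) amount to a simple accumulation over bonds; the following sketch assumes a per-bond data layout of our choosing:

```python
import numpy as np

def many_body_virial(bonds):
    """Accumulate Phi = -sum_bonds sum_k f_k . r_ik, per Eq. (A11).

    bonds: iterable of (forces, separations) pairs, each array of
    shape (m, 3): the forces a bond exerts on its m participating
    atoms and their separations from atom i (an assumed layout).
    """
    return -sum(np.sum(f * r) for f, r in bonds)

def pressure(N, T, V, phi, kB=1.380649e-23):
    """P = N k T / V - Phi / (3 V), per Eq. (A2)."""
    return N * kB * T / V - phi / (3.0 * V)
```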
Individual elements of the pressure tensor, $`𝚽`$, can be obtained by replacing Eq. (A1) with:
$$𝐫=𝐋𝝆,$$
(A13)
where $`𝐋`$ is a $`3\times 3`$ matrix with determinant $`V`$. For example, by using:
$$𝐋=\left(\begin{array}{ccc}L_{xx}& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right),$$
(A14)
one obtains:
$$\mathrm{\Phi }_{xx}=-3\underset{i=1}{\overset{N}{\sum }}\underset{j<i}{\sum }\underset{k\in S_{ij}^{\prime }}{\sum }f_{x_k}^{ij}r_{x_{ik}}.$$
(A15)
# Longitudinal subtleties in diffusive Langevin equations for non-Abelian plasmas
## I Introduction
Non-perturbative processes in a hot non-Abelian plasma at or near equilibrium are associated with slow evolution of magnetic gauge fields.<sup>*</sup><sup>*</sup>*This is explicitly argued in ref. , but this fact is also implicit in earlier analysis of specific thermal effects such as plasmon damping rates of fast-moving particles and the color conductivity . The characteristic spatial scale $`R`$ of non-perturbative gauge field fluctuations and the associated time scale $`t`$ for their evolution are of order
$$R\sim \frac{1}{g^2T},t\sim \frac{1}{g^4T\mathrm{ln}(1/g)},$$
(2)
for small coupling. Alternatively, the characteristic spatial momentum $`k`$ and frequency $`\omega `$ are
$$k\sim g^2T,\omega \sim g^4T\mathrm{ln}(1/g).$$
(3)
For a review, see the introduction of our earlier paper . The logarithm appearing in the time scale is a recent and interesting result of Bödeker , whose physical interpretation we discuss in ref. .
Throughout this discussion, “hot” means that the temperature is large enough that the running coupling $`\alpha (T)`$ is small, that chemical potentials are ignorable, and that there is no spontaneous symmetry breaking. Examples of non-perturbative processes include chirality violation in hot QCD, and baryon number violation in hot electroweak theory (in its high-temperature symmetric phase).
Bödeker has proposed an effective theory appropriate for the scales (*I) above. His effective theory is a classical field theory that involves only gauge fields with dynamics governed by the diffusive Langevin equation
$$\sigma 𝐄=𝐃\times 𝐁-𝜻.$$
(4)
Here, $`𝐃`$ is the covariant derivative acting in the adjoint representation. In Bödeker’s proposal, $`𝜻`$ is a Gaussian white noise random force, normalized asWe will scale our gauge fields by a factor of $`g`$, so covariant derivatives contain no explicit couplings while the action (or energy) has an overall factor of $`1/g^2`$. In addition, we will take the gauge group generators, and also the gauge field $`𝐀𝐀^aT^a`$, to be anti-Hermitian. Hence, the covariant derivative is simply $`𝐃=\mathbf{}+𝐀`$, and for the adjoint representation $`(T^a)_{bc}=f_{bac}`$ with the structure constants $`f_{abc}`$ real and totally anti-symmetric.
$$⟨\zeta _i^a(t,𝐱)\zeta _j^b(t^{\prime },𝐱^{\prime })⟩=2\sigma g^2T\delta ^{ab}\delta _{ij}\delta (t-t^{\prime })\delta ^{(3)}(𝐱-𝐱^{\prime }).$$
(5)
where $`i,j`$ and $`a,b`$ are spatial vector and adjoint color indices, respectively. This effective theory is supposed to give a quantitative description of non-perturbative physics in the hot plasma to leading order in the logarithm of the coupling. In other words, corrections to this effective theory are suppressed only by powers of $`1/\mathrm{ln}(1/g)`$. In ref. , we showed that $`\sigma `$ can be interpreted as the color conductivity We are using “color” as a descriptive name for some non-Abelian gauge field. It should be emphasized that all discussion of “color” is applicable to the dynamics of, in particular, the SU(2) electroweak gauge field. of the plasma, which is given by
$$\sigma \simeq \frac{m_{\mathrm{pl}}^2}{\gamma _\mathrm{g}},$$
(6)
where $`m_{\mathrm{pl}}`$ is the plasma frequency and
$$\gamma _\mathrm{g}\simeq \alpha C_\mathrm{A}T\mathrm{ln}(1/g)$$
(7)
is the damping rate for hard thermal gauge bosons . The $`\simeq `$ sign indicates equality at leading logarithmic order. \[That is, we are not distinguishing $`\mathrm{ln}(2/g)`$ from $`\mathrm{ln}(1/g)`$ in Eq. 7, but the coefficient of the logarithm is correct.\] The plasma frequency $`m_{\mathrm{pl}}`$ is well known<sup>§</sup><sup>§</sup>§ For hot electroweak theory with a single Higgs doublet, for instance, $`m_{\mathrm{pl}}^2=\frac{1}{18}(5+2n_\mathrm{f})g^2T^2`$ at leading order in $`g`$, where $`n_\mathrm{f}=3`$ is the number of fermion families, and the adjoint Casimir $`C_\mathrm{A}=2`$ in (7). For QCD with $`n`$ flavors of quarks, $`m_{\mathrm{pl}}^2=\frac{1}{3}\left(1+\frac{n}{6}\right)g^2T^2`$ and $`C_\mathrm{A}=3`$, where $`n`$ is the number of relevant quark flavors (u, d, s, c, b, t). at leading order in coupling and is of order $`gT`$.
Bödeker’s effective theory is well suited to numerical simulation because it is classical, insensitive to ultraviolet cut-offs , and when cast into $`A_0=0`$ gauge generates a straightforward local equation of motion for the evolution of $`𝐀`$:
$$\sigma \frac{d}{dt}𝐀=-𝐃\times 𝐃\times 𝐀+𝜻.$$
(8)
A numerical investigation of Bödeker’s effective theory and its implication for electroweak baryon number violation has been recently carried out by Moore .
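As a toy illustration of how Eq. (8) can be integrated, consider its Abelian (linearized) analogue on a periodic grid, where covariant derivatives reduce to ordinary ones; a genuine non-Abelian study such as Moore's requires a proper lattice gauge discretization, so the following sketch is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def curl(F, dx=1.0):
    """Periodic central-difference curl of F, shape (3, L, L, L)."""
    d = lambda G, ax: (np.roll(G, -1, ax) - np.roll(G, 1, ax)) / (2.0 * dx)
    return np.stack([d(F[2], 1) - d(F[1], 2),
                     d(F[0], 2) - d(F[2], 0),
                     d(F[1], 0) - d(F[0], 1)])

def langevin_step(A, dt, sigma, g2T, dx=1.0):
    """One Euler-Maruyama step of sigma dA/dt = -curl(curl A) + zeta.

    Abelian toy model only. The noise standard deviation discretizes
    the white-noise correlator, with the delta functions becoming
    1/dt and 1/dx**3 on the grid (our discretization choice).
    """
    zeta = rng.normal(0.0, np.sqrt(2.0 * sigma * g2T / (dt * dx**3)), A.shape)
    return A + (dt / sigma) * (-curl(curl(A, dx), dx) + zeta)

# Usage: A = np.zeros((3, 16, 16, 16)); A = langevin_step(A, 0.01, 1.0, 1.0)
```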
Nevertheless, there is something peculiar about the effective theory (4). In a high temperature plasma, static electric fields (or, more generally, longitudinal fields) are Debye screened . The screening distance is of order $`1/gT`$, which is small compared to the spatial scale $`R\sim 1/g^2T`$ of interest to us. More generally, the longitudinal modes of the gauge field are screened while, at low frequencies $`\omega \ll k`$, the transverse modes are not. Because of Debye screening, it is the transverse electric and magnetic fields which are relevant for producing non-perturbative fluctuations at the scales (*I) quoted earlier. Longitudinal fields are irrelevant. Nonetheless, Bödeker’s effective theory (4) does describe long-distance longitudinal fluctuations. The longitudinal fields are those pieces of $`𝐄`$ which contribute to $`𝐃\cdot 𝐄`$ and which perturbatively correspond to polarizations parallel to the momentum $`𝐤`$. Dotting $`𝐃`$ into both sides of (4), one sees that
$$\sigma 𝐃\cdot 𝐄=-𝐃\cdot 𝜻.$$
(9)
$`𝐃\cdot 𝐄`$ is therefore not zero, and so the fields in Bödeker’s effective theory (4) are not purely transverse. Our purpose in this paper will be to show two things: first, that Bödeker’s original derivation of the long-distance longitudinal dynamics relied on a questionable approximation which ignored subtleties associated with longitudinal dynamics, but that the end result is nevertheless correct; and, secondly, that the longitudinal dynamics is irrelevant and may be removed altogether, if one is purely interested in describing physical, gauge-invariant quantities that depend only on the transverse fields (for example, the rate of anomalous charge violation). The last point would be trivial in an Abelian theory, because then (4) would be linear in the fields and could be projected into one equation involving only the transverse fields and another independent equation involving only the longitudinal fields. The point is much less trivial in a non-Abelian theory because of the non-linearity of (4).
## II Review of Bödeker’s derivation
### A The effective Boltzmann-Vlasov equation
We will refer to gauge fields associated with the scales of interest (*I) as “soft” fields. In contrast, the dominant excitations in the hot plasma correspond to momenta of order $`T`$ and will be called “hard.” On his way to deriving the effective theory (4) for the soft gauge fields, Bödeker first derived an effective Boltzmann-Vlasov equation for the interaction of those fields with hard excitations:
$$(D_t+𝐯\cdot 𝐃_𝐱)W-𝐄\cdot 𝐯-\xi =-\delta C[W],$$
(11)
$$D_\nu F^{\mu \nu }=J^\mu \equiv m_\mathrm{D}^2⟨v^\mu W(𝐯)⟩_𝐯,$$
(12)
where
$$\delta C[W](𝐯)\equiv \gamma _\mathrm{g}⟨\mathcal{I}(𝐯,𝐯^{\prime })W(𝐯^{\prime })⟩_{𝐯^{\prime }},$$
(14)
and
$$\mathcal{I}(𝐯,𝐯^{\prime })\equiv \delta ^{(2)}(𝐯-𝐯^{\prime })-\frac{4}{\pi }\frac{(𝐯\cdot 𝐯^{\prime })^2}{\sqrt{1-(𝐯\cdot 𝐯^{\prime })^2}}.$$
(15)
Here, $`m_\mathrm{D}^2\equiv 3m_{\mathrm{pl}}^2`$ is the leading-order Debye mass (squared). The first equation (11) is a linearized Boltzmann equation for the hard particles in the presence of a soft electromagnetic field, where $`W(𝐯,𝐱,t)`$ represents the color distribution of those particles and $`𝐯`$ is a unit vector representing the hard particles’ velocities.Technically, $`W`$ is the adjoint representation piece of the density matrix describing the color charges of the hard excitations, summed over the various species of excitations and integrated over the energy of excitations (for a fixed direction of motion $`𝐯`$). It is normalized in a way that simplifies the resulting equation. See ref. for the explicit definition. $`\delta C`$ represents a linearized collision term for $`2\leftrightarrow 2`$ scattering that randomizes the color charges of the hard particles , and $`\xi `$ is a source of random thermal noise. The second equation (12) is Maxwell’s equation, where all the fields on the left-hand side are to be understood as soft fields, and the current on the right side is the soft-momentum component of the current created by hard excitations. This current is proportional to the density $`W`$ of hard particles and the velocities of those particles, where $`v^\mu `$ means $`(1,𝐯)`$. In the explicit form (12) for the collision term, $`⟨\cdots ⟩_𝐯`$ denotes averaging over the direction of $`𝐯`$ and $`\delta ^{(2)}`$ is a $`\delta `$ function defined on the unit two-sphere with normalization
$$⟨\delta ^{(2)}(𝐯-𝐯^{\prime })f(𝐯^{\prime })⟩_{𝐯^{\prime }}=f(𝐯).$$
(16)
See Refs. for the derivation of the explicit form (12b) of the linearized collision operator.
One may avoid worrying about the details of noise terms such as $`\xi `$ until one reaches the final effective equation (4), at which point it is possible to then argue how the noise must in fact appear . However, since in this paper we will be discussing various subtleties, it will be useful to keep track of the noise explicitly at each step we consider. In particular, Bödeker derived that the appropriate normalization of the noise in the effective Boltzmann-Vlasov equation (II A) is related to the collision integral:
$$⟨\xi ^a(t,𝐱,𝐯)\xi ^b(t^{\prime },𝐱^{\prime },𝐯^{\prime })⟩=\frac{2g^2T}{3\sigma }\mathcal{I}(𝐯,𝐯^{\prime })\delta ^{ab}\delta (t-t^{\prime })\delta ^{(3)}(𝐱-𝐱^{\prime }).$$
(17)
We will not review any further the origin of the effective Boltzmann-Vlasov equations (II A) and direct the reader instead to Bödeker’s original work and our alternative derivation . It is in the step from these kinetic equations to Bödeker’s final effective theory (4) that subtleties in the treatment of longitudinal physics creep in, and that is the focus of this paper.
### B Solving for $`W`$
Bödeker obtains his final effective theory (4), at leading-log order, from the effective Boltzmann-Vlasov equations (II A) by arguing that the covariant derivative terms in the Boltzmann equation (11) are together of order $`g^2TW`$ and so can be ignored compared to the collision term, which is of order $`\gamma _\mathrm{g}W\sim (g^2T\mathrm{ln}g^{-1})W`$ and hence larger by a logarithm. There is an important subtlety to this approximation which will be examined in the next section. But accepting this argument at face value for now, if one drops the covariant derivative terms then the Boltzmann equation becomes simply
$$𝐄\cdot 𝐯+\xi \simeq \delta C[W].$$
(18)
Formally, the solution is
$$W=(\delta C)^{-1}(𝐄\cdot 𝐯+\xi ),$$
(19)
where $`\delta C`$ is to be understood here as an operator acting on the space of (adjoint-representation) functions of a unit vector $`𝐯`$. This result for $`W`$ yields the spatial current appearing in (12),
$$𝐉=m_\mathrm{D}^2⟨𝐯(\delta C)^{-1}(𝐄\cdot 𝐯+\xi )⟩_𝐯.$$
(20)
Next note that $`\delta C`$ preserves the parity (in $`𝐯`$) of functions it acts on. In other words, $`\delta C`$ maps even (odd) functions of $`𝐯`$ into even (odd) functions of $`𝐯`$. (In contrast, the $`𝐯\cdot 𝐃_𝐱`$ operator that we dropped does not.) Moreover, in the space of odd functions of $`𝐯`$, $`\delta C`$ as given by (12) reduces to simply $`\delta C=\gamma _\mathrm{g}`$. So, since $`\delta C`$ is a symmetric operator, and since it acts to the left on the odd function $`𝐯`$ in (20), we can replace $`(\delta C)^{-1}`$ by $`\gamma _\mathrm{g}^{-1}`$ in that equation to obtain
$$𝐉=\frac{m_\mathrm{D}^2}{\gamma _\mathrm{g}}⟨𝐯(𝐄\cdot 𝐯+\xi )⟩_𝐯=\sigma 𝐄+𝜻,$$
(21)
where
$$𝜻\equiv 3\sigma ⟨𝐯\xi ⟩_𝐯.$$
(22)
Using the correlation (17) for $`\xi `$, one produces the correlation (5) asserted earlier for $`𝜻`$. Taking the spatial part of the Maxwell equations (12) and dropping the $`d𝐄/dt`$ term which, for the scales (*I) of interest, is smaller (by four powers of coupling) than the $`\sigma 𝐄`$ term, one obtains Bödeker’s final effective theory (4).
## III Longitudinal subtleties
In the introduction, we noted that Bödeker’s effective theory (4) contains a fluctuating longitudinal electric field. This may seem puzzling since longitudinal electric fields are Debye screened. In this section, we will take a closer look at how both Debye screening, and Bödeker’s effective Langevin equation, do emerge from the Boltzmann-Vlasov equations (II A).
### A Zero mode of $`\delta C`$
In the last section, the Boltzmann equation for $`W`$ was simplified, at leading-log order, by arguing that $`\delta C`$ dominates over the convective derivative $`D_t+𝐯𝐃_𝐱`$ by a power of $`\mathrm{ln}(g^1)`$. This is not quite correct, however, because the operator $`\delta C`$ has an eigenvalue which is not order $`\gamma _\mathrm{g}`$ and which does not dominate over the convective derivative; specifically, $`\delta C`$ has a zero mode.
The necessity of this zero mode was noted by Bödeker, who observed that the effective Maxwell equation (12) for the soft fields requires conservation $`D_\mu J^\mu =0`$ of the current $`J^\mu =m_\mathrm{D}^2⟨v^\mu W(𝐯)⟩_𝐯`$ generated by the hard particles. From (11), this conservation requires $`⟨\delta C[W]⟩_𝐯=0`$, which is indeed satisfied by (12).
The fact $`⟨\delta C[W]⟩_𝐯=0`$ can be rephrased to say that the symmetric operator $`\delta C`$ has null states: it annihilates anything that is independent of $`𝐯`$. (This can be written in bra-ket notation in $`𝐯`$-space as $`⟨\text{constant}|\delta C|W⟩=⟨W|\delta C|\text{constant}⟩=0`$ for any $`W`$.) This point will be important later on, so let us give an alternative way of understanding it. The collision term $`\delta C`$ does not care, at leading-log order, about the dynamics of the soft fields. In particular, it does not care that the soft effective theory is a gauge theory, with a local color symmetry, instead of a non-gauge theory, with merely a global color symmetry. So, from the point of view of the calculation of $`\delta C`$ at leading-log order, the theory could have been one where it was meaningful to talk about the total color charge of the system. If one then imagined adding an infinitesimal chemical potential $`\mu `$ for this total color charge, the resulting equilibrium density would be
$$n=\left[e^{\beta (ϵ_𝐩-g\mu _aT^a)}\mp 1\right]^{-1}=n_0+n_0(1\pm n_0)\beta g\mu _aT^a+O(\mu ^2)$$
(23)
for each particle type, where $`n_0`$ is the $`\mu =0`$ equilibrium distribution. In equilibrium, the collision term in a Boltzmann equation always vanishes by detailed balance. Different values of $`\mu `$ correspond to different equilibrium states, and the collision term must therefore vanish for all $`\mu `$. That means that the linearized deviation
$$\delta n=n_0(1\pm n_0)\beta g\mu _aT^a$$
(24)
of the equilibrium distribution (23) away from $`n_0`$ must correspond to a null state of the linearized collision operator $`\delta C`$. The deviation (24) is isotropic and homogeneous—it is independent of both $`𝐯`$ and $`𝐱`$. As a result, when re-expressed in terms of the function $`W(𝐱,𝐯)`$ used to parametrize color distributions of the hard particles in the linearized Boltzmann equation (11), the deviation (24) corresponds to $`W(𝐱,𝐯)=`$ constant. That means that a constant $`W`$ is a null vector of $`\delta C`$. But since collisions are local in $`𝐱`$ (in the effective theory), the $`𝐱`$ dependence of $`W`$ is irrelevant, and so any $`W`$ which does not depend on $`𝐯`$ is a null vector of the linearized collision operator $`\delta C`$.
### B Longitudinal and transverse projections
Before continuing, it is worthwhile to introduce longitudinal and transverse projection operators. Perturbatively, the longitudinal projection operator for the electric field is
$$(P_\mathrm{L}^{\mathrm{pert}})^{ij}=\widehat{k}^i\widehat{k}^j=\nabla ^i\frac{1}{\nabla ^2}\nabla ^j.$$
(25)
The gauge-covariant non-perturbative generalization is
$$P_\mathrm{L}^{ij}\equiv D^i\frac{1}{D^2}D^j,$$
(26)
where $`D^2`$ means $`𝐃𝐃`$. The transverse projection operator is of course
$$P_\mathrm{T}^{ij}=\delta ^{ij}-P_\mathrm{L}^{ij}.$$
(27)
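In the Abelian (perturbative) limit these projections are diagonal in Fourier space, which makes them easy to realize numerically; a sketch (array conventions are our own):

```python
import numpy as np

def split_long_trans(E, dx=1.0):
    """Split E (shape (3, L, L, L)) into longitudinal and transverse
    parts with the perturbative projector khat_i khat_j in Fourier space."""
    freqs = [2.0 * np.pi * np.fft.fftfreq(n, d=dx) for n in E.shape[1:]]
    kvec = np.stack(np.meshgrid(*freqs, indexing="ij"))
    k2 = (kvec ** 2).sum(axis=0)
    k2[0, 0, 0] = 1.0     # avoid 0/0; the k=0 mode has no longitudinal part
    Ek = np.fft.fftn(E, axes=(1, 2, 3))
    E_long = np.fft.ifftn(kvec * (kvec * Ek).sum(axis=0) / k2,
                          axes=(1, 2, 3)).real
    return E_long, E - E_long

# Check: div of the transverse part vanishes; E_long carries all of div E.
```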
It is the longitudinal electric field which couples to external charges, since Gauss’ Law reads $`𝐃\cdot 𝐄=\rho `$ and since $`𝐃\cdot (P_\mathrm{T}𝐄)=0`$. And it’s the transverse electric field that is produced by $`𝐃\times 𝐁`$ in the effective theory (4) since $`P_\mathrm{L}(𝐃\times 𝐁)=0`$. As mentioned in the introduction, the precise separation between longitudinal and transverse dynamics is not transparent from this simple discussion because of the non-linear dependence of $`𝐃\times 𝐁`$ on the underlying vector potential $`𝐀`$.
### C Solving for $`W`$ (again)
To examine the difficulties caused by the presence of a zero mode of $`\delta C`$, we now return to the effective Boltzmann-Vlasov equation (II A) and will repeat the analysis of section II B, this time being more careful about how we treat the convective derivative compared to $`\delta C`$. Formally, the solution for $`W`$ is
$$W=\frac{1}{D_t+𝐯\cdot 𝐃_𝐱+\delta C}(𝐯\cdot 𝐄+\xi ),$$
(28)
or $`W=G(𝐯\cdot 𝐄+\xi )`$, where $`G`$ denotes the inverse of the linearized kinetic operator,
$$G\left[D_t+𝐯𝐃_𝐱+\delta C\right]^1.$$
(29)
This fluctuation in the distribution of hard excitations produces a current response \[from Eq. (12)\] of
$$𝐉=m_\mathrm{D}^2𝐯G(𝐯𝐄+\xi )_𝐯,$$
(30)
and the (color) charge density
$$𝐃𝐄=J^0=m_\mathrm{D}^2G(𝐯𝐄+\xi )_𝐯.$$
(31)
One may easily check that the current is conserved (as it must be), since
$`D_0J^0+𝐃𝐉`$ $`=`$ $`m_\mathrm{D}^2(D_t+𝐯𝐃)G(𝐯𝐄+\xi )`$ (32)
$`=`$ $`m_\mathrm{D}^2\left[1\delta CG\right](𝐯𝐄+\xi )`$ (33)
$`=`$ $`0.`$ (34)
The $`𝐯𝐄+\xi _𝐯`$ term vanishes due to isotropy, $`𝐯_𝐯=0`$, and the lack of bias in the noise, $`\xi _𝐯=0`$. And $`\delta CG(𝐯𝐄+\xi )_𝐯`$ vanishes because $`\delta C`$ is acting (to the left) on its $`𝐯`$-independent zero-mode.In slightly more explicit notation, this term is $`\gamma _\mathrm{g}(𝐯^{\prime \prime },𝐯^{})G(𝐯^{},𝐯)[𝐯𝐄+\xi (𝐱,𝐯)]_{𝐯,𝐯^{},𝐯^{\prime \prime }}`$. It vanishes because $`(𝐯^{\prime \prime },𝐯^{})_{𝐯^{\prime \prime }}=(𝐯^{\prime \prime },𝐯^{})_𝐯^{}=0`$.
### D The problem with the naive derivation
In following subsections, we will discuss how to evaluate the operator inverse that defines $`G`$. To begin, however, it is useful to see how the zero-mode problem manifests itself in a simple example of matrix inversion. To this end, let us for the moment replace the Green function $`G`$ of (29) by that of a simplified finite dimensional example. First, imagine that the gauge interactions are Abelian, so that $`D_t`$ and $`𝐃_𝐱`$ can be replaced by simply $`i\omega `$ and $`i𝐤`$, respectively. Next, imagine that the infinite-dimensional space of possible functions of $`𝐯`$, on which $`\delta C`$ acts, is truncated to the four-dimensional space of functions that are either independent of $`𝐯`$ or linear in $`𝐯`$. We wish to examine the matrix representing the action of $`i\omega +𝐯𝐃_𝐱+\delta C`$ within this truncated space. In order to distinguish clearly between longitudinal and transverse physics, it is convenient to choose a basis $`\{f_\alpha (𝐯)\}`$, $`\alpha =0,\mathrm{},3`$, where
$$f_0(𝐯)=1,f_i(𝐯)=\sqrt{3}\widehat{e}_i𝐯,$$
(35)
and $`\widehat{e}_i`$ are three mutually orthonormal unit vectors with $`\widehat{e}_1\widehat{k}`$ pointing in the direction of $`𝐤`$. The overall normalization has been chosen so that $`f_if_j_𝐯=\delta _{ij}`$. In this basis, the matrix elements of $`f_i|i\omega +𝐯𝐃_𝐱+\delta C|f_j_𝐯`$ are
$$\left(\begin{array}{cccc}i\omega & \frac{i}{\sqrt{3}}k& & \\ \frac{i}{\sqrt{3}}k& \gamma _\mathrm{g}& & \\ & & i\omega +\gamma _\mathrm{g}& \\ & & & i\omega +\gamma _\mathrm{g}\end{array}\right).$$
(36)
The inverse operator, corresponding to $`G`$ in our truncated space, is
$$G_{\mathrm{trunc}}=\left(\begin{array}{cccc}\hfill \gamma _\mathrm{g}/(i\gamma _\mathrm{g}\omega +\frac{1}{3}k^2)& \hfill \frac{i}{\sqrt{3}}k/(i\gamma _\mathrm{g}\omega +\frac{1}{3}k^2)& & \\ \hfill \frac{i}{\sqrt{3}}k/(i\gamma _\mathrm{g}\omega +\frac{1}{3}k^2)& \hfill i\omega /(i\gamma _\mathrm{g}\omega +\frac{1}{3}k^2)& & \\ & & \hfill (i\omega +\gamma _\mathrm{g})^1& \\ & & & \hfill (i\omega +\gamma _\mathrm{g})^1\end{array}\right).$$
(37)
The $`\omega 0`$ limit is particularly simple:
$$G_{\mathrm{trunc}}\left(\begin{array}{cccc}\frac{3\gamma _\mathrm{g}}{k^2}& \frac{i\sqrt{3}}{k}& & \\ \frac{i\sqrt{3}}{k}& 0& & \\ & & \gamma _\mathrm{g}^1& \\ & & & \gamma _\mathrm{g}^1\end{array}\right).$$
(38)
In contrast, the naive derivation of Bödeker’s theory corresponds to replacing $`i\omega +i𝐯𝐤+\delta C`$ by $`\delta C`$, and the corresponding “inverse” would then be
$$\left(\begin{array}{cccc}\mathrm{}& & & \\ & \gamma _\mathrm{g}^1& & \\ & & \gamma _\mathrm{g}^1& \\ & & & \gamma _\mathrm{g}^1\end{array}\right).$$
(39)
As one can see, there is no difference in the transverse sector (spanned by $`f_{2,3}`$), but there is a huge difference in the longitudinal sector. As a particular example, consider the noiseless part of the current $`𝐉`$, given in Eq. 30; namely $`m_\mathrm{D}^2𝐯G𝐯𝐄_𝐯`$. In the naive derivation, as represented by (39), this contribution gives
$$m_\mathrm{D}^2𝐯\gamma _\mathrm{g}^1𝐯𝐄_𝐯=\frac{m_\mathrm{D}^2}{3\gamma _\mathrm{g}}𝐄.$$
(40)
In our Abelian truncated-space calculation of $`G`$, however, it is instead given by
$$\frac{m_\mathrm{D}^2}{3}(G_{11}P_\mathrm{L}+G_{22}P_\mathrm{T})𝐄=\frac{m_\mathrm{D}^2}{3\gamma _\mathrm{g}}P_\mathrm{T}𝐄.$$
(41)
in the $`\omega 0`$ limit. The longitudinal part of $`𝐄`$ is projected out! This is quite different from the result of the naive derivation.
### E Low-frequency, long-wavelength dynamics
We will now show that, despite the major difference in $`G^1`$, one nonetheless does recover Bödeker’s effective theory even for longitudinal physics. To do so, we will also return to the full, original non-Abelian problem and dispense with the truncated Abelian model of the previous section.
If we restrict our attention to frequencies and wavenumbers which are small compared to the damping rate, $`\omega ,k\gamma _\mathrm{g}`$, then in the Greens’ function $`G`$ we could drop the convective derivative compared to $`\delta C`$ were it not for the fact that $`\delta C`$ has a zero-mode. To deal with this, let $`P_0`$ denote the projection operator onto the zero-mode of $`\delta C`$, so that
$$P_0\left(f(𝐯)\right)f(𝐯)_𝐯,$$
(42)
and separate the convective derivative into zero-mode and non-zero-mode pieces,
$`D_t+𝐯𝐃`$ $`=`$ $`\left(D_t+𝐯𝐃\right)P_0+P_0𝐯𝐃+(1P_0)\left(D_t+𝐯𝐃\right)(1P_0).`$ (43)
To see this, note that $`D_t`$ commutes with $`P_0`$, and that $`P_0𝐯𝐃P_0=0`$. The last term of (43) only perturbs the non-zero eigenvalues of $`\delta C`$, and may be neglected provided $`\omega `$ and $`k`$ are small compared to $`\gamma _\mathrm{g}`$. \[For $`k=O(g^2T)`$ this is a leading-log approximation.\] The first two terms of (43) are rank one perturbations which will lift the zero-mode of the linearized kinetic operator. And one may evaluate explicitly the change in the inverse of an operator produced by adding a finite rank perturbation. In this case, a short exercise shows that
$`G`$ $``$ $`\left[(D_t+𝐯𝐃)P_0+P_0𝐯𝐃+\delta C\right]^1`$ (44)
$`=`$ $`(1P_0)\delta C^1(1P_0)+\gamma _\mathrm{g}^1(\gamma _\mathrm{g}𝐯𝐃){\displaystyle \frac{P_0}{\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2}}(\gamma _\mathrm{g}𝐯𝐃).`$ (45)
(To verify the last equality, recall that $`\delta C`$ is nothing but multiplication by $`\gamma _\mathrm{g}`$ when acting on odd functions of $`𝐯`$. Hence, $`\delta C𝐯𝐃P_0=\gamma _\mathrm{g}𝐯𝐃P_0`$.)
Now pause to note the correspondences of this result with the truncated Abelian results of the previous section. The full inverse (45) gives
$`G_𝐯`$ $``$ $`\gamma _\mathrm{g}\left[\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2\right]^1,`$ (47)
$`𝐯G_𝐯`$ $``$ $`\frac{1}{3}𝐃\left[\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2\right]^1,`$ (48)
$`G𝐯_𝐯`$ $``$ $`\left[\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2\right]^1\frac{1}{3}𝐃,`$ (49)
$`𝐯G𝐯_𝐯`$ $``$ $`\gamma _\mathrm{g}\left\{1\frac{1}{3}𝐃\left[\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2\right]^1\frac{1}{3}𝐃\right\}\underset{D_t0}{}{\displaystyle \frac{1}{3\gamma _\mathrm{g}}}P_\mathrm{T},`$ (50)
for $`\omega ,k\gamma _\mathrm{g}`$. These are simple non-Abelian generalizations of (37). (The factor of 3 differences just reflect the normalizations of our chosen basis functions in the last section.)
The form (45) for $`G`$ may now be inserted into Eqs. (30) and (31). For the charge density, one finds
$$𝐃𝐄=J^0=\gamma _\mathrm{g}\left[\gamma _\mathrm{g}D_t\frac{1}{3}𝐃^2\right]^1\left(\sigma 𝐃𝐄+𝐃𝜻\right),$$
(51)
or
$$\left[D_t+\frac{\sigma }{m_\mathrm{D}^2}\left(𝐃^2+m_\mathrm{D}^2\right)\right]𝐃𝐄=𝐃𝜻,$$
(52)
where $`\sigma \frac{1}{3}m_\mathrm{D}^2/\gamma _\mathrm{g}`$, and $`𝜻3\sigma 𝐯\xi _𝐯`$. And for the current, from (30) and (51),
$$𝐉=\sigma 𝐄+𝜻\frac{\sigma }{m_\mathrm{D}^2}𝐃\left(𝐃𝐄\right).$$
(53)
Inserting this into the Maxwell equation $`D_t𝐄+𝐃\times 𝐁=𝐉`$ gives
$$D_t𝐄+\sigma 𝐄\frac{\sigma }{m_\mathrm{D}^2}𝐃(𝐃𝐄)=𝐃\times 𝐁𝜻.$$
(54)
This local equation of motion is the exact result which follows from approximating $`G`$ as shown in (45). However, that form for $`G`$ was based on the assumption that the frequencies and wavevectors of interest are small compared to the damping rate $`\gamma _\mathrm{g}`$. Since $`\gamma _\mathrm{g}`$ is $`O(g^2T\mathrm{ln}g^1)`$, this means that $`\omega `$ is tiny compared to the $`O(T/\mathrm{ln}g^1)`$ conductivity, and that $`k`$ is much smaller than the $`O(gT)`$ Debye mass. Hence, there is no point in retaining the $`D_t𝐄`$ or $`𝐃(𝐃𝐄)`$ terms in the effective equation (54). Dropping these terms immediately yields Bödeker’s equation (4). In other words, a more careful treatment of the effect of the zero mode in $`\delta C`$ does not produce any difference (in leading-logarithmic approximation) to the resulting effective theory.
### F Recovering Debye screening
To see how Debye screening emerges from the kinetic theory (II A), return to Eq. (28) for $`W`$ and now assume that the scales of interest are in the perturbative regime where $`kg^2T`$ and/or $`\omega g^4T\mathrm{ln}g^1`$. \[In other words, the necessary conditions (3) for non-perturbative fluctuations are not both satisfied.\] In this regime, the gauge fields in the covariant derivatives appearing in the Greens’ function $`G`$ may be treated as small.<sup>\**</sup><sup>\**</sup>\**For a more detailed justification, based on a computation of the power spectrum of gauge field fluctuations, see Ref. . Expanding $`G`$ in powers of the gauge field, the leading term,
$$G\left[_t+𝐯_𝐱+\delta C\right]^1,$$
(55)
is diagonal in momentum space. Fourier transforming Eq. (31) for the charge density then yields (to leading order in the gauge field)
$`i𝐤𝐄`$ $`=`$ $`m_\mathrm{D}^2\stackrel{~}{G}𝐯_𝐯𝐄+m_\mathrm{D}^2\stackrel{~}{G}\xi _𝐯`$ (56)
$`=`$ $`m_\mathrm{D}^2\stackrel{~}{G}𝐯𝐤_𝐯{\displaystyle \frac{𝐤𝐄}{𝐤^2}}+m_\mathrm{D}^2\stackrel{~}{G}\xi _𝐯`$ (57)
$`=`$ $`im_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right){\displaystyle \frac{𝐤𝐄}{𝐤^2}}+m_\mathrm{D}^2\stackrel{~}{G}\xi _𝐯,`$ (58)
or
$$\left[𝐤^2+m_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right)\right]i𝐤𝐄=m_\mathrm{D}^2𝐤^2\stackrel{~}{G}\xi _𝐯,$$
(59)
with
$$\stackrel{~}{G}(\omega ,𝐤)=\left[i\omega +i𝐯𝐤+\delta C\right]^1,$$
(60)
and $`𝐄`$ and $`\xi `$ now denoting the $`(\omega ,𝐤)`$ Fourier components of these fields. In the first step of (58), we used the fact that, with gauge fields neglected in $`\stackrel{~}{G}`$, the only vector which $`G𝐯_𝐯`$ can depend upon is $`𝐤`$, and therefore only the longitudinal component of $`𝐄`$ can contribute to the result. The following step used $`\stackrel{~}{G}(i𝐯𝐤i\omega )=1\stackrel{~}{G}\delta C=1`$ which is another consequence of the zero mode in $`\delta C`$. The result (59) shows that $`𝐤𝐄`$ satisfies a diffusive Langevin equation in which the noise and damping depend on $`\stackrel{~}{G}\xi _𝐯`$ and $`\stackrel{~}{G}_𝐯`$, respectively.
The power spectrum of charge density (or $`𝐃𝐄`$) fluctuations is defined as
$`\rho _\mathrm{L}(\omega ,𝐤)`$ $``$ $`{\displaystyle 𝑑td^3𝐱e^{i\omega ti𝐤𝐱}J^0(t,𝐱)J^0(0,0)_\xi }.`$ (61)
Using (59) to express $`𝐃𝐄`$ in terms of the noise $`\xi `$, and then recalling that the covariance of the noise $`\xi `$, as given by (17) and (12), is proportional to $`\delta C`$, allows one to write the power spectrum as
$`\rho _\mathrm{L}(\omega ,𝐤)`$ $`=`$ $`m_\mathrm{D}^4𝐤^4\stackrel{~}{G}\xi _𝐯\xi \stackrel{~}{G}^{}_𝐯^{}_\xi /\left|𝐤^2+m_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right)\right|^2`$ (62)
$`=`$ $`2g^2Tm_\mathrm{D}^2𝐤^4\mathrm{Re}\stackrel{~}{G}_𝐯/\left|𝐤^2+m_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right)\right|^2`$ (63)
$`=`$ $`{\displaystyle \frac{2T}{\omega }}\mathrm{Im}{\displaystyle \frac{g^2𝐤^4}{𝐤^2+m_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right)}}.`$ (64)
Once again, this answer is valid provided $`kg^2T`$ and/or $`\omega g^4T\mathrm{ln}g^1`$ since we neglected (for this discussion only) the soft gauge fields in the covariant derivatives. Furthermore, $`k`$ and $`\omega `$ must be small compared to $`T`$, since this is a basic requirement for any kinetic theory description to be valid.
Linear response theory, applied to the underlying quantum field theory, shows that the power spectrum
$$\rho _\mathrm{L}(\omega ,𝐤)𝑑td^3𝐱e^{i\omega ti𝐤𝐱}\frac{1}{2}\{J^0(t,𝐱),J^0(0,0)\},$$
(65)
(defined with a symmetrized ordering of quantum operators) is related to the retarded charge-density charge-density correlator
$$D_\mathrm{R}(\omega ,𝐤)i𝑑td^3𝐱e^{i\omega ti𝐤𝐱}\theta (t)[J^0(t,𝐱),J^0(0,0)],$$
(66)
by the fluctuation-dissipation relation
$$\rho _\mathrm{L}(\omega ,𝐤)=[2n(\omega )+1]\mathrm{Im}D_\mathrm{R}(\omega ,𝐤),$$
(67)
where $`n(\omega )=[e^{\beta \omega }1]^1`$ is the equilibrium Bose distribution function. And the retarded correlator $`D_\mathrm{R}(\omega ,𝐤)`$ is the analytic continuation of the Euclidean space (time-ordered imaginary-time) correlator $`D_\mathrm{E}(i\omega _n,𝐤)`$ from the imaginary Matsubara frequencies to (just above) the real frequency axis.
By comparing (64) to the form (66), the kinetic theory result for the power spectrum may be converted to an equivalent result for the retarded correlator. The leading factor of $`T/\omega `$ is just the low-frequency (classical) limit of the Bose distribution function, and hence the retarded charge density correlator is
$`D_\mathrm{R}(\omega ,𝐤)`$ $`=`$ $`g^2𝐤^2{\displaystyle \frac{g^2𝐤^4}{𝐤^2+m_\mathrm{D}^2\left(1+i\omega \stackrel{~}{G}_𝐯\right)}}.`$ (68)
The local (and temperature-independent) $`g^2𝐤^2`$ term, which does not contribute to the imaginary part, is determined by the current-current Ward identities, or equivalently by the requirement that $`D_\mathrm{R}`$ remain bounded as $`𝐤\mathrm{}`$. The low frequency limit,
$`D_\mathrm{R}(0,𝐤)`$ $`=`$ $`{\displaystyle \frac{g^2𝐤^2m_\mathrm{D}^2}{𝐤^2+m_\mathrm{D}^2}}=g^2\left[m_\mathrm{D}^2{\displaystyle \frac{m_\mathrm{D}^4}{𝐤^2+m_\mathrm{D}^2}}\right],`$ (69)
reproduces the correct static equilibrium Debye-screened charge density correlations.<sup>††</sup><sup>††</sup>††The factor of $`g^2`$ is present merely because we have chosen to scale all our gauge fields by $`g`$ relative to the usual perturbative conventions. In, for example, Coulomb gauge, $`D_\mathrm{R}`$ equals the one-loop gauge field self-energy $`\mathrm{\Pi }^{00}`$ (which is just $`m_\mathrm{D}^2`$ in the static limit), plus the one-particle reducible contributions which sum to $`\mathrm{\Pi }^{00}D^{00}\mathrm{\Pi }^{00}`$, where $`D^{00}=A^0A^0=1/(𝐤^2+\mathrm{\Pi }^{00})`$ is the Debye-screened $`A^0`$ propagator. More generally, the kinetic theory answer (68) reduces to the known hard-thermal-loop result whenever the frequency or momentum is large compared to the damping rate $`\gamma _\mathrm{g}`$. In this domain, the details of the scattering operator $`\delta C`$ are irrelevant, and $`\stackrel{~}{G}(\omega ,𝐤)`$ may be approximated by $`i[\omega 𝐯𝐤+iϵ]^1`$. The resulting average over $`𝐯`$ may then be performed analytically, and yields the standard HTL result for the self-energy.
If neither $`k`$ nor $`\omega `$ are large compared to the damping rate $`\gamma _\mathrm{g}`$, then the detailed form of $`\delta C`$ is significant. Evaluating $`\stackrel{~}{G}(\omega ,k)`$ is non-trivial if $`k`$ is comparable to the damping rate. However, if $`k`$ and $`\omega `$ are both small compared to $`\gamma _\mathrm{g}`$, then the previous representation (47) may be used. Perturbatively, it gives
$$\stackrel{~}{G}(\omega ,𝐤)_𝐯=\frac{\gamma _\mathrm{g}}{\frac{1}{3}𝐤^2i\omega \gamma _\mathrm{g}},$$
(70)
which gives
$$\rho _\mathrm{L}(\omega ,𝐤)|_{\text{Bödeker}}=\frac{2g^2T}{\sigma }𝐤^2.$$
(71)
This is the same result which emerges directly from the relation $`\sigma 𝐃𝐄=𝐃𝜻`$ in Bödeker’s effective theory, combined with (5) for the noise variance. This result is valid in the overlap of the perturbative and damping-dominated domains, that is when $`|k^2i\omega \sigma |g^4T^2`$ and $`|\frac{1}{3}k^2i\omega \gamma _\mathrm{g}|\gamma _\mathrm{g}^2`$.
## IV Irrelevancy of Longitudinal Dynamics
We have seen that Bödeker’s effective theory
$$\sigma 𝐄=𝐃\times 𝐁𝜻,$$
(72)
within its domain of validity, does correctly describe both longitudinal and transverse fluctuations. However, it is the transverse part of the gauge fields which are responsible for interesting non-perturbative phenomena such as topological transitions and associated baryon non-conservation. One may wonder if it is possible to formulate an equally valid effective theory which describes only transverse physics. On the face of things, this should be easy; just insert a transverse projection operator to eliminate the longitudinal part of the noise,
$$\sigma 𝐄=𝐃\times 𝐁P_\mathrm{T}𝜻.$$
(73)
This produces an effective theory with no longitudinal dynamics whatsoever, $`P_\mathrm{L}𝐄=0`$. In the case of an Abelian theory, the trivial decoupling of transverse and longitudinal parts of the gauge field would make it obvious that Bödeker’s theory (72) and the transverse-projected theory (73) describe exactly the same transverse dynamics. But for our non-Abelian theory, the dependence of covariant derivatives and projection operators on the gauge field makes this decoupling far less obvious. The remainder of this paper is devoted to showing that it is almost true that equations (72) and (73) do in fact generate identical transverse dynamics. The “almost” caveat reflects subtleties associated with the fact that white noise cannot be considered smooth even on infinitesimal time scales. Investigating these subtleties will reveal that the naive transverse equation (73) must be corrected but can then be made exactly equivalent to Bödeker’s equation (72) with regard to the transverse dynamics. It should be emphasized that this investigation was motivated by theoretical curiosity, not practical convenience.<sup>‡‡</sup><sup>‡‡</sup>‡‡ In fact, historically, it took us some time to recognize that longitudinal physics for $`\omega k\gamma _\mathrm{g}`$ is correctly reproduced by Bödeker’s effective theory, and the discussion in this section was motivated by the desire to show that it simply doesn’t matter. Bödeker’s equation (72) is local, whereas the transverse-projected equation (73) is not. Consequently, numerical simulations are far more easily performed in the original theory than in any transverse-projected variant.
To be precise, by “transverse dynamics” we refer to all physical observables that do not depend on $`A_0`$ when expressed as gauge-invariant functions of the fields $`A_\mu `$. The magnetic field $`𝐁=𝐃\times 𝐀`$ does not, of course, depend on $`A_0`$. Writing $`𝐄=𝐃A_0(d𝐀/dt)`$, it is easy to see that the transverse electric field $`P_\mathrm{T}𝐄=P_\mathrm{T}(d𝐀/dt)`$ does not either. Consequently, an example of a physical quantity which depends only on the transverse dynamics is the topological charge (or change in Chern-Simons number) of the gauge field, which is proportional<sup>\**</sup><sup>\**</sup>\** The precise formula is $`\mathrm{\Delta }N(t)=(1/8\pi ^2)_0^t𝑑td^3xE_i^aB_i^a`$. to $`\mathrm{tr}[𝐄𝐁]=\mathrm{tr}[(P_\mathrm{T}𝐄)𝐁]`$.<sup>\*†</sup><sup>\*†</sup>\*†The topological transition rate (or Chern-Simons number diffusion constant), is an important ingredient in scenarios of electroweak baryogenesis. Understanding the applicability of numerical simulations using Bödeker’s effective theory for extracting the topological transition rate motivated this investigation. See for such recent numerical work and related discussion.
### A Naive equivalence
We first wish to paint with a broad brush. We will for the moment ignore all subtleties and discuss how, if one implicitly and incorrectly (and only when advantageous) treats the noise $`𝜻(𝐱,t)`$ as a smooth function of $`t`$, one may show that the two theories (72) and (73) should generate the same transverse dynamics. We will wait until section IV B and its sequel to correct this discussion by taking into account the non-smooth nature of Gaussian white noise.
It is simplest to initially consider both theories (73) and (72) in $`A_0=0`$ gauge:
$$\sigma \frac{d}{dt}𝐀=𝐃\times 𝐁+𝜻,$$
(74)
and
$$\sigma \frac{d}{dt}𝐀=𝐃\times 𝐁+P_\mathrm{T}𝜻.$$
(75)
For the moment, imagine a particular instantiation of the white noise $`𝜻(𝐱,t)`$—that is, consider a particular member of the Gaussian ensemble of noise functions. Suppose that Bödeker’s equation (74) is satisfied by a gauge field $`𝐀(𝐱,t)`$. Now rewrite Bödeker’s equation in the form
$$\sigma \left(\frac{d}{dt}𝐀\sigma ^1P_\mathrm{L}𝜻\right)=𝐃\times 𝐁+P_\mathrm{T}𝜻.$$
(76)
Using the explicit form (26) of the longitudinal projection operator, this can be written as
$$\sigma \left(\frac{d}{dt}𝐀𝐃\stackrel{~}{A}_0\right)=𝐃\times 𝐁+P_\mathrm{T}𝜻,$$
(77)
where $`\stackrel{~}{A}_0`$ is simply a (suggestive) name for
$$\stackrel{~}{A}_0\sigma ^1D^2𝐃𝜻.$$
(78)
For a particular instantiation of the noise (and any initial condition), the solution to (77) may be interpreted in two different ways. On the one hand, $`𝐀`$ is, by construction, an $`A_0=0`$ gauge solution to Bödeker’s equation (72). On the other hand, if one says that $`\stackrel{~}{A}_0`$ is actually the time component of the gauge field, then the left-hand-side of (77) is just $`\sigma 𝐄`$. Therefore, $`A_\mu =(\stackrel{~}{A}_0,𝐀)`$ is a solution to the projected equation (73) in the particular gauge where $`A_0=\sigma ^1D^2𝐃𝜻`$.
But, given a solution $`(\stackrel{~}{A}_0,𝐀)`$ to (73) with $`A_00`$, one may always gauge transform back to $`A_0=0`$ gauge. The result will be a gauge field $`\overline{𝐀}`$ which obeys
$$\sigma \frac{d}{dt}\overline{𝐀}=\overline{𝐃}\times \overline{𝐁}+P_\mathrm{T}\overline{𝜻}.$$
(79)
This is just the $`A_0=0`$ transverse equation (75), except that the noise has been gauge transformed by the transformation which takes $`(\stackrel{~}{A}_0,𝐀)`$ into $`A_0=0`$ gauge:
$$\overline{𝜻}^a=U^{ab}𝜻^b,$$
(80)
with
$$U=𝒯\mathrm{exp}\left[_0^t\stackrel{~}{A}_0𝑑t\right]=𝒯\mathrm{exp}\left[\sigma ^1_0^t\frac{1}{D^2}𝐃𝜻𝑑t\right],$$
(81)
where $`𝒯`$ signifies time ordering, with the latest times on the right.
The distinction between $`𝜻`$ and its gauge transform $`\overline{𝜻}`$ will not matter, and our two theories (74) and (75) will be equivalent (subject to earlier caveats), provided the distribution $`\overline{𝜻}^a=U^{ab}𝜻^b`$ is Gaussian white noise, just like the distribution of the original $`𝜻`$. If our transformation $`U`$ was not a function of the noise, this would be trivial because then we would have
$$\overline{𝜻}^a\overline{𝜻}^b=U^{ac}U^{bd}𝜻^c𝜻^d.$$
(82)
Since the $`𝜻`$ correlator is proportional to $`\delta ^{cd}`$, and since $`U^{ac}U^{bd}\delta ^{cd}=\delta ^{ab}`$, it would follow that
$$\overline{𝜻}^a\overline{𝜻}^b=𝜻^a𝜻^b.$$
(83)
Even though our transformation $`U`$ depends on the Gaussian white noise $`𝜻`$, this result naively remains true. Consider, for instance, the equal time correlation
$$\overline{𝜻}^a(t)\overline{𝜻}^b(t)=U^{ac}(t)U^{bd}(t)𝜻^c(t)𝜻^d(t).$$
(84)
Because the noise correlation is local in time, while $`U`$ (formally) depends only on the noise prior to $`t`$, this can be factorized:<sup>\*‡</sup><sup>\*‡</sup>\*‡In fact, the dependence (or lack thereof) of $`U(t)`$ on the noise at exactly time $`t`$ is ill-defined and depends on the details of time discretization, as discussed in the next sub-section.
$$\overline{𝜻}^a(t)\overline{𝜻}^b(t)=U^{ac}(t)U^{bd}(t)𝜻^c(t)𝜻^d(t).$$
(85)
The $`𝜻`$ correlation is again proportional to $`\delta ^{cd}`$, which again contracts the $`U`$’s and eliminates them, so that we arrive at (83) as desired. A similar argument shows that unequal time correlations of $`\overline{𝜻}`$ vanish, as they should.
### B So what’s wrong?
#### 1 A toy model
To see what goes wrong with the previous equivalency argument, and to understand what it has to do with the short-time nature of white noise, it is instructive first to consider a system much simpler than non-Abelian gauge theory. Forget about field theory and instead imagine stochastic dynamics of a classical particle moving in two dimensions in a rotationally-symmetric potential $`V(r)`$:
$$\frac{d}{dt}𝐫=\mathbf{}V+𝜻,$$
(86)
$$\zeta _i(t)\zeta _j(t^{})=2T\delta _{ij}\delta (tt^{}).$$
(87)
For convenience, we have normalized the analog of $`\sigma `$ to 1. Imagine also we care only about the radial dynamics of this system and not at all about the angular dynamics.
Comparing to the gauge theory problem, $`𝐫`$ above is analogous to $`𝐀`$ in $`A_0=0`$ gauge, the radial dynamics to the transverse gauge dynamics, and the angular dynamics to the longitudinal dynamics. Circles about the origin are analogous to gauge orbits of 3-dimensional gauge configurations under 3-dimensional gauge transformations (since, in the gauge theory, infinitesimal displacements in the longitudinal direction are of the form $`\mathrm{\Delta }𝐀(𝐱)=𝐃\mathrm{\Lambda }(𝐱)`$, which is the form of an infinitesimal 3-dimensional gauge transformation). Eq. (86) is analogous to Bödeker’s effective theory (74), and the analog of the transverse-projected theory (75) is then
$$\frac{d}{dt}𝐫=\mathbf{}V+P_\mathrm{r}𝜻,$$
(88)
where the radial projection operator $`P_\mathrm{r}`$ is
$$P_\mathrm{r}^{ij}=\widehat{r}^i\widehat{r}^j=\delta ^{ij}\widehat{\theta }^i\widehat{\theta }^j.$$
(89)
Just as in section IV A, we can make a sloppy and not quite correct argument that the unprojected equation (86) and the projected equation (88) are equivalent. A transformation from a solution $`𝐫`$ of the unprojected equation to a solution $`\overline{𝐫}`$ of the projected equation appears to be
$$\overline{𝐫}=U𝐫,\overline{𝜻}=U𝜻,$$
(90)
or equivalently
$$𝐫=U^1\overline{𝐫},𝜻=U^1\overline{𝜻},$$
(91)
where, if $`𝐫`$ and $`𝜻`$ are represented by complex numbers $`r_x+ir_y`$ and $`\zeta _x+i\zeta _y`$, $`U`$ can be written in a form quite analogous to (81):
$$U=\mathrm{exp}\left(i_0^t\widehat{\theta }𝜻𝑑t\right).$$
(92)
$`U`$ simply rotates away the accumulated motion in the angular direction, so that the projected motion, at every instant in time, becomes purely radial. Naively plugging (91) into the unprojected equation (86), and implicitly but incorrectly assuming that $`𝜻`$ is a smooth function of time, yields the naive projected equation (88) for $`\overline{𝐫}`$.
One can immediately see that the two equations (86) and (88) cannot, however, actually describe the same radial dynamics. In the unprojected case (86), there is zero probability that the system would ever pass exactly through the origin $`r=0`$. The projected case (88), however, just describes one-dimensional motion, parameterized by $`r`$, along a line of constant $`\theta `$. That is, we could fix $`\theta `$ and just replace (88) by the one degree of freedom equation
$$\frac{d}{dt}r=\frac{dV}{dr}+\zeta .$$
(93)
(There does not appear to be an analog of this simplification in the gauge theory; see Appendix B.) As long as there are no infinite potential barriers, this one-dimensional system will eventually fluctuate through any value of $`r`$, including $`r=0`$.
To understand the discrepancy, we need to properly understand the small time behavior of white noise Langevin equations such as (86) and (88). The standard way of defining what such equations actually mean is to discretize time and only at the end of the day take the continuous time limit.
#### 2 Time discretization ambiguities
Before proceeding, we have to dispose of an instructive red herring concerning the time discretization of our various stochastic equations. It is not always true that continuum-time stochastic equations like the ones we have been writing down have an unambiguous meaning. To understand the possible ambiguities, imagine that instead of being interested in only the radial dynamics of our toy model, we were instead interested in only the angular dynamics, and so had proposed a projected equation of the form
$$\frac{d}{dt}𝐫=P_\theta 𝜻,$$
(94)
$$P_\theta ^{ij}=\widehat{\theta }^i\widehat{\theta }^j=\delta ^{ij}\widehat{r}^i\widehat{r}^j.$$
(95)
This continuum equation appears to describe motion for which the radius $`r`$ remains constant. Now imagine discretizing time with small time steps of size $`ϵ`$, so that
$$ϵ^1\mathrm{\Delta }𝐫=P_\theta 𝜻,$$
(96)
$$\zeta _i(t)\zeta _j(t+nϵ)=2Tϵ^1\delta _{ij}\delta _{n0}.$$
(97)
There is an ambiguity in the schematic way we have written the discretized equation (96): we have not made it clear whether the direction $`\widehat{\theta }`$ implicit in $`P_\theta `$ is supposed to be evaluated at the starting point of the tiny time interval, the end point, or somewhere in between. In the first case, the value of $`r`$ will drift out a little bit, as in fig. 1a. In the second case, it will drift in a little bit, as in fig. 1b. If we pick a symmetric convention, where we evaluate $`\widehat{\theta }`$ at the midpoint, then $`r`$ will remain constant, as in fig. 1c.
In non-stochastic equations, such discretization choices become irrelevant in the continuum limit $`ϵ0`$ (though they may have significance for the practicality of numerical calculations). For stochastic equations, however, the $`ϵ0`$ limit is more subtle because, by (97), the amplitude of the white noise $`𝜻`$ is order
$$\zeta \sqrt{\frac{T}{ϵ}}$$
(98)
and diverges as $`ϵ0`$. The drift $`\mathrm{\Delta }r`$ in figs. 1a and b is therefore of order
$$\pm \mathrm{\Delta }r\sqrt{r^2+(ϵ\zeta )^2}r\frac{(ϵ\zeta )^2}{r}\frac{ϵT}{r}$$
(99)
for a time interval $`ϵ`$. That means that the drift per unit time, $`\mathrm{\Delta }r/ϵ`$, is finite as $`ϵ0`$, and so the continuum limit really depends on one’s discretization conventions.
In the unprojected equation (86), there is no such discretization ambiguity. And in our actual toy model equation (88) with radial projection, there is no such ambiguity because motion in the $`\widehat{𝐫}`$ direction, unlike in the $`\widehat{\theta }`$ direction, is straight—$`\widehat{𝐫}`$ does not change between one end of the interval and the other. The situation is slightly more complicated for the transverse-projected equation (75) for gauge theory, however. There, motion of $`𝐀`$ in the transverse direction does change the transverse projection operator $`P_\mathrm{T}`$. However, we demonstrate in section IV C 2 that this change turns out to be high enough order in $`\delta 𝐀`$ that discretization ambiguities do not arise.
The upshot is that the continuum stochastic equations (88) and (75) for the radial-projected toy model and the transverse-projected gauge theory are not ambiguous. However, we shall next see that the very same discretization issues affect the transformations we used to argue that they were equivalent with their unprojected counterparts.
#### 3 Centrifugal drift
The way we proposed obtaining the projected equation (88) from the unprojected one (86) was by rotating away the accumulated $`\theta `$ motion. Imagine a single time step of the discretized unprojected equation. Then Whether it’s $`V(r(t))`$ or $`V(r(t+ϵ))`$, or the average of the two, does not matter in the continuum time limit $`ϵ0`$.
$$𝐫(t+ϵ)=𝐫(t)ϵ\mathbf{}V(r(t))+ϵ𝜻(t).$$
(100)
The motion of the radial coordinate $`r`$ is then
$`r(t+ϵ)`$ $`=`$ $`\left|𝐫(t)ϵ\mathbf{}V(r(t))+ϵ𝜻(t)\right|`$ (101)
$`=`$ $`\left[rϵV^{}(r)+ϵ\widehat{𝐫}𝜻+{\displaystyle \frac{ϵ^2}{2r}}|𝜻|^2\right]|_t+O(ϵ^{3/2}).`$ (102)
Given that, as $`ϵ0`$, a large number of successive tiny time steps will occur before the system appreciably changes position, the positive $`|\zeta |^2`$ term can be replaced by its statistical average (97):
$$r(t+ϵ)\left[rϵV^{}(r)+ϵ\widehat{𝐫}𝜻+\frac{ϵT}{r}\right](t).$$
(103)
The distribution of $`\widehat{𝐫}𝜻`$ does not care about the direction of $`\widehat{𝐫}`$, and (103) can be rewritten as
$$r(t+ϵ)r(t)ϵV_{\mathrm{eff}}^{}(r(t))+ϵ\zeta _r(t),$$
(104)
where $`\zeta _r`$ is uncorrelated white noise and
$$V_{\mathrm{eff}}(r)=V(r)T\mathrm{ln}r.$$
(105)
The continuum projected equations that are truly equivalent to the unprojected one are therefore (88) or (93) with $`V`$ replaced by $`V_{\mathrm{eff}}`$. The addition of the $`\mathrm{ln}r`$ term in (105) now provides a “centrifugal potential” which prevents the one-dimensional system (93) from passing through $`r=0`$.
#### 4 Equilibrium distributions
The exact form of the “centrifugal” correction was really determined from the very start. As we shall briefly review in section IV C 4, the equilibrium distribution in $`𝐫`$ generated by the unprojected equation (93) is proportional to $`\mathrm{exp}(V/T)`$. That means that the probability distribution for the radial variable $`r`$ must be proportional to
$$2\pi r\mathrm{exp}(V/T)\mathrm{exp}(V_{\mathrm{eff}}/T),$$
(106)
since the $`2\pi r`$ is the volume of the symmetry orbit. But $`\mathrm{exp}(V_{\mathrm{eff}}/T)`$ is precisely the equilibrium distribution generated by the projected equation (93) if $`V`$ is replaced by $`V_{\mathrm{eff}}`$.
In the gauge theory case, there will be no analog to the one-dimensional radial equation (93), and so it is worthwhile to understand how the equilibrium distribution could have been deduced directly from the two-dimensional projected equation (88). This requires deriving the Fokker-Planck equation that is associated with a given Langevin equation and which describes the time evolution of probability distributions $`𝒫(𝐫)`$. (This will be discussed explicitly in the gauge theory case below.) For the naive projected equation (88), one finds that
$$𝒫(𝐫)\frac{\mathrm{exp}(V/T)}{2\pi r}$$
(107)
is a (two-dimensional) equilibrium solution. But if we correct the naive equation by replacing $`VV_{\mathrm{eff}}`$, then we indeed recover the unprojected result
$$𝒫(𝐫)\frac{\mathrm{exp}(V_{\mathrm{eff}}/T)}{2\pi r}=\mathrm{exp}(V/T)$$
(108)
as the equilibrium distribution in $`𝐫`$.Because the projected equation conserves $`\theta `$, there is a family of two-dimensional equilibrium solutions to the projected toy model equation: namely, the rotationally invariant distribution (107) can be multiplied by an arbitrary angular distribution $`f(\theta )`$. This non-uniqueness is, of course, irrelevant if one is only interested in rotationally invariant observables. The appearance of an arbitrary function of $`\theta `$ in the general equilibrium distribution is a reflection of the non-ergodicity of the projected two-dimensional evolution equation. As discussed in Appendix B, for the transverse-projected gauge theory there does not appear to be any analog of a conserved gauge-orbit coordinate $`\theta `$ and, so far as we know, the transverse-projected gauge theory remains ergodic. \[Do not confuse the two-dimensional distribution (108) for $`𝐫`$ with the one-dimensional radial distribution (106) for $`r`$. Both describe the same equilibrium ensemble.\]
### C Gauge Theory
#### 1 Time discretization ambiguities
The transverse-projected version (75) of the soft effective theory is a Langevin equation of the form
$$\frac{d}{dt}q_i=_iV(𝐪)+e_{ia}(𝐪)\zeta _a,$$
(110)
$$\zeta _a(t)\zeta _b(t^{})=2𝒯\delta _{ab}\delta (tt^{}),$$
(111)
where, for the moment, we are using notation natural for a system with a finite number of degrees of freedom. In the field theory case, the dynamical variables $`𝐪`$ are the values of the gauge fields at different points in space and $`_i`$ becomes a functional derivative $`\delta /\delta A`$. The functions $`e_{ia}(𝐪)`$ characterize the coupling of the noise to the dynamical variables $`𝐪`$; for the gauge theory this is the transverse projection operator $`P_\mathrm{T}`$ (which depends on the gauge field $`𝐀`$).
To define exactly what is meant by this equation, imagine discretizing time into very small time steps of size $`ϵ`$.<sup>\*∥</sup><sup>\*∥</sup>\*∥ The following discussion roughly follows the presentation in sections 4.7 and 4.8 of ref. , although our normalizations are slightly different. Stochastic equations of the form (IV C 1) are generically ambiguous if the coupling $`e_{ia}(𝐪)`$ to the noise has non-trivial dependence of $`𝐪`$, because of the ambiguity, discussed earlier, of when to evaluate $`𝐪`$. In the discretized equation,
$$\frac{q_i(t+ϵ)q_i(t)}{ϵ}=\left[_iV+e_{ia}\zeta _a\right]_{t+\alpha ϵ},$$
(112)
$$\zeta _a(t)\zeta _b(t^{})=2𝒯ϵ^1\delta _{ab}\delta _{tt^{}},$$
(113)
this ambiguity appears as dependence on a parameter $`\alpha `$ which controls the time at which the right-hand side is evaluated. For example, $`\alpha =0`$ corresponds to a forward time derivative and is known as the Itô convention, $`\alpha =\frac{1}{2}`$ to the symmetric derivative, known as Stratonovich convention, and $`\alpha =1`$ to a backward time derivative. The precise meaning of evaluation at time $`t+\alpha ϵ`$ is to expand in $`\alpha ϵ`$. Keeping in mind that the amplitude of the noise $`\zeta `$ is $`ϵ^{1/2}`$, and using the equation of motion itself, the terms in the expansion which are non-negligible when $`ϵ0`$ are
$$\frac{q_i(t+ϵ)q_i(t)}{ϵ}=\left[_iV+e_{ia}\zeta _a+\alpha ϵ(_je_{ia})e_{jb}\zeta _a\zeta _b\right]_t.$$
(114)
The product $`\zeta _a\zeta _b`$ may be replaced by its expectation, giving the final discretized equation
$$\frac{q_i(t+ϵ)q_i(t)}{ϵ}=\left[_iV+2\alpha 𝒯(_je_{ia})e_{ja}+e_{ia}\zeta _a\right]_t.$$
(115)
The term proportional to $`\alpha `$ is a convention-dependent “drift” force. The naive continuous-time formulation (IV C 1) does not, in general, uniquely specify the dynamics.
#### 2 Vanishing ambiguity
We shall now show that the ambiguity vanishes for the transverse-only noise equation of (75). This implies that there is no real issue of convention dependence for this application.
We work in continuous space (rather than working on a spatial lattice, which would be more relevant to numerical simulations but also more complicated). The degrees of freedom in the gauge theory case are labeled by coordinates $`X=(𝐱,i,a)`$, where $`i`$ is a vector index and $`a`$ an adjoint color index. It will be convenient to introduce combined labels for several different choices of coordinates:
$$X=(𝐱,i,a),Y=(𝐲,j,b),Z=(𝐳,k,c).$$
(116)
The noise coupling $`e_{ia}`$ introduced above becomes the (matrix elements of the) transverse projection operator
$$P_{XY}=\delta ^{ij}\delta ^{ab}\delta (𝐱𝐲)(D^iD^2D^j)_{\mathrm{𝐱𝐲}}^{ab}.$$
(117)
This operator is symmetric in $`X`$ and $`Y`$, and the drift force discussed earlier is proportional to
$$P_{XZ}\frac{\delta }{\delta A_X}P_{ZY}.$$
(118)
When taking the variation of $`P_{ZY}`$, the variation must act on the left-most covariant derivative in (117), since otherwise that derivative will annihilate against the $`P_{XZ}`$ factor. One thus obtains
$`P_{XZ}{\displaystyle \frac{\delta }{\delta A_X}}P_{ZY}`$ $``$ $`{\displaystyle _𝐳}P_{XZ}\delta ^{ik}T_{ce}^a\delta (𝐱𝐳)\left(D^2D^j\right)_{\mathrm{𝐳𝐲}}^{eb}`$ (119)
$``$ $`dT_{ae}^a\left(D^2D^j\right)_{\mathrm{𝐱𝐲}}^{eb}(D^iD^2D^i)_{\mathrm{𝐱𝐱}}^{ac}T_{ce}^a\left(D^2D^j\right)_{\mathrm{𝐱𝐲}}^{eb}`$ (120)
in $`d`$ spatial dimensions, where no integration over $`𝐱`$ is implied. The first term vanishes because the adjoint generators $`T_{bc}^a`$ are anti-symmetric in $`(abc)`$ and so $`T_{ae}^a=0`$. The second term vanishes because $`(D^iD^2D^i)_{\mathrm{𝐱𝐱}}^{ac}`$ is symmetric in $`(ac)`$ and so vanishes when contracted with the anti-symmetric $`T_{ce}^a`$. So the convention-dependent drift force is exactly zero.<sup>\***</sup><sup>\***</sup>\*** The drift force is also proportional to $`𝒯`$ which, in the field theory case, is $`T\delta (\mathrm{𝟎})`$. If we were only interested in perturbative physics, we could have chosen to work in dimensional regularization, which sets $`\delta (\mathrm{𝟎})`$ to zero. <sup>\*††</sup><sup>\*††</sup>\*†† In the general case, a sufficient condition for the ambiguity to vanish can be expressed as follows. Suppose the potential $`V(𝐪)`$ of (IV C 1) is invariant under some symmetry transformations that have the infinitesimal form $`𝐪𝐪+\lambda 𝜽^\alpha (𝐪)`$, where $`\alpha `$ indexes the independent symmetry generators (and $`\lambda `$ is infinitesimal). Define a metric $`g^{\alpha \beta }=𝜽^\alpha 𝜽^\beta `$ on the space of symmetry generators. As in general relativity, let $`g_{\alpha \beta }`$ denote the (matrix) inverse of the metric. Now suppose that the noise coupling equals the projection operator $`e_{ij}=\delta _{ij}\theta _i^\alpha g_{\alpha \beta }\theta _j^\beta `$. One can then show that the ambiguity $`(_je_{ia})e_{ja}`$ vanishes if $`_i\theta _j^\alpha `$ is anti-symmetric under interchange of $`i`$ and $`j`$. This anti-symmetry condition is indeed satisfied in both the radial-projected toy model and transverse-projected gauge theory.
#### 3 Centrifugal drift
We now return to the transformation, between Bödeker’s effective theory and the transverse-projected theory, in order to derive the gauge theory analog of the centrifugal correction discussed for the toy model in section IV B 3. The time-discretized version of Bödeker’s effective theory is
$$𝐀(t+ϵ)=𝐀(t)\frac{ϵ}{\sigma }\left(\frac{\delta 𝒱}{\delta 𝐀}(t)𝜻(t)\right),$$
(121)
where
$$𝒱=\frac{1}{2}g^2_𝐱B_i^aB_i^a=\frac{1}{4}g^2_𝐱F_{ij}^aF_{ij}^a$$
(122)
is the potential energy associated with the magnetic field, the noise correlation is
$$\zeta _i^a(t,𝐱)\zeta _j^b(t+nϵ,𝐱^{})=2g^2T\sigma ϵ^1\delta ^{ab}\delta _{ij}\delta _{n0}\delta (𝐱𝐱^{}),$$
(123)
and we have chosen to use a forward time difference. We now want to apply (a discrete version of) the gauge transformation $`U`$ that was introduced in the time-continuum case, (81), to eliminate longitudinal motion of $`𝐀`$. For simplicity of presentation, we shall focus on one single, time step from $`t`$ to $`t+ϵ`$ and discuss how to transform $`𝐀(t+ϵ)`$ relative to $`𝐀(t)`$ in order to eliminate the longitudinal motion introduced during that step. Consider a gauge transformation $`U`$ which equals the identity at time $`t`$, but is a non-trivial infinitesimal transformation at time $`t+ϵ`$,
$$U(t,𝐱)=1,U(t+ϵ,𝐱)=\mathrm{exp}\alpha (𝐱).$$
(124)
For the moment, we leave the generator of the transformation, $`\alpha `$, arbitrary. The gauge transformed field is
$$\overline{𝐀}U(\mathbf{}+A)U^1.$$
(125)
Expanding in powers of the generator $`\alpha `$ at time $`t+ϵ`$, this gives
$$\overline{𝐀}(t+ϵ)=𝐀(t+ϵ)𝐃\alpha +\frac{1}{2}[𝐃\alpha ,\alpha ]+O(\alpha ^3),$$
(126)
where $`𝐃\alpha =\mathbf{}\alpha +[𝐀(t+ϵ),\alpha ]`$. Using the equation of motion (121) to rewrite $`𝐀(t+ϵ)`$ in terms of $`𝐀(t)`$ gives
$$\overline{𝐀}(t+ϵ)=𝐀(t)\frac{ϵ}{\sigma }\left(\frac{\delta 𝒱}{\delta 𝐀}𝜻\right)𝐃\alpha +\frac{ϵ}{\sigma }[\frac{\delta 𝒱}{\delta 𝐀}𝜻,\alpha ]+\frac{1}{2}[𝐃\alpha ,\alpha ]+O(\alpha ^3,\sqrt{ϵ}\alpha ^2),$$
(127)
where now all the covariant derivatives involve the gauge field at time $`t`$. Choosing the infinitesimal generator to equal
$$\alpha =\frac{ϵ}{\sigma }D^2𝐃𝜻,$$
(128)
as implied by our previous discussion \[c.f. (81)\], will cause the $`𝐃\alpha `$ term to cancel the longitudinal projection of the noise $`𝜻`$. Since the noise $`𝜻`$ is of order $`ϵ^{1/2}`$, this means that $`\alpha `$ is of order $`\sqrt{ϵ}`$. We need to keep all terms in (127) which are of order $`ϵ`$. The term $`(ϵ/\sigma )[\delta 𝒱,\alpha ]`$ is $`O(ϵ^{3/2})`$ and may be neglected. Consequently,
$`\overline{𝐀}(t+ϵ)=𝐀(t)`$ $`+`$ $`{\displaystyle \frac{ϵ}{\sigma }}\left({\displaystyle \frac{\delta 𝒱}{\delta 𝐀}}+P_\mathrm{T}𝜻\right){\displaystyle \frac{ϵ^2}{\sigma ^2}}[𝜻\frac{1}{2}𝐃D^2𝐃𝜻,D^2𝐃𝜻]+O(ϵ^{3/2}).`$ (129)
Once again, we can replace the terms quadratic in noise by their statistical averages, as given by (123). After some manipulation, one finds that this yields
$`\overline{𝐀}^a(t+ϵ,𝐱)`$ $`=`$ $`𝐀^a(t,𝐱)+{\displaystyle \frac{ϵ}{\sigma }}\left({\displaystyle \frac{\delta 𝒱}{\delta 𝐀^a(𝐱)}}+P_\mathrm{T}𝜻^a(t,𝐱)\right)+{\displaystyle \frac{Tϵ}{\sigma }}f^{abc}(𝐃D^2)_{\mathrm{𝐱𝐱}}^{cb}+O(ϵ^3)`$ (130)
$`=`$ $`𝐀^a(t,𝐱)+{\displaystyle \frac{ϵ}{\sigma }}\left({\displaystyle \frac{\delta 𝒱_{\mathrm{eff}}}{\delta 𝐀^a(𝐱)}}+P_\mathrm{T}𝜻^a(t,𝐱)\right)+O(ϵ^3),`$ (131)
where the effective potential $`𝒱_{\mathrm{eff}}`$ is
$$𝒱_{\mathrm{eff}}=𝒱\frac{1}{2}T\mathrm{Tr}\mathrm{ln}(D^2)=𝒱T\mathrm{ln}\sqrt{det(D^2)}.$$
(132)
As shown in Appendix A, $`\sqrt{det(D^2)}`$ is the volume of the gauge orbit containing a given spatial gauge field. Consequently, this logarithmic correction to the potential is completely analogous to the “centrifugal” potential appearing in the rotationally invariant toy model. The upshot is that the projected equation which is actually equivalent to Bödeker’s effective theory differs from the naive projected equation (75) by the replacement of $`𝒱`$ by $`𝒱_{\mathrm{eff}}`$:
$$\sigma \frac{d}{dt}𝐀=𝐃\times 𝐁+\frac{1}{2}T\frac{\delta }{\delta 𝐀}\mathrm{Tr}\mathrm{ln}(D^2)+P_\mathrm{T}𝜻.$$
(133)
#### 4 The equilibrium distribution
It’s interesting to examine what happens if one converts a Langevin equation of the generic form (115) into a Fokker-Planck equation for the evolution of the probability distribution $`𝒫(𝐪,t)`$. One finds (see for example ) that
$$\frac{}{t}𝒫=_i\left[𝒯_j\left(e_{ia}e_{ja}𝒫\right)+\left\{_iV2\alpha 𝒯(_je_{ia})e_{ja}\right\}𝒫\right].$$
(134)
If $`e_{ia}(𝐪)`$ were just $`\delta _{ia}`$, as in Bödeker’s equation (74), then the equilibrium distribution (the solution to $`d𝒫/dt=0`$) would simply be $`𝒫=\mathrm{exp}(V/𝒯)`$ up to an overall normalization constant.
For the naive radial-projected toy model (88) with $`e_{ia}=\widehat{r}_i\widehat{r}_a`$,
$$_j(e_{ia}e_{ja})=\frac{\widehat{r}_i}{r}=\widehat{r}_i\frac{d}{dr}\mathrm{ln}(2\pi r),$$
(135)
and $`(_je_{ia})e_{ja}=0`$. This leads to the equilibrium distributions (107) quoted in section IV B 4.
In the gauge theory case, we have seen that the convention-dependent drift force vanishes for the transverse-only noise equation (75), but there still remains $`e_{ia}`$ dependence in the Fokker-Planck equation. Plugging in the transverse projection operator for $`e`$, one finds
$$_j(e_{ia}e_{ja})\frac{\delta }{\delta A_Y}P_{XY}=\frac{1}{2}P_{XY}\frac{\delta }{\delta A_Y}\mathrm{tr}\mathrm{ln}(D^2),$$
(136)
and, solving for the equilibrium distribution,
$$𝒫=\frac{\mathrm{exp}(𝒱/T)}{\sqrt{det(D^2)}}$$
(137)
up to an overall normalization constant.<sup>\*‡‡</sup><sup>\*‡‡</sup>\*‡‡ In the more general notation of footnote \*††IV C 2, the assumption that $`_i\theta _j^\alpha `$ is anti-symmetric in $`i`$ and $`j`$ leads to $`_j(e_{ia}e_{ja})=\frac{1}{2}e_{ij}_j\mathrm{ln}\sqrt{g}`$ and $`𝒫=\mathrm{exp}(V/T)/\sqrt{g}`$, where $`g`$ is the determinant of the inverse metric $`g_{\alpha \beta }`$. As mentioned above, $`\sqrt{det(D^2)}`$ just represents the gauge orbit volume, and the above distribution is analogous to the toy model result (107). As with the toy model, however, if we examine the transverse theory that is truly equivalent to the unprojected theory, then we should replace $`𝒱𝒱_{\mathrm{eff}}`$ and we recover the correct equilibrium distribution
$$𝒫=\frac{\mathrm{exp}(𝒱_{\mathrm{eff}}/T)}{\sqrt{det(D^2)}}=\mathrm{exp}(𝒱/T).$$
(138)
## ACKNOWLEDGMENTS
We thank Dietrich Bödeker and Guy Moore for useful conversations. We are especially indebted to Deitrich Bödeker for conversations, concerning an earlier version of this manuscript, which inspired us to clarify our understanding of the $`\omega k\gamma _\mathrm{g}`$ limit of the longitudinal sector. This work was supported, in part, by the U.S. Department of Energy under Grant Nos. DE-FG03-96ER40956 and DF-FC02-94ER40818.
## A The volume of gauge orbits
The natural metric on the space of gauge field vector potentials is
$$ds^2=_𝐱\mathrm{tr}\left[d𝐀^{}d𝐀\right].$$
(A1)
This is the unique metric (up to an overall multiplicative constant) which is invariant under both gauge transformations and spacetime symmetries. The gauge orbit passing through a particular gauge configuration $`𝐀`$ consists of all gauge transforms of $`𝐀`$. Within a neighborhood of $`𝐀`$, configurations on the gauge orbit may be parameterized as
$$𝐀^\mathrm{\Lambda }e^\mathrm{\Lambda }𝐃e^\mathrm{\Lambda },$$
(A2)
where $`\mathrm{\Lambda }`$ is an arbitrary generator of the gauge group (i.e., $`\mathrm{\Lambda }(𝐱)\mathrm{\Lambda }^a(𝐱)T^a`$ is a Lie-algebra valued function of $`𝐱`$), and $`𝐃=\mathbf{}+𝐀`$ is the covariant derivative with gauge field $`𝐀`$. Since $`𝐀^\mathrm{\Lambda }𝐀𝐃\mathrm{\Lambda }`$, the induced metric on the gauge orbit, evaluated at $`𝐀`$, is just
$$ds^2|_{\mathrm{orbit}}=_𝐱\mathrm{tr}\left[(𝐃\delta \mathrm{\Lambda })^{}(𝐃\delta \mathrm{\Lambda })\right]=_𝐱\mathrm{tr}\left[\delta \mathrm{\Lambda }^{}(D^2)\delta \mathrm{\Lambda }\right].$$
(A3)
Consequently the induced volume element on the orbit, evaluated at $`𝐀`$, is
$$dv=\sqrt{det(D^2)}d\mathrm{\Lambda },$$
(A4)
where $`d\mathrm{\Lambda }_{𝐱,a}d\mathrm{\Lambda }^a(𝐱)`$ denotes the flat measure on the gauge algebra. But the gauge-invariant Haar measure on the gauge group is just $`d\mathrm{\Lambda }`$ when evaluated at the identity. And, because the functional determinant $`det(D^2)`$ is gauge invariant, it is constant over the gauge orbit. So, globally, the volume element $`dv`$ equals $`\sqrt{det(D^2)}`$ times the Haar measure on the gauge group. Hence,
$$\frac{\text{orbit volume}}{\text{gauge group volume}}=\left[det(D^2)\right]^{1/2},$$
(A5)
and so $`\sqrt{det(D^2)}`$ is the gauge orbit volume up to an overall $`𝐀`$-independent normalization factor.
## B No gauge theory analog to toy model $`\theta `$
Return, for a moment, to the toy model described in Sec. IV B. The projected equation (88) has two obvious properties. First, the particle always moves in a direction perpendicular to the gauge orbits $`r=\text{const}`$. Second, moving according to this equation, the particle cannot reach any point in the configuration space, but instead remains confined to a slice in a configuration space where $`\theta `$ is a constant. In particular, starting from a point $`(r,\theta )`$, one cannot reach any point that is gauge equivalent to it, except the original point. In other words, if one fixes the gauge $`\theta =\theta _0`$, with $`\theta _0`$ some constant, then this gauge-fixing condition remains satisfied throughout the random walk.
Now consider the gauge theory. Eq. (75) describes a motion in the space of field configurations that is analogous to that described by the projected equation (88) in the toy model. In terms of the natural metric (A1), on the space of gauge configurations, one can easily check that the motion is always along directions perpendicular to deformations generated by gauge transformations (This is equivalent to satisfying the condition $`𝐃\dot{𝐀}=0`$.) The question we want to ask is whether the second property of our toy model still holds, i.e., is the motion confined to a slice in configuration space? It might be surprising that the answer to this question is negative, and one can travel throughout the whole configuration space even when restricted to trajectories whose tangents, at every point, are perpendicular to the intersecting gauge orbit. This negative answer is perhaps less surprising if one notices that there is no obvious gauge-fixing condition similar to $`\theta =\theta _0`$ that is conserved during the transverse-projected random walk (75). Thus, in the gauge theory, there is no equivalence of the parameter $`\theta `$.
This can be seen most directly by the explicit construction of a trajectory that remains perpendicular to gauge transformations at all times, but nevertheless connects two distinct points on the same gauge orbit. The trajectory we are going to construct starts at $`𝐀=0`$ and remains small all the time, so we can use perturbation theory. Let us denote the small parameter by $`ϵ`$. Consider first the following trajectory,
$$A_i(t,𝐱)=\{\begin{array}{cc}tC_i(𝐱),\hfill & 0<t<ϵ;\hfill \\ ϵC_i(𝐱)+(tϵ)D_i(𝐱),\hfill & ϵ<t<2ϵ;\hfill \\ (3ϵt)C_i(𝐱)+ϵD_i(𝐱),\hfill & 2ϵ<t<3ϵ;\hfill \\ (4ϵt)D_i(𝐱),\hfill & 3ϵ<t<4ϵ.\hfill \end{array}$$
(B1)
Provided that $`C_i`$ and $`D_i`$ are transverse, $`_iC_i=_iD_i=0`$, it is trivial to check that $`𝐃\dot{𝐀}=0`$ to leading order in $`ϵ`$. This means the trajectory is everywhere perpendicular (to within $`O(ϵ^2)`$) to the gauge orbits it passes through.
This trajectory may seem uninteresting, since it is a closed loop that starts at $`𝐀=0`$ and ends at the same point. The interest arises when we go to next-to-leading order in $`ϵ`$. At next-to-leading order, the trajectory (B1) does not satisfy the condition $`𝐃\dot{𝐀}=0`$. For example, when $`ϵ<t<2ϵ`$, $`𝐃\dot{𝐀}=ϵ[C_i,D_i]`$. To correct for this deviation, we need to modify $`A_i`$ in the following way:
$$A_i(t,𝐱)=ϵC_i(𝐱)+(tϵ)D_i(𝐱)+(tϵ)\alpha _i(𝐱),ϵ<t<2ϵ,$$
(B2)
where $`\alpha _i=O(ϵ)`$, so the term involving $`\alpha _i`$ is of higher order than the other terms. Eq. (B2) satisfies the condition $`𝐃\dot{𝐀}=0`$ if one places the following constraint on $`\alpha _i`$:
$$_i\dot{\alpha }_i+ϵ[C_i,D_i]=0.$$
(B3)
This condition can be satisfied by choosing
$$\dot{\alpha }_i=ϵ_i\mathbf{}^2[C_j,D_j].$$
(B4)
In this manner, one can modify the whole trajectory (B1) so that the condition $`𝐃\dot{𝐀}=0`$ is satisfied through next-to-leading order. The result is
$$A_i(t,𝐱)=\{\begin{array}{cc}tC_i,\hfill & 0<t<ϵ;\hfill \\ ϵC_i+(tϵ)D_iϵ(tϵ)_i\mathbf{}^2[C_j,D_j],\hfill & ϵ<t<2ϵ;\hfill \\ (3ϵt)C_i+ϵD_i(ϵ^2+ϵ(t2ϵ))_i\mathbf{}^2[C_j,D_j],\hfill & 2ϵ<t<3ϵ;\hfill \\ (4ϵt)D_i2ϵ^2_i\mathbf{}^2[C_j,D_j],\hfill & 3ϵ<t<4ϵ.\hfill \end{array}$$
(B5)
One sees that the trajectory now starts at $`A_i=0`$ at $`t=0`$ and runs to $`A_i=2ϵ^2_i\mathbf{}^2[C_j,D_j]`$ at $`t=4ϵ`$. \[Including still higher-order corrections would only change this by $`O(ϵ^3)`$.\] At the end point, $`𝐀`$ is a pure gauge, and, for a general choice of $`C_i`$ and $`D_i`$, nonzero. Therefore, this trajectory presents a simple example of how, following a transverse projected trajectory, one can go from one field configuration to a field configuration that is gauge equivalent to it. From this result one may show (provided the gauge group is semi-simple) that any field configuration in the vicinity of $`𝐀=0`$ is accessible to the transverse-projected random walk (75). Hence, there can be no analog of the toy model “slice parameter” $`\theta `$ for transverse-projected dynamics in non-Abelian gauge theories.
|
no-problem/9901/nucl-th9901093.html
|
ar5iv
|
text
|
# Period of the 𝛾-ray staggering in the 150Gd superdeformed region
## Abstract
It has been previously proposed to explain $`\gamma `$-ray staggerings in the deexcitation of some superdeformed bands in the <sup>150</sup>Gd region in terms of a coupling between global rotation and intrinsic vortical modes. The observed 4$`\mathrm{}`$ period for the phenomenon is suggested from our microscopic Routhian calculations using the Skyrme SkM effective interaction.
This brief paper completes the theoretical investigation of the coupling between global rotation and intrinsic vortical modes proposed in Ref. as a tentative explanation for the rather rare band staggering observed in the decay of some superdeformed bands. Indeed while such a phenomenon was first claimed in <sup>149</sup>Gd , then in <sup>194</sup>Hg , and possibly in the $`A130`$ superdeformed region , its existence has been confirmed and extended to a couple of neighboring nuclei in the first case and ruled out in the second case . Various theoretical explanations have been proposed besides the one which we have discussed in Ref. . When making an attempt to describe such data, one should address the three following questions: (i) What is the mechanism at work? (ii) Why is this phenomenon so scarce and what makes it appear where it is observed ($`A150`$)? (iii) What is tuning the period of the staggering?
While some answers have been provided in our previous papers to the two first questions, we aim here at addressing the third one. In , a staggering in transition energies within the yrast band was shown to appear in cases where the relevant collective energy is quadratic in two quantized quantities. A particular realization of the latter, well suited to the description of fastly rotating superdeformed states, corresponds to the parallel coupling of global rotation and intrinsic vortical modes in ellipsoidally deformed bodies, known after Chandrasekhar as *S* ellipsoids. In this case, the two commuting operators are the projections on the quantification axis of the angular momentum operator and of the so-called Kelvin circulation operator (see, e.g., Ref. ), hereafter called $`I`$ and $`J`$, respectively. Whereas it is trivial to show that the Kelvin circulation operator satisfies the usual commutation relations of an angular momentum, its consideration as a quantity which is approximately a constant of the motion is a basic assumption of our collective model. Its exact amount of violation would deserve a specific microscopic study. A self-consistent description of such a coupling can be made upon generalizing the Routhian approach , amounting thus to solve the following variational problem:
$$\delta \langle H-\mathrm{\Omega }I-\omega J\rangle =0,$$
(1)
where $`H`$ is the microscopic Hamiltonian and $`\mathrm{\Omega }`$ ($`\omega `$) is an angular velocity associated with the global rotation (intrinsic vortical) mode. This approach was first investigated in Ref. within a simple oscillator mean-field approximation. There, the assumption of an energy which is quadratic in ($`\mathrm{\Omega },\omega `$), or equivalently in ($`I,J`$), was shown to be rather well satisfied. Recently, fully self-consistent solutions of the above variational problem have become possible upon using the standard SkM\* force . Details about such calculations and some of their results will be discussed below.
In another paper , a physical analogy was established, stemming from the well-known similarity between the motion of a charge in a magnetic field and that of a mass in a rotating frame. It relates the staggering phenomenon to the observation of persistent currents in mesoscopic conductor or semiconductor rings as a manifestation of an inherent Aharonov-Bohm phase. Apart from its obvious physical appeal, this consideration has set a framework in which one is able to understand the scarcity of both phenomena. The scarcity results from the necessity of securing a sufficiently low level of a specific damping, which in the staggering case yields a condition on the width of the superdeformed state in the relevant collective variable (e.g., the usual axial quadrupole deformation $`\beta `$). It was deduced from microscopic calculations of the associated mass parameters , using the usual D1S Gogny effective force , that such a condition of existence was generally not met for Ce and Hg superdeformed states, nor for such states in Gd isotopes other than <sup>150</sup>Gd and possibly <sup>148</sup>Gd.
As a consequence of all these studies, the explanation for the staggering phenomenon suggested in Ref. is of course neither exempt from a priori questions nor deemed the only possible one, yet it is reinforced as a rather likely candidate. However, one point remains to be clarified concerning the $`4\mathrm{}`$ period of the staggering. In Refs. this particular period was associated with a ratio of $`I`$ and $`J`$ values close to 2 for the considered states. From either semiclassical estimates of the relevant inertia parameters or actual microscopic calculations in the harmonic oscillator mean-field approximation, such a ratio is much more consistent with hyperdeformation than with the actual superdeformation of the nuclear states. It is on this point that the new self-consistent approach of Ref. brings some interesting insight.
Here the generalized Routhian variational problem is solved within the Hartree-Fock approximation. Numerical codes breaking the time-reversal and axial symmetries, as required by the considered physical problem, determine the single-particle wave functions either at the nodes of a spatial mesh or from an expansion on a suitably chosen truncated basis. In the latter case, all approaches so far , to the best of our knowledge, have used a triaxial basis. In our calculations an alternative method, shown to be less time consuming in most cases, has been developed in which the expansion is made on an axial basis. The dependence of the various fields and densities on the angular variable of the cylindrical coordinate system is then handled by convenient Fourier expansions.
Self-consistent solutions of the usual Routhian problem (i.e., without constraint on the Kelvin circulation, namely for $`\omega =0`$) have first been performed for the <sup>150</sup>Gd nucleus. As usual, to obtain a solution corresponding to a given (quantized) value of $`I`$, one solves the problem iteratively so as to determine the angular velocity $`\mathrm{\Omega }`$ that yields the requested angular momentum. For such solutions, one can estimate the dynamical moment of inertia by numerical differentiation. As suggested in Ref. , among the possible expressions for $`𝒥^{(2)}`$, we choose for numerical stability reasons to evaluate
$$𝒥^{(2)}=\frac{\partial I}{\partial \mathrm{\Omega }}.$$
(2)
Such derivatives have been determined from a two-point formula, thus involving, for each studied value of the angular momentum, three different Hartree-Fock calculations (differing typically around $`I=50\mathrm{}`$ by $`\mathrm{\Delta }(\mathrm{}\mathrm{\Omega })\approx 0.07`$ keV). As discussed, e.g., in Ref. , even in the absence of any constraint on the intrinsic vortical currents, the Kelvin circulation operator takes a finite value which is easily estimated from our solutions. The vector components of this operator are defined (see, e.g., Ref. ) by their action on single-particle wave functions as
$$J_\alpha =\frac{\mathrm{}}{i}\sum _{\beta ,\gamma }ϵ_{\alpha \beta \gamma }\frac{c_\gamma }{c_\beta }x_\beta \frac{\partial }{\partial x_\gamma },$$
(3)
where $`ϵ_{\alpha \beta \gamma }`$ is the completely antisymmetrical third-rank tensor, $`x_\alpha `$ the $`\alpha `$ component of the particle position vector, and $`c_\alpha `$ the corresponding length scale factor. These factors are deduced from the values of the quadrupole tensor calculated from our variational solutions upon making an ellipsoidal shape approximation.
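As an aside, the two-point evaluation of Eq. (2) amounts to a simple finite difference; in the sketch below the $`(\mathrm{}\mathrm{\Omega },I)`$ pairs are hypothetical placeholders (with the $`0.07`$ keV spacing mentioned above), not outputs of the actual calculations:

```python
# Two-point finite-difference estimate of J^(2) = dI/dOmega from three
# cranked Hartree-Fock solutions. The (hbar*Omega, I) pairs below are
# hypothetical placeholders with the ~0.07 keV spacing quoted in the text.
hbar_omega = [0.374930, 0.375000, 0.375070]   # hbar*Omega (MeV)
spin       = [49.995, 50.000, 50.005]         # I (units of hbar)

J2 = (spin[2] - spin[0]) / (hbar_omega[2] - hbar_omega[0])  # hbar^2/MeV
print(f"J^(2) = {J2:.0f} hbar^2/MeV")
```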
Some results of our Routhian calculations ($`\omega =0`$) for a range of $`I`$ values from $`I=40\mathrm{}`$ to $`I=66\mathrm{}`$, including states of relevance for the observed superdeformation, are displayed in Fig. 1. It shows the variation of the moment of inertia $`𝒥^{(2)}`$ as a function of $`I`$. As a matter of fact, it fits rather well, as it should, with the pure Hartree-Fock part of the results obtained in Ref. with the same interaction. The calculated yrast value $`J_{\mathrm{yrast}}(I)`$ of the Kelvin circulation as a function of $`I`$ falls very nicely on the straight line:
$$J_{\mathrm{yrast}}(I)\approx 0.8I+1.0\mathrm{}.$$
(4)
This is not at all surprising insofar as the quadratic approximation for the collective energy discussed in Ref. is valid over the whole considered range of values of $`I`$. However, this relation makes it very clear that around $`I=50\mathrm{}`$, e.g., the ratio $`I/J`$ is indeed very far from the value of 2 which would lead to the observed staggering period.
As a result of the continuity equation, tangential intrinsic vortical excitations yield phase-space modifications amounting only to a momentum redistribution . In that respect they are indeed quite similar to pairing correlations. This has been, for instance, illustrated from another point of view in Ref. . There, current patterns computed as functions of the pairing gap indeed display the same type of variation as classical *S*-ellipsoid velocity fields do as functions of the angular velocity of the intrinsic vortical modes. Therefore one may consider the latter modes as a collective-model translation of pairing correlations, in much the same way as small-amplitude vibrational collective modes can model random phase approximation (RPA) correlations.
In order to implement this type of excitation on top of Hartree-Fock rotational solutions, we have solved the generalized Routhian problem with two constraints ($`\mathrm{\Omega }`$ and $`\omega `$, both $`\ne 0`$). Specifically, we have made calculations at fixed values of $`I`$ upon varying $`J`$. Clearly the single-constraint solution corresponds to the yrast state (up to the quantization of $`J`$, of course). It is therefore no surprise to find it at the minimum of a roughly parabolic energy curve $`E_I(J)`$, as demonstrated in Fig. 2. Note in passing that the general pattern of such energy curves $`E_I(J)`$ for various values of $`I`$ is consistent with a quadratic dependence of the total energy on $`I`$ and $`J`$, as assumed in Ref. and calculated in the simple model case of Ref. .
Now, to make excursions out of the yrast solution for a given value of $`I`$, one has to perform a search on the two Lagrange multipliers, whose result is exemplified in Fig. 3 for the $`I=50\mathrm{}`$ case. It is rather significant that, to get $`J`$ values smaller than the yrast value, one should add a counter-rotating intrinsic vortical mode. This can be explained by using the quadratic approximation for the total energy. One finds
$$I=C\mathrm{\Omega }+B\omega $$
(6)
$$J=B\mathrm{\Omega }+A\omega $$
(7)
where $`A`$, $`B`$, and $`C`$ are (positive) inertia parameters defined in Ref. . From the above, one finds trivially
$$J=\frac{BI}{C}+\omega \frac{AC-B^2}{C},$$
(8)
where the coefficient of $`\omega `$ is positive for well-deformed nuclei, as easily seen from the semiclassical estimates of Refs. . Therefore, starting from $`\mathrm{\Omega }>0`$ for the yrast state, in order to decrease $`J`$ one decreases $`\omega `$ from its vanishing yrast value while increasing $`\mathrm{\Omega }`$ (keeping $`I`$ constant), so that one gets $`\omega \mathrm{\Omega }<0`$. Conversely, the same reasoning yields $`\omega \mathrm{\Omega }>0`$ when increasing $`J`$ away from its yrast value.
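The step from Eqs. (6) and (7) to Eq. (8) also makes the sign argument explicit: solving Eq. (6) for the angular velocity gives $`\mathrm{\Omega }=(I-B\omega )/C`$, so that at fixed $`I`$ one has $`\delta \mathrm{\Omega }=-(B/C)\delta \omega `$, while Eq. (8) gives $`\delta J=[(AC-B^2)/C]\delta \omega `$. With $`B`$, $`C`$, and $`AC-B^2`$ all positive, lowering $`J`$ (i.e., $`\delta \omega <0`$) necessarily raises $`\mathrm{\Omega }`$, hence $`\omega \mathrm{\Omega }<0`$.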
Pairing correlations act against global rotation (see, e.g., Ref. ). In our collective-model account of such correlations, this corresponds to a product of angular velocities such that $`\omega \mathrm{\Omega }<0`$. Consequently, starting from a noncorrelated (Hartree-Fock) solution, the inclusion of pairing correlations will tend to decrease the Kelvin circulation value (see Fig. 3). In Fig. 4, the dynamical moments of inertia $`𝒥^{(2)}`$ are plotted for three values of $`I`$ around $`50\mathrm{}`$ as functions of $`J`$. One sees that pairing correlations will indeed increase $`𝒥^{(2)}`$ from its Hartree-Fock value, as actually found in the Hartree-Fock-Bogoliubov calculations of Ref. . In that paper, the authors have shown, upon using simple yet realistic pairing matrix elements, that the correlations raise the moment of inertia to the vicinity of the experimental value (typically $`𝒥^{(2)}\approx 90\mathrm{}^2`$ MeV<sup>-1</sup>). It is striking that when constraining the intrinsic vortical mode to reproduce this value of $`𝒥^{(2)}`$, one gets a Kelvin circulation of $`\approx 25\mathrm{}`$, which fulfills precisely the $`I/J`$ ratio condition of being close to 2. The latter provides a $`4\mathrm{}`$ period for the oscillating behavior of the $`\gamma `$ transition energies in the <sup>150</sup>Gd region.
To confirm the above conclusion, it would be, of course, very interesting to perform variational generalized Routhian calculations within the Hartree-Fock-Bogoliubov approximation. We are currently working on it. Nevertheless, it seems to us very likely that our indirect estimate already leads to the conclusion that pairing correlations play the major role in fine-tuning the staggering period to what is experimentally observed.
###### Acknowledgements.
A part of this work has benefited from an IN2P3 (CNRS)–JINR grant (No. 97-30), which is gratefully acknowledged.
# Bragg spectroscopy of a Bose–Einstein condensate
## Abstract
Properties of a Bose–Einstein condensate were studied by stimulated, two–photon Bragg scattering. The high momentum and energy resolution of this method allowed a spectroscopic measurement of the mean-field energy and of the intrinsic momentum uncertainty of the condensate. The coherence length of the condensate was shown to be equal to its size. Bragg spectroscopy can be used to determine the dynamic structure factor over a wide range of energy and momentum transfers.
The first evidence for Bose-Einstein condensation in dilute gases was obtained from a sudden narrowing of the velocity distribution observed for ballistically expanding clouds of atoms . Indeed, most textbooks describe Bose-Einstein condensation as condensation in momentum space . However, the dominant contribution to the observed momentum distribution of the expanding condensate was the released interaction energy (mean–field energy), resulting in momentum distributions much broader than the zero–point motion of the ground state of the harmonic trapping potential. Since the size of a trapped condensate with repulsive interactions is larger than that of the trap ground state, the momentum distribution should be considerably narrower than that of the trap ground state. In this paper, we show the momentum distribution of a trapped condensate to be Heisenberg uncertainty limited by its finite size. This is equivalent to showing that the coherence length of the condensate is equal to its physical size.
Sub–recoil momentum resolution has been previously achieved by resolving the Doppler width of a Raman transition between different hyperfine states or of a two–photon transition to a metastable excited state . Here we use Bragg scattering where two momentum states of the same ground state are connected by a stimulated two–photon process . This process can be used to probe density fluctuations of the system and thus to measure directly the dynamic structure factor S(q,$`\nu `$), which is the Fourier transform of the density–density correlation function and is central to the theoretical description of many–body systems . In contrast to measuring S(q,$`\nu `$) with inelastic neutron scattering like in superfluid helium , or using inelastic light scattering , Bragg scattering as used here is a stimulated process which greatly enhances resolution and sensitivity.
Bragg scattering of atoms from a light grating was first demonstrated in 1988 and has been used to manipulate atomic samples in atom interferometers , in de Broglie wave frequency shifters , and also to couple out or manipulate a Bose–Einstein condensate . Small angle Bragg scattering, called recoil–induced resonances, has been used for thermometry of laser–cooled atoms . In this work we establish Bragg scattering as a spectroscopic technique to probe properties of the condensate. We refer to it as Bragg spectroscopy in analogy to Raman spectroscopy which involves different internal states.
The absorption of $`N`$ photons from one laser beam and stimulated emission into a second laser beam constitutes an $`N`$–th order Bragg scattering process. The momentum transfer $`q`$ and energy transfer $`h\nu `$ are given by $`q=2N\mathrm{}k\mathrm{sin}(\vartheta /2)`$ and $`\nu =N\mathrm{\Delta }\nu `$, where $`\vartheta `$ is the angle between the two laser beams with wave vector $`k`$ and frequency difference $`\mathrm{\Delta }\nu .`$
For non–interacting atoms with initial momentum $`\mathrm{}k_i`$, the resonance is given by the Bragg condition $`h\nu =q^2/2m+\mathrm{}k_iq/m`$, which simply reflects energy and momentum conservation for a free particle. The second term is the Doppler shift of the resonance and allows the use of Bragg resonances in a velocity–selective way .
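To make the scales concrete, the resonance frequency for first-order ($`N=1`$) scattering from counterpropagating beams ($`q=2\mathrm{}k`$) can be evaluated in a few lines; the sketch assumes <sup>23</sup>Na atoms and $`\lambda =589`$ nm, consistent with the transition quoted later but not stated explicitly at this point:

```python
# Free-particle Bragg resonance, first order (N = 1), counterpropagating
# beams (q = 2 hbar k). Assumes 23Na atoms and lambda = 589 nm.
h   = 6.62607e-34          # Planck constant (J s)
m   = 23 * 1.66054e-27     # mass of 23Na (kg)
lam = 589e-9               # laser wavelength (m)

q   = 2 * h / lam          # momentum transfer (kg m/s)
nu0 = q**2 / (2 * m * h)   # resonance frequency q^2/2m in Hz
print(f"nu_0 = {nu0 / 1e3:.0f} kHz")                    # ~100 kHz

v = 0.3e-3                 # atomic velocity along q (m/s)
print(f"Doppler shift at 0.3 mm/s: {q * v / h:.0f} Hz") # ~1 kHz
```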
For a weakly interacting homogeneous condensate at density $`n`$, the dispersion relation has the Bogoliubov form
$$\nu =\sqrt{\nu _0^2+2\nu _0nU/h},$$
(1)
where $`nU=n4\pi \mathrm{}^2a/m`$ is the chemical potential, with $`a`$ and $`m`$ denoting the scattering length and the mass, respectively, and $`h\nu _0=q^2/2m`$. At low energies the excitation spectrum is phonon–like, obeying $`h\nu =cq`$, where $`c`$ is the speed of sound . For energies $`h\nu \gg nU`$ the spectrum is particle–like
$$\nu \approx \nu _0+nU/h.$$
(2)
The mean–field shift $`nU`$ reflects the exchange term in the interatomic interactions: a particle with momentum $`q`$ experiences twice the mean–field energy of a particle in the condensate . We use this property to determine the condensate mean–field energy spectroscopically. This is related to, but different from, the mean–field shift due to interactions with an electronically excited state, which was used to identify BEC in atomic hydrogen .
Eq. (1) is the excitation spectrum of a homogeneous condensate with initial momentum $`\mathrm{}k_i=0`$. The inhomogeneous trapping potential adds two features which broaden the resonance: a distribution of initial momenta due to the finite size of the cloud, and a distribution of mean–field shifts due to the density variation over space.
The momentum distribution along the x–axis is given by the Fourier transform of the wavefunction: $`f(p_x)=\left|\int dx\,dy\,dz\,e^{ip_xx/\mathrm{}}\mathrm{\Psi }(x,y,z)\right|^2`$. In the Thomas–Fermi approximation the wave function $`\mathrm{\Psi }(x,y,z)`$ in a harmonic trapping potential obeys $`\left[\mathrm{\Psi }(x,y,z)\right]^2=n_0(1-(x/x_0)^2-(y/y_0)^2-(z/z_0)^2)`$, where $`n_0`$ denotes the peak density. The size of the wavefunction is determined by $`n_0`$ and the trapping frequencies $`\nu _i`$ through the coefficients $`x_0,y_0,z_0`$: $`x_0=\sqrt{2n_0U/\left[m(2\pi \nu _x)^2\right]}`$ (and similarly for $`y_0,z_0`$). The line shape $`I_p(p_x)`$ is proportional to the square of the Fourier coefficients
$$I_p(p_x)\propto \left[J_2(p_xx_0/\mathrm{})/(p_xx_0/\mathrm{})^2\right]^2,$$
(3)
where $`J_2`$ denotes the Bessel function of order 2. This curve is very similar to a Gaussian and has an rms–width of $`\mathrm{\Delta }p_x=\sqrt{21/8}\mathrm{}/x_0`$. Thus, the corresponding Doppler broadening $`\mathrm{\Delta }\nu _p=\sqrt{21/8}q/2\pi mx_0`$ of the Bragg resonance is inversely proportional to the condensate size $`x_0`$ and does not depend explicitly on the number of atoms.
The same parabolic wavefunction gives the (normalized) density distribution $`(15n/4n_0)\sqrt{1-n/n_0}`$. The simplest model for the spectroscopic lineshape $`I_n(\nu )`$ due to the inhomogeneous density assumes that a volume element with local density $`n`$ leads to a line shift $`nU`$ (Eq. (2)):
$$I_n(\nu )=\frac{15h(\nu -\nu _0)}{4n_0U}\sqrt{1-\frac{h(\nu -\nu _0)}{n_0U}},$$
(4)
which has a total width of $`n_0U/h`$, a maximum at $`2n_0U/3h`$, an average value (first moment) of $`4n_0U/7h`$, and an rms–width of $`\mathrm{\Delta }\nu _n=\sqrt{8/147}n_0U/h`$ (all measured from $`\nu _0`$). In contrast to the Doppler broadening due to the finite size, the mean–field broadening depends only on the density and not explicitly on the size.
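As a consistency check, these moments follow directly from Eq. (4); in the reduced variable $`x=h(\nu -\nu _0)/n_0U`$ the lineshape is $`(15x/4)\sqrt{1-x}`$ on $`[0,1]`$, and a short numerical sketch confirms the values:

```python
import numpy as np
from scipy.integrate import quad

# Moments of the mean-field lineshape I_n(x) = (15 x / 4) * sqrt(1 - x),
# with x = h(nu - nu_0)/(n_0 U) running over [0, 1].
I_n = lambda x: 3.75 * x * np.sqrt(1.0 - x)

norm = quad(I_n, 0.0, 1.0)[0]                       # 1 (normalized)
mean = quad(lambda x: x * I_n(x), 0.0, 1.0)[0]      # 4/7  ~ 0.571
msq  = quad(lambda x: x**2 * I_n(x), 0.0, 1.0)[0]   # 8/21 ~ 0.381

print(norm, mean, np.sqrt(msq - mean**2))           # rms = sqrt(8/147) ~ 0.233
```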
In our experiments the combined broadening mechanisms represented by Eqs. (3) and (4) have to be considered. While the exact calculation of the lineshape requires detailed knowledge of the excitation wavefunctions, the total line shift (first moment) and rms–width can be calculated using sum rules and Fermi's Golden Rule. Thus, it can be rigorously shown that the total line shift remains $`4n_0U/7h`$, while the rms–width $`\mathrm{\Delta }\nu =\sqrt{\mathrm{\Delta }\nu _p^2+\mathrm{\Delta }\nu _n^2}`$ is the quadrature sum of the Doppler and mean–field widths .
We produced magnetically trapped, cigar–shaped Bose–Einstein condensates as in previous work . In order to study the resonance as a function of density and size, we prepared condensates using two different trapping frequencies and varied the number of atoms by removing a variable fraction using the rf–output coupler . The density of the condensate was determined from the expansion of the cloud in time–of–flight and the size from the atom number and the trapping frequencies . Bragg scattering was performed by using two counterpropagating beams aligned perpendicularly to the weak axis of the trap. Spectra were taken by pulsing on the light shortly before switching off the trap and determining the number of scattered atoms as a function of the frequency difference between the two Bragg beams. Since the kinetic energy of the scattered atoms was much larger than the mean–field energy, they were well separated from the unscattered cloud after a typical ballistic expansion time of 20 msec. Center frequencies and widths were determined from Gaussian fits to the spectra.
Duration, intensity and detuning of the Bragg pulses had to be chosen carefully. The instrumental resolution is limited by the pulse duration $`\delta t_{pulse}`$ due to its Fourier spectrum, in our case requiring $`\delta t_{pulse}>250\mu `$s for sub–kHz resolution. The maximum pulse duration was limited to less than one quarter of the trap period, $`\delta t_{pulse}<500\mu `$s, by which time the initially scattered atoms would come to rest and thus would be indistinguishable from the unscattered atoms in time–of–flight. The light intensity was adjusted to set the peak efficiency to about 20 %. Sufficient detuning was necessary to avoid heating of the sample. The ratio of the two–photon rate $`\omega _R^2/4\mathrm{\Delta }`$ to the spontaneous scattering rate $`\omega _R^2\mathrm{\Gamma }/2\mathrm{\Delta }^2`$ is $`\mathrm{\Delta }/2\mathrm{\Gamma }`$, where $`\omega _R`$ denotes the single beam Rabi frequency, $`\mathrm{\Delta }`$ the detuning and $`\mathrm{\Gamma }`$ the natural linewidth. Spontaneous scattering was negligible for the chosen detuning of 1.77 GHz below the 3S<sub>1/2</sub> F=1 $``$ 3P<sub>3/2</sub> F=2 transition.
The relative detuning of the two Bragg beams was realized in two ways. In one scheme, a beam was split and sent through two independent acousto–optical modulators driven with the appropriate difference frequency, and then overlapped in a counterpropagating configuration. Alternatively, a single beam was modulated with two frequencies separated by the relative detuning and backreflected. Both methods are insensitive to frequency drifts of the laser since the Bragg process only depends on the relative frequency of the two beams, which is controlled by rf–synthesizers. The second method simultaneously scattered atoms in the $`+x`$ and $`x`$ directions and was thus helpful to identify motion of the cloud. We estimate that residual vibrational noise broadened the line by less than 1 kHz. This resolution corresponds to a velocity resolution of 0.3 mm/s or 1 % of the single–photon recoil. At a radial trapping frequency of 200 Hz, we had to avoid any motion of the cloud with an amplitude larger than 0.2 $`\mu m`$.
Fig. 1 shows typical spectra, taken both for a trapped condensate and after 3 ms time of flight when the mean–field energy was fully converted into kinetic energy. The rms–width of the resonance for the ballistically expanding cloud is 20 kHz, which is much narrower than the 65 kHz wide distribution of a thermal cloud at 1 $`\mu `$K, a typical value for the BEC transition temperature under our conditions.
We could not measure the thermal distribution with the same pulse duration as for the condensate since the fraction of scattered atoms was too small due to the broad resonance. The spectra for the thermal cloud and the expanding condensate correspond to the spatial distributions observed by absorption imaging after sufficiently long time of flight. With this technique, the BEC transition is indicated by a sudden narrowing of the time–of–flight distribution by a factor of three. Using Bragg spectroscopy, the signature of BEC is much more dramatic — the condensate resonance is more than thirty times narrower than of the thermal cloud, and indeed narrower than the ground state of the trap. This demonstrates that Bragg spectroscopy is a superior way to probe the BEC transition.
Fig. 2 shows the conversion of mean–field energy into kinetic energy after the atoms are released from the trap, indicated by a broadening of the Bragg resonance from about 2 kHz to 20 kHz. After 3 ms, the cloud has reached its asymptotic velocity $`v_{\mathrm{}}`$. Fig. 2 agrees with the velocity evolution $`v_r=v_{\mathrm{}}2\pi \nu _rt/\sqrt{1+(2\pi \nu _rt)^2}`$ expected from the scaling laws given in Ref. , which are derived for cigar–shaped condensates with large aspect ratios in the Thomas–Fermi approximation. To compare with the data, the Doppler width due to the velocity evolution, the mean–field width, and the finite–size width were added in quadrature, assuming Gaussian shapes. The finite–size width was calculated from the predicted evolution of the size $`x_r=x_0\sqrt{1+(2\pi \nu _rt)^2}`$ .
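Using the radial trap frequency of 200 Hz quoted above, a one-line evaluation of the scaling law confirms that the expansion velocity is essentially saturated at 3 ms:

```python
import math

# v_r(t)/v_inf = (2 pi nu_r t) / sqrt(1 + (2 pi nu_r t)^2), with the radial
# trap frequency and expansion time quoted in the text.
nu_r, t = 200.0, 3e-3            # Hz, s
x = 2.0 * math.pi * nu_r * t
print(f"v_r / v_inf = {x / math.sqrt(1.0 + x**2):.3f}")   # ~0.97
```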
The narrow resonance of the trapped condensate (Fig. 1) was studied as a function of the condensate density and size. Fig. 3 (a) demonstrates the linear dependence of the frequency shift on the density. The slope of the linear fit corresponds to $`(0.54\pm 0.07)n_0U/h`$, in excellent agreement with the prediction of $`4n_0U/7h`$. In Fig. 3 (b), the expected widths due to the mean–field energy and finite size are shown for the two different trapping frequencies studied. The data agree well with the solid lines, which represent the quadrature sum of the two contributions. To demonstrate the finite–size effect the same data are shown in Fig. 3 (c) after subtracting the mean–field broadening and the finite pulse–length broadening (0.5 kHz). The linewidths are consistent with the expected $`1/x_0`$ dependence. Even without these corrections the measured linewidths are within 20 % of the value expected due to the Heisenberg–uncertainty limited momentum distribution (Fig. 3 (b)).
The momentum spread of the condensate is limited by its coherence length $`x_c`$ which, in the case of long–range order, should be equal to the size $`x_0`$ of the condensate. Our results show that $`x_c\approx x_0`$ in the radial direction of the trap. This quantitatively confirms the earlier qualitative conclusion reached by interfering two condensates . In particular, our measurements indicate that the condensate does not have phase fluctuations, i.e. that it does not consist of smaller quasi–condensates with random relative phases. It would be interesting to study this aspect during the formation of the condensate, e.g. after suddenly quenching the system below the BEC transition temperature , and observe the disappearance of phase fluctuations.
In this work we have determined the normal mode of the condensate at a momentum transfer of two photon recoils, corresponding to an energy transfer $`h\nu _0`$ with $`\nu _0=100`$ kHz. Different momentum transfers are possible by changing the angle between the Bragg beams and/or the order $`N`$, thus enabling measurements of the dynamical structure function $`S(𝐪,\nu )`$ over a wide range of parameters. At low momentum transfer, the lineshape is dominated by the mean–field energy and by phonon–like collective excitations, whereas at high momentum transfers, the linewidth mainly reflects the momentum distribution of individual atoms. This is analogous to neutron scattering in liquid helium, where slow neutrons were used to observe the phonon and roton part of the dispersion curve, and fast neutrons were used to determine the zero–momentum peak of the condensate . While we have observed higher–order Bragg scattering up to third order in the trapped condensate using higher laser intensities, its spectroscopic use was precluded by severe Rayleigh scattering, and would require larger detuning from the atomic resonance.
The use of inelastic light scattering to determine the structure factor of a Bose–Einstein condensate was discussed in several theoretical papers . It would require the analysis of scattered light with kHz resolution and suffers from a strong background of coherently scattered light . Bragg spectroscopy has distinct advantages because it is a stimulated, background–free process in which the momentum transfer and energy are pre–determined by the laser beams rather than post–determined from measurements of the momentum and energy of the scattered particle.
In conclusion, we have established Bragg spectroscopy as a new tool to measure properties of a condensate with spectroscopic precision. We have demonstrated its capability to perform high–resolution velocimetry by resolving the narrow momentum distribution of a trapped condensate and by observing the acceleration phase in ballistic expansion. Since the momentum transfer can be adjusted over a wide range, Bragg spectroscopy can be used to probe such diverse properties as collective excitations, mean–field energies, coherence properties, vortices or persistent currents.
This work was supported by the Office of Naval Research, NSF, Joint Services Electronics Program (ARO), NASA, and the David and Lucile Packard Foundation. A.P.C. would like to acknowledge support from NSF, D.M.S.-K. from the JSEP Graduate Fellowship Program, and J.S. from the Alexander von Humboldt–Foundation.
# Strange Asymmetries in the Nucleon Sea
## I Introduction
The sea of the nucleon continues to intrigue nuclear and particle physicists seeking to understand its structure and dynamical origin. The recent results from the E866 Collaboration at Fermilab, for example, on Drell-Yan production in proton–proton and proton–deuteron scattering, from which the $`x`$-dependence of the $`\overline{d}/\overline{u}`$ ratio was extracted, present a challenge to models of the nucleon’s structure . These data quite unambiguously indicate non-trivial non-perturbative effects in the proton’s sea which give rise to a rather large asymmetry in the light antiquark sector.
At somewhat lower energies, the HAPPEX Collaboration has recently presented results on the strange electromagnetic form factors of the proton obtained from parity-violating electron scattering at Jefferson Lab. The experiment found $`G_E^S+rG_M^S|_{(\mathrm{HAPPEX})}=0.023\pm 0.048`$ at an average $`Q^2`$ of 0.48 GeV<sup>2</sup>, with $`r\approx 0.4`$ for the HAPPEX kinematics. This result is consistent with the earlier experiment by the SAMPLE Collaboration at MIT-Bates , $`G_{M(\mathrm{SAMPLE})}^S=+0.23\pm 0.44`$ at $`Q^2=0.1`$ GeV<sup>2</sup>.
These experiments are extremely valuable in our quest to arrive at a consistent picture of the nucleon’s substructure. While valence quark models have provided considerable insight into the structure of the nucleon’s core, describing the dynamics of the sea of the nucleon is considerably more model-dependent. Nevertheless, the nucleon sea provides a unique testing ground for QCD models, since a sea generated purely perturbatively generally results in vanishing sea quark form factors and asymmetries.
While constrained by conservation laws requiring equal numbers of strange and antistrange quarks in the nucleon (which in deep-inelastic scattering language corresponds to equal first moments of the $`s`$ and $`\overline{s}`$ quark distributions, or in elastic scattering, a zero strange electric form factor at $`Q^2=0`$), the distributions of $`s`$ and $`\overline{s}`$ quarks need not be identical in coordinate or momentum space . Indeed, since perturbative QCD predicts equal $`s`$ and $`\overline{s}`$ distributions, a difference between these would be clear evidence for non-perturbative effects in the structure of the nucleon.
Many models have been constructed in the literature which attempt to describe how strangeness arises in the nucleon. These range from vector meson dominance and quark models to Skyrme- and NJL-type models, as well as approaches which try to respect general properties such as analyticity and chiral symmetry . It is probably fair to say that none of these models is sophisticated enough, or has sufficient degrees of freedom, to provide a reliable microscopic description of all the strangeness observables. Nevertheless, in many cases such model studies can offer a glimpse of the underlying dynamics of strangeness generation.
A further complication arises with the need to consistently keep the same model degrees of freedom at different scales. For example, in deep-inelastic scattering, the natural degrees of freedom are partons on the light-cone; at low energies one can obtain reasonable descriptions of observables in terms of effective, constituent quarks. Until a rigorous connection is found between these (see however Ref.), use of quark-type models will be problematic if one aims for a unified description of strangeness observables.
A somewhat less ambitious endeavour than calculating structure functions from first principles is to accept the limitations of the QCD models, and try to see whether a piece of strangeness information from one experiment can be used to understand data from another experiment. This can also pose some challenges, as the validity of models is often limited to a specific energy range (for example, below the chiral symmetry breaking scale, for chiral hadronic models), forcing one to sometimes extrapolate models to regions where their reliability could be questioned.
One model which in the past has been applied to the study of both low energy observables, such as electromagnetic form factors, magnetic moments, hadron–hadron scattering, etc, as well as to deep-inelastic structure functions at much higher energies, is the chiral cloud model . Since there is a priori no resolution scale at which chiral symmetry can be ignored, a cloud of pseudoscalar mesons would be expected to play some role both at low and high energies, provided one can isolate the non-perturbative effects from the purely perturbative ones associated with QCD evolution between different scales.
One of the main difficulties with the implementation of cloud models of the nucleon in the past has been how to evaluate matrix elements of current operators between non-physical states, such as the virtual mesons or baryons of the cloud. Some of these can be circumvented by formulating the cloud on the light-cone (or in the infinite momentum frame). The light-cone offers many advantages for the description of hadron and nuclear structure and interactions, as advocated some 20 years ago by Lepage & Brodsky and others. Interpreting intermediate-state particles as being on their mass shells, one can avoid introducing ad hoc off-shell nucleon form factors or structure functions, and more consistently parameterise the momentum dependence of vertex functions at hadronic vertices. Furthermore, it is the natural framework for describing the partonic substructure of hadrons and nuclei (see recent work by Miller and Machleidt on the application of light-cone techniques to nuclear matter). Nevertheless, at low energies there are some subtle issues, such as rotational invariance, which require special attention when dealing with models on the light-cone, and particular care is paid to these in this paper.
A common assumption in the application of chiral cloud and other models is the impulse approximation, in which one truncates the Fock space at some convenient point (usually determined by one’s ability to calculate), and omits contributions from many-body currents. It is known, however, that the use of one-body currents alone for composite systems leads to a violation of Lorentz covariance . In Section II we discuss the consequences of this, and in particular outline how Lorentz covariance may be restored using the prescription of Karmanov et al.. More complete accounts can be found, for example, in Refs.. In Section III the light-cone chiral cloud model is applied to the strange axial charge, and important corrections are found to arise from the Lorentz symmetry breaking effects. A similar analysis for the strange magnetic form factor was reported in Ref., where the corrections changed even its sign, bringing it more into line with the SAMPLE measurement . The results for the strange electromagnetic form factors from the chiral cloud model are compared with the new data from the HAPPEX Collaboration at Jefferson Lab in Section IV. Finally, in Section V we summarise our findings and outline possible improvements of our analysis in future work.
## II Relativistic Covariance and the Light-Cone
As is well known, the light-cone formulation of dynamics has many advantages over other formulations when dealing with composite systems. Those most pertinent to the current discussion are connected with the fact that negative energy contributions to intermediate states are absent, and particles can be treated as if they were on-mass-shell. In practice, this means that matrix elements of hadrons (and nuclei) can be simply expressed as convolutions of the constituent particles’ matrix elements and the constituents’ distributions in the hadron. The issue of off-mass-shellness has plagued many earlier, instant-form calculations which attempted to incorporate relativistic effects . This is not to say that the light-cone formulation solves all problems which arise in instant-form approaches — rather, to some extent one merely reshuffles them according to what is most convenient for a given application. Furthermore, if one is to unambiguously correlate information on strangeness observables from different experiments, one must utilise the same framework consistently throughout. Since the light-cone is the appropriate framework for high-energy deep-inelastic scattering, it appears natural to use the same model on the light-cone to describe observables such as elastic form factors.
One should point out, however, that the issue of relativistic covariance is relevant in light-cone as well as in instant-form approaches whenever a Fock state suffers some kind of truncation, which invariably leads to a violation of Lorentz covariance. The problem exists because one-body currents, which do not include interactions and to which most model calculations are restricted, do not commute with the interaction-dependent generators of the Poincaré group. Consequently, an incorrect four-vector structure appears in the matrix elements of current operators, resulting in the presence of additional unphysical, or spurious, terms in a form factor expansion of any electroweak current matrix element. The spurious form factors would not be present if the Lorentz symmetry were exact.
In the explicitly covariant formulation of light-cone dynamics developed by Karmanov et al., a specific method was proposed for extracting the nucleon’s physical form factors, excluding the spurious contributions. In this formulation, the nucleon state vector is defined on a light-cone given by the invariant equation $`nx=0`$, where $`n`$ is an arbitrary light-like four vector, $`n^2=0`$. Since this formulation is covariant, the Lorentz symmetry is restored, but the matrix elements now depend on the position of the light-cone plane, $`n^\mu `$, which in principle no physical quantity should. Because of the explicit dependence on the light-cone orientation, a form factor expansion of the matrix elements of the electroweak current will now involve three variables, the nucleon ($`p^\mu `$) and photon ($`q^\mu `$) four-momenta and $`n^\mu `$, rather than the usual first two. Therefore in general more structures will appear in the form factor decomposition, some of whose coefficients (namely, those depending on $`n^\mu `$) will be unphysical. The advantage of this prescription is that these $`n^\mu `$ dependent coefficients can then be identified and subtracted from the physical form factors.
One can also compare the covariant light-cone formulation with the approach more commonly used in the literature for calculating light-cone matrix elements, namely using the “+” component of currents, with $`n^\mu =(1;0,0,-1)`$, so that $`t+z=0`$ defines the usual light-cone plane.
In Ref. the corrections to the electromagnetic vector form factors of the nucleon were calculated in a quark model. It was found that while the electric form factor does not suffer from any contamination from spurious form factors, the magnetic form factor receives quite large contributions. Following a similar philosophy, the corrections to the strange vector form factors of the nucleon were estimated in Ref. within a light-cone chiral cloud model. For intrinsically small quantities such as those involving strangeness, any corrections are likely to be relatively more important, and indeed the strange magnetic form factor was seen to change sign when the spurious contributions were subtracted. In addition to the strange vector matrix elements, those of the strange axial vector current are also of considerable interest, as they convey information on the spin distribution of strange quarks in the nucleon, which has been actively debated since the discovery of the EMC-spin effect a decade ago . In the following Section we examine the strange axial charge in the light-cone formulation, and estimate the contamination to this from the spurious form factors within the chiral cloud model.
## III Strange Axial Charge
Given that one of the main reasons for the focus on the strangeness content of the nucleon was the distribution of the proton’s spin amongst its constituents, it is clearly important to test whether previous estimates of strange contributions to the axial charge are reliable. In this Section we apply the prescription described above to the nucleon’s strange axial charge.
The strange axial current on the light-cone can be written covariantly as:
$`J_{\mu 5}^S`$ $`=`$ $`g_A^S\gamma _\mu \gamma _5+b_1^S{\displaystyle \frac{\overline{)}np_\mu }{np}}\gamma _5+\mathrm{},`$ (1)
where the “$`\mathrm{}`$” represent terms which do not contribute to axial matrix elements. The $`b_1^S`$ in this decomposition arises precisely because of the extra $`n`$ dependence introduced by the light-cone orientation. In an exact calculation it would be identically zero. In practice, however, when one uses Lorentz covariance violating approximations, such as restrictions to one-body currents, this $`n`$ dependent form factor can be non-zero.
Taking the forward matrix element of the current $`J_{\mu 5}^S`$, one can extract the axial charge $`g_A^S`$ by using the trace projection:
$`g_A^S`$ $`=`$ $`{\displaystyle \frac{1}{4(np)^2}}\mathrm{Tr}\left[𝒪_\mu \left((np)p^\mu \overline{)}n\gamma _5-M^2n^\mu \overline{)}n\gamma _5-(np)^2\gamma ^\mu \gamma _5\right)\right],`$ (2)
where $`𝒪_\mu =(\overline{)}p+M)J_{\mu 5}^S(\overline{)}p+M)/(4M^2)`$. Without correcting for the unphysical $`b_1^S`$ form factor, the axial charge would be:
$`\stackrel{~}{g}_A^S`$ $`=`$ $`g_A^S+b_1^S={\displaystyle \frac{M^2}{2(np)^2}}\mathrm{Tr}\left[𝒪_\mu n^\mu \overline{)}n\gamma _5\right].`$ (3)
To ascertain the importance of the difference between $`\stackrel{~}{g}_A^S`$ and the corrected $`g_A^S`$, one can use a simple chiral cloud model, in which the strangeness in the nucleon is assumed to reside in the kaon and hyperon components of the nucleon wave function. Because of the very different masses and momentum distributions of the kaon and hyperon, the overall strange and antistrange distributions will be quite different . In particular, in the valence approximation for the cloud, the polarised $`\overline{s}`$ distribution is expected to be zero, since the $`\overline{s}`$ resides entirely in the scalar kaon.
In the chiral cloud model the nucleon couples to a pseudoscalar kaon ($`K`$) and a spin-1/2 hyperon ($`Y`$) via a pseudoscalar $`i\gamma _5`$ interaction (the same results are also obtained with a pseudovector coupling). Extension of this analysis to spin-3/2 hyperons or strange vector mesons is straightforward, although beyond the scope of the present discussion (see below). Because the kaon has spin 0, the axial form factor receives contributions only from the $`\gamma ^{}\mathrm{\Lambda }`$ coupling, which can be written:
$`g_A^S`$ $`=`$ $`{\displaystyle \frac{g_{KNY}^2}{16\pi ^3}}{\displaystyle \int \frac{dy\,d^2𝐤_T}{y^2(1-y)}\frac{\mathcal{F}^2}{(\mathcal{M}^2-M^2)^2}\left(1-\frac{M_Y}{yM}\right)\left(k_T^2+M_Y(M_Y-yM)\right)},`$ (4)
where $`\mathcal{M}^2=(k_T^2+M_Y^2)/y+(k_T^2+m_K^2)/(1-y)`$ is the squared invariant mass of the intermediate state, and $`\mathcal{F}`$ parameterises the hyperon-meson-nucleon vertex.
The momentum dependence of the vertex function $`\mathcal{F}`$ can be calculated within the same model by dressing and renormalising the bare $`KNY`$ vertex by $`K`$ loops. However, since a detailed model description of the hadronic vertex is not the purpose of this paper, we shall instead follow the more phenomenological approach and parameterise the $`KNY`$ vertex by a simple function, such as a monopole, $`\mathcal{F}=(\mathrm{\Lambda }^2+M^2)/(\mathrm{\Lambda }^2+\mathcal{M}^2)`$. We shall comment below on the dependence of the strangeness distribution on the shape of the form factor.
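Equation (4) lends itself to direct numerical evaluation. The sketch below is illustrative only: it assumes the dominant $`K\mathrm{\Lambda }`$ configuration with $`g_{KN\mathrm{\Lambda }}^2/4\pi =14`$ (a typical phenomenological value, not fixed by the text) and the invariant-mass monopole form factor with $`\mathrm{\Lambda }=1`$ GeV, taking $`y`$ to be the light-cone momentum fraction carried by the hyperon, as implied by the expression for $`\mathcal{M}^2`$:

```python
import numpy as np
from scipy.integrate import dblquad

# Illustrative evaluation of Eq. (4) for the K-Lambda Fock state.
# Assumed inputs: g_{KN Lambda}^2/4pi = 14, monopole cutoff 1 GeV.
M, M_Y, m_K = 0.939, 1.116, 0.494   # nucleon, Lambda, kaon masses (GeV)
g2 = 14.0 * 4.0 * np.pi             # g_{KN Lambda}^2
Lc = 1.0                            # form factor cutoff (GeV)

def integrand(kT, y):
    # squared invariant mass of the K-Y intermediate state
    M2 = (kT**2 + M_Y**2) / y + (kT**2 + m_K**2) / (1.0 - y)
    F = (Lc**2 + M**2) / (Lc**2 + M2)          # monopole vertex function
    spin = (1.0 - M_Y / (y * M)) * (kT**2 + M_Y * (M_Y - y * M))
    # extra kT from d^2k_T = 2*pi*kT dkT (the 2*pi appears in the prefactor)
    return kT / (y**2 * (1.0 - y)) * F**2 / (M2 - M**2)**2 * spin

val, err = dblquad(integrand, 1e-4, 1.0 - 1e-4,       # y range
                   lambda y: 0.0, lambda y: 5.0)      # kT range (GeV)
gA_S = g2 / (16.0 * np.pi**3) * 2.0 * np.pi * val
print(f"g_A^S (illustrative) = {gA_S:.3f}")
```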
For the uncorrected strange axial charge, from Eq.(3) we have:
$`\stackrel{~}{g}_A^S`$ $`=`$ $`{\displaystyle \frac{g_{KNY}^2}{16\pi ^3}}{\displaystyle \int \frac{dy\,d^2𝐤_T}{y^2(1-y)}\frac{\mathcal{F}^2}{(\mathcal{M}^2-M^2)^2}\left(k_T^2+(M_Y-yM)^2\right)},`$ (5)
which agrees with the expressions obtained in Ref.. The results for the strange axial charge $`g_A^S`$ are shown in Fig.1 as a function of the cut-off mass $`\mathrm{\Lambda }`$. In practice, the $`K\mathrm{\Lambda }`$ configuration turns out to give by far the dominant contribution to $`g_A^S`$ if standard coupling constants and form factor cut-offs are used. Also shown in Fig.1 is the uncorrected charge $`\stackrel{~}{g}_A^S`$. The $`n`$-dependent form factor turns out to be rather large, and contaminates the “true” $`g_A^S`$ to such an extent as to produce the rather small $`\stackrel{~}{g}_A^S`$ value observed. The only empirical information available on the strange axial charge comes from the Brookhaven 734 experiment on elastic $`\nu p`$ and $`\overline{\nu }p`$ scattering . Unfortunately, the value of $`g_A^S`$ extracted from this experiment was found to be strongly correlated with the value of the cut-off mass, $`M_A`$, in the dipole axial vector form factor parameterisation. Varying $`M_A`$ between $`1.086\pm 0.015`$ GeV and $`1.012\pm 0.032`$ GeV, one can obtain anything between $`g_A^S=0`$ and $`-0.21\pm 0.10`$ , as indicated by the shaded region in Fig.1.
One can also compare the strange axial charge with the first moment, $`\mathrm{\Delta }s`$, of the polarised strange quark distribution measured in deep-inelastic scattering . A recent world-averaged value extracted in the $`\overline{\mathrm{MS}}`$ scheme at a scale of 10 GeV<sup>2</sup> is $`\mathrm{\Delta }s=-0.10\pm 0.04`$ , as indicated by the two long-dashed horizontal lines in Fig.1. Note that in the chiral cloud model the distribution $`\mathrm{\Delta }s`$ is given by a convolution of the $`y`$-integrand in Eqs.(4) and (5) with the polarised strange distribution in the hyperon . Since the latter is not a $`\delta `$-function, but has a non-trivial $`x`$-dependence, the resulting convolution would be expected to be smaller in magnitude than the strange axial charge in the model. The experimental value of $`\mathrm{\Delta }s`$ is nonetheless consistent with the calculated $`g_A^S`$ if a soft $`KNY`$ form factor is used.
To constrain the size of the $`KNY`$ form factor, which is essentially the only parameter in the chiral cloud model, one can compare the model predictions with the measured unpolarised $`s`$–$`\overline{s}`$ asymmetry. The possible differences between the $`s`$ and $`\overline{s}`$ quark distributions in the nucleon were investigated by the CCFR Collaboration via charm production in $`\nu `$ and $`\overline{\nu }`$ deep-inelastic scattering . Such differences were first predicted in the meson cloud framework more than 10 years ago by Signal and Thomas . The $`x`$-dependence of the calculated $`s-\overline{s}`$ distribution is shown in Fig.2 for a form factor cut-off mass of $`\mathrm{\Lambda }=1`$ GeV (which gives an average multiplicity of kaons in the nucleon of $`\approx 6\%`$). The shaded region represents the data from Ref.. Also shown for comparison is the result with a $`t`$-dependent monopole form factor, as used in earlier analyses, $`\mathcal{F}=(\mathrm{\Lambda }^2-m_K^2)/(\mathrm{\Lambda }^2-t)`$, where $`t=(p_N-p_Y)^2`$. Notice that the final shape and sign of $`s-\overline{s}`$ are quite sensitive to the shape of the $`KNY`$ form factor. On the other hand, it is known that form factors which depend solely on the $`t`$ variable violate momentum conservation when one considers scattering from both the meson and hyperon components of the nucleon . More precise measurements of the strange asymmetry would provide a valuable test of the dynamics of the $`KNY`$ interaction.
One can also compare the predictions of the model with the absolute values of the extracted $`s`$ and $`\overline{s}`$ distributions, as done in Ref. for example. As in Fig.2, one finds that for a hard hadronic form factor the meson cloud contributions overestimate the data, especially at large $`x`$ . The problem with comparing to the total $`s`$ and $`\overline{s}`$ distributions, however, as distinct from their difference, is that the total distributions contain singlet contributions in addition to the non-singlet. Modeling the former in general requires the (symmetric) perturbative sea arising from $`gs\overline{s}`$, as well as additional input for the structure of the bare nucleon distributions, uncertainties in which consequently make any real predictions of the model more elusive. For this reason a comparison with the non-singlet difference $`s\overline{s}`$, in which the perturbative contributions cancel, is more meaningful for the meson cloud model.
Finally, before ending this Section, we should note several concerns which have been raised in the literature regarding the implementation of loops in chiral models of the nucleon. In particular, it has been pointed out that truncations of the Fock space at one-loop order violate not only the Lorentz covariance discussed above but also unitarity . While this is true in principle, the region where rescattering should become an issue lies above the production threshold, which in practice is at rather high momenta compared with those most relevant to the current process. Furthermore, the chiral cloud model discussed here rests on a perturbative treatment of the effective hadronic Lagrangian, so that provided the form factors used at the hadronic vertices are not very hard, one would expect a one-loop calculation for the most part to give the dominant contribution. If two-loop contributions were found to be large compared with the leading ones, the perturbative formulation of the chiral cloud itself would need to be reconsidered. Recent work based on coherent-state techniques which include effects of higher-order, multi-pion Fock states for models such as the cloudy bag indicates that for relatively small meson densities a one-loop, perturbative treatment comes very close to the exact result. The conclusion is therefore that so long as the hadronic vertices are relatively soft, with $`\mathrm{\Lambda }\sim 1`$ GeV, the one-loop result should give a reasonable estimate of cloud effects.
Concerns have also been raised about the omission of contributions from higher-mass intermediate states in the meson–baryon fluctuations . While the effects of heavier baryons such as the $`\mathrm{\Sigma }^{}`$ have been shown to be negligible , it has been argued that strange vector meson contributions are of the same order of magnitude as the $`K`$ . In the analysis of Ref. a rather hard $`K^{}N\mathrm{\Lambda }`$ form factor was used, however, with a cut-off mass in the monopole parametrisation of $`\mathrm{\Lambda }\approx 2.2`$ GeV. This is to be compared with $`\mathrm{\Lambda }\approx 1.2`$ GeV for the $`KN\mathrm{\Lambda }`$ vertex . This relatively large value for the $`K^{}N\mathrm{\Lambda }`$ cut-off was taken from the hyperon–nucleon scattering analysis of Ref., although more recent work suggests that a value for both the $`K^{}N\mathrm{\Lambda }`$ and $`KN\mathrm{\Lambda }`$ form factor cut-offs of $`\sim 1`$ GeV is more appropriate. Such a smaller value would significantly reduce the $`K^{}`$ contribution. A re-evaluation of the strange vector meson effects with softer form factors would therefore be very useful before definitive conclusions about the reliability of lowest-order one-loop calculations can be made.
## IV Strange Electromagnetic Form Factors
As well as understanding polarised strangeness in the nucleon, there has also been considerable effort directed at measuring matrix elements of the electromagnetic vector currents. The first experimental result on the strange magnetic form factor of the proton was obtained by the SAMPLE Collaboration at MIT-Bates in 1997 in parity-violating electron scattering at backward angles, at $`Q^2=0.1`$ GeV<sup>2</sup>. While plagued with large errors, the data did seem to favour a relatively small, and possibly positive, value of the strange magnetic moment. More recently, the HAPPEX Collaboration at Jefferson Lab performed a similar experiment, although at forward angles, measuring the left-right asymmetry $`A`$ at $`Q^2=0.48`$ GeV<sup>2</sup>, where:
$`A`$ $`=`$ $`{\displaystyle \frac{\sigma _R-\sigma _L}{\sigma _R+\sigma _L}}=\left({\displaystyle \frac{G_F}{\pi \alpha _{em}\sqrt{2}}}\right){\displaystyle \frac{1}{\epsilon G_E^2+\tau G_M^2}}`$ (7)
$`\times \left(\epsilon G_EG_E^{(Z)}+\tau G_MG_M^{(Z)}-\frac{1}{2}(1-4\mathrm{sin}^2\theta _W)\epsilon ^{}G_MG_A^{(Z)}\right),`$
with $`\epsilon =\left(1+2(1+\tau )\mathrm{tan}^2(\theta /2)\right)^{-1}`$, $`\tau =Q^2/4M^2`$, and $`ϵ^{}=\sqrt{\tau (1+\tau )(1-\epsilon ^2)}`$ (the $`Q^2`$ dependence in all form factors is implicit).
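For orientation, the kinematic factor $`r=(\tau /\epsilon )G_M/G_E`$ that enters the extracted combination below can be reproduced numerically; the beam energy and scattering angle are not quoted in the text, so the sketch assumes a nominal HAPPEX angle of about 12.3° and a dipole-like ratio $`G_M/G_E\approx \mu _p`$:

```python
import math

# Kinematic factor r = (tau/eps)*G_M/G_E at the HAPPEX point (Q^2 = 0.48 GeV^2).
# Assumed values (not quoted in the text): scattering angle ~12.3 deg and a
# dipole-like ratio G_M/G_E ~ mu_p, roughly independent of Q^2.
Q2, M, mu_p = 0.48, 0.938, 2.793
theta = math.radians(12.3)

tau = Q2 / (4.0 * M**2)
eps = 1.0 / (1.0 + 2.0 * (1.0 + tau) * math.tan(theta / 2.0) ** 2)
r = (tau / eps) * mu_p
print(f"tau = {tau:.3f}, eps = {eps:.3f}, r = {r:.2f}")   # r ~ 0.4
```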
Using isospin symmetry, one can relate the electric and magnetic form factors for photon and $`Z`$-boson exchange via:
$`G_{E,M}^{(Z)}`$ $`=`$ $`{\displaystyle \frac{1}{4}}G_{E,M}^{(I=1)}-\mathrm{sin}^2\theta _WG_{E,M}-{\displaystyle \frac{1}{4}}G_{E,M}^S,`$ (8)
where $`G_{E,M}^{(I=1)}`$ is the isovector form factor (difference between the proton and neutron). For the $`G_{E,M}`$ form factors we use the parameterisation from Ref.. The axial form factor for $`Z`$-boson exchange is given by $`G_A^{(Z)}=\frac{1}{2}(1+R_A)G_A+\frac{1}{2}G_A^S`$, where $`R_A`$ is an axial radiative correction, and the axial form factors are known phenomenologically .
In Fig.3 we plot the relative difference between the measured asymmetry, $`A`$, and that which would be expected for zero strangeness, $`A_0`$ — namely, $`(A-A_0)/A`$. The solid curve corresponds to the light-cone chiral cloud model with a cut-off mass $`\mathrm{\Lambda }=1`$ GeV for the kaon–hyperon vertex. From the measured HAPPEX asymmetry, the combination $`G_E^S+rG_M^S`$ was also extracted at an average $`Q^2`$ of 0.48 GeV<sup>2</sup>, where $`r=(\tau /\epsilon )G_M/G_E\approx 0.4`$ for the HAPPEX kinematics. This is shown in Fig.4, compared with the chiral cloud prediction with $`\mathrm{\Lambda }=1`$ GeV. Both the magnetic (dotted) and electric (dashed) contributions are separately positive, resulting in a small and positive value, consistent with the experiment. Note that exactly the same parameters were used in Figs.3 and 4 as in the fit in Ref. to the SAMPLE data on $`G_M^S`$. Therefore the two form factor measurements, as well as the strange axial charge and the strange–antistrange asymmetry, seem to be consistently correlated within the chiral cloud model with soft form factors.
## V Conclusion
In this note we have pointed out the existence of corrections to the strange axial charge of the nucleon which arise in light-cone models based on the impulse approximation, or one-body operators, in which Lorentz covariance is not preserved. In the chiral cloud model, where the strangeness content of the nucleon is localised to the kaon–hyperon components of the nucleon wave function, these corrections are an order of magnitude larger than the uncorrected, Lorentz-violating results, and compatible with the sign and magnitude of the empirical $`g_A^S`$.
With the same model parameters, namely a soft kaon–hyperon–nucleon form factor (with a kaon probability in the nucleon of $`\approx 6\%`$), one also finds good agreement with the strange electromagnetic form factors measured in recent experiments at low $`Q^2`$ at MIT-Bates and Jefferson Lab . The results are also compatible with data on the strange–antistrange asymmetry from the CCFR experiments .
One should of course mention some of the shortcomings of the simple one-loop meson cloud model treatment, which may qualify some of the quantitative predictions of the model. One of these is the problem of gauge invariance, which in earlier, instant-form approaches has been partially circumvented with the inclusion of contact, or so-called seagull, terms . Unfortunately, these are not unique , and to date one does not have control over the size of these contributions. Other potential contributions may arise from heavier meson Fock states (such as $`K^{}`$) or multi-meson configurations. These will be more quantitatively analysed in future work, but our previous experience suggests that their effects are unlikely to be dramatic in a perturbative treatment.
More theoretical work is obviously needed for a deeper understanding of the dynamics of strangeness generation in the nucleon. What seems to be becoming clearer, however, from the accumulating empirical evidence is that the importance of non-perturbative strangeness in the nucleon is likely to be relatively minimal. Future data from Jefferson Lab on the strange electromagnetic form factors, $`G_{E,M}^S`$, over a range of $`Q^2`$ should help to clarify this further.
###### Acknowledgements.
We would like to thank C. Boros, M.J. Ramsey-Musolf, F.M. Steffens and A.W. Thomas for helpful comments and discussions, and O. Melnitchouk for a careful reading of the manuscript. M.M. is partially supported by CNPq of Brazil.
# Orientational relaxation in Brownian rotors with frustrated interactions on a square lattice
## I Introduction
The last decade or so has witnessed significant advances in our understanding of the underlying mechanism for the slow dynamics of supercooled liquids approaching the glass transition. The development of mode-coupling theory of supercooled liquids and extensive experiments and computer simulations have played crucial roles in such advances. Some efforts have also been devoted to devising model systems (even though somewhat artificial) which show glassy behavior similar to that of supercooled liquids. One line of research along this direction is to find (lattice) model systems with no quenched disorder but some intrinsic frustration built into the model, which may exhibit glassy relaxations.
One can imagine that there may exist a common microscopic mechanism which underlies the observed similarities in the relaxations of model systems and real supercooled liquids. This possibility is made more plausible by the universal scaling property observed in the dielectric susceptibilities of a variety of supercooled liquids and some plastic (glassy) crystals. In this work, we address the question of this possible common mechanism by investigating the equilibrium orientational relaxation of planar Brownian rotors whose interaction is prescribed by that of uniformly frustrated XY (UFXY) models with dense frustration, a prime example of non-randomly frustrated systems characterized by a complex degeneracy of ground states and many metastable states.
While a recent simulation by the present authors deals with the relaxation of the vortex charge density for a purely dissipative dynamics, here we examine directly the orientational relaxation with finite rotational inertia, which offers a more transparent view of the origin of the observed slow relaxation. Also, due to the one-dimensional nature of the phase of the planar rotors, it is convenient to probe the properties of the angular motions of the rotors of the system. We find that, by including a phenomenological rotational inertia in the dynamic equation for the rotors, the orientational correlation exhibits a two-step relaxation, which is analogous to the (fast) $`\beta `$ and $`\alpha `$ relaxations of supercooled liquids. The mean square angular displacement (MSAD) exhibits three-stage behavior, i.e., early-time ballistic, intermediate sub-diffusive, and late-time diffusive regimes, which is argued to be consistent with the picture of the cage effect and long-time activated dynamics for the motion of the rotors. It is shown that there exist two dynamically distinct regimes: a high temperature regime where the dynamics is governed by a temperature-independent activation energy, and a low temperature regime, in which the activation energy increases with decreasing temperature, which is interpreted as arising from the complex energy landscapes probed by the system at low temperature.
## II Dynamic model and Simulation method
We consider the following Langevin dynamics for a collection of planar rotors on a square lattice
$$I\dot{\omega }_i(t)+\gamma \omega _i(t)=-\frac{\partial V(\{\theta \})}{\partial \theta _i(t)}+\eta _i(t)$$
(1)
where $`I`$ is the moment of inertia, $`\omega _i(t)\equiv \dot{\theta }_i(t)`$ the angular velocity of the rotor at site $`i`$, $`\gamma `$ the damping constant, and $`\eta _i(t)`$ the thermal noise. Equation (1) describes the Brownian motion of rotors subject to the interaction potential energy $`V(\{\theta \})`$. The thermal noise $`\eta _i(t)`$ is a gaussian random variable with
$`<\eta _i(t)>`$ $`=`$ $`0`$ (2)
$`<\eta _i(t)\eta _j(t^{\prime })>`$ $`=`$ $`2\gamma T\delta _{ij}\delta (t-t^{\prime })`$ (3)
where the Boltzmann constant $`k_B`$ is set equal to unity. The variance of the noise in (3) ensures that the system at temperature $`T`$ evolves toward the equilibrium state whose properties are governed by the Boltzmann distribution $`\mathrm{exp}(-E(\{\theta \},\{\omega \})/T)`$ where the energy $`E(\{\theta \},\{\omega \})`$ is given by $`E(\{\theta \},\{\omega \})=I\sum _i\omega _i^2/2+V(\{\theta \})`$.
Here we chose the potential energy $`V(\{\theta \})`$ to be that of the two-dimensional UFXY model on a square lattice, which takes the form
$$V(\{\theta \})=-J\sum _{(ij)}\mathrm{cos}(\theta _i-\theta _j-A_{ij})$$
(4)
where $`J`$ is the coupling constant and $`(ij)`$ denotes nearest neighbor pairs. The bond angles $`A_{ij}`$ satisfy the constraint
$$\sum _{(i,j)\in P}A_{ij}=2\pi f$$
(5)
where the sum is over the bonds $`(i,j)`$ belonging to the unit plaquette $`P`$; the bond angles cause competing interactions (frustration) between the rotors. Here, $`f`$ is called the frustration parameter of the system.
A convenient choice for $`A_{ij}`$ is the Landau gauge, which is given by $`A_{ij}=0`$ for every horizontal bond and $`A_{ij}=\pm 2\pi fx_i`$ for the vertical bond directed upward (downward), with $`x_i`$ being the $`x`$-coordinate of the site $`i`$. It can be readily checked that this choice of the bond angles obeys the condition (5). Due to the invariance of the potential energy (4) under $`f\rightarrow f+1`$ and $`f\rightarrow -f`$, we need to consider the values of $`f`$ only over the range $`[0,1/2]`$. A physical realization of this model can be found in the two dimensional square array of Josephson junctions under a uniform perpendicular magnetic field. In this situation, the bond angle $`A_{ij}`$ is identified with the line integral of the vector potential $`𝐀`$ of the transverse magnetic field: $`A_{ij}=(2\pi /\mathrm{\Phi }_0)\int _i^j𝐀𝑑𝐥`$ where $`\mathrm{\Phi }_0`$ is the flux quantum $`\mathrm{\Phi }_0\equiv hc/2e`$ per unit plaquette. With this identification the strength of magnetic field $`B`$ is given by $`Ba^2=f\mathrm{\Phi }_0`$ where $`a`$ is the lattice constant.
The UFXY model can be mapped onto a lattice Coulomb gas with charges of magnitude $`(n-f)`$, $`n=0,\pm 1,\pm 2,\mathrm{\dots }`$, where charges correspond to phase-vortices with suitably defined vorticity around the plaquettes. The lowest excitation consists of charges with magnitudes $`1-f`$ and $`f`$, respectively. The charge neutrality condition then implies that the number density of positive charges is equal to $`f`$. For the case of $`f=0`$, the well-known Kosterlitz-Thouless transition occurs via vortex-antivortex unbinding at a finite temperature. Except for this case of the unfrustrated XY model, the equilibrium nature and associated phase transitions of these systems are not very well understood, even for the next simplest case of $`f=1/2`$, the so-called fully frustrated XY model. For example, the ground state configurations for the case of general $`f=p/q`$ ($`p`$ and $`q`$ relatively prime) are not known except for some low order rational values of $`f`$, such as $`f=1/2`$, $`1/3`$, $`2/5`$, $`3/8`$, etc, where staircase-type ground state configurations are known analytically.
As $`q`$ becomes large (the limit of irrational frustration), due to the complex degeneracy of the system and the long equilibration time, it is quite a difficult task to analyze the nature of the low temperature phase of the system. In spite of a recent claim by Denniston and Tang that there exists a first-order transition near $`T_c\approx 0.13J`$ in the case of $`f=1-g`$ ($`g`$ being the golden-mean ratio $`g=(\sqrt{5}-1)/2\approx 0.618`$), it is fair to say that the low temperature phase is not completely understood yet. On the other hand, since it is clear that many metastable states are possible due to the dense frustration, one can expect that the Brownian dynamics (1) with the potential energy (4) may generate a slow relaxation where trapping of the configurations in deep metastable minima and thermal activation across the potential barriers play a crucial role. Note that there is no intrinsic disorder in the present system, which distinguishes it from a spin glass system, where both intrinsic disorder and frustration are considered to be essential.
With the potential energy (4), the Langevin equation is explicitly given by
$$I\dot{\omega }_i(t)+\gamma \omega _i(t)=-J\sum _j\mathrm{sin}(\theta _i-\theta _j-A_{ij})+\eta _i(t)$$
(6)
We integrate equation (6) in time, starting from random initial conditions $`\{\theta _i(0)\}`$ and $`\{\omega _i(0)\}`$, using an Euler algorithm on a square lattice of linear size $`N=34`$. In our simulations, we used $`I=1.5`$, $`\gamma =1`$, $`J=1`$ and $`f=13/34`$, which is a Fibonacci approximant to $`f=1-g`$. Periodic boundary conditions are employed in both spatial directions. The results were averaged over $`150`$–$`1000`$ different random initial configurations, depending on the quenching temperature. As for the integration time step, we used $`dt=0.05`$ in dimensionless time units. No essential difference could be found in the results when compared with those obtained by using $`dt=0.01`$.
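To make the integration scheme concrete, the following minimal sketch (our own illustration in Python/NumPy, not the authors' code) integrates Eq. (6) with an Euler step in the Landau gauge; the random seed and the choice of initial conditions are assumptions.

```python
import numpy as np

# Parameter values quoted in the text; seed and initial state are our choices.
N, f, J, I, gamma, T, dt = 34, 13.0/34.0, 1.0, 1.5, 1.0, 0.3, 0.05

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0*np.pi, (N, N))    # random initial phases
omega = rng.normal(0.0, np.sqrt(T/I), (N, N))  # equipartition initial velocities

# Landau gauge: A_ij = 0 on horizontal bonds, +/- 2*pi*f*x_i on vertical bonds.
# f = 13/34 makes f*N an integer, so periodic boundaries stay gauge-consistent.
A = 2.0*np.pi*f*np.arange(N)[:, None]*np.ones((N, N))

def torque(theta):
    """-dV/dtheta_i for the UFXY potential (4) with periodic boundaries."""
    out = np.zeros_like(theta)
    for shift, axis, a in [(-1, 0, 0.0), (1, 0, 0.0), (-1, 1, A), (1, 1, -A)]:
        out -= J*np.sin(theta - np.roll(theta, shift, axis=axis) - a)
    return out

def euler_step(theta, omega):
    """One Euler step of the Langevin equation (6)."""
    noise = rng.normal(0.0, np.sqrt(2.0*gamma*T/dt), theta.shape)
    omega = omega + (dt/I)*(torque(theta) - gamma*omega + noise)
    theta = theta + dt*omega   # theta is kept unwrapped for the MSAD below
    return theta, omega
```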
## III Results and Discussions
In order to probe the orientational relaxation of the system we first computed the on-site auto-correlation function for the planar spins
$$C_R(t)=\frac{1}{N^2}<\sum _{i=1}^{N^2}\mathrm{cos}(\theta _i(0)-\theta _i(t))>$$
(7)
where the bracket $`<\mathrm{\dots }>`$ in (7) represents an average over different random initial configurations. In this work we focus only on the lowest order correlation, even though one may also measure higher order correlations, as was done in recent molecular dynamics simulations.
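A direct transcription of Eq. (7) might look as follows (our own sketch; `runs` is a hypothetical list of stored angle trajectories):

```python
import numpy as np

def c_r(theta_t, theta_0):
    """Site average of cos(theta_i(0) - theta_i(t)), the summand of Eq. (7)."""
    return np.mean(np.cos(theta_0 - theta_t))

# Average over runs started from independent random initial configurations:
# C_R = np.mean([c_r(run[k], run[0]) for run in runs])
```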
Shown in Fig. 1 is the on-site auto-correlation function $`C_R(t)`$. The relaxation continuously slows down as the temperature is lowered. In order to characterize the slowing down of the relaxation, one can define a characteristic relaxation time $`\tau _R(T)`$ via $`C_R(\tau _R)=1/e`$. The temperature dependence of $`\tau _R(T)`$ is shown in the inset of Fig. 1. It exhibits an Arrhenius behavior at high temperatures, while at low temperatures ($`T<0.20`$) it shows a non-Arrhenius behavior, which can be well fitted by the Vogel-Tamman-Fulcher form $`\tau _R(T)=\tau _0\mathrm{exp}[DT_0/(T-T_0)]`$ with $`\tau _0\approx 9.92`$, $`T_0\approx 0.08`$, and $`D\approx 3.58`$. Similar non-Arrhenius behavior was observed in the vorticity relaxation as well.
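The Vogel-Tamman-Fulcher fit quoted above can be reproduced with a standard least-squares routine; a sketch (ours, using scipy, with the quoted values as the starting guess) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def vtf(T, tau0, D, T0):
    """Vogel-Tamman-Fulcher form tau_R(T) = tau0 * exp(D*T0/(T - T0))."""
    return tau0*np.exp(D*T0/(T - T0))

# T_low and tau_low would hold the measured low-temperature relaxation times:
# popt, pcov = curve_fit(vtf, T_low, tau_low, p0=[9.92, 3.58, 0.08])
```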
An interesting feature of the rotational relaxation is that it exhibits a two-step relaxation: a very fast relaxation (up to $`t\approx 3`$ for $`T=0.13J`$, the lowest temperature probed) and a slow relaxation following the fast one. The earliest part of the fast relaxation is expected to be well described by the free rotation of the rotors, $`I\dot{\omega }_i(t)+\gamma \omega _i(t)=0`$. For the time range where $`t\ll I`$, the inertial term is dominant and hence $`\theta _i(t)-\theta _i(0)\approx \omega _i(0)t`$. It is then easy to show that the relaxation is given by $`C_R(t)\approx 1-(T/2I)t^2`$ using the equipartition theorem $`<\omega ^2>=T/I`$.
The long-time part of the slow relaxation can be well fitted by the stretched exponential form $`C_R(t)=C_0\mathrm{exp}[-C_1(t/\tau _R)^\beta ]`$ (with $`C_1=1+\mathrm{ln}C_0`$ due to the definition of $`\tau _R`$), as shown in Fig. 2. We find that the exponent $`\beta `$ varies with temperature: it decreases as the temperature is lowered, as shown in the inset of Fig. 3. It is interesting to note that at low temperatures ($`T\lesssim 0.2`$) the short time part of the slow relaxation deviates from the stretched exponential fit, and this deviation extends over a longer time window as the temperature is lowered. We have fitted this region with a power law decay known as the von Schweidler relaxation, $`C_R(t)=C_2-C_3t^b`$, where the exponent $`b`$ also varies with temperature (see the inset of Fig. 3). We now examine the scaling behavior of the rotational relaxation. Shown in Fig. 3 is $`C_R(t)`$ versus the rescaled time $`t/\tau _R(T)`$. Obviously the earliest part of the relaxation does not obey the scaling, since a faster time scale (the inverse of the inertia, which is temperature independent) is involved in this regime. We also observe that the time-temperature superposition of the relaxation function is systematically violated in the late (slow) part of the relaxation, especially at low temperatures. This breakdown of the scaling is consistent with the fact that the two exponents $`b`$ and $`\beta `$ vary with temperature.
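For reference, the two fitting forms used in this paragraph can be written as follows (our sketch; the constraint $`C_1=1+\mathrm{ln}C_0`$ is built in so that $`C_R(\tau _R)=1/e`$):

```python
import numpy as np

def stretched_exp(t, C0, tau_R, beta):
    """Late-time fit C_R(t) = C0 * exp[-C1*(t/tau_R)**beta], C1 = 1 + ln(C0)."""
    C1 = 1.0 + np.log(C0)
    return C0*np.exp(-C1*(t/tau_R)**beta)

def von_schweidler(t, C2, C3, b):
    """Intermediate-time power-law decay C_R(t) = C2 - C3*t**b."""
    return C2 - C3*t**b
```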
It would be interesting to examine the response function corresponding to the orientational correlation function $`C_R(t)`$. The response function in the frequency ($`\nu `$) domain can be defined (via the fluctuation-dissipation theorem) as $`\chi ^{^{\prime \prime }}(\nu )=2\pi \nu \int _0^{\infty }𝑑t\mathrm{cos}(2\pi \nu t)C_R(t)`$. Fig. 4 shows $`\chi ^{^{\prime \prime }}(\nu )`$ versus $`\nu `$ in a semi-log plot. We see that there exist two peaks, the low-frequency $`\alpha `$ peak and the high-frequency (microscopic) peak. As the temperature is lowered, the $`\alpha `$-peak moves to lower frequency, indicating the slowing-down of the reorientational relaxation. At the same time, the maximum value of $`\chi ^{^{\prime \prime }}(\nu )`$, which is analogous to the Debye-Waller factor, continuously decreases, and the $`\alpha `$-spectrum becomes broadened as the temperature is lowered. We also note that as the temperature is lowered a minimum of the spectrum slowly develops. All these features in the frequency spectrum of the orientational relaxation are qualitatively quite similar to the recent broad-band dielectric susceptibility measurements of supercooled liquids. According to the recent dielectric susceptibility data, the $`\alpha `$-spectrum of supercooled liquids consists of two power law regimes on the right-hand side of the $`\alpha `$-peak. The first power law relaxation clearly corresponds to the stretched exponential relaxation in the time domain. In addition to this, another power law regime is observed on the high frequency side of the $`\alpha `$-spectrum. It is quite interesting that a similar power law relaxation is also observed in the high frequency part of the magnetic susceptibility of a spin glass system. Although we cannot resolve the high frequency part of the $`\alpha `$-spectrum of the present orientational relaxation in greater detail, due to the poor statistics of the spectrum at low temperatures, we believe that our orientational relaxation spectrum also exhibits similar two-power-law regimes on the right hand side of the $`\alpha `$-peak. The reason is that, even though the long time part of $`C_R(t)`$ can be well fitted by a stretched exponential function, the regime of validity of the stretched exponential form is limited to the late time regime only and does not extend to the intermediate time regime, where the so-called von Schweidler relaxation (with a different exponent $`b`$) fits the relaxation function better. In the frequency domain this will correspond to two-power-law behavior.
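Numerically, the cosine transform defining $`\chi ^{^{\prime \prime }}(\nu )`$ can be approximated directly on the sampled correlation function; a sketch (ours; the trapezoidal rule and the truncation at the last sampled time are assumptions):

```python
import numpy as np

def chi_imag(nu, t, C):
    """chi''(nu) = 2*pi*nu * int_0^inf cos(2*pi*nu*t) C_R(t) dt, truncated at t[-1]."""
    return 2.0*np.pi*nu*np.trapz(np.cos(2.0*np.pi*nu*t)*C, t)

# spectrum = [chi_imag(nu, t_grid, C_grid) for nu in nu_grid]
```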
In order to investigate the self-diffusion of the rotors, we measured the mean squared angular displacement (MSAD)
$$<(\mathrm{\Delta }\theta (t))^2>=\frac{1}{N^2}\sum _{i=1}^{N^2}(\theta _i(t)-\theta _i(0))^2$$
(8)
where the phase angle $`\theta _i(t)`$ is unbounded. Fig. 5 shows a log-log plot of the MSAD $`<(\mathrm{\Delta }\theta (t))^2>`$ versus time $`t`$. Over the entire temperature range probed, we see that $`<(\mathrm{\Delta }\theta (t))^2>\sim t^2`$ in the early time regime, which may be called the ballistic regime. It is expected that each rotor makes a free rotation in this time regime. The MSAD in the ballistic regime is then given by $`<(\mathrm{\Delta }\theta (t))^2>\approx (T/I)t^2`$. This regime corresponds to the earliest part of the relaxation, $`C_R(t)\approx 1-(T/2I)t^2`$. For high temperatures this ballistic regime crosses over directly to the diffusive regime, where $`<(\mathrm{\Delta }\theta (t))^2>\sim t`$. But as the temperature is lowered, a sub-diffusive regime characterized by $`<(\mathrm{\Delta }\theta (t))^2>\sim t^\varphi `$ with $`\varphi <1`$ (for example, $`\varphi \approx 0.3`$ for $`T=0.13J`$) starts to appear at intermediate times and extends over more than two decades of time at the lowest temperature probed ($`T=0.13J`$). The sub-diffusive regime sets in at the same time, $`t\approx 2`$, for all temperatures. In this regime the rotational motion is significantly hindered. This can be seen directly in Fig. 6, which shows the angular displacements $`\mathrm{\Delta }\theta _i(t)\equiv \theta _i(t)-\theta _i(0)`$ at some representative sites at $`T=0.15J`$. We clearly see from this figure that for all these phase angles the rotational motion looks almost frozen for more than a few thousand time units. This strongly indicates that the system is stuck in a particular configuration among many possible metastable states. Each rotor then executes only a local vibrational motion, which corresponds to the caging in the dynamics of real supercooled liquids. At longer time scales, however, the local rotors can execute full rotations via thermal activation across the potential barriers, showing occasional abrupt rotational motions, as shown in Fig. 6. Similar jump motions have been observed in MD simulations of soft-sphere mixtures, binary Lennard-Jones systems, and the colloidal glass. Also, neighboring rotors can execute collective rotations, thereby slowly rearranging the whole phase configuration. This stage corresponds to the slow part of $`C_R(t)`$. This entire time evolution of the self rotational motion is qualitatively the same as that observed in MD simulations of the orientational relaxation of molecular supercooled liquids.
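Because $`\theta _i(t)`$ must be unbounded here, the simulated angles have to be accumulated without wrapping back into $`[0,2\pi )`$; Eq. (8) is then evaluated as, schematically (our sketch):

```python
import numpy as np

def msad(theta_t, theta_0):
    """Mean squared angular displacement, Eq. (8); theta must be unwrapped."""
    d = theta_t - theta_0
    return np.mean(d*d)
```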
The rotational diffusion constant $`D_R(T)`$ can be obtained from the slope of the MSAD versus $`t`$ in the long time limit, where the MSAD exhibits diffusive behavior, $`<(\mathrm{\Delta }\theta (t))^2>=2D_R(T)t`$. As shown in Fig. 7, at high temperatures the rotational diffusion constant exhibits an Arrhenius behavior, which is well fitted by $`D_R(T)=D_0\mathrm{exp}(-\mathrm{\Delta }E/T)`$ with $`D_0\approx 0.68`$ and the temperature independent activation energy $`\mathrm{\Delta }E\approx 0.87J`$. As the temperature is lowered, however, $`D_R(T)`$ shows a strong deviation from the Arrhenius behavior. This behavior implies that the long time dynamics in the high temperature regime is governed by activation barriers whose average height does not depend on temperature. In the low temperature regime, the rotors explore deeper valleys in the potential energy landscape, whose depth increases as the temperature decreases, giving rise to the non-Arrhenius behavior of the relaxation time.
It was observed in some experiments on supercooled liquids that while both translational and rotational diffusion constants are proportional to the inverse of the viscosity at high temperatures, the decrease of the translational diffusion constant is less dramatic than the inverse of the viscosity at low temperatures. The rotational diffusion constant, on the other hand, is still proportional to the inverse of the viscosity at low temperatures down to the glass transition. This relative enhancement of the translational self-diffusion is also revealed in recent simulations of supercooled liquids and of lattice model systems. Here we compared the temperature dependences of the two time scales $`1/D_R(T)`$ and $`\tau _R(T)`$. Shown in the inset of Fig. 7 is a plot of $`D_R(T)\tau _R(T)`$ versus $`T`$. Since the product $`D_R(T)\tau _R(T)`$ in the plot is measured to be nearly constant down to $`T=0.20J`$, the two time scales are proportional to each other, i.e., $`\tau _R(T)\propto D_R(T)^{-1}`$, over this range. The data points below $`0.20J`$ tend to deviate from this proportionality, indicating a more rapid decrease (rather than enhancement) of the rotational diffusion constant. However, it is not clear to us whether this anomalous behavior is a genuine feature of the present model or not.
We have also measured the normalized angular velocity auto-correlation function (AVCF)
$$C_{AV}(t)=\frac{<\sum _{i=1}^{N^2}\omega _i(0)\omega _i(t)>}{<\sum _{i=1}^{N^2}\omega _i^2(0)>}.$$
(9)
In the absence of interactions between the rotors, $`C_{AV}(t)`$ can be easily obtained as $`C_{AV}(t)=\mathrm{exp}(-\gamma t/I)`$. With interactions, as shown in Fig. 8, the AVCF shows a strongly damped oscillatory motion. As the temperature is lowered, the amplitude of the oscillation becomes enhanced. This behavior strongly indicates that the rotors execute angular rattlings in ‘cages’.
For a purely gaussian distribution of the angular displacements, it is easy to show that the rotational correlation function $`C_R(t)`$ can be expressed in terms of the mean square angular displacement $`<(\mathrm{\Delta }\theta (t))^2>`$ as $`C_R^{(G)}(t)=\mathrm{exp}(-<(\mathrm{\Delta }\theta (t))^2>/2)`$. Shown in Fig. 9 is a comparison of the rotational correlation function $`C_R(t)`$ and its gaussian approximation $`C_R^{(G)}(t)`$. We find that $`C_R(t)`$ is in good agreement with the gaussian approximation in the early time regime, whereas it shows a considerable deviation from it in the late time regime. In order to characterize the non-gaussian nature of the distribution of displacements, the non-gaussian parameter has often been used in simulations of supercooled liquids. Here we measure the same quantity for the angular displacements, which is defined as
$$\alpha _2(t)=\frac{1}{3}\frac{<(\mathrm{\Delta }\theta (t))^4>}{<(\mathrm{\Delta }\theta (t))^2>^2}-1$$
(10)
where the factor $`1/3`$ comes from the one-dimensional nature of the motion of the rotors. As shown in Fig. 10, $`\alpha _2(t)`$ exhibits three time regimes of distinct behavior, as does the MSAD. It almost vanishes in the ballistic regime, then rapidly increases toward its maximum in the intermediate time regime, and finally decreases again in the long time regime. This temporal behavior is qualitatively the same as that observed in some MD simulations.
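The non-gaussian parameter of Eq. (10) follows from the same unwrapped displacements (our sketch; site and run averages stand in for the ensemble average):

```python
import numpy as np

def alpha_2(theta_t, theta_0):
    """Non-gaussian parameter, Eq. (10); the 1/3 reflects the 1D angular motion."""
    d2 = (theta_t - theta_0)**2
    return np.mean(d2*d2)/(3.0*np.mean(d2)**2) - 1.0
```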
As the temperature is lowered, the maximum value of $`\alpha _2(t)`$ rapidly increases, and at the same time the time regime where $`\alpha _2(t)`$ increases is extended, indicating the strongly non-gaussian nature of the rotational motion in this regime. This regime corresponds to the sub-diffusive regime in the time dependence of the MSAD shown in Fig. 5. It is expected that $`\alpha _2(t)`$ eventually decays to zero since, for pure diffusion, the distribution of the angular displacement is gaussian.
## IV Summary
We have shown that the relaxation of phenomenological Brownian rotors based on a densely frustrated XY model Hamiltonian exhibits a slow dynamics which is remarkably similar to the relaxation of fragile supercooled liquids. We find that there exists a dynamic crossover from a high temperature regime, where the dynamics can be described by a temperature-independent activation energy, to a low temperature regime where non-Arrhenius behavior sets in, which can be attributed to the system probing deeper valleys in the potential energy landscape, with an increasing activation energy barrier. The caging in the metastable minima and thermal activation across potential barriers in the energy landscape may provide the underlying physical origin for the similarity between the slow dynamic behavior of the present model system and that of real fragile supercooled liquids. It would be very interesting to characterize quantitatively the metastable states present in the system, for example by finding the local minima and the densities of metastable states. In this regard, it would also be very instructive to examine how the dynamic features change as the value of the frustration parameter $`f`$ is varied. We can also consider a Newtonian dynamics version of our system and compare it with the Langevin dynamics, which may provide further insight into these questions. We will undertake further study along these directions in the near future.
We thank Kyozi Kawasaki, Sidney Nagel, and Peter Lunkenheimer for valuable discussions. This work was supported by BSRI (BSRI 98-2412) and by SERI, Korea through CRAY R&D 98. We also acknowledge the generous allocation of computing time from the Supercomputing Center at Tong-Myung Institute of Technology.
FIGURE CAPTIONS
1. The rotational auto-correlation functions $`C_R(t)`$ versus time $`t`$ (in dimensionless units with $`\gamma =1`$ and $`J=1`$) for temperatures $`T/J=0.5`$, $`0.4`$, $`0.3`$, $`0.25`$, $`0.2`$, $`0.17`$, $`0.15`$, $`0.14`$, $`0.13`$. Inset: An Arrhenius plot of the characteristic relaxation time defined via $`C_R(\tau _R(T))=1/e`$, where the solid line is a Vogel-Tamman-Fulcher fit in the low temperature regime (see the text).
2. Stretched exponential fits (dashed lines) to the long time part of the autocorrelation functions (for the same temperatures as in Fig. 1). Time $`t`$ is measured in the same dimensionless units as in Fig. 1.
3. Rotational autocorrelation functions $`C_R(t)`$ versus the rescaled time $`t/\tau _R(T)`$. Note that the time-temperature superposition is systematically violated. The inset shows the temperature dependence of the exponents $`b(T)`$ and $`\beta (T)`$ characterizing the slow part of the correlation function $`C_R(t)`$.
4. Dynamic response function $`\chi ^{^{\prime \prime }}(\nu )`$ corresponding to the rotational relaxation versus frequency $`\nu `$ for temperatures $`T=0.5`$, $`0.4`$, $`0.3`$, $`0.25`$, $`0.2`$, $`0.17`$, $`0.15`$. In addition to the microscopic peak, one can clearly see the development of the $`\beta `$-minimum (as the temperature is lowered), the decrease of the height of the $`\alpha `$ peak and the broadening of the width of the $`\alpha `$ peak.
5. Mean squared angular displacement $`<(\mathrm{\Delta }\theta (t))^2>`$ versus time $`t`$ (in dimensionless units) for the same temperatures as in Fig. 1. At the lowest temperature probed ($`T=0.13J`$), the sub-diffusive regime extends over more than two decades.
6. Angular displacement $`\mathrm{\Delta }\theta _i(t)`$ versus time $`t`$ (in dimensionless units) at some chosen lattice sites for $`T=0.15J`$. The rotational caging effect and occasional jump motions are exhibited.
7. An Arrhenius plot of the rotational diffusion constant $`D_R(T)`$. We can see a crossover from a high temperature regime with Arrhenius behavior to a low temperature regime with non-Arrhenius behavior. The inset shows an anomalous deviation from the Stokes-Einstein relation in a plot of the product $`D_R(T)\tau _R(T)`$ versus $`T`$, where we can see that, at low temperatures, the coefficient of angular diffusion is smaller than would be expected from the standard Stokes-Einstein relation.
8. The angular velocity auto-correlation functions $`C_{AV}(t)`$ for $`T=0.50J`$ and $`T=0.13J`$ ($`t`$ in dimensionless units). For comparison, the dotted line represents the exponential relaxation corresponding to the situation where the potentials are neglected. One can see a strong rotational cage effect indicated by the oscillating tail of $`C_{AV}(t)`$.
9. The rotational autocorrelation functions versus time $`t`$ (in dimensionless units) for temperatures $`T/J=0.5`$, $`0.3`$, $`0.17`$, $`0.14`$, and $`0.13`$ together with the Gaussian approximation results (dotted lines). Systematic deviations are seen at the late time stage.
10. Nongaussian parameter versus time $`t`$ (in dimensionless units) for the same temperatures as in Fig. 1.
# Effect of disorder on the Kondo behavior of thin Cu(Mn) films
## I Introduction and Background
The behavior of magnetic impurities in metals, and in particular the Kondo effect, has been of interest for several decades. Even so, it is the object of much current work, as a number of important issues in this area remain unresolved. One of these issues is the Kondo behavior in small structures; that is, in thin films and narrow wires. Several years ago, experiments revealed that the Kondo effect can depend on system size. The Kondo effect makes a contribution to the resistivity which, at high temperatures, has the form
$$\mathrm{\Delta }\rho _K=-B_K\mathrm{ln}(T).$$
(1)
The conventional Kondo effect leads to an increase in the resistivity at low temperatures ($`\mathrm{\Delta }\rho _K>0`$), so the coefficient $`B_K`$ is positive. The experimental studies noted above concerned $`\mathrm{\Delta }\rho _K`$ in thin films and narrow wires, and found that $`B_K`$ becomes smaller when the system size is reduced; that is, the Kondo effect is suppressed in small systems. These experiments also revealed that $`B_K`$ depends on the level of disorder, with its value decreasing as the elastic mean-free-path, $`\lambda `$, is made smaller. Quite recently, a similar size dependent suppression of the Kondo effect was observed in the thermopower.
Qualitatively, the Kondo effect is due to the screening of a magnetic impurity by the conduction electrons, and it is the interaction responsible for this screening which leads to (1). It was initially suggested that the observed size dependence of $`B_K`$ might be related to the spatial extent of the conduction electron screening cloud. However, subsequent experiments showed that this explanation cannot be correct, as the length scale associated with the suppression does not vary with $`T_K`$ in the manner expected from this general picture. In addition, it has been argued on theoretical grounds that screening cloud physics cannot explain the observed size dependence of $`\mathrm{\Delta }\rho _K`$. While the experiments did not suggest the mechanism responsible for the size dependence, they did reveal that $`\mathrm{\Delta }\rho _K`$ can also be suppressed by an increase in the level of disorder, i.e., a reduction of the elastic mean-free-path, $`\lambda `$.
The observations that $`\mathrm{\Delta }\rho _K`$ depends on the system size and on $`\lambda `$ have recently been addressed by two theoretical studies. An explanation for the size dependence in the clean (large $`\lambda `$) limit has been developed by Zawadowski and co-workers. According to their picture, a combination of spin-orbit scattering and scattering from a surface (both involving the conduction electrons) gives rise to a uniaxial anisotropy at a magnetic impurity. This anisotropy has the form $`DS_z^2`$, where $`D`$ is a function of material parameters (such as the density of states) and of the distance from the surface. The precise value of $`D`$ is very difficult to estimate, so the theory cannot be judged or tested based on its prediction of the precise value of $`D`$. However, we have recently tested this theory in another way. In the experiments mentioned above, the magnetic species had integer spin; in most cases it was Fe, which is believed to be described by an effective spin $`S=2`$. In this case the anisotropy energy splits the Fe sublevels, leaving a singlet $`S_z=0`$ at an energy $`D`$ below the $`S_z=\pm 1`$ doublet. When $`D`$ is sufficiently large compared to $`k_BT`$, only this singlet ground state will be occupied, and the Fe will be nonmagnetic. This is the origin of the suppression of $`B_K`$; as a system is made smaller, an increasing fraction of the local “moments” are near the surface, and are thus rendered nonmagnetic by the splitting arising from this anisotropy. In recent work, we investigated the behavior of a magnetic impurity with half-integer spin, Mn, which has $`S=5/2`$. In this case the ground state will always be a doublet, and hence magnetic. Hence, the theory predicts that in this case $`B_K`$ will not vanish as the system size $`\rightarrow 0`$, and this was indeed observed in our experiments with Cu(Mn).
In this paper, we again consider the behavior of Cu(Mn), but now we focus on the behavior of $`B_K`$ as a function of disorder. In the case of strong disorder, Martin, Wan, and Phillips have shown theoretically that there is an interplay between Kondo physics and the quantum interference effects which are responsible for weak localization. This interplay gives $`B_K`$ the form
$$B_K=B_K^0\left(1-\frac{\alpha }{t\lambda ^2\tau _s}\right),$$
(2)
where $`B_K^0`$ is the value found in bulk systems, $`t`$ is the film thickness, $`\lambda `$ is the elastic mean free path, $`\tau _s`$ is the spin scattering time, and $`\alpha =1.2\hbar /\pi mk_F`$ is a parameter which depends only on Fermi surface properties and is not expected to vary with disorder. This result was obtained with the assumption that $`\lambda <t<L_\varphi `$, where $`L_\varphi `$ is the electron phase coherence length (which will be discussed further below). Martin, Wan, and Phillips have shown that the prediction (2) gives a good account of previous experimental results from our group for Cu(Fe) films.
In the present paper we present results for the behavior of $`B_K`$ for Cu(Mn) as a function of $`\lambda `$ and $`t`$, along with independent measurements of $`\tau _s`$ via weak localization experiments. This has allowed us to test the prediction (2) in detail; we will see that it does not seem to provide a very good quantitative account of our new results.
## II Experimental Method
Cu(Mn) has been studied a great deal in connection with the Kondo effect in bulk alloys, making it an ideal choice for the present experiments. From previous work, we know that a Mn concentration, $`c`$, in the neighborhood of a few hundred parts per million (ppm) should be low enough to observe behavior in the dilute limit; i.e., $`\mathrm{\Delta }\rho _K/c`$ should be independent of $`c`$. In order to produce films with high disorder we employed DC sputtering. To obtain Cu films with a small concentration of Mn impurities, several small pieces of manganin wire (approximately 2 mm in length and 0.5 mm diameter) were placed uniformly on the surface of a pure (99.999%) Cu sputtering target of diameter 5 cm. Manganin wire has the composition 86% Cu, 12% Mn, and 2% Ni and thus allows us to sputter a small amount of Mn along with the Cu. Using this method has two slight drawbacks. (1) The films will contain a small amount of Ni. However, since Ni does not have a local moment when placed in Cu, we do not expect its presence to affect our results. (2) It is difficult to know, with great precision, the Mn concentration. This uncertainty is due to the fact that the target does not sputter uniformly over long periods of time, the different materials have different sputtering rates, etc. We have dealt with this problem in two ways. First, we will make direct comparisons only between samples prepared in the same sputtering session. In a single such session we deposited a series of films with different thicknesses. This should produce films with the same Mn concentration, but with different thicknesses as determined by the time they are exposed to the sputtering beam, and different disorder as determined by the Ar pressure. Second, during each sputtering session the first and last films that were prepared were designed to have the same mean-free-path and thickness. We found that they always exhibited the same behavior (to within the uncertainties), which demonstrates that the Mn concentration was indeed constant throughout the session.
Based on the amount of manganin wire placed on the target and the relative sputtering rates of Cu and Mn, we estimate the concentration to be in the neighborhood of, or somewhat less than, 300 ppm. With the added knowledge that the data show no sign of magnetic ordering or coupling effects, we are confident of being in the dilute impurity regime.
In a sputtering session, a collection of substrates (glass) was mounted on a rotating holder system in such a way that only one substrate at a time was exposed to the sputtering beam. The sputtering took place with the substrates nominally at room temperature, with Ar pressures in the range 1.5 to 10 mTorr. After depositing a film, the sample holder was then rotated without breaking vacuum to expose another substrate, and the Ar gas pressure was adjusted to change the level of disorder in the next film, with higher pressures yielding films with greater disorder (i.e., shorter $`\lambda `$). This process was repeated to make typically six films per target setup. As noted above, the first and last films were made at the same Ar pressure as a check on our procedure.
Immediately after removing a given batch of films from the vacuum chamber they were coated with photoresist, and patterned with optical lithography and etching in dilute nitric acid, to produce strips of width $`\sim `$150 $`\mu \mathrm{m}`$ and length 60 cm. Note that the photoresist was then allowed to remain on the Cu(Mn) films, thus protecting them from oxidation. Resistance was measured as a function of temperature using a standard 4-wire DC method in a <sup>4</sup>He cryostat. Magnetoresistance measurements were made using an AC bridge technique with the reference resistor either at room temperature or at the same temperature as the sample.
The Kondo temperature of Cu(Mn) is not known precisely. Previous work has established only that it is below $`10^{-2}\mathrm{K}`$. This is well below the range studied here, so we will always be in the regime where the high temperature limit (1) is applicable.
## III Results and Discussion
Some typical results for the resistivity as a function of temperature are shown in Fig. 1. Here we have plotted just the change of the resistivity with temperature, with the zero of $`\mathrm{\Delta }\rho `$ chosen at a convenient temperature (here near 6 K). Below about 4 K it is seen that $`\mathrm{\Delta }\rho `$ varies approximately logarithmically with $`T`$, as expected from (1). At higher temperatures (not shown here) the resistivity increases with increasing $`T`$, due to the usual effect of electron-phonon scattering. To avoid having to deal with this effect, we will restrict our attention to the behavior below about 4 K, where electron-phonon scattering is negligible compared to the Kondo contribution to $`\mathrm{\Delta }\rho `$.
While the logarithmic variation seen in Fig. 1 is quite consistent with (1), there are two other effects which can give rise to a similar temperature dependence. In two dimensions, electron-electron interaction effects (EEI) give rise to a logarithmic variation of the sheet resistance, $`R_{\square }`$, which has the form
$$\frac{\mathrm{\Delta }R_{\square }}{R_{\square }}=-\frac{e^2}{2\pi ^2\hbar }A_{ee}R_{\square }\mathrm{ln}T,$$
(3)
where $`A_{ee}`$ is a screening factor which is typically near unity in metal films. Because of the dependence on $`R_{\square }`$, this contribution is much smaller than the Kondo effect for large thicknesses. To the best of our knowledge, the Kondo and EEI contributions are believed to be additive. With this assumption, we have subtracted the contribution calculated from (3) from our measurements (this subtraction was performed also for the data in Fig. 1). The screening factor $`A_{ee}`$ was obtained from careful measurements with pure Cu films (prepared under the sputtering conditions given above) with resistivities and thicknesses similar to those of the Kondo samples of interest. This was accomplished by measuring the variation of the resistivity (or equivalently, the sheet resistance, since the thickness was known) with temperature. By using pure Cu films, there was no Kondo contribution, and by restricting the measurement to low temperatures (below about 4 K) the electron-phonon contribution discussed above was negligible. This left only the EEI effect and weak localization (WL), which we will discuss in detail in a moment. The WL effect was quenched (without affecting the EEI contribution) by the application of a magnetic field of 15 kOe perpendicular to the film plane. From the results for several samples we found $`A_{ee}=1.0\pm 0.1`$, a value which is quite in line with that reported previously for similar films. The uncertainty in $`A_{ee}`$ is a conservative estimate which encompasses all of our results for Cu(Mn) films over a wide range of thickness. This value for $`A_{ee}`$ will be used below to subtract the EEI contribution from the total temperature variation found for our Cu(Mn) films.
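For concreteness, the subtracted EEI term of Eq. (3) can be evaluated as in the following sketch (ours, in SI units; the sign convention follows our reading of Eq. (3), and the reference temperature is an arbitrary normalisation):

```python
import numpy as np
from scipy.constants import e, hbar

def eei_sheet_resistance_change(R_sq, T, T_ref, A_ee=1.0):
    """Delta R_sq from Eq. (3) between T_ref and T (R_sq in ohms, T in kelvin)."""
    return -(e**2/(2.0*np.pi**2*hbar))*A_ee*R_sq**2*np.log(T/T_ref)

# Kondo part of a measured resistance change, schematically:
# dR_kondo = dR_measured - eei_sheet_resistance_change(R_sq, T, 6.0)
```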
Another contributor to our measured temperature dependence is weak localization (WL). This is a quantum interference effect which makes a contribution of the form
$$\frac{\mathrm{\Delta }R_{\square }}{R_{\square }}=\frac{e^2}{4\pi ^2\hbar }R_{\square }\mathrm{ln}L_\varphi (T),$$
(4)
for our Cu and Cu(Mn) samples (which exhibit antilocalization in small fields). Here $`L_\varphi `$ is the electron phase coherence length. If, as happens to occur in many cases, $`L_\varphi `$ varies as a power of $`T`$, the WL contribution varies logarithmically with temperature, with a magnitude similar to that of the EEI effect. However, in Cu(Mn) the phase coherence length is limited by spin scattering, in which the spin of a conduction electron is flipped through an interaction with the local moment (Mn). This typically gives rise to a phase coherence length which varies only weakly, if at all, with temperature.
In most cases below we are not interested in the WL contribution. For our thickest samples, which have the smallest sheet resistances, the WL effect is generally negligible (since the WL effect (4) is proportional to $`R_{\square }`$). It becomes more important for the thinnest films, but even in such cases it can be avoided by performing the measurement in a magnetic field. As noted above, a large magnetic field applied perpendicular to the film quenches WL. While WL can thus be easily avoided, it can also provide some important information. The theoretical prediction (2) involves the spin scattering time, $`\tau _s`$. When the phase coherence length is dominated by spin scattering, it is given by $`L_\varphi =\sqrt{D\tau _s}`$, where $`D=v_F\lambda /3`$ is the electron diffusion constant (here $`v_F`$ is the Fermi velocity). Measurement of the magnetoresistance in low fields, which is due solely to WL, can be used to extract $`L_\varphi `$ and hence also $`\tau _s`$. We will make use of this fact below.
Returning to Fig. 1, we see that varying the level of disorder, i.e., $`\lambda `$, has a large effect on the behavior. This can also be seen from Fig. 2, which shows results for $`B_K`$ as a function of $`\lambda `$, for different values of film thickness, $`t`$. Each data point was obtained from a measurement of the resistivity, $`\rho `$, as a function of $`T`$, like the ones shown in Fig. 1. The results for $`\rho (T)`$ were fit to a logarithmic form, and the EEI effect was subtracted using the (previously) measured value of $`A_{ee}`$ discussed above. Note also that we give results for magnetic field $`H=0`$, and for $`H=5\mathrm{kOe}`$ applied perpendicular to the plane of the sample; the latter should be sufficient to quench WL. Typical uncertainties are shown; they are seen to become larger for the thinner films, for the following reason. As the film thickness decreases, the sheet resistance increases, making EEI larger relative to the Kondo effect. Our uncertainty in $`A_{ee}`$ then leads to a larger uncertainty in $`B_K`$. The solid curves in Fig. 2 are simply guides to the eye drawn through the data for $`H=5\mathrm{kOe}`$ (the filled symbols). The dashed curves have the general functional form predicted by the theory (2).
Before discussing the results in Fig. 2 we should note again that all of the data in a given plot were obtained from samples prepared in a single sputtering session, and hence had the same Mn concentration. However, this concentration varied somewhat from one session to the next, so differences in the absolute scale of $`B_K`$ in the different cases are likely due, completely or in large part, to variations in the concentration. This should not affect either the general shapes of these curves as a function of $`\lambda `$, or the variation with field, and these are what we will rely on in our analysis below.
The thickest samples, with $`t=700`$ and $`400\mathrm{\AA }`$, exhibit similar behavior. For large disorder, i.e., small $`\lambda `$, $`B_K`$ appears to approach zero to within the uncertainties, except perhaps for the smallest values of $`\lambda `$ where the uncertainties (due to uncertainties in the EEI subtraction) become very large. As $`\lambda `$ is increased, $`B_K`$ also increases, with this increase becoming more rapid when $`\lambda `$ exceeds a “threshold” value. This threshold is $`\lambda \approx 500\mathrm{\AA }`$ for the samples with $`t=700\mathrm{\AA }`$, and decreases to $`\lambda \approx 150\mathrm{\AA }`$ when the thickness is reduced to $`400\mathrm{\AA }`$. It is especially noteworthy that the behavior for these two values of the film thickness is affected very little by the application of a magnetic field. The only change is a decrease in $`B_K`$ in the presence of a field; the magnitude of this decrease is very small for the case of weak disorder (large $`\lambda `$) and becomes larger as $`\lambda `$ is reduced. Qualitatively and quantitatively, it would appear that the only effect of a field is to quench weak localization.
The behavior for the thinner samples is more difficult to determine, as the uncertainties are larger (for the reasons discussed above). The results for the $`275\mathrm{\AA }`$ thick samples are consistent with $`B_K\rightarrow 0`$ as $`\lambda `$ becomes small, or with a $`B_K`$ which is independent of $`\lambda `$. The uncertainties for the $`t=150\mathrm{\AA }`$ samples are also substantial. Here again the data are consistent (barely) with $`B_K`$ being independent of disorder, although they seem to prefer a value of $`B_K`$ which grows substantially as $`\lambda \rightarrow 0`$.
Let us now compare these results to the theory (2). Taken at face value, (2) would seem to predict that $`B_K`$ can become negative, i.e., a “negative” Kondo effect, when either $`\lambda `$ is made sufficiently short, or the thickness $`t`$ is made very small. To within our uncertainties, we have no evidence for a negative $`B_K`$ in these limits. However, it seems quite plausible that the expression (2) cannot be extrapolated to the regime where $`\alpha /(t\lambda ^2\tau _s)\gtrsim 1`$; i.e., other higher order terms or contributions may then be important, etc. If this is the case, then (2) cannot be meaningfully extrapolated to the parameter regime where it yields a negative $`B_K`$. However, it may still make sense to use it to estimate the “threshold” values of $`\lambda `$ noted above. That is, the values of $`\lambda `$ at which $`B_K`$ is observed in Fig. 2 to increase substantially may be estimated from the condition
$$\frac{1}{t\lambda ^2}\approx \mathrm{constant},$$
(5)
which is obtained by simply setting (2) to zero. The condition (5) suggests that the threshold value of $`\lambda `$ should increase as the thickness is reduced. However, this is opposite to the trend observed in Fig. 2 as the thickness is reduced from 700 to 400$`\mathrm{\AA }`$. It does not appear that the trend found in our experiment can be accounted for by (2).
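A quick numerical check of condition (5) against the observed thresholds (our own arithmetic) makes this explicit:

```python
# thickness t and threshold lambda in angstroms, read off Fig. 2
for t, lam in [(700.0, 500.0), (400.0, 150.0)]:
    print(t, lam, 1.0/(t*lam**2))
# gives ~5.7e-9 and ~1.1e-7 (angstrom^-3): far from constant, and the threshold
# lambda moves the wrong way with t, confirming the mismatch described above.
```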
The effect of a field on $`B_K`$ is also noteworthy. The disorder correction to the bulk Kondo effect in (2) arises from a Kondo contribution to the spin scattering time which then affects WL. According to this theoretical picture, application of a magnetic field should not only quench the “ordinary” WL effect, but also destroy the suppression of $`B_K`$. This means that a field should cause the Kondo contribution to increase with respect to its value in zero field. Such behavior is contrary to what is observed in our experiments, Fig. 2, where $`B_K`$ is seen to be either essentially constant, or decrease, in the presence of a field.
### A Spin scattering rate
The spin scattering time, $`\tau _s`$, plays a key role in the prediction (2). We have therefore used measurements of the WL magnetoresistance to obtain an independent measure of $`\tau _s`$. Typical results for the variation of the resistivity as a function of magnetic field applied perpendicular to the film plane are shown in Fig. 3. The analysis of such measurements is now standard, and is described in detail elsewhere. A fit to the theoretical form for the WL magnetoresistance yields the phase coherence length, $`L_\varphi `$. The results from the data in Fig. 3 are $`1500\mathrm{\AA }`$ for the $`150\mathrm{\AA }`$ thick sample, and $`1300\mathrm{\AA }`$ for the thicker sample, with uncertainties of approximately $`100\mathrm{\AA }`$ in both cases. These values are in good accord with results found previously for other samples in which spin scattering was dominant. In addition, to within the uncertainties just noted, the value of $`L_\varphi `$ changed very little in going to 4.2 K, which is also expected when spin scattering is dominant.
To obtain $`\tau _s`$ from the phase coherence length, we must estimate the value of the diffusion constant. If we use nearly-free-electron theory to obtain the Fermi velocity, we find $`v_F=1.6\times 10^8\mathrm{cm}/\mathrm{s}`$, which for the samples in Fig. 3 leads to $`D\approx 15\mathrm{cm}^2/\mathrm{s}`$. Using this value together with the phase coherence lengths found from the magnetoresistance gives $`\tau _s\approx 1.5\times 10^{-11}\mathrm{s}`$.
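The arithmetic behind these numbers is, schematically (our sketch; the mean free path used here is a hypothetical value chosen to reproduce the quoted diffusion constant):

```python
v_F   = 1.6e8    # cm/s, nearly-free-electron Fermi velocity for Cu (from the text)
lam   = 2.8e-7   # cm (~28 A), hypothetical mean free path giving D ~ 15 cm^2/s
D     = v_F*lam/3.0      # ~15 cm^2/s
L_phi = 1.5e-5           # cm (~1500 A), from the WL magnetoresistance fits
tau_s = L_phi**2/D       # ~1.5e-11 s, as quoted above
```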
Let us now compare this with the theory by using (2) to calculate what value of $`\tau _s`$ would be needed to suppress $`B_K`$ to zero at the values of $`t`$ and $`\lambda `$ observed in the experiment, Fig. 2. From the results for $`t=400\mathrm{\AA }`$, we estimate that $`B_K\approx 0`$ at $`\lambda \approx 150\mathrm{\AA }`$. Inserting these values into (2) we find $`\tau _s\approx 3\times 10^{-9}\mathrm{s}`$. This is approximately two orders of magnitude larger than the measured $`\tau _s`$. The measured value of $`\tau _s`$ is derived from the phase coherence length, and therefore depends on the value employed for the diffusion constant, which contributes some uncertainty. However, we do not believe that this uncertainty amounts to two orders of magnitude. The value of $`\tau _s`$ required to make the theory (2) compatible with our results is difficult to reconcile with our direct measurement of the spin scattering time.
## IV Conclusions
We have found that the Kondo effect in thin Cu(Mn) films is suppressed as the level of disorder is increased. This dependence on disorder is similar to that found previously for Cu(Fe). However, the present results are much more detailed, and are the first to reveal the detailed dependence of $`B_K`$ on the film thickness, elastic mean-free-path, and magnetic field. The theory of Martin, Wan, and Phillips provides only a qualitative account of our data. There are several, potentially serious, quantitative discrepancies which remain unresolved, and which suggest that the crucial physics for this problem is not yet accounted for.
## Acknowledgements
We thank A. Zawadowski, P. F. Muzikar, and especially P. Phillips and I. Martin, for many enlightening, and patient, discussions. This work was supported by the National Science Foundation through grant DMR 95-31638.
# ROSAT HRI monitoring of extreme X-ray variability in the narrow-line quasar PHL 1092
## 1 Introduction
PHL 1092 ($`B=16.7`$; $`z=0.396`$) is a luminous Narrow-Line Seyfert 1 (NLS1) class quasar that is one of the strongest optical Fe ii emitters known (Bergeron & Kunth 1980). Such objects have been generally found to have extreme X-ray spectral and variability properties, and it appears likely that their exceptional X-ray/optical characteristics arise as the result of an extreme value of a primary physical parameter (see Brandt & Boller 1998 for a recent discussion). This primary parameter must ultimately originate from the immediate vicinity of the supermassive black hole since it can strongly influence the energetically-important and rapidly-variable X-ray emission.
The ROSAT PSPC spectrum of PHL 1092 is one of the softest ever seen from a quasar (Brandt 1995; Forster & Halpern 1996, hereafter FH96; Lawrence et al. 1997). For example, a simple power-law fit to the PSPC spectrum gives a photon index of $`\mathrm{\Gamma }=4.2\pm 0.5`$. The poorly-sampled PSPC light curve showed remarkably rapid variability for such a luminous object. The count rate increased by a factor of $`\sim 4`$ during a 2-day period, and there were weak indications for even more rapid variability. No strong spectral variability was apparent. FH96 used the ‘radiative efficiency limit’ of Fabian (1979) to argue that the implied radiative efficiency was $`\eta >0.13`$, and they suggested that a Kerr black hole and/or anisotropic emission was implied. The other authors cited above interpreted the data in a somewhat more reserved manner (see below for discussion), and they found $`\eta \gtrsim 0.02`$. In this case, a Kerr black hole and/or anisotropic emission is not necessarily required.
We performed an 18-day ROSAT HRI monitoring campaign on PHL 1092 to further study its X-ray variability properties, and here we report the results from our campaign. Our monitoring goals were (1) to determine whether extreme X-ray variability persistently occurs in this quasar and (2) to search for outstanding variability events even more extreme than those seen by the PSPC. As we discuss below, goal 2 is important since such variability events can place constraints on emission processes and may elucidate the origin of the extreme properties of ultrasoft NLS1 more generally. This work builds upon our HRI monitoring of the similar but lower-luminosity ultrasoft NLS1 IRAS 13224–3809 (Boller et al. 1997, hereafter BBFF). PHL 1092 is $`\sim 40`$ times more luminous than IRAS 13224–3809.
The Galactic column density towards PHL 1092 is $`(3.6\pm 0.2)\times 10^{20}`$ cm<sup>-2</sup> (Murphy et al. 1996), and the PSPC spectrum constrained the intrinsic column density to be $`<2.0\times 10^{20}`$ cm<sup>-2</sup>. We adopt $`H_0=70`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=\frac{1}{2}`$, and we hence derive a luminosity distance of 1840 Mpc. When we calculate luminosities below, we shall implicitly assume isotropic emission unless stated otherwise.
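For reference, the quoted distance follows from the Mattig relation for a matter-dominated universe with $`q_0=\frac{1}{2}`$ (our own check):

```python
import numpy as np

c, H0, z = 2.998e5, 70.0, 0.396                  # km/s, km/s/Mpc, redshift
d_L = (2.0*c/H0)*(1.0 + z - np.sqrt(1.0 + z))    # Mattig relation for q0 = 1/2
print(d_L)   # ~1840 Mpc, matching the value adopted in the text
```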
## 2 ROSAT HRI monitoring results
### 2.1 Observations and spatial analysis
ROSAT HRI observations of PHL 1092 were performed between 1997 July 16 (02:47:15 UT) and 1997 August 2 (13:53:58 UT). The total exposure time was 109.135 ks, and the source was centred on-axis. Up to five separate observations were obtained each day, and coverage was reasonably good except for a period when PHL 1092 was not observable due to the position of the Moon (the largest gap in Figure 1). All data analysis has been performed using ftools.
We have added together all the HRI observations to produce a master image. The centroid position of PHL 1092 in the HRI master image is $`\alpha _{2000}=01^\mathrm{h}39^\mathrm{m}56.1^\mathrm{s}`$, $`\delta _{2000}=+06^{\circ }19^{\prime }24.7^{\prime \prime }`$. This position is in good agreement with the precise optical position given by Fanti et al. (1977), and there is no significant evidence for X-ray spatial extent after errors in attitude correction are taken into consideration (see Morse 1994). The source plus background photons at the HRI position of PHL 1092 were extracted using a circular source cell with a radius of 25 arcsec. The background was extracted using an annulus centred on PHL 1092 with an inner radius of 40 arcsec and an outer radius of 100 arcsec.
### 2.2 Count rate variability
In Figure 1 we show the full monitoring light curve for PHL 1092. Strong X-ray variability is apparent throughout the light curve. We have calculated the smallest and largest observed HRI count rates to determine the maximum variability amplitude. In such calculations, we always average over at least 4 of the data points shown in Figure 1 to prevent statistical count rate fluctuations from inducing artificially large variability amplitudes. The smallest count rate is observed between days 7.2–7.4 in Figure 1, and the 6 data points during this span of time give a mean count rate of $`(1.13\pm 0.36)\times 10^{-2}`$ count s<sup>-1</sup>. The largest count rate is observed between days 17.0–17.2, and the 4 data points during this span of time give a mean count rate of $`(1.57\pm 0.16)\times 10^{-1}`$ count s<sup>-1</sup>. The most probable maximum variability amplitude is a factor of 13.9, and maximum variability amplitudes in the range 9.5–22.5 are most likely (Cauchy distributed; see section 2.3.4 of Eadie et al. 1971).
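The amplitude and its spread can be estimated from the two mean rates by simple error propagation (a Gaussian sketch of our own; the 9.5–22.5 range quoted above comes from the full Cauchy treatment of Eadie et al.):

```python
import numpy as np

high, dhigh = 1.57e-1, 0.16e-1   # count/s, mean and error of the brightest bins
low,  dlow  = 1.13e-2, 0.36e-2   # count/s, mean and error of the faintest bins

amp  = high/low                                        # ~13.9
damp = amp*np.sqrt((dhigh/high)**2 + (dlow/low)**2)    # ~4.7, Gaussian propagation
```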
In Figure 2 we show the most rapid observed variability event. This event occurred around day 8.6 in Figure 1. The HRI count rate rose from $`(3.46\pm 0.98)\times 10^{-2}`$ count s<sup>-1</sup> to $`(1.31\pm 0.19)\times 10^{-1}`$ count s<sup>-1</sup>, thereby increasing by a factor of $`\sim 3.8`$ in $`<5000`$ s ($`<3580`$ s in the rest frame of PHL 1092). We also detect 6 additional events where the HRI count rate exhibits a highly-significant increase or decrease by a factor $`>2`$ in $`<1`$ day.
### 2.3 Luminosity variability
We use the spectrum observed by the ROSAT PSPC to determine a conversion factor, $`f`$, between HRI count rate and luminosity. Unlike FH96, we do not simply extrapolate a steep ($`\mathrm{\Gamma }=4.2`$) power-law model down to 0.1 keV to derive a 0.1–2.0 keV luminosity. The extrapolation of a steep power-law spectrum to low energies where Galactic absorption is important can lead to unphysically large, or at least highly uncertain, luminosities. For such models most of the luminosity lies between 0.1–0.2 keV where few counts are actually detected (this is the cause of the much higher radiative efficiency derived by FH96 as compared to Brandt 1995 and Lawrence et al. 1997).
We have instead considered the luminosity from 0.2–2.0 keV in the observed frame since, for the Galactic column density relevant here, this quantity is much less dependent upon the details of the spectral model. We have considered three spectral models that give statistically acceptable fits to the PSPC data: a power-law model (M1), a power-law plus double blackbody model (M2) and a power-law plus bremsstrahlung model (M3). Using xspec (Arnaud 1996) and pimms (Mukai 1997), we find $`f_{\mathrm{M1}}=6.0\times 10^{46}`$ erg count<sup>-1</sup>, $`f_{\mathrm{M2}}=4.8\times 10^{46}`$ erg count<sup>-1</sup> and $`f_{\mathrm{M3}}=5.2\times 10^{46}`$ erg count<sup>-1</sup>. In order to be reasonably conservative, we shall adopt $`f_{\mathrm{M2}}`$.
We find that the 0.2–2.0 keV luminosity of PHL 1092 varies between $`(5.4\pm 1.7)\times 10^{44}`$ erg s<sup>-1</sup> and $`(7.5\pm 0.8)\times 10^{45}`$ erg s<sup>-1</sup> during our monitoring observation. If we assume that the optical continuum is not highly variable on timescales of a few weeks (cf. section 3.1 of FH96), the $`\alpha _{\mathrm{ox}}`$ value for PHL 1092 varies between $`1.8`$ and $`1.4`$ (we calculate $`\alpha _{\mathrm{ox}}`$ between 3000 Å and 2 keV). For comparison, the $`\alpha _{\mathrm{ox}}`$ range for optically selected quasars is $`1.5\pm 0.1`$ (e.g. Laor et al. 1997). For the variability event shown in Figure 2, we detect a change in HRI count rate of $`(9.64\pm 2.14)\times 10^{-2}`$ count s<sup>-1</sup>. This corresponds to a luminosity change of $`\mathrm{\Delta }L=(4.6\pm 1.0)\times 10^{45}`$ erg s<sup>-1</sup> in a rest-frame time interval of $`\mathrm{\Delta }t<3580`$ s.
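The quoted luminosity change is simply the adopted conversion factor applied to this count-rate change:

$$\mathrm{\Delta }L=f_{\mathrm{M2}}\times (9.64\pm 2.14)\times 10^{-2}\mathrm{count}\mathrm{s}^{-1}=(4.6\pm 1.0)\times 10^{45}\mathrm{erg}\mathrm{s}^{-1}.$$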
## 3 Discussion
### 3.1 Radiative efficiency limit arguments
The variability event shown in Figure 2 has $`\frac{\mathrm{\Delta }L}{\mathrm{\Delta }t}>1.3\times 10^{42}`$ erg s<sup>-2</sup>, making it the most extreme such event we are aware of from a radio-quiet quasar (compare with Remillard et al. 1991 and table 3 of FH96). This can be further quantified by employing the radiative efficiency ($`\eta `$) limit: $`\eta >4.8\times 10^{-43}\frac{\mathrm{\Delta }L}{\mathrm{\Delta }t}`$ (Fabian 1979). Straightforward application of the limit gives an extremely high $`\eta >0.62\pm 0.13`$. For comparison, optimal accretion onto a Kerr black hole rotating at the maximum plausible rate gives only $`\eta \approx 0.3`$ (see Thorne 1974).
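To make the arithmetic explicit, the bound follows from inserting the event parameters of Section 2.3:

$$\eta >4.8\times 10^{-43}\times \frac{(4.6\pm 1.0)\times 10^{45}\mathrm{erg}\mathrm{s}^{-1}}{3580\mathrm{s}}=0.62\pm 0.13.$$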
The extreme radiative efficiency derived above provides motivation to critically examine the assumptions underlying the radiative efficiency limit. The standard derivation assumes that the radiation release associated with an observed luminosity outburst occurs entirely at the centre of the emission region. If the radiation release is more uniform, the rapid emission of photons from the outer few Thomson depths facing the observer can invalidate the standard derivation (see Appendix A). A rapid rise in the flux from a source, such as we observe for PHL 1092 (and such as has typically been used in the literature), need not therefore involve the whole source, and in that case no limit applies.
We can, however, recover the situation if either (a) we assume restrictions upon the manner of radiation release or (b) we can place an upper limit upon the Thomson depth of the emission region. For case (a), we return to the original efficiency limit if all parts of the region emitting the sharp flux increase are in causal contact with each other (rather than with some remote centre of an idealized sphere; see Appendix A). For case (b), we have from Appendix A when $`\tau _\mathrm{T}`$ is large that $`\mathrm{\Delta }t=\vartheta \frac{h}{c}=\vartheta \frac{R}{\tau _\mathrm{T}c}`$ where $`\vartheta `$ is a geometrical factor of order unity. Thus $`\tau _\mathrm{T}=\vartheta \frac{R}{c\mathrm{\Delta }t}`$. If we can place an upper limit on $`R`$ we also place an upper limit on $`\tau _\mathrm{T}`$. The emission dominating the ROSAT band is from the soft X-ray excess, and this emission is thought to be associated with the inner accretion disk. If we first make the assumption of blackbody emission we obtain $`R<4\times 10^{11}L_{44}^{1/2}T_6^{-2}`$ cm where $`L_{44}`$ is the total blackbody luminosity in units of $`10^{44}`$ erg s<sup>-1</sup> and $`T_6`$ is the blackbody temperature in units of $`10^6`$ K. We have $`L_{44}\approx 50`$, and $`T_6`$ is unlikely to be smaller than $`0.35`$ based on accretion disk theory and observations of the soft X-ray excess in similar objects (see equation 3.20 of Peterson 1997; measured soft X-ray excess temperatures are usually larger than this but will be strongly affected by relativistic effects if the black hole is Kerr). We then find $`R\lesssim 2.3\times 10^{13}`$ cm and $`\tau _\mathrm{T}\lesssim 0.5\vartheta `$. While the derived $`R`$ constraint is physically plausible (of order a couple of gravitational radii for a black hole mass of $`10^8`$ M<sub>☉</sub>), the low Thomson depth is inconsistent with the assumption that $`\tau _\mathrm{T}`$ is large. Thus when the source size is determined by blackbody radiation, one cannot escape the efficiency limit by invoking large $`\tau _\mathrm{T}`$ as is done in Appendix A. Additional opacity sources, such as free-free absorption, are expected in the case of blackbody or quasi-blackbody emission. These have the effect of strengthening the above argument ($`\tau _\mathrm{T}`$ is replaced by an effective $`\tau _{\mathrm{Tot}}>\tau _\mathrm{T}`$ that includes the additional opacity sources).
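The numerical bound on $`R`$ quoted above is just the blackbody relation $`L=4\pi R^2\sigma T^4`$ evaluated with the adopted values:

$$R\lesssim 4\times 10^{11}\times \frac{(50)^{1/2}}{(0.35)^2}\mathrm{cm}\approx 2.3\times 10^{13}\mathrm{cm}.$$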
Another possibility for case (b) is that the emission is due to Comptonization (which is plausibly the fastest emission process in this band). In this case, a cloud of gas bathed in soft photons is impulsively heated. The soft photons must originate at energies below the HRI band for large variability to be seen. Large spectral variations will occur as the outgoing photons are successively Compton upscattered before escaping unless the electron temperature is large and $`\tau _\mathrm{T}`$ is small (see Guilbert, Fabian & Ross 1982). The relative change in photon energy per scattering is $`4kT/mc^2`$, and therefore $`kT\gtrsim mc^2/4`$ in order that many of the first burst of photons, which scatter at most once, appear in the HRI band. Such a high temperature requires that the Thomson depth of the whole source is small. The equilibrium Compton scattering formulae given by Zdziarski (1985; e.g. his equation 5; we correct his equation 6) require that $`\tau _\mathrm{T}\lesssim 0.04`$ if $`kT/mc^2\gtrsim 0.25`$ in order to obtain the steep mean photon index from the source of $`\mathrm{\Gamma }=4.2\pm 0.5`$. Again this upper limit on $`\tau _\mathrm{T}`$ allows a lower bound to be placed upon $`\eta `$.
The precise value of the coefficient in the efficiency limit does depend upon geometry and can be uncertain by a factor of a few. We have seen that plausible constraints on radiation mechanisms make it unlikely that the much larger uncertainty theoretically possible for a high $`\tau _\mathrm{T}`$ sphere is realised. We note further that there are likely to be inefficiencies in the conversion of the accretion energy into radiation, with some energy ending in kinetic or magnetic form. We conclude that the rapid variation seen in Figure 2 requires an efficiency of at least $`0.10`$ and more probably $`0.60`$.
### 3.2 Relativistic X-ray boosting in ultrasoft NLS1?
A straightforward application of the radiative efficiency limit to our most rapid observed variability implies $`\eta >0.62\pm 0.13`$. This appears to be substantially larger than can be explained without allowing for relativistic effects, but moderate relativistic boosting can easily explain our data (see Guilbert, Fabian & Rees 1983 and BBFF). The relativistic bulk motions required are of order $`0.3c`$ and might plausibly arise in the inner accretion disk. Relativistic X-ray boosting associated with a strong jet appears less likely given the spectral character of the soft X-ray emission and the fact that PHL 1092 is radio quiet.
Evidence for relativistic X-ray variability enhancement has been found in two other NLS1-class objects to date: PKS 0558–504 and IRAS 13224–3809. PKS 0558–504 showed a rapid X-ray flare that implied relativistic motions (Remillard et al. 1991), while IRAS 13224–3809 shows persistent, giant-amplitude, nonlinear variability that is most naturally explained via relativistic effects (BBFF). Our data on PHL 1092 now add further weight to the idea that unusually strong relativistic effects may be present in many ultrasoft NLS1. This hypothesis can be further examined by searching for X-ray spectral changes during putative relativistic variability events. XMM has the large collecting area and low-energy sensitivity needed for this work, and we hope to perform such observations.
## Acknowledgments
We thank the ROSAT team for scheduling help. ACF acknowledges support from the Royal Society. MR acknowledges support from an External Research Studentship of Trinity College, Cambridge; an ORS award; and the Stefan Batory Foundation. This work has been supported by NASA grant NAG5-6023 and a NASA LTSA grant.
## Appendix A Dependence of the radiative efficiency limit upon the geometry of radiation release
The radiative efficiency limit was derived by arguing that a radiatively-inefficient source must have such a high particle density that a rapid release of radiation in the source appears to an observer to be slowed down by Thomson scattering. For a luminosity outburst of amplitude $`\mathrm{\Delta }L`$ with a rise time of $`\mathrm{\Delta }t`$, the radiative efficiency $`\eta `$ is defined by the equation
$$\mathrm{\Delta }L\mathrm{\Delta }t=\eta Mc^2$$
(1)
where $`M`$ is the mass involved in the outburst and $`c`$ is the speed of light. $`M\approx m_\mathrm{p}nV`$ where $`m_\mathrm{p}`$ is the mass of the proton, $`n`$ is the proton number density (also assumed to be the electron number density), and $`V`$ is the volume of the emission region. The standard derivation of the radiative efficiency limit (Fabian 1979) assumes a uniform, spherical emission region with the release of radiation localized at its centre, and it also assumes that relativistic Doppler boosting and light bending are unimportant. If the emission region has a significant Thomson depth ($`\tau _\mathrm{T}\gtrsim 1`$), $`\mathrm{\Delta }t`$ must satisfy
$$\mathrm{\Delta }t\gtrsim (1+\tau _\mathrm{T})\frac{R}{c},\qquad \tau _\mathrm{T}=n\sigma _\mathrm{T}R$$
(2)
where $`\sigma _\mathrm{T}`$ is the Thomson cross section and $`R`$ is the radius of the region. The limit on $`\eta `$ arises due to the competition between the light crossing time and the photon diffusion time. By combining equations (A1) and (A2) one obtains
$$\eta \gtrsim \frac{3\sigma _\mathrm{T}}{4\pi m_\mathrm{p}c^4}\frac{\mathrm{\Delta }L}{\mathrm{\Delta }t}f(\tau _\mathrm{T}),\qquad f(\tau _\mathrm{T})\equiv \frac{(1+\tau _\mathrm{T})^2}{\tau _\mathrm{T}}.$$
(3)
$`f(\tau _\mathrm{T})`$ has the asymptotic behaviour
$$f(\tau _\mathrm{T})\approx \{\begin{array}{cc}\tau _\mathrm{T}^{-1}\hfill & \text{for }\tau _\mathrm{T}\ll 1\text{ (light crossing time dominates)}\hfill \\ \tau _\mathrm{T}\hfill & \text{for }\tau _\mathrm{T}\gg 1\text{ (photon diffusion time dominates).}\hfill \end{array}$$
This means that there exists a minimum value of $`f(\tau _\mathrm{T})`$ (4 when $`\tau _\mathrm{T}=1`$) and hence of $`\eta `$ for a given $`\mathrm{\Delta }L`$ and $`\mathrm{\Delta }t`$.
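Evaluating the prefactor of equation (A3) at this minimum recovers, to rounding, the numerical coefficient used in Section 3.1:

$$\frac{3\sigma _\mathrm{T}}{4\pi m_\mathrm{p}c^4}\approx 1.2\times 10^{-43}\mathrm{s}^2\mathrm{erg}^{-1},\qquad 4\times 1.2\times 10^{-43}\approx 4.8\times 10^{-43}\mathrm{s}^2\mathrm{erg}^{-1}.$$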
In order to relax some of the assumptions entering the standard analytic derivation, we simulated photon diffusion from ‘clouds’ of moderate and high Thomson depths using Monte Carlo techniques. We first considered photon diffusion for the case described above with high Thomson depth and found our results to be consistent with the analytic results published by Sunyaev & Titarchuk (1980). We then considered a simple model of a uniform, spherical cloud but allowed radiation to be produced instantaneously throughout the cloud in a uniform manner. The results of the simulations for different Thomson depths are presented in Figure A1. The curves show the time dependence of the observed photon flux $`\frac{dN}{dt}\propto L`$ for a wide range of $`\tau _\mathrm{T}`$. There is an important difference in the behaviour of these curves as compared to the case when the radiation release occurs entirely at the centre of the cloud: when $`\tau _\mathrm{T}`$ becomes large there is now no reduction in the rate of increase of $`\frac{dN}{dt}`$ at the start of the outburst. In other words, at the start of the outburst $`\frac{d^2N}{dt^2}\propto \frac{dL}{dt}`$ increases monotonically as $`\tau _\mathrm{T}`$ is increased (even for $`\tau _\mathrm{T}\gg 1`$). This result can be understood by realizing that the radiation observed near the start of the outburst comes mostly from the outer few Thomson depths of the cloud facing the observer. The relevant ‘cup-shaped’ emission region at the start of the outburst is constrained by the surface of the cloud and the surface of equal photon arrival time. An upper limit to its volume for the diffusion dominated case is $`V_{\mathrm{cup}}\approx \pi R^2h`$, where $`h\approx \frac{R}{\tau _\mathrm{T}}`$. Appropriate modification of equation (A3) leads to
$$\eta \gtrsim \frac{3\sigma _\mathrm{T}}{4\pi m_\mathrm{p}c^4}\frac{\mathrm{\Delta }L}{\mathrm{\Delta }t}\tilde{f}(\tau _\mathrm{T}),$$
(4)
where
$$\tilde{f}(\tau _\mathrm{T})\approx \tau _\mathrm{T}^{-2}\text{ for }\tau _\mathrm{T}\gg 1\text{ (photon diffusion time dominates).}$$
The lower limit on the radiative efficiency in this case is a trivial $`\eta \ge 0`$.
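The $`\tilde{f}(\tau _\mathrm{T})\approx \tau _\mathrm{T}^{-2}`$ scaling can be seen directly. The mass available in the cup-shaped region is $`M\approx m_\mathrm{p}nV_{\mathrm{cup}}=\pi m_\mathrm{p}R^2/\sigma _\mathrm{T}`$ (using $`n=\tau _\mathrm{T}/\sigma _\mathrm{T}R`$ and $`h\approx R/\tau _\mathrm{T}`$), while the observed rise time is $`\mathrm{\Delta }t\approx R/(\tau _\mathrm{T}c)`$. Eliminating $`R`$ between these relations gives $`\eta \gtrsim (\sigma _\mathrm{T}/\pi m_\mathrm{p}c^4)(\mathrm{\Delta }L/\mathrm{\Delta }t)\tau _\mathrm{T}^{-2}`$, which indeed vanishes as $`\tau _\mathrm{T}\to \infty `$.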
The assumption that radiation is released instantaneously throughout the cloud is obviously an unrealistic one because it violates causality. To investigate the effect of causality, we considered a ‘trigger signal’ propagating outwards from the centre of the cloud with speed $`c`$. Radiation is released in a volume element of the cloud only after the trigger signal reaches it. Inclusion of this effect in the simulations does not change the general behaviour described above; at the start of the outburst, $`\frac{d^2N}{dt^2}`$ increases monotonically as $`\tau _\mathrm{T}`$ is increased. Again the only constraint on the radiative efficiency is the trivial $`\eta \ge 0`$.
A nonzero lower limit for $`\eta `$ can be derived if one considers somewhat different definitions for $`\mathrm{\Delta }L`$ and $`\mathrm{\Delta }t`$. For example, one could define the characteristic variation timescale $`\mathrm{\Delta }t`$ as the time (measured from the observed start of the outburst) it takes for $`90`$ per cent of the outburst photons to reach the observer. Similarly $`\mathrm{\Delta }L`$ could be defined as 90 per cent of the energy liberated in the outburst divided by $`\mathrm{\Delta }t`$. There is a nonzero lower limit for $`\eta `$ with the above definitions (this has been checked numerically). However, in order to reliably obtain an efficiency bound in this way one must be able to determine the overall profile of an outburst. In particular, one must be able to measure any decaying ‘tail’ with reasonably high precision. This is usually not possible due to observational constraints and the complexity of active galaxy light curves (e.g. outbursts often overlap each other).
In summary, the radiative efficiency limit is quite sensitive to the geometry of radiation release within the emission region. Without constraints on this geometry or the Thomson depth of the emission region, it is difficult to place meaningful constraints on $`\eta `$.
# Disk-Anchored Magnetic Propellers – A Cure for the SW Sex Syndrome
## 1 Introduction
We now possess a fairly satisfactory understanding of the double-peaked emission lines from the accretion disks of dwarf novae in quiescence. Their twin peaks are the result of supersonic Doppler shifts arising from gas in the accretion disk moving toward us on one side and away from us on the opposite side of the disk (Smak 1969; Horne & Marsh 1986). This picture is confirmed in eclipsing systems, where eclipses of the blue-shifted peak occur earlier than those of the red-shifted peak. The shapes of the Doppler profile wings indicate that emission line surface brightnesses decrease as $`R^{-3/2}\propto \mathrm{\Omega }_{\mathrm{Kep}}`$, suggesting that the quiescent disk emission lines are powered by magnetic activity similar to that which powers the chromospheres of rotating stars, which scale as $`\mathrm{\Omega }_{\mathrm{rot}}`$ (Horne & Saar 1991).
There are, however, a large number of higher accretion rate systems in which the emission lines display broad single-peaked profiles. These include nearly edge-on systems whose eclipses indicate that optically thick disks are present. It is hard to understand how broad single-peaked lines can arise in the disk flows in these systems. The emission lines also have anomalous orbital kinematics – radial velocity curves significantly delayed relative to the white dwarf orbit. A common interpretation of the phase-shifted velocity curves is that the disk emission lines are affected by a broad S-wave component related to the impact of the gas stream. However, we will argue in this paper that the anomalous emission lines arise instead from shocks in a broad equatorial fan of gas that is being ejected from the system by a magnetic propeller anchored in the inner accretion disk.
This disk-anchored magnetic propeller model is motivated by recent breakthroughs in understanding the enigmatic system AE Aqr, in which a rapidly spinning white dwarf magnetosphere expels the gas stream out of the system before an accretion disk can form (Wynn, King & Horne 1997). It has recently been realized that an internal shock zone should form in the exit stream to produce violently flaring broad single-peaked emission lines at just the right place to account for the anomalous orbital kinematics (Welsh, Horne & Gomer 1998).
Encouraged by this success in understanding AE Aqr, we propose to extend the idea to disk-anchored magnetic propellers operating in all high accretion rate disk systems. These systems are afflicted by the “SW Sex syndrome”, a cluster of anomalies including single-peaked emission lines with skewed kinematics, V-shaped eclipses implying flat temperature-radius profiles, shallow offset line eclipses, and narrow low-ionization absorption lines at phase 0.5. Magnetic fields anchored in the Keplerian disk sweep forward and apply a boost that expels gas stream material flowing above the disk plane. This working hypothesis offers a framework on which we can hang all the SW Sex anomalies. The lesson for theorists is that magnetic links appear to be transporting energy and angular momentum from the inner disk to distant parts of the flow without associated viscous heating in the disk.
## 2 The SW Sex Syndrome
SW Sex is the prototype of a sub-class of nova-like (high accretion rate) cataclysmic variables (CVs) that display a range of peculiarities that do not seem to fit with the standard model of a cataclysmic variable in which a gas stream from the companion star feeds an accretion disk around a white dwarf. The same cluster of anomalous phenomena is in fact seen to a greater or lesser extent in most if not all CVs with high accretion rates. The SW Sex anomalies may be summarized as follows:
1) Anomalous emission-line kinematics (Young, Schneider & Shectman 1981; Still, Dhillon & Jones 1995). The emission lines have broad single-peaked profiles, even in eclipsing systems where lines from a Keplerian disk would produce two peaks. The radial velocity curves lag $`30^{\circ }`$ to $`70^{\circ }`$ behind the orbit of the white dwarf. The emission is centred at low velocities in the lower-left quadrant of the Doppler map, a location corresponding to no part of the binary system.
2) Anomalous emission-line eclipses (Young et al. 1981; Still et al. 1995). The emission-line eclipses are shallow and early compared with continuum eclipses. Red-shifted emission remains visible at mid eclipse. There is evidence for slow prograde rotation, eclipses being early on the blue side and late on the red side of the line profile.
3) Low-ionization absorption lines (Balmer, OI $`\lambda 7773`$), strongest around phase 0.5 when the secondary star is behind the disk (Young et al. 1981; Szkody & Piche 1990; Smith et al. 1993). The absorptions have widths and blue-shifts of a few hundred km s<sup>-1</sup>.
4) V-shaped continuum eclipses imply a temperature deficit in the inner disk, giving $`T(R)`$ profiles flatter than the $`T\propto R^{-3/4}`$ profile of steady-state viscous disks (Rutten, van Paradijs & Tinbergen 1992).
Most of these anomalies were first described in LX Ser by Young et al. (1981), and by 1992 they were enshrined as the defining characteristics of a sub-class of CVs named for the prototype SW Sex (Thorstensen et al. 1992). The original classification of “SW Sex stars” required eclipses and 3h$`<P_{orb}<4`$h, but longer period systems like BT Mon ($`P_{orb}=8`$h) now qualify (Smith, Dhillon & Marsh 1998), and systems like WX Ari showing the emission-line anomalies but lacking absorptions and/or eclipses are probably low-inclination cases (Hellier, Ringwald & Robinson 1994). While particularly strong in systems with 3h$`<P_{orb}<4`$h, this “SW Sex syndrome” can be recognized in most if not all CVs in high accretion states.
The SW Sex stars have evaded satisfactory explanation for over 15 years. Proposals include accretion disk winds (Honeycutt, Schlegel & Kaitchuck 1986; Murray & Chiang 1996; 1997), magnetic white dwarfs disrupting the inner disk (Williams 1989; Wood, Abbott & Shafter 1992), and gas streams overflowing the disk surface (Shafter, Hessman & Zhang 1988; Szkody & Piche 1990; Hellier & Robinson 1994). The gas stream overflow model has been developed to the stage of predicting trailed spectrograms that do bear resemblance to those observed (Hellier 1998). While each idea explains part of the phenomenology, none seems to be entirely satisfactory. The syndrome is so widespread that we must admit that some important element is missing in our standard picture of the accretion flows in CVs.
## 3 The Magnetic Propeller in AE Aqr
When facing a difficult puzzle, one helpful strategy is to examine extreme cases in search of clues. AE Aqr is an extreme CV in many respects. For example, it is detected over a spectacular range of energies, from radio (Bastian, Dulk & Chanmugam 1988; Abada-Simon et al. 1993) to perhaps TeV gamma rays (Meintjes 1994). Among its problematic behaviours are broad single-peaked emission lines that lag behind the white dwarf orbit by some $`70^{\circ }`$ (Welsh, Horne & Gomer 1993) – a classic SW Sex symptom. We suggest that AE Aqr is the key to understanding the SW Sex syndrome.
All CVs flicker, with typical amplitudes of 5-30% (Bruch 1992), but AE Aqr flickers spectacularly (Patterson 1979; van Paradijs, Kraakman & van Amerongen 1989; Bruch 1991; Eracleous & Horne 1996). Its line and continuum fluxes can vary by factors of 2 to 3, with 10-min rise and decline times. Transitions between quiet and active states occur on timescales of hours. HST spectra reveal nearly synchronous flaring in the continuum and in a host of high and low ionization permitted and semi-forbidden emission lines (Eracleous & Horne 1996). The wide ionization and density ranges suggest that shocks power the lines. All the emission lines share the anomalous orbital kinematics.
AE Aqr harbours the fastest-rotating magnetic white dwarf. Coherent X-ray, UV, and optical oscillations indicate that the spin period is $`P_{spin}=33`$ s, with 2 peaks per cycle arising from hot spots on opposite sides of the white dwarf, suggesting accretion onto opposite magnetic poles. The oscillation amplitudes are $`0.1`$–$`1`$% in the optical (Patterson 1979), but rise to 40% in the UV (Eracleous et al. 1994). Remarkably, the UV oscillation amplitude is independent of the flaring state, indicating that the flares are not accretion events. HST pulse timing accurately traces the white dwarf orbit. Optical pulse timing over a 13-year baseline shows that $`P_{spin}`$ is increasing with $`P/\dot{P}\approx 2\times 10^7`$ yr (de Jager et al. 1994). Rotational energy extraction from the white dwarf at $`I\omega \dot{\omega }\approx 6\times 10^{33}`$ erg s<sup>-1</sup> exceeds the observed luminosity $`\nu L_\nu \approx 10^{32}`$ erg s<sup>-1</sup>, thus AE Aqr can be powered by rotation rather than accretion.
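The quoted spin-down power can be verified with a representative white dwarf moment of inertia, $`I\sim 10^{50}`$ g cm<sup>2</sup> (an assumed fiducial value, not a measurement): $`I\omega \dot{\omega }=I\omega ^2\dot{P}/P\approx 10^{50}\times (2\pi /33\mathrm{s})^2/(6.3\times 10^{14}\mathrm{s})\approx 6\times 10^{33}`$ erg s<sup>-1</sup>, where $`P/\dot{P}=2\times 10^7`$ yr $`=6.3\times 10^{14}`$ s.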
What happens when the gas stream encounters this rapidly spinning magnetosphere? In AM Her stars ($`P_{spin}=P_{orb}`$), the gas stream is gradually stripped as material becomes threaded onto and slides down along field lines to accretion shocks near the magnetic poles. In AE Aqr ($`P_{spin}\ll P_{orb}`$), a drizzle of magnetic accretion produces the white dwarf hot spots that give rise to the spin pulsations, but the magnetosphere rotates so rapidly that much of the threaded gas is flung outward to the crests of magnetic loops. The light cylinder, where co-rotation requires light speed, is comparable in size to the separation of the binary system. Particles trapped in the rapidly rotating magnetosphere, on field lines that are shaken each time they sweep past the companion star and gas stream, get pumped up to relativistic velocities, accounting for the observed radio synchrotron emission, and perhaps the TeV gamma rays (Kuijpers et al. 1997).
Rapid rotation has another important effect: high ram pressure in the rotating frame of the magnetosphere reduces the stripping rate, allowing the flow to remain largely dia-magnetic, slipping between the rotating magnetic field lines. So long as ram pressure exceeds the external magnetic pressure, the flow traces a roughly ballistic trajectory while gradually being dragged forward toward co-rotation with the magnetosphere (King 1993; Wynn & King 1995). In AE Aqr, the rapidly spinning magnetosphere acts as a “magnetic propeller”, boosting gas to escape velocity and ejecting it out of the binary system (Wynn, King & Horne 1997; left-hand side of Fig. 1).
On a Doppler map (right-hand side of Fig. 1), the flow of dia-magnetic blobs initially traces a ballistic trajectory, moving leftward (toward negative $`V_X`$) from the L1 point. The blobs then circulate counter-clockwise around a high-velocity loop in the lower-left quadrant, reaching a maximum velocity of $`\sim 1000`$ km s<sup>-1</sup> as they fly by the white dwarf. Finally, the blobs decelerate to a terminal velocity of $`200`$–$`500`$ km s<sup>-1</sup>, depending on blob properties, as they climb out of the potential well. Here the trajectories pass right through the region required to account for the peculiar kinematics of AE Aqr’s emission lines. After escaping from the binary, blobs coast outward at constant velocity, and hence circulate clockwise around the origin of the Doppler map at their terminal velocities. This is because the Doppler map is in the rotating frame of the binary.
The exit stream of the magnetic propeller model places gas at low-velocity in the lower-left quadrant of the Doppler map, as required to account for the orbital kinematics of AE Aqr’s emission lines. No part of the binary system moves with this velocity. While this is a great success, it remains a puzzle why the flaring line and continuum emission do not arise near closest approach, where interaction with the magnetosphere is strongest, but rather several hours later in the decelerating exit stream somewhat before reaching the terminal velocity (Fig. 1).
## 4 The Flare Mechanism in AE Aqr
Why does the magnetic propeller flow produce wildly flaring emission lines, and why do they occur on the way out rather than at closest approach? The first step toward answering this question is to note that the trajectories calculated for diamagnetic blobs of various sizes and densities indicate that the magnetic propeller acts as a “blob spectrometer”. Small dense blobs with low drag coefficients penetrate more deeply into the magnetosphere and emerge at larger azimuths than more “fluffy” blobs that are more readily repelled by the spinning magnetosphere. Thus a train of incoming blobs diverse in size and density is segregated in the magnetosphere to emerge as a broad fan sorted by the effective drag coefficient. The second step is to note that fluffy blobs reach a higher terminal velocity, and can therefore overtake and collide with compact blobs that passed closest approach at an earlier time. The crossing points of blob trajectories (Fig 1) indicate that blob-blob collisions occur in an arc-shaped zone of the exit stream well outside the Roche lobe. Each blob-blob collision spawns a hot expanding fireball representing a single flare. If the blob rate is low, most blobs pass quietly through the system without suffering a major collision (quiet states), but above some threshold blob-blob collisions become a frequent occurrence (flaring states). Velocity vectors in the blob-blob collision zone nicely match the observed emission-line kinematics (Welsh et al. 1998). Velocity differences between colliding blobs are $`\mathrm{\Delta }V\lesssim 300`$ km s<sup>-1</sup>, thus the fireballs may initially reach temperatures up to $`10^7`$ K. Rapidly expanding and adiabatically cooling fireballs emit a flare of continuum and line emission, with decreasing density and ionization. The situation may be not unlike a supernova explosion, albeit with lower velocity, smaller size, and shorter timescale. In this way the magnetic propeller model provides a natural mechanism to explain the wild flaring behaviour and skewed orbital kinematics of the emission lines in AE Aqr.
It seems that AE Aqr’s rapidly spinning magnetized white dwarf prevents formation of a disk by driving a rotation-powered outflow. Energy and angular momentum are extracted from the white dwarf, conveyed by magnetic field lines to the gas stream, and expelled from the system. Remarkably, the observations show that the magnetic boost is a gentle one, producing little dissipation in the region of closest approach. Most if not all of AE Aqr’s exotic behaviour can be understood in the framework of this magnetic propeller model.
The processes we observe in AE Aqr and other magnetic cataclysmic variables should let us understand magnetic propellers well enough to identify their observational signatures and physical consequences in other types of systems. Magnetic propellers may represent a generic non-local dissipationless angular momentum extraction mechanism that could have a huge impact on our understanding of accretion flows in general. This prospect provides major motivation for studying magnetic accretors in detail.
## 5 Disk-Anchored Magnetic Propellers
The magnetic propeller in AE Aqr offers a natural mechanism that produces highly variable emission lines with broad single-peaked velocity profiles and orbital kinematics lagging behind the white dwarf orbit. We witness these same anomalies in the emission lines of SW Sex stars and in fact most nova-like (high accretion rate) CVs. Could magnetic propellers be operating in these systems too?
Imagine tipping AE Aqr over until the secondary star eclipses the white dwarf. We would then see a shallow partial eclipse of the exit stream occurring a bit earlier than the deep eclipse of the white dwarf. Red-shifted emission would remain visible at mid eclipse. Low-ionization blue-shifted absorption lines would be seen over a range of phases around 0.5, when we view the white dwarf through the exit stream. Thus it seems that most of the SW Sex anomalies are reproduced at least qualitatively by the magnetic propeller model. We need only add a disk to produce deep continuum eclipses.
We therefore propose that magnetic propellers can be anchored in accretion disks as well as white dwarfs. The gas stream from the companion star deposits most of its material when it crashes into the disk’s rim, but fringes of the gas stream clear the rim and flow inward on ballistic trajectories above and below the disk surface (Lubow 1989). This gas interacts with magnetic fields anchored in the disk below. Closed loops near the disk surface drag the gas toward the local Kepler speed, encouraging it to crash down on the disk surface. But interactions higher above the disk plane increasingly involve open field lines and large loops anchored at smaller radii. These magnetic structures, bending out and back as their footpoints circle the inner disk, drag the gas forward, up, and out. There may be a watershed below which the gas falls onto the disk surface and above which the gas is boosted up to escape velocity and exits the system in a manner similar to the flow in AE Aqr. We propose that this general scenario accounts for the anomalous emission-line behaviour and the narrow absorptions at phase 0.5 in the SW Sex stars.
Disk-anchored magnetic propellers launch an equatorial outflow by extracting energy and angular momentum from the inner disk where the field lines are anchored. Removal of angular momentum from the inner disk drives accretion without depositing heat. This lowers inner disk temperatures below the $`T\propto R^{-3/4}`$ law of steady-state viscous disks, accounting for the flatter $`T(R)`$ profiles inferred from the V-shaped eclipses seen in SW Sex stars.
Our present heuristic sketch of a model is strongly motivated by the similarity of the anomalous phenomena observed in AE Aqr, SW Sex, and to some extent in all nova-like (high accretion rate) CVs. This new synthesis seems quite promising, but needs to be tested by the development of more detailed predictions and comparisons with observations. These tests include computing trailed spectrograms, Doppler maps, and eclipse effects for comparison with observations of SW Sex stars. The time-dependent spectra of the fireballs should be computed for comparison with the observed lightcurves and ultraviolet emission-line spectrum of the flares in AE Aqr. Such tests will be a focus for future work.
The main implication for accretion theorists may be that energy and angular momentum is being transported from disks to outflows via long magnetic links without associated dissipation in the disk. The SW Sex syndrome is the observational signature indicating that this non-local transport mechanism is occurring in CVs. Similar effects may be expected in AGN and protostellar disks, whenever incoming gas flows above the disk surface.
There will certainly be consequences for dwarf nova outburst models, and for the long-term evolution of mass-transfer binary stars. For example, consider the 2-3 hour gap in the orbital period distribution of CVs. Angular momentum losses due to gravitational radiation and the companion star’s magnetic wind drive CV evolution toward shorter orbit periods. Magnetic wind losses are thought to disappear at short orbital periods ($`P_{\mathrm{orb}}<3`$h) because the dynamo generating the star’s magnetic field may shut down when the star, stripped down to $`0.3M_{\odot }`$, becomes fully convective. Mass transfer then ceases when the star loses contact with its Roche lobe at $`P_{\mathrm{orb}}\approx 3`$h, and resumes when contact is re-established at $`P_{\mathrm{orb}}\approx 2`$h. This is an attractive scenario, but available evidence shows no decrease in the magnetic activity of single stars below this mass (e.g. Hawley, Gizis & Reid 1997). If CV evolution in long-period systems is driven instead by angular momentum losses in magnetic propeller outflows, the period gap might then arise from a cessation of this effect. This might occur at a critical mass ratio, e.g. when the 3:1 or other resonances enter the disk, causing the outer rim to thicken enough to prevent stream overflow. Such speculations as these may be worthy of further investigation.
###### Acknowledgements.
I am grateful for discussions on these topics with many friends and collaborators including Vik Dhillon, Martin Still, Danny Steeghs, Coel Hellier, Tom Marsh, Graham Wynn, Andy King, and Mario Livio.
# Kondo effect in a quantum critical ferromagnet
## Abstract
We study the Heisenberg ferromagnetic spin chain coupled with a boundary impurity. Via Bethe ansatz solution, it is found that (i) for $`J>0`$, the impurity spin behaves as a diamagnetic center and is completely screened by $`2S`$ bulk spins in the ground state, no matter how large the impurity spin is; (ii) the specific heat of the local composite (impurity plus $`2S`$ bulk spins which form a bound state with it) shows a simple power law $`C_{loc}\sim T^{\frac{3}{2}}`$; (iii) for $`J<0`$, the impurity is locked into the critical behavior of the bulk. Possible phenomena in higher dimensions are discussed.
The Kondo problem, or the magnetic impurity problem in an electron host, plays a very important role in modern condensed matter physics. It represents a generic non-perturbative example of the strongly correlated many-body systems. Recently, with the development of research on some low-dimensional systems and the observation of unusual non-Fermi-liquid behavior in some heavy fermion compounds, the interest in this problem has been largely renewed. The multi-channel Kondo problem provided the first example of impurity systems which show non-Fermi-liquid behavior at low temperatures. In a Luttinger liquid, the impurity behaves rather differently from that in a Fermi liquid and may interpolate between a local Fermi liquid and some non-Fermi liquid. Some new quantum critical phenomena have also been predicted in some integrable models. Generally speaking, these new findings indicate that the quantum impurity models renormalize to critical points corresponding to conformally invariant boundary conditions. Another important advance is the study of the Kondo problem in Fermi systems with a pseudogap, i.e., the density of states $`\rho (ϵ)`$ is power-law dependent on the energy, $`\rho (ϵ)\sim ϵ^r`$. With renormalization group (RG) analysis, Withoff and Fradkin showed that there is a critical value $`J_c`$ for the Kondo coupling constant $`J`$. For $`J>J_c`$, the Kondo effect occurs at low temperatures, while for $`J<J_c`$, the impurity decouples from the host. We note that all the quantum critical behaviors mentioned above only occur for $`T\to 0`$ and therefore fall into the general category of quantum phase transitions.
In an earlier publication, Larkin and Mel’nikov studied the Kondo effect in an almost ferromagnetic metal. With the traditional perturbation theory they showed that the impurity susceptibility is of almost Curie type with logarithmic corrections at intermediately low temperatures. However, the critical behavior of a Kondo impurity in a quantum critical ferromagnet has never been touched. The main difficulty in approaching this problem is that almost all perturbation techniques fail in the critical regime, so exact results are called for. As discussed in some recent works, the critical behavior of the impurity strongly depends on the host properties and seems to be non-universal. A typical quantum critical ferromagnet is the Heisenberg system in reduced dimensions ($`d\le 2`$). These systems have long-range-ordered ground states but are disordered at any finite temperature due to the strong quantum fluctuations. In this paper, we study the critical behavior of an impurity spin coupled with a Heisenberg ferromagnetic chain. The model Hamiltonian we shall consider reads
$`H=-\frac{1}{2}\sum _{j=1}^{N-1}\vec{\sigma }_j\cdot \vec{\sigma }_{j+1}+J\vec{\sigma }_1\cdot \vec{S},`$ (1)
where $`\vec{\sigma }_j`$ are the Pauli matrices on site $`j`$; $`N`$ is the length of the chain; $`\vec{S}`$ is the impurity spin sited at one end of the chain; $`J`$ is a real constant which describes the Kondo coupling between the impurity and the host. The problem is interesting because (i) the model is not conformally invariant due to the nonlinear dispersion relation of the low-lying excitations, $`ϵ(k)\sim k^2`$, and $`\rho (ϵ)\sim ϵ^{-\frac{1}{2}}`$, and represents a typical quantum critical system beyond the universality of the conventional Luttinger liquid; (ii) the Hamiltonian is very simple (without any superfluous term) and allows exact solution via algebraic Bethe ansatz. In fact, most known methods developed for the impurity problem in a Luttinger liquid cannot be used for the present system due to the strong quantum fluctuations.
Let us first summarize the solution of (1). Define the Lax operator $`L_{j\tau }(\lambda )\equiv \lambda +\frac{i}{2}(1+\vec{\sigma }_j\cdot \vec{\tau })`$, where $`\vec{\tau }`$ are the Pauli matrices acting on the auxiliary space and $`\lambda `$ is the so-called spectral parameter. For the impurity, we define $`L_{0\tau }(\lambda )\equiv \lambda +i(\frac{1}{2}+\vec{S}\cdot \vec{\tau })`$. Obviously, $`L_{j\tau }`$ and $`L_{0\tau }`$ satisfy the Yang-Baxter equation (YBE). It can be easily shown that the doubled-monodromy matrix
$`T_\tau (\lambda )\equiv L_{N\tau }(\lambda )\cdots L_{1\tau }(\lambda )L_{0\tau }(\lambda -ic)L_{0\tau }(\lambda +ic)L_{1\tau }(\lambda )\cdots L_{N\tau }(\lambda ),`$ (2)
satisfies the reflection equation
$`L_{\tau \tau ^{\prime }}(\lambda -\mu )T_\tau (\lambda )L_{\tau \tau ^{\prime }}(\lambda +\mu )T_{\tau ^{\prime }}(\mu )=T_{\tau ^{\prime }}(\mu )L_{\tau \tau ^{\prime }}(\lambda +\mu )T_\tau (\lambda )L_{\tau \tau ^{\prime }}(\lambda -\mu ).`$ (3)
From the above equation we can show that the transfer matrices $`\theta (\lambda )\equiv \mathrm{Tr}_\tau T_\tau (\lambda )`$ with different spectral parameters are commutative, $`[\theta (\lambda ),\theta (\mu )]=0`$. Therefore, $`\theta (\lambda )`$ serves as a generator of a variety of conserved quantities. The Hamiltonian Eq.(1) is given by
$`H=\frac{i}{2}J(-1)^N\frac{\partial }{\partial \lambda }\theta (\lambda )|_{\lambda =0}+\frac{1}{2}(N+1-J),`$ (4)
with $`J=1/[c^2-(S+1/2)^2]`$. Following the standard method we obtain the Bethe ansatz equation (BAE)
$`\left(\frac{\lambda _j-\frac{i}{2}}{\lambda _j+\frac{i}{2}}\right)^{2N}\frac{\lambda _j-i(S+c)}{\lambda _j+i(S+c)}\frac{\lambda _j-i(S-c)}{\lambda _j+i(S-c)}=\prod _{l\ne j}^{M}\frac{\lambda _j-\lambda _l-i}{\lambda _j-\lambda _l+i}\frac{\lambda _j+\lambda _l-i}{\lambda _j+\lambda _l+i},`$ (5)
with the eigenvalue of Eq.(1) as
$`E(\{\lambda _j\})=\sum _{j=1}^{M}\frac{1}{\lambda _j^2+\frac{1}{4}}-\frac{1}{2}(N-1)+JS,`$ (6)
where $`\lambda _j`$ represent the rapidities of the magnons and $`M`$ the number of the magnons.
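As a quick check on Eq.(6), the fully polarized reference state with $`M=0`$ magnons has $`E_0=-\frac{1}{2}(N-1)+JS`$, which is just the direct expectation value of Eq.(1): the $`N-1`$ aligned bulk bonds contribute $`-\frac{1}{2}(N-1)`$ and the boundary coupling contributes $`JS`$.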
Ground state. In the thermodynamic limit, the bulk solutions of $`\lambda _j`$ are described by the so-called $`n`$-strings. However, due to the presence of the impurity, some boundary bound states may exist for $`c>S`$, which are usually called the $`nk`$-strings:
$`\lambda _b^m=i(c-S)+im,\qquad m=k,k+1,\dots ,n.`$ (7)
In the ground state, only some $`n0`$-strings may survive. We call them boundary $`n`$-strings. In our case, $`n`$ also has an upper bound, $`n\le 2S-1`$, since $`\lambda _j=\pm i(c+S)`$ are forbidden as we can see from Eq.(5). No bulk strings can exist at zero temperature since they carry positive energy. Boundary bound states can exist only for $`c>S+1/2`$ (antiferromagnetic Kondo coupling) because in this case the boundary $`n`$-strings carry negative energy. For zero external magnetic field, the most stable boundary string has length $`2S`$ with the energy $`ϵ_{2S}=2S/[S^2-(c-1/2)^2]`$. Therefore, the impurity contributes a magnetization of $`-S`$. Such a phenomenon can be understood in a simple picture. Due to the antiferromagnetic coupling between the impurity and the bulk, $`2S`$ bulk spins are swallowed by the impurity at zero temperature to form a $`(2S+1)`$-body singlet. This singlet does not contribute to the magnetization of the ground state. In this sense, the impurity is completely screened, no matter how large the impurity moment is. Such a situation is very different from that of the conventional Kondo problem, where the impurity moment can only be partially screened by the host when $`S>S^{\prime }`$ ($`S^{\prime }`$ being the spin of the host particles). This difference is certainly due to the different properties of the hosts. In the antiferromagnetic spin chain or a normal metal, the spin correlation of the bulk is of antiferromagnetic type, which prevents more than one bulk spin or electron from screening the impurity. However, in a ferromagnetic spin chain, the bulk correlation is ferromagnetic, which allows, and in fact encourages, several bulk spins to form a larger moment to screen the impurity. The local singlet is nothing but a bound state of $`2S`$ magnons. The boundary string may be broken by the external field. In fact, there are $`2S`$ critical fields
$`H_c^n=\frac{1}{n}\left[\frac{2S}{(c-\frac{1}{2})^2-S^2}-\frac{2S-n}{(c-\frac{n+1}{2})^2-(S-\frac{n}{2})^2}\right],\qquad n=1,2,\dots ,2S.`$ (8)
When $`H_c^n<H<H_c^{n+1}`$, only a boundary $`(2S-n)`$-string survives in the ground state, and when $`H>H_c^{2S}`$, any boundary string becomes unstable. Notice that at $`H=H_c^n`$, the ground-state magnetization has a jump $`\delta M=1`$, which corresponds to some type of quantum phase transition. The finite value of $`H_c^1`$ indicates that the zero-temperature susceptibility of the local singlet is exactly zero.
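Both the bound-state energy and the critical fields quoted above have a transparent origin. The energy of the boundary string of length $`2S`$ follows by summing the magnon energies of Eq.(6) along the string, which telescopes:

$$ϵ_{2S}=\sum _{m=0}^{2S-1}\frac{1}{[i(c-S+m)]^2+\frac{1}{4}}=-\sum _{m=0}^{2S-1}\left[\frac{1}{c-S+m-\frac{1}{2}}-\frac{1}{c-S+m+\frac{1}{2}}\right]=\frac{2S}{S^2-(c-\frac{1}{2})^2},$$

which is negative precisely for $`c>S+1/2`$. Each critical field then expresses a simple energy balance: breaking $`n`$ magnons off the boundary string costs the binding-energy difference $`ϵ_{2S-n}-ϵ_{2S}`$ while gaining the Zeeman energy $`nH`$, so that $`H_c^n=[ϵ_{2S-n}-ϵ_{2S}]/n`$, in agreement with Eq.(8).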
Thermal BAE. Since we are interested mostly in the critical behavior, we consider $`T,H\ll H_c^1`$ and $`J>0`$ in the following text. In this case, any excitations breaking the boundary string can plausibly be omitted, owing to the energy gap associated with them. With the standard thermal Bethe ansatz, we derive the thermal BAE as
$`\mathrm{ln}(1+\eta _n)=\frac{2\pi a_n(\lambda )+nH}{T}+\sum _{m=1}^{\infty }𝐀_{mn}\mathrm{ln}[1+\eta _m^{-1}(\lambda )],`$ (9)
or equivalently,
$`\mathrm{ln}\eta _1(\lambda )={\displaystyle \frac{\pi }{T}}g(\lambda )+𝐆\mathrm{ln}[1+\eta _2(\lambda )],`$ (10)
$`\mathrm{ln}\eta _n(\lambda )=𝐆\{\mathrm{ln}[1+\eta _{n+1}(\lambda )]+\mathrm{ln}[1+\eta _{n-1}(\lambda )]\},\qquad n>1,`$ (11)
$`\lim _{n\to \infty }\frac{\mathrm{ln}\eta _n}{n}=\frac{H}{T}\equiv 2x_0,`$ (12)
where $`a_n(\lambda )=\frac{n}{2\pi [\lambda ^2+(n/2)^2]}`$, $`𝐀_{mn}=[m+n]+2[m+n-2]+\dots +2[|m-n|+2]+[|m-n|]`$; $`g(\lambda )=\frac{1}{2\mathrm{cosh}(\pi \lambda )}`$; $`\eta _n(\lambda )`$ are some functions which determine the free energy of the system; and $`[n]`$ and $`𝐆`$ are integral operators with the kernels $`a_n(\lambda )`$ and $`g(\lambda )`$, respectively. The free energy is given by
$`F=F_{bulk}+F_{imp},`$ (13)
$`F_{bulk}=F_0-(N+\frac{1}{2})T\int g(\lambda )\{\mathrm{ln}[1+\eta _1(\lambda )]-\frac{2\pi a_1(\lambda )+H}{T}\}d\lambda ,`$ (14)
$`F_{imp}=-\frac{1}{2}T\sum _{n=1}^{\infty }\int \varphi _n^{\prime }(\lambda )\mathrm{ln}[1+\eta _n^{-1}(\lambda )]d\lambda ,`$ (15)
where $`a_{n,m}(\lambda )=\sum _{l=1}^{\mathrm{min}(m,n)}a_{n+m+1-2l}(\lambda )`$; $`\varphi _n^{\prime }(\lambda )=a_{n,2S}(\lambda -ic+i)+a_{n,2S}(\lambda +ic-i)`$; $`F_0`$ is the ground state energy; $`F_{bulk}`$ and $`F_{imp}`$ are the free energies of the bulk (including the bare boundary) and the impurity, respectively. Notice that Eq.(9) and Eq.(10) are more difficult to handle than those of the antiferromagnetic chain, since here all $`\eta _n`$ diverge for $`T\to 0`$. These equations were solved numerically in studying the critical behavior of the ferromagnetic Heisenberg chain. In addition, Schlottmann gave an analytical result based on a simple correlation-length approximation, and the result coincides with the numerical ones very well. As we can see from Eq.(9) and Eq.(10), when $`T\to 0`$, $`\eta _n\to \infty `$. To arrive at the asymptotic solutions of $`\eta _n(\lambda )`$, we make the ansatz $`\eta _n(\lambda )=\mathrm{exp}[2\pi a_n(\lambda )/T]\varphi _n`$. Substituting this ansatz into Eq.(10) we readily obtain $`\varphi _n\approx 1`$ for finite $`n`$ and $`\lambda `$. Therefore,
$`\eta _n\approx \mathrm{exp}[\frac{2\pi a_n(\lambda )}{T}],\qquad T\to 0.`$ (16)
On the other hand, when $`\lambda \to \infty `$ or $`n\to \infty `$, the driving term in Eq.(10) tends to zero. This gives another asymptotic solution of $`\eta _n`$ for very large $`\lambda `$ or $`n`$
$`\eta _n=\frac{\mathrm{sinh}^2[(n+1)x_0]}{\mathrm{sinh}^2x_0}-1+O(\frac{1}{T}e^{-\pi |\lambda |}),`$ (17)
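In the zero-field limit $`x_0\to 0`$, this reduces to the free-spin result $`1+\eta _n\to (n+1)^2`$, so that $`\mathrm{ln}[1+\eta _m^{-1}]=\mathrm{ln}[1+\frac{1}{m(m+2)}]`$, which is precisely the factor that appears in the sums below.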
For intermediate $`\lambda `$ and $`n`$ we have a crossover regime. We call Eq.(12) the strong-coupling solution and Eq.(13) the weak-coupling solution. By equating them we obtain two types of crossover scales, $`\lambda _c(n)`$ for small $`n`$, and $`n_c(T)`$,
$`\lambda _c(n)\approx \left[\frac{n}{4T\mathrm{ln}(1+n)}\right]^{\frac{1}{2}},\qquad n_c(T)\approx \frac{1}{4T\mathrm{ln}(1+n_c)}\approx \frac{1}{4T\mathrm{ln}(1/T)},`$ (18)
which characterize the crossover between the strong-coupling regime and the weak-coupling regime. Notice that the strong-coupling solution gives the correct ground-state energy, while the low-temperature thermodynamics is mainly dominated by the weak-coupling solution. With such an approximation, the recursion for $`\eta _n`$ can be performed by substituting the asymptotic solutions into the right-hand side of Eq.(9), and therefore the leading-order correction upon the asymptotic solutions can be obtained. In the following recursion process, we adopt the strong-coupling solution in the region $`\lambda <\lambda _c`$ and $`n<n_c`$, while the weak-coupling solution is adopted in other cases. This corresponds to an abrupt crossover, which does not affect the temperature dependence of the thermodynamic quantities in leading order but only their amplitudes. For convenience, we define $`\zeta _n(\lambda )\equiv \mathrm{ln}[1+\eta _n(\lambda )]-[2\pi a_n(\lambda )+nH]/T`$, which are responsible for the temperature-dependent part of the free energy.
Low-temperature susceptibility of the impurity. For convenience, we consider the case of integer $`2c`$. Taking the boundary string into account, the free energy of the impurity can be rewritten as
$`F_{imp}=-\frac{1}{2}T\int g(\lambda )[\zeta _{2c+2S-2}(\lambda )-\mathrm{sgn}(2c-2S-2)\zeta _{|2c-2S-2|}(\lambda )]d\lambda .`$ (19)
Substituting the asymptotic solutions Eq.(12) and Eq.(13) into Eq.(9) and omitting the exponentially small terms, we obtain
$`\zeta _n(\lambda )\approx \sum _{m=1}^{n_c}\{\mathrm{ln}[1+\frac{1}{m(m+2)}]-\frac{2}{3}x_0^2\}\left[\int _{\lambda _c(m)}^{\infty }+\int _{-\infty }^{-\lambda _c(m)}\right]A_{mn}(\lambda -\lambda ^{\prime })d\lambda ^{\prime }`$ (20)
$`+2n_c\mathrm{ln}{\displaystyle \frac{\mathrm{sinh}(1+n_c)x_0}{\mathrm{sinh}n_cx_0}},`$ (21)
where $`A_{mn}`$ is the kernel of $`𝐀_{mn}`$. For small $`n\ll n_c`$, up to the leading order, we find that the $`x_0^2`$ term of $`\zeta _n(\lambda )`$ is exactly $`n`$ times that of $`\zeta _1(\lambda )`$. From Eq.(15) we easily derive
$`\chi _{imp}=-2S\chi _{bulk}+\text{subleading order terms},`$ (22)
where $`\chi _{bulk}\sim T^{-2}\mathrm{ln}^{-1}(1/T)`$ is the per-site susceptibility of the bulk. Very interestingly, the impurity contributes a negative susceptibility, which indicates a novel Kondo diamagnetic effect. That means the Kondo coupling always dominates over the “molecular field” generated by the bulk ferromagnetic fluctuations. Notice that Eq.(17) is only the contribution of the bare impurity. If we take the screening cloud ($`2S`$ bulk spins which form the bound state with the impurity) into account, we find that the total susceptibility of the local singlet is exactly canceled in the leading order. That means the polarization effect of the local bound state only occurs in some subleading order, which indicates a strong-coupling fixed point $`J^{\ast }=\infty `$. In fact, the local singlet is far less sensitive to a small external magnetic field, as we discussed for the ground state. When $`T\to 0`$, its susceptibility must tend to zero due to the binding energy as shown in Eq.(8). We note that the present method is not reliable for deriving the total susceptibility of the local singlet, but the above picture must be true. The same conclusion can be reached for arbitrary $`J>0`$.
Specific heat of the local composite. In the framework of the local Fermi-liquid theory, the Kondo effect is nothing but the scattering effect of the remaining bulk particles ($`N-2S`$) off the local spin-singlet composite or, equivalently, the polarization effect of the local composite due to the scattering. Taking the boundary string into account, the BAE of the bulk modes can be rewritten as
$`\left(\frac{\lambda _j-\frac{i}{2}}{\lambda _j+\frac{i}{2}}\right)^{2(N-2S)}=e^{i\varphi (\lambda _j)}\prod _{l\ne j}^{M-2S}\frac{\lambda _j-\lambda _l-i}{\lambda _j-\lambda _l+i}\frac{\lambda _j+\lambda _l-i}{\lambda _j+\lambda _l+i},`$ (23)
$`e^{i\varphi (\lambda )}=\frac{\lambda -i(c+S-1)}{\lambda +i(c+S-1)}\frac{\lambda +i(c-S-1)}{\lambda -i(c-S-1)}\left(\frac{\lambda +\frac{i}{2}}{\lambda -\frac{i}{2}}\right)^{4S},`$ (24)
where $`\varphi (\lambda )`$ represents the phase shift of a spin wave scattering off the local composite (boundary bound state). When $`S=1/2`$ and $`c\to 1+0^+`$, i.e. $`J\to +\infty `$, $`\varphi (\lambda )=0`$, since the three factors above reduce to $`\frac{\lambda -i/2}{\lambda +i/2}`$, $`\frac{\lambda -i/2}{\lambda +i/2}`$ and $`\left(\frac{\lambda +i/2}{\lambda -i/2}\right)^2`$, whose product is unity. That means one of the bulk spins is completely frozen by the impurity and the system is reduced to an $`(N-1)`$-site ferromagnetic chain. When $`S=1/2`$ and $`1<c<3/2`$, only $`\zeta _1(\lambda )`$ is relevant and the free energy of the local composite reads
$`F_{loc}=-T\int g(\lambda )[\zeta _1(\lambda )-\frac{1}{2}\zeta _1(\lambda -ic+i)-\frac{1}{2}\zeta _1(\lambda +ic-i)]d\lambda .`$ (25)
When $`x_0=0`$, we have
$`\zeta _1(\lambda )-\frac{1}{2}\zeta _1(\lambda -ic+i)-\frac{1}{2}\zeta _1(\lambda +ic-i)`$ (26)
$`=16(c-1)^2T^{\frac{3}{2}}\frac{1}{\pi }\sum _{m=1}^{n_c}\mathrm{ln}[1+\frac{1}{m(m+2)}]m^{-\frac{1}{2}}\mathrm{ln}^{\frac{3}{2}}(1+m)+\cdots .`$ (27)
The sum in the above equation is convergent for large $`n_c`$. Therefore we can extend it to infinity, which gives the low-temperature specific heat of the local composite as
$`C_{loc}\sim T^{\frac{3}{2}}.`$ (28)
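The power can be read off directly: the combination in Eq.(21) scales as $`T^{\frac{3}{2}}`$, so the temperature-dependent part of $`F_{loc}`$ scales as $`T\times T^{\frac{3}{2}}=T^{\frac{5}{2}}`$, and hence $`C_{loc}=-T\partial ^2F_{loc}/\partial T^2\propto T^{\frac{3}{2}}`$.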
A similar conclusion can be reached for arbitrary $`S`$ and $`J>0`$. As long as the Kondo coupling is antiferromagnetic ($`c>S+1/2`$), the low-temperature specific heat of the local composite is described by Eq.(22). There is a slight difference between the $`S=1/2`$ case and the $`S>1/2`$ case. For the former, when $`J\to \infty `$ the local singlet is completely frozen and $`C_{loc}\to 0`$, while for the latter, even when $`J\to \infty `$, $`C_{loc}`$ takes a finite value. This can be understood in a simple picture. For $`S>1/2`$, more than one bulk spin will be trapped by the impurity. Even for $`J\to \infty `$, only one bulk spin (on the nearest-neighbor site) can be completely frozen and the rest is still polarizable via the bulk fluctuations. We note that the specific heat of the local singlet is much weaker than that of a Kondo impurity in a conventional metal. This again reveals the insensitivity of the local bound state to thermal activation. Though the anomalous power law Eq.(22) looks very much like that obtained in the Luttinger Kondo systems, they are induced by different mechanisms. In the present case, this anomaly is mainly due to the strong quantum fluctuation, while in the Luttinger liquid the anomaly is in fact induced by the tunneling effect of the conduction electrons through the impurity.
For the ferromagnetic coupling case ($`J<0`$), no boundary bound state exists. Even in the ground state, the impurity spin is completely polarized by the bulk spins. At finite temperature, the critical behavior is locked into that of the bulk ($`C_{imp}\sim T^{\frac{1}{2}}`$, $`\chi _{imp}\sim [T^2\mathrm{ln}(1/T)]^{-1}`$).
Similar phenomena may exist in higher dimensions. The antiferromagnetic Kondo coupling provides a local potential well for the magnons. Therefore, some bound states of the magnons may exist in the ground-state configuration, which indicates the formation of the local spin singlet. In this sense, the impurity behaves as a diamagnetic center. When $`J<0`$, the Kondo coupling provides a repulsive potential to the magnons and no local bound state can exist at low energy scales. The impurity must be locked into the bulk.
In conclusion, we have solved the model of a ferromagnetic Heisenberg chain coupled with a boundary impurity with arbitrary spin. It is found that as long as the Kondo coupling is antiferromagnetic, (i) the impurity spin behaves as a diamagnetic center and is completely screened by $`2S`$ bulk spins in the ground state, no matter how large the impurity spin is; (ii) the specific heat of the local composite (impurity plus the $`2S`$ bulk spins which form a bound state with it) shows a simple power law $`C_{loc}\sim T^{\frac{3}{2}}`$. We note that for a finite density of impurities, the local bound states are asymptotically extended to an impurity band of the magnons, which is very similar to that of a ferrimagnetic system. The critical behavior may be different from that of the single-impurity case. When the impurity density $`n_i\to 1/(2S)`$, we expect a spin singlet ground state.
YW acknowledges the financial supports of AvH-Stiftung and NSF of China. He is also indebted to the hospitality of Institut für Physik, Universität Augsburg.
# Theoretical Constraints for Observation of Superdeformed Bands in the Mass-60 Region
## Abstract
The lightest superdeformed nuclei of the mass-60 region are described using the Projected Shell Model. In contrast to the heaviest superdeformed nuclei where a coherent motion of nucleons often dominates the physics, it is found that alignment of $`g_{9/2}`$ proton and neutron pairs determines the high spin behavior for superdeformed rotational bands in this mass region. It is predicted that, due to the systematics of shell fillings along the even–even Zn isotopic chain, observation of a regular superdeformed yrast band sequence will be unlikely for certain nuclei in this mass region.
The mass-190 nuclei are the heaviest nuclei known where long-sequence rotational bands associated with the superdeformed (SD) minimum have been observed . In a recent systematic study using the Projected Shell Model (PSM) , it was concluded that the role of high-$`j`$ intruder orbitals is suppressed in these nuclei because of strong correlations in the quadrupole field and non-negligible correlations in the pair field . This conclusion was reinforced by the demonstration that quasiparticle additivity generally does not hold . Superdeformation in the mass-60 region was predicted some years ago and was recently observed . This is the lightest known region of SD rotational bands and these new bands show very different character from those of the mass-190 nuclei.
The mass-60 SD bands are associated with the highest rotational frequencies ($`\hbar \omega \sim 1.8`$ MeV) observed so far in SD nuclear systems; in contrast, in the SD mass-190 nuclei the maximum rotational frequency is typically 0.4 MeV. However, the magnitudes of deformation and pairing appear to be comparable in the mass-60 and mass-190 regions. We may expect that the single-particle level density near the $`N`$ = 30 gap is much lower than for heavier nuclei. Thus, there can be substantial fluctuations in shell fillings along an isotopic chain, which could give rise to drastic changes in the single particle and collective behavior. In addition, the maximum spin within the yrast and near-yrast bands in SD mass-60 nuclei is generally much lower than in heavier nuclei (SD bands terminate earlier ). These new features lead us to expect complex behavior in this region relative to previously studied SD nuclei.
So far, the SD bands of the mass-60 nuclei have been explained using mean-field theories (cranked relativistic mean-field theory and cranked Nilsson model , or cranked Skyrme-Hartree-Fock method ), with complete neglect of pairing correlations. These descriptions reproduce many of the gross features found in these nuclei. However, some interesting questions have not been discussed. For example, why does the observed SD band in <sup>62</sup>Zn consist of only a few $`\gamma `$-rays, while population of the SD band in neighboring <sup>60</sup>Zn extends to low spin states? And why has one not seen a SD yrast band at all in <sup>64</sup>Zn ? Can one predict spin values for these bands? Can one give a microscopic justification for the complete neglect of pairing in all calculations reported prior to this one?
In an investigation using the PSM, we have found a surprisingly good description of the SD behavior in this region and rather plausible answers to these questions in terms of band crossings and band interactions involving the $`g_{9/2}`$ intruder orbits. Because of their high angular momentum $`j`$, the fully paired $`g_{9/2}`$ quasiparticles in the ground state are most strongly affected by the Coriolis anti-pairing force when the nucleus rotates. The pairs break during the rotation and align their spins along the direction of the collective rotation. Viewed in terms of bands, a 2-quasiparticle band (or a band with a broken pair) which lies higher in energy at zero rotation becomes lower than the ground band at a certain angular momentum. Thus, band crossing is related to the microscopic alignment process, and can be linked to experimental observations. In this Letter, we concentrate our discussion on the important physical consequences of our interpretation, leaving general results of our investigation to be published elsewhere.
The PSM has been successfully applied to normally deformed nuclei as well as SD nuclei in various mass regions . For details of the PSM theory we refer to the review article of Hara and Sun and to the published computer code . In the PSM, the many-body wavefunction is a superposition of (angular momentum) projected multi-quasiparticle states,
$$|\psi _M^I\rangle =\underset{\kappa }{\sum }f_\kappa \widehat{P}_{MK_\kappa }^I|\phi _\kappa \rangle ,$$
(1)
where $`|\phi _\kappa \rangle `$ denotes basis states consisting of the quasiparticle (qp) vacuum, two quasi-neutron and -proton, and four qp states for even-even nuclei. The dimension of the qp basis in the present calculation is about 50. Since <sup>60</sup>Zn has a deformation of $`\beta _2=0.47`$ , the deformation of our basis is fixed at $`ϵ_2=0.45`$ for all nuclei calculated in this paper. Three full major shells ($`N=`$ 2, 3 and 4) are employed for neutrons and for protons (with a frozen <sup>16</sup>O core). For the Nilsson parameters $`\kappa `$ and $`\mu `$ we take the values of Ref. . Two-body interactions are then diagonalized in the basis generated using the above deformed mean field with angular momentum projection.
We use the usual separable-force Hamiltonian
$$\widehat{H}=\widehat{H_0}-\frac{\chi }{2}\underset{\mu }{\sum }\widehat{Q}_\mu ^{+}\widehat{Q}_\mu -G_M\widehat{P}^{+}\widehat{P}-G_Q\underset{\mu }{\sum }\widehat{P}_\mu ^{+}\widehat{P}_\mu $$
(2)
with spherical single-particle, residual quadrupole–quadrupole, monopole pairing, and quadrupole pairing terms. The strength $`\chi `$ of the quadrupole–quadrupole term is fixed self-consistently with the deformation, so it is not a true parameter . Lack of SD data precludes determining the pairing interaction strength from experimental odd–even mass differences in a systematic way, so we have used the prescription introduced in Ref. , which corresponds in this case to multiplying the monopole pairing strengths $`G_M`$ of Ref. by 0.90 to accommodate the relative increase in the size of the basis for the present calculation. This amount of reduction is consistent with the principles described in Ref. . For the quadrupole pairing interaction $`G_Q`$, a ratio $`C=G_Q/G_M`$ = 0.28 is used, the same value used in the heavy SD nuclei .
To illustrate the physics, the calculated band diagram (energy for the projected basis states in Eq. (1) as a function of spin; see Ref. for a further interpretation of this diagram) is shown in Fig. 1 for <sup>60</sup>Zn. The 2-qp states correspond to the group of bands starting around 4–5 MeV in energy (solid lines for neutrons and dotted lines for protons). Among these bands, we observe that two behave in a unique way: at the bandhead they lie a little higher than the other 2-qp bands, but rapidly decrease relative to the other bands as the system rotates. Thus, in the initial band crossing region these two bands are on average 2 MeV lower than the other 2-qp bands. The 2-qp states exhibiting this behavior correspond to the neutron and the proton 2-qp state coupled from $`K=-\frac{1}{2}`$ and $`K=\frac{3}{2}`$ particles in the $`g_{9/2}`$ orbital to a total $`K=1`$. The band corresponding to the $`g_{9/2}`$ proton 2-qp states crosses the ground band (the first band crossing) and becomes the lowest band beyond $`I=14`$.
A group of 4-qp states is illustrated in Fig. 1 as the set of dashed lines starting around 8–9 MeV. One of these, which is flat in the low-spin region, is constructed from the above-mentioned two $`g_{9/2}`$ pairs of neutrons and protons that are the most favorable 2-qp states in energy. The $`g_{9/2}`$ proton 2-qp band is crossed between $`I=18`$ and 20 by this 4-qp state (the second band crossing). Thus, the important multi-quasiparticle states that lie lowest in energy for the spin range to be considered are composed entirely from $`g_{9/2}`$ orbitals and we may expect that quasiparticle states from other orbitals (e.g., $`f_{7/2}`$ or $`p_{3/2}`$) will play a less important role near the yrast line.
From the preceding discussion, we conclude that high spin physics near the yrast line in the SD even–even, mass-60 nuclei should be governed by crossings and interactions between bands built upon neutron and proton $`g_{9/2}`$ quasiparticles. Because the single-particle state density is low, we may further expect the influence of band crossings and interactions to fluctuate drastically along isotopic chains. On the other hand, states built upon quasiparticles from other orbitals occur at much higher energies. They can contribute to the collective quantities (e.g. the collective portion of the angular momentum and the total electric quadrupole moment), but not strongly to quantities dominated by the quasiparticle properties.
We show the calculated energy spectra in terms of the transition energies $`E_\gamma `$ in Fig. 2 and dynamical moments of inertia $`\mathcal{J}^{(2)}`$ in Fig. 3 for the even–even isotopic chain <sup>60-66</sup>Zn. Comparisons with experimental data are shown where data are available. For <sup>60</sup>Zn, both $`E_\gamma `$ and $`\mathcal{J}^{(2)}`$ agree with data reasonably well (the data peak at $`I=20`$, while the calculated peak is at $`I=18`$). The SD band in <sup>60</sup>Zn is linked experimentally to the known low-lying states , so the spin of this band is known. Thus this agreement supports the choices of interaction strengths used in the present calculation. We can then predict spins for other SD bands where no linking transitions are observed. For <sup>62</sup>Zn, the best agreement between theory and data corresponds to placing the measured first SD $`E_\gamma `$ at $`I=20`$ (see Fig. 2), thus predicting this transition to be from the state $`I=20`$ to $`I=18`$. This agrees with the assignment proposed previously by Afanasjev et al .
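For readers reproducing plots of this kind, the dynamical moment of inertia of a $`\mathrm{\Delta }I=2`$ band follows from consecutive transition energies as $`\mathcal{J}^{(2)}=4\hbar ^2/\mathrm{\Delta }E_\gamma `$. The short Python sketch below illustrates the extraction; the $`E_\gamma `$ values are hypothetical placeholders, not the measured <sup>60</sup>Zn cascade.

```python
# Extract J^(2) from a cascade of E2 transition energies (Delta I = 2):
# J^(2)(I) = 4 hbar^2 / [E_gamma(I+2 -> I) - E_gamma(I -> I-2)].
# The energies below are hypothetical placeholders, in MeV.
e_gamma = [1.6, 1.8, 2.0, 2.3, 2.5]

j2 = [4.0 / (hi - lo) for lo, hi in zip(e_gamma, e_gamma[1:])]  # hbar^2 / MeV
print(j2)
```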
In Fig. 3, there are in general two peaks in the $`\mathcal{J}^{(2)}`$ plots that reflect the two successive band crossings discussed above. The first occurs at $`I\sim 12`$, with the location and size being similar for each of the 4 isotopes. This is because the first crossing is mainly the $`g_{9/2}`$ proton pair crossing, which is relatively constant within this isotopic chain. However, the next band crossing, caused by a 4-qp state of $`g_{9/2}`$ neutron and proton pairs, leads to very different consequences for each individual SD band; this implies significant theoretical constraints for the possibility of observation, as we now discuss.
The Projected Shell Model is known to give a good description of band crossings in heavier nuclei where it has been tested extensively (for example, see Ref. ). The nature of the crossing (e.g., whether the peak in $`\mathcal{J}^{(2)}`$ is sharp or gentle) is related to the angle between crossing bands . A small angle spreads the interaction over a wide angular momentum range, thus producing a smoother change. A large angle implies that the bands interact over a narrow angular momentum range and a sudden discontinuity can occur. In our case, a smaller crossing angle is seen just before $`I=20`$ for <sup>60</sup>Zn, producing a smoothed interaction (see Fig. 1). In fact, the two peaks in $`\mathcal{J}^{(2)}`$ caused by the first and second band crossings merge in this case, resulting in one wide and smooth peak ranging from low to high spins. However, a larger crossing angle is found at $`I=20`$ for <sup>62</sup>Zn. Thus, in the $`\mathcal{J}^{(2)}`$ plot for <sup>62</sup>Zn a clear separation of the two peaks is seen, with the second one at $`I=20`$ being much higher. If this discontinuity is pronounced, it may be expected to set a lower limit in angular momentum for observation of such a SD band with weak intensity, while the upper limit is determined by the band termination spin . This explains succinctly why the observed SD band in <sup>60</sup>Zn is long, while in the neighboring <sup>62</sup>Zn, where one might naively expect similar behavior, the observed band is very short.
Galindo–Uribarri et al. reported a rotational SD band in <sup>64</sup>Zn . Because of the strong dipole transitions discovered in their work, this band appears not to belong to the same type of bands (SD yrast bands characterized by even integer spins only) discussed above . An important question is why the usual SD yrast band has not been seen in <sup>64</sup>Zn. We find that, due to different neutron shell fillings, the position of the $`g_{9/2}`$ neutron 2-qp band is shifted higher in energy for this case, which in turn pushes the 4-qp band higher. Consequently, the second band crossing spin is shifted to $`I=22`$, a spin which is even closer to the band termination. In addition, this second band crossing is very sharp (see Fig. 3). If the experimental analysis were not able to follow the population over the sharp second band crossing, there would be at most three or four transitional gamma-rays to measure, making observation of the SD yrast band in this nucleus difficult.
Going to the next isotope, <sup>66</sup>Zn, a different picture appears. Because of the shift in neutron Fermi level, the pair of $`g_{9/2}`$ neutrons contributing to the 4-qp state is changed from $`K=-\frac{1}{2}`$ and $`K=\frac{3}{2}`$ particles to $`K=-\frac{3}{2}`$ and $`K=\frac{5}{2}`$ (still coupled to total $`K=1`$). Because of the even higher energy and steeper curvature of this 4-qp band, we find that it crosses the proton 2-qp band at $`I=26`$ at a very small angle. In fact, one can hardly see in the $`\mathcal{J}^{(2)}`$ plot that there is a band crossing. Thus, our calculation suggests that there should be a much better chance to observe a long SD yrast band in <sup>66</sup>Zn, where no experiment has yet been reported .
It has been demonstrated previously that the position of band crossings can be shifted systematically to higher spin by a stronger quadrupole pairing interaction. Therefore, the discrepancy mentioned in the $`\mathcal{J}^{(2)}`$ plot in the <sup>60</sup>Zn calculation (the theoretical peak occurs two spin units too early) could be improved if a larger quadrupole pairing interaction were employed. We have not introduced this refinement because for this particular $`N=Z`$ nucleus we may expect that neutron–proton pairing correlations may also play a role. For example, Ref. found that the $`T=0`$ pairing becomes significant at very high spins (where the $`g_{9/2}`$ orbital becomes important) in the lighter $`N=Z`$ nucleus <sup>48</sup>Cr. This neutron–proton pairing has not been included in the present calculation or in the calculations of Ref. . Explicitly including the p-n pairing in the PSM is of interest for future work.
When we calculate the pairing gaps using the total many-body wavefunction, we find that both neutron and proton pairing is significant at $`I=0`$ (gaps of about 0.9 MeV). However, there is a rapid drop in pairing gaps near the first band crossing. Beyond $`I=18`$, they assume small, nearly constant values corresponding to about 40$`\%`$ of their initial values. All measured SD bands in the mass-60 region are in the spin range beyond $`I=18`$. Thus, our results may provide an understanding of the success of mean-field calculations, all of which have neglected pairing correlations completely . Details will be reported in a forthcoming paper.
To summarize, the Projected Shell Model has been used to carry out the first study of SD mass-60 even–even nuclei using techniques that go beyond the mean field. In contrast to the heaviest SD systems where coherent motion of many nucleons is important and alignment in specific orbits is less significant, it is found that alignment of $`g_{9/2}`$ proton and neutron pairs dominates the high-spin behavior in these lightest SD nuclei. Because of this, and the low level densities expected for this mass region near the Fermi surface, we find that the nature of the SD bands can fluctuate strongly with shell filling in even–even isotopic sequences. Calculations for the even–even Zn isotopic chain provide an explanation for the bands already observed, and make specific predictions about which nuclei are best candidates for long rotational SD sequences in this region. Because our calculations go beyond the mean field, they can be used to check various assumptions of the mean field descriptions. For example, we have calculated the pairing gaps dynamically and find that they generally are not small at low spins, but drop rapidly to non-zero but relatively small values in the region where data are available, thus providing a partial microscopic justification for the uniform neglect of pairing in all mean field calculations reported to date. Finally, for the only case in this region where the spin has been measured, our calculated spin agrees with the measured spin without parameter adjustment. This, coupled with numerous previous correct predictions of spin for SD bands in the mass-130 and mass-190 nuclei permits us to predict theoretical spins with confidence for those cases where they have not yet been measured.
Valuable discussions with A. Galindo–Uribarri, P. Ring, A.V. Afanasjev and D.J. Hartley are acknowledged. One of us (J.-y. Z.) is supported by the U. S. Department of Energy through Contract No. DE–FG05–96ER40983.
# Constraints on the mSUGRA parameter space from electroweak precision data
(Contribution of the Precision Electroweak Subgroup of the SUGRA Working Group for the Physics at Run II – Supersymmetry/Higgs Workshop, November 19-21, 1998.)
## Acknowledgments
Conversations with Vernon Barger were especially useful for finalizing our report. The work of Rob Szalapski is supported in part by U.S. Department of Energy under grant DE-FG02-91ER40685, and in part by the U.S. National Science Foundation under grants PHY-9600155 and INT9600243. The research of Gi-Chol Cho is supported by Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture of Japan. Chung Kao has received support from the U.S. Department of Energy under grant DE-FG02-95ER40896 and from the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation.
# Stochastic group selection model for the evolution of altruism
## 1 Introduction
Despite the scarcity of empirical facts supporting group selection as a relevant evolutive force in nature, the mathematical problems involved in its modeling have sustained a recurrent theoretical interest in this controversial theory . Group selection is based on an analogy between individuals (or genes) and reproductively isolated subpopulations, termed demes. If the extinction of demes occurs at a rate depending on their composition, then such extinctions will favor the existence of individuals that increase the probability of survival of the deme they belong to. In the case that these individuals are disfavored by the usual selection at the individual level, group selection will oppose individual selection, and so it has been advanced as an explanation for the existence of altruistic traits in nature. Such a trait is defined as one that is detrimental to the fitness of the individual who expresses it, but that confers an advantage on the group of which that individual is a member.
The standard mathematical framework to study group selection was proposed by Levins more than 20 years ago . The key ingredients are the differential survival probability favoring demes with a large number of altruists, and the subsequent recolonization of the extinguished demes by the surviving ones. In fact, this is practically the only generally accepted mechanism to produce group selection in nature (see for an alternative proposal). However, the mathematical complexity of Levins’ formulation, based on a nonlinear integral partial differential equation, as well as the need for too restrictive assumptions, have motivated the proposal and study of a variety of discrete time versions of Levins’ model . These analyses have concentrated mainly on the deterministic regime, in which the number of demes $`M`$ is infinite, though the deme size $`N`$ (i.e., the number of individuals in each deme) is finite. In the absence of mutation, the finitude of $`N`$ is crucial to guarantee the fixation through random drift of the altruistic trait within some demes. The undesirable feature of considering $`M`$ finite as well, besides obliterating the possibility of an analytical solution to the problem, is that the fluctuations occurring during the extinction process will ultimately lead to the complete extinction of the population. This is a consequence of the sequential procedure that considers first the extinction of demes and then the recolonization of the extinct demes by the surviving ones. In this paper we modify the sequential extinction-recolonization procedure so as to avoid global extinction, allowing thus the numerical and analytical study of the effects of a finite population on the steady-state of the metapopulation (i.e., the population of demes). More pointedly, once a deme is extinct we immediately assign one of the $`M-1`$ surviving demes to replace it, although the effective replacement of the extinct demes will take place only after all demes have passed the extinction stage. This replacement or recolonization occurs simultaneously for all demes.
Our goal is to study the effects of the fluctuations due to the finitude of the population on the stability of the altruistic state predicted by Eshel in the deterministic regime . The remainder of this paper is organized as follows. In section 2 we describe the events that comprise the life cycles of the individuals in the metapopulation and present the stochastic dynamics governing the time evolution of the metapopulation. A mean-field recursion equation is derived and its validity discussed in section 3. The results of the simulations as well as those of the mean-field approximation are presented and analyzed in section 4. Finally, in section 5 we present some concluding remarks and, in particular, point out the relevance of our results to the dynamics of parasite-host systems.
## 2 Model
The metapopulation is composed of $`M`$ demes, each of which is composed of $`N`$ haploid, asexually reproducing individuals. An individual can be either altruist or non-altruist. The cost associated with being altruistic is modelled by assigning the reproductive rate $`1-\tau `$, with $`\tau \in [0,1]`$, to the altruists and the reproductive rate $`1`$ to the non-altruists. The demes are classified according to the number of altruistic individuals they have, so that there are $`N+1`$ different types of demes, labeled by the integers $`i=0,1,\mathrm{\dots },N`$. In each generation the metapopulation is described by the vector $`𝐧=(n_0,n_1,\mathrm{\dots },n_N)`$, where $`n_i`$ is the number of demes of type $`i`$, so that $`\sum _in_i=M`$. The life cycle (i.e., one generation) consists of the following events, which will be discussed in detail in the sequel: extinction, recolonization, reproduction, and mutation.
### 2.1 Extinction and Recolonization
Within the differential extinction framework, we define the probability that a deme of type $`i`$ survives extinction, $`\alpha _i`$, by
$$\alpha _i=\{\begin{array}{cc}\frac{1}{2}\left(1+i/i_c\right)\hfill & \text{ if }i<i_c\text{ }\hfill \\ 1\hfill & \text{ otherwise},\hfill \end{array}$$
(1)
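As a minimal illustration, Eq. (1) translates directly into code; the function name below is ours, not the paper's.

```python
def survival_prob(i, i_c):
    """Probability alpha_i that a deme with i altruists survives extinction, Eq. (1)."""
    return 0.5 * (1.0 + i / i_c) if i < i_c else 1.0
```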
where $`i_c=0,1,\mathrm{\dots },N`$ is a parameter measuring the intensity of the group selection pressure. The larger the number of altruists in a deme, the larger its chance of surviving extinction. Once a deme is extinct, a randomly chosen deme among the $`M-1`$ surviving ones will immediately be assigned to replace it. This contrasts with the standard modelling in which recolonization takes place only after all demes have passed the extinction procedure. Hence, given $`𝐧`$, the probability that a deme of type $`j`$ changes to a deme of type $`i`$, denoted by $`E_{ij}`$, is simply
$$E_{ij}=\{\begin{array}{cc}\alpha _i+\left(1-\alpha _i\right)\left(n_i-1\right)/\left(M-1\right)\hfill & \text{ if }i=j\text{ }\hfill \\ \left(1-\alpha _j\right)n_i/\left(M-1\right)\hfill & \text{ if }i\ne j.\hfill \end{array}$$
(2)
As expected, $`\sum _iE_{ij}=1`$ for all $`j`$. We note that the transition matrix $`𝐄`$ depends on $`𝐧`$ and so it changes as the population evolves. The conjunction of the extinction and recolonization procedures is termed interdemic selection since the correlation between the elements belonging to the same column of $`𝐄`$ yields an effective, indirect interaction between the demes.
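A NumPy sketch of Eq. (2) may be helpful; it reuses `survival_prob` from the snippet above, takes `n` as an integer array of deme counts, and by construction every column sums to one, mirroring the remark just made.

```python
import numpy as np

def extinction_matrix(n, i_c):
    """Extinction-recolonization matrix E of Eq. (2); every column sums to one."""
    M = n.sum()
    K = len(n)                                   # number of deme types, N + 1
    alpha = np.array([survival_prob(i, i_c) for i in range(K)])
    E = np.outer(n, 1.0 - alpha) / (M - 1)       # off-diagonal entries (i != j)
    E[np.diag_indices(K)] = alpha + (1.0 - alpha) * (n - 1) / (M - 1)
    return E
```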
### 2.2 Reproduction
The reproduction process occurs inside the demes and hence is termed intrademic selection. Since the size of the demes is fixed and finite ($`N`$), random drift occurs. Following Wright’s classical model we assume that the number of offspring that an individual contributes to the new generation is proportional to its relative reproductive rate. Thus, the probability that a deme of type $`j`$ changes to a deme of type $`i`$ is written as
$$R_{ij}=(\begin{array}{c}N\\ i\end{array})w_j^i\left(1-w_j\right)^{N-i},$$
(3)
where
$$w_j=\frac{j\left(1-\tau \right)}{N-j\tau }$$
(4)
is the relative reproductive rate of the subpopulation of altruists in a deme of type $`j`$. We note that $`\sum _iR_{ij}=1`$ for all $`j`$ and $`\sum _iiR_{ij}=Nw_j`$. In the absence of mutations, the random drift inherent to the reproduction process will prevent the existence of mixed demes, i.e., a deme will be either of type $`N`$ (only altruists) or of type $`0`$ (only non-altruists). As a result, a choice of the parameter $`i_c`$ different from $`0`$ or $`N`$ in the definition of the survival probability $`\alpha _i`$ will have practically no effect on the metapopulation evolution.
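Eqs. (3)-(4) are plain binomial (Wright-Fisher) sampling, so a minimal sketch is one loop over source types; names are ours, and we assume $`\tau <1`$ so the denominator in Eq. (4) never vanishes.

```python
import numpy as np
from scipy.stats import binom

def reproduction_matrix(N, tau):
    """Wright-Fisher reproduction matrix R of Eqs. (3)-(4); assumes 0 <= tau < 1."""
    R = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        w_j = j * (1.0 - tau) / (N - j * tau)    # relative rate of altruists, Eq. (4)
        R[:, j] = binom.pmf(np.arange(N + 1), N, w_j)
    return R
```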
### 2.3 Mutation
To include mutation into the model, we must descend to the level of the genes that determine the characteristics of the individuals. In particular, we assume that two alleles, say $`A`$ or $`B`$, at a single locus determine whether a given individual is altruist or non-altruist, respectively. Since the replication of a gene may not be perfect, we introduce the mutation rate $`u\in [0,1/2]`$, which gives the probability that the allele $`A`$ mutates to $`B`$ and vice-versa. Hence the probability that a deme of type $`j`$ changes to a deme of type $`i`$ due to mutations of its members is given by
$$U_{ij}=\underset{l=l_l}{\overset{l_u}{\sum }}(\begin{array}{c}j\\ l\end{array})(\begin{array}{c}N-j\\ i-l\end{array})u^{i+j-2l}\left(1-u\right)^{N-i-j+2l},$$
(5)
where $`l_l=\text{max}(0,i+j-N)`$ and $`l_u=\text{min}(i,j)`$. Clearly, $`\sum _iU_{ij}=1`$ for all $`j`$, and $`\sum _iiU_{ij}=Nu+j\left(1-2u\right)`$.
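Eq. (5) is the convolution of two binomials (altruists that fail to mutate plus non-altruists that do), which allows a compact implementation; the factorized form below is our reading of the formula, and the function name is ours.

```python
import numpy as np
from scipy.stats import binom

def mutation_matrix(N, u):
    """Mutation matrix U of Eq. (5): each of the N sites flips with probability u."""
    U = np.zeros((N + 1, N + 1))
    for j in range(N + 1):
        keep = binom.pmf(np.arange(j + 1), j, 1.0 - u)      # altruists not mutating
        gain = binom.pmf(np.arange(N - j + 1), N - j, u)    # non-altruists mutating
        U[:, j] = np.convolve(keep, gain)                   # total altruists i
    return U
```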
### 2.4 Stochastic dynamics
Given a population characterized by the vector $`𝐧`$ which will change into a new population characterized by $`𝐧^{\prime }`$ due to a generic transition matrix $`𝐓`$ ($`\sum _iT_{ij}=1`$ for all $`j`$), the stochastic dynamics is defined by the conditional probability distribution $`P_T\left(𝐧^{\prime }|𝐧\right)`$. To evaluate this quantity, it is more convenient to introduce the set of integers $`\{b_{ij}\}`$, where $`b_{ij}`$ stands for the number of demes of type $`j`$ that have changed to a deme of type $`i`$. Hence $`n_j=\sum _ib_{ij}`$ and $`n_i^{\prime }=\sum _jb_{ij}`$, so that given the set $`\{b_{ij}\}`$, the vector $`𝐧^{\prime }`$ can be readily determined. In fact, given $`n_j`$ the conditional probability distribution of $`𝐛_j=(b_{0j},b_{1j},\mathrm{\dots },b_{Nj})`$ is a multinomial
$$P_T\left(𝐛_j|n_j\right)=\frac{n_j!}{b_{0j}!b_{1j}!\mathrm{\cdots }b_{Nj}!}T_{0j}^{b_{0j}}T_{1j}^{b_{1j}}\mathrm{\cdots }T_{Nj}^{b_{Nj}}$$
(6)
for $`j=0,\mathrm{\dots },N`$. Clearly, the random variables $`b_{kj}`$ and $`b_{li}`$ are statistically independent for $`i\ne j`$, regardless of the values assumed by the indices $`k`$ and $`l`$. We must emphasize that since the transition matrix $`𝐄`$, which governs the extinction and recolonization procedures, depends explicitly on $`𝐧`$, this dynamics must be applied in parallel (simultaneously) to all demes.
The dynamics proceeds as follows. Given the population vector in generation $`t`$, denoted by $`𝐧^t`$, first we consider the extinction-recolonization event and generate the conditional probability distributions $`P_E\left(𝐛_j|n_j^t\right)`$ for all $`j`$. The choice of $`(N+1)^2`$ uniformly distributed random numbers allows the determination of the set $`\{b_{ij}\}`$ and, consequently, of the new population vector $`𝐧^{\prime }`$. Next, given $`𝐧^{\prime }`$ we repeat this procedure for the reproduction event, then generating the population vector $`𝐧^{\prime \prime }`$. Finally, the same procedure is repeated again for the mutation event, leading from $`𝐧^{\prime \prime }`$ to $`𝐧^{t+1}`$, which thus completes the life cycle.
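Put together, one generation is a chain of multinomial draws, Eq. (6). The sketch below reuses the matrices defined above and NumPy's generator API; it is illustrative of the procedure just described, not a transcription of the original simulation code, and it assumes the columns of each transition matrix are normalized.

```python
import numpy as np

def apply_transition(n, T, rng):
    """Draw, for each source type j, a multinomial over destination types (Eq. (6))."""
    n_new = np.zeros_like(n)
    for j, n_j in enumerate(n):
        n_new += rng.multinomial(n_j, T[:, j])
    return n_new

def life_cycle(n, i_c, tau, u, rng):
    """One generation: extinction-recolonization, reproduction, mutation."""
    N = len(n) - 1
    n = apply_transition(n, extinction_matrix(n, i_c), rng)  # E is rebuilt from n
    n = apply_transition(n, reproduction_matrix(N, tau), rng)
    n = apply_transition(n, mutation_matrix(N, u), rng)
    return n

# usage: rng = np.random.default_rng(0); n = life_cycle(n, i_c, tau, u, rng)
```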
## 3 Expectations
The (conditional) expected value of the number of demes of type $`i`$ after a life cycle given the population vector $`𝐧^t`$ in generation $`t`$ is defined by
$$\langle n_i^{t+1}\rangle =\underset{𝐤,𝐥,𝐦}{\sum }k_iP_U\left(𝐤|𝐥\right)P_R\left(𝐥|𝐦\right)P_E\left(𝐦|𝐧^t\right)$$
(7)
where the conditional probabilities are given by Eq. (6) with the generic transition matrix replaced by the specific matrices $`𝐔`$, $`𝐑`$ and $`𝐄`$ as indicated. Here we have used the notation
$$P_T\left(𝐤|𝐥\right)=\underset{j=0}{\overset{N}{\prod }}P_T\left(𝐛_j|l_j\right)$$
(8)
with $`k_i=\sum _jb_{ij}`$ for all $`i`$. Moreover, using
$$\underset{𝐤}{\sum }k_iP_T\left(𝐤|𝐥\right)=\underset{j}{\sum }T_{ij}l_j,$$
(9)
we find
$$\langle n_i^{t+1}\rangle =\underset{jkl}{\sum }U_{ij}R_{jk}E_{kl}n_l^t,$$
(10)
which, by making explicit the dependence of $`𝐄`$ on $`𝐧^t`$, can be rewritten as
$$\langle n_i^{t+1}\rangle =\underset{jk}{\sum }U_{ij}R_{jk}\left\{n_k^t\alpha _k+\frac{n_k^t}{M-1}\left[\underset{l}{\sum }\left(1-\alpha _l\right)n_l^t-\left(1-\alpha _k\right)\right]\right\}.$$
(11)
At this stage we can readily derive a mean-field recursion equation for the average number of demes of type $`i`$. In fact, assuming that the covariance
$$\text{Cov}(n_i^t,n_j^t)=\langle n_i^tn_j^t\rangle -\langle n_i^t\rangle \langle n_j^t\rangle $$
(12)
vanishes at any $`t`$ for all pairs $`(i,j)`$, and setting $`\langle n_i^t\rangle =\nu _i^t`$ yield
$$\nu _i^{t+1}=\underset{jk}{\sum }U_{ij}R_{jk}\left\{\nu _k^t\alpha _k+\frac{\nu _k^t}{M-1}\left[\underset{l}{\sum }\left(1-\alpha _l\right)\nu _l^t-\left(1-\alpha _k\right)\right]\right\}.$$
(13)
Thus, rather than studying the evolution of a specific population, in this approximation scheme we focus on the evolution of an average population whose deme frequencies at each generation are regarded as the average of the deme frequencies of an infinite number of populations at that generation. Of course, for finite $`M`$ the covariance can never vanish for all pairs $`(i,j)`$ since the random variables $`n_i^t`$ are not statistically independent (for instance, they obey the normalization condition $`\sum _in_i^t=M`$). However, depending on the values of the control parameters $`\tau `$, $`u`$ and $`M`$, either the covariance $`\text{Cov}(n_k,n_l)`$ or its coefficient $`\sum _jU_{ij}R_{jk}(1-\alpha _l)`$ may be sufficiently small so as to validate the mean-field equation as a good approximation.
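Numerically, Eq. (13) is a cheap fixed-point iteration; a minimal sketch (with our own names, taking the matrices $`U`$ and $`R`$ and the vector of survival probabilities as inputs) is:

```python
import numpy as np

def mean_field_step(nu, U, R, alpha, M):
    """One iteration of the mean-field recursion, Eq. (13)."""
    ext = np.sum((1.0 - alpha) * nu)                 # expected extinctions per cycle
    inner = nu * alpha + nu * (ext - (1.0 - alpha)) / (M - 1)
    return U @ (R @ inner)
```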
The (conditional) covariance after a life cycle given $`𝐧^t`$ in generation $`t`$ is simply given by
$$\text{Cov}(n_i^t,n_j^t)=-\underset{k}{\sum }S_{ik}S_{jk}n_k^t,\quad i\ne j$$
(14)
where $`S_{ij}=(URE)_{ij}`$ is the matrix element of the product of the three transition matrices. The case $`i=j`$ corresponds to the variance, $`\text{Var}\left(n_i^t\right)=\text{Cov}(n_i^t,n_i^t)`$, and yields
$$\text{Var}\left(n_i^t\right)=\underset{k}{\sum }S_{ik}\left[1-S_{ik}\right]n_k^t.$$
(15)
We note that these quantities can be readily evaluated within the mean-field approach by replacing $`n_k^t`$ by its average, $`\nu _k^t`$. Of course, the estimate of the magnitude of the fluctuations of the random variables $`n_i(i=0,\mathrm{\dots },N)`$ around their means is crucial to assess the relevance of the finite $`M`$ effects.
## 4 Analysis of the results
The quantity of interest is the fraction of altruistic individuals in the metapopulation in the stationary regime, defined as
$$p=\frac{1}{N}\underset{i=0}{\overset{N}{\sum }}iY_i$$
(16)
where $`Y_i=n_i/M`$ is the frequency of demes of type $`i`$. Clearly, $`\sum _iY_i=1`$. Interestingly, in the case $`i_c=N`$, there is a simple relation between $`p`$ and the mean fitness of the metapopulation which is defined as $`\overline{\alpha }=\sum _i\alpha _iY_i`$, namely,
$$\overline{\alpha }=\frac{1}{2}\left(1+p\right).$$
(17)
To measure the dispersion of the random variable $`p`$ we introduce the variance
$$\sigma _p^2=\frac{1}{N^2}\underset{i=0}{\overset{N}{\sum }}i^2Y_i-p^2$$
(18)
which vanishes in the case of an homogeneous metapopulation ($`Y_k=1`$ and $`Y_i=0`$ for $`i\ne k`$), and reaches the maximum value $`1/4`$ in the case that the demes are segregated in equal proportions into the two opposite classes ($`Y_0=Y_N=1/2`$). Since $`p`$ and $`\sigma _p^2`$ are random variables, we will focus on their average values, denoted by $`\langle p\rangle `$ and $`\langle \sigma _p^2\rangle `$, respectively. In all simulations discussed in this work, the symbols represent the averages over $`2\times 10^3`$ independent experiments. The error bars are calculated by measuring the standard deviation of the average results obtained in $`50`$ sets of experiments, each one involving $`40`$ independent runs. Moreover, in each run the population is left to evolve for $`2\times 10^3`$ generations and we average the quantities under analysis over the last $`100`$ generations. No significant differences were found for longer runs. Throughout our analysis we set $`i_c=N=10`$, so that Eq. (17) holds true.
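From a population vector the two observables are one-liners; Eqs. (16) and (18) read, in the notation above (function name ours):

```python
import numpy as np

def observables(n):
    """Fraction of altruists p, Eq. (16), and its variance sigma_p^2, Eq. (18)."""
    N = len(n) - 1
    Y = n / n.sum()                       # deme-type frequencies Y_i = n_i / M
    i = np.arange(N + 1)
    p = np.sum(i * Y) / N
    sigma2 = np.sum(i**2 * Y) / N**2 - p**2
    return p, sigma2
```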
In Figs. 1(a) and 1(b) we present $`\langle p\rangle `$ and $`\langle \sigma _p^2\rangle `$, respectively, as functions of the mutation rate $`u`$ for $`\tau =0.9`$ and two representative values for the number of demes, $`M=10`$ and $`M=100`$. For $`u=0`$ the altruistic trait always takes over the metapopulation, provided that there is at least one altruistic deme in the initial state ($`n_N^0\ge 1`$). Besides the stable fixed point presented in these figures, for large $`\tau `$ the mean-field recursion equations possess an unstable one, $`p=u`$, which can be reached by starting the iteration with non-altruistic demes only ($`n_0^0=M`$). Thus the effect of finite $`M`$, as shown by the results of the simulations, is to increase the instability of the altruistic regime against mutations by stabilizing the mean-field unstable fixed point. (We note that the mean-field results actually show the opposite tendency, which indicates the failure of the approximation in this matter.) This is expected since in a smaller metapopulation, chance plays a greater role, and so deleterious mutations accumulate with a higher probability, causing a more rapid decrease in the mean fitness of the metapopulation. In the case that the size of the metapopulation is not fixed but depends on its mean fitness, this positive feedback, termed mutational meltdown, leads rapidly to the extinction of the metapopulation . The agreement between the mean-field predictions and the simulations is very good for $`M=100`$, except in the region just after the variance maximum. Up to this maximum the population is composed almost exclusively of altruistic and non-altruistic demes, while beyond it the number of altruistic demes decreases very rapidly, the sole source of altruistic individuals being the mutations within the non-altruistic demes. Clearly, in this scenario we have $`p=u`$ in agreement with the simulation results. The occurrence of a pronounced maximum in $`\sigma _p^2`$ indicates the existence of a phenomenon similar to the error threshold transition of Eigen’s quasispecies model for molecular evolution . (The formal similarity between group selection and molecular evolution models has already been pointed out in ref. .) We note that even for $`M=10`$, the mean-field approximation yields very good results for small $`u`$. Of course, the agreement between theory and simulations is more problematic for small $`M`$, since in this case the probability that the altruistic demes are lost from the metapopulation due solely to fluctuations becomes significant, leading to the so-called stochastic escape phenomenon . This loss is practically irreversible as the altruistic selective disadvantage is too high to allow for the production of new altruistic demes.
The same quantities, $`\langle p\rangle `$ and $`\langle \sigma _p^2\rangle `$, are presented in Figs. 2(a) and 2(b), except that the altruistic disadvantage is reduced to $`\tau =0.2`$. In this case, the effects of the finite $`M`$ fluctuations are almost suppressed as illustrated by the agreement between the mean-field and the simulation results. The stochastic escape phenomenon is not important in this case since new altruistic demes can readily be generated due to the small reproductive disadvantage of the altruistic individuals. The results for $`M\to \infty `$ are practically indistinguishable from those obtained in the mean-field approximation for $`M=100`$. We have verified that changing the values of $`i_c`$ and $`N`$ alters the frequency of the altruistic gene smoothly, leaving its qualitative dependence on the mutation rate unaffected.
A more direct measure of the finite $`M`$ fluctuations is presented in Figs. 3 and 4, which show the variances of the fraction of non-altruistic and altruistic demes, $`\text{Var}\left(Y_0\right)`$ and $`\text{Var}\left(Y_N\right)`$, respectively, as functions of the mutation rate for $`\tau =0.9`$. We note that although the mean-field approximation describes very well the fluctuations outside the region where the transition between the altruistic ($`p1`$) and non-altruistic ($`pu`$) regimes takes place, it fails badly in that region. This failure seems more pronounced in Fig. 4 because the range of $`u`$ coincides with the transition region, but a similar discrepancy occurs in Fig. 3 also, where the variance peaks for small $`u`$ are completely overlooked by the mean-field approximation. The situation for $`\tau =0.2`$ is illustrated in Fig. 5 where we present $`\text{Var}\left(Y_N\right)`$ as a function of $`u`$. In this case the mean-field approximation reproduces very well the behavior pattern of the simulation results, except for the heights of the peaks which, as expected, are underestimated. The results for $`\text{Var}\left(Y_0\right)`$ are very similar to those shown in Fig. 5. Of course, the disagreement between simulation and theory is expected since we are trying to estimate the size of the fluctuations using an approximation scheme that neglects those very same fluctuations. However, the surprisingly good agreement shown in Fig. 3 for $`u`$ outside the transition region suggests that a self-consistent iterative scheme, where the covariances calculated in the mean-field approximation are used to improve that approximation, may describe successfully the finite $`M`$ fluctuations. As expected, these variances tend to zero as the number of demes increases.
## 5 Conclusion
In this paper we have modified the standard implementation of the group selection mechanism, which considers first the extinction of the demes and then the recolonization of the extinct demes by the surviving ones , by assigning a recolonizing deme to each extinct deme immediately after its extinction, according to Eq. (2). The actual replacement of the extinct demes is carried out simultaneously for all demes following the stochastic prescription given in Eq. (6). This modified extinction-recolonization procedure avoids the otherwise inevitable global extinction of the population. We have verified, however, that this procedure yields qualitatively similar results to those obtained with the standard extinction-recolonization procedure in the case that the metapopulation survives global extinction long enough to reach a metastable state.
It is important to mention that, in contrast to its original and very criticized ecological motivation , some concepts borrowed from group selection have been successfully applied to describe the evolution of parasite-host systems . In this case the hosts are associated with the demes, while the parasites correspond to the individuals inhabiting the demes. The role of the altruistic individuals is played by the less virulent parasites which, by having a lower growth rate, increase the survival probability of the host. Migration of individuals between demes corresponds to horizontal transmission of parasites. The transmission of the parasite between parent and offspring generations is termed vertical transmission. Interestingly, a well-known result is that, in a population of asexual hosts, parasites with vertical transmission alone cannot persist if the infected hosts suffer any fitness cost . (This result is readily recognized as Eshel’s , although no reference to that author is made in the specialized literature of parasite-host systems.) Our finding that, at certain ranges of the mutation rate (around $`0.04`$ in Fig. 1), virulent parasites with vertical transmission alone almost take over the population yields evidence of the major role played by mutations in the evolution of virulence . This dominance becomes more pronounced as the host population decreases. A more thorough formulation of parasite-host dynamics through the classical, discrete time population genetics formalism used to study group selection models is still lacking. Such a formulation will certainly help to uncover many more similarities, as well as overlapping results, between these two fascinating research fields.
This work was supported in part by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
## Figure captions
Fig. 1(a) Average frequency of altruists as function of the mutation rate for $`\tau =0.9`$ and two population sizes, $`M=10`$ and $`M=100`$ (shown with different symbols). The solid and dashed curves are the mean-field results for $`M=10`$ and $`M=100`$, respectively. The straight line is $`p=u`$. The error bar is omitted when it is smaller than the symbol size. The parameters are $`i_c=N=10`$.
Fig. 1(b) Average variance of the frequency of altruists as function of the mutation rate. The parameters and convention are the same as for Fig. 1(a).
Fig. 2(a) Same as Fig. 1(a) but for $`\tau =0.2`$.
Fig. 2(b) Same as Fig. 1(b) but for $`\tau =0.2`$.
Fig. 3 Variance of the fraction of non-altruistic demes, $`\text{Var}\left(Y_0\right)`$, as function of the mutation rate for $`\tau =0.9`$. The convention is the same as for Fig. 1(a).
Fig. 4 Variance of the fraction of altruistic demes, $`\text{Var}\left(Y_N\right)`$, as function of the mutation rate for $`\tau =0.9`$. The convention is the same as for Fig. 1(a).
Fig. 5 Same as Fig. 4, but for $`\tau =0.2`$.
# Schrödinger Equation with the Potential 𝑉(𝑟)=𝐴𝑟⁻⁴+𝐵𝑟⁻³+𝐶𝑟⁻²+𝐷𝑟⁻¹
## Abstract
Making use of an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ for the eigenfunction, we obtain exact closed form solutions to the Schrödinger equation with the inverse-power potential, $`V(r)=Ar^{-4}+Br^{-3}+Cr^{-2}+Dr^{-1}`$ both in three dimensions and in two dimensions, where the parameters of the potential $`A,B,C,D`$ satisfy some constraints.
PACS numbers: 03.65.Ge.
1. Introduction
The exact solutions of the fundamental dynamical equations play an important role in different fields of physics. As far as the Schrödinger equation is concerned, exact solutions are possible only for a few potentials, and approximation methods are frequently used to arrive at the solutions. The problem of the inverse-power potential, $`1/r^n`$, has been widely studied in different fields of classical mechanics as well as in quantum mechanics. For instance, the interatomic interaction potential in molecular physics \[1-2\], the inverse-power potentials $`V(r)=Z^2\alpha /r^4`$ (interaction between an ion and a neutral atom) and $`V(r)=d_1d_2/r^3`$ (interaction between a dipole $`d_1`$ and another dipole $`d_2`$) are often applied to explain the interaction between one form of matter and another. The interaction in one-electron atoms, muonic, hadronic and Rydberg atoms also requires considering the inverse-power potentials . Indeed, the interaction potentials mentioned above are only special cases of the inverse-power potential when some parameters of the potential vanish.
The reasons for writing this paper are as follows. On the one hand, Özcelik and Simsek discussed this potential in three dimensions . They obtained the eigenvalues and eigenfunctions for an arbitrary node. Simultaneously, the corresponding constraints on the parameters of the potential were obtained. Unfortunately, they did not realize that it is impossible to treat excited states by this method, but only the ground state. In the discussion below we will establish this conclusion and point out some essential mistakes that occurred in their calculations, even for the ground state. We recalculate the solutions to the Schrödinger equation with this potential in three dimensions following their idea and correct their mistakes. On the other hand, with the advent of growth techniques for the realization of semiconductor quantum wells, the quantum mechanics of low-dimensional systems has become a major research field. Almost all of the computational techniques developed for three-dimensional problems have already been extended to lower dimensions. Therefore, we generalize this method to the two-dimensional Schrödinger equation because of the wide interest in lower-dimensional field theory. Besides, we have succeeded in dealing with the Schrödinger equation with anharmonic potentials, such as the singular potential both in two dimensions and in three dimensions, the sextic potential , the octic potential and the Mie-type potential , by this method. We now attempt to study the Schrödinger equation with the inverse-power potential by the same method both in three dimensions and in two dimensions.
This paper is organized as follows. In section 2, we study the three-dimensional Schrödinger equation with this potential using an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ for the eigenfunctions. The two-dimensional Schrödinger equation with this potential is treated in section 3. The figures for the unnormalized radial functions are plotted in the last section.
2. Solutions in three dimensions
Throughout this paper the natural units $`\hbar =1`$ and $`\mu =1/2`$ are employed. Consider the Schrödinger equation
$$-\nabla ^2\psi +V(r)\psi =E\psi ,$$
$`(1)`$
where here and hereafter the potential
$$V(r)=Ar^{-4}+Br^{-3}+Cr^{-2}+Dr^{-1},A>0,D<0.$$
$`(2)`$
Let
$$\psi (r,\theta ,\phi )=r^{-1}R_{\ell }(r)Y_{\ell m}(\theta ,\phi ),$$
$`(3)`$
where $`\ell `$ and $`E`$ denote the angular momentum and the energy, respectively, and the radial wave function $`R_{\ell }(r)`$ satisfies
$$\frac{d^2R_{\ell }(r)}{dr^2}+\left[E-V(r)-\frac{\ell (\ell +1)}{r^2}\right]R_{\ell }(r)=0.$$
$`(4)`$
Özcelik and Simsek make an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ for the ground state
$$R_{\ell }^0(r)=\mathrm{exp}[g(r)],$$
$`(5)`$
where
$$g(r)=\frac{a}{r}+br+c\mathrm{ln}r,a<0,b<0.$$
$`(6)`$
After calculating, one can obtain the following equation
$$\frac{d^2R_{\ell }^0(r)}{dr^2}-\left[\frac{d^2g(r)}{dr^2}+\left(\frac{dg(r)}{dr}\right)^2\right]R_{\ell }^0(r)=0.$$
$`(7)`$
Compare Eq. (7) with Eq. (4) and obtain the following sets of equations
$$a^2=A,b^2=-E,$$
$`(8a)`$
$$2bc=D,2a(1-c)=B,$$
$`(8b)`$
$$C+\ell (\ell +1)=c^2-2ba-c.$$
$`(8c)`$
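The coefficient matching behind Eq. (8) can be verified symbolically. The SymPy sketch below (our own construction, using our reconstruction of the radial equation) expands $`g^{\prime \prime }+(g^{\prime })^2`$ against Eq. (4) and prints the five coefficients whose separate vanishing reproduces Eqs. (8a)-(8c).

```python
import sympy as sp

r, a, b, c, A, B, C, D, E, l = sp.symbols('r a b c A B C D E l')

g = a / r + b * r + c * sp.log(r)
lhs = sp.diff(g, r, 2) + sp.diff(g, r) ** 2          # g'' + (g')^2
rhs = A / r**4 + B / r**3 + (C + l * (l + 1)) / r**2 + D / r - E

# Multiplying by r^4 turns the difference into a polynomial in r whose
# coefficients must vanish separately: a^2 - A, 2a(1-c) - B, 2bc - D, b^2 + E, ...
poly = sp.Poly(sp.expand((lhs - rhs) * r**4), r)
for coeff in poly.all_coeffs():
    print(sp.factor(coeff))
```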
It is not difficult to obtain the value of the parameter $`a`$ from Eq. (8a), written as $`a=\pm \sqrt{A}`$. In order to retain the well-behaved solution at $`r\to 0`$ and at $`r\to \infty `$, they choose the negative sign in $`a`$, i. e. $`a=-\sqrt{A}`$. According to this choice, they arrive at a constraint on the parameters of the potential from Eq. (8c) written as
$$C=\frac{B^2}{4A}+\frac{B}{2\sqrt{A}}+\frac{2AD}{B+2\sqrt{A}}-\ell (\ell +1).$$
$`(9)`$
Then the energy is read as
$$E_0^{\pm }=-\frac{1}{16A}\left\{C+\ell (\ell +1)\pm \sqrt{[C+\ell (\ell +1)]^2-2BD}\right\}^2.$$
$`(10)`$
It is easy to see that Eq. (10) is a wrong result. From Eqs. (6) and (8b), as we know, the parameter $`b`$ is negative; hence, when we calculate the energy $`E`$ from Eq. (8a), we may only take $`b`$ as a negative value, so that Eq. (10) can only take the negative sign. Actually, it is not difficult to obtain the corresponding values of the parameters of $`g(r)`$ from Eq. (8), i. e.
$$c=\frac{B+2\sqrt{A}}{2\sqrt{A}},b=\frac{D\sqrt{A}}{B+2\sqrt{A}}.$$
$`(11)`$
The eigenvalue $`E`$, however, is simply expressed from Eq. (8a) as
$$E=-\frac{AD^2}{B^2+4A+4B\sqrt{A}}.$$
$`(12)`$
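As a quick numerical check of Eqs. (11) and (12), the following Python lines use the example values quoted at the end of this paper ($`A=4.0`$, $`B=5.87`$, $`D=-2.0`$) and recover the ground-state energy reported there.

```python
import math

A, B, D = 4.0, 5.87, -2.0                          # example values used later in the text

c = (B + 2 * math.sqrt(A)) / (2 * math.sqrt(A))    # Eq. (11)
b = D * math.sqrt(A) / (B + 2 * math.sqrt(A))      # Eq. (11)
E = -b**2                                          # Eq. (8a), E = -b^2
print(c, b, E)                                     # E is about -0.164
```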
The corresponding eigenfunction Eq. (5) can now be read as
$$R_{\ell }^0=N_0r^c\mathrm{exp}\left[{\displaystyle \frac{a}{r}}+br\right],$$
$`(13)`$
where $`N_0`$ is the normalization constant and here and hereafter the parameters $`a`$, $`b`$ and $`c`$ are given above.
After discussing the ground state, Özcelik and Simsek continue to study the first excited state. They make the $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ for the first excited state,
$$R_{\ell }^1(r)=f(r)\mathrm{exp}[g(r)],$$
$`(14)`$
where $`g(r)`$ is the same as in Eq. (6) and $`f(r)=r-\alpha _1`$, where $`\alpha _1`$ is a constant. It is easy to find from Eq. (14) that the radial wave function $`R_{\ell }^1(r)`$ satisfies the following equation
$$R_{\ell }^1(r)^{\prime \prime }-\left[g(r)^{\prime \prime }+(g(r)^{\prime })^2+\left(\frac{f(r)^{\prime \prime }+2g(r)^{\prime }f(r)^{\prime }}{f(r)}\right)\right]R_{\ell }^1(r)=0,$$
$`(15)`$
where the prime denotes the derivative of the radial wave function with respect to the variable $`r`$.
Compare Eq. (15) with Eq. (4) and obtain the following sets of equations
$$-2b-2bc+D+b^2\alpha _1+E\alpha _1=0,$$
$`(16a)`$
$$b^2+E=0,a^2\alpha _1-A\alpha _1=0,$$
$`(16b)`$
$$-a^2+A+2a\alpha _1-B\alpha _1-2ac\alpha _1=0,$$
$`(16c)`$
$$2ab-c-c^2+C+\ell (\ell +1)+2bc\alpha _1-D\alpha _1=0,$$
$`(16d)`$
$$B+2ac-2ab\alpha _1-c\alpha _1+c^2\alpha _1-C\alpha _1-\ell \alpha _1-\ell ^2\alpha _1=0,$$
$`(16e)`$
it is not hard to obtain the following sets of equations from Eqs. (16a-16c)
$$E=-b^2,a^2=A,c=\frac{B+2\sqrt{A}}{2\sqrt{A}},$$
$`(17a)`$
$$b=\frac{D\sqrt{A}}{B+4\sqrt{A}},$$
$`(17b)`$
where the constant $`\alpha _10`$ and it is determined by Eqs. (16d) and (16e). Furthermore, it is evident to find that Eq. (17b) does not coincide with Eq. (11) with respect to the same parameter $`b`$, which will lead to the their wrong calculation for the first excited state. In fact, they obtained two different relations during their calculation through the compared equation, i. e. $`D=2bc`$ (see Eq. (9) in ) and $`D=2b(c+1)`$ (see Eq. (16) in ). The parameter $`D`$ does not exist if the parameter $`b`$ is not equal to zero. It is another main mistaken that arises their wrong result, that’s to say, it is impossible to discuss the first excited state for the Schrödinger equation by this method. We only discuss the ground state by this simpler $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ method as mentioned above.
As a matter of fact, the normalization constant $`N_0`$ can be calculated in principle from the normalization relation
$$\int _0^{\infty }|R_{\ell }^0|^2𝑑r=1.$$
$`(18)`$
In the course of calculation, making use of the standard integral (Re$`\lambda _1>0`$, Re$`\lambda _2>0`$ and Re$`\nu >0`$)
$$\int _0^{\infty }r^{\nu -1}\mathrm{exp}[-(\lambda _1r+\lambda _2r^{-1})]𝑑r=2\left(\frac{\lambda _2}{\lambda _1}\right)^{\nu /2}K_\nu (2\sqrt{\lambda _1\lambda _2}),$$
$`(19)`$
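This tabulated Bessel integral is easy to verify numerically with SciPy; the test values below are arbitrary and only need to respect the stated positivity conditions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

lam1, lam2, nu = 1.3, 0.7, 2.5            # arbitrary test values with Re > 0

numeric, _ = quad(lambda r: r**(nu - 1) * np.exp(-(lam1 * r + lam2 / r)),
                  0.0, np.inf)
closed = 2.0 * (lam2 / lam1)**(nu / 2) * kv(nu, 2.0 * np.sqrt(lam1 * lam2))
print(numeric, closed)                    # the two agree to quadrature accuracy
```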
which implies
$$N_0=\left[\frac{1}{2(\frac{a}{b})^{\frac{2c+1}{2}}K_{2c+1}(4\sqrt{ab})}\right]^{1/2},$$
$`(20)`$
where the values of the parameters $`b`$ and $`c`$ are given by Eq. (11) and $`a=-\sqrt{A}`$. Figure 1 for the unnormalized radial eigenfunction in three dimensions is plotted in the last section.
3. Solutions in two dimensions
We now generalize this method to the two-dimensional Schrödinger equation. Consider the Schrödinger equation with a potential $`V(r)`$ that depends only on the distance $`r`$ from the origin
$$H\psi =-\left(\frac{1}{r}\frac{\partial }{\partial r}r\frac{\partial }{\partial r}+\frac{1}{r^2}\frac{\partial ^2}{\partial \phi ^2}\right)\psi +V(r)\psi =E\psi .$$
$`(21)`$
Let
$$\psi (r,\phi )=r^{-1/2}R_m(r)e^{\pm im\phi },m=0,1,2,\mathrm{\dots },$$
$`(22)`$
where the radial wave function $`R_m(r)`$ satisfies the following radial equation
$$\frac{d^2R_m(r)}{dr^2}+\left[E-V(r)-\frac{m^2-1/4}{r^2}\right]R_m(r)=0,$$
$`(23)`$
where $`m`$ and $`E`$ denote the angular momentum and energy, respectively. For the solution of Eq. (23), we make an $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ \[6-13\] for the ground state
$$R_m^0(r)=\mathrm{exp}[g_m(r)],$$
$`(24)`$
where
$$g_m(r)=\frac{a_1}{r}+b_1r+c_1\mathrm{ln}r.$$
$`(25)`$
After calculating, we arrive at the following equation
$$\frac{d^2R_m^0(r)}{dr^2}-\left[\frac{d^2g_m(r)}{dr^2}+\left(\frac{dg_m(r)}{dr}\right)^2\right]R_m^0(r)=0.$$
$`(26)`$
Compare Eq. (26) with Eq. (23) and obtain the following sets of equations
$$a_1^2=A,b_1^2=-E,$$
$`(27a)`$
$$2b_1c_1=D,2a_1(1-c_1)=B,$$
$`(27b)`$
$$C+m^2-\frac{1}{4}=c_1^2-2b_1a_1-c_1.$$
$`(27c)`$
It is not difficult to obtain the value of the parameter $`a_1`$ from Eq. (27a), written as $`a_1=\pm \sqrt{A}`$. Likewise, in order to retain the well-behaved solution at $`r\to 0`$ and at $`r\to \infty `$, we choose the negative sign in $`a_1`$, i. e. $`a_1=-\sqrt{A}`$. According to this choice, Eq. (27b) will give the other parameter values as
$$c_1=\frac{B+2\sqrt{A}}{2\sqrt{A}},b_1=\frac{D\sqrt{A}}{B+2\sqrt{A}}.$$
$`(28)`$
Besides, it is straightforward to obtain from Eq. (27c) that
$$C=\frac{B^2}{4A}+\frac{B}{2\sqrt{A}}+\frac{2AD}{B+2\sqrt{A}}-(m^2-1/4),$$
$`(29)`$
which is the constraint on the parameters for the two-dimensional Schrödinger equation with the inverse-power potential.
The eigenvalue $`E`$, however, will be given by Eq. (27a) as
$$E=-\frac{AD^2}{B^2+4A+4B\sqrt{A}}.$$
$`(30)`$
The corresponding eigenfunction Eq. (24) can now be read as
$$R_m^0=Nr^{c_1}\mathrm{exp}\left[{\displaystyle \frac{a_1}{r}}+b_1r\right],$$
$`(31)`$
Similarly, the normalization constant $`N`$ can be calculated in principle from the normalization relation
$$\int _0^{\infty }|R_m^0|^2𝑑r=1.$$
$`(32)`$
According to Eq. (19), we can obtain
$$N=\left[\frac{1}{2(\frac{a_1}{b_1})^{\frac{2c_1+1}{2}}K_{2c_1+1}(4\sqrt{a_1b_1})}\right]^{1/2},$$
$`(33)`$
where the values of the parameters $`a_1,b_1`$ and $`c_1`$ are given above. Figure 2 for the unnormalized radial eigenfunction in two dimensions is plotted in the last section.
We fix the values of the parameters of the potential as follows. The parameters $`A,C,D`$ are first fixed, for example $`A=4.0,C=2.0`$ and $`D=-2.0`$; the value of the parameter $`B`$ is then determined from Eq. (9) and Eq. (29) for the cases in three dimensions and in two dimensions, with $`\ell =0`$ and $`m=0`$, respectively. In this way, the parameter $`B`$ turns out to be $`B=5.87`$ in three dimensions and $`B=5.65`$ in two dimensions. The ground state energies corresponding to these values are $`E=-0.164`$ in three dimensions and $`E=-0.172`$ in two dimensions. As is well known, when studying the properties of the ground state, the lack of normalization of the radial wave functions does not affect their main features. We have plotted the unnormalized radial wave functions in figures 1 and 2 for the three-dimensional and two-dimensional cases, respectively. It is easy to see that figures 1 and 2 are similar to each other, which stems from the same values of the angular momenta, $`\ell =0`$ and $`m=0`$. They would differ if different values of the angular momentum were taken in the calculations.
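The fixing procedure just described amounts to a one-dimensional root search on the constraints. The sketch below (our own code, using SciPy's bracketing root finder) reproduces the quoted values of $`B`$.

```python
import math
from scipy.optimize import brentq

A, C, D = 4.0, 2.0, -2.0

def constraint_3d(B, l=0):        # Eq. (9) rearranged to f(B) = 0
    return (B**2 / (4 * A) + B / (2 * math.sqrt(A))
            + 2 * A * D / (B + 2 * math.sqrt(A)) - l * (l + 1) - C)

def constraint_2d(B, m=0):        # Eq. (29) rearranged to f(B) = 0
    return (B**2 / (4 * A) + B / (2 * math.sqrt(A))
            + 2 * A * D / (B + 2 * math.sqrt(A)) - (m**2 - 0.25) - C)

print(brentq(constraint_3d, 0.1, 20.0))   # about 5.87
print(brentq(constraint_2d, 0.1, 20.0))   # about 5.65
```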
In conclusion, we obtain the exact analytic solutions to the Schrödinger equation with the inverse-power potential $`V(r)=Ar^{-4}+Br^{-3}+Cr^{-2}+Dr^{-1}`$ using a simple $`\mathrm{𝑎𝑛𝑠𝑎𝑡𝑧}`$ for the eigenfunction both in three dimensions and in two dimensions, and simultaneously the constraints on the parameters of the potential are arrived at from the compared equations. Finally, we remark that this simple and intuitive method can be generalized to other potentials. The study of the Schrödinger equation with the asymmetric potential is in progress.
Acknowledgments. This work was supported by the National Natural Science Foundation of China and Grant No. LWTZ-1298 from the Chinese Academy of Sciences.
# Nucleation Theory for Capillary Condensation
## Abstract
This paper is devoted to the thermally activated dynamics of the capillary condensation. We present a simple model which enables us to identify the critical nucleus involved in the transition mechanism. This simple model is then applied to calculate the nucleation barrier, from which we can obtain information on the nucleation time. These results are compared to the numerical simulation of a Landau-Ginzburg model for the liquid-vapor interface combined with a Langevin dynamics.
Humidity is known to strongly affect the mechanical properties of many substances, like granular media or porous materials . Water vapor may for example condense in the pores of the medium to form liquid bridges. This phenomenon is the well known capillary condensation (see e.g. ). The Laplace pressure inside such liquid bridges may reach a few atmospheres, and thus results in high adhesion forces inside the material. More fundamentally, capillary condensation is a confinement induced gas-liquid phase transition. While in the bulk the gas phase has a lower free energy, and is thus the stable phase, a liquid phase can condense in a pore when the liquid partially wets the solid substrate (more specifically when $`\gamma _{SL}<\gamma _{SV}`$, where $`\gamma `$ is a surface tension, $`SL`$ and $`SV`$ denote the solid-liquid and the solid-vapor interfaces, respectively). A basic model of confinement is provided by the slab geometry, for which the fluid is confined between two parallel planar solid walls. Macroscopic considerations based on this model predict a condensation of the liquid phase below a critical distance $`H_c`$ between the solid surfaces satisfying the condition:
$$\mathrm{\Delta }\rho \mathrm{\Delta }\mu =2(\gamma _{SV}-\gamma _{SL})/H_c$$
(1)
where $`\mathrm{\Delta }\rho =\rho _L-\rho _V`$ is the difference between the bulk densities of the liquid and the gas phase, and $`\mathrm{\Delta }\mu =\mu _{sat}-\mu `$ is the (positive) undersaturation in chemical potential ($`\mu _{sat}`$ is the chemical potential at bulk coexistence) . The static part of this transition is now well understood and documented, both from the experimental and theoretical point of view .
On the other hand, the dynamics of the transition have received very little attention. Only indirect information is available in the literature. Experimental studies of the capillary condensation using the Surface Force Apparatus (SFA) report strong metastability of the gas phase when $`H<H_c`$, which persists over macroscopic times . Furthermore the dynamics have been probed indirectly in experiments measuring the time-dependent build-up of cohesion inside divided materials. The latter is indeed a direct measure of the dynamical construction of liquid bridges between the grains of the medium (see ref. for details). From the theoretical side, macroscopic arguments for the topologically equivalent drying transition predict extremely large time-scales for evaporation to occur , while the latter is found to occur much faster in corresponding lattice-gas simulations .
A detailed theory for the dynamics of the capillary condensation providing an estimate of the condensation time is thus still lacking. The dynamics of the bulk gas-liquid transition have, on the contrary, received much more interest . Away from the spinodal line, in oversaturated situations (when $`\mu >\mu _{sat}`$), a nucleation barrier can be constructed in the bulk situation, and a critical nucleus identified. The latter takes the form of a spherical droplet, with a radius $`R_0\sim \gamma _{LV}/\mathrm{\Delta }\mu \mathrm{\Delta }\rho `$ maximizing the free energy of the droplet. Since capillary condensation is also a first order phase transition, it should be possible to identify a critical nucleus away from the spinodal line . However, the situation is more complicated in a confined geometry since the size of the critical nucleus $`R_0`$ competes with other length scales, like the separation $`H`$ between the walls. In the following, we show how to construct the critical nucleus for capillary condensation. A simplified model keeping only the main ingredients for capillary condensation will be considered first. This has the twin advantages of allowing tractable calculations and of capturing the essential features of the physics involved. In our case, chemical potential, total volume and temperature are fixed. Our aim is thus to find the saddle-point of the grand potential of the system corresponding to the critical nucleus.
To simplify the discussion further we first consider a two-dimensional system and a perfect wetting situation: $`\gamma _{SV}=\gamma _{SL}+\gamma _{LV}`$. Both assumptions shall be relaxed at the end. In our simplified description, we shall assume that the phase equilibrium is determined by macroscopic considerations, so that the $`H`$ dependences of the surface tensions are totally neglected. Finally, we assume that the system exhibits the mirror symmetry with respect to the $`H/2`$ plane. Let us consider the situation in which planar liquid films of varying thickness $`e`$ ($`e<H/2`$) develop on both solid surfaces. Following Evans et al. , the grand potential of the system may be written
$$\mathrm{\Omega }=-p_VV_V-p_LV_L+2\gamma _{SL}A+2\gamma _{LV}A$$
(2)
where $`V_V`$ (resp. $`V_L`$) is the volume of the gas (resp. liquid) and $`A`$ is the surface area. Using $`V_L=2Ae`$, $`V_V=A(H-2e)`$ and $`p_V-p_L\simeq \mathrm{\Delta }\rho \mathrm{\Delta }\mu `$, one gets
$$\mathrm{\Delta }\omega (e)\equiv \frac{1}{A}(\mathrm{\Omega }-\mathrm{\Omega }(e=0))=2\mathrm{\Delta }\rho \mathrm{\Delta }\mu e$$
(3)
Note that in the complete wetting situation $`\mathrm{\Omega }(e=0)`$ can be identified with $`\mathrm{\Omega }_V`$, the grand-potential of the system filled with the gas phase only. The situation $`e=H/2`$ corresponds to the opposite case where the two liquid films merge to fill the pore. The grand potential thus exhibits a discontinuity at $`e=H/2`$ corresponding to the disappearance of the two liquid-vapor interfaces, and its value is reduced by $`2\gamma _{LV}A`$. When $`e=H/2`$, expression (3) must then be replaced by $`\mathrm{\Delta }\omega (e=H/2)=-\mathrm{\Delta }\rho \mathrm{\Delta }\mu (H_c-H)`$, where $`H_c`$ is the critical distance defined in eq. (1). One may note that the minimum of the grand potential corresponds to a complete filling of the pore by the liquid phase when $`H<H_c`$, as expected. Up to now, liquid-vapor interfaces were assumed to be planar. If we allow deformations of the interfaces, i.e. if the thickness $`e`$ is now a function of the lateral coordinates, the corresponding cost has to be added to the grand potential. Due to the mirror symmetry assumption, the two films are identical, so that one finds eventually
$$\mathrm{\Delta }\mathrm{\Omega }_{tot}=\int dx\left\{\gamma _{LV}|\nabla e|^2+\mathrm{\Delta }\omega (e)\right\}$$
(4)
with $`\mathrm{\Delta }\mathrm{\Omega }_{tot}=\mathrm{\Omega }(\{e\})-\mathrm{\Omega }_V`$, where a small-slope hypothesis has been made. Extremalization of the grand potential in two dimensions leads to the following Euler-Lagrange equation for $`e(x)`$, where $`x`$ denotes the lateral coordinate:
$$2\gamma _{LV}\frac{d^2e}{dx^2}-\frac{d\mathrm{\Delta }\omega (e)}{de}=0$$
(5)
This last equation is formally equivalent to the mechanical motion of a particle of mass $`2\gamma _{LV}`$, with position $`e`$, in the external potential $`-\mathrm{\Delta }\omega (e)`$; $`x`$ plays the role of time. We look for solutions satisfying $`e=0`$ and $`de/dx=0`$ at infinity. Starting from $`e=0`$, the “particle” is uniformly accelerated until it reaches $`e=H/2`$. We can choose this last point to fix the origin, i.e. $`e(x=0)=H/2`$. At this point, the discontinuity in the potential induces a specular reflection, similar to a collision of the particle with a hard wall: $`de/dx`$ is therefore discontinuous and antisymmetric at $`x=0`$. After a straightforward calculation the complete solution, depicted in fig 1, can now be obtained and is found to have a spatial extension $`x_c=\sqrt{HR_c}`$, with $`R_c=H_c/2`$. Explicitly one gets $`e(x)=(|x|-x_c)^2/2R_c`$ for $`x\in [-x_c;x_c]`$ and zero otherwise.
Let us note that the cusp in the solution at $`x=0`$ stems from the discontinuity of $`\mathrm{\Delta }\omega `$ at $`e=H/2`$ resulting from the assumption of an infinitesimally narrow liquid-vapor interface. Condensation thus occurs through the excitation of short wavelength fluctuations, in agreement with the simulation results for the drying transition . The corresponding energy of the nucleus (per unit length in the perpendicular direction) can be calculated by integrating eq. (4), to obtain
$$\mathrm{\Delta }\mathrm{\Omega }^{\ast }=\frac{4}{3}(\mathrm{\Delta }\mu \mathrm{\Delta }\rho \gamma _{LV})^{1/2}H^{3/2}$$
(6)
This energy can be identified with the energy barrier to overcome in order to condense the liquid phase from the metastable gas phase. When the energy barrier is not too small (compared to $`k_BT`$), the time needed to condense can be estimated to be $`\tau =\tau _0\mathrm{exp}(\mathrm{\Delta }\mathrm{\Omega }^{\ast }/k_BT)`$, with $`\tau _0`$ a “microscopic” time. It is easy to check that $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }`$ corresponds to a saddle-point of the grand-potential. It is greater than both free energies of the gas and liquid phases. Moreover $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }`$ is smaller than the free energy of any other configuration maximizing the grand potential since it is the only solution of finite extension. We postpone the physical interpretation of the results to the end of the letter. We just point out that the parabolic solution obtained above is the small slope approximation to the circle with radius of curvature $`R_c`$.
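As a consistency check, inserting the parabolic profile into eq. (4) reproduces eq. (6): the gradient and bulk contributions are equal,
$$\int _{-x_c}^{x_c}\gamma _{LV}\left(\frac{de}{dx}\right)^2dx=\int _{-x_c}^{x_c}\mathrm{\Delta }\omega (e)dx=\frac{2\gamma _{LV}x_c^3}{3R_c^2},$$
so that, with $`x_c=\sqrt{HR_c}`$ and $`\mathrm{\Delta }\rho \mathrm{\Delta }\mu =\gamma _{LV}/R_c`$, one finds $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }=4\gamma _{LV}x_c^3/3R_c^2=\frac{4}{3}(\mathrm{\Delta }\mu \mathrm{\Delta }\rho \gamma _{LV})^{1/2}H^{3/2}`$.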
In order to verify the previous results, we have conducted numerical simulations of the capillary condensation in two dimensions. We start with a Landau-Ginzburg model for the grand potential of the system confined between two walls. In terms of the local density $`\rho (𝐫)`$, we write the “excess” part of the grand potential $`\mathrm{\Omega }^{ex}=\mathrm{\Omega }+P_{sat}V`$, where $`P_{sat}`$ is the pressure of the system at coexistence, as
$$\mathrm{\Omega }^{ex}=\int d𝐫\left\{\frac{m}{2}|\nabla \rho |^2+W(\rho )+\left(\mathrm{\Delta }\mu +V_{ext}(z)\right)\rho \right\}$$
(7)
In this equation, $`m`$ is a phenomenological parameter; $`V_{ext}(z)`$ is the confining external potential, which we took for each wall as $`V_{ext}(z)=-ϵ(\sigma /(\mathrm{\Delta }z+\sigma ))^3`$, with $`\mathrm{\Delta }z`$ the distance to the corresponding wall; $`ϵ`$ and $`\sigma `$ have the dimensions of an energy and a distance. $`W(\rho )`$ can be interpreted as the negative of the “excess” pressure $`\mu _{sat}\rho -f(\rho )-P_{sat}`$, with $`f(\rho )`$ the free-energy density . As usually done, we assume a phenomenological double well form for $`W(\rho )`$: $`W(\rho )=a(\rho -\rho _V)^2(\rho -\rho _L)^2`$, where $`a`$ is a phenomenological parameter . The system is then driven by a non-conserved Langevin equation for $`\rho `$ :
$$\frac{\partial \rho }{\partial t}=-\mathrm{\Gamma }\frac{\delta \mathrm{\Omega }^{ex}}{\delta \rho }+\eta (𝐫,t)$$
(8)
where $`\mathrm{\Gamma }`$ is a phenomenological friction coefficient and $`\eta `$ is a Gaussian noise field related to $`\mathrm{\Gamma }`$ through the fluctuation-dissipation relationship . An equivalent model has been successfully used for the (bulk) classic nucleation problem . Physically, the non-conserved dynamics assume an infinitely fast transport of matter in the system, which is justified in view of the time-scale involved for condensation. We solved (8) by numerical integration using standard methods, identical to those of ref. . The units of energy and length are such that $`\sigma =ϵ=1`$. Time is in units of $`t_0=(\mathrm{\Gamma }ϵ\sigma ^2)^{-1}`$ with $`\mathrm{\Gamma }=\frac{1}{3}`$. In these units, we took $`m=1.66`$, $`a=3.33`$, $`\rho _L=1`$, $`\rho _V=0.1`$. Typical values of the chemical potential and temperature are $`\mathrm{\Delta }\mu \simeq 0.016`$, $`T\simeq 0.06`$ (which is roughly half the energy barrier between vapor and liquid with the form for $`W(\rho )`$ used in our model). Periodic boundary conditions with periodicity $`L_x`$ were applied in the lateral direction.
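Before turning to the measured condensation times, eq. (6) can be evaluated directly with these parameters. The sketch below is a rough estimate, not part of the authors' analysis: it uses the standard mean-field expression $`\gamma _{LV}=\sqrt{2ma}(\rho _L-\rho _V)^3/6`$ for the surface tension of the double well (neglecting the wall potential), an assumed gap $`H=20\sigma `$, and the effective-thickness correction $`3\ell `$ discussed below.

```python
import numpy as np

m_grad, a, rho_L, rho_V = 1.66, 3.33, 1.0, 0.1   # simulation parameters
dmu, T = 0.016, 0.06
ell, H = 3.8, 20.0                               # film thickness; assumed gap

drho = rho_L - rho_V
gamma = np.sqrt(2*m_grad*a)*drho**3/6            # mean-field surface tension
H_eff = H - 3*ell                                # effective gap (see below)
barrier = (4/3)*np.sqrt(dmu*drho*gamma)*H_eff**1.5    # Eq. (6)
print(gamma, barrier, barrier/T)                 # ~0.40, ~2.6, ~43 => tau >> tau_0
```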
Typically $`L_x\simeq 2H`$ was used, but we have checked that increasing $`L_x`$ up to $`20H`$ does not affect the results for the activation dynamics. We emphasize that this lack of sensitivity is not an obvious result since it is known that the amplitude of capillary waves increases with the lateral dimension of the system for free interfaces . In our case however, the long-range effects of the fluctuations of the liquid film are expected to be screened due to the presence of the external potential. Moreover, as predicted by the model, nucleation should occur via the excitation of localized fluctuations. The observed insensitivity of the results with respect to finite size effects is then an encouraging feature for the model presented in this letter.
The simulated system is initially a gas state filling the whole pore, and its evolution is described by eq. (8). A typical evolution of the mean density in the slit $`\rho (t)`$ is plotted on fig 2. An average over different realizations (from 10 to 30) is next performed to get an averaged time-dependent density $`\overline{\rho }(t)`$. As expected , due to the long range nature of the external potential a thick liquid film of thickness $`\ell `$ rapidly forms on both walls on a short time scale $`\tau _1`$ ($`\ell \simeq 3.8\sigma `$ and $`\tau _1\simeq 5t_0`$ in our case). The first step of the dynamics is thus the wetting of the solid substrates. This process is not thermally activated. In a second stage, fluctuations of the interfaces around their mean value $`\ell `$ induce after a while a sudden coalescence of the films (see fig. 2). This second process has a characteristic time $`\tau `$. It is numerically convenient to define the total coalescence time, $`\tau _1+\tau `$, as the time for the average density in the slab between the two wetting films to reach $`(\rho _V+\rho _L)/2`$, which corresponds in our case to the condition $`\overline{\rho }(\tau +\tau _1)\simeq 0.8`$. The physical results do not depend anyway on the precise definition of $`\tau `$. In fig. 3, we plot the variation of $`\mathrm{ln}(\tau )`$ as a function of the inverse temperature $`1/T`$.
As expected, far from the spinodal (i.e., for large enough $`H`$, $`H\gtrsim 3\ell `$), $`\tau `$ is found to obey an Arrhenius law $`\tau =\tau _0\mathrm{exp}(\mathrm{\Delta }\mathrm{\Omega }^{\ast }/k_BT)`$, where $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }`$ is identified as the energy barrier for nucleation. We now focus on the variations of $`\mathrm{ln}(\tau )`$ as a function of $`L_x`$ and $`H`$, which one assumes to be mainly controlled by the variation of $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }`$. First, as already noticed above, we found no variation of $`\mathrm{ln}(\tau )`$ as a function of $`L_x`$, in agreement with our prediction of a localized critical nucleus. The dependence on $`H`$ ($`\mathrm{\Delta }\mu `$ being fixed) is plotted on fig. 4. The previous model predicts a $`H^{3/2}`$ dependence. However the long range of the external potential (of the van der Waals type) produces thick wetting films whose thickness has to be subtracted from $`H`$. A more careful analysis of the critical nucleus shows in fact that the total effective thickness of the films has to be replaced by $`3\ell `$ (instead of $`2\ell `$), in order to take correctly into account the long range character of the external potential. This result is in agreement with other theoretical and experimental findings for capillary condensation in the presence of van der Waals forces . As seen on fig. 4, a good agreement with the theoretical prediction is found. Dependence on the other parameters ($`\mathrm{\Delta }\mu `$, $`ϵ`$, …) shall be discussed in a longer version of this paper.
The simulation results in the 2D perfect wetting case are thus in agreement with the theoretical predictions. Now it is possible to relax the assumptions made in the presentation above, i.e. perfect wetting, two-dimensional system, small slopes. More generally, one may realize that the extremalization of the grand potential leads on the one hand to the usual Laplace equation, relating the local curvature $`\kappa `$ to the pressure drop, $`\gamma _{LV}\kappa =\mathrm{\Delta }p\simeq \mathrm{\Delta }\mu \mathrm{\Delta }\rho `$; and on the other hand, it fixes the contact angle of the meniscus on the solid substrate according to Young’s law $`\gamma _{LV}\mathrm{cos}\theta =\gamma _{SV}-\gamma _{SL}`$. In our case however, the corresponding nucleus corresponds to a maximum of the grand potential. In two dimensions the general solution is of the same geometrical form as the one obtained above and pictured in fig 1, with a corresponding nucleation energy
$$\mathrm{\Delta }\mathrm{\Omega }^{\ast }=\gamma _{LV}H_c\left\{\alpha -\mathrm{sin}\alpha \mathrm{cos}(\alpha +2\theta )\right\}$$
(9)
with $`\alpha `$ defined through $`\mathrm{cos}(\alpha +\theta )=\mathrm{cos}(\theta )(1-H/H_c)`$. The previous result, eq. (6), is recovered in the limiting case $`\theta =0`$, $`H\ll H_c`$. In three dimensions, the nucleus takes the form of a liquid bridge of finite lateral extension connecting the two solid surfaces, due in particular to the supplementary (negative) axisymmetric curvature. When $`H`$ is small compared to $`H_c`$, this predicts $`\mathrm{\Delta }\mathrm{\Omega }^{\ast }\sim \gamma _{LV}H^2+𝒪(\mathrm{\Delta }\mu )`$, but the full dependence on $`\mathrm{\Delta }\mu `$ for any $`H`$ needs a numerical resolution. This will be done in a forthcoming paper together with the corresponding simulations.
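For a quick numerical exploration, the sketch below evaluates eq. (9) (in units of $`\gamma _{LV}H_c`$) using the constraint on $`\alpha `$ given above, and compares the $`\theta =0`$ values with the small-slope limit; it is an illustration of the formulas as reconstructed here, not an independent result.

```python
import numpy as np

def barrier_2d(theta, h):
    """Eq. (9) in units of gamma_LV*H_c; h = H/H_c, theta = contact angle."""
    alpha = np.arccos(np.cos(theta)*(1.0 - h)) - theta
    return alpha - np.sin(alpha)*np.cos(alpha + 2*theta)

# theta -> 0, H << H_c limit of Eq. (6): (4*sqrt(2)/3)*(H/H_c)**1.5
for h in (0.01, 0.05, 0.2):
    print(h, barrier_2d(0.0, h), (4*np.sqrt(2)/3)*h**1.5)
```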
The authors would like to thank E. Charlaix, J. Crassous and J.C. Geminard for many interesting discussions. This work has been partly supported by the PSMN at ENS-Lyon, and the MENRT under contract 98B0316.
# VLBI imaging of extremely high redshift quasars at 5 GHz
## 1 Introduction
There are about fifty known radio loud quasars at redshift $`z>3`$ with a total flux density at 5 GHz $`S_5\ge 100`$ mJy. Some of them have been imaged at 5 GHz with VLBI (Gurvits et al. LIG92 (1992), LIG94 (1994); Xu et al. XUW95 (1995); Taylor et al. TAY94 (1994); Frey et al. FS97 (1997); Udomprasert et al. UDO97 (1997)). Here we present first epoch VLBI images of a further ten $`z>3`$ quasars. We show that their structural properties are similar to those of other known sources at $`z>3`$. The present sample includes the most distant radio loud quasar known to date, 1428+423 at $`z=4.72`$ (Hook & McMahon H&M98 (1998)).
Our interest in studying the milliarcsecond radio structures in high redshift quasars is motivated in part by their potential usefulness for cosmological tests (e.g. Kellermann KIK93 (1993); Gurvits et al. LIG99 (1999)). Recent analysis of a sample of 151 quasars imaged at 5 GHz with milliarcsecond resolution has led to the conclusion that a simple assumption about the spectral properties of “cores” and “jets” can explain the apparent greater compactness of the sources at higher redshift (Frey et al. FS97 (1997); Gurvits et al. LIG99 (1999)). However, this result is based on a sample with considerable spread of structural properties on the milliarcsecond scale. More data on the milliarcsecond radio structures, especially at high redshift, are needed to study the structural properties of the quasars as well as to test various cosmological models.
## 2 Observations, calibration and data reduction
Five sources (0004+139, 0830+101, 0906+041, 0938+119 and 1500+045) were observed during a single 24 hour observing run using a global VLBI array on 27/28 September 1992. Another five sources (0046+063, 0243+181, 1338+381, 1428+423 and 1557+032) were observed with the European VLBI Network (EVN) and the Hartebeesthoek Radio Astronomical Observatory 26 m antenna in South Africa on 25/26 and 27/28 October 1996. Source coordinates, redshifts and total flux densities at 6 cm are given in Table 1. The parameters of the radio telescopes used in the two experiments are shown in Table 2. The observations were made at 5 GHz in left circular polarization. Data were recorded using the Mk III VLBI system in Mode B with 28 MHz total bandwidth, and correlated at the MPIfR correlator in Bonn, Germany.
Initial calibration was done using the NRAO AIPS package (Cotton COT95 (1995); Diamond DIA95 (1995)). Clock offset and instrumental delay errors were corrected using the strong sources 0804+499 and 0235+164 in the global and the EVN experiments, respectively. Data were fringe-fitted in AIPS with 5 minute solution intervals. We used the system temperatures measured during the observations and previously determined gain curves for each telescope for the initial amplitude calibration, which was then adjusted using amplitude calibrator sources, based on total flux density values measured nearly contemporaneously to our observations with the Effelsberg telescope. For the September 1992 experiment, this was also checked using VLA data obtained in parallel with our VLBI observation. Total flux densities determined from VLBI images were typically 10-15% smaller than those determined from the VLA observations, which may indicate either the presence of extended structures undetectable with VLBI or residual calibration errors.
The Caltech DIFMAP program (Shepherd et al. SHE94 (1994)) was used for self–calibration and imaging, starting with point source models with flux densities consistent with the zero–spacing values. RMS image noises (3 $`\sigma `$) were 0.2-0.4 and 0.6-1.0 mJy/beam (depending on the telescopes’ performance and integrated on–source time) for the global and the EVN experiments, respectively. Plots of self–calibrated correlated flux densities as a function of projected baseline length, as well as clean images resulting from the DIFMAP imaging process are shown in Fig. 1 for both experiments. Image parameters are listed in Table 3. All sources but the most distant one, 1428+423, appear to be well resolved and most of them show asymmetric structure.
We performed model fitting in DIFMAP using self–calibrated $`uv`$-data in order to quantitatively compare these sources with other extremely high redshift quasars. The results of model fitting are listed in Table 4. In all cases we fixed the first component at the phase center. While we searched for the simplest possible model (i.e. the smallest possible number of Gaussian components), not all components can be distinguished as separate features on the maps. In the case of 0004+139, we kept only one component for the extended emission because the position angle of the beam lies close to the source structure direction and the correlated flux density versus $`uv`$–distance plot indicates the presence of a large component.
We also made 14″ resolution VLA D configuration images of the five sources observed in the 1992 global experiment. VLA data were obtained at the same time as the phased array data used for the global VLBI experiment. These images were made using the NRAO AIPS package with typically 3–6 iterations of self–calibration and imaging. We show VLA images of 0830+101 and 1500+045 in Fig. 2a and 2b, respectively. The other three sources appeared unresolved with the VLA in our observations.
## 3 Comments on individual sources
### 0004+139
The spectral indices of the source ($`S\propto \nu ^\alpha `$ throughout this paper) are $`\alpha _{0.365}^{1.4}=0.6`$ and $`\alpha _{1.4}^{4.85}=0.4`$ (White & Becker W&B92 (1992)). It is unresolved with the VLA A-array ($`\sim 400`$ mas resolution) at 5 GHz (Lawrence et al. LAW86 (1986)).
Our VLBI image shows structure extending up to about 10 mas from the core to the SE direction (Fig. 1a). The position angle of the beam is not well suited to resolve the fine details of this jet–like extension. The source is unresolved with the VLA D-array in our experiment with 14″ resolution.
### 0046+063
The source has a flat radio spectrum between 1.4 and 4.85 GHz ($`\alpha _{1.4}^{4.85}=0.0`$, White & Becker W&B92 (1992)). Our VLBI image shows a dominant central component and a prominent secondary component separated by 3.8 mas from the core in the NE direction (Fig. 1b).
### 0243+181
The spectral index of this quasar is $`\alpha _{1.4}^{4.85}=0.1`$ (White & Becker W&B92 (1992)). Apart from the compact core, there is a weak extended feature 4.9 mas to the South (Fig. 1c).
### 0830+101
The source is reported to be unresolved at 5 GHz with the VLA B–array (1.2″ resolution), no extended emission has been found within about 51″ from the core (Lawrence et al. LAW86 (1986)). The spectral index of the source is $`\alpha _{1.4}^{4.85}=0.3`$ (White & Becker W&B92 (1992)). On the VLBI scale, it has two bright components near the core that perhaps delineate a slightly curved jet extending up to $`\sim `$15 mas (Fig. 1d). Our VLA D-array map shows two faint components about 2″ from the core to the SE and NW which resembles a classical double lobe structure (Fig. 2a). However, it is not clear from our VLA image whether these sources are physically related to 0830+101 or they are chance coincidences. The latter seems to be unlikely, but could not be ruled out based on our data.
### 0906+041
Spectral indices of $`\alpha _{0.365}^{1.4}=0.1`$ and $`\alpha _{1.4}^{4.85}=-0.4`$ are given by White & Becker (W&B92 (1992)). If the flux density of the source did not change between the epochs of measurements this indicates that the source may be a Gigahertz Peaked Spectrum (GPS) quasar. This object has been identified as a ROSAT X–ray source (RXJ0909.2+0354). Its flux in the 0.1–2.4 keV range is $`f_x=(9.9\pm 2.7)\times 10^{-13}`$ erg cm<sup>-2</sup> s<sup>-1</sup> (Brinkmann et al. BRI95 (1995)). The source is unresolved with the VLA D-array at 5 GHz. On VLBI scales, the core of 0906+041 is resolved with an extension to the NE (Fig. 1e). A secondary compact component is separated by about 10 mas from the core.
### 0938+119
This source is identified as a quasar by Beaver et al. (BEA76 (1976)) and has a very steep optical continuum more typical of BL Lac objects (Baldwin et al. BAL76 (1976)). The radio continuum peaks near 1 GHz ($`\alpha _{0.365}^{1.4}=0.0`$ and $`\alpha _{1.4}^{4.85}=-0.7`$, White & Becker W&B92 (1992)). Neff & Hutchings (N&H90 (1990)) found radio emission with the VLA at 1.4 GHz on both sides of the radio core extending to 5″ and 2″ from the centre. The source was studied in high energy bands, however, only upper limits are available for X–ray and $`\gamma `$–ray luminosities (Zamorani et al. ZAM81 (1981); Fichtel et al. FIC94 (1994)). The source is resolved by our observations and shows an extension of about 5 mas to the East (Fig. 1f). It is unresolved with the VLA in our experiment.
### 1338+381
This flat spectrum source ($`\alpha _{1.4}^{4.85}=0.0`$, White & Becker W&B92 (1992)) is a candidate IERS radio reference frame object and serves as a link to the HIPPARCOS stellar reference frame (Ma et al. MA97 (1997)). It is being monitored by geodetic VLBI networks at 2.3 and 8.4 GHz. In our 5 GHz imaging experiment the source appears to be resolved and shows a double structure elongated in the S-SW direction with an angular separation of 3.65 mas (Fig. 1g). The component position angle and separation are in very good agreement with a recent 8.4 GHz global VLBI image by Bouchy et al. (BOU98 (1998)). Due to the lower resolution of our image we cannot decide whether their component “c” is present between the two dominant components seen in our image.
### 1428+423
The radio spectral indices of the quasar 1428+423 – also known as GB1428+4217 (Fabian et al. FAB97 (1997); Hook & McMahon H&M98 (1998)) and B3 1428+422 (Véron-Cetty and Véron VER98 (1998)) – are $`\alpha _{0.365}^{1.4}=0.5`$ and $`\alpha _{1.4}^{4.85}=-0.4`$ (White & Becker W&B92 (1992)), which are typical for GPS sources. It is the third highest redshift quasar known to date (Hook & McMahon H&M98 (1998), $`z=4.72`$) and the most distant known radio loud quasar. The quasar was detected in X-rays with the ROSAT High Resolution Imager in the (observed) 0.1–2.4 keV band (Fabian et al. FAB97 (1997)) and with various ASCA detectors in the (observed) band of 0.5–10 keV (Fabian et al. FAB98 (1998)). Both observations are in agreement and indicate that the SED of this source is strongly dominated by X- and $`\gamma `$-ray emission. The X-ray spectrum is remarkably flat. The quasar might be the most luminous steady source in the Universe, with an apparent luminosity in excess of $`10^{47}`$ erg s<sup>-1</sup>. The extreme X-ray luminosity of the quasar 1428+423 suggests that the emission is highly beamed toward us (Fabian et al. FAB97 (1997), FAB98 (1998)).
Our VLBI image (Fig. 1h) is in qualitative agreement with the relativistic beaming model of the source. The quasar appears to be almost unresolved with the VLA at 5 GHz (Laurent-Muehleisen et al. LAU97 (1997)) as well as by our VLBI observations up to 170 M$`\lambda `$, which corresponds to an angular resolution of 2.0$`\times `$1.4 mas. The other two $`z>4`$ quasars imaged with VLBI also appear unresolved (1251$`-`$407 and 1508+572, Frey et al. FS97 (1997)), which suggests that the high $`z`$ quasars may be systematically more compact than their less distant counterparts. Alternatively, as suggested by Fabian et al. (FAB97 (1997)), the highly beamed emission might be responsible for a selection effect resulting in detection of an otherwise weaker population of extremely high redshift quasars.
### 1500+045
This source was detected as a $`5\pm 2.4`$ mJy source at 240 GHz by McMahon et al. (MCM94 (1994)) corresponding to $`\alpha _5^{240}=-0.9`$. The spectral index between 1.4 and 4.85 GHz is $`\alpha _{1.4}^{4.85}=0.2`$ (White & Becker W&B92 (1992)). The source is unresolved with the VLA B–array (1.2″ resolution, Lawrence et al. LAW86 (1986)).
Although the source is resolved, our VLBI image does not show any structure (Fig. 1i). Our VLA image shows an extension to E-NE at about 33″ (Fig. 2b).
### 1557+032
This quasar is an IERS Celestial Reference Frame candidate source (Ma et al. MA97 (1997)). It was also detected with the Parkes–Tidbinbilla interferometer at 2.3 GHz (Duncan et al. DUN93 (1993)) and found to be compact with a total flux density of 376 mJy. Our VLBI observations show that the source is resolved but featureless (Fig. 1j). There is no extended feature found down to 0.5% of the peak brightness.
## 4 Discussion
Frey et al. (FS97 (1997)) studied the parsec scale structural properties of radio loud QSO’s using a sample of 151 quasars in the redshift range of $`0.2<z<4.5`$ observed with sufficiently high resolution at 5 GHz. They determined the flux density ratios of the brightest “jet” and “core” components ($`S_j`$/$`S_c`$) of the sources. The typical angular resolution of those VLBI observations was $`\sim `$1 mas. Because the linear resolution is better for the lowest redshift sources, they introduced a linear size limit to distinguish between jet and core components in order to compare the same linear sizes at different redshifts. One milliarcsecond sets the linear resolution to 7 pc for $`z\gtrsim 1`$ sources up to the highest redshifts represented in the sample ($`H_0`$=80 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0`$=0.1 were used to calculate linear sizes; the angular size of a fixed linear size is practically constant at $`z\gtrsim 1`$ for plausible cosmological models). The value of 7 pc was not used in any quantitative way in their analysis, just as a threshold between cores and jets. Only components outside the core region were considered as jet components. They found a weak overall trend of a decreasing jet to core flux density ratio with increasing redshift which may be explained by the combined effect of the shifts of the emitting frequencies at different redshifts compared to the 5 GHz observing frequency and the different characteristic spectral indices in cores and jets.
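As an illustration of this linear-size threshold, the sketch below evaluates the scale subtended by 1 mas in the quoted cosmology; the Mattig relation used for the distances is our assumption of the standard $`\mathrm{\Lambda }=0`$ Friedmann formula, which the authors do not spell out.

```python
import numpy as np

c_km_s, H0, q0 = 2.998e5, 80.0, 0.1     # H0 in km/s/Mpc, as in the text

def linear_size_pc(theta_mas, z):
    """Linear size subtended by theta_mas at redshift z (Mattig relation)."""
    D_L = (c_km_s/(H0*q0**2))*(q0*z + (q0 - 1)*(np.sqrt(2*q0*z + 1) - 1))
    D_A = D_L/(1 + z)**2                # angular-diameter distance, Mpc
    return D_A*1e6*theta_mas*np.pi/(180*3600e3)     # pc

for z in (1, 2, 3, 4.5):
    print(z, linear_size_pc(1.0, z))    # ~6-7 pc, nearly constant for z > 1
```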
We followed Frey et al. (FS97 (1997)) and calculated the jet to core flux density ratios ($`S_j`$/$`S_c`$) for the $`z>3`$ sources presented in this paper (last column of Table 4). We had to exclude two sources from the analysis, 0004+139 and 1500+045. In the former case, the angular resolution in the direction of the expected jet structure is considerably greater than 1 mas. The quasar 1500+045 may also have jet structure which is not observable in our data due to the unfortunate orientation of the 10.5 mas restoring beam. The $`S_j`$/$`S_c`$ values for the other three sources observed in the 1992 global experiment (0830+101, 0906+041 and 0938+119) should also be interpreted with caution since the restoring beam is very elongated. However, we derived tentative $`S_j`$/$`S_c`$ values because the direction of the jet structure indicated by our maps is nearly perpendicular to the major axis of the beam and the resolution in this direction is about 1 mas. In the case of 0938+119 and 1428+423, an upper limit of the jet flux density was calculated based on the beam sizes and the 3$`\sigma `$ RMS noises on our images.
We added our eight new sources to the sample of Frey et al. (FS97 (1997)). The median $`S_j/S_c`$ values as a function of redshift are shown in Fig. 3. The data for all 159 sources are evenly grouped into 13 bins. Error bars indicate the mean absolute deviation of data points from the median within each bin. Upper limits and measured values are treated similarly. However, the plotted error bars are indicative of the scatter of the data. The solid curve represents the best least squares fit based on the 13 median values. Under the assumption that the intrinsic spectral properties of the sources can be described by a simple power–law dependence, the average difference between jet and core spectral indices can be estimated as $`\alpha _j-\alpha _c=-0.62\pm 0.45`$.
The three circles at the high redshift end of the plot in Fig. 3 show the upper limits of the jet to core flux density ratios for the most distant ($`z>4`$) quasars imaged at 5 GHz with VLBI to date. The sources 1251$`-`$407 ($`z=4.46`$, Shaver et al. SHA96 (1996)) and 1428+423 ($`z=4.72`$, Hook & McMahon H&M98 (1998)) are represented by filled circles. The open circle corresponds to the quasar 1508+572 ($`z=4.30`$, Hook et al. HOO95 (1995)) which also appeared to be unresolved, however, at a considerably lower angular resolution ($`\sim 5`$ mas) than the other sources included in the sample (Frey et al. FS97 (1997)).
We note that in both cases available to date, radio structures in quasars at $`z>4`$ (1251$`-`$407 and 1428+423) appear to be unresolved with a nominal resolution of $`\sim `$1 mas. The third case, 1508+572, albeit with a lower resolution of 5 mas, does not show a jet–like structure either. Qualitatively, it is consistent with the overall trend that steeper spectrum jets are fainter relative to flat spectrum cores at higher redshift because the fixed 5 GHz observing frequency implies a high rest–frame frequency (for $`z>4`$ the emitted frequency $`\nu _{em}=\nu _{obs}(1+z)>25`$ GHz). However, these sources appear to be much more compact than expected from the general trend shown in Fig. 3. Even in the neighboring high redshift bins ($`3<z<4`$), it is unlikely that we find 3 randomly selected sources practically unresolved. A possible explanation for the observed compactness is that the spectral indices of the jet components become steeper with frequency, which results in a relative fading of the components with respect to the core at the high emitting frequencies ($`\gtrsim 25`$ GHz) of the largest redshift sources. Future multi–frequency VLBI observations of more $`z>4`$ radio loud quasars with the highest possible sensitivity and angular resolution should answer the question whether these objects are indeed intrinsically so compact or there is a strong observational selection effect responsible for their particularly compact appearance.
## 5 Conclusion
We have presented 5 GHz VLBI images of ten extremely high redshift ($`z>3`$) quasars including the most distant radio loud quasar known to date (1428+423, $`z=4.72`$). Most of the sources are well resolved and their morphology is asymmetric. Based on fitted Gaussian source model components, we have determined the jet to core flux density ratios. The values obtained are typical of high redshift radio quasars for sources in the redshift range $`3<z<4`$. However, the most distant radio loud quasar, 1428+423, appears to be unusually compact.
###### Acknowledgements.
We are grateful to the staff of the EVN, NRAO and Hartebeesthoek observatories, and the MPIfR correlator for their support of our project. We thank Joan Wrobel for assistance in preparation and analysis of the global VLBI experiment of 1992 described in the paper, and the referee for a number of very helpful suggestions. ZP and SF acknowledge financial support received from the European Union under contract CHGECT 920011, Netherlands Organization for Scientific Research (NWO) and the Hungarian Space Office, and hospitality of JIVE and NFRA during their fellowship in Dwingeloo. LIG acknowledges partial support from the EU under contract no. CHGECT 920011, the NWO programme on the Early Universe and the hospitality of the FÖMI Satellite Geodetic Observatory, Hungary (supported in part through the contract No. ERBCIPDCT940087 and by the Hungarian Space Office). LIG and RGM acknowledge partial support from the TMR Programme, Research Network Contract ERBFMRXCT 96–0034 “CERES”. The National Radio Astronomy Observatory is operated by Associated Universities, Inc. under a Cooperative Agreement with the National Science Foundation. This research has made use of the NASA/IPAC Extragalactic Data Base (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
# REPRESENTATIONS OF THE $`q`$-DEFORMED ALGEBRA $`U_q(\mathrm{iso}_2)`$
M. Havlíček
Department of Mathematics, FNSPE, Czech Technical University
CZ-120 00, Prague 2, Czech Republic
A. U. Klimyk
Institute for Theoretical Physics, Kiev 252143, Ukraine
S. Pošta
Department of Mathematics, FNSPE, Czech Technical University
CZ-120 00, Prague 2, Czech Republic
## Abstract
An algebra homomorphism $`\psi `$ from the $`q`$-deformed algebra $`U_q(\mathrm{iso}_2)`$ with generating elements $`I`$, $`T_1`$, $`T_2`$ and defining relations $`[I,T_2]_q=T_1`$, $`[T_1,I]_q=T_2`$, $`[T_2,T_1]_q=0`$ (where $`[A,B]_q=q^{1/2}AB-q^{-1/2}BA`$) to the extension $`\widehat{U}_q(\mathrm{m}_2)`$ of the Hopf algebra $`U_q(\mathrm{m}_2)`$ is constructed. The algebra $`U_q(\mathrm{iso}_2)`$ at $`q=1`$ leads to the Lie algebra $`\mathrm{iso}_2\simeq \mathrm{m}_2`$ of the group $`ISO(2)`$ of motions of the Euclidean plane. The Hopf algebra $`U_q(\mathrm{m}_2)`$ is treated as a Hopf $`q`$-deformation of the universal enveloping algebra of $`\mathrm{iso}_2`$ and is well-known in the literature.
Not all irreducible representations of $`U_q(\mathrm{m}_2)`$ can be extended to representations of the extension $`\widehat{U}_q(\mathrm{m}_2)`$. Composing the homomorphism $`\psi `$ with irreducible representations of $`\widehat{U}_q(\mathrm{m}_2)`$ we obtain representations of $`U_q(\mathrm{iso}_2)`$. Not all of these representations of $`U_q(\mathrm{iso}_2)`$ are irreducible. The reducible representations of $`U_q(\mathrm{iso}_2)`$ are decomposed into irreducible components. In this way we obtain all irreducible representations of $`U_q(\mathrm{iso}_2)`$ when $`q`$ is not a root of unity. A part of these representations turns into irreducible representations of the Lie algebra iso<sub>2</sub> when $`q\to 1`$. Representations of the other part have no classical analogue.
I. INTRODUCTION
Soon after the definition of the Drinfeld–Jimbo algebras $`U_q(g)`$, corresponding to semisimple Lie algebras $`g`$, the Hopf algebra $`U_q(\mathrm{m}_2)`$ was defined, which is treated as a $`q`$-deformation of the universal enveloping algebra of the Lie algebra $`\mathrm{iso}_2`$ of the group of motions of the Euclidean plane (for a description of this group, its Lie algebra and their representations see, for example, , Chap. 4).
However, there is another $`q`$-deformation of the universal enveloping algebra $`U(\mathrm{iso}_2)`$ of the Lie algebra $`\mathrm{iso}_2`$ which will be denoted by $`U_q(\mathrm{iso}_2)`$. In the general form (that is, for $`U(\mathrm{iso}_n)`$) such $`q`$-deformations were defined in . The Hopf algebra $`U_q(\mathrm{m}_2)`$ is related to the well-known quantum algebra $`U_q(\mathrm{sl}_2)`$ while the associative algebra $`U_q(\mathrm{iso}_2)`$ is connected with the nonstandard $`q`$-deformation $`U_q(\mathrm{so}_3)`$ of the universal enveloping algebra $`U(\mathrm{so}_3)`$ which is sometimes called the Fairlie algebra.
It is known that the theory of representations of the associative algebra $`U_q(\mathrm{so}_3)`$ is richer than that of the algebra $`U_q(\mathrm{sl}_2)`$ \[4–6\]. It was shown recently that the theory of representations of the algebra $`U_q(\mathrm{iso}_2)`$ is also richer than that of the algebra $`U_q(\mathrm{m}_2)`$. In particular, the algebras $`U_q(\mathrm{so}_3)`$ and $`U_q(\mathrm{iso}_2)`$ have irreducible representations of nonclassical type (that is, representations which have no limit at $`q\to 1`$). The paper is devoted to the study of irreducible $`\ast `$-representations of the algebra $`U_q(\mathrm{iso}_2)`$ equipped with $`\ast `$-structures. Irreducible representations of $`U_q(\mathrm{iso}_2)`$ of the classical type are given in .
The aim of the present paper is to study irreducible representations of $`U_q(\mathrm{iso}_2)`$ when this algebra is not equipped with some $``$-structure and to clarify why irreducible representations of $`U_q(\mathrm{iso}_2)`$ of the nonclassical type appear. We do this in the same way as in the case of representations of the algebra $`U_q(\mathrm{so}_3)`$ in . Namely, we relate the algebra $`U_q(\mathrm{iso}_2)`$ with the extension $`\widehat{U}_q(\mathrm{m}_2)`$ of the Hopf algebra $`U_q(\mathrm{m}_2)`$. This allows us to obtain representations of $`U_q(\mathrm{iso}_2)`$ from those of the extended algebra $`\widehat{U}_q(\mathrm{m}_2)`$. We prove that if $`q`$ is not a root of unity, then irreducible representations obtained in this way exhaust, up to equivalence, all irreducible representations of $`U_q(\mathrm{iso}_2)`$.
II. THE ALGEBRAS $`U_q(\mathrm{iso}_2)`$ AND $`\widehat{U}_q(\mathrm{m}_2)`$
The algebra $`U_q(\mathrm{iso}_2)`$ is obtained by a $`q`$-deformation of the standard commutation relations
$$[I,T_2]=T_1,[T_1,I]=T_2,[T_2,T_1]=0$$
of the Lie algebra iso<sub>2</sub>. So, $`U_q(\mathrm{iso}_2)`$ is defined as the complex associative algebra with unit element generated by the elements $`I`$, $`T_1`$, $`T_2`$ satisfying the defining relations
$$[I,T_2]_q:=q^{1/2}IT_2-q^{-1/2}T_2I=T_1,$$
$`(1)`$
$$[T_1,I]_q:=q^{1/2}T_1I-q^{-1/2}IT_1=T_2,$$
$`(2)`$
$$[T_2,T_1]_q:=q^{1/2}T_2T_1-q^{-1/2}T_1T_2=0.$$
$`(3)`$
Note that the elements $`T_2`$ and $`T_1`$ of the algebra $`U_q(\mathrm{iso}_2)`$ do not commute (as they do in the Lie algebra $`\mathrm{iso}_2`$, where these elements correspond to shifts along the axes of the plane). We say that they $`q`$-commute, that is, $`q^{1/2}T_2T_1-q^{-1/2}T_1T_2=0`$. This means that they generate the associative algebra determining the quantum plane.
Unfortunately, a Hopf algebra structure is not known on $`U_q(\mathrm{iso}_2)`$. However, it can be embedded into the Hopf algebra $`U_q(\mathrm{isl}_2)`$ as a Hopf coideal. (The algebra $`U_q(\mathrm{isl}_2)`$ is the $`q`$-deformation of the universal enveloping algebra $`U(\mathrm{isl}_2)`$ of the Lie algebra $`\mathrm{isl}_2`$ of the inhomogeneous Lie group $`ISL(2)`$).
The relations (1)–(3) lead to the Poincaré–Birkhoff–Witt theorem for the algebra $`U_q(\mathrm{iso}_2)`$. This theorem can be formulated as:
Proposition 1. The elements $`T_1^jT_2^kI^l`$, $`j,k,l=0,1,2,\mathrm{}`$, form a basis of the linear space $`U_q(\mathrm{iso}_2)`$.
Indeed, by using the relations (1)–(3) any product of the elements $`I`$, $`T_2`$, $`T_1`$ can be reduced to a sum of the elements $`T_1^jT_2^kI^l`$ with complex coefficients. Using the diamond lemma (or its special case from Subsect. 4.1.5 in ) it is proved that these elements are linear independent. This proves Proposition 1.
Note that by (1) the element $`T_1`$ is not independent: it is determined by the elements $`I`$ and $`T_2`$. Thus, the algebra $`U_q(\mathrm{iso}_2)`$ is generated by $`I`$ and $`T_2`$, but now instead of quadratic relations (1)–(3) we must take the relations
$$I^2T_2-(q+q^{-1})IT_2I+T_2I^2=-T_2,$$
$`(4)`$
$$IT_2^2-(q+q^{-1})T_2IT_2+T_2^2I=0,$$
$`(5)`$
which are obtained if we substitute the expression (1) for $`T_1`$ into (2) and (3). The equation $`q^{1/2}IT_2-q^{-1/2}T_2I=T_1`$ and the relations (4) and (5) restore the relations (1)–(3).
Note that the relation (5) is a relation of Serre’s type in the definition of quantum algebras by V. Drinfeld and M. Jimbo. The relation (4) differs from Serre’s relation by appearance of non-vanishing right hand side.
It is known that the element $`C=T_1^2+T_2^2`$ from the universal enveloping algebra $`U(\mathrm{iso}_2)`$ belongs to the center of this algebra. The analogue of this element in $`U_q(\mathrm{iso}_2)`$ is the element $`C_q=\frac{1}{2}(T_1T_1^{\prime }+T_1^{\prime }T_1)+\frac{1}{2}(q+q^{-1})T_2^2,`$ where $`T_1^{\prime }=q^{-1/2}IT_2-q^{1/2}T_2I`$ (see ), that is $`[C_q,X]:=C_qX-XC_q=0`$ for all $`X\in U_q(\mathrm{iso}_2)`$. This element can be reduced according to Proposition 1 to the form
$$C_q=q^{-1}T_1^2+qT_2^2+q^{-3/2}(1-q^2)T_1T_2I.$$
$`(6)`$
The algebra $`U_q(\mathrm{iso}_2)`$ is closely related to (but does not coincide with) the quantum algebra $`U_q(\mathrm{m}_2)`$. The last algebra is generated by the elements $`q^H`$, $`q^{-H}`$, $`E`$, $`F`$ satisfying the relations
$$q^Hq^{-H}=q^{-H}q^H=1,\qquad q^HEq^{-H}=qE,\qquad q^HFq^{-H}=q^{-1}F,\qquad [E,F]:=EF-FE=0.$$
$`(7)`$
In order to relate the algebras $`U_q(\mathrm{iso}_2)`$ and $`U_q(\mathrm{m}_2)`$ we need to extend $`U_q(\mathrm{m}_2)`$ by the elements $`(q^kq^H+q^{-k}q^{-H})^{-1}`$, $`k\in 𝐙`$, in the sense of . This extension $`\widehat{U}_q(\mathrm{m}_2)`$ is defined as the associative algebra (with unit element) generated by the elements
$$q^H,q^{-H},E,F,(q^kq^H+q^{-k}q^{-H})^{-1},\qquad k\in 𝐙,$$
satisfying the defining relations (7) of the algebra $`U_q(\mathrm{m}_2)`$ and the following natural relations:
$$(q^kq^H+q^{-k}q^{-H})^{-1}(q^kq^H+q^{-k}q^{-H})=(q^kq^H+q^{-k}q^{-H})(q^kq^H+q^{-k}q^{-H})^{-1}=1,$$
$`(8)`$
$$q^{\pm H}(q^kq^H+q^{-k}q^{-H})^{-1}=(q^kq^H+q^{-k}q^{-H})^{-1}q^{\pm H},$$
$`(9)`$
$$(q^kq^H+q^{-k}q^{-H})^{-1}E=E(q^{k+1}q^H+q^{-k-1}q^{-H})^{-1},$$
$`(10)`$
$$(q^kq^H+q^{-k}q^{-H})^{-1}F=F(q^{k-1}q^H+q^{-k+1}q^{-H})^{-1}.$$
$`(11)`$
III. THE ALGEBRA HOMOMORPHISM $`U_q(\mathrm{iso}_2)\to \widehat{U}_q(\mathrm{m}_2)`$
The aim of this section is to give (in an explicit form) the homomorphism of the algebra $`U_q(\mathrm{iso}_2)`$ to $`\widehat{U}_q(\mathrm{m}_2)`$.
Proposition 2. There exists a unique algebra homomorphism $`\psi :U_q(\mathrm{iso}_2)\to \widehat{U}_q(\mathrm{m}_2)`$ such that
$$\psi (I)=\frac{\mathrm{i}}{q-q^{-1}}(q^H-q^{-H}),$$
$`(12)`$
$$\psi (T_2)=(E-F)(q^H+q^{-H})^{-1},$$
$`(13)`$
$$\psi (T_1)=(\mathrm{i}q^{H-1/2}E+\mathrm{i}q^{-H-1/2}F)(q^H+q^{-H})^{-1},$$
$`(14)`$
where $`q^{H+a}:=q^Hq^a`$ for $`a\in 𝐂`$.
Proof. In order to prove this proposition we have to show that the defining relations
$$q^{1/2}\psi (I)\psi (T_2)-q^{-1/2}\psi (T_2)\psi (I)=\psi (T_1),$$
$$q^{1/2}\psi (T_1)\psi (I)-q^{-1/2}\psi (I)\psi (T_1)=\psi (T_2),$$
$$q^{1/2}\psi (T_2)\psi (T_1)-q^{-1/2}\psi (T_1)\psi (T_2)=0.$$
$`(15)`$
of $`U_q(\mathrm{iso}_2)`$ are satisfied. Let us prove the relation (15). (Other relations are proved similarly.) Substituting the expressions (12)–(14) for $`\psi (I)`$, $`\psi (T_2)`$, $`\psi (T_1)`$ into (15) we obtain (after multiplying both sides of the equality by $`(q^H+q^{-H})`$ on the right) the relation
$$q(E-F)Eq^H(qq^H+q^{-1}q^{-H})^{-1}+q(E-F)Fq^{-H}(q^{-1}q^H+qq^{-H})^{-1}-$$
$$-qE^2q^H(qq^H+q^{-1}q^{-H})^{-1}-q^{-1}FEq^{-H}(qq^H+q^{-1}q^{-H})^{-1}+$$
$$+q^{-1}EFq^H(q^{-1}q^H+qq^{-H})^{-1}+qF^2q^{-H}(q^{-1}q^H+qq^{-H})^{-1}=0.$$
The formula (15) is true if and only if this relation is correct. We multiply both its sides by $`(qq^H+q^{-1}q^{-H})(q^{-1}q^H+qq^{-H})`$ on the right and obtain a relation in the algebra $`U_q(\mathrm{m}_2)`$ (that is, without the expressions $`(q^kq^H+q^{-k}q^{-H})^{-1}`$). This relation is easily verified by using the defining relations (7) of the algebra $`U_q(\mathrm{m}_2)`$. Proposition is proved.
IV. DEFINITION OF REPRESENTATIONS OF $`U_q(\mathrm{m}_2)`$ AND $`U_q(\mathrm{iso}_2)`$
From this point we assume that $`q`$ is not a root of unity. Let us define representations of the algebras $`U_q(\mathrm{m}_2)`$ and $`U_q(\mathrm{iso}_2)`$.
Definition. By a representation $`\pi `$ of $`U_q(\mathrm{m}_2)`$ (respectively $`U_q(\mathrm{iso}_2)`$) we mean a homomorphism of $`U_q(\mathrm{m}_2)`$ (respectively $`U_q(\mathrm{iso}_2)`$) into the algebra of linear operators (bounded or unbounded) on a Hilbert space $`ℋ`$, defined on an everywhere dense invariant subspace $`𝒟\subset ℋ`$, such that the operator $`\pi (q^H)`$ (respectively the operator $`\pi (I)`$) can be diagonalized, has a discrete spectrum (with finite multiplicities of spectral points if $`\pi `$ is irreducible) and its eigenvectors belong to $`𝒟`$. Two representations $`\pi `$ and $`\pi ^{\prime }`$ of $`U_q(\mathrm{m}_2)`$ (of $`U_q(\mathrm{iso}_2)`$) on spaces $`ℋ`$ and $`ℋ^{\prime }`$, respectively, are called (algebraically) equivalent if there exist everywhere dense invariant subspaces $`V\subset ℋ`$ and $`V^{\prime }\subset ℋ^{\prime }`$ and a one-to-one linear operator $`A:V\to V^{\prime }`$ such that $`A\pi (a)v=\pi ^{\prime }(a)Av`$ for all $`a\in U_q(\mathrm{m}_2)`$ (respectively, for all $`a\in U_q(\mathrm{iso}_2)`$) and $`v\in V`$.
Remark. Note that the element $`I\in U_q(\mathrm{iso}_2)`$ corresponds to the homogeneous part of the motion group $`ISO(2)`$. As in the classical case, it is natural to demand in the definition of representations of $`U_q(\mathrm{iso}_2)`$ that the operator $`\pi (I)`$ has a discrete spectrum (with finite multiplicities of spectral points for irreducible representations $`\pi `$). Such representations correspond to Harish-Chandra modules of Lie algebras. Note that irreducible $`\ast `$-representations of $`U_q(\mathrm{iso}_2)`$ without the requirement that $`\pi (I)`$ has a discrete spectrum were studied in . It was shown there that the classification of irreducible $`\ast `$-representations by self-adjoint operators in this case is equivalent to the classification of arbitrary families of bounded self-adjoint operators. The classification of irreducible representations (not necessarily $`\ast `$-representations) in this case remains an unsolved problem.
The algebra $`U_q(\mathrm{m}_2)`$ has the following non-trivial irreducible representations:
(a) one-dimensional representations $`\pi _\sigma `$, $`\sigma \in 𝐂`$, $`\sigma \ne 0`$, determined by the formulas $`\pi _\sigma (q^H)=\sigma `$, $`\pi _\sigma (E)=\pi _\sigma (F)=0`$;
(b) infinite dimensional representations $`\pi _{rs}`$, $`r,s\in 𝐂`$, $`r,s\ne 0`$, acting on the Hilbert space $`ℋ`$ with a basis $`|m\rangle `$, $`m\in 𝐙`$, by the formulas
$$\pi _{rs}(q^H)|m\rangle =sq^m|m\rangle ,\qquad \pi _{rs}(E)|m\rangle =r|m+1\rangle ,\qquad \pi _{rs}(F)|m\rangle =-r|m-1\rangle ,\qquad m\in 𝐙.$$
$`(16)`$
We take $`𝒟=\mathrm{lin}\{|m\rangle \,|\,m\in 𝐙\}`$. A direct verification shows:
Proposition 3. The representations $`\pi _{rs}`$ and $`\pi _{r^{\prime }s^{\prime }}`$ ($`r,s,r^{\prime },s^{\prime }\in 𝐂\backslash \{0\}`$) are equivalent if and only if $`r=\pm r^{\prime }`$ and $`s^{\prime }=q^ns`$ for some $`n\in 𝐙`$.
Repeating the reasoning of Sect. 5.2 from , we easily prove
Proposition 4. Every irreducible representation of $`U_q(\mathrm{m}_2)`$ is equivalent to one of the representations (16) or is one-dimensional.
Note that for $`q\to 1`$ the representations $`\pi _{rs}`$ of $`U_q(\mathrm{m}_2)`$ turn into irreducible representations of the universal enveloping algebra $`U(\mathrm{m}_2)`$, that is, all irreducible representations of $`U_q(\mathrm{m}_2)`$ are deformations of the corresponding irreducible representations of $`U(\mathrm{m}_2)`$.
We try to extend representations $`\pi _{r,s}`$ of $`U_q(\mathrm{m}_2)`$ to representations of the extension $`\widehat{U}_q(\mathrm{m}_2)`$ by using the relation
$$\pi ((q^kq^H+q^{-k}q^{-H})^{-1}):=(q^k\pi (q^H)+q^{-k}\pi (q^{-H}))^{-1},\qquad k\in 𝐙.$$
$`(17)`$
Clearly, only those irreducible representations $`\pi _{rs}`$ of $`U_q(\mathrm{m}_2)`$ can be extended to $`\widehat{U}_q(\mathrm{m}_2)`$ for which the operators $`q^k\pi (q^H)+q^{-k}\pi (q^{-H})`$ are invertible. From formulas (16) it is clear that these operators are always invertible for the representations $`\pi _{rs}`$, $`s\ne \pm \mathrm{i}q^n`$, $`n\in 𝐙`$. (For the representations $`\pi _{rs}`$, $`s=\pm \mathrm{i}q^n`$ for some $`n\in 𝐙`$, some of these operators are not invertible since they have zero eigenvalue.) Denoting the extended representations by the same symbols $`\pi _{rs}`$, we can formulate the following statement:
Proposition 5. The algebra $`\widehat{U}_q(\mathrm{m}_2)`$ has the infinite dimensional representations $`\pi _{rs}`$, $`r,s\in 𝐂\backslash \{0\}`$, $`s\ne \pm \mathrm{i}q^n`$ for all $`n\in 𝐙`$, given by the relations (16) and (17). The representations $`\pi _{rs}`$ and $`\pi _{r^{\prime },s^{\prime }}`$ ($`r,s,r^{\prime },s^{\prime }\in 𝐂\backslash \{0\}`$, $`s,s^{\prime }\ne \pm \mathrm{i}q^n`$ for all $`n\in 𝐙`$) are equivalent if and only if $`r=\pm r^{\prime }`$ and $`s^{\prime }=q^ms`$ for some $`m\in 𝐙`$. Any irreducible representation of $`\widehat{U}_q(\mathrm{m}_2)`$ is equivalent to the representation $`\pi _{r,s}`$ for some $`r,s`$ or is a one-dimensional representation.
V. IRREDUCIBLE REPRESENTATIONS OF $`U_q(\mathrm{iso}_2)`$
If $`\pi `$ is a representation of the algebra $`\widehat{U}_q(\mathrm{m}_2)`$ on a space $`ℋ`$, then the mapping $`R`$ of $`U_q(\mathrm{iso}_2)`$ defined as the composition $`R=\pi \circ \psi `$, where $`\psi `$ is the homomorphism from Proposition 2, is a (not necessarily irreducible) representation of $`U_q(\mathrm{iso}_2)`$.
Let us consider the representations
$$R_{rs}=\pi _{rs}\circ \psi $$
of $`U_q(\mathrm{iso}_2)`$, where $`\pi _{rs}`$ are the irreducible representations of $`\widehat{U}_q(\mathrm{m}_2)`$ from Proposition 5. Using formulas (16) and (12)–(14) we find that
$$R_{rs}(I)|m\rangle =\mathrm{i}\frac{sq^m-s^{-1}q^{-m}}{q-q^{-1}}|m\rangle ,$$
$`(18)`$
$$R_{rs}(T_2)|m\rangle =\frac{r}{sq^m+s^{-1}q^{-m}}\{|m+1\rangle +|m-1\rangle \},$$
$`(19)`$
$$R_{rs}(T_1)|m\rangle =\frac{\mathrm{i}q^{1/2}r}{sq^m+s^{-1}q^{-m}}\{sq^m|m+1\rangle -q^{-m}s^{-1}|m-1\rangle \}.$$
$`(20)`$
These operators are defined on the invariant subspace $`𝒟`$ spanned by the basis vectors $`|m\rangle `$. Thus we have proved the following
Proposition 6. Let $`r,s\in 𝐂\backslash \{0\}`$, $`s\ne \pm \mathrm{i}q^n`$ for all $`n\in 𝐙`$. Then the formulas (18)–(20) define a representation $`R_{rs}`$ of the algebra $`U_q(\mathrm{iso}_2)`$.
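These formulas are easy to check numerically. The following minimal numpy sketch (our own illustration, with arbitrary test values of $`q`$, $`s`$, $`r`$) builds truncated matrices for $`R_{rs}(I)`$, $`R_{rs}(T_2)`$, $`R_{rs}(T_1)`$ from (18)–(20) and verifies the defining relations (1)–(3) away from the truncation boundary.

```python
import numpy as np

q, s, r, N = 1.3, 0.7 + 0.2j, 1.1, 8    # test values; basis |m>, m = -N..N
ms = np.arange(-N, N + 1)
a = s*q**ms.astype(float)               # eigenvalues of pi(q^H)
dim = 2*N + 1

I = np.diag(1j*(a - 1/a)/(q - 1/q))     # Eq. (18)
T2 = np.zeros((dim, dim), complex)
T1 = np.zeros((dim, dim), complex)
for k in range(dim):
    c = r/(a[k] + 1/a[k])
    if k + 1 < dim:
        T2[k+1, k] = c                              # Eq. (19)
        T1[k+1, k] = 1j*np.sqrt(q)*c*a[k]           # Eq. (20)
    if k - 1 >= 0:
        T2[k-1, k] = c
        T1[k-1, k] = -1j*np.sqrt(q)*c/a[k]

qc = lambda A, B: np.sqrt(q)*A@B - B@A/np.sqrt(q)   # q-commutator
sl = slice(2, dim - 2)                  # avoid truncation artefacts
print(np.abs(qc(I, T2) - T1)[sl, sl].max())         # ~1e-16, relation (1)
print(np.abs(qc(T1, I) - T2)[sl, sl].max())         # ~1e-16, relation (2)
print(np.abs(qc(T2, T1))[sl, sl].max())             # ~1e-16, relation (3)
```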
We also have
Proposition 7. The representations $`R_{rs}`$ of Proposition 6 are irreducible if $`s\ne \pm \mathrm{i}q^{m+1/2}`$, $`m\in 𝐙`$.
Proof. Let $`\{a\}:=(sq^a-s^{-1}q^{-a})/(q-q^{-1})`$. To prove this proposition we first note that since $`q`$ is not a root of unity and $`s\ne \pm \mathrm{i}q^{m+1/2}`$, $`m\in 𝐙`$, the eigenvalues $`\mathrm{i}\{m\}`$, $`m=0,\pm 1,\pm 2,\mathrm{}`$, of the operator $`R_{rs}(I)`$ are pairwise different.
Let $`V\subset 𝒟`$ be an invariant subspace of the representation $`R_{rs}`$. We need to show that $`V=𝒟`$. Let $`v=\sum _{m_i}\alpha _i|m_i\rangle \in V`$, where $`|m_i\rangle `$ are eigenvectors of $`R_{rs}(I)`$ which are basis vectors of $`ℋ`$. (Note that the sum is finite since $`v\in 𝒟`$.) Let us prove that $`|m_i\rangle \in V`$. We prove this for the case when $`v=\alpha _1|m_1\rangle +\alpha _2|m_2\rangle `$. (The case of a larger number of summands is proved similarly.) We have
$$v^{\prime }:=R_{rs}(I)v=\mathrm{i}\alpha _1\{m_1\}|m_1\rangle +\mathrm{i}\alpha _2\{m_2\}|m_2\rangle .$$
Since $`v,v^{\prime }\in V`$, one derives that
$$\mathrm{i}\{m_1\}v-v^{\prime }=-\mathrm{i}\alpha _2(\{m_2\}-\{m_1\})|m_2\rangle \in V.$$
Since $`\{m_1\}\ne \{m_2\}`$, then $`|m_2\rangle \in V`$ and hence $`|m_1\rangle \in V`$.
In order to prove that $`V=𝒟`$ we obtain from (19) and (20) that
$$\{R_{rs}(T_1)-\mathrm{i}sq^{m_2+1/2}R_{rs}(T_2)\}|m_2\rangle =-\mathrm{i}rq^{1/2}|m_2-1\rangle ,$$
$$\{R_{rs}(T_1)+\mathrm{i}s^{-1}q^{-m_2+1/2}R_{rs}(T_2)\}|m_2\rangle =\mathrm{i}rq^{1/2}|m_2+1\rangle .$$
It follows from these relations that $`V`$ contains the vectors $`|m_2-1\rangle ,|m_2-2\rangle ,\mathrm{}`$ and the vectors $`|m_2+1\rangle ,|m_2+2\rangle ,\mathrm{}`$. This means that $`V=𝒟`$ and the representation $`R_{rs}`$ is irreducible. Proposition is proved.
Note that the representations $`R_{rs}`$ of Proposition 7 turn into irreducible representations of the universal enveloping algebra $`U(\mathrm{iso}_2)`$ when $`q\to 1`$. For this reason, they are called representations of the classical type.
Using Proposition 5 it is easy to show that the representations $`R_{rs}`$ and $`R_{r^{\prime }s^{\prime }}`$ of Proposition 7 are equivalent if and only if $`r^{\prime }=\pm r`$ and $`s^{\prime }=q^ms`$ for some $`m\in 𝐙`$.
Proposition 8. Let $`r\in 𝐂\backslash \{0\}`$ and $`s=\epsilon \mathrm{i}q^{m+1/2}`$, where $`m\in 𝐙`$ and $`\epsilon \in \{1,-1\}`$. Then the representation $`R_{rs}`$ is reducible.
Proof. The eigenvalues of the operator $`R_{rs}(I)`$ are
$$-\epsilon \frac{q^n+q^{-n}}{q-q^{-1}},n=\pm \frac{1}{2},\pm \frac{3}{2},\pm \frac{5}{2},\ldots ,$$
that is, every spectral point has multiplicity 2. The pairs of vectors $`|-m+j`$ and $`|-m-j-1`$, $`j=0,1,2,\ldots `$, correspond to the same eigenvalue. Let us define two subspaces $`V_1`$ and $`V_{-1}`$ by the formulas $`V_{\stackrel{~}{\epsilon }}:=\mathrm{lin}\{|j_{\stackrel{~}{\epsilon }}|j=0,1,2,\ldots \}`$, where
$$|j_{\stackrel{~}{\epsilon }}:=|-m+j+\stackrel{~}{\epsilon }\mathrm{i}(-1)^j|-m-j-1,j=0,1,2,\ldots .$$
$`(21)`$
A direct calculation shows that for $`\stackrel{~}{\epsilon }=1`$ and for $`\stackrel{~}{\epsilon }=-1`$ we have
$$R_{rs}(I)|j_{\stackrel{~}{\epsilon }}=-\epsilon \frac{q^{j+1/2}+q^{-j-1/2}}{q-q^{-1}}|j_{\stackrel{~}{\epsilon }},j=0,1,2,\ldots ,$$
$`(22)`$
$$R_{rs}(T_2)|0_{\stackrel{~}{\epsilon }}=-\epsilon \frac{r}{q^{1/2}-q^{-1/2}}\left(\stackrel{~}{\epsilon }|0_{\stackrel{~}{\epsilon }}+\mathrm{i}|1_{\stackrel{~}{\epsilon }}\right),$$
$`(23)`$
$$R_{rs}(T_2)|j_{\stackrel{~}{\epsilon }}=-\epsilon \frac{\mathrm{i}r}{q^{j+1/2}-q^{-j-1/2}}\left(|j+1_{\stackrel{~}{\epsilon }}+|j-1_{\stackrel{~}{\epsilon }}\right),j=1,2,3,\ldots ,$$
$`(24)`$
$$R_{rs}(T_1)|0_{\stackrel{~}{\epsilon }}=\frac{r}{q^{1/2}-q^{-1/2}}\left(\stackrel{~}{\epsilon }|0_{\stackrel{~}{\epsilon }}+\mathrm{i}q|1_{\stackrel{~}{\epsilon }}\right),$$
$`(25)`$
$$R_{rs}(T_1)|j_{\stackrel{~}{\epsilon }}=\frac{\mathrm{i}r}{q^{j+1/2}-q^{-j-1/2}}\left(q^{j+1}|j+1_{\stackrel{~}{\epsilon }}+q^{-j}|j-1_{\stackrel{~}{\epsilon }}\right),j=1,2,3,\ldots .$$
$`(26)`$
These formulas show that the subspaces $`V_1`$ and $`V_{-1}`$ are invariant with respect to the representation $`R_{rs}`$, that is, this representation is reducible. The proposition is proved.
Let us denote the restrictions of the representation $`R_{rs}`$ of Proposition 8 to the invariant subspaces $`V_1`$ and $`V_{-1}`$ by $`R_r^{\epsilon ,1}`$ and $`R_r^{\epsilon ,-1}`$, respectively. It is seen from formulas (22)–(26) that these operators do not depend on $`s`$, and the index $`s`$ is therefore omitted. These formulas show that $`R_{rs}`$, $`s=\epsilon \mathrm{i}q^{m+1/2}`$, is the direct sum of the representations $`R_r^{\epsilon ,1}`$ and $`R_r^{\epsilon ,-1}`$.
Proposition 9. The representations $`R_r^{\epsilon ,\stackrel{~}{\epsilon }}`$ and $`R_{r^{\prime }}^{\epsilon ^{\prime },\stackrel{~}{\epsilon }^{\prime }}`$ are equivalent if and only if $`(r,\epsilon ,\stackrel{~}{\epsilon })=(r^{\prime },\epsilon ^{\prime },\stackrel{~}{\epsilon }^{\prime })`$.
The proposition is easily proved by using Proposition 5 and formulas (22)–(26).
Theorem 1. (a) Let $`r`$ and $`r^{\prime }`$ be nonzero complex numbers such that $`\mathrm{Re}r>0`$ and $`\mathrm{Re}r^{\prime }>0`$, and let $`\epsilon ,\stackrel{~}{\epsilon },\epsilon ^{\prime },\stackrel{~}{\epsilon }^{\prime }\in \{1,-1\}`$. If $`(\epsilon ,\stackrel{~}{\epsilon },r)\ne (\epsilon ^{\prime },\stackrel{~}{\epsilon }^{\prime },r^{\prime })`$, then the representations $`R_r^{\epsilon ,\stackrel{~}{\epsilon }}`$ and $`R_{r^{\prime }}^{\epsilon ^{\prime },\stackrel{~}{\epsilon }^{\prime }}`$ are irreducible and nonequivalent.
(b) Let $`r\in 𝐂\backslash \{0\}`$, $`\epsilon ,\stackrel{~}{\epsilon }\in \{1,-1\}`$, and let $`r^{\prime },s^{\prime }\in 𝐂\backslash \{0\}`$, $`s^{\prime }\ne \pm \mathrm{i}q^{m+1/2}`$ for all $`m\in 𝐙`$. Then the representations $`R_r^{\epsilon ,\stackrel{~}{\epsilon }}`$ and $`R_{r^{\prime }s^{\prime }}`$ are nonequivalent.
Proof. The irreducibility is proved in the same way as in Proposition 7. In order to prove nonequivalence we note that the spectrum of the operator $`R(I)`$ for any of the representations $`R_r^{+,+}`$, $`R_r^{-,+}`$, $`\mathrm{Re}r>0`$, does not coincide with that of any of the representations $`R_r^{+,-}`$, $`R_r^{-,-}`$, $`\mathrm{Re}r>0`$. Therefore, none of the representations $`R_r^{+,+}`$, $`R_r^{-,+}`$, $`\mathrm{Re}r>0`$, can be equivalent to any of the representations $`R_r^{+,-}`$, $`R_r^{-,-}`$, $`\mathrm{Re}r>0`$.
The operators $`R_r^{ϵ_1,ϵ_2}(T_2)`$, $`ϵ_1,ϵ_2\in \{+,-\}`$, are trace class operators. Their traces are nonzero (there exists only one nonzero diagonal matrix element with respect to the basis $`\{|m^{\prime }\}`$ or the basis $`\{|m^{\prime \prime }\}`$). Since for $`\mathrm{Re}r>0`$ and $`\mathrm{Re}r^{\prime }>0`$, $`r\ne r^{\prime }`$, we have $`\mathrm{Tr}R_r^{+,+}(T_2)\ne \mathrm{Tr}R_{r^{\prime }}^{-,+}(T_2)`$, none of the representations $`R_r^{+,+}`$, $`\mathrm{Re}r>0`$, can be equivalent to any of the representations $`R_{r^{\prime }}^{-,+}`$, $`\mathrm{Re}r^{\prime }>0`$. It is proved similarly that none of the representations $`R_r^{+,-}`$, $`\mathrm{Re}r>0`$, can be equivalent to any of the representations $`R_{r^{\prime }}^{-,-}`$, $`\mathrm{Re}r^{\prime }>0`$. This proves assertion (a). Assertion (b) is proved similarly. The theorem is proved.
Representations of Theorem 1 have no classical limit, since as $`q\to 1`$ the denominators in (22)–(26) vanish. For this reason, these representations are called representations of non-classical type. There are no analogues of such representations for the Lie algebra $`\mathrm{iso}_2`$.
Theorem 2. Every irreducible representation of $`U_q(\mathrm{iso}_2)`$ is equivalent to one of the representations $`R_{rs}`$ and $`R_r^{\epsilon ,\stackrel{~}{\epsilon }}`$ or is one-dimensional. This means that the representations $`R_{rs}`$ and $`R_{r^{\prime }}^{\epsilon ,\stackrel{~}{\epsilon }}`$, $`r,s,r^{\prime }\in 𝐂\backslash \{0\}`$, $`\epsilon ,\stackrel{~}{\epsilon }\in \{1,-1\}`$, defined by the relations (18)–(20) and (22)–(26), respectively, exhaust (up to equivalence) all irreducible representations of $`U_q(\mathrm{iso}_2)`$.
Proof. Let $`R`$ be an irreducible representation of $`U_q(\mathrm{iso}_2)`$. Then it follows from the definition of representations that $`R(I)`$ has an eigenvector $`|0`$. Thus there exists $`s\in 𝐂`$, $`s\ne 0`$, such that
$$R(I)|0=\mathrm{i}[0]_{q,s}|0,$$
where $`[m]_{q,s}:=(sq^m-s^{-1}q^{-m})/(q-q^{-1})`$. Since $`R`$ is irreducible, there exists a complex number $`C`$ such that $`R(C_q)=C`$ (see (6)). We define recursively the vectors
$$|j+1:=R(\mathrm{i}T_1-s^{-1}q^{-j+1/2}T_2)|j,j=0,1,2,\ldots ,$$
$`(27)`$
$$|j-1:=R(\mathrm{i}T_1+sq^{j+1/2}T_2)|j,j=0,-1,-2,\ldots .$$
$`(28)`$
Some of these vectors may be linearly dependent or equal to 0. It follows from (1)–(3) and (6) that
$$R(I)|j=\mathrm{i}[j]_{q,s}|j,j\in 𝐙,$$
$`(29)`$
$$R(\mathrm{i}T_1+sq^{j+3/2}T_2)|j+1=Cq|j,j=0,1,2,\ldots ,$$
$`(30)`$
$$R(\mathrm{i}T_1-s^{-1}q^{-j+3/2}T_2)|j-1=Cq|j,j=0,-1,-2,\ldots .$$
$`(31)`$
As a sample, we prove the relation (30) for $`j\ge 0`$:
$$R(\mathrm{i}T_1+sq^{j+3/2}T_2)|j+1=R(\mathrm{i}T_1+sq^{j+3/2}T_2)R(\mathrm{i}T_1-s^{-1}q^{-j+1/2}T_2)|j=$$
$$=R(-T_1^2+\mathrm{i}sq^{j+3/2}T_2T_1-\mathrm{i}s^{-1}q^{-j+1/2}T_1T_2-q^2T_2^2)|j=$$
$$=qR(-q^{-1}T_1^2-qT_2^2+\mathrm{i}sq^{j+3/2}T_1T_2-\mathrm{i}s^{-1}q^{-j-1/2}T_1T_2)|j=$$
$$=qR(-q^{-1}T_1^2-qT_2^2+q^{3/2}(1-q^{-2})T_1T_2I)|j=qR(C_q)|j=qC|j.$$
We obtain from (27) and (30) that
$$R(T_2)|j=(s^{-1}q^{-j+1/2}+sq^{j+1/2})^{-1}(-|j+1+Cq|j-1),$$
$`(32)`$
$$\mathrm{i}R(T_1)|j=\frac{sq^{j+1/2}}{s^{-1}q^{-j+1/2}+sq^{j+1/2}}|j+1+Cq\left(\frac{s^{-1}q^{-j+1/2}}{s^{-1}q^{-j+1/2}+sq^{j+1/2}}\right)|j-1.$$
$`(33)`$
Let us now consider two cases: (a) $`C=0`$ and (b) $`C\ne 0`$.
(a) $`C=0`$. The formulas (32) and (33) in this case give
$$R(T_2)|j=-(s^{-1}q^{-j+1/2}+sq^{j+1/2})^{-1}|j+1,$$
$`(34)`$
$$\mathrm{i}R(T_1)|j=\frac{sq^{j+1/2}}{s^{-1}q^{-j+1/2}+sq^{j+1/2}}|j+1.$$
$`(35)`$
If the set $`\{|j|j=0,1,2,\ldots \}`$ is linearly independent, it follows from (34) and (35) that $`\mathrm{lin}\{|j,|j+1,\ldots \}`$ is an invariant subspace for any $`j=1,2,3,\ldots `$. Thus the representation is either reducible or one-dimensional.
Now let there exist $`l\in 𝐍`$ such that $`|l`$ is linearly dependent on the linearly independent vectors $`|0,|1,\ldots ,|l-1`$. Since the sequence of numbers $`[j]_{q,s}`$, $`j\in 𝐙`$, does not contain three equal elements, the only possible case is $`|l=\alpha |k`$ for some $`k\in \{0,1,\ldots ,l-1\}`$.
Let us consider the case $`sq^{l-1}\ne \pm \mathrm{i}`$. For $`\alpha \ne 0`$ we get a contradiction with the commutation relations (1)–(3) by applying them to the vector $`|l-1`$. For $`\alpha =0`$ and $`l\ge 2`$ we get a one-dimensional representation on the invariant subspace $`𝐂|l-1`$. For $`\alpha =0`$ and $`l=1`$ we must turn attention to the vectors $`|0,|-1,\ldots `$, and the rest of the proof proceeds by repeating the arguments above and below, except that we work with vectors with negative indices.
Now let us consider the case $`sq^{l-1}=\epsilon \mathrm{i}`$, $`\epsilon =\pm 1`$. Since $`sq^{l-1}=-s^{-1}q^{-l+1}`$, the equalities (34) and (35) make no sense in this case.
For $`\alpha \ne 0`$ we get from the equation $`[l-1+j]_{q,s}=[l-1-j]_{q,s}`$ (valid for all $`j\in 𝐙`$) that $`k=l-2`$. By (27), (30) and the relation $`sq^{l-1}=-s^{-1}q^{-l+1}`$ we have
$$\alpha |k=|l=R(\mathrm{i}T_1-s^{-1}q^{-l+3/2}T_2)|l-1=R(\mathrm{i}T_1+sq^{l-1/2}T_2)|l-1=0.$$
This contradicts the equality $`|l=\alpha |k`$, $`\alpha \ne 0`$. For $`\alpha =0`$ we have from (1) and from $`R(\mathrm{i}T_1+\epsilon \mathrm{i}q^{1/2}T_2)|l-1=0`$ that
$$R(I)R(T_2)|l-1=-\epsilon \frac{q^2+1}{q^2-1}R(T_2)|l-1=\mathrm{i}[l]_{q,s}R(T_2)|l-1$$
and we can redefine $`|l:=R(T_2)|l-1`$. From (3) and (30) we have $`0=R(\mathrm{i}T_1+sq^{l+1/2}T_2)|l`$.
If $`|l`$ is linearly dependent on $`|0,|1,\ldots ,|l-1`$, there exists $`\beta \in 𝐂`$ such that $`|l=\beta |l-2`$. As above, for $`\beta \ne 0`$ we get the contradiction $`\beta |l-1=0`$, and for $`\beta =0`$ we get a one-dimensional representation, since $`𝐂|l-1`$ is an invariant subspace.
If $`|l`$ is linearly independent of the vectors $`|0,|1,\ldots ,|l-1`$, we recursively redefine $`|j+1:=R(\mathrm{i}T_1-s^{-1}q^{-j+1/2}T_2)|j`$, $`j=l,l+1,\ldots `$. Then we again consider two cases:
If there exists $`\gamma \in 𝐂`$ such that $`|l+l^{\prime }=\gamma |l-2-l^{\prime }`$ for some $`l^{\prime }\in \{1,2,\ldots ,l-2\}`$, then we get either a one-dimensional representation on the invariant subspace $`𝐂|l+l^{\prime }-1`$ (when $`\gamma =0`$) or a contradiction by applying (1)–(3) to $`|l-2-l^{\prime }`$.
If $`|l+l^{\prime }`$ is linearly independent of $`|0,|1,\ldots ,|l+l^{\prime }-1`$, then the representation is reducible, since $`\mathrm{lin}\{|j,|j+1,\ldots \}`$ is an invariant subspace for any $`j`$.
(b) $`C\ne 0`$. Consider first the case when $`|l`$ is linearly dependent on the linearly independent vectors $`|0,|1,\ldots ,|l-1`$. This means that $`|l=\alpha |k`$ for some $`k\in \{0,1,\ldots ,l-1\}`$ and some $`\alpha \in 𝐂`$. If $`\alpha =0`$ we get a contradiction, since
$$0=|l=R(\mathrm{i}T_1-s^{-1}q^{-l+3/2}T_2)|l-1=R(\mathrm{i}T_1+sq^{l+1/2}T_2)|l=Cq|l-1,$$
which implies $`C=0`$. For $`\alpha \ne 0`$ we get a contradiction by applying (1)–(3) to the vector $`|l-1`$.
Now consider the case when the vectors $`|j`$, $`j=0,1,2,\ldots `$, are linearly independent. If there exists $`m\in 𝐍`$ such that the vector $`|-m`$ is linearly dependent on the linearly independent vectors $`|j`$, $`j\in \{-m+1,-m+2,\ldots \}`$, we write $`|-m=\beta |p`$ for some $`\beta \in 𝐂`$ and $`p>-m`$. For $`\beta =0`$ we get a contradiction similarly as in the analogous cases above. For $`\beta =Cq`$ and $`p=-m+1`$ we get the representation given (after a suitable rescaling of the basis) by (22)–(26). For $`\beta =Cq`$ and $`p=-m+2`$ we can derive from (1) how the operator $`R(I)`$ acts on the linearly independent vectors $`|p`$ and $`R(T_2)|p-1`$ and see that it cannot be diagonalized on this subspace. Therefore, this case is impossible. For other values of $`\beta `$ and $`p`$ we get a contradiction by applying (1)–(3) to $`|-m+1`$.
Thus the only remaining possible case is when all the vectors $`|j`$, $`j\in 𝐙`$, are linearly independent. Using formulas (29), (32) and (33), in this case we get (after a suitable rescaling of the basis) the representation (18)–(20). The theorem is proved.
It is clear from Theorem 2 that for $`q\in 𝐑`$ the irreducible $`*`$-representations of $`U_q(\mathrm{iso}_2)`$ which can be separated from the representations of Theorem 1 are equivalent to the irreducible $`*`$-representations from . However, this is not seen directly from the formulas for the representations, since the operators of the representations in are given with respect to a basis different from ours. Namely, the authors of diagonalize the operator $`R(T_2)`$, which corresponds to shifts in the group $`ISO(2)`$.
ACKNOWLEDGEMENTS
We thank each other's institutions for hospitality during mutual visits. The research of M. H. and S. P. was supported by research grants from GA Czech Republic. The research of A. U. K. was supported in part by CRDF Grant UP1-309.
REFERENCES
1. L. L. Vaksman and L. I. Korogodskii, Soviet Math. Dokl. 39, 173 (1989).
2. N. Ja. Vilenkin and A. U. Klimyk, Representations of Lie Groups and Special Functions, vol. 1, Kluwer, Dordrecht, 1991.
3. A. U. Klimyk, Preprint ITP–90–37E, Kiev, 1990.
4. M. Odesski, Funct. Anal. Appl. 20, No. 2, 78 (1986).
5. Yu. S. Samoilenko and L. B. Turovska, in: Quantum Groups and Quantum Spaces, Banach Center Publications, vol. 40, Warsaw, 1997, pp. 21–40.
6. M. Havlíček, A. Klimyk and S. Pošta, J. Math. Phys., 40 (1999), to be published.
7. S. D. Silvestrov and L. D. Turowska, J. Funct. Anal., 160, 79 (1998).
8. A. M. Gavrilik and N. Z. Iorgov, Proc. of Second Intern. Conf. Symmetry in Nonlinear Mathematical Physics, Kiev, 1997, 384.
9. G. M. Bergman, Adv. Math. 29, 178 (1978).
10. A. Klimyk and K. Schmüdgen, Quantum Groups and Their Representations, Springer, Berlin, 1997.
11. J. Dixmier, Algèbres Enveloppantes, Gauthier–Villars, 1974.
# Extremal dynamics model on evolving networks
## Abstract
We investigate an extremal dynamics model of evolution with a variable number of units. Due to the addition and removal of units, the topology of the network evolves and the network splits into several clusters. The activity is mostly concentrated in the largest cluster. The time dependence of the number of units exhibits an intermittent structure. Self-organized criticality is manifested by a power-law distribution of forward avalanches, but two regimes with distinct exponents $`\tau =1.98\pm 0.04`$ and $`\tau ^{\prime }=1.65\pm 0.05`$ are found. The distribution of extinction sizes obeys a power law with exponent $`2.32\pm 0.05`$.
Extremal dynamics (ED) models are used in a wide range of problems, from growth in disordered media, dislocation movement, and friction to biological evolution. Among them, the Bak-Sneppen (BS) model plays the role of a testing ground for various analytical as well as numerical approaches. The dynamical system in question is composed of a large number of simple units connected in a network. Each site of the network hosts one unit. The state of each unit is described by a dynamical variable $`b`$, called a barrier. In each step, the unit with the minimum $`b`$ is mutated by updating the barrier. The effect of the mutation on the environment is taken into account by changing $`b`$ also at all neighbors of the minimum site.
A general feature of ED models is avalanche dynamics. Forward $`\lambda `$-avalanches are defined as follows. For fixed $`\lambda `$ we define active sites as those having barrier $`b<\lambda `$. The appearance of one active site can lead to an avalanche-like proliferation of active sites in successive time steps. The avalanche stops when all active sites disappear again. There is a value of $`\lambda `$ for which the probability distribution of avalanche sizes obeys a power law without any parameter tuning, so that ED models are classified as a subgroup of self-organized critical models.
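As an illustration of this definition (our own sketch, not part of the original text), forward $`\lambda `$-avalanche sizes can be read off from the time series of the minimal barrier, an avalanche being a maximal run of consecutive steps with $`b_{\mathrm{min}}<\lambda `$:

```python
def forward_avalanche_sizes(b_min_series, lam):
    """Sizes of forward lambda-avalanches: lengths of maximal runs of
    consecutive time steps whose minimal barrier lies below lam."""
    sizes, run = [], 0
    for b in b_min_series:
        if b < lam:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:                  # close a run still open at the end of the series
        sizes.append(run)
    return sizes
```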
The BS model was originally devised in order to explain the intermittent structure of the extinction events seen in the fossil record. In various versions of the BS model it was found that the avalanche exponent is $`1<\tau \le 3/2`$, where the maximum value $`3/2`$ holds in the mean-field universality class. On the other hand, in experimental data for the distribution of extinction sizes, higher values of the exponent, typically around $`\tau \approx 2`$, are found. An avalanche exponent close to 2 was also measured in ricepile experiments. While there are several models of a different kind which give the generic value $`\tau =2`$, we are not aware of any ED model with such a large value of the exponent.
The universality class a particular model belongs to depends on the topology of the network on which the units are located. Usually, regular hypercubic networks or Cayley trees are investigated. For random neighbor networks, the mean-field solution was found to be exact. The tree models were also found to belong to the mean-field universality class. Recently, the BS model on random networks produced by bond percolation on a fully connected lattice was studied. Two universality classes were found. Above the percolation threshold, the system belongs to the mean-field universality class, while exactly at the percolation threshold the avalanche exponent is different. A dynamics changing the topology in order to drive the network to critical connectivity was suggested. A similar model was investigated recently in the context of autocatalytic chemical reactions.
We present here a further step towards reality. In fact, one can find real systems in which not only the topology of connections evolves, but also the number of units changes. The network develops new connections when a new unit is inserted, and if a unit is removed, its links are broken. This is a typical situation in natural ecologies, where each extinction and speciation event also changes the topology of the ecological network. The same may apply to economics and other areas, where the range of interaction is not determined by physical Euclidean space. This problem was already partially investigated within the mean-field BS model and also in several other models devised for the description of biological evolution, which use approaches different from extremal dynamics.
The purpose of this Letter is twofold. First, to study the evolution of topology in an ED model with a variable number of units. Second, to demonstrate that a large value of the avalanche exponent can be observed if the topology of the underlying network evolves dynamically.
We consider a system composed of a varying number $`n_\mathrm{u}`$ of units connected in a network. In the context of biological evolution, these units are species. The dynamical rules of our model are the following.
(i) Each unit has a barrier $`b`$ against mutations. The unit with minimum $`b`$ is mutated.
(ii) The barrier of the mutated unit is replaced by a new random value $`b^{\prime }`$. The barriers of all its neighbors are also replaced by new random numbers. If $`b^{\prime }`$ is larger than the barriers of all its neighbors, the unit gives birth to a new unit (speciation). If $`b^{\prime }`$ is lower than the barriers of all its neighbors, the unit dies out (extinction). As a boundary condition, we use the following exception: if the network consists of a single isolated unit only, it never dies out.
This rule is motivated by the following considerations. We assume that well-adapted units proliferate more rapidly and the chance of speciation is larger. However, if the local biodiversity, measured by the connectivity of the unit, is larger, there are fewer empty ecological niches and the probability of speciation is lower. On the other hand, poorly adapted units are more vulnerable to extinction, but at the same time larger biodiversity (larger connectivity) may favor survival. Our rule corresponds well to these assumptions: speciation occurs preferably at units with a high barrier and few neighbors, while extinction is more frequent at units with a lower barrier and lower connectivity. Moreover, we suppose that a unit completely isolated from the rest of the ecosystem has a very low chance of survival. This leads to the following rule.
(iii) If a unit dies out, all its neighbors which are not connected to any other unit also die out. We call extinctions of this kind singular extinctions.
Rule (ii) alone implies equal probabilities of adding and removing a unit, while rule (iii) enhances the probability of removal. As a result, the probability of speciation is slightly lower than the probability of extinction. The degree of disequilibrium between the two depends on the topology of the network at the moment and can be quantified by the frequency of singular extinctions. The number of units $`n_\mathrm{u}`$ performs a biased random walk with a reflecting boundary at $`n_\mathrm{u}=1`$. The bias towards small values is not constant, though, but fluctuates as well.
(iv) Extinction means that the unit is removed without any substitution and all its links are broken. Speciation means that a new unit is added into the system, with a random barrier. The new unit is connected to all neighbors of the mutated unit: all links of the “mother” unit are inherited by the “daughter” unit. This rule reflects the fact that the new unit is to a certain extent a copy of the original, so its relations to the environment will initially be similar to those of the old unit. Moreover, if a unit which speciates has only one neighbor, a link between “mother” and “daughter” is also established.
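The rules (i)–(iv) are compact enough to state in code. The sketch below is our own illustrative implementation, not the authors' program; the data layout (dictionaries of neighbor sets and barriers) and the choice to treat a unit without neighbors as speciating are arbitrary assumptions.

```python
import random

def step(net, barriers, next_id):
    """One update of the model. net maps unit id -> set of neighbor ids;
    barriers maps unit id -> barrier value. Returns the next free id."""
    worst = min(barriers, key=barriers.get)          # rule (i): minimal barrier
    b_new = random.random()                          # rule (ii): mutate unit
    barriers[worst] = b_new
    for nb in net[worst]:
        barriers[nb] = random.random()               # ...and all its neighbors
    nb_barriers = [barriers[nb] for nb in net[worst]]
    if not nb_barriers or b_new > max(nb_barriers):  # speciation
        daughter = next_id
        net[daughter] = set(net[worst])              # rule (iv): inherit links
        for nb in net[worst]:
            net[nb].add(daughter)
        if len(net[worst]) <= 1:                     # mother had <= 1 neighbor:
            net[daughter].add(worst)                 # link mother and daughter
            net[worst].add(daughter)
        barriers[daughter] = random.random()
        next_id += 1
    elif len(net) > 1 and b_new < min(nb_barriers):  # extinction
        victims = [worst] + [nb for nb in net[worst]
                             if len(net[nb]) == 1]   # rule (iii): singular ext.
        for v in victims:
            for nb in net.pop(v):
                if nb in net:
                    net[nb].discard(v)
            barriers.pop(v)
    return next_id
```

Starting from a single unit (`net = {0: set()}`, `barriers = {0: random.random()}`) and iterating `step` generates time series of the kind analyzed below.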
The rules described above are illustrated in Fig. 1. Speciation occurs in the transitions from step 369 to 370, 371 to 372, 373 to 374 and 375 to 376. In the transitions from 372 to 373, 374 to 375 and 376 to 377 extinction occurs (the last two also include singular extinctions). We can see that after speciation the neighbors of the new unit have one neighbor more than before, so that as $`n_\mathrm{u}`$ increases, the connectivity of the network also grows.
We investigated the evolution of the network by measuring the time dependence of several quantities. We start the simulation with the initial condition $`n_\mathrm{u}=1`$. A typical result is shown in Fig. 2, where we show the time dependence of the number of units $`n_\mathrm{u}`$, the average connectivity $`\overline{c}`$, and the frequency of singular extinctions $`f_\mathrm{s}`$. The network can split into disconnected clusters, as illustrated in Fig. 1. In Fig. 2 we also show the evolution of the number of clusters in three size categories. We denote by $`n_1`$ the number of the smallest clusters, of size 2; by $`n_2`$ the number of medium-size clusters, larger than 2 and smaller than or equal to $`n_\mathrm{u}/2`$; and by $`n_3`$ the number of clusters larger than $`n_\mathrm{u}/2`$. The value $`n_3=1`$ means that most of the system is concentrated in a single cluster.
We can see that singular extinctions occur in bursts. The passages without singular extinctions, where the number of units evolves like a random walk (due to the equal probability of increasing and decreasing the number of units by 1), are interrupted by short periods where $`n_\mathrm{u}`$ falls to small values and singular extinctions are intense. We can see that very often three events coincide: a high frequency of singular extinctions, a large number of clusters (especially of size 2), and the fact that the largest cluster does not contain most of the network. We observed that mutations occur nearly all the time in the largest cluster. A similar effect was reported also in the Cayley tree models: small isolated portions of the network are very stable and nearly untouched by the evolution.
Figure 2 suggests that the number of units exhibits intermittent drops. This corresponds qualitatively to the punctuated equilibria seen in the fossil data. This feature is new in our model when compared with previous approaches within the BS model, but it resembles the intermittent features of models based on neural networks, Lotka-Volterra equations, and coherent noise. In order to check this property quantitatively, we plot in Fig. 3 the distribution of changes in the number of units during a time interval $`\mathrm{\Delta }t=3\times 10^4`$ steps. We can see that the distribution of drops ($`\mathrm{\Delta }n_\mathrm{u}<0`$) has a power-law tail, which confirms the intermittency.
The distribution of forward $`\lambda `$-avalanches is shown in Fig. 4. We found a power-law distribution for $`\lambda _c=0.016`$ with the exponent $`\tau =1.98\pm 0.04`$. The value of the exponent was found by fitting the data in the interval $`(500,5\times 10^5)`$. Contrary to the BS and related models, we also found a power-law distribution with a different exponent $`\tau ^{\prime }=1.65\pm 0.05`$ for $`\lambda `$ larger than about 0.4. (More precisely, we obtained the value of the exponent by fitting the data in the interval $`(500,10^6)`$ for $`\lambda =0.4`$ and $`\lambda =0.6`$; both values of $`\lambda `$ give the same result.) The data suggest that for $`\lambda >\lambda _c`$ the avalanche size distribution exhibits two regimes, with a crossover around a certain avalanche size $`s_{\mathrm{cross}}`$. For small avalanches, $`s<s_{\mathrm{cross}}`$, the distribution is a power law with exponent $`\tau `$, while for larger avalanches, $`s>s_{\mathrm{cross}}`$, a power law with exponent $`\tau ^{\prime }`$ holds. The crossover $`s_{\mathrm{cross}}`$ grows as $`\lambda `$ approaches $`\lambda _c`$.
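For completeness, a standard way to estimate such exponents from simulated avalanche sizes is the continuous maximum-likelihood estimator; the sketch below is ours and does not reproduce the fixed-interval fits quoted above.

```python
import numpy as np

def powerlaw_exponent_mle(sizes, s_min):
    """Maximum-likelihood estimate of tau for P(s) ~ s**(-tau), s >= s_min
    (continuous approximation)."""
    s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
    return 1.0 + len(s) / np.sum(np.log(s / s_min))
```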
The existence of avalanches for $`\lambda `$ close to 1 is related to the fluctuation of the number of units. We observed that such avalanches start and end mostly when the number of units is very small. Between these events the evolution of the number of units is essentially a random walk, because singular extinctions are rare. This fact can explain why the exponent $`\tau ^{\prime }`$ is not too far from the value $`3/2`$ corresponding to the distribution of first returns to the origin for a random walk. The difference is probably due to the presence of singular extinctions.
The fact that the number of units changes enables us to define the extinction size in a more realistic way than in the previous variants of the BS model. For fixed $`\lambda `$ we count the number of units which were present at the beginning of the $`\lambda `$-avalanche and are no longer present when the avalanche stops. This quantity corresponds better to the term “extinction size” used by paleontologists than the number of units affected by mutations, as it is defined in the BS model.
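Operationally (our own sketch, not from the original text), this observable is a set difference between the unit populations at the start and at the end of the avalanche:

```python
def extinction_size(units_before, units_after):
    """Number of units present at the start of a lambda-avalanche that are
    no longer present when it stops; arguments are iterables of unit ids."""
    return len(set(units_before) - set(units_after))
```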
For $`\lambda =\lambda _c=0.016`$ the distribution of extinction sizes follows a power law with exponent $`\tau _{\mathrm{ext}}=2.32\pm 0.05`$, as shown in Fig. 5. (The value of the exponent was obtained by fitting the data in the interval $`(10,1000)`$.) This value is larger than the exponent 2 observed in the statistics of real biological extinctions, but still closer than the values found in previous modifications of the BS model.
The fact that the network evolves and the number of units fluctuates leads to significantly larger values of the exponent than in the BS model, even greater than the experimental one, while variants of the BS model have values lower than the experimental one. This suggests that the freedom in changing the topology in our model is exaggerated, and that in order to obtain a more realistic model of biological evolution we should look for principles which imply freezing of the topology while allowing the species to be replaced by new ones.
We studied several other modifications of our model in order to check its robustness. For example, a link between the “mother” and “daughter” units was always established, or only a certain fraction of the links connecting the “mother” to its neighbors was inherited by the “daughter”. These modifications affected some aspects of the network dynamics, but the avalanche and extinction statistics were not significantly different.
To sum up, we formulated and studied an extremal dynamics model derived from the Bak-Sneppen model, which exhibits a forward-avalanche exponent close to two due to the annealed topology of the network. The extinction size was defined in a more realistic manner than in previous approaches within the BS model, and the extinction statistics were found to obey a power law with an exponent somewhat larger than two. The value found is closer to the paleontological data than in the previous variants of the BS model.
We wish to thank A. Markoš for useful discussions.
# Metallicity Gradients in the Intracluster Gas of Abell 496
## 1 Introduction
While the metals observed in intracluster gas clearly originate from stars, it remains controversial how the metals got from stars into the intracluster gas. The two global metal enrichment mechanisms considered to be most likely are protogalactic winds from early-type galaxies (Larson & Dinerstein 1975) and ram pressure stripping of gas from galaxies (Gunn & Gott 1972). Early $`Einstein`$ FPCS spectroscopy (Canizares et al. 1982) and more recent ASCA spectroscopy (Mushotzky & Loewenstein 1997; Mushotzky et al. 1996) showed that global intracluster metal abundances are consistent with ejecta from Type II supernovae, which supports the protogalactic wind model. White (1991) showed that the specific energy of intracluster gas is greater than that of cluster galaxies, which also suggests that protogalactic winds injected significant amounts of energy and metals into intracluster gas. However, theoretical uncertainties about the elemental yields from Type II supernovae make it difficult to determine confidently the relative proportion of iron from SN II and SN Ia in intracluster gas (Gibson, Loewenstein & Mushotzky 1997). This uncertainty has allowed others to conclude that as much as $`\sim `$50% of the iron in intracluster gas comes from SN Ia (Ishimaru & Arimoto 1997; Nagataki & Sato 1998). The possible presence of such large quantities of iron from SN Ia is problematic. Is ram pressure stripping so effective that it contaminates the outer parts of clusters nearly as effectively as the central regions? Or is it ejecta from a secondary SN Ia-driven wind phase in ellipticals? Clues about the dominant enrichment mechanism(s) may be found in the detailed spatial distribution of elements in intracluster gas.
An increasing number of galaxy clusters are being found with centrally enhanced metal abundances in their intracluster gas. $`Ginga`$ observations of the Virgo cluster showed that its iron abundance declines from $`\sim `$0.5 solar at the center to $`\sim `$0.1–0.2 solar $`3^{\circ }`$ away (Koyama, Takano & Tawara 1991). White et al. (1994) found central abundance enhancements in Abell 496 and Abell 2142 in joint analyses of $`Ginga`$ LAC and $`Einstein`$ SSS spectra. ASCA observations of the Centaurus cluster show that its iron abundance declines from $`\sim `$solar at the center to $`\sim `$0.3 solar $`15^{\prime }`$ away (Fukazawa et al. 1994). The Perseus cluster also has an abundance gradient near its center (Ulmer et al. 1987; Ponman et al. 1990; Kowalski et al. 1993; Arnaud et al. 1994). More recently, central abundance enhancements were found in ASCA data for Hydra A (Ikebe et al. 1997), AWM 7 (Ezawa et al. 1997; Xu et al. 1997), Abell 2199 and Abell 3571 (Dupke 1998, Dupke & White 2000), as well as in ROSAT (Pislar et al. 1997) and BeppoSAX (Irwin & Bregman 1999) data for Abell 85. The presence of these central abundance enhancements is poorly correlated with global cluster properties.
In this paper we analyze spatially resolved ASCA spectra of Abell 496 and confirm that it has centrally enhanced metal abundances. Abell 496 is a Bautz-Morgan Type I cluster with an optical redshift of $`z=0.0328`$. Adopting a Hubble constant of 50 km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0.5`$, its luminosity distance is $`197h_{50}^{-1}`$ Mpc and $`1^{\prime }=57h_{50}^{-1}`$ kpc. The central cD (MCG -02-12-039) has a total $`B`$ magnitude of $`B_T=13.42`$ (Valentijn 1983) and an optical effective radius of $`r_{\mathrm{eff}}\sim 49h_{50}^{-1}`$ kpc (Schombert 1986). Neither the projected galaxy distribution nor the galaxy velocity distribution in the cluster shows signs of significant substructure, so the cluster appears to be dynamically relaxed (Bird 1993; Zabludoff, Huchra & Geller 1990). Heckman (1981) was the first to suggest that Abell 496 contained a cooling flow, after he detected cool H$`\alpha `$-emitting gas in its central cD galaxy; subsequent optical observations found extended H$`\alpha `$ emission (Cowie et al. 1983). Nulsen et al. (1982) found a soft X-ray component in $`Einstein`$ SSS spectra of this cluster and estimated a cooling accretion rate of $`\sim `$200 $`M_{\odot }`$ yr<sup>-1</sup>, which is consistent with later analyses (Mushotzky 1984; Mushotzky & Szymkowiak 1988; Canizares, Markert & Donahue 1988; Thomas, Fabian & Nulsen 1987; White et al. 1994). In the course of a joint analysis of $`Einstein`$ SSS and $`Ginga`$ LAC spectra, White et al. (1994) found a central abundance enhancement in Abell 496; the differing fields of view of these two instruments allowed a coarsely spatially-resolved analysis. The ASCA observations of Abell 496 have been previously analyzed by Mushotzky (1995) and Mushotzky et al. (1996), who found no evidence of an abundance gradient within the central 1 Mpc; however, the first paper analyzed data from only one of the four ASCA spectrometers, while the second considered data from all four ASCA spectrometers, but only beyond $`3^{\prime }`$ from the center (in order to avoid the spectral influence of the central cooling flow). We include the central cooling flow region in our analysis of data from all four ASCA spectrometers.
## 2 Data Reduction & Analysis
Abell 496 was observed for 40 ksec by ASCA on 20–21 September 1993. ASCA carries four large-area X-ray telescopes, each with its own detector: two Gas Imaging Spectrometers (GIS) and two Solid-State Imaging Spectrometers (SIS). Each GIS has a $`50^{\prime }`$ diameter circular field of view and a usable energy range of 0.7–12 keV; each SIS has a $`22^{\prime }`$ square field of view and a usable energy range of 0.4–10 keV. We selected data taken with high and medium bit rates, with cosmic-ray rigidity values $`\ge `$6 GeV/c, and with elevation angles from the bright Earth of $`20^{\circ }`$ and from the Earth’s limb of $`5^{\circ }`$ (GIS) or $`10^{\circ }`$ (SIS); we also excluded times when the satellite was affected by the South Atlantic Anomaly. Rise time rejection of particle events was performed on GIS data, and hot and flickering pixels were removed from SIS data. The resulting effective exposure times for each instrument are shown in Table 1. Since the cluster fills the spectrometers’ fields of view, we estimated the background from blank sky files provided by the ASCA Guest Observer Facility.
We used XSPEC v9.0 (Arnaud 1996) software to analyze the SIS and GIS spectra separately and jointly. We fit spectra using the mekal and vmekal thermal emission models, which are based on the emissivity calculations of Mewe & Kaastra (cf. Mewe, Gronenschild & van den Oord 1985; Mewe, Lemen & van den Oord 1986; Kaastra 1992), with Fe L calculations by Liedahl, Osterheld & Goldstein (1995). Abundances are measured relative to the solar photospheric values of Anders & Grevesse (1989), in which Fe/H$`=4.68\times 10^{-5}`$ by number. Galactic photoelectric absorption was incorporated using the wabs model (Morrison & McCammon 1983); the Galactic column of absorbing material in this line of sight is $`N_H=4.58\times 10^{20}`$ cm<sup>-2</sup> (Dickey & Lockman 1990; HEASARC NH software). Spectral channels were grouped to have at least 25 counts/channel. Energy ranges were restricted to 0.8–10 keV for the GIS and 0.4–10 keV for the SIS. The minimum projected sizes for regions of spectral extraction (circular radii or annular widths) were typically $`3^{\prime }`$ for the GIS and $`2^{\prime }`$ for the SIS. To maximize the statistical significance of any gradients, we also assessed larger regions in the outer parts. A central region of GIS data with a projected radius of $`2^{\prime }`$ was also analyzed for closer comparison with the SIS. The intracluster gas temperature in Abell 496 is cool enough ($`\sim `$4 keV) that the energy dependence of ASCA’s point spread function does not affect our results significantly. Since our results for spectral fits to the individual instruments are consistent with our results from the joint analysis of all instruments, we will discuss only the joint analysis.
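The channel grouping mentioned above can be illustrated schematically. The helper below is a generic sketch of the criterion, not the actual tool used in the reduction:

```python
def group_min_counts(counts, min_counts=25):
    """Group consecutive spectral channels so that every bin contains at
    least min_counts counts; returns (start, stop) channel index pairs."""
    bins, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            bins.append((start, i + 1))
            start, acc = i + 1, 0
    if acc and bins:         # fold an undersized tail into the final bin
        bins[-1] = (bins[-1][0], len(counts))
    return bins
```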
## 3 Results
### 3.1 Temperature and Abundance Profiles
We jointly fitted thermal models to spectra from all four ASCA instruments. The fitting normalizations for the data from each instrument were allowed to vary independently, in order to compensate for small calibration and spatial extraction differences between the detectors; the normalizations differ by $`<10\%`$ in practice. Figure 1a shows the GIS data with the best fits in both the inner and outer projected regions of the cluster, while Figure 1b shows the same fits to the SIS data from these regions. The resulting fits had $`\chi _\nu ^2\approx 1`$, and the temperature and abundance distributions are shown in Figures 2a & b and Table 2. The temperature rises from $`3.24_{-0.06}^{+0.07}`$ keV within $`2^{\prime }`$ to $`4.40_{-0.13}^{+0.13}`$ keV beyond $`5^{\prime }`$. The central abundance is $`0.53_{-0.04}^{+0.04}`$ solar, falling to $`0.36_{-0.03}^{+0.03}`$ in the outer $`3^{\prime }`$–$`12^{\prime }`$ region. In this combined data set, the central abundance enhancement within $`3^{\prime }`$ is significant at a confidence level of $`\sim `$99%. Confidence contours for the abundances in the two innermost regions are shown in Figure 3. We also used the $`F`$-test to assess the significance of the central abundance enhancement. We simultaneously fitted the spectra from the inner and outer regions, allowing their normalizations, temperatures and abundances to vary independently. We then refit the spectra with the abundances from the two regions tied together. The $`\chi ^2`$ of these latter fits were larger by $`26`$ for an increase of only one degree of freedom. The $`F`$-test indicates that the central abundance enhancement is significant at a level of $`>`$99.99%.
Since there is a moderate cooling flow at the center of Abell 496, we tested whether the abundance gradient described above is an artifact of our choice of spectral model. We added a cooling flow component to the mekal thermal emission model in the central region. The cooling flow spectral model cflow in XSPEC is characterized by maximum and minimum temperatures, an abundance, a slope which parameterizes the temperature distribution of emission measures, and a normalization, which is simply the cooling accretion rate. We adopted the emission measure temperature distribution that corresponds to isobaric cooling flows (zero slope). We tied the maximum temperature of the cooling flow to the temperature of the thermal component, and we fixed the minimum temperature at 0.1 keV. We applied a single (but variable) global absorption to both spectral components and associated an additional, intrinsic absorption component with the cooling flow, placing it at the redshift of the cluster. The addition of the cooling flow component does not significantly affect our results for the central region: the central abundance enhancement remains significant at a confidence level of $`>`$90%. The two-component fits at the center are slightly worse than the isothermal model fits above, but they still have $`\chi _\nu ^2\approx 1`$ (see Table 2). In order to apply the $`F`$-test to these cooling flow model fits, we also simultaneously fit the spectra from inner and outer regions, first allowing the abundances in the respective regions to vary independently, then tying the abundances together. The $`F`$-test implies that the abundance gradient is significant at the $`>`$99.99% level. Since the spectra we analyze are from cylindrical projections of emission measure through various lines of sight through the cluster, the true spatial abundance enhancement at the center will be somewhat stronger than we observe.
### 3.2 Individual Elemental Abundances & Abundance Ratios
We also determined the abundances of individual elements using the vmekal spectral model in XSPEC. A similar analysis for the outer regions ($`>3^{\prime }`$) of Abell 496 was done by Mushotzky et al. (1996), whose individual elemental abundance measures are consistent with our results at the 90% confidence level. In our spectral model fits, the He abundance was fixed at the solar value, while C and N were fixed at 0.3 solar (since ASCA is rather insensitive to C and N, and the derived abundances of other elements are not affected by the particular choice for C and N abundances). Our observed abundances are shown in Table 3 for various projected spatial regions.
Table 3 shows that the iron abundance is best determined and increases $`\sim `$50% from $`0.36\pm 0.03`$ solar in the outer parts ($`>3^{\prime }`$) to a central value of $`0.53\pm 0.05`$ solar. The sulfur abundance also shows a significant gradient, rising from $`0.20\pm 0.16`$ solar beyond $`3^{\prime }`$ to $`0.58\pm 0.20`$ solar at the center. The silicon abundance is $`\sim 0.8\pm 0.2`$ solar, showing no significant gradient within its 90% confidence limits. The best-fitting neon abundance is also nearly solar, showing no significant gradient, while the best-fitting nickel abundance is $`\sim `$2.5 times solar at the center, with a marginally significant decline to solar in the outer parts. Oxygen, magnesium, argon and calcium are poorly constrained.
Theoretical numerical models of supernova yields predict the following elemental ratios relative to solar values: for SN Ia, the W7 model of Nomoto, Thielemann & Yokoi (1984), as updated in Nomoto et al. (1997a), gives
$$\mathrm{O}\approx \mathrm{Mg}\approx 0.035\mathrm{Fe},$$
$$\mathrm{Ne}\approx 0.006\mathrm{Fe},$$
$$\mathrm{Si}\approx \mathrm{S}\approx \mathrm{Ar}\approx \mathrm{Ca}\approx 0.5\mathrm{Fe},$$
$$\mathrm{Ni}\approx 4.8\mathrm{Fe};$$
while for SN II, Nomoto et al. (1997b) find
$$\mathrm{O}\approx \mathrm{Mg}\approx \mathrm{Si}\approx 3.7\mathrm{Fe},$$
$$\mathrm{Ne}\approx \mathrm{S}\approx 2.5\mathrm{Fe},$$
$$\mathrm{Ar}\approx \mathrm{Ca}\approx \mathrm{Ni}\approx 1.7\mathrm{Fe},$$
after integrating their yields over a Salpeter mass function with lower and upper mass limits of 10 and 50 $`M_{\odot }`$, respectively.
Various observed abundance ratios within inner and outer projected spatial regions are shown in Table 4, along with the theoretical expectations for SN Ia and SN II ejecta; the errors associated with the observed abundance ratios are the propagated $`1\sigma `$ errors. Note that several abundance ratios lie significantly outside their theoretical ranges: Ne/Si ($`0^{\prime }`$–$`3^{\prime }`$), Ne/S ($`3^{\prime }`$–$`12^{\prime }`$) and Si/S ($`3^{\prime }`$–$`12^{\prime }`$). Three abundance ratios show marginally significant gradients: Si/S, Si/Ni, and S/Fe.
## 4 Distinguishing the Relative Contributions from SN Ia & SN II
We use the observed abundance ratios (normalized by their solar values) in Table 4 to estimate the relative contribution of SN Ia and SN II to the metal enrichment of the intracluster gas. Such estimates are complicated by uncertainties in both the observations and the theoretical yields. The yield relations above show that the most discriminatory abundance ratios are the ones involving oxygen, magnesium and neon, since their ratios to iron are 2–3 orders of magnitude smaller for SN Ia than for SN II; of these, magnesium is poorly determined, which leaves oxygen and neon. Despite their large fractional uncertainties, the fact that the observed O/Fe and Ne/Fe ratios are of order unity (see Table 4) clearly indicates the presence of SN II ejecta — these ratios are predicted to be less than a few percent for pure SN Ia ejecta. The 90% errors in these abundance ratios are not so large that they can be consistent with SN Ia ejecta alone. On the other hand, the observed O/Fe and Ne/Fe ratios are about half the values predicted for pure SN II ejecta, which may indicate dilution by SN Ia ejecta. The SN Ia iron mass fractions implied by these abundance ratios are listed in Table 4. However, the theoretical iron yields from SN II are uncertain by a factor of $`\sim `$2 (Woosley & Weaver 1995; Arnett 1996; Gibson et al. 1997), so this systematic uncertainty obscures the relative contribution from SN Ia and SN II.
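The arithmetic behind such estimates is a two-component mix: if a fraction $`f`$ of the iron comes from SN Ia, an observed ratio (in solar units) is $`(X/\mathrm{Fe})_{\mathrm{obs}}=f(X/\mathrm{Fe})_{\mathrm{Ia}}+(1-f)(X/\mathrm{Fe})_{\mathrm{II}}`$, which inverts to $`f=[(X/\mathrm{Fe})_{\mathrm{II}}-(X/\mathrm{Fe})_{\mathrm{obs}}]/[(X/\mathrm{Fe})_{\mathrm{II}}-(X/\mathrm{Fe})_{\mathrm{Ia}}]`$. A minimal sketch (the observed value below is illustrative, not taken from Table 4):

```python
def sn_ia_iron_fraction(obs_ratio, ratio_ia, ratio_ii):
    """Fraction of Fe from SN Ia implied by an observed X/Fe ratio (solar
    units), for a linear mix of pure-Ia and pure-II ejecta weighted by
    iron mass."""
    return (ratio_ii - obs_ratio) / (ratio_ii - ratio_ia)

# Si/Fe is ~0.5 for pure SN Ia and ~3.7 for pure SN II (yields quoted above);
# an illustrative observed Si/Fe of 2.1 would imply ~50% of the Fe from SN Ia:
print(sn_ia_iron_fraction(2.1, 0.5, 3.7))   # -> 0.5
```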
The abundance ratios with the smallest fractional errors are Si/Fe, S/Fe and Ni/Fe. The Si/Fe and Ni/Fe ratios in Table 4 indicate a roughly comparable mix of SN II and SN Ia ejecta in the outer parts, while the SN Ia/II mix indicated by the S/Fe ratio is inconsistent with those derived from the Si/Fe and Ni/Fe ratios (we show below that elemental ratios involving sulfur are problematic). The associated SN Ia iron mass fractions are listed in Table 4 for these ratios as well. These three abundance ratios also involve iron, the production of which in SN II models is uncertain by a factor of $`\sim `$2, so we next consider SN Ia/II discriminators that do not involve iron.
Two well-determined ratios independent of iron are Si/Ni and Si/S. The SN Ia/II ratio derived from Si/Ni is consistent with the values derived above (see Table 4), indicating that SN Ia contribute 74% (66%) of the iron mass in the inner $`0^{\prime }`$–$`2^{\prime }`$ ($`0^{\prime }`$–$`3^{\prime }`$) region and 48% in the outer $`3^{\prime }`$–$`12^{\prime }`$ region. However, the best-fit value of the Si/S ratio is outside the theoretical boundaries in the outer parts of Abell 496, although its errors are large enough to be consistent with the expectation for SN II ejecta. Inspection of Table 4 shows that most of the best-fit values of the other ratios involving sulfur in the outer parts are also systematically outside the theoretical range. We conclude, despite the large fractional errors, that sulfur is likely to be overproduced in the SN II models we have adopted. To be consistent with the results from the other elemental ratios above, sulfur production in SNe II should be reduced by a factor of $`\sim `$2–4 relative to the models of Nomoto et al. (1997b). This overproduction of sulfur in other SN II models was noted previously by Mushotzky et al. (1996), Loewenstein & Mushotzky (1996) and Gibson et al. (1997). Recently, Nagataki & Sato (1998) found that this sulfur discrepancy was reduced when they used the theoretical SN II yields of Nagataki et al. (1998), who explored the effects of asymmetric SN II explosions.
Figure 4 summarizes our estimates of the iron mass fraction from SN Ia, as derived from the variety of elemental abundance ratios described above; values for the inner $`0^{\prime }`$–$`2^{\prime }`$ region are indicated by filled circles, while empty circles correspond to $`3^{\prime }`$–$`12^{\prime }`$. The filled and empty symbols are obviously segregated, despite the large individual errors, indicating that the outer region is more dominated by SN II ejecta and the central region is dominated by SN Ia ejecta. The consistency of the results for ratios $`not`$ involving sulfur is particularly noteworthy; in Figure 4 the average theoretical yield of sulfur from SN II has been reduced by a factor of 3.8 to bring the estimates from sulfur ratios more in line with the other estimates.
For our consensus estimate of the iron mass fraction due to SN Ia, we average the SN Ia iron mass fractions for five of the seven abundance ratios discussed above: O/Fe, Ne/Fe, Si/Fe, Ni/Fe and Si/Ni (the two ratios involving sulfur are excluded). Our best estimates for the SN Ia iron mass fraction are $`70\pm 5`$% within $`0^{\prime }`$–$`2^{\prime }`$, $`64\pm 5`$% within $`0^{\prime }`$–$`3^{\prime }`$, and $`50\pm 6`$% within $`3^{\prime }`$–$`12^{\prime }`$ (these values are denoted as $`\mu `$ in Figure 4). Although the exact proportions of SN Ia and SN II ejecta are sensitive to our adopted theoretical SN II yields, this constitutes evidence for an increase in the proportion of iron from SN Ia at the center relative to the outer parts. For our adopted yields, the proportion of iron from SN Ia is $`\sim `$50% larger at the center than in the outer parts. This increased proportion of SN Ia ejecta is such that the central iron abundance enhancement can be attributed wholly to SN Ia. Combining our results for the gradients in both the iron abundance and the relative proportion of SN Ia and II, we find that the iron abundance due to SN II is $`A_{\mathrm{Fe}_{\mathrm{II}}}=0.18\pm 0.06`$ solar in the outer parts ($`3^{\prime }`$–$`12^{\prime }`$) and $`0.16\pm 0.06`$ solar near the center ($`0^{\prime }`$–$`2^{\prime }`$), exhibiting no significant gradient. The iron abundance due to SN Ia is $`A_{\mathrm{Fe}_{\mathrm{Ia}}}=0.18\pm 0.06`$ solar in the outer parts, increasing by a factor of $`\sim `$2 to $`0.38\pm 0.07`$ solar near the center (see Table 5).
While our estimate of the relative proportion of SN Ia/II ejecta depends upon our adopted SN yield models, our qualitative conclusion that there is a large fraction of iron from SN Ia is robust. Fukazawa et al. (1998) found in ASCA spectra of 40 clusters that Si/Fe ratios are lower in cool clusters than in hot clusters, indicating that the proportion of SN Ia ejecta is larger in cool clusters than in hot clusters. Davis, Mulchaey & Mushotzky (1999) recently showed that this trend continues in galaxy groups dominated by early-type galaxies. Given that both SN Ia and SN II produce significant amounts of Si and Fe (although in different proportions), there must be roughly comparable amounts of ejecta from both SN Types in typical clusters to produce the Si/Fe – $`kT`$ trend. Fukazawa et al. (1998) conclude that the SN Ia/II mixture in clusters cannot be as ambiguous as suggested by Gibson et al. (1997).
## 5 Distinguishing between Possible Metal Enrichment Mechanisms
We showed above that iron, nickel and sulfur abundances are centrally enhanced in Abell 496 and that gradients in various elemental ratios indicated that these central abundance enhancements are largely due to SN Ia ejecta. Our more model-dependent result is that $`\sim `$50% of the iron $`3^{\prime }`$–$`12^{\prime }`$ from the cluster center comes from SN Ia. The existence of spatial gradients in abundance ratios implies that the dominant metal injection mechanism near the cluster center must be different than in the outer parts. We will first distinguish between several possible mechanisms for producing central abundance enhancements in the cluster. Then we will assess the relative roles of winds and ram pressure stripping as metal enrichment mechanisms for the bulk of the cluster.
### 5.1 Mechanisms for Creating Central Abundance Enhancements
There are several mechanisms that may cause central abundance enhancements in intracluster gas: 1) ram pressure stripping of the metal-rich gas in cluster galaxies by intracluster gas is more effective at the center, where the intracluster gas density is highest (Nepveu 1981); 2) secular mass loss from the stars in central dominant galaxies may accumulate near the cluster center (White et al. 1994); 3) if galaxies blew winds which were not thoroughly mixed in the intracluster gas, metal abundances would decline outward from the center, since the luminosity density of metal-injecting galaxies falls more rapidly with radius than the intracluster gas density (Koyama et al. 1991; White et al. 1994); 4) even if cluster galaxies blew winds which were generally well-mixed in the intracluster gas, a central dominant galaxy’s wind may be at least partially suppressed, by virtue of its location at the bottom of the cluster’s gravitational potential and in the midst of the highest intracluster gas density (White et al. 1994). Although each of these mechanisms can produce abundance gradients, no individual mechanism can produce spatial gradients in abundance $`ratios`$. The fact that we observe gradients in abundance ratios implies that the dominant metal enrichment mechanism changes spatially.
There are several reasons why ram pressure stripping is unlikely to be the source of SN Ia iron at the center of the cluster. First, the gaseous abundances measured in most early-type galaxies by ASCA (Loewenstein et al. 1994; Matsumoto et al. 1997) and ROSAT (Davis & White 1996) are 0.2-0.4 solar, significantly less than the 0.5-0.6 solar abundance observed at the cluster center. Only the most luminous ellipticals, which also tend to be at the centers of galaxy clusters or groups, are observed to have gaseous abundances of 0.5-1 solar. Second, if ram pressure stripping is the primary source of SN Ia material at the center of the cluster, the cD is the one galaxy in the cluster which should not be stripped. Therefore, the cD should exhibit its accumulated history of SN Ia ejecta, in addition to any ejecta stripped from other galaxies as they passed near the cluster center. The cD should then have more SN Ia iron than expected for its luminosity. However, we show in Appendix A that the SN Ia iron mass to luminosity ratio ($`M_{\mathrm{Fe}_{\mathrm{Ia}}}/L_{B_{\mathrm{E}/\mathrm{S0}}}`$) in the vicinity of the cD is no greater than in the rest of the cluster (see Table 5). This indicates that the bulk of the SN Ia ejecta from the cD has been retained in its vicinity, but has not been supplemented by ejecta from other galaxies. Third, gaseous abundances in ellipticals tend to decline outward from their centers, as observed in NGC 4636 (Matsushita et al. 1997) and other early-type galaxies (Matsushita 1997), since the mass-losing stars in ellipticals exhibit such metallicity gradients; thus, most stripped gas will have even lower abundances than indicated by the global X-ray measures of ellipticals (since global measures are weighted by centrally concentrated emission measures in ellipticals), which are already too low. Finally, since the efficiency of ram pressure stripping depends on the intracluster gas density, which declines strongly with radius, the abundances of SN Ia ejecta should also decline strongly with radius, which is not observed; we see only a factor of two decline in the iron abundance from SN Ia (see Table 5). We conclude that ram pressure stripping is not the cause of the central abundance enhancements in Abell 496. If ram pressure stripping is effective in the cluster, it would act to $`dilute`$ the central abundance enhancement.
To assess whether secularly accumulated stellar mass loss in the cD could cause the observed central abundance enhancement in Abell 496, we compare the cD to a giant elliptical which is not in a rich cluster: NGC 4636 is one of the most X-ray luminous ellipticals and may be at the center of its own small group (Matsushita et al. 1998). Thus, NGC 4636 is not in an environment where it is likely to have been stripped. If giant (unstripped) ellipticals secularly accumulate their stellar mass loss (after a putative wind phase), NGC 4636 should have the same SN Ia iron mass to light ratio as the cD in Abell 496. Instead, we find that the SN Ia Fe mass to light ratio in the central region of Abell 496 is at least $`\sim `$20 times higher than in NGC 4636 (see Appendix A). We conclude that the central abundance enhancements in Abell 496 are not likely to be due to secularly accumulated stellar mass loss in the cD.
If early-type galaxies blew winds which were only locally mixed in the intracluster gas, we would expect to observe an abundance gradient $`A(r)`$ which is proportional to the ratio of the luminosity density of wind-blowing galaxies ($`\mathrm{\ell }_{\mathrm{E}/\mathrm{S0}}`$) to the intracluster gas density, i.e. $`A(r)\propto \mathrm{\ell }_{\mathrm{E}/\mathrm{S0}}(r)/\rho _{\mathrm{gas}}(r)`$. Equivalently, $`A(r)\propto L_{\mathrm{E}/\mathrm{S0}}(r)/M_{\mathrm{gas}}(r)`$, where $`L_{\mathrm{E}/\mathrm{S0}}(r)`$ and $`M_{\mathrm{gas}}(r)`$ are the luminosity of early-type galaxies and the intracluster gas mass enclosed in spherical shells about the center. We derived the luminosity distribution of early-type galaxies in Abell 496 from the galaxy morphology data of Dressler (1980) and the gas density distribution from $`Einstein`$ IPC data (see Appendix A). In Table 5 we show that the ratio $`L_{\mathrm{E}/\mathrm{S0}}/M_{\mathrm{gas}}`$ declines by a factor of $`\sim `$4 from $`0^{\prime }`$–$`2^{\prime }`$ to $`3^{\prime }`$–$`12^{\prime }`$, while the observed iron abundance declines by only $`\sim `$30% (and the SN Ia iron abundance drops by half). Thus, the observed abundance gradient is too shallow to be caused by poorly mixed winds from early-type galaxies in the cluster.
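Schematically, the test in this paragraph compares the observed abundance profile with the prediction for locally mixed winds. A minimal sketch (ours; the shell luminosities and gas masses would come from the Dressler (1980) photometry and the $`Einstein`$ IPC gas masses discussed in Appendix A):

```python
def locally_mixed_wind_profile(L_shell, M_gas_shell):
    """Predicted abundance profile A(r) proportional to L_E/S0 / M_gas in
    each radial shell, normalized to the outermost shell."""
    ratios = [L / M for L, M in zip(L_shell, M_gas_shell)]
    return [x / ratios[-1] for x in ratios]
```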
We propose instead that the central gradients in abundances and abundance ratios in Abell 496 result from a partially suppressed SN Ia-driven wind from the cD. This would be a secondary wind phase, following a more vigorous, unsuppressed protogalactic wind driven by SN II. Such a secondary wind phase in noncentral early-type galaxies can generate the SN Ia enrichment seen in the bulk of the intracluster gas, as we suggest in the next subsection. As we showed above, the SN II iron mass to light ratio in the vicinity of the cD and the lack of a central enhancement in SN II ejecta indicate that it lost the bulk of its SN II ejecta. However, a weaker SN Ia-driven wind would be more readily suppressed at the center of the cluster, due to the depth of the gravitational potential and the high ambient intracluster gas density. SN Ia-driven winds would be less vigorous than the initial SN II-driven winds, since SN Ia inject $`\sim `$10 times less energy per unit iron mass than SN II; the observations indicate that comparable amounts of iron came from SN Ia and SN II, so SN II have injected $`\sim `$10 times more energy into the intracluster gas than SN Ia.
If suppressed winds from central dominant galaxies are the cause of central abundance enhancements in other clusters, the prevalence of such enhancements in cooler clusters may be related to their cooling flow properties. Cool clusters tend to have cooling flows with smaller accretion rates than hot clusters, so the history of prior metal ejection from the central cD may be more likely to survive in cool clusters. For a given cD optical luminosity, the metal ejection is more likely to extend beyond the cooling flow region (since the central intracluster gas density is smaller in cool clusters than hot clusters), and the inward advection of the cooling flows would be less destructive to preexisting abundance gradients than in hotter clusters with higher accretion rates.
### 5.2 Global Enrichment Mechanisms
As mentioned in the introduction, the two metal enrichment mechanisms usually considered for the bulk of intracluster gas are protogalactic winds from early-type galaxies and ram pressure stripping of gas from galaxies in the cluster. The yields of the SN II models that we adopt (Nomoto et al. 1997b) lead us to conclude that nearly 50% of the iron in Abell 496 comes from SN Ia. Similar conclusions were reached for other clusters by Ishimaru & Arimoto (1997) and Nagataki & Sato (1998), who used different theoretical models for SN II. Slightly more than half of the cluster iron comes from SN II, which can be readily attributed to protogalactic wind enrichment. However, the quantity and spatial extent of the iron from SN Ia is problematic: is ram pressure stripping so effective that it contaminates the outer parts of clusters nearly as effectively as the central regions? Or was there a (secondary) galactic wind phase driven by SN Ia?
If ram pressure stripping is effective in the outer parts of the cluster, it should be even more effective at the center, where the intracluster gas density is highest. However, we showed in the previous subsection that ram pressure stripping cannot account for the central concentration of metals. Renzini et al. (1993) have also argued strongly against ram pressure stripping being very significant in clusters, citing the lack of a strong metallicity trend with cluster temperature: hot clusters have higher velocity dispersions and tend to have much higher gas densities than cool clusters, so ram pressure stripping should be much more effective in hot clusters than cool clusters. The lack of a strong metallicity trend with cluster temperature implies that ram pressure is not the major source of intracluster metals.
We propose instead that the bulk of intracluster gas is contaminated by two phases of winds from early-type galaxies: an initial SN II-driven protogalactic wind phase, followed by a secondary, less vigorous SN Ia-driven wind phase. As mentioned in the previous subsection, secondary SN Ia-driven winds would be $`\sim `$10 times less energetic than the initial SN II-driven protogalactic winds. Fukazawa et al. (1998) invoked SN II-driven protogalactic winds to account for their discovery that the proportion of SN Ia ejecta is higher in cool clusters than in hot clusters. They suggested that protogalactic winds were energetic enough that SN II-enriched material was able to escape cool clusters, which have shallower gravitational potentials than hot clusters. Less vigorous secondary SN Ia-driven winds would allow SN Ia-enriched material to escape most galaxies, but not clusters.
The two phase wind scenario we are advocating has not been explicitly modeled to date. Some previous evolutionary models for the gas in ellipticals have presumed the existence of an initial SN II-driven wind, without modeling it, and concentrated on a later SN Ia-driven wind phase (Loewenstein & Mathews 1991; Ciotti et al. 1991). Some of these models assume that the bulk of intracluster iron comes from SN Ia. Other investigators have assumed that the bulk of intracluster iron comes from SN II and model only SN II-driven protogalactic winds in detail (Larson & Dinerstein 1975; David, Forman, & Jones 1991). However, in all recent investigations, present epoch SN Ia rates were adopted which lead to huge overpredictions of the current gaseous abundances in ellipticals: iron abundances were theoretically predicted to be 3-5 times solar, while ASCA and ROSAT observations find abundances to be 0.1-1 solar.
In parlance similar to that of Renzini et al. (1993), we are proposing a wind-outflow-inflow (WOI) model in which the “wind” is driven by SN II, the less vigorous “outflow” is driven by SN Ia, and the subsequent inflow experiences much less contamination by SN Ia than in previous modeling. Generating SN Ia-enriched outflows that can contaminate intracluster gas, but leave current abundances subsolar in elliptical atmospheres, requires rather different SN Ia rate evolution than in previous models. Prior models of SN Ia-driven winds tended to inject roughly the right amount of iron in clusters (to within a factor of $`\sim `$2 or so). However, the SN Ia rate must decline much faster than in previous models if the current SN Ia rate is to be as low as $`0.03`$ SNU (Loewenstein & Mushotzky 1997), in order to match the low iron abundances in elliptical atmospheres. This rate is 3-10 times smaller than previously adopted and is $`4`$ times smaller than the most recent optical estimate of the current SN Ia rate in ellipticals (0.13 SNU; Capellaro et al. 1997). To generate the amount of SN Ia iron observed in intracluster gas, the SN Ia rate at earlier times must be much larger than previously modeled, to compensate for the much lower current rate. These heuristic constraints on the evolution of the SN Ia rate are not yet theoretically motivated.
## 6 Summary
We have carried out a detailed analysis of the distribution of elemental abundances in the intracluster gas of Abell 496. Our main results, which are independent of our choice of supernova yield models, are:
1. The hot gas of Abell 496 has significant abundance gradients: the iron abundance is $`0.36_{-0.03}^{+0.03}`$ solar 3′–12′ from the center, rising $`\sim `$50% to $`0.53_{-0.04}^{+0.04}`$ solar within 2′; nickel and sulfur also have significant central concentrations.
2. There are spatial gradients in elemental abundance $`ratios`$ in this cluster; a variety of abundance ratios individually and collectively indicate that SN Ia ejecta is more dominant in the center than in the outer parts.
3. We find no significant gradient in SN II ejecta.
4. Ram pressure stripping is unlikely to generate the observed central abundance enhancements for several reasons, including the fact that gaseous abundances observed in elliptical atmospheres tend to be substantially less than the abundances observed in the intracluster gas near the center.
5. Two stage galactic winds, consisting of SN II-driven protogalactic winds followed by less energetic SN Ia-driven outflows, are proposed to generate comparable levels (by mass) of iron contamination from SN Ia and SN II in the intracluster gas of Abell 496.
6. Since the secondary SN Ia-driven wind is $`10`$ times less energetic than the SN II-driven wind, it is more likely to be smothered due to the cD being at the bottom of the cluster’s gravitational potential and in the midst of the highest intracluster gas density; such a smothered wind may generate the observed central SN Ia iron abundance enhancement.
Results which are more dependent upon our particular choice of supernova yield models include:
1. SN Ia account for $`\sim `$50% of the iron mass 3′–12′ from the center of the cluster and $`\sim `$70% of the iron mass within 2′.
2. The central iron abundance enhancement can be attributed wholly to the iron associated with the central enhancement of SN Ia ejecta.
Our scheduled Chandra observation of Abell 496 should allow us to trace more accurately the gradients in abundances and abundance ratios, given Chandra’s higher spatial resolution and relative lack of scattering compared to ASCA. Since our suggested mechanism for generating gradients in abundances and abundance ratios should not be specific to just one cluster, we will be applying similar analyses to ASCA data for other clusters, as well.
This work was partially supported by the NSF and the State of Alabama through EPSCoR grant EHR-9108761. REW also acknowledges partial support from NASA grant NAG 5-2574 and a National Research Council Senior Research Associateship at NASA GSFC. This research made use of the HEASARC ASCA database and NED.
## Appendix A Appendix
We calculate the iron mass to luminosity ratio in the vicinity of the cD and for the rest of the cluster. For the cD, we will separately consider the iron from SN Ia and SN II and we will treat the iron from SN Ia in two ways: we will first attribute to the cD all the SN Ia iron in the central region where the abundance is enhanced; however, this may be an overestimate, since the cD’s central location makes it difficult to separate general intracluster gas from its own interstellar medium, even if the latter is substantial. Consequently, we will also consider the possibility that only the “excess” iron at the center (that which gives rise to the central abundance enhancement and to the central change in elemental abundance ratios) was generated by the cD. This “excess” is one third of all the iron within 3′, or half of the iron attributable to SN Ia.
We derived the intracluster gas density distribution in Abell 496 by using Einstein IPC data to determine the shape (core radius and asymptotic slope of a $`\beta `$-model) of its X-ray surface brightness distribution and using the ASCA observations described in §2 to provide the flux normalization. Given the iron abundances listed in Table 3, we calculated the iron mass within the spherical volume contained within 0′–3′ to be $`M_{\mathrm{Fe}}\approx 5.1\times 10^9M_{\odot }`$, as listed in Table 5 (also listed are calculations for 0′–2′). Using our best estimate of the SN Ia iron mass fraction described in §4, we calculated the iron mass from SN Ia to be $`M_{\mathrm{Fe}_{\mathrm{Ia}}}\approx 3.3\times 10^9M_{\odot }`$ within 0′–3′.
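The arithmetic behind such numbers can be sketched as follows (our illustration, not the authors’ actual procedure): an iron mass follows from a gas mass and a fractional iron abundance once a solar Fe/H number ratio and a hydrogen mass fraction are adopted; both of those constants are assumptions of this sketch.

```python
def iron_mass(M_gas, Z_fe, fe_h_sun=4.68e-5, X_H=0.71):
    """Iron mass (solar masses) in hot gas of total mass M_gas (solar
    masses) with iron abundance Z_fe in solar units. fe_h_sun is an
    assumed photospheric solar Fe/H number ratio; X_H is an assumed
    hydrogen mass fraction of the gas."""
    M_H = X_H * M_gas                        # hydrogen mass in the gas
    return Z_fe * fe_h_sun * (55.85 / 1.008) * M_H  # m_Fe/m_H ~ 55.4

# Illustrative inputs only (not the values behind Table 5):
print(iron_mass(5.0e12, 0.5))                # ~4.6e9 solar masses
```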
We derived the galaxies’ optical luminosity distribution in Abell 496 from the galaxy morphological and ($`V`$-band) photometric data of Dressler (1980). We assumed that early-type galaxies are the source of the intracluster iron (cf. Arnaud et al. 1992) and derived their cumulative spherical luminosity distribution from a deprojection of their cumulative surface luminosity distribution. We converted the Dressler (1980) $`V`$ magnitudes to $`B`$ by assuming $`B-V=1`$ for early-type galaxies. For the central cD, we used the photometry of Valentijn (1983), which assigns a total $`B`$ magnitude of $`B_T=13.42`$, giving a blue luminosity of $`L_B=2.6\times 10^{11}h_{50}^{-2}`$ $`L_{\odot }`$. We distributed the luminosity of the cD over several radial bins, using an $`r^{1/4}`$ law with the same effective radius, to avoid an artificial luminosity spike at the center. Table 5 lists the iron mass to optical light ratio for SN Ia and II ejecta within various projected regions, 0′–2′, 0′–3′ and 3′–12′. The errors shown in Table 5 include the iron mass and luminosity errors; the fitting errors from the luminosity deprojection procedure are relatively small, so are not included.
It can be seen from Table 5 that the iron mass to light ratio for SN II ejecta is $`\sim `$2-10 times smaller at the center than in the outer regions. If ellipticals had protogalactic winds driven by SN II, this shows that such a wind in the cD was not suppressed by it being at the cluster center (i.e. by being at the bottom of the gravitational potential well of the cluster and in the midst of the highest intracluster gas density). Given that we see no significant gradient in SN II ejecta (see §4), this suggests that the SN II ejecta in the vicinity of the cD is simply the result of a fairly uniformly mixed contamination in the cluster (in the regions observed).
The nominal SN Ia iron mass to light ratio in the central region is somewhat less than that of the outer parts, but the associated errors are large enough that this difference is not significant. Thus, the bulk of the SN Ia ejecta produced by the cD has been retained in its vicinity.
We also use X-ray observations of an individual elliptical galaxy to see how much gaseous iron it has accumulated per unit optical luminosity and compare with the cD in Abell 496. NGC 4636 is a particularly X-ray luminous elliptical for its optical luminosity, 10° from the center of the Virgo cluster in the Virgo Southern Extension (Nolthenius 1993). Its metric X-ray to optical luminosity ratio is $`\sim `$5 times larger than the median for ellipticals (White & Davis 1997, 1999), indicating that it has been particularly successful in retaining its hot gas. Matsushita et al. (1998) suggest that NGC 4636 may even be at the center of a small group of galaxies, with its emission enhanced by group gas. If this is true, it makes our conclusion even stronger. The optical and X-ray luminosities of NGC 4636 are $`L_B\approx 3\times 10^{10}L_{\odot }`$ and $`L_X\approx 4\times 10^{41}`$ erg s⁻¹, adopting a distance of 17 Mpc. Very deep ASCA exposures of this galaxy have been analyzed by Matsushita et al. (1997; 1998), who found an abundance gradient characterized by a central value of $`\sim `$0.65 solar (converted to the photospheric abundance scale), declining to $`\sim `$0.2 solar 10′ away.
We used the gas distribution of Matsushita et al. (1998) and the abundance distribution of Matsushita et al. (1997) to calculate the iron mass within $`7r_{\mathrm{eff}}`$ in NGC 4636. This encompasses the bulk of its iron content and virtually all of its optical light. We compare this to the iron mass and optical luminosity within 3′ of the center of Abell 496, which encompasses most of the region with enhanced abundances. We find that the SN Ia iron mass to luminosity ratio for the cD is $`\sim `$40 times greater than in NGC 4636. If we attribute to the cD only the “excess” amount of SN Ia iron at the center compared to the rest of the cluster, the SN Ia iron mass to light ratio in the cD is still $`\sim `$20 times higher than within NGC 4636. The discrepancy is actually even greater than indicated, since we have overestimated the SN Ia iron within NGC 4636 by assuming that all of its iron was produced by SN Ia and we have underestimated the SN Ia iron in the vicinity of the cD by restricting ourselves to within 3′, while the SIS data show that abundances are enhanced out to 5′.
# Entanglement of pure states for a single copy
## Abstract
An optimal local conversion strategy between any two pure states of a bipartite system is presented. It is optimal in that the probability of success is the largest achievable if the parties which share the system, and which can communicate classically, are only allowed to act locally on it. The study of optimal local conversions sheds some light on the entanglement of a single copy of a pure state. We propose a quantification of such an entanglement by means of a finite minimal set of new measures from which the optimal probability of conversion follows.
A proper quantification of entanglement is a priority in quantum information theory, for many of its applications rely on quantum correlations as a necessary resource. Such a quantification of the non-local resources of a state should provide us with a detailed account of which tasks can be accomplished with it, or more specifically – since we are in the quantum kingdom –, with the probability with which a given task can be accomplished.
Our work addresses the quantification of the entanglement of pure states shared by two parties. So far its complete quantification has been achieved only in a very specific limit, namely when the parties share infinitely many identical copies of a given state. On the other hand some authors have initiated the study of the entanglement of a finite number of copies of pure states. This finite framework, which is the one involved in the realistic situations one encounters in a lab, will also be the objective of this work. The approach taken here relies on the study of optimal local transformations and of magnitudes that have a monotone behavior under any local manipulation of the system, which will be referred to as entanglement monotones. We present a new set of such entanglement monotones and argue that they together quantify uniquely the entanglement of pure states in a physically relevant fashion.
Our starting question is the following: suppose that Alice and Bob share a pure entangled state $`\mathrm{\Psi }`$ and that they would like to convert it into another pure entangled state $`\mathrm{\Phi }`$. What is the greatest probability of success in such a conversion if the two parties, which can communicate classically, are only allowed to act on the system locally?
We present here the answer to this question (see eq. (3)), together with an explicit local strategy achieving the optimal probability. We also investigate, and refute, a possible ordering on the set of pure states induced by such probability. Finally some considerations regarding the nature of entanglement are made, and reversibility of optimal conversions, additivity of entanglement and uniqueness of their measure are argued not to hold.
Let us start by considering the most general pure state of a bipartite system $`\mathrm{\Psi }\in 𝒞^n\otimes 𝒞^n`$ and its Schmidt decomposition
$$|\mathrm{\Psi }\rangle =\underset{i=1}{\overset{n}{\sum }}\sqrt{\alpha _i}|i_Ai_B\rangle ,\qquad \alpha _i\ge \alpha _{i+1}\ge 0,\qquad \underset{i=1}{\overset{n}{\sum }}\alpha _i=1,$$
(1)
where $`\{\sqrt{\alpha _i}\}`$ are the Schmidt coefficients of $`\mathrm{\Psi }`$ and $`|i_Ai_B\rangle `$ stands for $`|i_A\rangle |i_B\rangle `$, $`\{|i_A\rangle \}_{i=1}^n`$ and $`\{|i_B\rangle \}_{i=1}^n`$ being two local orthonormal bases depending on $`\mathrm{\Psi }`$. Since Alice and Bob are allowed to perform local unitary transformations, and these are locally reversible, any two states with the same Schmidt coefficients are locally equivalent. Thus only the Schmidt coefficients are relevant as far as non-locality is concerned, and we may study, without loss of generality, the optimal local conversion of $`\mathrm{\Psi }`$ into $`\mathrm{\Phi }`$ satisfying
$$|\mathrm{\Phi }\rangle =\underset{i=1}{\overset{n}{\sum }}\sqrt{\beta _i}|i_Ai_B\rangle ,\qquad \beta _i\ge \beta _{i+1}\ge 0,\qquad \underset{i=1}{\overset{n}{\sum }}\beta _i=1.$$
(2)
Theorem: Let us call $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$ the maximal probability of obtaining the state $`\mathrm{\Phi }`$ from $`\mathrm{\Psi }`$ by means of any local strategy. Then, in terms of the Schmidt coefficients of $`\mathrm{\Psi }`$ and $`\mathrm{\Phi }`$, we have
$$P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })=\underset{l\in [1,n]}{\mathrm{min}}\frac{\sum _{i=l}^n\alpha _i}{\sum _{i=l}^n\beta _i}.$$
(3)
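As a concrete illustration (ours, not part of the original text), eq. (3) can be evaluated directly from the squared Schmidt coefficients; the function name and the test vectors below are our own.

```python
import numpy as np

def conversion_probability(alpha, beta):
    """Optimal local conversion probability P(Psi -> Phi) of eq. (3);
    alpha, beta are the squared Schmidt coefficients of Psi and Phi."""
    a = np.sort(np.asarray(alpha, float))[::-1]
    b = np.sort(np.asarray(beta, float))[::-1]
    # Tail sums sum_{i=l}^{n}, obtained via reversed cumulative sums.
    Ea = np.cumsum(a[::-1])[::-1]
    Eb = np.cumsum(b[::-1])[::-1]
    return min(ea / eb for ea, eb in zip(Ea, Eb) if eb > 0)

# A state with Schmidt vector (0.8, 0.2) yields a maximally entangled
# two-qubit pair with probability 0.2/0.5 = 0.4.
print(conversion_probability([0.8, 0.2], [0.5, 0.5]))  # -> 0.4
```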
Before proving this result, let us note here that for $`|\mathrm{\Phi }\rangle =\sum _{i=1}^m\frac{1}{\sqrt{m}}|i_Ai_B\rangle `$ we recover the results obtained by Lo and Popescu in , while the entanglement monotone $`E_{k=2}`$ reduces, in the two-qubit case (that is, $`n=2`$), to the entanglement of single pair purification introduced by Bose, Vedral and Knight in .
Proof: Optimality of eq. (3) will be proved by
* showing an explicit local strategy which converts $`\mathrm{\Psi }`$ into $`\mathrm{\Phi }`$ successfully with such probability, and by
* introducing a family of entanglement monotones, denoted by $`E_k(\rho )`$, $`k=1,\dots ,n`$, and defined over the set of pure states as
$$E_k(\mathrm{\Psi })\equiv \underset{i=k}{\overset{n}{\sum }}\alpha _i,$$
(4)
whose monotonicity sets the upper bound
$$P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })\le \underset{l\in [1,n]}{\mathrm{min}}\frac{\sum _{i=l}^n\alpha _i}{\sum _{i=l}^n\beta _i}=\underset{l\in [1,n]}{\mathrm{min}}\frac{E_l(\mathrm{\Psi })}{E_l(\mathrm{\Phi })}.$$
(5)
Indeed, suppose that there is a local strategy with probability of success $`P^{\prime }`$ greater than this upper bound, and that the minimum in eq. (5) is for $`l=l_{\ast }`$. Before the conversion the amount of the monotone $`E_{l_{\ast }}`$ is $`E_{l_{\ast }}(\mathrm{\Psi })`$, and after the conversion it would be, on average, at least –since we may be neglecting positive contributions coming from unsuccessful conversions– $`P^{\prime }E_{l_{\ast }}(\mathrm{\Phi })>E_{l_{\ast }}(\mathrm{\Psi })`$, which would mean an increase of this (non-increasing) entanglement monotone, and would lead therefore to a contradiction.
That eq. (4), together with the convex roof extension of $`E_k`$ to mixed states
$$E_k(\rho )\equiv \underset{\mathrm{\Upsilon }_\rho }{\mathrm{min}}\underset{j}{\sum }p_jE_k(\psi _j),$$
(6)
(here the minimization is to be performed over all the pure-state ensembles $`\mathrm{\Upsilon }_\rho =\{p_j,\psi _j\}`$ realizing $`\rho `$, i.e. such that $`\rho =\sum _jp_j|\psi _j\rangle \langle \psi _j|`$), defines an entanglement monotone for each $`k`$ follows from the fact that $`E_k(\mathrm{\Psi })`$ can be written as
$$E_k(\mathrm{\Psi })=f_k(\text{Tr}_A|\mathrm{\Psi }\rangle \langle \mathrm{\Psi }|),$$
(7)
where $`f_k(\sigma )\equiv \sum _{i=k}^n\alpha _i`$ –that is, the sum of the $`n-k+1`$ smallest eigenvalues of $`\sigma `$– is a unitarily-invariant, concave function of $`\sigma `$, and from Theorem 2 in . That $`f_k(\sigma )`$ is a concave function follows from the ”Ky Fan’s Maximum Principle” . Therefore what remains to be shown is that there is a local conversion strategy compatible with eq. (3).
Notice, first, that eq. (5) implies that if the number of non-zero Schmidt coefficients of $`\mathrm{\Psi }`$ is smaller than that of $`\mathrm{\Phi }`$, then $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })=0`$, as is already well known . Therefore we will assume from now on that $`\mathrm{\Psi }`$ has at least as many non-vanishing Schmidt coefficients as $`\mathrm{\Phi }`$. We will also assume, for simplicity's sake, that $`\alpha _n>0`$ (by lowering the dimension $`n`$ of the original local Hilbert spaces if needed).
The optimal local conversion strategy we present here consists of two steps. In the first one the parties convert, with certainty, the initial state $`\mathrm{\Psi }`$ into a temporary pure state $`\mathrm{\Omega }`$, by making use of a local strategy recently proposed by Nielsen . In a second step $`\mathrm{\Omega }`$ is converted into $`\mathrm{\Phi }`$ by means of a local measurement, $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$ being the probability that this last conversion be successful.
Let us thus call $`l_1`$ the smallest integer $`\in [1,n]`$ such that
$$\frac{\sum _{i=l_1}^n\alpha _i}{\sum _{i=l_1}^n\beta _i}=\underset{l\in [1,n]}{\mathrm{min}}\frac{\sum _{i=l}^n\alpha _i}{\sum _{i=l}^n\beta _i}\equiv r_1\phantom{x}(\le 1).$$
(8)
It may happen that $`l_1=r_1=1`$. If not, it follows from the equivalence
$$\frac{a}{b}<\frac{a+c}{b+d}\Longleftrightarrow \frac{a}{b}<\frac{c}{d}\qquad (a,b,c,d>0)$$
(9)
that for any integer $`k\in [1,l_1-1]`$
$$\frac{\sum _{i=k}^{l_1-1}\alpha _i}{\sum _{i=k}^{l_1-1}\beta _i}>r_1.$$
(10)
Let us then define $`l_2`$ as the smallest integer $`\in [1,l_1-1]`$ such that
$$r_2\equiv \frac{\sum _{i=l_2}^{l_1-1}\alpha _i}{\sum _{i=l_2}^{l_1-1}\beta _i}=\underset{l\in [1,l_1-1]}{\mathrm{min}}\frac{\sum _{i=l}^{l_1-1}\alpha _i}{\sum _{i=l}^{l_1-1}\beta _i}\phantom{x}(>r_1).$$
(11)
Repeating this process until $`l_k=1`$ for some $`k`$, we obtain a series of $`k+1`$ integers $`l_0>l_1>l_2>\cdots >l_k`$ ($`l_0\equiv n+1`$) and $`k`$ positive real numbers $`0<r_1<r_2<\cdots <r_k`$, by means of which we define our temporary (normalized) state $`|\mathrm{\Omega }\rangle =\sum _{i=1}^n\sqrt{\gamma _i}|i_Ai_B\rangle `$, where
$$\gamma _i\equiv r_j\beta _i\text{ if }i\in [l_j,l_{j-1}-1],\text{i.e.,}$$
(12)
$$\vec{\gamma }=\left[\begin{array}{c}r_k\left[\begin{array}{c}\beta _{l_k}\\ \vdots \\ \beta _{l_{k-1}-1}\end{array}\right]\\ \vdots \\ r_2\left[\begin{array}{c}\beta _{l_2}\\ \vdots \\ \beta _{l_1-1}\end{array}\right]\\ r_1\left[\begin{array}{c}\beta _{l_1}\\ \vdots \\ \beta _{l_0-1}\end{array}\right]\end{array}\right].$$
(13)
Notice that by construction
$$\underset{i=k}{\overset{n}{\sum }}\alpha _i\ge \underset{i=k}{\overset{n}{\sum }}\gamma _i\qquad \forall k\in [1,n]$$
(14)
(or, equivalently, $`\sum _{i=1}^k\alpha _i\le \sum _{i=1}^k\gamma _i`$ $`\forall k\in [1,n]`$, that is to say, $`\vec{\alpha }`$ is majorized by $`\vec{\gamma }`$). Consequently Nielsen’s local strategy shown in can be applied in order for the parties to obtain the state $`\mathrm{\Omega }`$ from $`\mathrm{\Psi }`$ with certainty.
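The construction of $`\mathrm{\Omega }`$ and the majorization check can be condensed into a short numerical sketch (ours; the Schmidt vectors are made-up illustrations and all $`\beta _i`$ are assumed non-zero):

```python
import numpy as np

def intermediate_state(alpha, beta):
    """Squared Schmidt coefficients gamma of the temporary state Omega,
    built blockwise as gamma_i = r_j * beta_i (eqs. (8)-(13))."""
    a, b = np.asarray(alpha, float), np.asarray(beta, float)
    gamma = np.empty(len(a))
    upper = len(a)                 # one past the end of the current block
    while upper > 0:
        # tail-sum ratios restricted to indices [0, upper): find l_j, r_j
        ta = np.cumsum(a[:upper][::-1])[::-1]
        tb = np.cumsum(b[:upper][::-1])[::-1]
        ratios = ta / tb
        l = int(np.argmin(ratios))           # smallest index attaining the min
        gamma[l:upper] = ratios[l] * b[l:upper]  # eq. (12)
        upper = l
    return gamma

alpha = np.array([0.5, 0.25, 0.15, 0.1])
beta = np.array([0.4, 0.3, 0.2, 0.1])
gamma = intermediate_state(alpha, beta)
# eq. (14): alpha is majorized by gamma, so the conversion Psi -> Omega
# succeeds with certainty.
assert np.all(np.cumsum(alpha) <= np.cumsum(gamma) + 1e-12)
```

For these inputs one finds $`r_1=5/6`$, which is indeed the minimum of eq. (3).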
Let us consider now the positive operator $`\widehat{M}:𝒞^n\rightarrow 𝒞^n`$
$$\widehat{M}\equiv \left[\begin{array}{cccc}\widehat{M}_k& & & \\ & \ddots & & \\ & & \widehat{M}_2& \\ & & & \widehat{M}_1\end{array}\right]=\widehat{M}^{\dagger },$$
(15)
where
$$\widehat{M}_j\equiv \sqrt{\frac{r_1}{r_j}}\widehat{I}_{[l_{j-1}-l_j]}\qquad j=1,\dots ,k,$$
(16)
is proportional to the identity in a $`(l_{j-1}-l_j)`$-dimensional subspace of $`𝒞^n`$. It satisfies that $`0\le \widehat{M}\le \widehat{I}`$, so that together with $`\widehat{N}\equiv \sqrt{1-\widehat{M}^2}`$ it defines a generalized measurement of two outcomes ($`\widehat{M},\widehat{N}\ge 0`$; $`\widehat{M}^{\dagger }\widehat{M}+\widehat{N}^{\dagger }\widehat{N}=\widehat{I}`$) that Alice (for instance) can perform locally. Since $`\widehat{M}\otimes \widehat{I}_B|\mathrm{\Omega }\rangle =\sqrt{r_1}|\mathrm{\Phi }\rangle `$, the whole local strategy allows the parties to obtain the pure state $`\mathrm{\Phi }`$ from $`\mathrm{\Psi }`$ with (optimal) probability $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })=r_1`$. Notice that $`\widehat{N}\otimes \widehat{I}_B|\mathrm{\Omega }\rangle `$ is an unnormalized, pure (often entangled) state with fewer non-vanishing coefficients than $`\mathrm{\Phi }`$, so that, as expected, one cannot use it to obtain $`\mathrm{\Phi }`$. This ends the proof of eq. (3). $`\mathrm{\square }`$
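This last step can be checked numerically, continuing the sketch above (ours):

```python
# In the Schmidt basis, M is diagonal with entries sqrt(r_1/r_j) on block j
# (eqs. (15)-(16)), so it rescales gamma_i = r_j * beta_i down to r_1 * beta_i.
r = gamma / beta                  # recovers the blockwise ratios r_j
r1 = r.min()
out = (r1 / r) * gamma            # squared coefficients after the filter
assert np.allclose(out, r1 * beta)
print("success probability r_1 =", out.sum())
```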
Notice that this strategy can be minimally implemented, for instance, with local measurements on Alice’s side, one-way classical communication (from Alice to Bob) and local unitary transformations on both sides, these three types of allowed operations being performed several times (the number of operations will depend on the two states, but is of order $`n`$). Notice also that this strategy is not the simplest optimal one, since optimal local conversion strategies must exist involving only one measurement on Alice’s side, plus one transmission of classical bits from Alice to Bob, plus one local unitary transformation on each side (see ).
Let us briefly consider an alternative scenario where eq. (3) can be applied. Suppose that, as before, the parties start sharing the pure state $`\mathrm{\Psi }`$, but that now their aim is to obtain (on average) the greatest number of copies of the state $`\mathrm{\Phi }`$, say $`m_{\mathrm{\Psi }\rightarrow \mathrm{\Phi }}^{MAX}`$. In this case the optimal strategy may involve, if possible, local conversions into several copies of $`\mathrm{\Phi }`$, and this is not ruled in general by eq. (3). However, there are circumstances in which $`m_{\mathrm{\Psi }\rightarrow \mathrm{\Phi }}^{MAX}=P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$. Indeed, let $`n_\psi `$ denote the number of non-vanishing Schmidt coefficients of the entangled state $`\psi `$, and recall that $`n_{\psi ^{\otimes N}}=n_\psi ^N`$. Then,
$$n_{\mathrm{\Psi }}<n_{\mathrm{\Phi }}^2\Rightarrow P(\mathrm{\Psi }\rightarrow \mathrm{\Phi }^{\otimes N})=0\qquad \forall N\ge 2$$
(17)
implies that the greatest number $`m_{\mathrm{\Psi }\rightarrow \mathrm{\Phi }}^{MAX}`$ of copies of $`\mathrm{\Phi }`$ the parties can obtain locally from $`\mathrm{\Psi }`$ is also given by $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$ when $`n_{\mathrm{\Psi }}<n_{\mathrm{\Phi }}^2`$.
Let us move to consider now the following question: Is there any order in the space of entangled pure states that can be derived from eq. (3)? In previous work, a partial order on the entangled pure states was obtained according to whether, given two states $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$, one of them can be converted locally into the other with certainty, say $`\mathrm{\Psi }_1`$ into $`\mathrm{\Psi }_2`$. If so, $`\mathrm{\Psi }_1`$ can be said to contain at least as much entanglement as $`\mathrm{\Psi }_2`$, in the sense that any non-local resource that $`\mathrm{\Psi }_2`$ may contain is automatically contained, in at least the same amount, also in $`\mathrm{\Psi }_1`$, and, again, the non-local resources needed to obtain $`\mathrm{\Psi }_1`$ suffice to create $`\mathrm{\Psi }_2`$. But if on the contrary $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$ are such that none of them can be converted into the other with certainty, then their entanglement is incommensurable according to this criterion. One might be tempted to extend such a partial order to the whole set of pure states by saying that the state $`\mathrm{\Psi }_1`$ is more entangled than $`\mathrm{\Psi }_2`$ if, and only if, $`P(\mathrm{\Psi }_1\rightarrow \mathrm{\Psi }_2)>P(\mathrm{\Psi }_2\rightarrow \mathrm{\Psi }_1)`$. However, the following example shows that this order would be ill-defined: consider three states $`\mathrm{\Psi }_k\in 𝒞^4\otimes 𝒞^4`$, the square of the Schmidt coefficients of the $`k`$-th state being $`\vec{\alpha }_k`$, where
$`\vec{\alpha }_{k=1}`$ $`\equiv `$ $`{\displaystyle \frac{1}{144}}(108,12,12,12),`$ (18)
$`\vec{\alpha }_{k=2}`$ $`\equiv `$ $`{\displaystyle \frac{1}{144}}(66,66,6,6),`$ (19)
$`\vec{\alpha }_{k=3}`$ $`\equiv `$ $`{\displaystyle \frac{1}{144}}(47,47,47,3).`$ (20)
Then such an ordering relation leads to the following contradiction:
$$\mathrm{\Psi }_1<\mathrm{\Psi }_2<\mathrm{\Psi }_3<\mathrm{\Psi }_1.$$
(21)
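The cycle is easy to verify numerically with the `conversion_probability` sketch given after eq. (3):

```python
states = {
    1: np.array([108, 12, 12, 12]) / 144.0,
    2: np.array([66, 66, 6, 6]) / 144.0,
    3: np.array([47, 47, 47, 3]) / 144.0,
}
for i, j in [(1, 2), (2, 3), (3, 1)]:
    print(f"P({i}->{j}) = {conversion_probability(states[i], states[j]):.3f}, "
          f"P({j}->{i}) = {conversion_probability(states[j], states[i]):.3f}")
# P(2->1) = 0.500 > P(1->2) = 0.462, P(3->2) = 0.500 > P(2->3) = 0.240,
# and P(1->3) = 0.371 > P(3->1) = 0.250, closing the cycle of eq. (21).
```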
Finally, we would like to analyze what conclusions can be drawn from eq. (3) regarding the quantification of entanglement of a shared state, understood in relation to the non-local resources that characterize it. One can consider, e.g., both how many such non-local resources are needed to create the state and how many of them can be extracted from it, in terms of other shared states.
For pure states of a bipartite system the entropy of entanglement $`E(\mathrm{\Psi }^{\otimes N})`$, and therefore one sole – and unique – parameter, quantifies asymptotically the non-local resources of a huge number $`N`$ of copies of a given shared state $`\mathrm{\Psi }`$. It turns out that in such a context optimal local conversions are reversible, and that entanglement behaves as an additive property of the quantum world.
We have considered in the present work the optimal local conversion of single copies of pure states, which falls far from the large-N asymptotic case. This finite scenario is relevant in the light of the state of present technology, for it is not clear yet how to perform certain local transformations in the space of a large number of copies which are a necessary ingredient in the asymptotic conversions so far exposed . But even if one knows how such local transformations can be performed in a lab, the finite scenario is important on its own, since it describes the local resources involved in any local manipulation of a finite number of copies of pure states. Eq. (3) teaches us the following qualitative facts about entanglement:
1. Irreversibility: the optimal local conversion between any two states with non-identical Schmidt coefficients is always an irreversible process. Here irreversible means that the parties can not, with certainty, convert locally one state into another and then get the initial state back. This general result, which was proved in and follows also from , does not hold asymptotically.
2. More than one measure: the quantification of entanglement, in the sense exposed above, requires more than just one measure . For pure states in $`𝒞^n\otimes 𝒞^n`$, the $`n-1`$ entanglement monotones $`E_k`$ ($`k=2,\dots ,n`$) are a minimal set of non-increasing parameters providing a detailed and straightforward account of their non-local resources. They can be regarded as the measures of the entanglement of pure states in a similar sense as the entropy of entanglement is their measure in the asymptotic limit.
3. Non-additivity: the non-local resources of entangled states are not additive in general, in that, for instance, two parties can often extract more such resources from two copies of a given shared state $`\mathrm{\Psi }`$, i.e. from $`\mathrm{\Psi }\otimes \mathrm{\Psi }`$, than twice what they can obtain from one single copy $`\mathrm{\Psi }`$ . From this point of view it is artificial to take additivity as an a priori requirement for any good measure of entanglement. Thus additivity of the entanglement of pure states in the asymptotic limit is a remarkable result, rather than an a priori constraint, which follows from the additivity of the entropy of entanglement and from the existence of reversible asymptotic conversions (as the ones in ).
Summarizing, given two pure shared states $`\mathrm{\Psi }`$ and $`\mathrm{\Phi }`$, the highest probability $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$ of success in the conversion of $`\mathrm{\Psi }`$ into $`\mathrm{\Phi }`$ by means of any local strategy can be used to quantify their entanglement. We have shown that for pure states of a bipartite system the entanglement monotones $`E_k`$ provide $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$, since this probability is the greatest one compatible with the monotonicity of $`E_k`$. The explicit expression for $`P(\mathrm{\Psi }\rightarrow \mathrm{\Phi })`$ shows that the entanglement of a pure state $`\mathrm{\Psi }`$ behaves essentially differently from that of $`\mathrm{\Psi }^{\otimes N}`$ for very large $`N`$.
There are many open problems regarding finite entanglement. It would be interesting to derive equivalent results for pure states shared by three or more parties. Also, to extend the results presented here to mixed states . A way to proceed is by studying concrete local conversion strategies, which mean lower bounds on the optimal probability of success in the conversion, and by identifying new entanglement monotones, since each one implies an upper bound. In this scheme it becomes reasonable to demand, as an a priori requirement, only monotonicity under local manipulations in order for a magnitude to be a candidate for a measure of entanglement.
The author is most grateful to Rolf Tarrach for his thorough reading of the manuscript, comments and suggestions. Comments are also acknowledged to Maciej Lewenstein. The author thanks Maciej Lewenstein, Anna Sanpera and Christel Franko for their hospitality in Hannover. Financial support from CIRYT, contract AEN98-0431 and CIRIT, contract 1998SGR-00026 and a CIRIT grant 1997FI-00068 PG are also acknowledged.
# An extinction study of the Taurus Dark Cloud Complex
## 1 Introduction
In order to understand how molecular clouds evolve and eventually produce stars, it is necessary to study the distribution of their star-forming matter. Since the clouds’ main constituent, molecular hydrogen, is generally unobservable, it is necessary to use other tracers, whose abundance relative to hydrogen can be reliably estimated, to map out the distribution of material. The extinction of background starlight is the result of the absorption and scattering of photons off dust grains; so, for a given line of sight, the amount of extinction is directly proportional to the amount of dust. If the gas-to-dust ratio is known and constant (e.g. Bohlin et al. 1978), then a detailed study of the dust distribution in a cloud serves as a detailed study of its mass distribution.
The study of fluctuations in the dust distribution is also interesting independent of its usefulness as a mass tracer. Strong fluctuations in the dust distribution have considerable impact on both the physics and chemistry of the interstellar medium (ISM), which both depend heavily on the extinction (opacity) structure on all scales (see Thoraval et al. 1997, and references therein). In addition, knowledge of the spatial structure and amount of extinction in the Galactic ISM is important as it effects the apparent colors of background sources, such as stars and galaxies.
The most direct measure of reddening is the color excess of a star with known spectral type. Unfortunately, mapping out extended distributions of extinction (reddening) by obtaining photometry and a spectrum for large numbers of stars is very tedious and time consuming, and usually impractical. Therefore, the traditional way of undertaking an extinction study of a fairly large region of the sky has been, until recently, through the use of optical star counts using photographic plates (Bok & Cordwell 1973). This method can only be used up to an extinction of approximately 4 mag, with a resolution of $`\sim `$2.5′. Thankfully, recent advances in technology have led to the development of new methods of deriving the extinction in dark cloud regions. For example, in a study of the structure of nearby dark clouds, Wood et al. (1994) use 60 and 100 µm images taken by IRAS to calculate 100 µm optical depth, from which they obtain the extinction ($`A_V`$). Lada et al. (1994; hereafter LLCB) took advantage of the improvements in infrared array cameras to devise a clever new method of measuring extinction. The LLCB technique (which has been called the “NICE”, Near Infrared Color Excess, method by Alves et al. 1998), which combines measurements of near-infrared ($`H`$ and $`K`$) color excess and certain techniques of star counting, has a higher angular resolution and can probe greater optical depths than that achieved by optical star counting alone.
In this study we use four different methods of measuring $`A_V`$, utilizing: 1) the color excess of individual background stars for which we could obtain spectral types; 2) ISSA 60 and 100 µm images to estimate dust opacity; 3) traditional star counting; and 4) an optical ($`V`$ and $`R`$) version of the average color excess method used by Lada et al. (1994). To our knowledge, this is the first time that all of these different methods have been directly intercompared. We describe the acquisition and reduction of the data in Section 2. In Section 3 we present the results of the observations, and in Section 4 we offer analysis and discussion. Readers interested primarily in intercomparison of the various methods and limits on density fluctuations should skim sections 2 and 3 and focus more on Section 4 and 5. In Section 5 we compare and rate the four different methods of obtaining $`A_V`$. We devote Section 6 to our conclusions.
## 2 Data
The new photometric and spectroscopic observations used in this paper were originally obtained to conduct the polarization-extinction study described in Arce et al. (1998). The photometry consists of $`B`$, $`V`$ and $`R`$ CCD images of two 10 arcmin by $`\sim `$5 deg “cuts” through the Taurus dark cloud complex (see Figure 1). In the spectroscopic observations, we observed 95 stars (most of them in cut 1), in order to determine their spectral types. The cuts shown in Figure 1 pass through two well known filamentary dark clouds (L1506 and B216-217, both at a distance of 140 pc from the Sun) as well as very low extinction regions, giving our photometric observations a fairly large dynamic range in extinction. In the spectroscopic observations, we selected our target stars along the two cuts by virtue of their relative brightness, in that we attempted to exclude foreground stars by not selecting stars which appear unusually bright. Table 1 lists the spectral type, apparent $`V`$ magnitude and $`B-V`$ color, and the derived spectroscopic parallax distance for each target star. Our stellar sample has only one star with a distance less than 140 pc, which confirms that we largely succeeded in excluding foreground stars. In addition to the new photometric and spectroscopic observations we also obtained co-added images of flux density from the IRAS Sky Survey Atlas (ISSA), in order to examine the far-infrared emission from dust in the region.
### 2.1 Photometric Data
The broad band imaging data of the two cuts (Figure 1) through the Taurus dark cloud complex were obtained using the Smithsonian Astrophysical Observatory (SAO) AndyCam on the Fred Lawrence Whipple Observatory (FLWO) 1.2-meter telescope on Mt. Hopkins, Arizona. AndyCam is a camera with a thinned back-side illuminated AR coated Loral $`2048\times 2048`$ CCD chip. All the frames were taken in $`2\times 2`$ bin mode, giving a plate scale of 0.63 arcsec per pixel. In 1995 November, a total of 64 frames in different positions in the sky were taken in the $`B`$, $`V`$ and $`R`$ bands, where $`R`$ is the Cousins $`R`$ band filter with an effective wavelength equal to 0.64 µm. In each position we obtained one 200 second exposure for each broad-band filter. Each telescope pointing was a little less than 10′ north of the previous position, and since each frame is slightly larger than 10′ $`\times `$ 10′, there is a small sky overlap between frames at successive positions. The first cut extends from a declination of 22°30′ to 28°20′, centered on right ascension 4h 22m 36s (J2000), with a total of 36 frames. The second cut extends from a declination of 24°00′ to 28°30′, centered on right ascension 4h 21m 29s (J2000), with a total of 28 frames. In addition to the frames acquired in November 1995, seventeen frames were taken in the $`U`$, $`B`$, and $`V`$ bands in October 1996, four frames were taken in the $`B`$ and $`V`$ bands in November 1996, and two frames were taken in the $`B`$ and $`V`$ bands in March 1997 —all with the same instrument configuration as the November 1995 frames. The additional frames were taken because: not all of the original frames were of good quality; several frames were of regions of special interest around the 2 dark clouds’ peripheries (outside the cuts) which the original frames did not include; and because shorter exposure images of some of the regions covered by the original frames were needed. The exposure times were 80, 150, and 200 sec for $`V`$, $`B`$, and $`U`$, respectively, for frames of new sky positions, and 30 sec in $`V`$, 50 sec in $`B`$, and 100 sec in $`U`$ for frames with repeated positions in the sky.
All of the stellar photometric data reduction was done using standard Image Reduction and Analysis Facility (IRAF) routines. For stars whose spectra had been measured (see §2.2), photometry was obtained in the flat-fielded, background-subtracted images using the APPHOT routine. After analyzing the dependence of magnitude value with aperture size in the standard stars for all nights, it was decided to use an aperture radius of 14 pixels. With this aperture size, less than 10% of the stars in the most crowded field, with $`R`$-band apparent magnitude ($`m_R`$) between 14.5 and 18.0 mag, have neighbors within the aperture. The correction to the standard photometric system was done using Landolt standards (Landolt 1992). A set of standards were observed for each night, at different times of the night, at varying airmasses. These were then used to solve a set of linear equations that would give the stars’ $`V`$ magnitude, $`B-V`$ and $`V-R`$ colors using the routines in the IRAF package PHOTCAL. The errors obtained from the APPHOT routine, and the errors in the transformation equation fit were summed in quadrature to give the final errors in the photometry. These were $`\pm 0.02`$ to 0.06 mag for $`V`$ and $`B-V`$ for stars with $`V`$ between 12.3 and 17.4 mag. We calculated the photometry of the standard stars used to derive the transformation equation and compared our results with those quoted by Landolt (1992). By doing so we convinced ourselves that the value of our final 1 $`\sigma `$ errors (Table 1) are a reasonably good estimation of the true photometry uncertainties.
In addition to obtaining apparent magnitudes and colors for stars, the photometric data were also used to do a star count of the region. The routine DAOFIND was used to detect sources in the $`R`$ filter frames. This routine automatically detects objects that are above a certain intensity threshold, and within a limit of sharpness and roundness, all of which the user specifies. We used a finding threshold of 5 times the rms sky noise in each frame. The objects detected by the routine are then stored in a file with the objects’ coordinates. Although care was taken to select a limit of sharpness and roundness so that DAOFIND would only detect stars, other objects (like cosmic rays and galaxies) were also detected, and some stars clearly above the threshold level were not detected. Thus, the $`R`$ frames were painstakingly inspected visually to erase detected objects that were not stars, and add the few stars that were clearly above the threshold, but were not originally detected. We obtained the photometry of all the stars in the sample and then made a histogram (Figure 2) of the number of stars versus apparent $`R`$ magnitude in order to study the completeness of the sample. From Figure 2 we estimate the upper completeness limit to be $`m_R\approx 18.0`$ mag. Stars brighter than 14.5 mag in the $`R`$-band 200 second exposures are saturated, thus the photometry of stars with $`m_R<14.5`$ is unreliable. Therefore we estimate that our sample is 90% (or more) complete for stars with $`14.5\le m_R<18.0`$.
### 2.2 Spectroscopic Data
The spectra of 95 stars along the cuts were obtained using the SAO FAST spectrograph on the FLWO 1.5-meter telescope on Mt. Hopkins, Arizona. The observations were carried out during the Fall trimesters of 1995 and 1996. FAST was used with a 3″ slit and a 300 line mm⁻¹ grating. This resulted in a resolution of $`\sim `$6 Å, a spectral coverage of $`3800`$ Å (from approximately 3600 to around 7400 Å), a dispersion of 1.47 Å/pixel and 1.64 pixels/arcsec along the dispersion axis.
The spectrum of each star was used to derive its spectral type. In order to spectroscopically classify the stars, we followed O’Connell (1973) and Kenyon et al. (1994) and computed several absorption line indices from the spectra:
$$I_\lambda =-2.5\mathrm{log}\left[\frac{F(\lambda _2)}{F^{\prime }(\lambda _2)}\right]$$
(1)
where
$$F^{\prime }(\lambda _2)=F(\lambda _1)+\left[F(\lambda _3)-F(\lambda _1)\right]\left[\frac{\lambda _2-\lambda _1}{\lambda _3-\lambda _1}\right]$$
(2)
is the interpolated continuum flux at the feature, $`\lambda _1`$ and $`\lambda _3`$ are continuum wavelengths, $`\lambda _2`$ is the feature wavelength, and $`F(\lambda _i)`$ is the average flux in erg cm⁻² s⁻¹ Å⁻¹ over a bandwidth specified in Table II of O’Connell (1973). We measured the Ca II H ($`\lambda 3933`$), H$`\delta `$ ($`\lambda 4101`$), CH ($`\lambda 4305`$), H$`\gamma `$ ($`\lambda 4340`$), H$`\beta `$ ($`\lambda 4861`$), and Mg I ($`\lambda 5175`$) indices of our program stars and then compared them to the indices of main sequence stars in the Jacoby et al. (1984) atlas. The indices have errors of 5–10% depending on the signal to noise of the spectrum. This method resulted in spectral classification of most of the stars observed with accuracy of ±1–2 subclasses for spectral types A through F and ±2–4 subclasses for stars with spectral type G. Stars with spectral types earlier than A0 were not found, and stars later than G9 were not included in the sample due to the low accuracy in their spectral classification and reddening corrections. All the stars were assumed to be main sequence stars (luminosity type V). Kenyon et al. (1994) estimate, and also obtain, that $`\sim `$10% of their magnitude-limited sample of A–F stars in the Taurus region are giants. If we use this result with our magnitude-limited sample it would mean that only $`\sim `$7 stars of our 69 A–F stars are giants. In addition, the intrinsic $`B-V`$ color of main sequence A and F stars differs by less than 0.05 mag from the intrinsic $`B-V`$ of luminosity type III and type II stars. Of the G stars, no more than 5 out of a total of 26 should be giant stars (Mihalas & Binney 1981; Table 4-9). Hence we are not introducing large errors in the extinction of each star by assuming that all of the stars we classified are luminosity type V.
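A minimal sketch of this index computation (ours; the paper uses feature-specific bandwidths from Table II of O’Connell 1973, whereas a single `width` parameter is used here for simplicity):

```python
import numpy as np

def line_index(wave, flux, lam1, lam2, lam3, width=30.0):
    """Absorption-line index of eqs. (1)-(2) from a spectrum sampled at
    wavelengths `wave` (Angstroms) with fluxes `flux`."""
    def band_mean(lam):
        sel = np.abs(wave - lam) <= width / 2.0
        return flux[sel].mean()
    F1, F2, F3 = band_mean(lam1), band_mean(lam2), band_mean(lam3)
    F2_cont = F1 + (F3 - F1) * (lam2 - lam1) / (lam3 - lam1)  # eq. (2)
    return -2.5 * np.log10(F2 / F2_cont)                      # eq. (1)
```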
Once each star was classified, its reddening was calculated. Intrinsic $`B-V`$ values for each spectral type were obtained from Table A5 in Kenyon & Hartmann (1995). The observed $`B-V`$ value, from the photometric study, was then used to obtain the color excess: $`E_{B-V}=(B-V)-(B-V)_0`$, where $`(B-V)`$ is the observed color index and $`(B-V)_0`$ is the unreddened intrinsic color of the star. An error in the stellar classification of ±1–2 subclasses in A–F stars transforms into an error of ±0.04–0.05 mag in $`E_{B-V}`$, while an error in the stellar classification of ±2–4 subclasses in G stars transforms into an error of ±0.05–0.08 mag in $`E_{B-V}`$. We assumed that $`A_V=R_VE_{B-V}`$, with $`R_V`$ (the ratio of total-to-selective extinction) equal to 3.1 (Savage & Mathis 1979; Vrba & Rydgren 1985) —the validity of this assumption is discussed below. Using absolute magnitude values for each spectral type from Lang (1991), we then obtained distances for each star (see Table 1).
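The chain from observed color to spectroscopic-parallax distance is summarized below (our sketch; the A0 V numbers in the example are illustrative, not taken from Table 1):

```python
def reddening_and_distance(V, BV_obs, BV_0, M_V, R_V=3.1):
    """Color excess, visual extinction, and distance (pc) for a star of
    known spectral type with intrinsic color BV_0 and absolute magnitude M_V."""
    E_BV = BV_obs - BV_0                 # E(B-V) = (B-V) - (B-V)_0
    A_V = R_V * E_BV                     # adopted R_V = 3.1 (see below)
    d_pc = 10.0 ** ((V - M_V - A_V + 5.0) / 5.0)  # distance modulus
    return E_BV, A_V, d_pc

# An A0 V star ((B-V)_0 = 0.00, M_V = +0.6) observed at V = 12.0, B-V = 0.5:
print(reddening_and_distance(12.0, 0.5, 0.0, 0.6))  # E(B-V)=0.5, A_V=1.55, d~930 pc
```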
When we calculated the extinction to each star using a constant value of $`R_V=3.1`$, we made the implicit assumption that the ratio of total-to-selective extinction is constant for different lines of sight. In fact, $`R_V`$ varies along different lines of sight in the Galaxy and only has a mean of $`3.1`$ (Savage & Mathis 1979). In contrast with other regions, the Taurus-Auriga molecular cloud complex seems to have a fairly constant interstellar reddening law with $`R_V\approx 3.1`$ through most of the region (Vrba & Rydgren 1985; Kenyon et al. 1994). Our $`BVR`$ photometry is not ample enough to derive $`R_V`$ for each line of sight. We would need observations at shorter wavelengths to be able to independently obtain the value of the ratio of total-to-selective extinction for each line of sight we observed. Thus we decided to use the ISM (and Taurus) average of $`R_V=3.1`$, and to caution the reader that we do not take the errors caused by assuming a constant $`R_V`$ into account when we calculate the errors in $`A_V`$.
### 2.3 ISSA Data
The IRAS Sky Survey Atlas (ISSA) was used to obtain images of flux density at 60 and 100 µm of the Taurus dark cloud complex. Our region of interest lies in two different (but overlapping) ISSA fields. Each of these is a 500- by 500-pixel image, covering a 12.5° by 12.5° field of sky with a pixel size of 1.5′. The maps have units of MJy sr⁻¹, are made with gnomonic projection, have spatial resolution smoothed to the IRAS beam at 100 µm (approximately 5′), and the zodiacal emission has been removed from them. The 12.5° by 12.5° images were cropped in order to keep only the region of the Taurus dark cloud complex shown in Figure 1. This resulted in a total of four different images; two (60 and 100 µm) images of the northern part and two images of the southern part of the map.
Although they have the zodiacal emission removed, ISSA fields are not calibrated so that the “zero point” corresponds to no emission, so another “background” needs to be subtracted. This background subtraction procedure went as follows. First, the minimum value of each of the four 12.5° by 12.5° images (see Table 2) was obtained and subtracted from them. The resulting images were then used to obtain an optical extinction ($`A_V`$) map through a process to be discussed below. At this point, after just a simple subtraction of the minimum value in each image, the north and the south $`A_V`$ maps (see Figure 1) did not agree within the errors in the region of overlap. So, the values used for the background subtraction were iterated until the best agreement for the overlap region was found, while keeping the background subtraction constants less than 1 MJy sr⁻¹ away from the minimum flux value of the original 12.5° by 12.5° ISSA fields. Table 2 lists the values that were ultimately used for this purpose. These values resulted in a difference of 0.1 mag between the mean in the distribution of $`A_V`$ values in the northern image and the mean in the distribution of $`A_V`$ values in the southern image. Tests using other values for background subtraction showed that the resultant extinction values did not change significantly for small (less than $`\sim `$1 MJy sr⁻¹) changes in the background subtraction constants.
The extinction map was computed from the ISSA images using a method very similar to that described by Wood, Myers & Daugherty (1994), and references therein. Note, however, that in their study, Wood et al. (1994) used IRAS images which had not gone through a zodiacal emission removal process. They devised their own zodiacal light subtraction, which they state is not very efficient for regions near the ecliptic, like Taurus. One of the regions they study was in fact the Taurus dark cloud region itself. We believe that our extinction map is of better quality due to the fact that we use ISSA images which have a more elaborate zodiacal light subtraction algorithm.
The 60 and 100 µm dust temperature, $`T_d`$, at each pixel in an image can be obtained by assuming that the dust in a single beam can be characterized by one temperature ($`T_d`$), and that the observed ratio of 60 to 100 µm emission is due to blackbody radiation from dust grains at $`T_d`$, modified by a power-law emissivity. The flux density of emission at a wavelength $`\lambda _i`$ is given by
$$F_i=\left[\frac{2hc}{\lambda _i^3}\frac{1}{e^{hc/\lambda _ikT_d}-1}\right]N_d\alpha \lambda _i^{-\beta }\mathrm{\Omega }_i$$
(3)
where $`N_d`$ is the column density of dust grains, $`\beta `$ is the power-law index of the dust emissivity, $`\mathrm{\Omega }_i`$ is the solid angle at $`\lambda _i`$, and $`\alpha `$ is a constant of proportionality.
In order to use equation 3 to calculate the dust color temperature ($`T_d`$) of each pixel in the image we have to make some assumptions. The first assumption is that the dust emission is optically thin. We believe this is a safe assumption because in our maps there is not a single line of sight that could be optically thick ($`\tau _{100}>1`$). In fact, the largest $`\tau _{100}`$ we find in our processed images is 0.002. The second assumption we have to make is that $`\mathrm{\Omega }_{60}\approx \mathrm{\Omega }_{100}`$, which is true for all ISSA images. With these two assumptions we can then write the ratio $`R`$ of the flux densities at 60 and 100 µm as:
$$R=\frac{F_{60}}{F_{100}}=0.6^{-(3+\beta )}\left[\frac{e^{144/T_d}-1}{e^{240/T_d}-1}\right]$$
(4)
In order to proceed we need to assume a value of $`\beta `$. For now we will assume that $`\beta =1`$, and we will discuss the implications of this assumption later on. We constructed a look-up table with the value of $`R`$ calculated for a wide range of $`T_d`$, with steps in $`T_d`$ of 0.05 K. For each pixel in the image, the table was searched for the value of $`T_d`$ that reproduces the observed 100 to 60 µm flux ratio. Using the dust color temperature, we then calculate the dust optical depth for each pixel:
$$\tau _{100}=\frac{F_\lambda (100\mathrm{\mu m})}{B_\lambda (\lambda ,T_d)}$$
(5)
where $`B_\lambda (\lambda ,T_d)`$ is the Planck function and $`F_\lambda (100\mathrm{\mu m})`$ is the observed 100 µm flux.
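A minimal sketch of the look-up-table inversion described above (ours; the temperature grid limits, and the use of interpolation instead of a nearest-entry search, are our own choices):

```python
import numpy as np

def color_temperature(F60, F100, beta=1.0):
    """Dust color temperature T_d from the observed 60/100 um flux ratio,
    inverting eq. (4) on a table with 0.05 K steps."""
    T_grid = np.arange(10.0, 100.0, 0.05)
    R_grid = (0.6 ** (-(3.0 + beta))
              * np.expm1(144.0 / T_grid) / np.expm1(240.0 / T_grid))
    R_obs = np.asarray(F60, float) / np.asarray(F100, float)
    # R(T_d) increases monotonically with T_d, so the table inverts cleanly.
    return np.interp(R_obs, R_grid, T_grid)
```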
We use equation 5 of Wood et al. (1994) to convert from optical depth to extinction in $`V`$:
$$A_V=15.078\left(1-e^{-\tau _{100}/a}\right)$$
(6)
where $`\tau _{100}`$ is the optical depth and $`a`$ is a constant with a value of $`6.413\times 10^{-4}`$. This equation relies on the work of Jarrett et al. (1989), who present a plot (their Figure 8) of the relation between 60 µm optical depth ($`\tau _{60}`$) and $`A_V`$ based on star counts. Assuming optically thin emission, Wood et al. (1994) multiply the Jarrett et al. $`\tau _{60}`$ values by 100/60 to convert to $`\tau _{100}`$ and obtain equation 6, above. Thus, extinction values obtained using the ISSA images are subject to the uncertainties in the conversion equation. But, Figure 8 of Jarrett et al. shows that there is a very tight correlation between $`\tau _{60}`$ and $`A_V`$ for $`A_V\lesssim 5`$ mag, implying very little uncertainty in the conversion of far-infrared optical depth to visual extinction. In the Taurus region under study in this paper, all of the extinction values measured are less than 5 mag, so we do not consider any errors caused by uncertainty in the coefficients of equation 6.
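Chaining eqs. (4)–(6) then gives the per-pixel extinction; in the sketch below (ours, reusing `color_temperature` from above), the constant $`3.97\times 10^7`$ MJy sr⁻¹ is our own evaluation of the Planck prefactor $`2h\nu _{100}^3/c^2`$:

```python
def issa_extinction(F60, F100, beta=1.0):
    """A_V from ISSA 60 and 100 um surface brightnesses (MJy/sr)."""
    T_d = color_temperature(F60, F100, beta)
    B100 = 3.97e7 / np.expm1(144.0 / T_d)        # B_nu(100 um, T_d) in MJy/sr
    tau100 = np.asarray(F100, float) / B100      # eq. (5)
    a = 6.413e-4
    return 15.078 * (1.0 - np.exp(-tau100 / a))  # eq. (6)
```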
After all this processing was done, we were left with two extinction maps (a northern and a southern one) which had some overlap (see Figure 1). The area of overlap was averaged and the north and south extinction maps were combined to produce a final image (Figure 1) extending from R.A. of 4h 09m 00s to 4h 24m 30s (B1950) and from Dec. 22°00′ to 28°35′ (B1950).
An interesting point to note is that several “elliptical holes” in the extinction appear in Figure 1. These “holes” are unphysical depressions in the extinction produced by the hot sources seen in the 60 µm maps. The 60 µm point sources (mostly embedded young stars) heat the dust around them and create a region where there is an excess of hot dust. This limitation, caused by the low spatial resolution of IRAS and assuming a single $`T_d`$, will then create an unphysically low extinction, when calculating the $`A_V`$ in the region near IRAS point sources, using the method described above. Unfortunately, cut 2 (see Figure 1) passes near two of these unphysically low extinction areas. (In Figures 4 and 5 we mark the position of the unreal dip in extinction.) These unphysical holes in $`A_V`$ are each associated with two very close IRAS point sources (one hole is produced by IRAS 0418+2654 and IRAS 0418+2655, and the other one by IRAS 0418+2650 and IRAS 0419+2650) inside the dark cloud B216-217 (see end of §3.2).
As mentioned above, in order to calculate the optical depth we assumed that the dust emissivity follows a power law ($`\tau _d\propto \lambda ^{-\beta }`$), with index $`\beta =1`$. Though studies differ in the values they find for $`\beta `$, there is general agreement that the emissivity index depends on the grains’ size, composition, and physical structure (Weintraub et al. 1991). The general consensus in recent years has been that $`\beta `$ most likely lies between 1 and 2: in the general ISM $`\beta `$ is close to 2, while in denser regions with bigger grains $`\beta `$ is closer to 1 (Beckwith & Sargent 1991; Mannings & Emerson 1994; Pollack et al. 1994). Our region of interest contains lines of sight that pass through both low- and high-density material in different environments. Thus, there is no way we can use a “perfect” or “preferred” value of $`\beta `$, as it might be different for different lines of sight. In our case we had to choose the same value as Jarrett et al. (1989) and Wood et al. (1994), which is $`\beta =1`$, since we use their results to convert from $`\tau _{100}`$ to extinction. The errors introduced by assuming a constant $`\beta `$ are hard to estimate, since we do not have any way to measure how much $`\beta `$ changes in our region of study. We do not include these errors in the error estimate of $`A_V`$ from the ISSA images (from now on $`A_{V_{ISSA}}`$), but it must be kept in mind that $`\beta `$ is not necessarily equal to 1 and that its value may vary for different lines of sight.
In order to estimate the pixel-to-pixel (random) errors in $`A_{V_{ISSA}}`$, we examined a circular area of 800 pixels centered at 4h 19m, 22°24′ (B1950) which appears to have a constant extinction. We compared pixels that are 1.5 IRAS beams apart and found the standard deviation of the extinction in this region to be 0.06 mag. We use this value as an estimate of the pixel-to-pixel errors in $`A_{V_{ISSA}}`$.
Ultimately, though, we need an estimate of the total error in $`A_{V_{ISSA}}`$, not just pixel-to-pixel errors. As explained above, the north and south extinction maps give slightly different extinction values for matching pixels in the region where they overlap. By fitting a gaussian to a histogram of the difference in $`A_{V_{ISSA}}`$ value between the north and the south extinction maps, we find a $`1\sigma `$ width of 0.11 mag. We use this value as an estimate of the $`1\sigma `$ error in $`A_{V_{ISSA}}`$ caused by uncertainty in the zero level constants and zodiacal subtraction in the ISSA plates. Adding this “plate-to-plate” error in quadrature to the pixel-to-pixel error (previous paragraph) gives a total error in $`A_{V_{ISSA}}`$ of $`\sqrt{0.11^2+0.06^2}\approx 0.12`$ mag. We use this value as an estimate of the error in $`A_{V_{ISSA}}`$, but we remind the reader that this error does not include any errors caused by assuming a constant $`\beta `$.
## 3 Results
The data described above offer the opportunity to measure $`A_V`$ in four different ways: 1) using the 85 stars in cut 1 for which we have color excess data (see Table 1); 2) using 60 and 100 µm ISSA images as described in Section 2.4; 3) using star-counting techniques on the $`R`$-band frames; and 4) using an optical ($`V`$ and $`R`$) version of the average color excess method used by LLCB, described in more detail in Section 3.3.
Throughout the paper we assume that the extinction we calculate, independent of the way it was obtained, is produced by the dust associated with the Taurus dark cloud complex at a distance of 140 $`\pm 10`$ pc (Kenyon et al. 1994). The region of Taurus we observed lies at Galactic coordinates $`l^{II}\approx 172\mathrm{°}`$, $`b^{II}\approx -17\mathrm{°}`$. Thus, our stars lie towards lines of sight where there is little or no dust except for that associated with the Taurus dark cloud, and we can safely assume that in the area under study virtually all the extinction is produced by the dust associated with the Taurus dark cloud complex.
### 3.1 Extinction from the color excess of stars with measured spectral types
Using the color excess and position data for the stars in Table 1 that lie on cut 1 and have a distance larger than 150 pc, we construct the extinction vs. declination plot shown in Figure 3a. This technique easily detects the rises in extinction associated with Tau M1 around Dec $`23.65\mathrm{°}`$, L1506 around Dec 25°, B216-217 around $`26.7\mathrm{°}`$, and the area near the IRAS cores Tau B5 and Tau B11 (hereafter Tau B5-B11) around Dec $`27.5\mathrm{°}`$. As in the ISSA $`A_V`$ map (Figure 1), the extinction obtained from the color excess of stars ($`A_{V_{sp}}`$) shows an overall rise in extinction with increasing declination. Also plotted in Figure 3a are the extinction values obtained from the ISSA extinction map ($`A_{V_{ISSA}}`$) for the same coordinates on the sky where there is an $`A_{V_{sp}}`$ point. The value of each $`A_{V_{ISSA}}`$ point is obtained from the pixel nearest to the coordinates of each star with measured $`A_{V_{sp}}`$ in cut 1. The error bars, set at $`\pm 0.12`$ mag for each $`A_{V_{ISSA}}`$ point, show the total error (including pixel-to-pixel and plate-to-plate variations, but not including systematic errors) discussed above.
As a check on the assumed value of $`R_V`$, we let $`R_V`$ vary in the calculation of $`A_{V_{sp}}`$ and then calculate $`\mathrm{\Delta }_{tot}=\mathrm{\Sigma }_{i=1}^N|R_VE_{B-V}-A_{V_{ISSA}}|_i`$ (where $`i`$ represents the $`i`$th point in the plot) for different values of $`R_V`$. We found that $`R_V\approx 3.05`$ minimizes the difference between the two curves ($`\mathrm{\Delta }_{tot}`$). This reassures us that the choice of a constant $`R_V=3.1`$ is a good assumption.
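A hedged sketch of this check (our reconstruction; `E_BV` and `Av_issa` are hypothetical names for the paired color-excess and ISSA-extinction arrays):

```python
import numpy as np

def best_Rv(E_BV, Av_issa, grid=np.arange(2.0, 4.5, 0.01)):
    """Grid search for the R_V minimizing Delta_tot = sum |R_V*E(B-V) - Av_issa|."""
    totals = [np.sum(np.abs(Rv * E_BV - Av_issa)) for Rv in grid]
    return grid[int(np.argmin(totals))]
```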
The traces of $`A_V`$ vs. declination in Figure 3a show a very striking similarity. Although the beam size of ISSA is $`5\mathrm{}`$ and $`A_{V_{sp}}`$ has a beam, due to seeing effects, of approximately 3″ (corresponding to 0.2 pc and 0.002 pc, respectively, at a distance of 140 pc), the two values agree within the errors in most places. A plot of ($`A_{V_{sp}}-A_{V_{ISSA}}`$) vs. declination is shown in Figure 3b. It can be seen that $`A_{V_{sp}}-A_{V_{ISSA}}\approx 0`$, within errors, for all points not in the vicinity of a steep increase in extinction (i.e. away from dark clouds and IRAS cores). In other words, in the low extinction regions $`A_{V_{sp}}`$ is very similar to $`A_{V_{ISSA}}`$, despite the great difference in the resolution of the two methods. This leads us to believe that there are no, or only very small, fluctuations in the extinction inside a 5′ beam in low $`A_V`$ regions. In order to put an upper limit on the magnitude of the fluctuations, we present a plot of $`A_{V_{sp}}-A_{V_{ISSA}}`$ divided by $`A_{V_{ISSA}}`$ in Figure 3c. We discuss this plot further in §4.1.
The gently sloping solid lines in Figures 3b and c are unweighted linear fits to the points in each of the plots. Both fits have a very small, but detectable, slope (0.1 mag/degree for Figure 3b). Previous studies using IRAS data have attributed the existence of gradients like these to imperfect zodiacal light subtraction (Wood et al. 1994). Moreover, the fact that the linear fit of the middle panel crosses the $`A_{V_{sp}}-A_{V_{ISSA}}`$ zero line at a declination of around 25.4°, near the middle of the overlap region between the northern and southern $`A_{V_{ISSA}}`$ maps (Figure 1), leads us to believe that the small gradient is due to imperfect ISSA image reduction. The gradient in $`A_{V_{ISSA}}`$ gives a pixel-to-pixel offset of 0.008 mag/beam, which is much less than the pixel-to-pixel random errors in $`A_{V_{ISSA}}`$, and far less than other systematic errors (e.g. the assumption of constant $`\beta `$), so we do not correct for it.
### 3.2 Star counting
With the help of IRAF, as described in §2.1, we located a total of 3,715 stars in cut 1 and 3,074 stars in cut 2, with $`14.5\le m_R<18.0`$, from the November 1995 $`R`$-band images. With this database of stellar positions we measure the extinction over the region covered by both cuts, using classical star counting techniques. First, we subdivide each cut into a rectilinear grid of overlapping squares, and then we count the total number of stars in each square. Our ultimate goal in star counting is to obtain the extinction of the region in a way that can be compared to the other techniques used in this paper. Therefore, in order to mimic the resolution and sampling frequency of the ISSA data, we made the counting squares 5′ on a side (the resolution of ISSA) and the centers of the squares were separated by 1.5′ (the size of an ISSA pixel).
Conventionally, measuring extinction from star counts involves comparing the integrated number of stars within a given cell towards the region of interest to a nearby reference field which is assumed to be free from extinction (Bok & Cordwell 1973). Several assumptions need to be made in order to use this method: 1) the population of stars background to the region of interest does not vary substantially and is similar to the reference field; 2) the extinction ($`A`$) is uniform over the count cell; and 3) the integrated surface density of stars for the reference field ($`N_{ref}`$) of stars brighter than the apparent magnitude, $`m`$, follows an exponential law with $`\mathrm{log}(N_{ref}(<m))=a+bm`$. The integrated surface density of stars for any other field under study, $`N_{on}`$, follows a similar law, $`\mathrm{log}(N_{on}(<m))=a+b(mA)`$.
In our case, the extinction to each square was obtained via:
$$A_{V_{on}}=A_{V_{ref}}+\frac{A_V}{A_R}\frac{\mathrm{log}(n_{ref}/n_{on})}{b_R}$$
(7)
where $`A_{V_{ref}}`$ is the extinction of the reference field, $`n_{ref}`$ is the number of stars in the reference field, and $`n_{on}`$ is the number of stars in any of the other counting cells. The quantity $`b_R`$ is the slope of the cumulative number of stars as a function of $`R`$ apparent magnitude (see assumption 3 in the previous paragraph). We calculate $`b_R`$ by fitting a line to $`\mathrm{log}(N_{ref}(<m_R))`$ for $`14.5\le m_R<18.0`$, and obtain a value of $`0.34\pm 0.02`$. The ratio $`A_V/A_R`$ is the reddening law between the $`V`$ and $`R`$ wavelengths; its value of 1.24 (He et al. 1995) is used to convert star counts in (Cousins) $`R`$ to extinction in $`V`$.
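For concreteness, a sketch of both steps, fitting $`b_R`$ and applying equation 7, is given below (our reconstruction, not the authors' code; array names are assumptions):

```python
import numpy as np

def slope_bR(ref_mags, m_lo=14.5, m_hi=18.0):
    """Slope b_R of log10 N(<m) = a + b*m, fit to the reference-field counts."""
    m = np.sort(ref_mags[(ref_mags >= m_lo) & (ref_mags < m_hi)])
    logN = np.log10(np.arange(1, m.size + 1))  # cumulative count at each star
    b_R, _intercept = np.polyfit(m, logN, 1)
    return b_R

def Av_star_counts(n_on, n_ref, Av_ref, b_R, Av_over_AR=1.24):
    """Equation 7: extinction of one counting cell from star counts."""
    return Av_ref + Av_over_AR * np.log10(n_ref / n_on) / b_R
```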
As mentioned above, in conventional star count studies, the reference fields are areas in the sky, close to the region under study, which have $`A_{V_{ref}}=0`$. This is not so in our case. Since our CCD data were not originally obtained for star counting purposes, we did not take any images of a reference field with $`A_{V_{ref}}=0`$. Thus, the reference fields were chosen to be regions where we could safely estimate the value of $`A_{V_{ref}}`$. Specifically, each reference field was chosen by virtue of its having a spectroscopic probe star for which we had derived the extinction through its color excess.
One of the major assumptions in conventional star counting studies is that the population of stars background to the cloud is similar to that in the reference field. This is why star count studies are only done over small regions of the sky. In our case, cut 1 (the longer cut) spans 6 degrees in declination, which corresponds to nearly 4° in Galactic latitude ($`-18.8\mathrm{°}<b^{II}<-14.9\mathrm{°}`$). This span in Galactic latitude is enough to have a big effect on the surface density of stars: one detects more stars per cell, for a constant $`A_V`$, the closer one observes to the galactic plane. (The star count Galaxy model of Reid et al. predicts that a 10 square degree field with no extinction centered at $`b^{II}=-14\mathrm{°}`$, $`l^{II}=172\mathrm{°}`$ will have approximately a factor of two more stars than a 10 square degree field with no extinction centered at $`b^{II}=-19\mathrm{°}`$, $`l^{II}=172\mathrm{°}`$.) Thus, it was imperative that we correct for Galactic changes in the stellar density. Not doing so would have resulted in an unreal drop in the derived extinction at lower Galactic latitudes. We account for the Galactic gradient by using 4 different, more or less evenly spaced, reference fields at different galactic latitudes along the cut. Each of these fields has a star with measured reddening (from Table 1) which was used to estimate the extinction of the reference field in question. Thus, for example, the star count extinction located between $`b^{II}=-18.9\mathrm{°}`$ and $`b^{II}=-17.9\mathrm{°}`$ (which transforms to $`22\mathrm{°}25\mathrm{}<\delta _{J2000}<23\mathrm{°}56\mathrm{}`$ in cut 1) is tied to the reference field where star 011001 lies; the star count extinction in the region with $`-17.9\mathrm{°}\le b^{II}<-16.9\mathrm{°}`$ (which transforms to $`23\mathrm{°}56\mathrm{}<\delta _{J2000}<25\mathrm{°}25\mathrm{}`$ in cut 1) is tied to the reference field where star 051043 lies; and so forth. The number of stars ($`n_{ref}`$), the visual extinction ($`A_{V_{ref}}`$), and the range in $`b^{II}`$ which each of the four reference fields calibrates are given in Table 3. It is important to stress that by doing this calibration we are tying $`A_{V_{sc}}`$ to $`A_{V_{sp}}`$ at these four points, and thus $`A_{V_{sc}}`$ is not totally independent of $`A_{V_{sp}}`$. But this procedure only forces $`A_{V_{sc}}`$ to agree in absolute value with $`A_{V_{sp}}`$ at four points; it does not force the two methods to share the same structure or scale through the cuts.
In our study, where observations lie primarily along a declination cut, the best way to graphically compare the extinction measured by star counting ($`A_{V_{sc}}`$) and from the ISSA images ($`A_{V_{ISSA}}`$) is to plot them both in an extinction versus declination plot. (Cut 1 is about 6 degrees long, but only 10′ wide, giving a ratio of 1:36 between length and width. This makes it practically impossible to show a legible figure of a star count extinction map of the cuts.) A value of the extinction was obtained for each $`5\mathrm{}\times 5\mathrm{}`$ counting box, and the extinction was then averaged over R.A. for every point in declination, so as to produce only one value of $`A_V`$ for each declination, independent of R.A. Constant declination slices every 5′ show that the variations in $`A_{V_{ISSA}}`$ across the 10′ width of each cut are not large. Most of the constant declination slices had standard deviations in $`A_{V_{ISSA}}`$ of less than 0.2 mag, and none exceeded 0.3 mag. Moreover, most adjacent counting boxes with the same declination differ in $`A_{V_{sc}}`$ by less than 0.2 mag. Therefore, we are confident that we are not introducing large errors by averaging the extinction over the approximately 10′ spanned by each cut in Right Ascension. On the other hand, by averaging over R.A., the sensitivity to small scale fluctuations decreases: the smoothed (averaged) extinction trace has less sensitivity to fluctuations on 5′ scales than on 10′ scales, where it reaches full sensitivity. Nevertheless, averaging the extinction over R.A. does not create any disadvantage for the purpose of comparing the different ways of calculating the extinction, since all the techniques are averaged over the same width.
Figure 4 shows $`A_{V_{sc}}`$ and $`A_{V_{ISSA}}`$ (now both averaged over the approximately 10′ spanned by each cut in R.A.) versus declination, for both cuts. (Notice that $`A_{V_{ISSA}}`$ in Figure 4 is an average over the approximately 10′ width of the frames taken in the optical, but still has 5′ resolution along the declination direction. Thus, these are not exactly the same values of $`A_{V_{ISSA}}`$ shown in Figure 3, where $`A_{V_{ISSA}}`$ values for 1.5′ single ISSA pixels, with 5′ resolution, at individual stellar positions are plotted.) The random errors in the star count trace are plotted for each point. The uncertainty in the number of stars in each sampling box is given by $`\sqrt{n}`$ (Poisson statistics), where $`n`$ is the number of stars counted in the box (Bok 1937). The uncertainty in the extinction (with contributions from the uncertainty in $`n`$, $`n_{ref}`$, $`A_{V_{ref}}`$, and $`b_R`$) for each sampling box with the same declination is then averaged to give the final (plotted) error in the extinction. Like the ratio of total-to-selective extinction ($`R_V`$), the value of $`A_V/A_R`$ may vary from one line of sight to the other. Also, like $`R_V`$ in section 2.2, here we use the ISM average value of $`A_V/A_R`$. We do not take the errors caused by assuming a constant value of $`A_V/A_R`$ into account when we calculate the errors in $`A_{V_{sc}}`$, as we have no way of calculating them.
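The text does not write out this propagation explicitly; one plausible form (our reconstruction, treating $`n_{on}`$ and $`n_{ref}`$ as Poisson variables and propagating equation 7 with independent errors) is:

$$\sigma _{A_{V_{on}}}^2\simeq \sigma _{A_{V_{ref}}}^2+\left(\frac{A_V}{A_R}\frac{1}{b_R\mathrm{ln}10}\right)^2\left(\frac{1}{n_{on}}+\frac{1}{n_{ref}}\right)+\left(\frac{A_V}{A_R}\frac{\mathrm{log}(n_{ref}/n_{on})}{b_R^2}\right)^2\sigma _{b_R}^2$$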
It can be seen that both $`A_{V_{ISSA}}`$ and $`A_{V_{sc}}`$ show the same gross extinction structure. Both have local maxima and minima in the same places (in most cases), but not necessarily at the same value. Both traces detect rises in extinction associated with Tau M1, L1506, B216-217, and Tau B5-B11. It is clear that $`A_{V_{sc}}`$ has more fluctuations than the $`A_{V_{ISSA}}`$ trace. These fluctuations are not likely to be real, as most of them are of the same magnitude as the errors in $`A_{V_{sc}}`$. Most of the “noise” in the extinction is due to the fact that the star count technique is very dependent on the assumption of a constant background stellar surface density. Real (small) changes in the background surface density of stars (not caused by extinction) will produce unreal fluctuations in the resultant $`A_{V_{sc}}`$.
Note the unreal dip in the cut 2 trace of $`A_{V_{ISSA}}`$ (see Figure 4), caused by assuming a constant dust temperature for those lines of sight where there are IRAS point sources which heat the dust around them (see section 2.3). This shows the potentially large errors in $`A_{V_{ISSA}}`$ that can arise from assuming a single dust temperature ($`T_d`$) for each line of sight. The other place where there is a significant discrepancy between $`A_{V_{ISSA}}`$ and $`A_{V_{sc}}`$ is in the rise in extinction associated with Tau B5-B11 (see Figure 4), where the two methods disagree by more than $`1\sigma `$ of the error in $`A_{V_{sc}}`$. This discrepancy will be discussed further in §4.2.
### 3.3 Extinction measured via average color excess method
Taking advantage of the fact that we had taken our November 1995 images in more than one broad band filter, we used the method developed by LLCB to study the extinction along both cuts in yet another way. This method consists of assuming that the color distribution of stars observed all along the cut is identical in nature to that of stars in a control field. With this assumption one can use the mean $`V-R`$ color of the stars in the control field to approximate the intrinsic $`V-R`$ color of all stars background to the cloud. (Note that LLCB’s analysis was in the near-infrared, and they used $`H-K`$ colors, which have even less intrinsic variation than $`V-R`$ colors.) Using the same technique as in star counting, the region under study is divided into a grid of overlapping counting boxes, and then an average of the color excess (and extinction) of the stars in each counting box is obtained.
To derive extinction from the average color excess method, we used the same sample of stars, with $`14.5\le m_R<18.0`$, used in our star counting study (§3.2). The region between declinations 22.9° and 23.22° (B1950) was used as our reference field since, as can be seen in Figure 3a, this region has a uniform extinction within the errors. We took an average of the extinction of the 6 stars for which we had spectral types and which lie inside this region, and obtained a value of $`A_{V_{ref}}=0.72\pm 0.2`$. We did not divide the area under study into square sampling boxes as is usually done in star count studies (see previous section). Instead, the area under study was divided into rectangular cells, where the R.A. side of the rectangle was dictated by the frames’ width ($`10\mathrm{}`$) and the declination side was set to be 5′. The centers of the rectangles were separated in declination by 1.5′. This was done in order to keep the same resolution and sampling frequency in the declination direction as in the ISSA and star counting methods (described in the previous sections). We are not sensitive to any variations in extinction within each measurement rectangle, but Figure 3a suggests that such variations are very small.
The number of stars in each rectangle was counted and the color excess of each star was obtained using the formula:
$$E_{V-R}=(V-R)-\langle V-R\rangle _{ref}$$
(8)
where $`\langle V-R\rangle _{ref}`$ is the mean $`V-R`$ color of the reference field, which in our case is $`0.60\pm 0.05`$ mag. Figure 5 shows the distribution of $`V-R`$ colors in the reference field. (Notice that the width of the color distribution in Figure 5 is significantly larger than the error in the mean (0.05 mag). The spread in near-IR (e.g. $`H-K`$) color for a field like this would be much narrower, which is why the average color excess method is preferably applied in the near-IR; LLCB.) We then apply equation 8 to each star in a counting rectangle and obtain a mean color excess for each rectangle:
$$\langle E_{V-R}\rangle =\frac{\mathrm{\Sigma }_i^N(E_{V-R})_i}{N}$$
(9)
where $`N`$ is the number of stars in a counting rectangle and $`(E_{V-R})_i`$ is the color excess of the $`i`$th star. The mean visual extinction is obtained using:
$$\langle A_V\rangle =A_{V_{ref}}+\frac{A_V}{E_{V-R}}\times \langle E_{V-R}\rangle $$
(10)
where $`A_{V_{ref}}`$ is the extinction of the reference field (0.72 mag), and we use the ISM average value of 5.08 for the ratio $`A_V/E_{V-R}`$ (He et al. 1995). As with the extinction from star counts, the extinction using the average color excess (from now on $`A_{V_{ce}}`$) is not totally independent of the spectral classification method. Recall that $`A_{V_{ref}}`$, which is used to calibrate $`A_{V_{ce}}`$, is determined using measurements of $`A_{V_{sp}}`$. This calibration forces $`A_{V_{ce}}`$ to agree in absolute value with the average of $`A_{V_{sp}}`$ in the reference field, but it does not force $`A_{V_{ce}}`$ to have the same structure or scale in the extinction curves throughout the cuts.
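A compact sketch combining equations 8, 9, and 10 for a single counting rectangle (ours, not the authors' code; the default constants are the values quoted in the text):

```python
import numpy as np

def Av_color_excess(VR_colors, VR_ref=0.60, Av_ref=0.72, Av_over_Evr=5.08):
    """Mean extinction of one counting cell from the V-R colors of its stars."""
    E = np.asarray(VR_colors) - VR_ref       # equation 8, one color excess per star
    return Av_ref + Av_over_Evr * E.mean()   # equations 9 and 10 combined
```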
Using the average color excess technique, we are able to detect the same overall trends in extinction found from the ISSA plates, spectral analysis, and star counting (see Figure 6). The rises in extinction due to Tau M1, L1506, B216-217, and Tau B5-B11 can be seen as well-pronounced peaks in $`A_V`$. Again, note the unreal dip in the cut 2 trace of $`A_{V_{ISSA}}`$, caused by the presence of four IRAS point sources in the dark cloud B216-217 (see section 2.3). In Figure 6 we also show the random errors of each point in the $`A_{V_{ce}}`$ trace. The measurement uncertainty in $`A_{V_{ce}}`$ for any given counting cell is given by:
$$\sigma _{A_{V_{ce}}}=\sqrt{\sigma _{ref}^2+(5.08)^2\left[\sigma _{mean}^2+\mathrm{\Sigma }_1^N\frac{\sigma _{(V-R)_i}^2}{N^2}\right]}$$
(11)
where $`\sigma _{ref}`$ is the uncertainty in $`A_{V_{ref}}`$ (which is equal to 0.2 mag), $`N`$ is the number of stars in the counting cell and $`\sigma _{(V-R)_i}`$ is the photometric error in $`V-R`$ of the $`i`$th star inside the cell. The quantity $`\sigma _{mean}`$ is the error in the mean of the $`V-R`$ color distribution of the counting cell. The distribution of $`V-R`$ colors is not gaussian, so $`\sigma _{mean}`$ was obtained using Monte Carlo simulations. We drew $`N_{mc}`$ values of $`V-R`$, representing the $`V-R`$ colors of $`N_{mc}`$ stars in a counting cell, from a distribution given by the reference field distribution (Figure 5). We then computed the average $`V-R`$ color over these $`N_{mc}`$ stars. The procedure was repeated a thousand times with the same $`N_{mc}`$, from which we obtained a (gaussian) distribution of average values. The 1$`\sigma `$ width of this gaussian was then used as the value of $`\sigma _{mean}`$. This procedure was repeated for different values of $`N_{mc}`$ (representing different numbers of stars inside a counting cell). We do not include the errors in $`A_{V_{ce}}`$ caused by assuming a constant $`A_V/E_{V-R}`$ for all lines of sight.
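The Monte Carlo estimate of $`\sigma _{mean}`$ can be sketched as follows (our reconstruction; the reference-field colors are drawn with replacement):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma_mean(VR_reference, N_mc, trials=1000):
    """Width of the distribution of cell-averaged V-R colors for N_mc stars
    drawn from the observed reference-field color distribution (Figure 5)."""
    draws = rng.choice(np.asarray(VR_reference), size=(trials, N_mc), replace=True)
    return draws.mean(axis=1).std()
```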
Although $`A_{V_{ce}}`$ and $`A_{V_{ISSA}}`$ agree very well for the low declination ($`\delta <25.5\mathrm{°}`$) part of both cuts, their values seem to diverge at higher declinations ($`\delta >25.5\mathrm{°}`$; Figure 6). This effect suggests that it may be inappropriate to assume a single average color for the whole length (6°) of our cuts. A change in the average $`V-R`$ could be due to: 1) a gradient in $`V-R`$ caused by a greater fraction of early type stars close to the galactic plane; and/or 2) a sudden drop in the value of $`V-R`$ at the north edge of the cuts due to the presence of a star cluster. Concerning the first point, the star count Galaxy model of Reid et al. (1996) predicts that a 10 square degree field with no extinction centered at $`b^{II}=-14\mathrm{°}`$, $`l^{II}=172\mathrm{°}`$ will have an average stellar $`V-R`$ color approximately 0.04 mag greater than a 10 square degree field with no extinction centered at $`b^{II}=-19\mathrm{°}`$, $`l^{II}=172\mathrm{°}`$. A difference of 0.04 mag in $`V-R`$ transforms to a difference of 0.2 mag in $`A_{V_{ce}}`$. Thus, it is possible that at least some of the discrepancy between $`A_{V_{ce}}`$ and $`A_{V_{ISSA}}`$ in the northern parts of both cuts is due to this uncorrected effect of varying $`V-R`$. We will discuss the possibility of a cluster in section 4.3.
## 4 Analysis and Discussion
### 4.1 Structure in the cloud
Figure 3a shows a striking resemblance between the extinction obtained through the use of the ISSA 60 and 100 µm images ($`A_{V_{ISSA}}`$) and the extinction obtained through the color excess of individual stars which we classified by spectral type ($`A_{V_{sp}}`$). Our stellar reddening (color excess) sample represents a map of the distribution of extinction ($`A_{V_{sp}}`$) which, although it has “pencil beam” resolution, is measured in a spatially nonuniform fashion. On the other hand, the extinction data obtained from the ISSA images are spatially continuous, with 1.5′ pixels and a resolution of 5′ ($`\approx 0.2`$ pc at a distance of 140 pc). Most stars with measured $`A_{V_{sp}}`$ are less than 5′ from their nearest star with measured $`A_{V_{sp}}`$, and there are a number of cases where three and even four stars lie within 5′ of each other. Therefore, if one were to place a 5′ beam anywhere along cut 1, one would find that 1 to 4 stars would lie, in random places, inside the beam. Thus, if big fluctuations in the dust distribution existed inside the 0.2 pc beam of IRAS, we would see strong variations between the values of $`A_{V_{sp}}`$ and $`A_{V_{ISSA}}`$. Note that this fluctuation probe can only be used where there are stars that have been classified by spectra.
Using the extinction measurements shown in Figure 3, we can place an upper limit on the fluctuations in the dust distribution within a 5′ beam. In Figure 3c (the bottom panel of Figure 3) we plot the difference between $`A_{V_{sp}}`$ and $`A_{V_{ISSA}}`$ divided by $`A_{V_{ISSA}}`$ (from now on $`\mathrm{\Delta }A_V/A_V`$) versus declination. Here the errors are obtained from the quoted errors for $`A_{V_{sp}}`$ (see Table 1) and $`A_{V_{ISSA}}`$ (0.12 mag) by standard error propagation. One can think of $`\mathrm{\Delta }A_V/A_V`$ as a measurement of the deviations from the average extinction within a fixed area of the sky. In our case the area is given by the 5′ beam of $`A_{V_{ISSA}}`$.
Figure 3c has only four points whose absolute value is more than 3 times their 1 $`\sigma _i`$ error (independent of whether we correct for the small gradient in $`A_V`$ with declination), where $`\sigma _i`$ is the error of each individual point in Figure 3c. Each of these four points is associated with one of the extinction peaks created by dark clouds and IRAS cores intersecting the cut. The high values of $`\mathrm{\Delta }A_V/A_V`$ could be due to two effects indistinguishable by our data: spatially unresolved steep gradients in the extinction, or random fluctuations in the dust distribution inside dark clouds and IRAS cores. These possibilities will each be discussed later. All of the remaining lines of sight, in the more uniform extinction areas, have absolute values of $`\mathrm{\Delta }A_V/A_V`$ which are less than 3 times their 1 $`\sigma _i`$ error. If we exclude points near dramatic extinction peaks (see Figure 3), then we do not detect any deviations from zero in $`\mathrm{\Delta }A_V/A_V`$ within our sensitivity.
We can characterize our sensitivity to extinction fluctuations using the average error in $`\mathrm{\Delta }A_V/A_V`$, which is given by $`\sigma _{av}=\mathrm{\Sigma }_i^N\sigma _i/N`$. The 1-$`\sigma _{av}`$ error in $`\mathrm{\Delta }A_V/A_V`$ for points with $`A_{V_{ISSA}}<0.9`$ mag is 0.41, while for points with $`A_{V_{ISSA}}\ge 0.9`$ mag it is 0.15. We choose $`A_{V_{ISSA}}=0.9`$ as the dividing line since points with $`A_{V_{ISSA}}<0.9`$ mag have consistently large uncertainties. Assuming that 0.15 is the “typical” error in $`\mathrm{\Delta }A_V/A_V`$ for points with $`A_{V_{ISSA}}\ge 0.9`$, a real detection (using a 3-$`\sigma _{av}`$ detection limit) of sub-IRAS-beam structure would require $`\mathrm{\Delta }A_V/A_V\ge 0.45`$. Therefore only in places where $`\mathrm{\Delta }A_V/A_V>0.45`$ can we say that there are sub-IRAS-beam structures in the cloud. Any value less than 0.45 would be considered part of the “noise” and not significant enough to be considered a detection of sub 0.2 pc structure. So, ultimately, we only detect deviations from the mean $`A_V`$ within a 0.2 pc beam in the vicinity of IRAS cores and dark clouds. Outside of those regions, deviations within a 0.2 pc beam are limited to $`\mathrm{\Delta }A_V/A_V<0.45`$ (for $`A_V>0.9`$). For points with extinction less than 0.9 mag, the larger uncertainty in the extinction determinations means that only points with $`\mathrm{\Delta }A_V/A_V\ge 1.23`$ would be real fluctuation detections, and no such points are found.
### 4.2 Evidence for smooth clouds
In a very important study, Lada et al. (1998; hereafter LAL) recently showed that smoothly varying density gradients can produce the “fluctuations” observed in extinction studies of filamentary clouds. Two studies of dust extinction in filamentary dark clouds had been conducted prior to LAL: 1) the study of IC 5146 by LLCB; and 2) a study of L977 by Alves et al. (1998). Both studies find that in $`1.5\mathrm{}\times 1.5\mathrm{}`$ cells the dispersion of extinction measurements within a square map pixel (what LLCB name $`\sigma _{disp}`$) increases in a systematic way with the average $`A_V`$, in the range $`0<A_V<25`$ mag. Both studies conclude that the systematic trend in their $`\sigma _{disp}`$ vs. $`A_V`$ plot, an increase of $`\sigma _{disp}`$ with $`A_V`$, is due to variations in the cloud structure on scales smaller than the resolution of their measurements. But neither of the studies could definitively determine the nature of the fluctuations in the extinction. This prompted LAL to study IC 5146 in the same way as LLCB, but at a higher spatial resolution (30″). With the help of Monte Carlo simulations, LAL conclude that the form and slope of the $`\sigma _{disp}`$ vs. $`A_V`$ relation, and hence most (if not all) of the small scale variations in the extinction, are due to unresolved gradients in the dust distribution within the filamentary clouds. LAL note that random spatial fluctuations in the dust distribution could exist, at a very low level, in addition to the smooth gradients. They state that $`\sigma _{ran}/A_V`$ due to random fluctuations is much less than 25% at $`A_V\approx 30`$ mag, which is consistent with our ($`3\sigma `$) upper limit of $`\mathrm{\Delta }A_V/A_V=0.45`$ at $`0.9<A_V<3.0`$ mag.
Recently, Thoraval et al. (1997), hereafter TBD, observed a low $`A_V`$ area of the IC 5146 dark cloud complex. Similar to our study, TBD concentrate their observations on a low and uniform extinction region ($`A_V<5`$), but unlike our study, the region that TBD studied did not include a filamentary cloud. They conclude that variations in the extinction are present at a level no larger than $`\mathrm{\Delta }A_V/A_V\approx 0.25`$, again consistent with our ($`3\sigma `$) upper limit of $`\mathrm{\Delta }A_V/A_V=0.45`$, and similar to what LAL obtain in the high extinction region of IC 5146. If we exclude the points near extinction peaks, our study yields the same result as TBD: no fluctuations on scales smaller than the resolution. These results all suggest that there is very little random spatial fluctuation in the extinction in regions of low $`A_V`$ far from extinction peaks.
Although it is not possible to definitively determine the origin of the handful of high $`\mathrm{\Delta }A_V/A_V`$ values we find near extinction peaks, it is more likely that they are due to unresolved steep gradients in clouds than to localized random fluctuations in the dust distribution. The filamentary dark clouds and IRAS cores typically have a minor axis that is only 3 to 4 times the IRAS beam size, so some IRAS beams will undoubtedly contain a steep extinction gradient characterizing the “edge” of one of these structures. Extinction measurements using a pencil beam (e.g. $`A_{V_{sp}}`$) would be able to resolve this “edge,” so, near edges, large-beam (e.g. $`A_{V_{ISSA}}`$) and pencil-beam measurements would disagree. The results of TBD and LAL reinforce this hypothesis. Thus, we strongly believe that the high values of $`\mathrm{\Delta }A_V/A_V`$ near dark clouds and IRAS cores are due to steep gradients in the extinction not resolved by the IRAS beam.
### 4.3 Possible discovery of a previously unknown cluster
While comparing our different ways of obtaining the extinction we found a peculiar discrepancy between $`A_{V_{ISSA}}`$ and $`A_{V_{sc}}`$, and between $`A_{V_{ISSA}}`$ and $`A_{V_{ce}}`$, in the declination range from 27.2° to 28° (B1950) along the two cuts (see Figure 4). The rise in ISSA extinction at these declinations is due to the existence of a high dust concentration which Wood et al. (1994) classify as the IRAS cores Tau B5 and Tau B11 (Tau B5-B11). The trace of $`A_{V_{ISSA}}`$ shows that the increase in the extinction associated with Tau B5-B11 is similar to or larger than the increase in extinction associated with the dark cloud B216-217, in both cuts (see Figure 6). On the other hand, the star count extinction and the average color excess extinction show a small increase in $`A_V`$ associated with Tau B5-B11 compared to that associated with B216-217. In addition, there is no sharp decrease in the surface density of stars like the one associated with the two dark clouds in our cuts. This can be observed both in the spatial distribution of our $`R`$-frame stars and in the Digitized Palomar Sky Survey.
One possible explanation for the discrepancy between extinction traces is that either $`A_{V_{ISSA}}`$ or $`A_{V_{ce}}`$ and $`A_{V_{sc}}`$ were calculated using the wrong assumptions. It could be that the dust in the Tau B5-B11 region has different physical properties compared to the rest of the dust in the Taurus dark cloud complex, which would change the values of $`\beta `$ or of $`A_V/A_R`$ and $`A_V/E_{V-R}`$. When we calculated $`A_{V_{ISSA}}`$, $`A_{V_{sc}}`$, and $`A_{V_{ce}}`$, we assumed that the power law index $`\beta `$ (equation 4), $`A_V/A_R`$ (equation 7) and $`A_V/E_{V-R}`$ (equation 10) were constant for all lines of sight. We could change the value of $`A_V/A_R`$ (equation 7) from 1.24 to 1.74 in order for $`A_{V_{sc}}`$ to be approximately equal to $`A_{V_{ISSA}}`$ for the lines of sight that pass through Tau B5-B11. However, the value of $`A_V/A_R`$ is tied to the value of $`A_V/E_{V-R}`$ by the equation $`\frac{A_V}{E_{V-R}}=\frac{1}{1-\frac{A_R}{A_V}}`$, so the proposed change in $`A_V/A_R`$ would change $`A_V/E_{V-R}`$ from 5.08 to 2.35, making the discrepancy between $`A_{V_{ce}}`$ and $`A_{V_{ISSA}}`$ (Figure 6) more pronounced. An alternate possibility is that the value of $`\beta `$ changes from 1 to a value less than 1 for lines of sight in the region of Tau B5-B11, in which case $`A_{V_{ISSA}}\approx A_{V_{sc}}\approx A_{V_{ce}}`$. Although it is possible to have neighboring lines of sight with different dust properties, it is very unlikely (but not impossible) to have dust with $`\beta <1`$ in a region like Tau B5-B11, according to experimental and theoretical studies of dust properties (Weintraub et al. 1991; Pollack et al. 1994).
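As an arithmetic check of the numbers above (our calculation; the small offset from the quoted 5.08 presumably reflects rounding of the He et al. 1995 coefficients):

$$\frac{A_V}{A_R}=1.24\Rightarrow \frac{A_V}{E_{V-R}}=\frac{1}{1-1/1.24}\approx 5.2,\qquad \frac{A_V}{A_R}=1.74\Rightarrow \frac{A_V}{E_{V-R}}=\frac{1}{1-1/1.74}\approx 2.35$$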
A more likely explanation for the discrepancy between extinction traces near Tau B5-B11 is that there is a sharp increase in the stellar distribution in the area, which we did not account for when we calculated $`A_{V_{sc}}`$. The existence of a previously unknown stellar cluster in the vicinity of R.A. 4h 19m, Dec. $`27\mathrm{°}30\mathrm{}`$ (B1950) could create such an increase in the number of stars in the region. Also, if the cluster is relatively young, the average stellar colors should be bluer than in the field. This can explain why the $`A_{V_{ce}}`$ trace follows, within the error, the structure present in the $`A_{V_{ISSA}}`$ trace, but at a lower value, while the $`A_{V_{sc}}`$ trace decreases in extinction value without following the structure in $`A_{V_{ISSA}}`$.
We conclude that there is a sudden increase in the stellar distribution background to Tau B5-B11 (and a change in the average $`V-R`$ color), which is due to a previously unknown open star cluster. (The Lynga catalog of star clusters (Lynga 1985) does not contain a cluster in the vicinity of R.A. 4h 19m, Dec. 27°30′ (B1950).) This is more credible than a change in dust properties, since the cluster hypothesis does not require assuming a physically contradictory, simultaneous change in the values of $`A_V/A_R`$ and $`A_V/E_{V-R}`$, or an improbable value of $`\beta `$. We expect that further observations of the area will verify the existence of an open cluster behind Tau B5-B11.
## 5 Rating the Various Methods
Although we find that all four methods give generally similar results and are consistent with each other, there are some important exceptions. The discrepancies arise from different systematic errors inherent to the different techniques. For example, when calculating $`A_{V_{ISSA}}`$, a constant dust temperature for each line of sight was assumed. This single temperature assumption breaks down in the immediate vicinity of stars surrounded by dust. Even though it is very likely that there is dust of many different temperatures along the lines of sight to these stars, it is the hot dust that dominates the emission at 60 and 100 µm, resulting in an incorrect estimate of $`\tau _{100}`$ (and $`A_{V_{ISSA}}`$), as we observe in the regions near IRAS sources.
The techniques that use background stellar populations to measure the extinction (i.e., star counting and the average color excess method) also suffer from important systematic errors. Here the major systematic error lies in assuming a constant stellar population background to the cloud. If the region under study spans several degrees in galactic latitude (which is our case), uncorrected gradients in the background stellar density and in the average stellar color will lead to incorrect extinction measurements for the star count method and the average color excess method, respectively. In addition, small fluctuations in the background surface stellar density can result in unreal fluctuations in the extinction.
It is important to appreciate that the techniques used in this Paper are not entirely standalone or independent methods of obtaining extinction. Even the most exact method for measuring extinction, using the color excess of individual stars with measured spectral type ($`A_{V_{sp}}`$), still depends on the value of $`R_V`$. Both star counting ($`A_{V_{sc}}`$) and the average color excess ($`A_{V_{ce}}`$) technique depend on a reference field of known extinction for calibration. In this study, $`A_{V_{sc}}`$ and $`A_{V_{ce}}`$ are calibrated using measurements of $`A_{V_{sp}}`$ in chosen reference fields, which means that $`A_{V_{sc}}`$ and $`A_{V_{ce}}`$ are not completely independent of $`A_{V_{sp}}`$. Note, though, that the calibration procedures only force the methods to agree at a limited number of points; they do not force them to have the same structure or scale through the cuts. $`A_{V_{ISSA}}`$ depends on a conversion from dust opacity at 100 µm to visual extinction, which ultimately relies on the star count data of Jarrett et al. (1989). Thus, $`A_{V_{ISSA}}`$ is independent of the other extinction methods in this study, but it is tied to the star count data of Jarrett et al. (1989).
Table 4 outlines the advantages and disadvantages, and the random and systematic errors, of each of the four different methods of measuring extinction used in this Paper.
In principle, it would seem that the best method to calculate extinction is using the color excess of individual stars with measured spectral type, but this method is not without problems. One inconvenience is the fact that $`R_V`$ could have different values for different lines of sight. But systematic errors due to unknown constants are also present in the other methods ($`\beta `$ for $`A_{V_{ISSA}}`$, $`A_V/A_R`$ for $`A_{V_{sc}}`$, and $`A_V/E_{V-R}`$ for $`A_{V_{ce}}`$). Therefore not knowing the specific value of $`R_V`$ for every observed line of sight is not a disadvantage relative to the other methods. The real drawback of this technique is the large amount of time required to measure each and every star’s spectrum. Thus, although using the color excess of background stars with known spectral type is the most direct and accurate way of measuring the extinction, it is a very time-consuming procedure and it measures the extinction in a spatially nonuniform fashion.
To assess the robustness of the four methods of measuring extinction used in this Paper, we constructed plots of $`A_{V_{ISSA}}`$, $`A_{V_{sc}}`$, and $`A_{V_{ce}}`$ versus $`A_{V_{sp}}`$, at each point where all four methods can be used. We did this in order to obtain a least-squares fit for each of the three remaining methods plotted against $`A_{V_{sp}}`$. The four points with the highest extinction were not included in the fits, since these are points near the extinction peaks of dark clouds. It is clear that for these four points $`A_{V_{sp}}`$ is larger than the values from any of the other three methods, since the 5′ beam of the other methods does not resolve the extinction gradients in these regions of high extinction (see section 4.1). When we constrain the fits to have a zero intercept, we find a slope of one (within the errors) in all three comparisons. Thus, we believe all four methods of obtaining extinction are robust.
## 6 Summary and Conclusions
We studied the extinction of a region of Taurus in four different ways: using the color excesses of background stars for which we had spectral types; using the ISSA 60 and 100 µm images; using star counting; and using an optical ($`V`$ and $`R`$) version of the average color excess technique of Lada et al. (1994). All four give generally similar results. Therefore, any of the methods discussed above can be used to obtain reliable information about the extinction in regions where $`A_V\lesssim 4`$ mag.
We inter-compared the ISSA extinction and the extinction measured using individual stars to study the spatial fluctuations in the dust distribution. Excluding areas where there are extinction gradients due to filamentary dark clouds and IRAS cores, we do not detect any variations in the structure on scales smaller than 0.2 pc. With this result we are able to place a constraint on the magnitude of the fluctuations. We conclude that in the regions with $`0.9<A_V<3`$ mag, away from filamentary dark clouds and IRAS cores, there are no fluctuations in the dust column density greater than 45% (at the 99.7% confidence level) on scales smaller than 0.2 pc. On the other hand, in regions of high extinction in the vicinity of dark clouds and IRAS cores, we do detect statistically significant deviations from the mean dust column density on scales smaller than 0.2 pc. Although it is not possible to definitively determine the nature of the fluctuations with our data alone, the results of other studies (Lada et al. 1998, Thoraval et al. 1997) taken together with ours strongly favor unresolved steep gradients in clouds over random fluctuations in the dust distribution.
A discrepancy between the extinction obtained through star counting and average color excess and the rest of the techniques in the vicinity of R.A. 4h 19m, Dec. 27°30′ (B1950), leads us to believe in the existence of a previously unknown open stellar cluster in the region.
We would like to thank Lucas M. Macri very much for his great help in the photometry analysis and taking time off his observing round to take a few frames for us in October 1996. We would also like to thank Elizabeth Barton and Warren Brown for taking the November 1996 and March 1997 frames. A special thanks goes to Perry Berlind for helping us with the acquisition of the spectral data. We would like to give special thanks to our referee Dan Clemens, as well as to Douglas Finkbeiner for their very helpful comments and thorough critique of the paper. Also thanks to Scott J. Kenyon, George Field, Charlie Lada, and John Huchra for their helpful remarks and Ian Reid for sharing his star count Galaxy model with us.
# Evidence for vortex staircases in the whole angular range due to competing correlated pinning mechanisms
(November 30, 1998)
## Abstract
We analyze the angular dependence of the irreversible magnetization of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> crystals with columnar defects inclined from the c-axis. At high fields a sharp maximum centered at the tracks’ direction is observed. At low fields we identify a lock-in phase characterized by an angle-independent pinning strength and observe an angular shift of the peak towards the c-axis that originates in the material anisotropy. The interplay among columnar defects, twins and ab-planes generates a variety of staircase structures. We show that correlated pinning dominates for all field orientations.
A difficult aspect of the study of vortex dynamics in HTSC in the presence of correlated disorder is the determination of flux structures for applied fields tilted with respect to the pinning potential. As 3D vortex configurations cannot be directly observed, our knowledge is mostly based on the analysis of the angular dependence of magnetization, susceptibility or transport data.
According to theoretical models, when the angle between the applied field $`𝐇`$ and the defects is smaller than the lock-in angle $`\phi _L`$ vortices remain locked into the defects thus producing a transverse Meissner effect. For tilt angles larger than $`\phi _L`$ and smaller than a trapping angle $`\phi _T`$, vortices form staircases with segments pinned into different defects and connected by unpinned or weakly pinned kinks. Beyond $`\phi _T`$, vortices will be straight and take the direction of the applied field, thus being unaffected by the correlated nature of the pinning. In principle, this picture should apply with minor differences to twins, columnar defects and intrinsic pinning.
Many experiments have confirmed the directional pinning due to columnar defects, twins and Cu-O planes. Evidence for a locked-in phase arises from the observation of the transverse Meissner effect, but a quantitative determination of $`\phi _L(H,T)`$ for columnar defects had not been done until now. The introduction of columnar defects inclined with respect to the c-axis has been used to discriminate their pinning effects from those due to twin boundaries, and from anisotropy effects. However, the combined effect of the various correlated structures on the vortex configurations remain largely unexplored.
In this work we report studies of the vortex pinning in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> crystals with inclined columnar defects, for the whole range of field orientations. This allows us to determine the misalignment between the applied and internal fields due to anisotropy, as well as to identify the angular range of influence of each correlated pinning structure. We present the first determination of the lock-in angle of tracks using irreversible magnetization.
The crystal used in this study was grown by the self flux method, and has dimensions $`200\times 600\times 8.5\mu m^3`$. Columnar defects at an angle $`\mathrm{\Theta }_D\approx 32\mathrm{°}`$ from the c-axis and a density corresponding to a matching field $`B_\mathrm{\Phi }=3T`$ were introduced by irradiation with $`315`$ $`MeV`$ $`Au^{23+}`$ ions at the Tandar accelerator (Buenos Aires, Argentina).
DC magnetization $`𝐌`$ was measured in a Quantum Design SQUID magnetometer with two sets of pick up coils, and both the longitudinal ($`M_l`$, parallel to $`𝐇`$) and transverse ($`M_t`$, perpendicular to $`𝐇`$) components were recorded. The sample could be rotated in situ around an axis perpendicular to $`𝐇`$ using a home-made device. The angle $`\mathrm{\Theta }`$ between the normal to the crystal (which coincides with the c-axis) and $`𝐇`$ was determined with an absolute accuracy of $`1\mathrm{°}`$, and with relative variations between adjacent angles better than $`0.2\mathrm{°}`$. The details of the procedure are described elsewhere.
Magnetization loops $`M_l(H)`$ and $`M_t(H)`$ were recorded at fixed $`T`$ and $`\mathrm{\Theta }`$. The sample was then rotated, warmed up above $`T_c`$ and cooled down in zero field to start a new run. We use the hysteresis widths $`\mathrm{\Delta }M_l\left(H\right)`$ and $`\mathrm{\Delta }M_t\left(H\right)`$ to calculate the modulus $`M_i=\frac{1}{2}\sqrt{\mathrm{\Delta }M_l^2+\mathrm{\Delta }M_t^2}`$ and direction of the irreversible magnetization vector $`𝐌_i`$. It is known that in thin samples $`𝐌_i`$ is normal to the surface due to geometrical constraints, except above a critical angle (of $`87\mathrm{°}`$ for our crystal). We confirmed that $`𝐌_i\parallel c`$ within $`1\mathrm{°}`$, for all $`\mathrm{\Theta }<85\mathrm{°}`$.
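A minimal sketch of this construction (ours; `dMl` and `dMt` are hypothetical arrays of hysteresis widths at each field):

```python
import numpy as np

def irreversible_magnetization(dMl, dMt):
    """Modulus of M_i and its angle (degrees, from the longitudinal direction),
    built from the hysteresis widths of the two loop components."""
    Mi = 0.5 * np.hypot(dMl, dMt)
    angle = np.degrees(np.arctan2(dMt, dMl))
    return Mi, angle
```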
From now on we analyze the modulus $`M_i`$ as a function of $`T`$, $`H`$ and $`\mathrm{\Theta }`$. Figure 1 shows $`M_i(\mathrm{\Theta })`$ at $`60`$ and $`70K`$ and several values of $`H`$. According to the Bean model, $`M_i\propto J`$, where the screening current density $`J`$ is lower than $`J_c`$ due to thermal relaxation. The geometrical factor $`M_i/J`$ depends on $`\mathrm{\Theta }`$, but is almost constant for $`\mathrm{\Theta }`$ below the critical angle.
The most obvious feature of Fig. 1 is the asymmetry with respect to the c-axis, which is due to the uniaxial pinning of the inclined tracks. At high fields ($`H\ge 1T`$) we observe a large peak in the direction of the tracks, $`\mathrm{\Theta }_D\approx 32\mathrm{°}`$. For $`H<1T`$ the peak becomes broader and progressively shifts away from the tracks in the direction of the c-axis as $`H`$ decreases. The shift decreases with increasing $`T`$, as shown in figure 2, where the angle $`\mathrm{\Theta }_{\mathrm{max}}`$ of the maximum in $`M_i`$ is plotted as a function of $`H`$ for three temperatures. The inset of figure 2 shows a blow-up of the data of fig. 1 for $`H=0.4T`$ and $`T=60K`$. This curve exhibits the second main characteristic of the low field results, namely the existence of a plateau in $`M_i\left(\mathrm{\Theta }\right)`$ (we define $`\mathrm{\Theta }_{\mathrm{max}}`$ as the center of the plateau).
We first discuss the origin of the shift. Maximum pinning is expected to occur when the tracks are aligned with the direction that the vortices would have in the absence of pinning. For an anisotropic material, such a direction does not coincide with $`𝐇`$. If $`\mathrm{\Theta }_B`$ is the angle between the equilibrium induction field $`𝐁`$ (which represents the vortex direction) and the c-axis, minimization of the free energy for $`H_{c1}^c\ll H\ll H_{c2}^c`$ gives
$$\mathrm{sin}\left(\mathrm{\Theta }_B-\mathrm{\Theta }\right)\approx \frac{H_{c1}^c(1-\epsilon ^2)}{2H\mathrm{ln}\kappa }\frac{\mathrm{sin}\mathrm{\Theta }_B\mathrm{cos}\mathrm{\Theta }_B}{\epsilon \left(\mathrm{\Theta }_B\right)}\mathrm{ln}\left(\frac{H_{c2}(\mathrm{\Theta }_B)}{B}\right)$$
(1)
where $`H_{c2}(\mathrm{\Theta }_B)=H_{c2}^c/\epsilon (\mathrm{\Theta }_B)`$. Here $`H_{c1}^c`$ and $`H_{c2}^c`$ are the lower and upper c-axis critical fields, $`\epsilon `$ is the anisotropy and $`\epsilon \left(\theta \right)=\left(\mathrm{cos}^2\theta +\epsilon ^2\mathrm{sin}^2\theta \right)^{1/2}`$. For $`\epsilon <1`$ vortices tilt towards the ab plane. When $`\mathrm{\Theta }=\mathrm{\Theta }_D`$ we have $`\mathrm{\Theta }_B>\mathrm{\Theta }_D`$ and the optimum pinning situation is not satisfied. Instead, maximum $`M_i`$ occurs at the vortex-track alignment condition $`\mathrm{\Theta }_B=\mathrm{\Theta }_D`$. This corresponds to an applied field angle $`\mathrm{\Theta }_{\mathrm{max}}<\mathrm{\Theta }_D`$ that can be calculated from Eq. 1 by setting $`\mathrm{\Theta }_B=\mathrm{\Theta }_D\approx 32\mathrm{°}`$. (In this picture the peak cannot occur at $`\mathrm{\Theta }<0`$, thus the negative values of $`\mathrm{\Theta }_{\mathrm{max}}`$ given by Eq. 1 at low $`H`$ are unphysical, and $`\mathrm{\Theta }_{\mathrm{max}}\to 0`$ as $`H\to 0`$).
The solid lines in fig. 2 are fits to Eq. 1 with fixed parameters $`\epsilon =1/7`$ and $`H_{c2}^c(T)=1.6T/K\times (T_c-T)`$ (the fits are not very sensitive to either of them). Using $`H_{c1}^c(T)/2\mathrm{ln}\kappa =\mathrm{\Phi }_0/8\pi \lambda _{ab}^2(T)`$ and $`\lambda _{ab}^2(T)\approx \lambda _{ab}^2(0)(1-T/T_c)^{-1}`$, we obtain a good fit to the data as a function of field and temperature by setting only one free parameter, $`\lambda _{ab}(0)\approx 500\AA `$. Although this value is significantly smaller than the accepted value ($`1400\AA `$), we nevertheless consider that this simple model captures the basic physics. We note that Zhukov et al. have reported lock-in angles for twin boundaries in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> crystals that imply an $`H_{c1}^c`$ about 5 times larger than the usual values, a result suggestively similar to our case.
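Setting $`\mathrm{\Theta }_B=\mathrm{\Theta }_D`$ in Eq. 1 gives $`\mathrm{\Theta }_{\mathrm{max}}`$ in closed form, $`\mathrm{\Theta }_{\mathrm{max}}=\mathrm{\Theta }_D-\mathrm{arcsin}(\mathrm{RHS})`$. The sketch below evaluates this with the fit parameters quoted above (our code, not the authors'; taking $`B\approx \mu _0H`$ numerically, with fields in tesla, is an assumption of the sketch):

```python
import numpy as np

def theta_max(H, T, Tc=92.0, eps=1/7.0, lam0=500e-10, theta_D=32.0):
    """Theta_max (degrees) from Eq. 1 with Theta_B = Theta_D; H in tesla, T in K.
    Parameter values follow the fits in the text; we approximate B ~ mu0*H."""
    Phi0 = 2.07e-15                                  # flux quantum, T m^2
    lam2 = lam0**2 / (1.0 - T / Tc)                  # lambda_ab(T)^2, m^2
    Hc1_over_2lnk = Phi0 / (8.0 * np.pi * lam2)      # H_c1^c / (2 ln kappa), T
    td = np.radians(theta_D)
    eps_td = np.sqrt(np.cos(td)**2 + eps**2 * np.sin(td)**2)
    Hc2 = 1.6 * (Tc - T) / eps_td                    # H_c2(Theta_D), T
    rhs = (Hc1_over_2lnk * (1.0 - eps**2) / H) \
          * np.sin(td) * np.cos(td) / eps_td * np.log(Hc2 / H)
    return theta_D - np.degrees(np.arcsin(np.clip(rhs, -1.0, 1.0)))
```

With these parameters, $`T=60`$ K and $`H=0.4`$ T give $`\mathrm{\Theta }_{\mathrm{max}}\approx 28\mathrm{°}`$; at low enough $`H`$ the expression goes negative, the regime the text discards as unphysical.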
We now return to the plateau seen in the inset of figure 2. The constancy of $`M_i\left(\mathrm{\Theta }\right)`$ indicates that the pinning energy remains constant and equal to its value at the alignment condition $`\mathrm{\Theta }_B=\mathrm{\Theta }_D`$. This behavior is a fingerprint of the lock-in phase. The extension of the plateau in the ($`H`$, $`\mathrm{\Theta }`$) plane at $`60K`$ (determined with accuracy $`1\mathrm{°}`$) is shown as bars in Fig. 2. Its width decreases approximately as $`H^{-1}`$, as expected for $`\phi _L`$, and for $`H>1T`$ it becomes undetectable with our resolution. The decrease of $`M_i`$ at the edges of the plateau is sharp, a result consistent with the appearance of kinks, which not only reduce $`J_c`$ but also produce faster relaxation.
When $`\left|\mathrm{\Theta }_B-\mathrm{\Theta }_D\right|>\phi _L`$ vortices form staircases. Two questions arise here. First, what is the direction of the kinks that connect the pinned portions of the vortices? Second, do we observe evidence for a trapping angle $`\phi _T`$?
For $`\mathrm{\Theta }>\mathrm{\Theta }_{\mathrm{max}}`$, there is a wide angular range in Fig. 1 in which $`M_i\left(+\mathrm{\Theta }\right)>M_i\left(-\mathrm{\Theta }\right)`$ for all $`H`$, i.e., pinning is stronger when $`H`$ is closer to the tracks than in the crystallographically equivalent configuration on the opposite side. This asymmetry demonstrates that at the angle $`+\mathrm{\Theta }`$ vortices form staircases, with segments trapped in the tracks. For $`\mathrm{\Theta }<\mathrm{\Theta }_{\mathrm{max}}`$ we again observe asymmetry: $`M_i\left(\mathrm{\Theta }\right)`$ crosses $`\mathrm{\Theta }=0`$ with positive slope, indicating that pinning decreases as $`H`$ is tilted away from the tracks. We can conclude that staircases extend at least beyond the c-axis into the $`\mathrm{\Theta }<0`$ region.
The angle $`\theta _k`$ between the kinks and the c-axis can be calculated by minimization of the free energy. For simplicity, we consider the case $`H\gg H_{c1}^c`$, where $`\mathrm{\Theta }_B=\mathrm{\Theta }`$ and the problem reduces to calculating the energy of one vortex, as the other terms in the free energy are the same for all configurations. If $`L_p`$ is the length of a pinned segment, and $`L_k`$ the length of the kink (see sketch in figure 4), the energy is $`E\approx L_pϵ_p\left(\mathrm{\Theta }_D\right)+L_kϵ_f\left(\theta _k\right)`$, where $`ϵ_f\left(\theta _k\right)\approx \epsilon _0\epsilon \left(\theta _k\right)\left[\mathrm{ln}\kappa +0.5\right]`$ and $`ϵ_p\left(\mathrm{\Theta }_D\right)\approx \epsilon _0\epsilon \left(\mathrm{\Theta }_D\right)\left[\mathrm{ln}\kappa +\alpha _t\right]`$ are the line energies for free and pinned vortices respectively, $`\epsilon _0`$ is the vortex energy scale and $`\alpha _t<0.5`$ parametrizes the core pinning energy due to the tracks (smaller $`\alpha _t`$ implies stronger pinning). Minimizing $`E`$ with respect to $`\theta _k`$ we obtain two orientations, $`\theta _k^{-}`$ for $`\mathrm{\Theta }<\mathrm{\Theta }_D`$ and $`\theta _k^+`$ for $`\mathrm{\Theta }>\mathrm{\Theta }_D`$.
As the tracks are inclined, $`\left|\theta _k^{-}\right|`$ and $`\left|\theta _k^+\right|`$ are different. However, those angles are independent of $`\mathrm{\Theta }`$. As $`\left|\mathrm{\Theta }-\mathrm{\Theta }_D\right|`$ increases, $`\theta _k^\pm `$ remain constant while $`L_p`$ decreases and the number of kinks increases; consequently the pinning energy decreases. This accounts for an $`M_i`$ that decreases as we move away from the tracks. In particular, for $`\mathrm{\Theta }=\theta _k^\pm `$ vortices become straight ($`L_p=0`$), thus $`\phi _T^\pm =\left|\theta _k^\pm -\mathrm{\Theta }_D\right|`$ are the trapping angles in both directions. In general $`\theta _k^\pm `$ must be obtained numerically, but for $`\epsilon \mathrm{tan}\theta _k\ll 1`$ and $`\epsilon \mathrm{tan}\mathrm{\Theta }_D\ll 1`$ we obtain
$$\mathrm{tan}\theta _k^\pm \simeq \mathrm{tan}\mathrm{\Theta }_D\pm \frac{1}{\epsilon }\sqrt{\frac{1-2\alpha _t}{\mathrm{ln}\kappa +0.5}}$$
(2)
Eq. 2 adequately describes the main features of the asymmetric region in Fig. 1, and for $`\mathrm{\Theta }_D=0`$ it coincides with the usual estimates of $`\phi _T`$.
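Eq. 2 is easy to evaluate numerically. In the sketch below the values of $`\alpha _t`$, $`\kappa `$ and $`\mathrm{\Theta }_D`$ are illustrative assumptions, not parameters determined in this work:

```python
# Evaluate Eq. 2 for the kink orientations and the implied trapping angles
# phi_T^± = |theta_k^± - Theta_D|.
import numpy as np

EPSILON = 1.0 / 7.0           # anisotropy parameter used in the text
THETA_D = np.radians(30.0)    # assumed track inclination (illustrative)
KAPPA = 50.0                  # assumed GL parameter
ALPHA_T = 0.3                 # assumed core-pinning parameter, alpha_t < 0.5

root = np.sqrt((1.0 - 2.0 * ALPHA_T) / (np.log(KAPPA) + 0.5))
theta_k_plus = np.arctan(np.tan(THETA_D) + root / EPSILON)
theta_k_minus = np.arctan(np.tan(THETA_D) - root / EPSILON)

# Trapping angles in both directions (degrees); |theta_k^-| != |theta_k^+|
# because the tracks are inclined.
phi_T_plus = np.degrees(abs(theta_k_plus - THETA_D))
phi_T_minus = np.degrees(abs(theta_k_minus - THETA_D))
print(np.degrees([theta_k_plus, theta_k_minus]), phi_T_plus, phi_T_minus)
```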
There is, however, an important missing ingredient in the standard description presented above, namely the existence of twins and Cu-O layers, which are additional sources of correlated pinning. This raises the possibility that vortices may simultaneously adjust to more than one of them, forming different types of staircases.
Pinning by twin boundaries is visible in figure 1 as an additional peak centered at the c-axis for $`H=2T`$ and $`T=60K`$. A blow-up of that peak is shown in the inset. We observe this maximum for $`H\geq 1T`$. The width of this peak, $`\sim 5^{\circ }`$, is in the typical range of reported trapping angles for twins. On the other hand, the fact that the peak is superimposed on an inclined background implies that vortices are also trapped by the tracks. Thus, vortices in this angular range contain segments both in the tracks and in the twins. These two types of segments are enough to build up the staircases for $`\mathrm{\Theta }>0`$, but for $`\mathrm{\Theta }<0`$ a third group of inclined kinks with $`\theta _k<0`$ must exist in order to have vortices parallel to $`𝐇`$.
Another fact to be considered is that there is an angle $`\mathrm{\Theta }_{sym}`$ (which is only weakly dependent on $`H`$) beyond which $`M_i\left(\mathrm{\Theta }\right)`$ recovers the symmetry with respect to the c-axis. This is illustrated in Fig. 3, where the $`M_i`$ data for $`-\left|\mathrm{\Theta }\right|`$ were reflected with respect to the c-axis and superimposed on the results for $`+\left|\mathrm{\Theta }\right|`$.
One possible interpretation is that for $`\mathrm{\Theta }>\mathrm{\Theta }_{sym}`$ staircases disappear, i.e., that $`\mathrm{\Theta }_{sym}=\theta _k^+`$ and we are determining $`\phi _T^+=\mathrm{\Theta }_{sym}-\mathrm{\Theta }_D`$. However, this is inconsistent with our experimental results. Indeed, $`\phi _T^+`$ should decrease with $`T`$, and this decrease should be particularly strong above the depinning temperature $`T_{dp}\approx 40K`$ due to the reduction of the pinning energy by entropic effects. This expectation is in sharp contrast with the observed increase of $`\mathrm{\Theta }_{sym}`$ with $`T`$, which is shown in Figure 4 for $`H=2T`$. Thus, the interpretation of $`\mathrm{\Theta }_{sym}`$ as a measure of the trapping angle is ruled out. Moreover, if in a certain angular range vortices were not forming staircases, pinning could be described by scalar disorder, and at high fields $`M_i\left(\mathrm{\Theta }\right)`$ should then follow the anisotropy scaling law $`M_i(H,\mathrm{\Theta })=M_i\left(\epsilon \left(\mathrm{\Theta }\right)H\right)`$. Consistently, we do not observe such scaling in any angular range.
Our alternative interpretation is that, at large $`\mathrm{\Theta }`$, the kinks become trapped by the ab-planes. This idea has been used by Hardy et al. to explain that the $`J_c`$ at low $`T`$ in the very anisotropic Bi and Tl compounds with tracks at $`\mathrm{\Theta }_D=45^{\circ }`$ was the same for either $`\mathrm{\Theta }=+45^{\circ }`$ or $`\mathrm{\Theta }=-45^{\circ }`$. Our situation is different, as we are comparing two kinked configurations.
We first note that, according to Eq. 2, $`\theta _k^\pm `$ cannot be exactly $`90^{\circ }`$ for finite $`\epsilon `$, thus the intrinsic pinning must be incorporated into the model by assigning a lower energy to kinks in the ab-planes. Vortices may now form structures consisting of segments trapped in the columns connected by segments trapped in the ab-planes, or alternatively an inclined kink may transform into a staircase of smaller kinks connecting segments in the planes (see sketches in figure 4). We should now compare the energy of the new configurations with that containing kinks at angles $`\theta _k^\pm `$. This is equivalent to figuring out whether the kinks at $`\theta _k^\pm `$ lie within the trapping regime for the planes or not. The problem with this analysis is that, as $`\theta _k^\pm `$ are independent of $`\mathrm{\Theta }`$, one of the two possibilities (either inclined or trapped kinks) will be the most favorable for all $`\mathrm{\Theta }`$. Thus, this picture alone cannot explain the crossover from an asymmetric to a symmetric regime in $`M_i\left(\mathrm{\Theta }\right)`$.
The key additional concept in this scenario is the dispersion in the pinning energy. The angles $`\theta _k^\pm `$ depend on the pinning strength of the adjacent tracks ($`\alpha _t`$ in Eq. 2), thus dispersion in $`\alpha _t`$ implies dispersion in $`\theta _k^\pm `$. As $`\mathrm{\Theta }`$ increases, it becomes larger than the smaller $`\theta _k^\pm `$’s (that connect the weaker defects) and the corresponding kinks disappear. The vortices involved, however, do not become straight, but remain trapped by stronger pins connected by longer kinks with larger $`\theta _k^\pm `$. This process goes on as $`\mathrm{\Theta }`$ grows: the weaker tracks progressively become ineffective as the “local” $`\theta _k`$ is exceeded, and the distribution of $`\theta _k^\pm `$ shifts towards the ab-planes. When a particular kink falls within the trapping angle of the planes, a switch to the pinned-kink structure occurs. In this picture, the gradual crossover to the symmetric regime takes place when most of the remaining kinks are pinned by the planes.
If kinks become locked, the total length of a vortex that is trapped inside columnar defects is the total length of a track, independent of $`\mathrm{\Theta }`$, and the total length of the kinks is proportional to $`\mathrm{tan}\left(\left|\mathrm{\Theta }\mp \mathrm{\Theta }_D\right|\right)`$ for field orientations $`\pm \mathrm{\Theta }`$ respectively. As $`\left|\mathrm{\Theta }\right|`$ grows, the relative difference between the line energy in both orientations decreases, an effect that is reinforced by the small line energy of the kinks in the ab-planes. If kinks are not locked but rather form staircases, taking into account that the trapping angle for the ab-planes is small ($`\sim 5^{\circ }`$), the same argument still applies to a good approximation. The temperature dependence of $`\mathrm{\Theta }_{sym}`$ is now easily explained by a faster decrease of the pinning of the ab-planes with $`T`$ as compared to the columnar defects. Additional evidence in support of our description comes from recent transport measurements in twinned YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> crystals, which show that in the liquid phase vortices remain correlated along the c-axis for all field orientations, suggesting that they are composed solely of segments in the twins and in the ab-planes.
In summary, we have shown that the combined effect of the three sources of correlated pinning must be taken into account to describe the vortex structure in samples with inclined columnar defects. We demonstrate that the lock-in phase exhibits an angle independent pinning strength, and show the decrease of the lock-in angle with field. Our results show that a variety of complex staircases are formed depending on the field orientation and strongly suggest that, at high temperatures, correlated structures dominate vortex pinning over random disorder in the whole angular range.
Work partially supported by ANPCyT, Argentina, PICT 97 No.01120. A.S. and G.N. are members of CONICET. We acknowledge useful discussions with S. Grigera, F. de la Cruz, E. Osquiguil and D. Niebieskikwiat.
Figure 1: Irreversible magnetization $`M_i`$ as a function of the applied field angle $`\mathrm{\Theta }`$ for several fields $`H`$, at temperatures (a) $`70K`$ and (b) $`60K`$. Inset: blow-up of the $`H=2T`$ data near the c-axis for $`T=60K`$ (the units are the same as those in the main figure).
Figure 2: Angle $`\mathrm{\Theta }_{max}`$ of the maximum in $`M_i(\mathrm{\Theta })`$ as a function of $`H`$ for three temperatures. The solid lines are fits to Eq. (1) (see text). Bars mark the width of the plateau. Inset: $`M_i(\mathrm{\Theta })`$ in the region of the plateau.
Figure 3: Irreversible magnetization $`M_i`$ versus field angle $`\mathrm{\Theta }`$ for three fields (curves are vertically displaced for clarity). Open symbols: data for $`\mathrm{\Theta }>0`$. Solid symbols: data for $`\mathrm{\Theta }<0`$, reflected with respect to the c-axis. Arrows indicate the angle $`\mathrm{\Theta }_{sym}`$ beyond which the behavior is symmetric. The procedure of reflection is sketched in the inset.
Figure 4: Temperature dependence of $`\mathrm{\Theta }_{sym}`$ (see Fig. 3). The solid line is a guide to the eye. The sketches show possible vortex staircases for $`\mathrm{\Theta }>\mathrm{\Theta }_D`$.
# Acknowledgements
I would like to thank Carl Brans and A. Wang for helpful discussions on the subject of this paper. Financial support of UERJ (FAPERJ) is gratefully acknowledged.
# Can galactic nuclei be non-axisymmetric? — The parameter space of power-law discs
## 1 Introduction
Current formation theories emphasize the roles of dissipation and galaxy interaction as major processes in shaping present day galaxies, but is there any stringent limit on the shapes and density profiles of galaxies from general conditions such as equilibrium and stability alone? In particular, are there stable triaxial equilibria with realistic radial density profiles? This has been an outstanding stellar dynamical question ever since Binney (1978) invoked triaxial equilibria to account for the observed flattened shape of elliptical galaxies and their lack of rotation. Theoretically triaxial equilibria exist at least for systems with a finite density core and a wealth of box orbits and tube orbits. This was demonstrated for general models by Schwarzschild (1979, 1982), and for models with separable potentials by Statler (1987). However, the traditional assumption of a finite core in every galaxy was challenged by recent observations of nuclei of nearby elliptical galaxies with the Hubble Space Telescope. It was found that giant ellipticals have a power-law surface density distribution $`\mu \propto r^{-\gamma }`$ with $`0<\gamma <0.3`$ near the center; the slope steepens to as much as 0.3–1.3 for small ellipticals, with the dividing line at $`M_B\approx -20`$ mag (Crane et al. 1993; Jaffe et al. 1994; Lauer et al. 1995; Carollo et al. 1997; Faber et al. 1997). These observations call for a re-examination of the existence of triaxial equilibria in a potential with a divergent force ($`F\to \mathrm{\infty }`$ for $`\gamma >0`$) at the center (Gerhard & Binney 1985; de Zeeuw & Carollo 1996). Whether triaxial models of this kind exist also becomes a key uncertainty in, e.g., interpreting the kinematic and photometric data of galactic nuclei, weighing their central black holes and reconstructing the formation history of these systems.
The conclusion that at least some strongly non-axisymmetric models with a steep central cusp are probably not in rigorous steady state is based on a handful of three-dimensional dynamical models. These have been built with various implementations of Schwarzschild’s method, in which individual orbits are populated so as to match the model density, and include three-dimensional scale-free models with logarithmic potentials (Schwarzschild 1993), and two non-scale-free models (Merritt & Fridman 1996). Unfortunately, the power of these few numerical experiments is limited when it comes to exploring the parameter space. It is not clear how to extrapolate results obtained for a few models to a general statement about the whole class of cusped triaxial potentials, because the meaning of a small mismatch in the reconstructed density, which is often of the order of one percent or less, has to be interpreted on a case-by-case basis. Kuijken (1993, hereafter K93) showed in his systematic study of two-dimensional non-axisymmetric models with a logarithmic potential that whether the numerically constructed model is in equilibrium depends sensitively on numerical details, including the resolution of the spatial grid for the mass model, the grid for orbital initial conditions, the number of orbits used, and the integration time for each orbit. Although it may be feasible with present-day computer technology to carry out a massive numerical search in the multi-dimensional parameter space (axis ratios, inner and outer density slopes with the possible addition of a central black hole and the tumbling speed of the potential), it is intrinsically difficult in this approach to pinpoint the exact origin of any mismatch between the orbits and the density model. For example it has not been well-understood why replacing box orbits in a cored potential by boxlets in a cusped potential upsets the equilibria.
In this paper we present a new approach to study self-consistency of a general non-axisymmetric model. We restrict ourselves to the two-dimensional case of scale-free discs, and show that a non-trivial and necessary condition for the existence of a self-consistent elongated disc is that the angular speed of the regular boxlet and tube orbits when they cross the major axis should be consistent with the local curvature of the density distribution. It is well known that boxlets are less useful than box orbits when it comes to fit the model density near the major axis because the boxlets have their density maxima at the turning points rather than on the major axis (e.g., Pfenniger & de Zeeuw 1989; Schwarzschild 1993). K93 made the interesting observation that a box orbit has only one ‘corner’ per quadrant, while a boxlet orbit has two or more correlated ‘corners’ per quadrant, which makes them less flexible in fitting the model density. He also suggested that the lack of self-consistency in elliptical models is likely due to the spiky angular distribution of boxlets rather than to the lack of flattened boxlets. Syer & Zhao (1998, hereafter SZ98) suggested that the self-consistency is first broken down near the symmetry axes of the model. Unfortunately, the result of SZ98 is limited to a special subset of the separable non-axisymmetric potentials introduced by Sridhar & Touma (1997, hereafter ST97).
The structure of this paper is as follows. §2 gives a rigorous formulation of the problem for scale-free discs. §3 illustrates the requirement on the curvature of the model with specific orbits in elliptic disc potentials. §4 gives the results of fitting the curvature, and §5 examines the assumptions in the model, and discusses generalizations of the method to three-dimensional and non-scale-free systems, as well as the implications for barred galaxies.
## 2 Scale-free non-axisymmetric discs
### 2.1 Formulation
Consider a general two-dimensional scale-free non-axisymmetric disc potential
$$\varphi (r,\theta )\propto r^\alpha p(\theta ),\qquad -1\leq \alpha \leq 1,$$
(1)
where $`p(\theta )`$ defines the angular shape, with $`\theta =0`$ and $`\pi /2`$ the directions of the minor and major axis, respectively. We consider surface densities of the form
$$\mu (r,\theta )\propto r^{-\gamma }s(\theta ),\qquad 0\leq \gamma <2,$$
(2)
where $`\gamma `$ is the cusp strength and $`s(\theta )`$ the angular shape of the surface density. In self-consistent systems $`s(\theta )`$ and $`p(\theta )`$ are related, and $`\gamma =1-\alpha `$.
The regular orbits in this potential can be grouped according to their shapes (bananas, fishes, tubes, etc.; see Miralda–Escudé & Schwarzschild 1989; K93), irrespective of their sizes, into self-similar families. Each family can be characterized by a dimensionless second integral, say, $`I`$, and the whole family can then be built by rescaling one reference orbit with a trajectory described by the polar coordinates $`(r_I(t),\theta _I(t))`$. The weights among the self-similar ‘cousin’ orbits should be prescribed in a scale-free fashion such that each family produces a $`r^{\alpha -1}`$ power-law density distribution, as for the model density. The different families should be weighted according to a to-be-determined positive function $`w(I)\geq 0`$ so as to reproduce the angular part of the model density.
A clear discussion of the algorithm for constructing self-similar non-axisymmetric models was given by Richstone (1980) and Schwarzschild (1993) for three-dimensional models, and by K93 for two-dimensional models. These previous studies were focused exclusively on the logarithmic potential, and it appears that no one has applied the algorithm to other non-axisymmetric power-law potentials, which are most relevant for galactic nuclei. Following K93, we first divide the angular coordinate into $`n`$ slices with an interval $`\mathrm{\Delta }\theta =2\pi /n`$, and $`n`$ approaching infinity. We then compare the amount of mass a reference orbit deposits in each angular sector to the amount of mass required by the density model in the same sector within the radius of the orbit. If a reference orbit $`(r_I(t),\theta _I(t))`$ spends a fraction $`\mathrm{\Delta }(t/T)_{I,j}`$ of the integration time $`T`$ in the $`j`$-th sector, then it deposits an amount proportional to $`w(I)\mathrm{\Delta }(t/T)_{I,j}`$. By comparison, the amount of mass prescribed by the density model in the same angular sector within the radius $`r_I(t)`$ is,
$$\mathrm{\Delta }m_{I,j}=\mathrm{\Delta }\theta \int _0^{r_I(t)}\mu (r_1,\theta )\,r_1\,\mathrm{d}r_1\propto r_I^2(t)\,\mu (r_I(t),\theta )\,\mathrm{\Delta }\theta .$$

(3, 4)
To reproduce the model density using regular orbits alone we require that
$$\int \mathrm{d}I\,w(I)\left\langle \frac{\mathrm{\Delta }(t/T)_{I,j}}{\mathrm{\Delta }m_{I,j}}\right\rangle =1,\qquad j=1,\dots ,n$$
(5)
where we have taken time averages (as indicated by the angle brackets) over all the times when an orbit passes through the same angle, and have summed up all regular families by integrating over the second integral $`I`$.
It is straightforward to verify this result for the scale-free logarithmic disc of K93. In this special case $`\mu \propto r^{-1}s(\theta )`$, so $`\mathrm{\Delta }m_{I,j}\propto r_I(t)s(\theta )\mathrm{\Delta }\theta `$. Eq. (5) can then be rewritten in a form similar to K93, $`\int \mathrm{d}I\,w(I)\left\langle \mathrm{\Delta }(t/T)_{I,j}/r_I(t)\right\rangle \propto s(\theta )\mathrm{\Delta }\theta `$.
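The bookkeeping behind eq. (5) can be prototyped in a few lines. The sketch below is schematic, not the construction actually used here: the orbit samples and the angular density $`s(\theta )`$ are assumed inputs, and all names are ours; only the accounting of time fractions and model masses per sector is illustrated.

```python
# Sector accounting for the self-consistency condition, Eq. 5.
import numpy as np

N_SECTORS = 64     # angular slices (illustrative)
GAMMA = 0.5        # assumed cusp strength (illustrative)

def sector_ratios(theta_t, r_t, dt, s_of_theta):
    """Return Delta(t/T)_j / Delta m_j for one reference orbit.

    theta_t, r_t : trajectory samples at time step dt (from some integrator).
    s_of_theta   : angular part of the model surface density.
    """
    edges = np.linspace(0.0, 2.0 * np.pi, N_SECTORS + 1)
    j = np.digitize(np.mod(theta_t, 2.0 * np.pi), edges) - 1
    T = dt * len(theta_t)
    ratios = np.zeros(N_SECTORS)
    for sector in range(N_SECTORS):
        mask = (j == sector)
        if not mask.any():
            continue
        theta_c = 0.5 * (edges[sector] + edges[sector + 1])
        # Delta m ~ r^2 mu(r, theta) Delta theta with mu ~ r^(-gamma) s(theta);
        # average over all passages of the orbit through this sector.
        dm = r_t[mask] ** (2.0 - GAMMA) * s_of_theta(theta_c) * (2 * np.pi / N_SECTORS)
        ratios[sector] = np.mean((dt / T) / dm)
    return ratios

# Self-consistency then asks for weights w(I) >= 0 such that the weighted sum
# of sector_ratios over all families equals 1 in every sector, e.g. via a
# non-negative least-squares solve (scipy.optimize.nnls).
```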
### 2.2 Curvature at the major axis
For the time being we assume that the angular momentum of a regular orbit is non-zero everywhere along the orbit (which is clearly not the case near the turning point of a boxlet orbit), so that $`\dot{\theta }_I\ne 0`$. Taking the limit $`\mathrm{\Delta }\theta \to 0`$, we have
$$\frac{\mathrm{\Delta }t}{\mathrm{\Delta }\theta }\to \dot{\theta }_I(t)^{-1},$$
(6)
and upon substitution in eqs. (5) and (3), we obtain
$$\int \mathrm{d}I\,w(I)\left\langle \mathrm{\Gamma }\right\rangle =\mathrm{const},\qquad \mathrm{\Gamma }\equiv \frac{1}{\mu (r,\theta )J},$$
(7)
where the weights $`w(I)`$ are non-negative and $`J\equiv r^2\dot{\theta }`$ is the angular momentum.
To examine how the orbits fit the curvature of the density near the major axis, we take the double derivative of eq. (7) with respect to $`\theta `$, and simplify the result using the equations of motion
$$\ddot{r}=\frac{J^2}{r^3}-\frac{\partial \varphi }{\partial r},\qquad \dot{J}=-\frac{\partial \varphi }{\partial \theta }.$$
(8)
The angular momentum $`J`$ is nearly constant in the vicinity of the major axis, where the torque $`\partial \varphi /\partial \theta \to 0`$, because the force is radial at the symmetry axes. Evaluating the angular derivatives at the major axis of the potential ($`\theta =\pi /2`$), we obtain the following simple expression (see Appendix A for a derivation):
$$\int \mathrm{d}I\,w(I)\left[q_\varphi ^{-2}-(1+\gamma -\gamma \lambda )\right]\left\langle \frac{\mathrm{\Gamma }}{K_\theta }\right\rangle =0,$$
(9)
where
$$\lambda \equiv \left[(1+\gamma )K_r+q_\mu ^{-2}K_\theta \right]_{\theta =\frac{\pi }{2}}>0,$$
(10)
and we have written $`K_r\equiv \frac{\dot{r}^2}{r\,\partial _r\varphi }`$ and $`K_\theta \equiv \frac{r^2\dot{\theta }^2}{r\,\partial _r\varphi }`$ as the radial and tangential parts of the kinetic energy scaled by the virial $`r\,\partial _r\varphi `$. The quantities $`q_\varphi ^{-2}`$ and $`q_\mu ^{-2}`$, defined as
$`q_X^{-2}\equiv 1+{\displaystyle \frac{\partial _\theta ^2X}{r\,\partial _rX}}|_{\theta =\frac{\pi }{2}},\qquad X=[\varphi (r,\theta ),\mu (r,\theta )]`$ (11)
describe the curvatures of the equal potential and equal density contours at the major axis (alternative expressions in terms of double derivatives of $`s(\theta )`$ and $`p(\theta )`$ are given by eqs 24 and 25). Here $`q_{[\varphi ,\mu ]}`$ are the axis ratios of the best-fitting ellipses to the potential, respectively density, contours near the major axis. A similar result can be derived for any angle $`\theta `$, but the expression is much simpler at the symmetry axes because all first derivatives vanish: $`\partial _\theta \mu (r,\theta )=\partial _\theta \varphi (r,\theta )=\partial _\theta J=0`$; the result on the minor axis turns out to be not very useful in providing constraints to models. We remark that, e.g., circular orbits in an axisymmetric system trivially satisfy eq. (9) because $`q_\mu =q_\varphi =\lambda =1`$.
## 3 Orbits in elongated discs
Consider scale-free discs with surface density of the form:
$$\mu (r,\theta )\propto r^{-\gamma }\left(\mathrm{cos}^n\theta +q^n\mathrm{sin}^n\theta \right)^{-\frac{\gamma }{n}}.$$
(12)
The gravitational potential of these discs is not exactly elliptical, and is formally given by eq. (1). It can be computed with harmonic expansions as in SZ98. When $`n=2`$, the surface density is stratified on ellipses of axis ratio $`q`$.
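As a quick consistency check of the curvature definition (eq. 11) for the $`n=2`$ disc, a finite-difference evaluation at the major axis recovers $`q_\mu =q`$, in agreement with the statement above; the parameter values are arbitrary:

```python
# Finite-difference check that the curvature definition q_mu^{-2} =
# 1 + d2mu/dtheta2 / (r dmu/dr) at theta = pi/2 returns the input axis ratio.
import numpy as np

GAMMA, Q, N = 0.8, 0.7, 2   # illustrative cusp strength, axis ratio, index

def mu(r, theta):
    return r ** (-GAMMA) * (np.cos(theta) ** N + Q ** N * np.sin(theta) ** N) ** (-GAMMA / N)

r0, th0, h = 1.0, np.pi / 2.0, 1e-4
d2_theta = (mu(r0, th0 + h) - 2 * mu(r0, th0) + mu(r0, th0 - h)) / h ** 2
d_r = (mu(r0 + h, th0) - mu(r0 - h, th0)) / (2 * h)
q_mu = (1.0 + d2_theta / (r0 * d_r)) ** (-0.5)
print(q_mu)   # ~0.7, i.e. q_mu equals the input axis ratio q
```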
Figures 1 and 2 show the density distribution of the dominant orbits in two such discs. The heavy solid line in each panel indicates the shape of the orbit after all the scaled copy orbits and their reflection images are summed up in a scale-free fashion; for clarity only the right hand part is drawn. We find that near the major axis the density contributions of all tube orbits, and nearly all boxlet orbits, curve in the opposite sense to the model density; the exceptions are the fish orbits in some potentials (Miralda–Escudé & Schwarzschild 1989; K93), which sometimes support the curvature of the model density.
The above result suggests that a large fraction of scale-free elliptic disc potentials cannot be made self-consistent near the major axis. While the tube orbits are always anti-aligned with the potential (e.g., de Zeeuw, Hunter & Schwarzschild 1987), once the ‘backbone’ of a cored potential, namely the set of box orbits, is replaced by the boxlets and stochastic orbits of a cusped model, there may be no orbits left to support the potential near the major axis. Since a general property of all orbits is that the amplitude of the orbital angular momentum always reaches a local maximum on the major axis, an orbit tends to spend more time away from the major axis than on the axis. As a result, the curvature of orbits is often opposite to the real density even after all ‘cousin’ orbits are added, which limits the range of non-axisymmetric models with cusps that can be self-consistent.
## 4 Fitting the curvature at the major axis
We now present a more quantitative analysis of the curvature of the regular orbits. A simple and necessary condition for self-consistency near the major axis of the model follows from eq. (9): at least some orbits should have a negative $`\left[q_\varphi ^{-2}-(1+\gamma -\gamma \lambda )\right]`$ and others a positive value in order to place the average at zero. This places a range on the possible curvature (or axis ratio) of the potential, as follows:
$$\mathrm{max}\left[1,\,1+\gamma -\gamma \lambda _{\mathrm{max}}\right]\leq q_\varphi ^{-2}\leq 1+\gamma -\gamma \lambda _{\mathrm{min}}.$$
(13)
Since $`\mathrm{\infty }>\lambda _{\mathrm{max}}\geq \lambda \geq \lambda _{\mathrm{min}}>0`$ by definition (cf eq. (10)), a clean non-trivial limit is that
$$q_\varphi >\frac{1}{\sqrt{1+\gamma }}.$$
(14)
This condition turns out to be quite powerful. It implies that discs with shallow density cusps (small $`\gamma `$) cannot have a highly elliptical potential because they would violate the requirement of the curvature. The curvature condition is trivially satisfied in the Keplerian regime, since the potential becomes spherical ($`q_\varphi \to 1`$). This would be the case near a central black hole, or at very large distance of a model with a finite mass.
Our analysis nicely confirms the result of the semi-analytical study of SZ98 that all ST97 discs are non-self-consistent. Eq. (14) is always violated for these discs, because
$$q_\varphi =\frac{1}{\sqrt{3-\gamma }}<\frac{1}{\sqrt{1+\gamma }}\qquad \text{for ST97 discs with }\gamma <1\text{;}$$
(15)
ST97 models with $`1<\gamma <2`$ have unphysical negative density regions.
More interesting is the application of the curvature criterion (14) to the entire class of elongated discs. Self-consistent elliptic discs ($`n=2`$ in eq. (12)) must have strong enough cusps such that
$$\gamma >q_\varphi ^{-2}-1\simeq 2(1-q_\mu ),$$
(16)
where $`q_\mu =q`$ is the axis ratio of the surface density.
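Both statements are easy to verify directly; the sketch below evaluates the two sides of eq. (14) for ST97 discs and the forbidden-zone boundary $`\gamma \simeq 2(1-q)`$ of eq. (16), as reconstructed above:

```python
# Check the curvature bound (Eq. 14) for ST97 discs and tabulate the
# elliptic-disc boundary of Eq. 16.
import numpy as np

gamma = np.linspace(0.01, 0.99, 5)
q_phi_st97 = 1.0 / np.sqrt(3.0 - gamma)   # Eq. 15
bound = 1.0 / np.sqrt(1.0 + gamma)        # Eq. 14
print(np.all(q_phi_st97 < bound))         # True: every ST97 disc is excluded

q = np.linspace(0.0, 1.0, 6)
print(2.0 * (1.0 - q))   # minimum cusp strength for a self-consistent elliptic disc
```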
Figure 3 shows that about 50% of the parameter space in the cusp strength vs axis-ratio plane of elliptic discs can be simply ruled out on the basis of examining the curvature at the major axis. Figure 4 shows a similar diagram, but for the potential of elliptic discs and the ST97 models. When applied to the specific case of the $`\gamma =1`$ elliptic disc, the lower limit on the allowed axis ratio provided by eq. (14) is in harmony with the numerical estimates of K93.
Perhaps unexpectedly, our criterion shows that non-axisymmetric discs with a shallow cusp are easier to rule out than those with strong cusps, while the opposite has been suggested for three-dimensional models, based on numerical experiments with two classes of potentials, one with a finite force at the center ($`\gamma =0`$ in projection), the other with the force diverging as $`r^{-1}`$ ($`\gamma =1`$) (Merritt & Fridman 1996). These two opposing suggestions here may well reflect the still incomplete coverage of parameter spaces of both approaches: we are restricted to 2D scale-free discs, but we gain better coverage of the range of cusp slope $`\gamma `$ and ellipticity $`1-q`$ owing to our analytical approach.
## 5 Discussion and conclusions
We have shown that many scale-free elliptic discs cannot be self-consistent, and we have given specific criteria for picking out non-self-consistent models. The major axis is a local maximum in terms of the angular momentum of an orbit, which often translates to a local minimum in terms of the fraction of time spent by regular boxlets and tube orbits. This is opposite to that required by the surface density distribution. A clean result from this analysis is shown by the forbidden zone in Fig. 3, which implies that galactic nuclear discs with a shallow density profile ($`0\leq \gamma \leq 0.3`$) are necessarily nearly axisymmetric.
### 5.1 Hidden assumptions
Our results apply only to orbits which cross the disc major axis with a finite angular momentum $`J\equiv r^2\dot{\theta }`$, so that $`\dot{\theta }^{-1}`$ and its derivatives are well-defined at $`\theta =\pi /2`$. This condition clearly breaks down for the axial orbits and/or the stochastic orbits. The two are perhaps the same since the axial orbits are destabilized by the divergent force at the center, and become stochastic.
Stochastic orbits make our method problematic because they could reach $`J=0`$ virtually anywhere. And it is quite inevitable that they will be populated during galaxy formation. However, the way they are populated must be restricted in an equilibrium model because stochastic orbits are slowly-evolving orbits. The only sure recipe to mix stochastic orbits of the same energy ($`E_0`$) together into a time-independent building block (a super-orbit) is with a distribution function $`\delta (E-E_0)`$ (Zhao 1996); this distribution mixes in the regular orbits of the same energy $`E_0`$ as a side effect. In our scale-free models these super-orbits form one family, with the relative weights of all members related by a simple scaling (§2). Fortunately for the present problem, the spatial distribution of this super-orbit family is always as round as the equal-potential contours. Since the equal-density contours are generally more elongated, including the super-orbits does not relax our constraints on curvature. (It is possible to modify the shape of a super-orbit by subtracting the densities of regular orbits from it, always keeping the density everywhere non-negative. It is not clear whether the resulting component, a nearly spherical distribution with many worm-holes, will be more elongated than the density model. Even if it were to do so, we suspect that it would not relax the constraint on the intrinsic shape of the model unless this single component is also very heavily populated during galaxy formation.) So in summary, our result remains valid if stochastic and/or axial orbits are populated with a time-independent distribution.
Can a small modification near the major axis of the density model bring the model to self-consistency? One possibility is to change the shape of the equal-density contour near the major axis by a significant amount (without violating the Poisson equation and the positivity of the density) such that it is locally rounder than the equal-potential contour ($`q_\mu >q_\varphi `$). It is conceivable that by heavily populating the super-orbit component, which is stratified on a set of mildly elongated equal-potential contours, one might counteract the wrong curvature from regular orbits at the major axis. Whether self-consistency can be restored this way also depends on the maximum fraction of mass which can be allocated to the super-orbit component.
### 5.2 Expanding the forbidden territory
Can self-consistent scale-free discs be ruled out on the right-hand side of the ‘forbidden zone’ in Figure 3? Quite possibly: if regular orbits always have a finite (rescaled) angular momentum $`h`$ when crossing the major axis, and orbits with less angular momentum are stochastic, then the minimum angular momentum sets up a barrier to prevent any regular orbit from falling to the center or touching the zero-velocity curve on the major axis. Here $`h`$ is defined by $`0<h\equiv J_{\mathrm{min}}^2/J_E^2\leq 1`$, where $`J_E`$ is the maximum angular momentum allowed for an orbit of energy $`E`$. There should be a gap in phase space filled by stochastic orbits which separates the unstable periodic axial orbit from the regular boxlets. Such a stochastic gap is typically seen in the start space on the zero-velocity surface of three-dimensional models, and in surfaces of section along the major axis of two-dimensional models (e.g., fig. 5 of Schwarzschild 1993). The minimum angular momentum sets up a dynamical boundary to $`r`$, $`r\,\partial _r\varphi `$, $`\dot{r}`$ and $`\lambda `$ (cf eq. (10)) at the upper and lower ends. The upper limit on $`\lambda `$ typically helps little to tighten the constraints, but the finite lower limit of $`\lambda `$, namely $`\lambda _{\mathrm{min}}(h)`$ as a function of $`h`$, propagates to a more stringent upper limit on the flattening (cf eq. (13)) and pushes the limit of the forbidden region further to the right in Figure 3. The parameter space to the left of the dashed line can be ruled out as long as the stochastic gap is wide enough ($`h>0.5`$). Typically only fish orbits or higher-order resonant orbits can have a small $`h`$, and we suspect that it is difficult to reach self-consistency for much of the region to the right of the forbidden zone in Fig. 3 without heavily populating the higher-order resonances. This hypothesis is also supported by the range of non-self-consistent models of K93 (indicated by the crosses), which extends to our $`h=0.5`$ line.
### 5.3 Constraint from the density contrast
The density ratio of the minor vs. major axis should also set limits on the ellipticity of self-consistent discs. As shown by SZ98, if
$$C\equiv \left[\frac{\mu (r,0)}{\mu (r,\frac{\pi }{2})}\right]\left[\frac{\varphi (r,0)}{\varphi (r,\frac{\pi }{2})}\right]^{\frac{\gamma }{\alpha }}\simeq \left(\frac{1-ϵ_\mu }{1-ϵ_\varphi }\right)^\gamma ,$$
(17)
is defined as a measure of the model density contrast between the minor and major axes, where $`1-ϵ_\mu `$ and $`1-ϵ_\varphi `$ are the minor-to-major axis ratios of the equal density and equal potential contours respectively, then $`C`$ must have a non-trivial lower and upper bound,
$$C_{min}\leq C\leq C_{max},$$
(18)
set by the shape of orbits in the potential. A model constructed by populating only the thinnest banana orbit in a potential cannot be flatter than a certain value because of the fact that even the thinnest banana orbits spend a fair amount of time near the minor axis. Populating thick boxlet orbits, higher-order resonant orbits and tube orbits tends to make the shape rounder as these orbits come more frequently to the minor axis than the thin banana orbits. Figure 3 also shows the curve with $`C=0.707\simeq 1/\sqrt{2}`$ in the $`q`$ vs. $`\gamma `$ plane. This curve is interesting for reference as it corresponds to a configuration where only the thinnest banana orbits in a ST97 potential are populated. In contrast a $`C=1`$ curve (which means $`ϵ_\mu =ϵ_\varphi `$) would correspond to a configuration where only the $`f(E)`$ super-orbits are populated. The realistic configurations are likely in between these extremes. Combined with the criteria from the curvature, Figure 3 suggests that about $`2/3`$ of the parameter space of cusped elliptic discs is ruled out.
### 5.4 Implications for nuclear discs and bars
Our curvature constraint on the existence of self-consistent scale-free elliptic discs can be applied to the properties of observed stellar bars and nuclear discs. The dynamics of these highly flattened systems is essentially two-dimensional, with the small vertical oscillation completely decoupled from the motion in the plane. In the absence of a curvature criterion for three-dimensional models, it is premature to discuss the parameter space for elliptical galaxies.
Tumbling stellar bars have scales set by the corotation radius. Numerical experiments show that regular boxlet or loop orbits have a central ‘hole’ of finite size compared to their apocenter. So one expects a region infinitely close to the center which no ‘large’ orbits ever visit. The dynamics of the very nucleus will be dominated by ‘small’ orbits in situ. Such a situation would not be possible for models with a finite core. We know of at least one set of strongly elongated bar models: the rotating Freeman (1966) elliptic discs, which can be built self-consistently for any axis ratio. Our curvature condition clearly does not apply to these models with an analytical core ($`\gamma =0`$) as the finite force at the center stabilises axial orbits and box orbits. But it can be applied with confidence to the central regions of cusped elliptical nuclear bars. And our results show that these cannot be strongly elongated; their axis ratio in projected light should satisfy the major axis curvature condition eq. (14).
Our curvature constraint on elongation comes from the general property that the orbital angular momentum peaks near the major axis, so that the orbital density generally has a local minimum on this axis, whereas the model density is largest there. We therefore expect it to hold to some extent for general discs, as long as a genuine cusp ($`\gamma >0`$) in the central surface density creates a divergent force at the center and destabilises any box orbit. An outer truncation of the mass distribution and the tumbling motion of the bar can greatly change the potential at large radius, but at very small radii the model reduces to the static self-gravitating scale-free case. This happens at radii well inside the inner Lindblad resonance (ILR) of any rotating potential (but well beyond the sphere of influence of any small central black hole, if one exists in bars), such that the centrifugal and Coriolis forces are negligible compared to the self-gravity of the cusp.
Measurements of nuclear cusp slopes for galaxies with nuclear bars are as yet scarce. The one object for which the light distribution has been measured with HST resolution, NGC 5121, shows a steep light profile with $`\gamma =(0.75\pm 0.05)`$ and a projected axis ratio $`(0.67\pm 0.05)`$ (Carollo & Stiavelli 1998). This data point, as shown in Fig. 3, is safely outside the forbidden zone, with or without correcting for the inclination. Imaging of other nuclear bars is needed to establish whether there is a zone of avoidance for the shapes of cusped galaxies.
It is a pleasure to acknowledge helpful discussions with Vincent Icke, Konrad Kuijken, Doug Richstone, Massimo Stiavelli, Scott Tremaine, and Dave Syer, and thoughtful suggestions by an anonymous referee which helped to improve the presentation of our results.
## Appendix A Derivation of the double derivative of $`\mathrm{\Gamma }`$
The lengthy calculation of the double derivative of $`\mathrm{\Gamma }`$ with respect to $`\theta `$ is greatly simplified near the major axis ($`\theta =\frac{\pi }{2}`$), where we can safely ignore any first or third derivatives of an even function with respect to time $`t`$ or angle $`\theta `$, since these tend to zero.
First we decompose the double derivative of $`\mathrm{\Gamma }`$ (cf. eq. 7) at $`\theta =\frac{\pi }{2}`$ into contributions from the angular momentum $`J`$, the radial factor $`r^\gamma `$, and the angular part $`s(\theta )`$ of the density,
$$\frac{\mathrm{d}^2\mathrm{\Gamma }}{\mathrm{\Gamma }\mathrm{d}\theta ^2}|_{\theta =\frac{\pi }{2}}=-\frac{\mathrm{d}^2J}{J\mathrm{d}\theta ^2}+\frac{\mathrm{d}^2r^\gamma }{r^\gamma \mathrm{d}\theta ^2}-\frac{\mathrm{d}^2s(\theta )}{s\mathrm{d}\theta ^2}.$$
(19)
With the help of the equations of motion (eq. 8) we find
$`{\displaystyle \frac{\mathrm{d}^2J}{J\mathrm{d}\theta ^2}}`$ $`=`$ $`{\displaystyle \frac{\mathrm{d}\dot{J}}{J\dot{\theta }\mathrm{d}\theta }}=-{\displaystyle \frac{\partial _\theta ^2\varphi }{r^2\dot{\theta }^2}},`$ (20, 21)
$`{\displaystyle \frac{\mathrm{d}^2r^\gamma }{r^\gamma \mathrm{d}\theta ^2}}`$ $`=`$ $`\gamma {\displaystyle \frac{\ddot{r}}{r\dot{\theta }^2}}+\gamma (1+\gamma )\left({\displaystyle \frac{\mathrm{d}r}{r\mathrm{d}\theta }}\right)^2=\gamma {\displaystyle \frac{r\dot{\theta }^2-\partial _r\varphi }{r\dot{\theta }^2}}+\gamma (1+\gamma )\left({\displaystyle \frac{\dot{r}}{r\dot{\theta }}}\right)^2.`$ (22, 23)
Now rewrite the second derivatives of the density $`\mu `$ and the potential $`\varphi `$ in terms of the curvatures of the density and potential, which by definition (cf. eq. 11) satisfy
$`q_\mu ^{-2}-1`$ $`=`$ $`{\displaystyle \frac{\partial _\theta ^2\mu }{r\,\partial _r\mu }}=-{\displaystyle \frac{\partial _\theta ^2s(\theta )}{s}}{\displaystyle \frac{1}{\gamma }},`$ (24)
$`q_\varphi ^{-2}-1`$ $`=`$ $`{\displaystyle \frac{\partial _\theta ^2\varphi }{r\,\partial _r\varphi }}={\displaystyle \frac{\partial _\theta ^2p(\theta )}{p}}{\displaystyle \frac{1}{1-\gamma }}.`$ (25)
Rewriting $`\dot{r}^2`$ and $`\dot{\theta }^2`$ in terms of $`K_r`$ and $`K_\theta `$, we obtain
$$\frac{\mathrm{d}^2\mathrm{\Gamma }}{\mathrm{\Gamma }\mathrm{d}\theta ^2}|_{\theta =\frac{\pi }{2}}=K_\theta ^{-1}\left[q_\varphi ^{-2}-(1+\gamma )(1-\gamma K_r)+\gamma q_\mu ^{-2}K_\theta \right],$$
(26)
which immediately reduces to eq. (9).
# The Addition Spectrum and Koopmans’ Theorem for Disordered Quantum Dots
## I Introduction
We consider the response of the ground state of spinless fermions to the addition or removal of a particle. To this end, we apply the self-consistent Hartree-Fock (SCHF) approximation: a non-perturbative effective single particle theory.
Koopmans’ theorem states that the single particle SCHF energy levels describe the affinity and ionisation energy spectra for the unoccupied and occupied states respectively. The approximation involved is that all the other particles do not react to this process. This approximation is generally considered to be good when the single particle wavefunctions are extended: corrections to each wavefunction due to the rearrangement of the system following the addition or removal of a particle are expected to be $`𝒪(1/N)`$, where $`N`$ is the number of particles in the system. Moreover, these corrections to the wavefunctions are expected to disappear in the limit of vanishing disorder (having e.g. periodic boundary conditions), where the single particle wavefunctions are free waves, as well as in the limit of vanishing interaction. It is then loosely assumed that in the thermodynamic limit, Koopmans’ theorem becomes exact even for disordered systems, and should be sufficiently accurate for mesoscopic samples. It is evident, however, that if the physical quantities at hand require an energy resolution of order the mean single particle level spacing, $`\mathrm{\Delta }`$, the validity of Koopmans’ theorem should be reconsidered.
Our analysis of Koopmans’ theorem for a quantum dot is closely related to the addition spectrum of the latter: the spectrum of energy differences between states with total particle number different by unity. We consider only the energy differences between ground states, which are experimentally accessible through resonant tunnelling and capacitance measurements at low bias and temperature. We stress that while our analysis here pertains to some aspects of the experiments, a direct comparison is not feasible: firstly we consider spinless electrons, and secondly we consider disordered systems in the diffusive regime; in Ref. the mean free path is of order the sample size, whereas in Ref. the mean free path is much larger than the system size, and ergodicity is ensured by a chaotic boundary shape.
The position of the observed resonant tunnelling (RT) conductance peaks can be related to the ground state energy difference $`\mu _N=E_G(N,V_G)-E_G(N-1,V_G)`$ where $`E_G(N,V_G)`$ is the ground state energy of the dot with $`N`$ electrons, at gate voltage $`V_G`$. The spacing between consecutive peaks is thus related to
$$\mathrm{\Delta }_2(N)=E_G(N+1,V_G)-E_G(N,V_G)-E_G(N,V_G^{})+E_G(N-1,V_G^{}).$$
(1)
Within the constant interaction (CI) model the ground state energy is simply the sum of filled single particle energies $`e(n)`$ plus $`N(N-1)V_0/2`$, where $`V_0`$ is the constant interaction. Taking $`V_G=V_G^{}`$, the peak spacing trivially reduces to
$$\mathrm{\Delta }_2(N)=e(N+1)-e(N)+V_0$$
(2)
and so, in the diffusive regime, displays shifted Wigner-Dyson (WD) statistics up to corrections in one over the dimensionless conductance, $`g`$: $`P(s)=(\pi s/2)\mathrm{exp}(-\pi s^2/4)`$ for zero magnetic field, the case that we consider here; $`s=(\mathrm{\Delta }_2-V_0)/\mathrm{\Delta }`$.
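As an illustration of the CI-model statistics, the sketch below samples spacings from the WD surmise by inverting its cumulative distribution; the values of $`\mathrm{\Delta }`$ and $`V_0`$ are arbitrary:

```python
# Monte Carlo sketch of the CI-model peak spacing distribution.
import numpy as np

rng = np.random.default_rng(0)
DELTA, V0 = 1.0, 10.0                # level spacing and charging energy (arbitrary)

# P(s) = (pi s / 2) exp(-pi s^2 / 4) has CDF F(s) = 1 - exp(-pi s^2 / 4),
# so it can be sampled by inversion:
u = rng.random(100_000)
s = np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)

delta2 = V0 + DELTA * s
print(delta2.mean(), delta2.std())   # mean ~ V0 + Delta, fluctuations ~ 0.52 Delta
```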
Recent experiments on quantum dots have shown that whilst the mean peak spacings are well described by the CI model, the fluctuations are not described by Wigner-Dyson statistics. It is found that the distribution of $`\mathrm{\Delta }_2`$ is roughly Gaussian, with broader non-Gaussian tails seen in Ref. In Refs. the variance of the fluctuations was found to be considerably larger than that given by the WD distribution. Further experimental observations, including correlations of peak heights and the sensitivity to an Aharonov-Bohm flux, are not consistent with random matrix results, and suggest a breakdown of the naive single particle picture. To investigate this, one needs information on the ground state wavefunction.
Blanter et al have evaluated the fluctuations within a Hartree-Fock framework, neglecting effects due to the change in gate voltage from $`V_G`$ to $`V_G^{}`$. To this end they applied the Random Phase Approximation (RPA) to generate the screened interaction in the confined geometry, and assumed that all HF level spacings, except for the Coulomb gap, are described by WD statistics. Implicitly assuming Koopmans’ theorem to be valid, and using wavefunction statistics established for non-interacting electrons in a random potential, they calculated the fluctuations of $`\mathrm{\Delta }_2`$ beyond the CI model. These additional fluctuations were found to be parametrically small (in $`1/g`$) and proportional to $`\mathrm{\Delta }`$. Hence, the total fluctuations in $`\mathrm{\Delta }_2`$ were found to be proportional to $`\mathrm{\Delta }`$. The analysis of Ref. is consistent in the limits $`g\gg 1`$, $`r_s\ll 1`$. The parameter $`r_s`$ characterises the relative importance of interactions in the electronic system, and is defined as the mean electron separation in units of the effective Bohr radius.
Exact numerical calculations on small disordered dots did produce large Gaussian distributed fluctuations at experimental densities. It was claimed that for strong enough interaction $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ is *universal*, independent of the interaction strength and disorder, where $`\delta \mathrm{\Delta }_2`$ denotes the typical (RMS) size of the fluctuations, and the angle brackets denote disorder averaging. This *universal* constant was found to be approximately 0.10–0.17, to be compared with the WD result for the CI model: $`0.52\mathrm{\Delta }/(\mathrm{\Delta }+V_0)`$. We note that the typical experimental value for the charging energy $`V_0`$ is much larger than $`\mathrm{\Delta }`$. The scaling with $`\mathrm{\Delta }_2`$ suggested by this analysis is in stark contrast to the scaling with $`\mathrm{\Delta }`$ obtained in Ref..
Stopa has considered ballistic chaotic billiards numerically, using local density functional theory. In this case, it was claimed that the fluctuations arise due to strongly scarred wavefunctions in the self consistent potential. As a result of these scars, an asymmetric distribution of $`\mathrm{\Delta }_2(N)`$ was found, including strong correlations over $`N`$. It was then further noted that what is actually measured (i.e. the change in the gate voltage between resonant tunnelling peaks) is not simply related to $`\mathrm{\Delta }_2(N)`$ when the dependence of $`\mathrm{\Delta }_2(N)`$ on the gate voltage is strong. It was then claimed that a self-consistent calculation of $`\mathrm{\Delta }V_G`$ with $`\mathrm{\Delta }_2(N)`$ recovers a symmetric distribution of $`\mathrm{\Delta }_2(N)`$, and reduces peak to peak correlations.
A further suggestion that the coupling to the gate is important in understanding the fluctuations has been made with reference to the CI model, with WD statistics for the single particle levels . The authors claim that the required distribution of $`\mathrm{\Delta }_2`$ can be generated, except for the non-Gaussian tails, through the de-correlation of neighbouring levels under a parametric change in the Hamiltonian (mediated by $`V_G`$). However, the degree of de-correlation induced by $`\mathrm{\Delta }V_G`$ is left as a fitting parameter.
In this paper we present numerical calculations within the SCHF approximation, considering larger samples than is feasible by exact diagonalisation. This approximation has been seen to be quite good for the calculation of persistent currents in similar systems. We show that fluctuations large compared to the single particle level spacing can arise without recourse to varying the sample shape, size or gate to dot coupling, supposing these to be additional effects. We further demonstrate that approximating the addition spectrum spacings by applying Koopmans’ theorem can lead to large errors in the calculation of the spacing statistics.
We consider separately both a long range (Coulomb) bare interaction and a short range (nearest neighbour) bare interaction. In section II we introduce our model in detail; in section III we present a short discussion of the implications of Koopmans’ theorem; in section IV we present and discuss our numerical results, which are then summarised in the final section.
## II The Model
We address the following tight binding Hamiltonian for spinless fermions
$$H=\underset{i}{\sum }w_ic_i^+c_i-t\underset{i,\eta }{\sum }c_{i+\eta }^+c_i+\frac{U_0}{2}\underset{i\ne j}{\sum }M_{ijij}c_i^+c_j^+c_jc_i$$
(3)
where $`i`$ is the site index, $`\eta `$ runs over the set of nearest neighbours, $`w_i`$ is the random on-site energy in the range $`[-W/2,W/2]`$, and $`t`$ the hopping matrix element, henceforth taken as unity. We study separately, both a Coulomb interaction potential,
$$M_{ijij}=1/|𝐫_i-𝐫_j|$$
(4)
and a short range potential plus a constant term $`M_c`$ (see below),
$$M_{ijij}=(\delta _{j,i+\eta }+M_c).$$
(5)
We consider a 2D system with periodic boundary conditions, and choose to define
$$|𝐫_i-𝐫_j|^2\equiv [L_x^2\mathrm{sin}^2(\pi n_x/L_x)+L_y^2\mathrm{sin}^2(\pi n_y/L_y)]/\pi ^2,$$
(6)
where $`(n_x,n_y)\equiv 𝐫_i-𝐫_j`$.
All distances are measured in units of the lattice spacing $`a`$; the physical parameters are therefore $`U_0=e^2/a`$, $`t=\hbar ^2/2ma^2`$. The standard definition for $`r_s`$ is given, for low filling, by $`r_s=U_0/(t\sqrt{4\pi \nu })`$, where $`\nu =N/A`$ is the filling factor on the tight binding lattice with $`A`$ sites. The dimensionless conductance $`g`$ can be approximated, again for low filling, using the Born approximation. We find $`g=96\pi \nu (t/W)^2`$, which is valid for $`A,N\gg g\gg 1`$. Here $`\nu \approx 1/4`$ throughout, so that $`r_s\approx 0.56U_0/t`$, and $`g\approx 75(t/W)^2`$.
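This parameter bookkeeping is straightforward to reproduce; the sketch below evaluates the periodic distance of Eq. (6) and the low-filling estimates for $`r_s`$ and $`g`$, with an illustrative lattice size and disorder strength (not values used in the paper):

```python
# Periodic torus distance (Eq. 6) and the quoted low-filling estimates.
import numpy as np

LX = LY = 8                       # illustrative lattice size
U0, T_HOP, W, NU = 1.0, 1.0, 3.0, 0.25

def torus_distance(nx, ny):
    """Distance |r_i - r_j| on the torus, with (nx, ny) = r_i - r_j."""
    d2 = (LX ** 2 * np.sin(np.pi * nx / LX) ** 2
          + LY ** 2 * np.sin(np.pi * ny / LY) ** 2) / np.pi ** 2
    return np.sqrt(d2)

r_s = U0 / (T_HOP * np.sqrt(4.0 * np.pi * NU))   # ~0.56 U0/t at nu = 1/4
g = 96.0 * np.pi * NU * (T_HOP / W) ** 2          # ~75 (t/W)^2 at nu = 1/4
print(torus_distance(1, 0), r_s, g)
```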
Having identified the parameters of our model with the standard ones employed in the theory of a continuous electron gas, we note that in the limit of small $`r_s`$ and $`1/g`$ the leading order term for the typical interaction dependent fluctuations predicted by Blanter et al is of order $`U_0\mathrm{\Delta }/t\sqrt{g}`$. For the torus geometry considered here, this contribution, being a surface term, vanishes identically. Their prediction for the typical fluctuations in addition to those of the CI model then reduces to order $`U_0\mathrm{\Delta }/tg`$.
The torus geometry has the advantage over geometries with hard walls that the compensating background charge provides only a trivial shift of all the site energies, and can be removed. In a bounded dot with an overall charge, the excess charge may build up near the boundary, depending on the position of nearby metallic plates and gates. These effects are geometry specific. Upon adding an electron, the average charge configuration may change considerably (the configuration is strongly geometry dependent). As the gate voltage is varied to allow for the next electron addition, the background potential could have changed, causing further charge rearrangement. Whilst it is of great interest to analyse this issue (which may play an important role in the peak spacing fluctuations as well as undermining the naive single particle picture by further reducing the accuracy of Koopmans’ theorem), we concentrate here on effects due entirely to the intrinsic rearrangement of the dot. From this point of view our analysis may be taken as an attempt to establish an upper bound criterion for the breakdown of Koopmans’ theorem. In reality it may break down earlier due to other non-universal factors. During the completion of this work, very recent experimental evidence for significant rearrangement has been produced. It is argued that rearrangements due to adding an electron are far greater than rearrangements due merely to a change in shape.
When considering the short range interaction, the mean charging energy $`V_0`$ in Eq.(2) must be put in by hand through $`M_c`$ of equation (5). The way in which this is done depends on the physical situation being modelled, and is highly geometry dependent (vis-à-vis the gates). We stress that the value of $`M_c`$ does not affect the physical results. We choose to insert
$$M_c=V_0/U_04/A,V_0=\underset{𝐫,𝐫^{}}{}^{}\frac{U_0}{|𝐫𝐫^{}|}.$$
(7)
This value for $`M_c`$ is defined such that if the charge is uniformly spread over the dot, the average charging energies in the Coulomb and nearest-neighbour cases roughly coincide. This choice has been made for simplicity, but corresponds to the premise that the interactions of the $`N`$-electron gas with the positive background and with itself is the same for both models considered. Exchange contributions, which tend to reduce the total charging energy, are included insofar as to cancel both the on-site contributions to the energy, and the unphysical self-interaction of electrons, but are otherwise neglected. The energy associated with charging the system uniformly is $`U_0N(N-1)/(2A^2)\underset{𝐫\ne 𝐫^{}}{\sum }|𝐫-𝐫^{}|^{-1}`$ in the Coulomb case, and $`U_0N(N-1)/(2A^2)(4A+M_cA^2)`$ in the nearest-neighbour case. Equation (7) follows from equating these energies. This estimate can be systematically improved if the above premise is taken as the definition, not only by correctly accounting for the exchange contributions, but also by considering single particle wavefunction statistics in the diffusive regime. In this case wavefunction correlation functions such as $`\langle |\psi _i(𝐫)|^2|\psi _j(𝐫^{})|^2\rangle `$ are required in order to calculate the average electrostatic energy, where here and after $`\langle \cdots \rangle `$ denotes averaging over the disorder ensemble.
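The constant $`M_c`$ of Eq. (7) follows from a direct lattice sum; the sketch below evaluates it on an assumed $`8\times 8`$ torus, exploiting the translation invariance of the periodic distance:

```python
# Evaluate M_c = V0/U0 - 4/A on a small torus (Eq. 7); A is the site count.
import numpy as np

LX = LY = 8
A = LX * LY

def torus_distance(nx, ny):
    d2 = (LX ** 2 * np.sin(np.pi * nx / LX) ** 2
          + LY ** 2 * np.sin(np.pi * ny / LY) ** 2) / np.pi ** 2
    return np.sqrt(d2)

# The primed sum over pairs (r, r') with r != r' depends only on the
# separation, so it equals A times the sum over nonzero separations.
pair_sum = sum(A / torus_distance(nx, ny)
               for nx in range(LX) for ny in range(LY)
               if (nx, ny) != (0, 0))

V0_over_U0 = pair_sum / A ** 2   # V0 / U0 in Eq. 7
M_c = V0_over_U0 - 4.0 / A
print(V0_over_U0, M_c)
```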
In Ref. it was assumed that the (RPA) screening can be taken into account before constructing the Slater determinant ground state, and therefore their result corresponds to a short-ranged effective interaction. It is not clear that this remains a consistent procedure when calculating the ground state energy self-consistently. The reason for the inconsistency is that many of the diagrams generated by the SCHF approximation are already included in the RPA calculation of the screening, resulting in double counting. On the other hand, if the screening is generated externally (e.g. by close metallic gates), then it is consistent to insert a short ranged bare interaction, and this is the point of view taken here.
In some sense, the Coulomb interaction results can be considered as the opposite limit of the nearest-neighbour interaction, and is of interest in this context. However, it is more difficult to physically motivate the use of a Coulombic bare interaction unless one is considering very low electron densities. Screening is indeed weak in a $`2d`$ electron gas in a vacuum, even at high density, but the SCHF procedure cannot correctly generate screening by itself; whilst it can screen the Hartree contributions (as discussed above), it does not screen the exchange (Fock) term. However, we have verified that for the range of parameters considered here, fluctuations in the Hartree energy are larger than the typical fluctuations of the exchange energy. This suggests that the error made in not screening the exchange term correctly is not overly important.
## III Implications of Koopmans’ Theorem
Let us now consider the form of $`\mathrm{\Delta }_2`$, and approximations to it given by applying Koopmans’ theorem. We denote the diagonal matrix elements of the one-body operators in (3) by $`T_i^N`$, and the antisymmetrised Hartree-Fock interaction by $`V_{ij}^N`$, where hereafter the subscripts denote single particle states, and the superscript $`N`$ denotes the number of particles present and identifies the self-consistent basis of single particle wavefunctions being employed, $`\psi _i^N`$. For the torus geometry, where the gate voltage and background potential represent a trivial shift that can be omitted, the SCHF ground state energy is given by
$$E_G(N)=\underset{j}{\overset{N}{\sum }}ϵ_j^N-\frac{1}{2}\underset{ij}{\overset{N}{\sum }}V_{ij}^N=\underset{j}{\overset{N}{\sum }}T_j^N+\frac{1}{2}\underset{ij}{\overset{N}{\sum }}V_{ij}^N,$$
(8)
where $`ϵ_l^m`$ is the $`l`$th SCHF single particle energy for a system of $`m`$ particles in the ground state:
$$ϵ_l^m=T_l^m+\underset{j}{\overset{m}{\sum }}V_{lj}^m.$$
(9)
Using (8), we find (c.f. Eq.(1))
$$\mathrm{\Delta }_2(N)=T_{N+1}^{N+1}-T_N^{N-1}+\underset{j}{\overset{N}{\sum }}\left(T_j^{N+1}-2T_j^N+T_j^{N-1}\right)$$
$$+\underset{j}{\overset{N}{\sum }}\left(V_{N+1j}^{N+1}-V_{Nj}^{N-1}\right)+\frac{1}{2}\underset{ij}{\overset{N}{\sum }}\left(V_{ij}^{N+1}-2V_{ij}^N+V_{ij}^{N-1}\right).$$
(10)
Applying Koopmans’ approximation corresponds to dropping the superscripts and employing an appropriate fixed basis. The theorem implies that the effective single particle states do not depend on the occupation of these states. In particular, Koopmans’ theorem yields $`ϵ_{N+1}^N`$ for the minimum energy required to add a particle to a system of $`N`$ particles, and $`ϵ_N^N`$ for the maximum energy gained by removing a particle from the same system; in both cases the final state is a ground state. Clearly $`ϵ_l^m`$ as well as the ground state energy depend on $`m`$, even in Koopmans’ approximation, through the number of terms in the sum in Eqs.(9) and (8) respectively. It is then easy to see that Koopmans’ approximation yields
$$\mathrm{\Delta }_2^{k_1}(N)=ϵ_{N+1}^N-ϵ_N^N.$$
(11)
We also consider two other approximations to $`\mathrm{\Delta }_2`$ that involve calculating two self-consistent bases rather than just one:
$$\mathrm{\Delta }_2^{k_2}(N)=ϵ_{N+1}^N-ϵ_N^{N-1}$$
(12)
$$\mathrm{\Delta }_2^{k_3}(N)=ϵ_{N+1}^{N+1}-ϵ_N^N.$$
(13)
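In the same illustrative setting, the three Koopmans-type estimates need only the single-particle energies $`ϵ_l^m`$ of Eq. (9). A sketch, using 0-indexed levels so that $`ϵ_N^N`$ is `eps[N][N-1]` (the indexing convention is ours):

```python
def koopmans_gaps(eps, N):
    """Gap estimates of Eqs. (11)-(13) plus the error measure of
    Eq. (15); eps[m][l] is the (l+1)-th SCHF single-particle energy
    in the m-particle self-consistent basis."""
    k1 = eps[N][N] - eps[N][N - 1]             # Eq. (11): one basis
    k2 = eps[N][N] - eps[N - 1][N - 1]         # Eq. (12): two bases
    k3 = eps[N + 1][N] - eps[N][N - 1]         # Eq. (13): two bases
    d_eps = eps[N - 1][N - 1] - eps[N][N - 1]  # Eq. (15), >= 0
    return k1, k2, k3, d_eps
```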
All three estimates (11-13) coincide with $`\mathrm{\Delta }_2(N)`$ of Eq.(10) if Koopmans’ theorem holds. To connect with the notation of Ref., and to demonstrate the difference between the above three approximations and the fully self-consistent result, we provide a schematic diagram of the SCHF spectra in Fig. 1.
Since the self-consistent basis of $`N`$ particles provides the lowest energy for $`N`$ occupied levels, and similarly for $`N-1`$ particles, the following relations are clear:
$$\underset{i}{\overset{N-1}{}}T_i^N+\frac{1}{2}\underset{ij}{\overset{N-1}{}}V_{ij}^N\ge \underset{i}{\overset{N-1}{}}T_i^{N-1}+\frac{1}{2}\underset{ij}{\overset{N-1}{}}V_{ij}^{N-1}$$
$$\underset{i}{\overset{N}{}}T_i^N+\frac{1}{2}\underset{ij}{\overset{N}{}}V_{ij}^N\le \underset{i}{\overset{N}{}}T_i^{N-1}+\frac{1}{2}\underset{ij}{\overset{N}{}}V_{ij}^{N-1}$$
(14)
Combining these equations, we find that $`\mathrm{\Delta }ϵ(N)\equiv \mathrm{\Delta }_2^{k_1}(N)-\mathrm{\Delta }_2^{k_2}(N)\ge 0`$, or equivalently
$$\mathrm{\Delta }ϵ(N)=ϵ_N^{N-1}-ϵ_N^N\ge 0.$$
(15)
The equalities in (14),(15) only hold when no modification of the effective single particle wavefunctions occurs following the addition of an electron. In a disordered dot, in which there are no spatial symmetries, such a modification will always take place, and so $`\mathrm{\Delta }ϵ`$ can be considered strictly positive.
The difference $`\mathrm{\Delta }ϵ`$ provides a measure of the effectiveness of Koopmans’ theorem. To demonstrate this we present, in Fig. 2, a schematic diagram of the surface of expectation values of the many-body Hamiltonian in the space of Slater determinants of $`N-1`$, $`N`$, and $`N+1`$ particles. The SCHF ground states correspond to minima in these surfaces. From the diagram, it is clear that the energies $`ϵ_{N+1}^N`$ and $`ϵ_N^{N-1}`$ are upper bounds to the respective addition energies, and the energies $`ϵ_{N+1}^{N+1}`$ and $`ϵ_N^N`$ are lower bounds to the addition energies. The approximation $`\mathrm{\Delta }_2^{k_1}`$ is therefore obtained by subtracting a lower bound ($`ϵ_N^N`$) from an upper bound ($`ϵ_{N+1}^N`$). As a result, the average value contains the average difference between the two bounds in addition to the correct mean $`\mathrm{\Delta }_2`$. It is generally assumed that the difference between the two bounds vanishes in the thermodynamic limit, and therefore so does $`\mathrm{\Delta }ϵ`$. We shall see that our results do not show any indication that this is the case. On the other hand, $`\mathrm{\Delta }_2^{k_2}`$ corresponds to the difference of two upper bounds to the two relevant addition energies. Regardless of the quality of the upper bound, so long as it is not strongly dependent on the number of particles present, both the particle number and disorder averaged results are good. The third approximation, $`\mathrm{\Delta }_2^{k_3}`$, corresponds to the difference of two lower bounds, and like $`\mathrm{\Delta }_2^{k_2}`$ is good in the mean. It is for this reason that we introduce these alternative approximations. It is easy to see that $`\mathrm{\Delta }_2^{k_1}(N)-\mathrm{\Delta }_2^{k_3}(N)=\mathrm{\Delta }ϵ(N+1)`$, and therefore provides no further information. On the other hand, the fluctuations of $`\mathrm{\Delta }_2^{k_3}`$ can be different from those of $`\mathrm{\Delta }_2^{k_2}`$, and so are investigated separately. We note that in a clean system at $`r_s`$ below the Wigner crystal transition , the minima would align in Fig. 2, reflecting the validity of Koopmans’ theorem in that limit.
Let us briefly discuss the non-self-consistent single particle picture, for which the Koopmans’ approximations (11-13) and Eq.(10) all coincide:
$$\mathrm{\Delta }_2(N)=T_{N+1}-T_N+\underset{j}{\overset{N-1}{}}\left(V_{N+1j}-V_{Nj}\right)+V_{N+1N}.$$
(16)
Here, the term non-self-consistent approximation refers to a scheme where a set of effective single-particle states is given (e.g. by solving the N-electron SCHF problem), and utilised for any number of particles present in the system. The nearest neighbour spacings between levels that are both occupied or both unoccupied have a similar form:
$$ϵ_{m+1}^N-ϵ_m^N=T_{m+1}-T_m+\underset{j}{\overset{N}{}}\left(V_{m+1j}-V_{mj}\right),$$
(17)
the major difference between (16) and (17) is the additional unbalanced matrix element $`V_{N+1N}`$ appearing in (16). Let us also suppose that in this simple single particle scheme the electrons interact with a short-ranged effective interaction. Blanter et al. introduce the hypothesis that the (normalised) spacings (17) and $`\mathrm{\Delta }_2-V_{N+1N}`$ obey WD statistics up to corrections in $`1/g`$. Further assuming that the wavefunction correlations are still close to those of non-interacting particles leads, for the short-ranged effective interaction, to the result $`\mathrm{Var}(V_{ij})\sim (U_0\mathrm{\Delta }/tg)^2`$ , so that the interaction dependent contribution to $`\delta \mathrm{\Delta }_2`$ scales like $`U_0\mathrm{\Delta }/tg`$ . This analysis is valid in the regime $`r_s\ll 1`$ and $`g\gg 1`$, implying that Koopmans’ theorem is a good approximation in that regime.
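As a worked instance of this scaling argument, the interaction-dependent contribution to the spacing fluctuations can be estimated directly; the prefactor is of order one and is not fixed by the argument, and the numbers in the comment are illustrative assumptions:

```python
def interaction_fluctuation_scale(U0, t, Delta, g):
    """Scale of the interaction contribution to delta Delta_2 implied
    by Var(V_ij) ~ (U0*Delta/(t*g))**2 in the regime r_s << 1, g >> 1."""
    return U0 * Delta / (t * g)

# Illustrative numbers: U0 = 0.5, t = 1.0, Delta = 0.1, g = 10 give
# a contribution ~ 5e-3, i.e. a few percent of the mean level spacing.
```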
## IV Results and Discussion
In this section we present and discuss the results of the numerical simulations for both the nearest-neighbour and the Coulomb bare potentials. To make each subsection self-contained there is some repetition.
### A Short Range Interactions
We consider first the case of a nearest neighbour bare interaction potential as defined in Eq.(5). We begin by plotting the distributions of both the level spacings (17) and the gap (16) of the SCHF spectrum for finite $`r_s`$, $`g`$. In this case (16) and (17) are calculated in the self-consistent basis of $`N`$ particles. In Fig. 3 it is seen that the normalised level spacings between occupied states show an increasing deviation from WD to Poisson statistics as $`U_0`$ is increased. This is also true for the unoccupied states, but to a much greater extent. The difference between occupied and unoccupied states in the SCHF approximation will be discussed in greater detail later. We interpret the tendency towards Poisson statistics as a signature of the incipient localisation of the effective one particle states.
The normalised gap ($`\mathrm{\Delta }_2^{k_1}`$) distribution tends towards a more symmetric distribution that is approximately Gaussian as $`U_0`$ is increased.
We have also investigated the gap ($`\mathrm{\Delta }_2`$) distribution obtained within the fully self-consistent scheme, which we show in Fig. 4. We find that as $`U_0`$ is increased, the distribution evolves from a WD form to a more symmetric distribution similar to a Gaussian.
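The comparisons with WD and Poisson statistics in Figs. 3 and 4 amount to histogramming normalised nearest-neighbour spacings against the two reference forms; a minimal sketch (the Wigner surmise below is the standard GOE form, which we assume is the relevant ensemble here):

```python
import numpy as np

def normalised_spacings(levels):
    """Nearest-neighbour spacings s_i = (e_{i+1} - e_i) / <s>."""
    s = np.diff(np.sort(np.asarray(levels)))
    return s / s.mean()

def wigner_dyson(s):
    """GOE Wigner surmise: P(s) = (pi/2) s exp(-pi s^2 / 4)."""
    return 0.5 * np.pi * s * np.exp(-0.25 * np.pi * s**2)

def poisson_spacing(s):
    """Poisson spacing distribution: P(s) = exp(-s)."""
    return np.exp(-s)
```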
We shall concentrate first on the mean values of these distributions. A typical dependence of $`\mathrm{\Delta }_2`$ on the interaction parameter $`U_0`$ is plotted in figure 5. Whilst $`\mathrm{\Delta }_2^{k_2}`$ and $`\mathrm{\Delta }_2^{k_3}`$ provide a good approximation, we see a strong deviation of $`\mathrm{\Delta }_2^{k_1}`$. For all the system sizes considered, this effect occurs at $`r_s\sim 𝒪(1)`$. Results for the CI model, evaluated as described above, are plotted for comparison. Deviations of order $`𝒪(U_0/A)`$ from the CI model appear above $`U_0\simeq 2`$ ($`r_s\simeq 1`$).
Elsewhere , we show that the ground state develops large density modulations as $`U_0`$ is increased beyond $`r_s\sim 𝒪(1)`$. These ground state charge density modulations (CDMs) explain the deviations of $`\mathrm{\Delta }_2`$ from the CI model prediction: they reduce the average addition energy by up to $`4U_0/At`$ when $`\nu <1/2`$ for a commensurate lattice. When $`\nu >1/2`$ the mean charging energy can be correspondingly increased.
We are also now in a position to understand why the level spacing statistics between unoccupied states show an increased tendency towards a Poisson distribution: the density modulations that appear in the ground state alter the potential felt by the unoccupied states. These modulations are not spatially ordered, as is demonstrated in Ref. Hence, as $`U_0`$ is increased, the unoccupied states see an effective potential with increasingly strong modulations and tend to localise, whence the tendency towards a Poissonian distribution.
Let us consider the error in $`\mathrm{\Delta }_2^{k_1}`$ in more detail. As can be seen in Fig. 6, $`\mathrm{\Delta }ϵ/\mathrm{\Delta }`$ increases with the system size for lattices up to about $`7\times 8`$; for larger systems ($`A\gtrsim 50`$) it seems that the error becomes proportional to $`\mathrm{\Delta }`$, and is of order $`\mathrm{\Delta }`$ when $`r_s`$ is of order unity.
We find that the nature of the disorder dependence of $`\mathrm{\Delta }ϵ`$ depends on the interaction strength, as seen in Fig. 7. The change in dependence occurs at interaction strengths corresponding to $`r_s\simeq 1`$ for all the sample sizes considered. One might be surprised that deviations from Koopmans’ theorem do not smoothly decrease as $`W\to 0`$ since, in the limit of vanishing disorder, Koopmans’ theorem becomes exact on the torus due to the restoration of translational symmetry. However, when $`W\to 0`$, the spectrum develops many near degeneracies such that the effective perturbation due to $`U_0`$ is magnified as $`W\to 0`$. In the limit $`r_s\to 0`$, $`g\gg 1`$, the typical size of the matrix elements $`V_{ijkl}`$ which drive the rearrangement scales like $`\delta V_{ijkl}\sim r_s\mathrm{\Delta }/g`$ , thus one expects that in this regime $`\mathrm{\Delta }ϵ`$ should increase with disorder. We find only a weak increase with disorder for $`r_s\lesssim 1`$.
In figure 8 we plot the interaction dependence of $`\mathrm{\Delta }ϵ/\mathrm{\Delta }`$. We find that at small $`U_0`$, $`\mathrm{\Delta }ϵ/\mathrm{\Delta }\propto (U_0/t)^2`$, with deviations for larger $`U_0`$. In fact, this quadratic behaviour can be understood using second order perturbation theory. To see this we refer back to the schematic diagram of Fig. 2. A shift occurs in the ground state configuration when a particle is added, which is represented by a misalignment of the minima. This shift is, to leading order, linear in $`U_0`$. Since the SCHF ground state energy is a minimum in the expectation value of the Hamiltonian, the difference of ground state energies for the two configurations will be quadratic in this shift. Furthermore, the local curvature tensor is independent of $`U_0`$ when the interaction matrix elements are small compared to the mean level spacing. Thus, $`\mathrm{\Delta }ϵ`$ scales like $`U_0^2`$ in the perturbative regime. The indication is that second order perturbation theory is qualitatively good even for $`r_s\sim 1`$. We note that since both the shift and the local curvature tensor depend on the disorder, there is no such simple $`W`$ dependence.
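The quadratic prediction can be tested by a log-log fit of the measured $`\mathrm{\Delta }ϵ`$ against $`U_0`$ in the small-$`U_0`$ regime; a sketch (all names are ours):

```python
import numpy as np

def power_law_fit(U0_values, d_eps_values):
    """Fit d_eps = c * U0**p via a straight line in log-log space;
    second order perturbation theory predicts p close to 2."""
    p, log_c = np.polyfit(np.log(U0_values), np.log(d_eps_values), 1)
    return p, np.exp(log_c)
```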
To summarise the results for the mean Coulomb gap, we find that Koopmans’ approximation (11) makes an error in the mean charging energy which for small $`r_s`$, $`A\lesssim 50`$, and fixed disorder scales like $`\mathrm{\Delta }ϵ\propto r_s^2`$. There is also evidence for the larger sizes ($`A\gtrsim 50`$), far beyond those accessible by exact diagonalisation, that $`\mathrm{\Delta }ϵ\propto r_s^2\mathrm{\Delta }`$. The latter dependence is consistent with the expectation that for sufficiently small $`r_s`$, $`1/g`$, perturbation theory is valid when the effective interaction is short-ranged. We find that $`\mathrm{\Delta }ϵ\sim 𝒪(\mathrm{\Delta })`$ when $`r_s\sim 𝒪(1)`$. To understand this result, we return to (16), (17), and fix the basis to be the self-consistent one for $`N`$ particles, so that (16) now describes $`\mathrm{\Delta }_2^{k_1}`$. Since we have verified that the level spacings (17) show nearest neighbour separation statistics which are close to WD for all $`m\ne N`$, with an approximately constant density of states, we are led to conclude that $`\mathrm{\Delta }ϵ`$ arises due to the fundamental difference between occupied and unoccupied levels in the SCHF approximation. In short, whilst $`V_{m+1j}^N-V_{mj}^N`$ for $`m\ne N`$ vanishes as expected, $`V_{N+1j}^N-V_{Nj}^N`$ does not. Indeed $`\mathrm{\Delta }ϵ\sim \sum _j^{N-1}(V_{N+1j}^N-V_{Nj}^N)`$.
Turning now to the fluctuations in $`\mathrm{\Delta }_2`$, we plot an example result in Fig. 9. The most striking behaviour is the asymptotic saturation of the fluctuations (verified but not shown for even stronger interactions). As with the deviations of $`\mathrm{\Delta }_2`$ from the CI model, this occurs over the same range of interactions for all the system sizes considered, and is associated with the appearance of CDMs. Over the range of interaction strengths shown the fluctuations have not completely saturated, but the ground state density modulations are already present , such that, at low filling, the short-ranged contribution to the interaction energy is reduced. In the limit of strong interactions the charge segregates at a kinetic energy cost of order $`𝒪(t)`$, and $`U_0`$ plays no further role in the ground state energy fluctuations. The fluctuations therefore become sub-linear in $`U_0`$, and eventually saturate to an interaction independent value. Moreover, the results for the fluctuations become strongly geometry and filling factor dependent . We note that the observed saturation is in fact an artifact of the sharp cut-off in the interaction range: with a longer-ranged interaction, charge segregation cannot eliminate contributions due to the interaction (although it may significantly reduce them), and the fluctuations would no longer be bounded simply by kinetic energy considerations.
In Fig. 9 it can also be seen that for strong interactions, the fluctuations are overestimated by $`\mathrm{\Delta }_2^{k_1}`$ and $`\mathrm{\Delta }_2^{k_2}`$, and underestimated by $`\mathrm{\Delta }_2^{k_3}`$. This can be understood within the picture given above of charge density modulations. In this case, an occupied state that is removed non-self-consistently will yield less energy than can be gained when the system is allowed to reorganise, but the typical size of this error saturates to an interaction independent value for the same reason that the SCHF fluctuations do. If on the other hand an unoccupied state is occupied non-self-consistently, it is not possible to avoid contributions from the short-ranged part of the potential (we do not consider strongly Anderson localised states at very low filling), and the typical error increases indefinitely with the interaction strength. As a result $`\mathrm{\Delta }_2^{k_3}`$ underestimates the fluctuations by an amount that saturates to an interaction independent value, whereas fluctuations in the charging energy predicted by $`\mathrm{\Delta }_2^{k_1}`$ and $`\mathrm{\Delta }_2^{k_2}`$ grow with $`U_0`$ indefinitely; the errors made in employing the latter approximations diverge with the interaction strength.
Concentrating now on the fully self consistent results, the fluctuations in the charging energy are plotted against the sample size in figure 10. As expected, for very weak interactions, the typical fluctuations vanish like $`1/A`$, being dominated by kinetic energy fluctuations. For stronger interactions, this dependence no longer holds: for $`r_s\lesssim 1`$ our results are in broad agreement with Ref., but do not agree with their suggestion that the typical fluctuations remain proportional to $`\mathrm{\Delta }`$ for $`r_s>𝒪(1)`$. This appears to conflict with a simple single-parameter scaling argument . The appearance of fluctuations that do not scale with $`\mathrm{\Delta }`$ coincides with the appearance of density modulations. We stress that in this model there can be no physical connection between the amplitude of the constant interaction and the amplitude of the fluctuations.
For strong interactions the dominant disorder dependence appears to develop only for $`W\gtrsim 4`$, where it is consistent with the emergence of a linear dependence to be expected from spatial rearrangements in the disorder potential. An example is plotted in Fig. 11. We reiterate that we also find strong geometry and filling factor dependences. It is extremely difficult to extract disorder scalings in such small systems because $`W/t`$ is required to be fairly large to generate diffusive motion, which in turn stretches the spectrum in the tails. This can be seen at weak interaction, where one would have hoped to see a disorder independent plateau in $`\mathrm{\Delta }`$ (i.e. $`\delta \mathrm{\Delta }_2`$ at $`U_0=0`$).
To summarise then, we find $`\delta \mathrm{\Delta }_2\simeq 0.52\mathrm{\Delta }+ar_s\mathrm{\Delta }+𝒪(r_s^2)`$, where $`a`$ is an undetermined constant or function of disorder strength. We note that the disorder scaling is not clear because of the residual dependence of $`\mathrm{\Delta }`$ on $`W`$.
### B Long Range Interactions
We consider here the results for the Coulombic bare potential.
We first study the distributions of both the level spacings (17) and the gap (16) of the SCHF spectrum at finite $`r_s`$, $`g`$. In this case (16) and (17) are calculated in the self-consistent basis of $`N`$ particles. In Fig. 12 it is seen that the normalised level spacings obey statistics very close to WD for all interaction strengths considered. Between occupied states (Fig. 12a) there is a mild deviation towards Poisson statistics for the strongest interaction strengths, indicative of a weak tendency towards localisation. Between unoccupied states (Fig. 12b) the distribution is even closer to WD. On the other hand, the normalised gap distribution clearly tends towards a more symmetric distribution that is approximately Gaussian.
We have also investigated the gap ($`\mathrm{\Delta }_2`$) distribution obtained within the fully self-consistent scheme, which we show in Fig. 13. Again we find that as $`U_0`$ is increased, the distribution evolves from a WD form to a symmetric distribution similar to a Gaussian.
We shall concentrate first on the mean values obtained from these distributions, and will come to the variance later in the section. Figure 14 shows a comparison of $`\mathrm{\Delta }_2`$, with the various approximations to it, plotted against $`U_0`$. Whilst $`\mathrm{\Delta }_2^{k_2}`$ and $`\mathrm{\Delta }_2^{k_3}`$ provide a good approximation to $`\mathrm{\Delta }_2`$, we see a clear deviation of $`\mathrm{\Delta }_2^{k_1}`$. Results for the CI model, evaluated as described above, are plotted for comparison. That the CI model is good in the mean indicates that the single particle wavefunctions remain roughly uniformly distributed over the dot for all $`r_s`$ considered.
In figure 15 we plot $`\mathrm{\Delta }ϵ/\mathrm{\Delta }_2`$ against the sample area $`A`$, for an intermediate disorder strength ($`W=4`$). Since we know from Fig. 14 that $`\mathrm{\Delta }_2^{k_2}\simeq \mathrm{\Delta }_2`$, $`\mathrm{\Delta }ϵ`$ is very close to $`\mathrm{\Delta }_2^{k_1}-\mathrm{\Delta }_2`$, the total error made by applying Koopmans’ theorem. For $`A\lesssim 50`$, we find that for a fixed interaction strength $`\mathrm{\Delta }ϵ\propto L`$ ($`\mathrm{\Delta }ϵ/\mathrm{\Delta }_2\propto A`$). For larger samples with $`U_0\gtrsim 2`$ we find a weakening in the dependence, but see no indication that it will vanish relative to $`\mathrm{\Delta }`$. The result that the deviations from Koopmans’ approximation increase with system size (when compared to $`\mathrm{\Delta }`$), showing no sign of saturation, is admittedly strange, and may be an artifact of the specific model considered here. However, the result that Koopmans’ approximation appears to fail even as the system size tends towards the thermodynamic limit is in line with our findings for the short-ranged case .
It is interesting to see how the error depends on disorder. In figure 16 we plot $`\mathrm{\Delta }ϵ/t`$ for a range of disorder strengths: the disorder dependence as a function of interaction is weak, but not simple. Similarly to the short-ranged case, the deviations from Koopmans’ approximation do not decrease for small disorder. This occurs for the same reasons as in the short-ranged case. Here too the typical size of the matrix elements $`V_{ijkl}`$ which drive the rearrangement scales inversely with $`g`$. One thus expects that in this regime $`\mathrm{\Delta }ϵ`$ should increase with disorder, and this is indeed seen in the figure. For $`U_0\gtrsim 4t`$, $`\mathrm{\Delta }ϵ`$ decreases with disorder at sufficiently large $`W`$, with evidence of a turning point ($`d\mathrm{\Delta }ϵ/dW=0`$) at $`U_0=4t`$, $`W\simeq 4t`$.
In figure 17 we plot the interaction dependence of $`\mathrm{\Delta }ϵ/\mathrm{\Delta }`$. We find that at small $`U_0`$, $`\mathrm{\Delta }ϵ/\mathrm{\Delta }\propto (U_0/t)^2`$, with deviations for larger $`U_0`$. This quadratic behaviour has the same origin as that of the short-ranged case: the indication is that second order perturbation theory is qualitatively good even for $`r_s\sim 1`$.
To summarise the results for the mean Coulomb gap, we find that Koopmans’ approximation (11) makes an error in the mean charging energy which for small $`r_s`$ and $`L`$, and fixed disorder, scales like $`\mathrm{\Delta }ϵ\propto r_s^2L`$. There is also evidence for the larger sizes ($`A\gtrsim 50`$), far beyond those accessible by exact diagonalisation, that the size dependence vanishes: $`\mathrm{\Delta }ϵ\propto r_s^2`$. In contrast to the naive expectation, however, we find no sign of this error vanishing relative to $`\mathrm{\Delta }`$ in the thermodynamic limit. This is due to the fundamental difference between occupied and unoccupied SCHF levels already discussed in the short-ranged case.
We now consider the fluctuations in $`\mathrm{\Delta }_2`$. As an example of the interaction dependence of these fluctuations in the various approximation schemes, we plot the results for a fixed size in figure 18. It is seen that applying Koopmans’ theorem in the forms (11-13) results in considerably smaller fluctuations than the fully self-consistent calculation. To quantify this error, we plot $`\delta \mathrm{\Delta }_2^{k_2}/\delta \mathrm{\Delta }_2`$ in Fig.19, which shows that the relative error initially increases with interaction strength, but shows signs of saturating. The value of the saturation appears to increase towards unity as the system size is increased.
We now concentrate on the fluctuations of the fully self-consistent peak spacing $`\mathrm{\Delta }_2`$. For comparison with Ref. it is useful to plot $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ against the interaction strength for a range of sample sizes. This is done in figure 20. In the inset we plot $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }`$, which shows that the peak spacing fluctuations are not proportional to $`\mathrm{\Delta }`$ for $`r_sL\gtrsim 𝒪(1)`$ . From Fig. 20 it can be seen that the curves $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ do not saturate to a constant as suggested in Ref., although to see this clearly one has to consider larger sample sizes than are accessible by exact calculations. The curve $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ appears to take on the approximate form of a constant term plus a linear term for $`r_sL\gtrsim 𝒪(1)`$. The constant contribution identified by Sivan et al. is here, contrary to their claim, non-universal (i.e. it is disorder dependent).
In figure 21 we plot $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ against disorder for the $`10\times 11`$ lattice with a range of interaction strengths. At $`U_0=0`$ it is seen that for $`W\lesssim 6`$ the system obeys WD statistics quite well. For the sample size considered we find that in the regime $`0.5\lesssim U_0\lesssim 6.0`$ ($`0.25\lesssim r_s\lesssim 3.0`$) $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2\propto W`$, and at stronger interactions this dependence weakens. The intermediate dependence, $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2\propto W`$, is consistent with the dependence $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2\propto 1/\sqrt{g}`$ recently observed independently by Bonci and Berkovits for the Bunimovich stadium billiard. Analysis of Fig. 20 leads to the conclusion that the quadratic contribution (in $`U_0`$) to $`\delta \mathrm{\Delta }_2`$ is independent of disorder, which is consistent with Fig. 21.
To identify the system size scaling of the various contributions we plot $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }`$ against $`A`$ in figure 22. For $`U_0=0`$ the system obeys WD statistics and $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }`$ is independent of size. The regime over which the fluctuations are approximately proportional to the mean charging energy (the constant contribution to $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }_2`$ alluded to above, which corresponds to a $`\sqrt{A}`$ dependence in the figure) depends on the system size. As could be seen in Fig. 20, another term, quadratic in $`U_0`$, begins to dominate the fluctuations at larger $`U_0`$; this term increases more rapidly with the system size, and so dominates at lower $`U_0`$ in larger systems. Over the range of sizes considered, this term appears to scale like $`L^2`$. The cross-over in dominance therefore occurs at $`U_0\sim 1/L`$ for fixed disorder strength; clearly the quadratic term will dominate in large samples. The increase in $`\delta \mathrm{\Delta }_2/\mathrm{\Delta }`$ with system size appears to be an artifact of using the unscreened Coulomb interaction in the Hamiltonian.
Summarising the results presented in figures 20 to 22, and the above discussion, we find for $`r_sL\gtrsim 𝒪(1)`$ an approximate form: $`\delta \mathrm{\Delta }_2\simeq 0.52\mathrm{\Delta }+a\mathrm{\Delta }_2/\sqrt{g}+br_s^2`$, where $`a,b`$ are constants. One would normally expect the fluctuations to be linear in the interaction strength (i.e. $`b=0`$). A possible source for such a quadratic interaction dependence in the typical fluctuations is the development of correlations that grow like $`r_s^2`$ in products of eight wavefunctions. Elsewhere we present evidence for increased fluctuations in the ground state density in this regime, as compared to a non-interacting system. It is not yet clear whether this result is an artifact of the SCHF approximation (which has also very recently been observed in $`1d`$ systems using a similar approximation scheme), or a genuine physical effect.
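For orientation, the empirical form just quoted can be packaged as a function; the constants a and b are fit parameters that we do not determine here, so the defaults are placeholders:

```python
def delta2_fluctuation_model(Delta, Delta2, g, r_s, a=1.0, b=1.0):
    """Empirical Coulomb-case form for r_s*L >~ O(1):
    delta Delta_2 ~ 0.52*Delta + a*Delta_2/sqrt(g) + b*r_s**2."""
    return 0.52 * Delta + a * Delta2 / g**0.5 + b * r_s**2
```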
Finally, it is worth noting that since the exchange interaction is not correctly screened, errors in the SCHF scheme might be expected to diverge with respect to $`\mathrm{\Delta }`$ as the system size is increased. We can neither confirm nor counter this argument, but have verified that for $`r_s\lesssim 5`$ the fluctuations in the exchange contribution are smaller than those of the direct contribution.
## V Summary
We have investigated the addition spectra of disordered quantum dots employing an effective single particle approximation, both using a fully self consistent analysis, and by invoking Koopmans’ theorem. We were able to consider system sizes with up to 144 sites, and 37 particles, compared to the latest exact calculations on samples with 24 sites and 6 particles . The larger sample size also allows us to consider smaller values of $`g`$ than exact calculations whilst retaining an ergodic non-interacting limit, and therefore approaches the experimental parameters more closely. The inclusion of spin in a consistent manner is left for a future project.
Our SCHF results for the typical fluctuations of the peak spacings for particles possessing short-ranged bare interactions are entirely different from the results for long-ranged bare interactions. In the short-ranged case we find the same scaling as Ref. for very weak interactions ($`r_s\ll 1`$), but for $`r_s\gtrsim 1`$ deviations from this behaviour become significant and coincide with the appearance of interaction induced density modulations. We find no size dependence in the onset of these effects. We find that strong filling factor and geometry dependences arise due to these density fluctuations, and therefore do not expect that the disorder ensemble statistics can be mapped to statistics over the ensemble of filling factors: ergodicity is lost. We suggest that employing a short ranged bare interaction in a self-consistent scheme is not an appropriate model for the quantum dots of Refs. for which $`r_s>1`$, but may be a useful model for dot geometries sandwiched between very close metallic gates which provide a good external source of screening . In this respect we identify some experiments on the addition spectrum which possess a metallic source (heavily doped $`n^+`$ GaAs) and drain (Cr/Au), at separations of the order of the average inter-particle spacing in the dot .
In the Coulomb case the SCHF approximation to $`\mathrm{\Delta }_2`$ yields typical fluctuations that do not scale with $`\mathrm{\Delta }`$ for $`r_s\gtrsim 1/L`$. In Ref. it is claimed that $`\delta \mathrm{\Delta }_2`$ is universally proportional to $`\mathrm{\Delta }_2`$ for strong interactions (but still far from the accepted Wigner Crystal transition point). In contrast, we find, in addition to the small interaction independent contribution, a contribution to $`\delta \mathrm{\Delta }_2`$ that is proportional to $`\mathrm{\Delta }_2/\sqrt{g}`$ (i.e. non-universal), and a further contribution that scales like $`r_s^2`$, which is independent of disorder, and appears to be due to the development of charge density modulations . The latter is not detectable in the small systems examined numerically in , and so our results are not numerically inconsistent with exact calculations . Whilst we do not include spin, the observed decrease in the fluctuations with $`g`$ is consistent with the experimental indications that in cleaner samples the fluctuations are smaller.
We show that a direct application of Koopmans’ theorem overestimates $`\mathrm{\Delta }_2`$. This overestimate, a manifestation of the breakdown of Koopmans’ approximation, does not vanish on the scale of $`\mathrm{\Delta }`$ in the thermodynamic limit. The error seems to scale differently with sample size for sample areas above or below $`A\simeq 50`$. In the nearest neighbour case, with $`A\gtrsim 50`$, this error scales with $`\mathrm{\Delta }`$, but in smaller systems, accessible by exact methods, it is independent of system size. In the Coulomb case the error grows with the system size as $`L`$ for $`A\lesssim 50`$. For larger sizes the error appears to tend towards a $`1/L`$ scaling, i.e. in proportion with the charging energy, and therefore diverges with respect to the mean effective single-particle level spacing. This result for the Coulomb interaction case appears to be non-physical, and may be an artifact of the model considered. However, the result that Koopmans’ theorem is not recovered in the thermodynamic limit also occurs in the short-ranged interaction case. In both cases we find that initially this error grows in proportion to $`U_0^2`$, as expected since the lowest order contribution is second order, but for strong interactions it grows more slowly in $`U_0`$; the disorder dependence of this error is weak and non-monotonic. We identify the source of the error $`\mathrm{\Delta }ϵ`$ to be the fundamental difference between occupied and unoccupied states that is inherent in the SCHF approximation. We introduce two improved applications of Koopmans’ theorem, $`\mathrm{\Delta }_2^{k_2}`$, $`\mathrm{\Delta }_2^{k_3}`$, which provide a good approximation to $`\mathrm{\Delta }_2`$, but not to $`\delta \mathrm{\Delta }_2`$.
Whilst preparing the manuscript, two related works appeared that confirm some of the points discussed above .
In both cases, fluctuations in the ground state density develop with $`r_s`$ , and have significant effects on the addition spectrum statistics. It remains to be seen whether these density modulations are an artifact of the SCHF approximation (i.e. due to the neglect of dynamical correlations), or genuine signatures of the continuous transition to a Wigner-type solid in disordered samples with short- and long-ranged bare interactions.
###### Acknowledgements.
We acknowledge discussions with H. Orland and F. von Oppen in the early stages of this project, as well as with Ya. Blanter, S. Levit, A. Mirlin and D. Orgad. We acknowledge support from the EU TMR fellowship ERBFMICT961202, the German-Israeli Foundation, the U.S.-Israel Binational-Science Foundation and the Minerva Foundation. One of us (YG) would also like to acknowledge support from an EPSRC senior professorial fellowship, grant number GR/L67103. Much of the numerical work was performed using IDRIS facilities.
# Study of a Like-Sign Dilepton Search for Chargino-Neutralino Production at CDF
## I Introduction
Previous searches for chargino-neutralino production at the Tevatron have focused primarily on signatures with three charged leptons (trileptons) plus missing transverse energy ($`\mathrm{}E_T`$). In the Minimal Supersymmetric (SUSY) Standard Model, chargino-neutralino production occurs in proton-antiproton ($`\mathrm{p}\overline{\mathrm{p}}`$) collisions via a virtual W (s channel) or a virtual squark (t channel). In a representative minimal Supergravity (SUGRA) model (parameters: $`\mu <0,\mathrm{tan}\beta =2,\mathrm{A}_0=0,m_0=200\mathrm{GeV}/c^2`$, $`m_{1/2}=90–140\mathrm{GeV}/c^2`$ ), we expect three-body chargino and neutralino decays through virtual bosons and sleptons in a chargino mass region of $`80–130\mathrm{GeV}/c^2`$. Conserving R-parity, these decays produce a distinct signature: trileptons plus $`\mathrm{}E_T`$ from a neutrino and the lightest supersymmetric particle. We demonstrate that the sensitivity to this signature can be significantly increased by searching for events with two like-sign leptons. The Like-Sign Dilepton (LSD) search provides a strong rejection of Standard Model background through the like-sign requirement, and enhances the acceptance of the signal by requiring only two of the three leptons produced in the chargino-neutralino decay.
## II Like-Sign Dilepton Analysis
Signal and most background processes were generated using ISAJET 7.20 and the CDF detector Monte Carlo simulation. For the signal estimation, we used representative SUGRA parameters of $`\mu <0,\mathrm{tan}\beta =2,\mathrm{A}_0=0,m_0=200\mathrm{GeV}/c^2`$, and $`m_{1/2}=90–140\mathrm{GeV}/c^2`$. The relevant mass relations are $`\mathrm{M}_{\stackrel{~}{\chi }_1^\pm }\simeq \mathrm{M}_{\stackrel{~}{\chi }_2^0}\simeq 2\mathrm{M}_{\stackrel{~}{\chi }_1^0}`$ , with $`\mathrm{M}_{\stackrel{~}{\chi }_1^\pm }`$ between 80 and $`130\mathrm{GeV}/c^2`$. The sleptons and sneutrinos have masses between 200 and $`220\mathrm{GeV}/c^2`$, so we generate only three-body chargino and neutralino decays.
The LSD analysis begins with the selection of a pair of leptons ($`ee,\mu \mu ,e\mu `$) with the same charge. We then impose kinematic requirements on the selected events in order to remove Standard Model and other non-SUSY backgrounds. Our primary requirements are a minimum transverse momentum ($`P_T>11\mathrm{GeV}/c`$) for both leptons, and isolation, in which we remove events where at least one lepton has excess transverse energy greater than 2 GeV in a cone of radius 0.4 around the lepton. Monte Carlo simulations indicate that isolation removes heavy flavor ($`\mathrm{b}\overline{\mathrm{b}},\mathrm{c}\overline{\mathrm{c}}`$) backgrounds most effectively. As the like-sign cut requires us to select both leptons from a b or c decay in such an event, and as semi-leptonic b and c decays produce leptons associated with jets, neither of the selected leptons will be isolated. Isolation also reduces $`\mathrm{t}\overline{\mathrm{t}}`$ because at least one lepton from the like-sign pair will be selected from a b decay in such an event. The isolation cut, when applied to both like-sign leptons, reduces $`\mathrm{b}\overline{\mathrm{b}}`$ and $`\mathrm{c}\overline{\mathrm{c}}`$ to a negligible level.
We remove diboson events through a Z-mass rejection: an event is rejected if the invariant mass of a third opposite-sign, same-flavor lepton selected by the analysis and either of the LSDs lies between 80 and $`100\mathrm{GeV}/c^2`$, reducing the WZ and ZZ backgrounds. We impose no requirement on $`\mathrm{}E_T`$. This leaves WZ production as the dominant source of Standard Model background, as shown in Table 1.
An important source of non-SUSY background estimated from data is events with one true lepton, such as W $`\mathrm{}\nu `$ \+ jets, and a “fake” lepton, i.e. an isolated track misidentified as a lepton. This fake lepton, in combination with the true lepton from the W decay, can be selected as a signal event in this analysis. In order to estimate this background, we first look at Z $`\mathrm{e}^+\mathrm{e}^{}`$ \+ jets, which we assume provides a model for W + jets events. Removing the true leptons, we then measure the rate of underlying isolated tracks in the event. Next we search minimum bias data, in which we assume there are no true leptons, to find the probability of an isolated track to be misidentified as a lepton. The probability of misidentifying an isolated track as a lepton is 1.5$`\%`$ per track. We multiply this probability by the isolated track rate from the Z $`\mathrm{e}^+\mathrm{e}^{}`$ events, by the number of W + jets events expected , and by a factor of 0.5 for the like-sign requirement. This “fake” rate drops rapidly with an increasing minimum $`P_T`$ requirement. Optimization of the number of expected background events as a function of the $`P_T`$ requirement yields 0.3 events expected from W + jets in 100 $`\mathrm{pb}^1`$ of data.
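The fake-lepton estimate is a simple product of measured rates; a sketch of the arithmetic (the isolated-track rate per W + jets event is the quantity measured from the Z -> e+e- + jets sample and is left as a symbolic input here):

```python
def fake_lsd_background(n_wjets, iso_tracks_per_event,
                        p_misid=0.015, like_sign_factor=0.5):
    """Expected like-sign dilepton events from W + jets with one real
    and one fake lepton: N(W+jets) x (isolated tracks per event)
    x P(track misidentified as a lepton, 1.5% per track)
    x 1/2 for the like-sign charge requirement."""
    return n_wjets * iso_tracks_per_event * p_misid * like_sign_factor
```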
## III Results
Applying the analysis requirements and normalizing the luminosity to 100 $`\mathrm{pb}^1`$, the expected background is a total of 0.56 events, as shown in Table 1. Drell-Yan and W + jets are the most significant non-SUSY backgrounds; WZ production is the largest Standard Model background. Based on Monte Carlo studies, there is little overlap between the background events selected by the trilepton and LSD analyses. Therefore, the backgrounds are treated as independent. For the trilepton analysis, the expected background for the Run I luminosity of 107 $`\mathrm{pb}^1`$ is 1.2 events . The total expected background for the combined LSD and trilepton analyses is 1.8 events.
Figure 1 shows the efficiency versus chargino mass for the trilepton analysis, the LSD analysis, and the combined analyses, taking into account the signal overlap between the trilepton and LSD analyses. These efficiencies are calculated for all three analyses as number of selected events divided by total number of chargino-neutralino events where both sparticles decay leptonically, where a lepton can be $`e,\mu ,`$ or $`\tau `$. All $`\tau `$ decays are included in this calculation, even though the analyses are only sensitive to the leptonic decays.
Figure 2 shows the average expected limit normalized to 100 $`\mathrm{pb}^1`$ for the trilepton, LSD, and combined analyses. These limits were calculated from the efficiencies in Figure 1 and from the expected number of background events based on Monte Carlo.
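The expected limits in Fig. 2 follow from a counting experiment; below is a minimal sketch of such a calculation, a simple frequentist Poisson bound that ignores systematic uncertainties (which the full analysis would include), with illustrative inputs in the comment:

```python
from scipy.stats import poisson

def xsec_upper_limit(n_obs, bkg, eff, lumi_pb, cl=0.95):
    """Find by bisection the signal mean s with
    P(n <= n_obs | s + bkg) = 1 - cl, then convert to a
    cross-section limit in pb: sigma < s / (eff * lumi)."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        s = 0.5 * (lo + hi)
        if poisson.cdf(n_obs, s + bkg) > 1.0 - cl:
            lo = s
        else:
            hi = s
    return lo / (eff * lumi_pb)

# e.g. xsec_upper_limit(n_obs=0, bkg=0.56, eff=0.05, lumi_pb=100.0)
# returns roughly 0.5 pb (the efficiency value here is illustrative).
```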
## IV Conclusion
This study indicates that a fully realized Like-Sign Dilepton analysis will increase the sensitivity of searches for chargino-neutralino production with the CDF detector using existing data of $`\mathrm{p}\overline{\mathrm{p}}`$ collisions at $`\sqrt{s}=1.8`$ TeV. It has been shown that the sensitivity of the previously published trilepton analysis can be improved by combining it with this new LSD signature search. Significantly, the LSD search has fewer requirements than the trilepton analysis, e.g. the trilepton analysis requires $`\mathrm{}E_T>15\mathrm{GeV}`$ whereas the LSD analysis has no $`\mathrm{}E_T`$ requirement, making the Like-Sign Dilepton channel sensitive to a greater number of signatures.
This research was supported by a grant (number DE-FG03-91ER40662) from the U.S. Department of Energy.
# Effect of histamine on the electric activities of cerebellar Purkinje cell
## I Introduction
The cerebellum deserves more extensive study than was conventionally realized. Historically, the cerebellum was thought of mainly as a motor control organ, while recent research reveals that it has many other functions, and that it has more intimate connections to other parts of the brain . Histamine (HA), a neurotransmitter or neuromodulator in the brain, plays an important role in the functions and interactions of various parts of the brain, and also in the study of these functions and interactions. For instance, neuroanatomic research revealed the existence of the hypothalamus-cerebellum histaminergic path , and showed that the hypothalamus has great influence on cerebellar activities and hence plays an important role in coordinating the functions of the body and viscera. Besides, HA has a possible role in controlling the cerebellar circulation, and HA receptors are also found in the neurons of the cerebellar cortex .
The cerebellum is made up of the outer grey matter (cortex), the inner white matter, and three pairs of deep nuclei lying in the heart of the white matter. These nuclei are the fastigial nucleus FN, interposed nucleus IN, and dentate nucleus DN. The afferent fibres to the cerebellum come mainly from the vestibule, spinal cord, and the cerebral cortex. They form synaptic connections with the neurons in the cerebellar deep nuclei and cortex. The axons of Purkinje cells (PC) make up the efferent fibres of the cerebellar cortex; they are mainly projected into the deep nuclei, whose neurons then send out fibres forming the cerebellar output. A small number of PC axons are directly projected into the vestibular nucleus.
The cerebellar cortex can be divided into three layers: the (outermost) molecular layer, the PC layer, and the granular layer. It contains three kinds of afferent fibres (mossy fibre MF, climbing fibre CF, and monoaminergic fibre), and five kinds of neurons (PC, granular cell GR, basket cell BA, stellate cell ST, and Golgi cell GO). Hence we see that the cerebellar afferent fibres and the intermediate neurons, with PC acting as the core, constitute the basic neural circuit responsible for the sensory function of the cerebellar cortex, and the cerebellar cortex together with the deep nuclei undertake the various functions of the cerebellum.
In our laboratory there have been studies of HA’s effects on certain neurons of the cerebellum, such as the granular cells , while HA’s effect on the neurons of the cerebellar cortex has not yet been intensively studied. Considering that PC is the only efferent neuron of the cerebellar cortex, we investigate HA’s effect on the electric activities of PC, building on our accumulated experience in studying the influences of aminergic substances such as norepinephrine (NA) and serotonin (5-HT) on the spontaneous and induced discharge activities of the cerebellar PC, so as to learn more about the role of the aminergic afferent system in information processing in the cerebellar cortex.
## II Experimental material and method
We used 19 mature SD rats (200-250 g) for our experiments. A rat is anaesthetized by injecting betchloramines hydrochloride (4 mg/100 g) into the abdominal cavity; the cerebellum is taken out right after decapitation, washed with frozen artificial cerebrospinal fluid (ACSF, $`4^{}`$C), fixed onto the stage of a microtome (where the cerebellum is kept immersed in frozen ACSF), and a 400$`\mu \mathrm{m}`$ thick sagittal slice is cut from the vermis. The process of making the slice should be completed within 20 minutes. The slice is then moved into a recording trough, and the experiment begins after 15 minutes of incubation. The recording trough is continuously irrigated (3 ml/min) with ACSF (33$`\pm 0.2^{}\mathrm{C}`$), and is aerated with a gas mixture of 95%O<sub>2</sub> +5%CO<sub>2</sub>. The concentrations of the various components of ACSF are (mmol/l): NaCl 124, KCl 5, KH<sub>2</sub>PO<sub>4</sub> 1.2, MgSO<sub>4</sub> 1.3, CaCl<sub>2</sub> 2.4, NaHCO<sub>3</sub> 26, glucose 10. A fine glass electrode (filled with a coloured conducting solution) is placed at the PC layer of the X leaflet of the cerebellar cortex to make extracellular recordings of PC’s discharge activity. This is because the X leaflet received the least mechanical damage in making the slice; accordingly, the cerebellum slice should be so placed in the recording trough that the X leaflet is well bathed by the ACSF and the gas mixture.
We use the following criteria to single out PC discharge signals: (1) position of the electrode: PCs in the cerebellar cortex are aligned, namely, the cell bodies concentrate at one end and the dendrites extend to the other end, thereby forming the PC layer of the cerebellar cortex. We put the recording electrode at the outer side of the PC layer, near the molecular layer; therefore there is very little chance of catching a discharge signal of a granular cell. (2) discharge wave shape: the PC action potential has a large magnitude (0.700$`\pm `$0.143$`\mu \mathrm{V}`$) and a broad waveform ; moreover, the effective recording distance of PC discharge is 50$`\mu `$m, much longer than that of the granular cell (20-30$`\mu `$m) . (3) cell numbers: in the cerebellar cortex there are far fewer GO, ST, and BA cells than PCs, and hence the discharge signals from these cells may be ignored.
Because nerve cells in the brain continuously produce impulses, often at frequencies of tens of spikes per second, the physiological state of a neuron is usually characterized by its discharge frequency. Under environmental stimulation, the discharge frequencies of central neurons vary, and their potentials also shift. Hence we amplify the PC discharge signal and choose a relatively small time constant for the recording system, so as to differentiate the input signal and transform the slowly rising wave into a sharp peak; we then use a pulse discriminator to convert the peak signal into a TTL pulse, input it into the computer, and draw the post-stimulus histogram. Thus we can describe PC’s reaction to chemicals by the post-stimulus histogram of its discharge frequency.
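A sketch of how such a post-stimulus histogram can be built from the recorded TTL pulse times (with 1 s bins each count reads directly as a firing rate in Hz; names are illustrative):

```python
import numpy as np

def post_stimulus_histogram(spike_times_s, t_max_s, bin_s=1.0):
    """Count TTL pulses (spikes) per time bin."""
    edges = np.arange(0.0, t_max_s + bin_s, bin_s)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return edges[:-1], counts
```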
All the drugs are freshly prepared in ACSF, and are used to irrigate the cerebellum slice separately. In the experiments studying the receptor mechanism of HA’s effect on PC, the slice is continuously irrigated with the receptor antagonist for over 10 minutes before HA is applied.
## III Results
### A The effect of HA on PC’s electric activities
Because intermittent discharge is often due to the poor condition of a PC or to the indistinguishability of the cell’s discharge signal from other signals, we chose, in 29 cerebellum slices, 59 cells with relatively stable spontaneous discharge for our study. Their spontaneous discharge frequencies range from 4.48 to 78.32 Hz. We find that of the 59 cells, 83.1% (49/59) are affected by HA, while the other 16.9% (10/59) show no response. And of the 49 responsive ones, 87.8% (43/49) are excited, the other 12.2% (6/49) are inhibited. (See Fig. 1) Therefore HA’s effect on PC is mainly excitative. From Figs. 1B, 1C, we can see that there is an evident increase of the variation of the cell’s discharge frequency as the irrigating concentration of HA increases.
Another important observation is that whether or not a PC shows response to HA may have some connection with its spontaneous discharge frequency. (cf. Fig. 2) The spontaneous discharge frequencies of the 59 tested cells are 4.48-78.32 Hz, while that of the 49 cells which show response to HA is 21.34$`\pm `$12.69 Hz (M$`\pm `$SD), this is significantly lower than the average frequency (37.00$`\pm `$7.73 Hz) of the 10 cells unaffected by HA. However, whether a cell is excited or inhibited by HA seems to have nothing to do with its intrinsic frequency. The average frequencies of the 43 excited cells and of the 6 inhibited cells are 21.53$`\pm `$13.20 and 20.00$`\pm `$8.80 Hz, respectively, which are not much different (P$`>`$0.5, t test). This differs from how HA’s effect on the cerebellar granular cells is related to their intrinsic frequencies . We also find that among the 49 responsive cells, those who have higher intrinsic discharge frequencies are more sensitive to HA (there are 8 responsive cells with intrinsic frequencies higher than 30 Hz, and 7 of them show response to HA at the lowest HA concentration of less than 30$`\mu `$mol/l).
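The group comparison quoted above (P > 0.5, t test) is a standard two-sample t test on the intrinsic frequencies; a sketch with scipy:

```python
from scipy import stats

def compare_intrinsic_frequencies(group1_hz, group2_hz):
    """Two-sample t test, as used to compare the excited
    (21.53 +/- 13.20 Hz) and inhibited (20.00 +/- 8.80 Hz) cells;
    a large p-value means no significant difference in means."""
    t, p = stats.ttest_ind(group1_hz, group2_hz)
    return t, p
```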
### B The receptor conducting mechanism of HA’s excitative effect on PC
It is known that there are three sub-types of HA receptors in the brain: H1, H2, and H3; therefore it is necessary to study the receptor conducting mechanism of HA’s effect on PC. Since HA’s main effect on PC is excitative, we study the excitative effect. For this purpose we observe the influences of Triprolidine (a highly selective H1 receptor antagonist) and Ranitidine (a highly selective H2 receptor antagonist) on PC’s excitative reaction to HA, and find that low-concentration Triprolidine (0.5-1.0$`\mu `$mol/l) could weaken HA’s excitative effect on PC, while low-concentration Ranitidine (0.9-1.0$`\mu `$mol/l) could weaken or even block HA’s excitative effect on PC. (cf. Fig. 3).
The H3 receptor is a presynaptic autoreceptor; it adjusts the presynaptic release of HA . In this research we have not investigated its role in HA’s excitative effect on PC.
## IV Discussions
### A Data-taking system
In this experiment we used the data-taking chain oscilloscope-pulse discriminator-computer, and drew the post-stimulus histogram; we then evaluate PC’s reaction to chemicals by the post-stimulus histogram of the discharge frequency. The sampling bin width is 3, point=900, interval=1000; each data point represents the discharge count of a cell in one second. Since a cell discharges at very high frequency and its electric signals propagate at very high speed, it gives very rich information in one second. We know that the information carried by a neuron is represented by its frequency distribution over the action potential; therefore if we can obtain the frequency spectrum, and make combined analyses of the frequency and power spectra, we will get a greater amount of information. Besides, what we take in this experiment are extracellular records of a cell’s discharge activities, and PC is connected with various other cells in the cerebellum slice; the measurement in such a case may be disturbed by the environment, and the excitative or inhibitive reaction of PC may actually be a combined result of various influences, including those from the environment. These points are to be improved in future studies.
### B HA’s effect on PC
Histamine is an important neurotransmitter or neuromodulator, and acts as an inter-cell messenger. HA can hardly penetrate the blood-brain barrier, and hence must be produced by the histaminergic neurons. It can be released through depolarization in a calcium-dependent manner. The hypothalamus-cerebellum histaminergic fibres in the brain are projected from the hypothalamic tuberomammillary nucleus (HA pericarya) into the cerebellar cortex and the cerebellar deep nuclei (HA nerve endings) . The cerebellum-hypothalamus histaminergic system can bring about many functions via the inter-cell messenger HA. The hypothalamus-cerebellum histaminergic projecting fibres end at the cerebellar cortex as diffuse multi-layer fibres, and their nerve endings, mostly in the form of varicosities, diffusely adjust the functions of the surrounding neurons; in this they much resemble the serotoninergic and norepinephrinergic afferent fibres of the cerebellum .
The Purkinje cell is the only kind of efferent neuron of the cerebellar cortex; it is a deformation of the multi-pole cell, and has a highly developed dendritic tree. Our research shows that PC’s discharge activity is significantly affected by HA. From our observations in Sec. IIIA, and considering how the reaction of cerebellar granular cells to HA is related to their intrinsic discharge frequencies , we speculate that the cerebellar afferent fibres might be mainly responsible for adjusting the basic discharge levels of the cerebellar neurons: when the discharge frequency of a target cell lies in a certain region it loses sensitivity to HA; when the discharge frequency lies outside this region the cell reacts to HA, and thereby its frequency tends towards that region. Moreover, the high-frequency region is a low-sensitivity region to HA, while among the responsive cells, those with higher frequency are highly sensitive to HA. We therefore speculate that there exists a critical frequency: a cell with intrinsic frequency near this critical point will significantly change its state under only a small amount of HA, while as its frequency goes well above this point, it becomes stable. In connection with the influence of NA and 5-HT on the spontaneous and induced electric activities of cerebellar PC , we go further to speculate that histaminergic and other aminergic afferent systems might, through their coordination and/or antagonism, adjust the synaptic transmission efficiency of MF-PF (parallel fibre)-PC and CF-PC, or adjust PC’s sensitivity to signals from MF and CF, and thereby take part in the global sensorimotor processing of the cerebellar neuronal network. That many kinds of neurally active substances mutually act on the neuron is a common pattern of signal transmission in the nerve system; such mutual action may happen at the presynaptic, postsynaptic, or even postreceptor level. HA’s effect on the cerebellar PC is to adjust the excitability level of the neuron and the neuron’s sensitivity to input information from outside the cerebellum, and to adjust the sleep or awake state of the cortex. The aminergic afferent fibres have synaptic or non-synaptic chemical transmission effects on the cerebellar cortex; their non-synaptic transmission may not encode fast phase-like information, but finely adjusts the membrane potential and basic discharge level of the target neuron.
### C The receptor conducting mechanism of HA’s excitative effect on PC
Of the three sub-types of HA receptors in the brain (H1, H2, and H3), the H3 receptor is a presynaptic autoreceptor. The acting mechanism of the H1 receptor is, through some sub-type of G protein, to activate PLC and hydrolyze 4,5-PIP<sub>2</sub> into 1,4,5-IP3 and DG; these two secondary messengers go on to trigger various biological effects, such as stimulating the endoplasmic reticulum to release Ca<sup>2+</sup> and thereby trigger the ion channels . The H2 receptor brings about its effect mainly via the G protein-AC-cAMP pathway: HA combines with the H2 receptor of the cell membrane, the H2 receptor then changes conformation and triggers the G protein to release its $`\beta `$ and $`\gamma `$ subunits, and on the $`\alpha `$ subunit GDP is replaced by GTP; AC is then activated and catalyzes the transformation of ATP into cAMP; cAMP acts as a secondary messenger and triggers a series of reactions, such as activating PKA, which then triggers ion channels, or directly triggering ion channels. Because many highly efficient enzymes take part in these reactions, small signals are magnified step by step.
It is usually thought that H1 and H2 receptors conduct HA’s excitative and inhibitive effects on neurons, respectively . However, this is in contradiction with our report in Sec. IIIB that a low-concentration H1 receptor antagonist could weaken HA’s excitative effect on PC, while a low-concentration H2 receptor antagonist could weaken or even block HA’s excitative effect on PC. This makes us conjecture that both H1 and H2 receptors are involved in HA’s excitative effect on PC, with the H2 receptor as the main conductor. There have been reports that HA’s excitative effect on the granular cells of the cerebellar cortex and on the cells of the medial vestibular nucleus is jointly conducted by H1 and H2 receptors . Histochemical studies also reveal that in the cerebellar cortex of the rat the density of H2 receptors is higher than that of H1 receptors. All these partly support our conjecture. This may suggest that the H2-related signal transmission chain in the cell differs from one type of neuron to another, and such difference determines whether the H2 receptor conducts an excitative or inhibitive effect; the particular mechanism of signal transmission is still to be studied.
### D Mathematical simulation
To systematically study the biophysical or biochemical mechanisms of the reactions of cells, it is often helpful to make mathematical simulations of the reactions. Here we give a simulation for the relation between PC’s (excitative) reaction to HA (measured by its maximum variation of discharge frequency under the action of HA at a certain irrigating concentration) and its intrinsic discharge frequency $`f_0`$. To be concrete, we choose a fixed frequency variation $`A`$, and denote by $`u_0`$ the irrigating concentration of HA at which the maximum change of a cell’s frequency would be $`A`$, then study the relation between $`u_0`$ and $`f_0`$. For this purpose we first look at how the maximum frequency $`f`$ of a cell under the action of HA is related to HA’s irrigating concentration $`u`$.
As we noted in Sec. IIIA, PC’s reaction to HA has something to do with its intrinsic discharge frequency: high-frequency cells are insensitive to HA, while for the cells affected by HA, the higher-frequency ones are highly sensitive to HA, and at low intrinsic frequencies $`f`$ increases steadily with $`u`$. We therefore assume that $`f`$ is an $`S`$-like function of $`u`$:
$$f(u)=f_c/(1+\mathrm{exp}((uu_c))).$$
(1)
As $`u=0`$, $`f(0)`$ is just the intrinsic frequency $`f_0`$:
$$f_0=f(0)=f_c/(1+\mathrm{exp}(u_c))\text{.}$$
(2)
According to our above explanations, we write
$$f(u_0)=f_0+A=f_c/(1+\mathrm{exp}((u_0u_c))).$$
(3)
From Eqs. (2,3) we can derive the $`u_0`$-$`f_0`$ relation:
$$u_0=\mathrm{ln}(f_c/f_01)\mathrm{ln}(f_c/(f_0+A)1)\text{.}$$
(4)
Here the physiological meaning of $`f_c`$ is the maximum discharge frequency of the cell, and $`A`$ is an arbitrarily chosen minimum frequency variation that can be observed in the experiment. It is straightforward to check that $`u_0`$ has a minimum value at an intermediate $`f_0`$, just as the experimental results show. Eq. (4) is much like the relation in the Hopfield model. But further study of the connection to the Hopfield model would require more detailed knowledge about the relation between $`f`$ and $`u`$, which is not possible in the present experiment, for PC cannot live through a long enough time to allow tests for various HA concentrations; hence the simulation here is merely a rough one.
### E Characteristic frequency in the cerebellum?
Eqs. (1,4) show a pattern very similar the characteristic frequency of the cerebrum (the $`\gamma `$ region of around 40 Hz): there exist a particular frequency at which the target cell is highly sensitive to HA, and well above this frequency the cell loses sensitivity to HA. For PC this insensitive region is 25.00-46.24 Hz, with the average 37.00$`\pm `$7.72, which is close to the $`\gamma `$ region. These suggest that cerebellum might also exhibit some sort of characteristic frequency to external stimulations. Further studies of such characteristic frequency would necessarily take into account of the interactions of various cell, such as in the HR model ; these we hope to accomplish in the future.
## V Acknowledgements
The author owes special thanks to Prof. Jiang-Jun Wang and Prof. Bing Zhu for valuable instructions, to Le Tian and Jie Ma for collaborations in this experiment, and to Feng Dong, Qin Xi, and Guo-Ning Hu for kind help and suggestions.
|
no-problem/9902/gr-qc9902039.html
|
ar5iv
|
text
|
# Untitled Document
Planetary Perturbation with cosmological Constant
Ishwaree P. Neupane <sup>*</sup><sup>*</sup>*Electronic mail: ishwaree@phya.snu.ac.kr
Department of Physics,Seoul National University, Seoul 151-742, Korea On leave from Department of Physics, Tribhuvan University, Kirtipur, Kathmandu, Nepal
## Abstract
A contribution of quantum vacuum to the energy momentum tensor is inevitably experienced in the present universe. One requires the presence of non-zero cosmological constant ($`\mathrm{\Lambda }`$) to make the various observations consistent. A case of $`\mathrm{\Lambda }`$ in the Schwarzschild de Sitter space-time shows that precession of perihelion orbit provides a sensative solar test for non-zero $`\mathrm{\Lambda }`$. Application of the relations involving $`\mathrm{\Lambda }`$ to the planetery perturbation indicates the values near to the present bound on $`\mathrm{\Lambda }`$. Also suggested are some relations in vacuum dominated flat universe with a positive $`\mathrm{\Lambda }`$.
The cosmological constant\[1-3\] has been an outstanding problem in various microscopic theories of particle physics and gravity for the past several decades, ever since Einstein introduced it in the field equations to avoid an expanding universe. The standard model of cosmology based on the ideas arising from particle physics involves the following trilogy of ideas: (i) $`\mathrm{\Omega }`$=1, (ii) $`\mathrm{\Lambda }=0`$ and (iii) $`\mathrm{\Omega }_{matter}0.9`$. But, in reference to the large scale structure measurements, the density of the matter insufficient to result in a flat universe($`\mathrm{\Omega }=1`$) suggests a non-zero $`\mathrm{\Lambda }`$. Now one would prefer either (1) $`\mathrm{\Omega }1`$, $`\mathrm{\Lambda }=0`$, $`\mathrm{\Omega }_{matter}0.20.4`$ or (2) $`\mathrm{\Omega }1`$, $`\mathrm{\Lambda }0`$, $`\mathrm{\Omega }_{matter}0.20.4`$. A small non-vanishing $`\mathrm{\Lambda }`$ is also required to make the two independent observations: the Hubble constant,$`H_o`$ which explains the expansion rate of the present universe, and the present age of the universe $`(t_o)`$ consistent each other. This has forced us critically re-examine the simplest and most appealing cosmological model- a flat universe with $`\mathrm{\Lambda }=0`$. A flat universe with $`\mathrm{\Omega }_m0.3`$ and $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$ is most preferable and $`\mathrm{\Lambda }=0`$ flat universe is almost ruled out. Indeed, $`\mathrm{\Lambda }`$ follows from the dynamical evolution of our universe when one interprets it as the vacuum energy density of the quantized fields. The large scale structure measurements of the present universe would imply $`\mathrm{\Lambda }`$ to have an incredibly small value $`10^{47}(GeV)^4`$\[2-5\], while the quantum field theories in curved spacetime predict quite different values of vacuum energy densityin units $`8\pi G=c=1`$, we denote $`\rho _v`$ by $`\mathrm{\Lambda }`$,($`\rho _v`$) in the early universe. In particular, $`\mathrm{\Lambda }_{GUT}10^{64}(Gev)^4`$, $`\mathrm{\Lambda }_{EW}10^8(GeV)^4`$ and $`\mathrm{\Lambda }_{QCD}10^4(GeV)^4`$. This is the source of cosmological constant problem.
In this letter we consider a case of non zero $`\mathrm{\Lambda }`$ in Schwarzschild de Sitter space-time and study its effect on the geodetic motions of the planets in vacuum dominated universe.
To the present limit $`\mathrm{\Lambda }_o10^{47}(GeV)^4`$ arising from the large scale structure measurements, there might exist its correspondence to the limit $`\mathrm{\Lambda }_o10^{120}M_{pl}^2`$ in natural units, which is $`10^{55}cm^2`$; a value consistent with that of particle data group <sup>§</sup><sup>§</sup>§Review of Particle Physics, Eur.Phys.J.C3 (1998) 70i.e., $`0|\mathrm{\Lambda }_o|2.2\times 10^{56}cm^2`$.
The vacuum expectation value of energy momentum tensor of quantum fields in de Sitter space takes the form $`<T_{\mu \nu }^{vac}>=\rho _{vac}g_{\mu \nu }`$. So, a model universe with an additional term $`\rho g_{\mu \nu }`$ in the Einstein field equation is highly motivated and $`\mathrm{\Lambda }`$ corresponding to the vacuum energy density enters in the form
$`R_{\mu \nu }{\displaystyle \frac{1}{2}}g_{\mu \nu }R=G_{\mu \nu }=8\pi GT_{\mu \nu }\mathrm{\Lambda }g_{\mu \nu }`$ (1)
A generally spherically symmetric metric is described by the form
$`d\tau ^2=e^{2\alpha (r,t)}dt^2e^{2\beta (r,t)}dr^2r^2(d\theta ^2+\mathrm{sin}^2d\phi ^2)`$ (2)
where $`\alpha `$ and $`\beta `$ are some functions of $`(r,t)`$. Corresponding to the field equations $`G_{\mu \nu }=\mathrm{\Lambda }g_{\mu \nu }`$, the generalized and spherically symmetric vacuum solution for the above metric by allowing non-zero cosmological constant (in units $`c=1`$) is given by
$`d\tau ^2=B(r)dt^2A(r)dr^2r^2(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2)`$ (3)
where $`B(r)=A(r)^1=12GM/r\mathrm{\Lambda }r^2/3`$. This metric is considered as the Schwarzschild de-Sitter metric and hence the space determined by it is not asymtotically flat as the case in Schwarzschild metric, for $`\mathrm{\Lambda }`$ related to the vacuum energy density implies a pre-existing curvature. It is easy to see that the Lagrangian and Hamiltonian for this metric are equal and hence no potential energy is involved in the problem. By rescaling $`\tau `$ and setting $`\theta =\pi /2`$ ( i.e., an equatorial plane), we get
$`E^2B(r)^1A(r)\dot{r}^2{\displaystyle \frac{J^2}{r^2}}==+1\text{or}0`$ (4)
for the time like or null geodesics respectively; where $`E=(12MG/r\mathrm{\Lambda }r^2/3)\dot{t}`$ and $`J=r^2\dot{\phi }`$ are the constants related to the energy and momentum of the test particle respectively. Here dot represents differentiation w.r.t. the affine parameter, $`\tau `$. For time like geodesic, considering $`r`$ as a function of $`\phi `$, we get
$`{\displaystyle \frac{A(r)}{r^4}}\left({\displaystyle \frac{dr}{d\phi }}\right)^2+\left({\displaystyle \frac{1}{J^2}}+{\displaystyle \frac{1}{r^2}}\right)={\displaystyle \frac{E^2}{B(r)J^2}}`$ (5)
The solution of this orbit equation is determined by a quadrature
$`\phi =\pm {\displaystyle A(r)^{1/2}r^2\left[\frac{E^2}{B(r)J^2}\frac{1}{J^2}\frac{1}{r^2}\right]^{1/2}𝑑r}`$ (6)
Defining $`r_{}`$ and $`r_+`$ as perihelion and aphelion of a closed elliptic orbit, the angular orbit precession in each revolution is $`\mathrm{\Delta }\phi =2|\phi (r_+)\phi (r_{})|2\pi `$. Following the treatment given by Weinberg with a slightly different technique to evaluate some constants, a solution valid for slightly eccentric orbit gives a precession
$`\mathrm{\Delta }\phi ={\displaystyle \frac{3\pi r_s}{L}}+{\displaystyle \frac{2\pi \mathrm{\Lambda }L^3}{r_s}}+{\displaystyle \frac{2\pi \mathrm{\Lambda }Lr_s}{3}}+\mathrm{}`$ (7)
where $`L`$ is the semilatus rectum and $`r_s=2GM/c^2`$ the Schwarzschild radius.The first term is the same as general relativity prediction for the precession of perihelion orbit obtained without introducing cosmological constant in the metric(3) and gives the precession for inner planets very much consistent with the experimental observation. Evidently, the extra precession factor obtained by introducing a positive cosmological constant is therefore
$`\mathrm{\Delta }\phi _\mathrm{\Lambda }={\displaystyle \frac{2\pi \mathrm{\Lambda }L}{r_s}}\left(L^2+{\displaystyle \frac{r_s^2}{3}}+\mathrm{}\right)`$ (8)
For planetary system, the contribution from second and higher terms is negligible compared to the first term.One can see that the contribution of the second term in eqn(8) would be significant only for very high eccentric orbits and large Schwarzschild radius. So for very massive binary star systems such as Great Attractor(GA) and Virgo Cluster with highly eccentric orbits, the value of cosmological constant may show up. So the main effect of the term involving $`\mathrm{\Lambda }`$ in eqn(7) is to cause an extra additional advance of the perihelion of the orbit by an amount
$`\mathrm{\Delta }\phi _\mathrm{\Lambda }{\displaystyle \frac{2\pi \mathrm{\Lambda }L^3}{r_s}}={\displaystyle \frac{\pi \mathrm{\Lambda }c^2a^3(1e^2)^3}{GM}}`$ (9)
where $`a`$ is semimajor axis and $`e`$ eccentricity of the orbit. In planetary motion the accuracy of precession of the orbit degrades rapidly as we move away from the sun mainly by two reasons: for smaller eccentricity the observation of the perihelia becomes more uncertain and also as $`L`$ increases the precession per revolution decreases.
In the case of Mercury, the extra precession factor $`\mathrm{\Delta }\phi _\mathrm{\Lambda }0.1^{\prime \prime }`$ per century (i.e.,the maximum uncertainty in the precession of the perihelion) would imply $`\mathrm{\Lambda }3.2\times 10^{43}cm^2`$. With the value of $`|\mathrm{\Lambda }|10^{56}cm^2`$, for Mercury, one gets $`\mathrm{\Delta }\varphi _\mathrm{\Lambda }3.6\times 10^{23}`$ arc second per revolution; which is unmeasurably small and very far from the present detectable limit of VLBI i.e.$`3\times 10^4`$arc second. With $`\mathrm{\Lambda }10^{56}cm^2`$, for Pluto with $`L=5.5\times 10^{14}`$ cm, one gets $`\mathrm{\Delta }\phi _\mathrm{\Lambda }=3.5\times 10^{17}`$ arc second per revolution; which is also unmeasurably small. For Pluto with $`\mathrm{\Delta }\varphi _\mathrm{\Lambda }0.1^{\prime \prime }`$ per revolution, one gets $`\mathrm{\Lambda }3.3\times 10^{49}cm^2`$, which is near to the present bound on cosmological constant i.e., $`0|\mathrm{\Lambda }_o|2.2\times 10^{56}cm^2`$.
For the case of bound orbits, a relation between the cosmological constant and the minimum orbit radius can be expressed by $`r_{min}=(3MG/\mathrm{\Lambda }c^2)^{1/3}`$. This suggests that the effect of $`\mathrm{\Lambda }`$ can be expected to be significant only at large radii. Also from eqn(9) it seems more reasonable to argue that more distant planets would give better limit to cosmological constant. For circular orbits one can generalise the relation further, i.e. $`\mathrm{\Delta }\varphi _\mathrm{\Lambda }=\pi \mathrm{\Lambda }c^2a^3/GM`$. If we define $`\rho `$ as the average density within a sphere of radius $`a`$ and $`\rho _{vac}=\mathrm{\Lambda }c^2/8\pi G`$ as the vacuum density equivalent of the cosmological constant, one gets $`\mathrm{\Delta }\phi _\mathrm{\Lambda }=6\pi (\rho _{vac}/\rho )`$ radians/revolution. If we evaluate the value at $`rL`$, we get
$`\mathrm{\Delta }\phi _\mathrm{\Lambda }={\displaystyle \frac{\pi \mathrm{\Lambda }c^2r^3}{GM}}={\displaystyle \frac{3P^2H_o^2\mathrm{\Omega }_{vac}}{4\pi }}`$ (10)
where $`P=(2\pi r^3/GM)^{1/2}`$ is the period of revolution, $`H_o`$ the present value of Hubble constant and $`\mathrm{\Omega }_{vac}=\rho _{vac}/\rho _c`$ the vacuum density parameter with $`\rho _c=3H_o^2/8\pi G`$ and $`\rho _{vac}=\mathrm{\Lambda }c^2/8\pi G`$.
The microscopic theories of particle physics and gravity suggest a large contribution of vacuum energy to energy momentum tensor. However, all cosmological observations to date show that $`\mathrm{\Lambda }`$ is very small and positive. It is logical to argue that an extremely small value $`\mathrm{\Lambda }`$ makes us unable to measure the extra precession with the required precision. It is here worthnoting that $`\mathrm{\Lambda }`$ must be quite larger than $`10^{50}cm^2`$ to observe its effects possibly with an advance of additional precession of perihelion orbit in the inner planets. It judges more sound to argue that only the tests based on large scale structure measurements of the universe can put a strong limit on $`\mathrm{\Lambda }`$. Nevertheless, the precession in the perihelia of the planets provides a sensitive solar test for a cosmological constant. Whether a non-zero Cosmological constant exists is one of the hot issues in various theories of particle physics and gravity. But it is certain that the planetary perturbations cannot be used to limit the present value of cosmological constant.
References
W.A.Hiscock, Phys.Lett.166B, Vol.3 (1986) 285
M.Ozer and M.O.Taha, Phys.Lett. 171B (1986) 363
S.Weinberg, Rev.Mod.Phy. 61(1989)1
L.M.Krauss and M.S.Turner, J.Gen.Rel.Grav., 27 (1995) 1137
Anup Singh, Phys.Rev. D52 (1995) 6700
J.Lopez and D.Nanopoulos, Mod.Phys.Lett. A11 (1996) 1
L.Ford, Phys.Rev. D28(1983) 710
G.Gibbons and S.Hawking, Phys.Rev. D15 (1977) 2738
S.Weinberg, Gravitation and Cosmology, Wiley Publ.NewYork (1972)
|
no-problem/9902/astro-ph9902220.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
Strange Quark Matter (SQM), composed of u, d and s quarks, may probably be the ultimate ground state of matter (Farhi & Jaffe 1984, Witten 1984). If meta-stable at zero pressure it might exist in the central region of compact objects stabilized by the high pressure (Glendenning, Kettner & Weber 1995). If however, SQM is absolutely stable at zero pressure the existence of Strange Stars is a possibility. The stable range of mass and radius of strange stars are similar to neutron stars, hence the claim that some or all the pulsars are strange stars (Alcock, Farhi & Olinto 1986). In this work, we review the strange pulsar hypothesis from the point of view of the evolution of the magnetic field and investigate : a) the maximum field strength sustainable by strange stars, b) the possible current configurations supporting the field, and c) the evolution of such fields in isolated as well as accreting strange stars. We compare our results with the known observational facts concerning the nature of the pulsar magnetic fields.
## 2 Maximum Field Strength
For fields larger than $`4.4\times 10^{13}`$ G in the core of a proto-neutron star the deconfinement transition is strongly suppressed preventing the formation of a strange star in an SNE with a higher field (Ghosh & Chakrabarty 1998). On the other hand, for a strange star forming via the deconfinement conversion of an accreting neutron star (Olinto 1987) the maximum field would be that applicable for the neutron stars ($`10^{15}`$ G). For a strange star with a hadronic crust, though, the maximum field is determined by the shear stress of the crust ($`\sigma _{\mathrm{shear}}\stackrel{>}{_{}}P_{\mathrm{mag}}`$) and is found to be $`5\times 10^{13}\text{G}`$. Though sufficient for ordinary pulsars - it falls short of the field strength of exotic objects like magnetars.
## 3 Field Configuration
According to the recent models, strange stars have two distinct regions - a quark core and a thin crystalline nuclear crust separated by a dipole layer of electrons (Glendenning & Weber 1992). Since no currents can flow across a dipole layer the currents, supporting a magnetic field, either reside entirely within the quark core or are completely confined to the nuclear crust. The evolution of the magnetic field is governed by the equation :
$$\frac{\stackrel{}{B}}{t}=\frac{c^2}{4\pi }\times (\frac{1}{\sigma }\times \stackrel{}{}\times \stackrel{}{B}),$$
(1)
where $`\sigma `$ is the electrical conductivity of the medium. The ohmic dissipation time-scale is, $`\tau _{\mathrm{ohmic}}\frac{4\pi }{c^2}\sigma L^2`$, where $`L`$ is the system dimension. If the currents are in the crust then for allowable range of crustal parameters we find that the dissipation time-scale is $`\stackrel{<}{_{}}3\times 10^6\text{years}`$. This is much smaller than the typical time-scale in which the field remains stable in millisecond ($`10^9`$ yrs) as well as in isolated pulsars ($`10^8`$ yrs). Hence, a crustal field is not long-lasting enough for the strange star to effectively function as a pulsar.
## 4 field evolution
Assuming the quarks to be massless in the u,d,s plasma in the core, the conductivity is (Ha͡ensel & Jerzak 1989) $`\sigma 6\times 10^{25}(\frac{\alpha _c}{0.1})^{3/2}T_{10}^2\frac{n_B}{n_{B0}}s^1`$, where $`\alpha _c`$ is the QCD coupling constant, $`T_{10}`$ is temperature of the isothermal core in units of $`10^{10}`$ K, $`n_B`$ is the number density of baryons and $`n_{B0}`$ is the nuclear baryon number density. The main effect of accretion is to raise the temperature of the star. Assuming the strange star thermal evolution to be at least as fast as or faster than that of the neutron stars, the temperature of an accreting strange star should be equal to or less than that of an accreting neutron star. This gives a lower limit to the dissipation time-scale. Using $`T10^9`$ K corresponding to an accretion rate of $`10^9`$ M yr<sup>-1</sup>(Miralda-Escude et al. 1990) we get $`\tau _{\mathrm{ohmic}}10^{13}`$ years. This completely rules out the possibility of any field reduction even in an accreting strange star.
Therefore, even if strange pulsars exist their magnetic field would not decay in an accreting system, contrary to the expectation from extensive pulsar observation. Hence, there is as yet no compelling arguments in favour of strange pulsars vis-a-vis neutron stars to function as pulsars.
## Acknowledgment
Discussions with Dipankar Bhattacharya, Bhaskar Datta, Jes Madsen and Fridolin Weber have been very helpful.
|
no-problem/9902/nucl-th9902005.html
|
ar5iv
|
text
|
# Halflife of 56Ni in cosmic rays
## ACKNOWLEDGMENTS
This work was supported in part by the Danish Research Council.
|
no-problem/9902/cond-mat9902079.html
|
ar5iv
|
text
|
# Strong Enhancement of Superconducting Correlation in a Two-Component Fermion Gas
\[
## Abstract
We study high-density electron-hole (e-h) systems with the electron density slightly higher than the hole density. We find a new superconducting phase, in which the excess electrons form Cooper pairs moving in an e-h BCS phase. The coexistence of the e-h and e-e orders is possible because e and h have opposite charges, whereas analogous phases are impossible in the case of two fermion species that have the same charge or are neutral. Most strikingly, the e-h order enhances the superconducting e-e order parameter by more than one order of magnitude as compared with that given by the BCS formula, for the same value of the effective e-e attractive potential $`\lambda ^{ee}`$. This new phase should be observable in an e-h system created by photoexcitation in doped semiconductors at low temperatures.
\]
It is expected that electron-hole (e-h) systems created through photoexcitation of semiconductors exhibit various phases depending on the material parameters and the densities $`N^e`$ and $`N^h`$ of electrons and holes, respectively . Some of the interesting phases have been successfully observed, including the e-h liquid and the Bose–Einstein condensation of excitons . These successes are largely due to the careful control of both the material parameters (by choosing a semiconductor) and the e-h density (through the excitation intensity). By further increasing the e-h density, one should observe an “e-h BCS phase” at low temperatures, which is characterized by a nonzero e-h order parameter . Although these phases are realized when $`N^e=N^h`$, one can also use doped semiconductors. In this case, one can control $`N^e`$ and $`N^h`$ independently; $`N^eN^hN^x`$ through the donor (or acceptor) density $`N^d`$, and $`N^e+N^h`$ through the excitation intensity . The additional parameter $`N^x`$ may lead to a new quantum phase(s) at low temperatures. When $`0<|N^x|N^eN^h`$, in particular, we may expect a new superconducting phase, which we call a multiply ordered superconducting (MS) phase, in which doped electrons form Cooper pairs moving in the e-h BCS state. On the other hand, most of the previous studies on superconductivity in the doped e-h BCS state (or related systems such as doped excitonic insulators and doped exciton systems) treated either the case where the doped electrons are located in a third band that is different from the bands of e-h pairs , or the case $`N^hN^e`$ (or equivalently, $`N^eN^h`$. In these theories, the e-h BCS condensates (or excitons) work as polarizable media, which induce a large attractive interaction between electrons, whereas the superconducting order parameter $`\mathrm{\Delta }^{ee}`$ as well as the superconducting transition temperature $`T_\mathrm{c}^{ee}`$ is essentially given by substituting $`\lambda ^{ee}`$ (effective dimensionless $`e`$-$`e`$ attraction) and a cutoff parameter into the BCS formula or the McMillan formula . However, we expect the new MS phase when excess electrons (or holes) are doped in the $`e`$ ($`h`$) band where electrons (holes) forming the e-h BCS state are located, and when the e-h pairing is dominant, i.e., when $`0<|N^x|N^eN^h`$ and $`|\mathrm{\Delta }^{ee}|,|\mathrm{\Delta }^{hh}||\mathrm{\Delta }^{eh}|`$. Here, $`\mathrm{\Delta }^{ee}`$ and $`\mathrm{\Delta }^{hh}`$ are the superconducting $`e`$-$`e`$ and $`h`$-$`h`$ order parameters, respectively, and $`\mathrm{\Delta }^{eh}`$ denotes the $`e`$-$`h`$ order parameter.
In this Letter, we explore the possibility of such a new MS phase, by studying the phase diagram of high-density $`e`$-$`h`$ systems as a function of $`N^x`$. It is shown that the MS phase, which has the order parameters $`0<(|\mathrm{\Delta }^{ee}|,|\mathrm{\Delta }^{hh}|)|\mathrm{\Delta }^{eh}|`$, can be realized when $`0<|N^x|N^eN^h`$. Most strikingly, $`|\mathrm{\Delta }^{ee}|`$ is enhanced in the MS phase by more than one order of magnitude in comparison with the value given by the BCS or McMillan formula, for the same value of $`\lambda ^{ee}`$.
We assume a three-dimensional, high-density, isotropic $`e`$-$`h`$ gas at zero temperature, so that BCS-like mean field approximations work well . We decompose the interaction Hamiltonian $`H_{\mathrm{int}}`$ into the short- and long-range parts, $`H_{\mathrm{int}}^{\mathrm{SR}}`$ and $`H_{\mathrm{int}}^{\mathrm{LR}}`$, respectively. The latter is related to long-range charge fluctuations, and will be discussed later. For the moment, we consider a charge-neutral region of unit volume, in which $`H_{\mathrm{int}}^{\mathrm{LR}}`$ is irrelevant. In $`H_{\mathrm{int}}^{\mathrm{SR}}`$, the e-e, h-h and e-h interaction matrix elements are renormalized to effective values, $`U^{ee}`$, $`U^{hh}`$ and $`U^{eh}`$, respectively, due to the screening effect and negative (attractive) contributions from various bosonic excitations, such as lattice phonons and excitons. Since the bare value of $`U^{eh}`$ is negative, the bosonic excitations make it more negative, whereas (basically positive) $`U^{ee}`$ and $`U^{hh}`$ are reduced. Hence, $`|U^{eh}|`$ tends to be larger than $`|U^{ee}|,|U^{hh}|`$. It was suggested that $`U^{ee}`$ (and $`U^{hh}`$) can be negative for some parameter values . In the e-h BCS phase, there is an additional negative contribution from the Goldstone mode of the e-h order parameter. Since there is no reliable method of estimating $`U^{ee}`$ and $`U^{hh}`$, we here treat them as given parameters, assuming that $`U^{ee}`$, $`U^{hh}<0`$ and $`|U^{eh}|>|U^{ee}|,|U^{hh}|`$, and explore the phase diagram as a function of them and $`N^x`$ ($`0`$). We find that the minimal form of $`H_{\mathrm{int}}^{\mathrm{SR}}`$ for the MS state is
$`H_{\mathrm{int}}^{\mathrm{SR}}`$ $`=`$ $`{\displaystyle \underset{𝐤𝐤^{}}{}}U_{\mathrm{𝐤𝐤}^{}}^{eh}\left(e_𝐤^{}h_𝐤^{}h_𝐤^{}e_𝐤^{}+e_𝐤^{}h_𝐤^{}h_𝐤^{}e_𝐤^{}\right)`$ (2)
$`+{\displaystyle \underset{𝐤𝐤^{}}{}}U_{\mathrm{𝐤𝐤}^{}}^{ee}e_𝐤^{}e_𝐤^{}e_𝐤^{}e_𝐤^{},`$
where $`e_{𝐤\sigma }`$ ($`h_{𝐤\sigma }`$) denotes the annihilation operator of $`e`$ ($`h`$) with momentum $`𝐤`$ and spin $`\sigma `$, and
$`\{\begin{array}{cc}U_{\mathrm{𝐤𝐤}^{}}^{eh}=V^{eh},U_{\mathrm{𝐤𝐤}^{}}^{ee}=V^{ee}\hfill & \text{if }|\xi _𝐤|,|\xi _𝐤^{}|<\omega _\mathrm{c}\text{,}\hfill \\ U_{\mathrm{𝐤𝐤}^{}}^{eh}=U_{\mathrm{𝐤𝐤}^{}}^{ee}=0\hfill & \text{otherwise.}\hfill \end{array}`$ (3)
Here, $`V^{eh},V^{ee}>0`$ are constants, and $`\omega _\mathrm{c}`$ ($`\mu `$) is a cutoff of the interactions . We assume that $`V^{eh}>V^{ee}`$ so that $`|\mathrm{\Delta }^{eh}||\mathrm{\Delta }^{ee}|`$. Although we do not include an h-h interaction $`U^{hh}`$ in $``$, we have confirmed that $`U^{hh}`$ can only modify the magnitude of the pair correlations by a factor of order unity, as long as $`|U^{hh}||U^{ee}|`$. We do not include terms of the form $`eehh+\text{h.c.}`$ (such terms are important in the case of two-band superconductors ), because in $`e`$-$`h`$ systems intermediate processes involving such terms cost a large amount of energy of the order of the energy gap $`E_\mathrm{g}`$ ($`|U^{eh}|`$ ). The total Hamiltonian without $`H_{\mathrm{int}}^{\mathrm{LR}}`$ is denoted by $`H`$, and we put
$``$ $``$ $`H\mu ^e\widehat{N}^e\mu ^h\widehat{N}^h`$ (4)
$`=`$ $`{\displaystyle \underset{𝐤\sigma }{}}\left[(\xi _𝐤\nu )e_{𝐤\sigma }^{}e_{𝐤\sigma }+(\xi _𝐤+\nu )h_{𝐤\sigma }^{}h_{𝐤\sigma }\right]+H_{\mathrm{SR}}.`$ (5)
Here, $`\mu ^eE_\mathrm{g}/2+\mu +\nu `$ and $`\mu ^hE_\mathrm{g}/2+\mu \nu `$ ($`\nu 0`$) are the chemical potentials of e and h, respectively, which are assumed to have the same energy dispersion $`𝐤^2/(2m)+E_\mathrm{g}/2`$. Moreover, $`\widehat{N}^e_{𝐤\sigma }e_{𝐤\sigma }^{}e_{𝐤\sigma }`$, $`\widehat{N}^h_{𝐤\sigma }h_{𝐤\sigma }^{}h_{𝐤\sigma }`$, $`\xi _𝐤𝐤^2/(2m)\mu `$, and we take $`\mathrm{}=1`$.
We apply the mean field approximation that assumes the e-h correlation $`\delta _𝐤^{}^{eh}h_𝐤^{}e_𝐤^{}`$ and the e-e correlation $`\delta _𝐤^{}^{ee}e_𝐤^{}e_𝐤^{}`$. We assume that $`h_𝐤e_𝐤=h_𝐤e_𝐤`$. Using eq. (3), we find that the order parameters, defined by $`\mathrm{\Delta }_𝐤^{eh}_𝐤^{}U_{\mathrm{𝐤𝐤}^{}}^{eh}\delta _𝐤^{}^{eh}`$ and $`\mathrm{\Delta }_𝐤^{ee}_𝐤^{}U_{\mathrm{𝐤𝐤}^{}}^{ee}\delta _𝐤^{}^{ee}`$, take simple forms: $`\mathrm{\Delta }_𝐤^{eh}=\mathrm{\Delta }^{eh}`$ and $`\mathrm{\Delta }_𝐤^{ee}=\mathrm{\Delta }^{ee}`$ if $`|\xi _𝐤|<\omega _\mathrm{c}`$; $`\mathrm{\Delta }_𝐤^{eh}=\mathrm{\Delta }_𝐤^{ee}=0`$ otherwise. Here, $`\mathrm{\Delta }^{eh}`$ and $`\mathrm{\Delta }^{ee}`$ are constants, which are taken to be real without loss of generality. We then diagonalize $``$, and obtain the self-consistent equations . The system is characterized by $`N^x`$ and dimensionless effective coupling constants $`\lambda ^{eh}n_\mathrm{F}V^{eh}`$, $`\lambda ^{ee}n_\mathrm{F}V^{ee}`$, where $`n_\mathrm{F}`$ is the density of states per spin at the Fermi surface.
We have solved the self-consistent equations numerically and found five solutions, which we denote by I–V:
I: $`|\mathrm{\Delta }^{eh}|=|\mathrm{\Delta }^{ee}|=0`$; possible for all $`N^x`$. The one-particle distribution functions of e and h are $`n^e(\xi )=\theta (\nu \xi )`$ and $`n^h(\xi )=\theta (\nu \xi )`$, respectively, where $`\theta (x)`$ is the step function.
II: $`|\mathrm{\Delta }^{eh}|0`$, $`|\mathrm{\Delta }^{ee}|=0`$; possible for $`N^x=0`$ and $`\nu <|\mathrm{\Delta }^{eh}|`$. $`\delta _𝐤^{eh}0`$ for $`|\xi ||\mathrm{\Delta }^{eh}|`$, and the wave function takes the same form as the BCS state, if $`(e_{𝐤,\sigma },h_{𝐤,\sigma })`$ is replaced by $`(c_{𝐤,\sigma },c_{𝐤,\sigma })`$. This is the ordinary e-h BCS state in nondoped ($`N^e=N^h`$) semiconductors. The energy cost of adding an electron-like quasiparticle to this state is $`E_𝐤\nu `$, where $`E_𝐤\sqrt{\xi _𝐤^2+|\mathrm{\Delta }^{eh}|^2}`$.
III: $`|\mathrm{\Delta }^{eh}|0`$, $`|\mathrm{\Delta }^{ee}|=0`$; possible for small but finite $`N^x>0`$ and $`\nu >|\mathrm{\Delta }^{eh}|`$. Formally, this solution (whose wave function is denoted by $`|\mathrm{III}`$) is obtained from solution II (whose wave function $`|\mathrm{II}`$) by adding electron-like quasiparticles (whose annihilation operator $`ϵ_{𝐤\sigma }`$) up to $`E_𝐤<\nu `$, i.e., $`|\mathrm{III}=\left(_{𝐤\sigma }^{}ϵ_{𝐤\sigma }^{}\right)|\mathrm{II}`$, where $`_{𝐤\sigma }^{}`$ denotes the product over the range $`E_𝐤<\nu `$. Direct calculation shows that $`|\mathrm{III}=\left(_{𝐤\sigma }^{}e_{𝐤\sigma }^{}S_{𝐤\sigma }\right)|\mathrm{II}`$, where $`S_{𝐤\sigma }`$ annihilates an ($`e_{𝐤\sigma },h_{𝐤,\sigma }`$) pair. Therefore, $`n^e(\xi )=1`$, $`n^h(\xi )=0`$, and $`\delta _𝐤^{eh}=0`$ (e and h are unpaired) in the region $`|\xi |<\sqrt{\nu ^2|\mathrm{\Delta }^{eh}|^2}\xi _\mathrm{F}^{}`$. This unpairing is demonstrated in Fig. 1 by dotted lines, which have discontinuities (secondary “Fermi surfaces”) at $`\xi =\pm \xi _\mathrm{F}^{}`$. We call this solution the partially unpaired e-h BCS state (PU state). As $`N^x`$ increases, $`|\mathrm{\Delta }^{eh}|`$ diminishes gradually until it vanishes at a certain value of $`N^x`$, where this solution changes into solution I continuously.
IV: $`|\mathrm{\Delta }^{eh}|=0`$, $`|\mathrm{\Delta }^{ee}|0`$; possible for all $`N^x`$. Similar to solution I except that $`\delta _𝐤^{ee}0`$ for $`|\xi \nu ||\mathrm{\Delta }^{ee}|`$. This is an ordinary superconductor of electrons.
V: $`|\mathrm{\Delta }^{eh}|`$, $`|\mathrm{\Delta }^{ee}|0`$ ($`|\mathrm{\Delta }^{eh}||\mathrm{\Delta }^{ee}|`$ because $`\lambda ^{eh}\lambda ^{ee}`$); possible for small but finite $`N^x>0`$. Similar to solution III except that $`\delta _𝐤^{ee}0`$ if $`|\xi \pm \xi _\mathrm{F}^{}||\mathrm{\Delta }^{ee}|`$, i.e., the e-e pair correlation exists around the secondary “Fermi surfaces” (see solid lines in Fig. 1). This is the only solution where $`\mathrm{\Delta }^{eh}`$ and $`\mathrm{\Delta }^{ee}`$ coexist . We call this solution the multiply ordered superconducting (MS) state.
To identify what solution is physically realized, we compare their energies $`EH`$ for all values of $`N^xN^eN^h`$. Note that we compare $`H`$ rather than $``$, because the natural parameters controlled directly by the photoexcitation intensity and the doping are $`N^e`$ and $`N^h`$ rather than $`\mu ^e`$ and $`\mu ^h`$ . \[If we compared $``$, the discussion would be rather complicated because the relations between $`(N^e,N^h)`$ and $`(\mu ^e,\mu ^h)`$ are different for different solutions.\] Figure 2 shows $`E`$ as a function of $`N^x`$, where $`N(N^e+N^h)/2`$. One recognizes that the solution V has the lowest energy for all values of $`N^x/(n_\mathrm{F}\mathrm{\Delta }_0^{eh})1.86`$. However, care should be taken because its curve is convex up, which might indicate a phase separation. We now show that the solution V is stable because e and h have opposite electrical charges.
To show the stability, we first consider the unstable case where e and h denote some fermions that have charges of the same sign (including the neutral case). All calculations so far are also applicable to such a general case (as long as we regard $`U^{eh}`$ and $`U^{ee}`$ as given parameters). Since the energy of a single phase of V is larger than the average of the energies of two phases II and IV, the system should undergo a phase separation into two phases, one with the excess density $`N^x=0<N_{\mathrm{tot}}^x`$ (phase II) and the other with $`N^x>N_{\mathrm{tot}}^x`$ (phase IV), where $`N_{\mathrm{tot}}^x`$ denotes the average value of $`N^x`$ of the total system. As $`N_{\mathrm{tot}}^x`$ is decreased, the region(s) of phase IV becomes smaller, until the total system turns into a single phase II for $`N_{\mathrm{tot}}^x=0`$. In a similar manner, one can also show the instability of phase III. This corresponds to the instability of the Sarma state that was discussed in the studies of superconductivity in a magnetic field.
On the other hand, the situation is totally different if we return to the electron-hole system, where $`e`$ and $`h`$ have opposite charges $`q`$ and $`q`$ ($`q>0`$), respectively. If the phase separation occurred, each phase would have global net charge of density $`q\delta Nq(N^xN^d)`$ in phase II, and $`q\delta N`$ in phase IV. The global charge would result in a large cost of the long-range part of the Coulomb energy $`E_{\mathrm{LR}}H_{\mathrm{int}}^{\mathrm{LR}}`$, which we neglected in $`H`$, eq. (5). Taking $`E_{\mathrm{LR}}`$ into account, the total energy of such a nonuniform state would be $`E_{\mathrm{tot}}=_iv_iE_i+E_{\mathrm{LR}}+E_\mathrm{B}`$, where $`v_i`$ and $`E_i`$ denote the volume and the expectation value of $`H`$ for phase $`i`$, respectively, and $`E_\mathrm{B}`$ the boundary energy. Since $`E_\mathrm{B}0`$, $`E_{\mathrm{tot}}_iv_iE_i+E_{\mathrm{LR}}E_{\mathrm{tot}}^{}`$. When the system is in a single phase V, then $`E_{\mathrm{LR}}=0`$ and $`E_{\mathrm{tot}}`$ equals $`E`$ of phase V of Fig. 2. On the other hand, when the system is separated into cells of phases II and IV, one obtains a finite $`E_{\mathrm{LR}}`$, which would be dominated by the electrostatic energy. We find that $`E_{\mathrm{tot}}^\mathrm{V}`$ is smaller than $`E_{\mathrm{tot}}^{\mathrm{II}+\mathrm{IV}}`$ ($`E_{\mathrm{tot}}^{\mathrm{II}+\mathrm{IV}}`$), hence a single phase V should be realized, except for very small values of $`N^x`$ if $`L^{eh}L_\mathrm{c}`$. Here, $`L^{eh}v_\mathrm{F}/(\pi |\mathrm{\Delta }^{eh}|)`$ is the Pippard length of e-h pairs, which gives the minimum allowable size of cells of phase II, whereas $`L_\mathrm{c}\sqrt{\kappa /(q^2n_\mathrm{F})}`$ is the maximum allowable size of cells of phase II, below which the energy decrease by the phase separation overcomes the increase of $`E_{\mathrm{LR}}`$, where $`\kappa `$ denotes the dielectric constant of the semiconductor. The condition $`L^{eh}L_\mathrm{c}`$ is always satisfied at high densities, i.e., when $`r_\mathrm{s}[3/(4\pi N)]^{1/3}mq^2/(4\pi \kappa \mathrm{}^2)1`$ and $`N^eN^h`$ ($`N`$). In fact, in this case we may assume a screened Coulomb interaction for $`V^{eh}`$ and can easily show that $`L^{eh}/L_\mathrm{c}>10^8`$ for any $`r_\mathrm{s}1`$. Therefore, at high densities ($`r_\mathrm{s}1`$) the phase V is always stable against phase separation. Moreover, considering the large value $`10^8`$, we may extend this conclusion to densities where $`r_\mathrm{s}1`$ . The condition $`r_\mathrm{s}1`$ can be realized when, e.g., $`m=0.1m_0^e`$, $`\kappa =10\kappa _0`$ and $`N=10^{19}\mathrm{cm}^3`$, for which we obtain $`r_\mathrm{s}=0.54`$, where $`m_0^e`$ is the free electron mass and $`\kappa _0`$ the dielectric constant of vacuum.
By performing calculations of the energies of various solutions as a function of $`\lambda ^{ee}`$ and $`N^x`$, we obtain the phase diagram shown in Fig. 3. Phase V (MS phase), for which $`|\mathrm{\Delta }^{eh}||\mathrm{\Delta }^{ee}|>0`$, is realized in the shadowed region. This phase changes continuously into phase II as $`N^x0`$. On the other hand, this phase changes into phase IV as $`N^x`$ increases, because then the Fermi energies of e and h become separate, which prevents e-h pairing. To study the $`N^x`$ dependence in more detail, we plot $`|\mathrm{\Delta }^{ee}|`$ in the MS phase in Fig. 4. One sees that $`|\mathrm{\Delta }^{ee}|`$ strongly depends on $`N^x`$ and takes a maximum value $`\mathrm{\Delta }_{\mathrm{opt}}^{ee}`$ at an optimum $`N^x`$ ($`N_{\mathrm{opt}}^x`$), although $`N^e=N+N^x/2`$ is almost constant (because $`N^xN`$). This is in marked contrast with the single-component case, where the BCS theory gives $`|\mathrm{\Delta }^{ee}|=2\omega _\mathrm{c}\mathrm{exp}(1/\lambda ^{ee})\mathrm{\Delta }_{\mathrm{BCS}}^{ee}`$, which is constant when $`N^e`$ and $`\lambda ^{ee}`$ are constant. To compare $`\mathrm{\Delta }_{\mathrm{opt}}^{ee}`$ with $`\mathrm{\Delta }_{\mathrm{BCS}}^{ee}`$, we plot the enhancement factor $`Q\mathrm{\Delta }_{\mathrm{opt}}^{ee}/\mathrm{\Delta }_{\mathrm{BCS}}^{ee}`$ in the inset of Fig. 4. Since $`\mathrm{\Delta }_{\mathrm{BCS}}^{ee}`$ is the magnitude of $`|\mathrm{\Delta }^{ee}|`$ which would arise if there were no e-h interaction, the fact $`Q>1`$ implies that $`|\mathrm{\Delta }^{ee}|`$ is enhanced by the presence of $`|\Delta ^{eh}|`$. It is found that an enhancement by one order of magnitude or larger occurs as $`\lambda ^{ee}`$ becomes smaller. As $`\lambda ^{ee}0`$, in particular, $`|\mathrm{\Delta }^{ee}|`$ diminishes much more slowly than $`\mathrm{\Delta }_{\mathrm{BCS}}^{ee}`$, resulting in a very large enhancement factor $`Q`$. Since the superconducting transition temperature $`T_\mathrm{c}^{ee}`$ $`|\mathrm{\Delta }^{ee}|`$, the MS phase should have a $`T_\mathrm{c}^{ee}`$ which is much higher than the BCS transition temperature . The parameter region in which this enhancement occurs is hatched in Fig. 3, where the optimum doping line is also shown.
To reveal the physical mechanism leading to the enhancement of $`|\mathrm{\Delta }^{ee}|`$, we reexamine the $`e`$ distributions of the MS and PU states of Fig. 1(a), which is schematically plotted in Fig. 5. In the PU state, for which $`|\mathrm{\Delta }^{ee}|=0`$, excess electrons are concentrated in the region $`|\xi |\xi _\mathrm{F}^{}`$, and the secondary (lower and upper) “Fermi surfaces” appear at $`\xi =\pm \xi _\mathrm{F}^{}`$. In the MS state, electrons form e-e pairs to benefit from the attractive e-e interaction and consequently $`\delta _𝐤^{ee}`$ develops around those “Fermi surfaces” \[Fig. 1(d)\]. However, this pairing necessarily broadens the “Fermi surfaces”, resulting in increase of the kinetic energy $`E_{\mathrm{kin}}`$. Hence, the total energy is minimized for an optimum $`\delta _𝐤^{ee}`$. Although this mechanism is similar to the BCS instability in ordinary superconductors, there is an essential difference: In the BCS state, the broadening of the Fermi surface of width $`\delta \xi `$ increases the kinetic energy by $`\delta E_{\mathrm{kin}}n_\mathrm{F}|\delta \xi |^2n_\mathrm{F}|\mathrm{\Delta }^{ee}|^2`$. In the MS state, on the other hand, the broadenings of the lower and upper “Fermi surfaces” change the kinetic energy by $`\delta E_{\mathrm{kin}}^<n_\mathrm{F}|\mathrm{\Delta }^{ee}|^2`$ and $`\delta E_{\mathrm{kin}}^>+n_\mathrm{F}|\mathrm{\Delta }^{ee}|^2`$, respectively \[when $`\lambda ^{ee}`$ (thus $`|\delta \xi |`$) is small\]. These two contributions cancel to give a relatively small increase in the kinetic energy $`\delta E_{\mathrm{kin}}=\delta E_{\mathrm{kin}}^<+\delta E_{\mathrm{kin}}^>`$. This is the origin of the huge enhancement factor $`Q`$ for small $`\lambda ^{ee}`$. Since the cancellation is not perfect, $`|\mathrm{\Delta }^{ee}|`$ of course takes a finite value. As $`\lambda ^{ee}`$ is increased, the cancellation becomes less complete, and $`Q`$ is reduced.
The existence of an optimum doping can be understood in a similar manner: If $`N^x`$ is too small, no room for broadening is left between the two “Fermi surfaces”. Conversely, if $`N^x`$ is too large, the magnitude of the jump of $`n^e(\xi )`$ at the two “Fermi surfaces” differs, which makes the cancellation of contributions from the two “Fermi surfaces” less perfect, resulting in a smaller $`|\mathrm{\Delta }^{ee}|`$. Therefore, $`|\mathrm{\Delta }^{ee}|`$ takes a maximum value at an intermediate value of $`N^x`$.
Finally, we point out that although we have assumed the e-h BCS state in semiconductors, the same mechanism for the enhancement of the e-e correlation upon doping may be expected also in other ordered phases of other materials, if the electron distribution without the e-e correlation is similar to the dotted line of Fig. 5.
|
no-problem/9902/cond-mat9902350.html
|
ar5iv
|
text
|
# Optical detection of a BCS phase transition in a trapped gas of fermionic atoms
## Abstract
Light scattering from a spin-polarized degenerate Fermi gas of trapped ultracold <sup>6</sup>Li atoms is studied. We find that the scattered light contains information which directly reflects the quantum pair correlation due to the formation of atomic Cooper pairs resulting from a BCS phase transition to a superfluid state. Evidence for pairing can be observed in both the space and time domains.
The realization of Bose-Einstein condensation in trapped atomic gases has generated interest in the atomic physics, quantum optics and condensed matter physics communities. Although the experimental realization of a degenerate atomic Fermi gas has not yet been demonstrated, interest in this subject is increasing . Of course, the behavior of a degenerate Fermi gas is remarkably different from a degenerate Bose gas. By analogy with the BCS theory of superconductivity in metals, it has been predicted that a degenerate Fermi gas can undergo a BCS phase transition to an atomic superfluid state if the interatomic interaction in the gas is attractive . Experiments to trap and cool <sup>6</sup>Li and <sup>40</sup>K gases into the quantum degenerate regime are underway in several laboratories.
In this paper, we address the question of how to detect the superfluid state after the BCS phase transition. We assume that the Fermi gas has been cooled to near absolute zero, so that all trap levels up to the Fermi energy are filled. An attractive interatomic interaction will cause atoms in the vicinity of the Fermi level to form Cooper pairs, with each pair composed of two quantum correlated atoms behaving as a new composite Bose particle. These bosons automatically undergo Bose-Einstein condensation and form a superfluid. The quantum pair correlation of the Cooper pairs characterizes the superfluid properties of the gas.
A promising experimental approach is to prepare a degenerate gas with atoms in an incoherent mixture of two internal hyperfine state. Such a mixture allows Cooper pairing via an s-wave interaction, and leads to practically attainable BCS-transition temperatures when the scattering length a is large and negative. This occurs naturally for <sup>6</sup>Li , or can be obtained in the vicinity of a Feshbach resonance for other atoms. We consider here a trapped <sup>6</sup>Li gas in an incoherent mixture of ground states $`|+=|M_s=1/2,M_I=1`$ and $`|=|M_s=1/2,M_I=0`$ .
The key to observing the superfluid state is to determine the existence of pair correlations. To achieve this goal, we propose to use off-resonance light scattering and Fourier imaging techniques. A laser beam with amplitude $`𝐄_L`$, frequency $`\omega _L`$, and wave vector $`k`$ propagating along the $`z`$ direction is used to illuminate the gas. We take the light to be linearly polarized and tuned near resonance between an $`S`$ ground state and $`P`$ excited state. To avoid incoherent heating of the gas due to spontaneous emission, the magnitude of the laser detuning, $`\delta =\omega _L\omega _0`$, is assumed to be large. In vector quantum field theory , the atoms in the light field can be described by a four-component atomic field $`\mathrm{\Psi }(𝐫)=\psi _+|++\psi _{}|+\psi _{e+}|e++\psi _e|e`$ with $`\psi _\pm `$ denoting atoms in the ground-state hyperfine levels $`|\pm `$, and $`\psi _{e\pm }`$ in the corresponding excited-state hyperfine levels. For large $`\delta `$, the excited-state components can be adiabatically eliminated, yielding a total atomic polarization operator with positive-frequency part
$`𝐏^{(+)}(𝐫,t)=\mathrm{}{\displaystyle \frac{\mathrm{}𝐄^{(+)}}{\mathrm{}\delta }}\widehat{\rho }(𝐫,t)e^{i\omega _Lt},`$ (1)
where $`\widehat{\rho }(𝐫,t)=\psi _+^{}(𝐫,t)\psi _+(𝐫,t)+\psi _{}^{}(𝐫,t)\psi _{}(𝐫,t)`$ denotes the total atomic density operator in the ground state, $`\mathrm{}`$ the matrix element of the atomic dipole moment, and $`𝐫`$ a location in the gas. Light propagation is determined by the atomic polarization operator (1) and the wave equation
$`^2𝐄^{(+)}{\displaystyle \frac{1}{c^2}}{\displaystyle \frac{^2𝐄^{(+)}}{t^2}}`$ $`=`$ $`\mu _0{\displaystyle \frac{^2𝐏^{(+)}}{t^2}}.`$ (2)
The solution to Eq. (2) can be expressed as
$`𝐄^{(+)}(𝐑,t)`$ $`=`$ $`𝐄_S^{(+)}(𝐑,t)e^{i\omega _Lt}+𝐄_L^{(+)}e^{ikzi\omega _Lt},`$ (3)
where $`𝐄_S^{(+)}(𝐑,t)`$ is the scattered field at position $`𝐑`$. For $`R|𝐑||𝐫|`$, the scattered field has the form
$`𝐄_S^{(+)}(𝐑,t)=k^2{\displaystyle \frac{e^{ikR}}{R}}{\displaystyle d^3re^{ik\widehat{𝐑}𝐫}\left[𝐏^{(+)}(𝐫,t)\widehat{𝐑}𝐏^{(+)}(𝐫,t)\widehat{𝐑}\right]},`$ (4)
where the directional unit vector $`\widehat{𝐑}=𝐑/R`$. From Eqs. (1) and (4), we see that the scattered field depends on the density operator of the gas, so that the averaged spectral intensity of the scattered field received by a photodetector contains the second-order correlation of the atomic field operators
$`\widehat{\rho }(𝐫,t)\widehat{\rho }(𝐫^{},t^{})\widehat{\rho }(𝐫,t)\widehat{\rho }(𝐫^{},t^{})+G(𝐫,𝐫^{},t,t^{}),`$ (5)
where “$`\mathrm{}`$” denotes the quantum mechanical expectation value. The first term in Eq. (5), which depends on the total averaged density, describes the contribution to the scattered field by the normal ground-state component. The second term,
$`G(𝐫,𝐫^{},t,t^{})2\psi _{}(𝐫,t)\psi _+(𝐫^{},t^{})\psi _{}^{}(𝐫,t)\psi _+^{}(𝐫^{},t^{}),`$ (6)
gives the quantum pair correlation function arising from the formation of Cooper pairs in the superfluid state.
The contribution of the laser field $`𝐄_L`$ in Eq. (3) can be removed by imaging the cloud with a dark ground technique, as discussed in Refs. . If a plane located a distance $`z_0`$ from the atoms is observed in this way, the spectral and spatial intensity distribution measured on the detector will be
$`I(𝐑_{},\nu )={\displaystyle _{\mathrm{}}^{\mathrm{}}}𝑑\tau e^{i\nu \tau }{\displaystyle \frac{1}{2T}}{\displaystyle _T^T}𝑑t𝐄_S^{()}(𝐑_0,t)𝐄_S^{(+)}(𝐑_0,t+\tau ),`$ (7)
where 2T is the time interval used for detection, and $`𝐑_0(𝐑_{},z_0)`$ is a point in the image plane. Equation (4), along with relations (1) and (5), gives the spatial-temporal correlation function of the light field
$$𝐄_S^{()}(𝐑_0,t)𝐄_S^{(+)}(𝐑_0,t+\tau )=\frac{9I_L\gamma ^2}{16(kz_0\delta )^2}\left[I_1(𝐑_{},t,\tau )+I_2(𝐑_{},t,\tau )\right]e^{i\omega _L\tau },$$
(8)
where $`I_L=𝐄_L^{()}𝐄_L^{(+)}`$ is the intensity of the incident light and $`\gamma `$ is the natural linewidth of the transition. The functions $`I_1`$ and $`I_2`$ are defined as
$`I_1(𝐑_{},t,\tau )={\displaystyle d^2𝐫_{}d^2𝐫_{}^{}e^{ik𝐑_{}(𝐫_{}𝐫_{}^{})/z_0}\widehat{\rho }(𝐫_{},t)\widehat{\rho }(𝐫_{}^{},t+\tau )},`$ (9)
and
$`I_2(𝐑_{},t,\tau )={\displaystyle d^2\xi e^{ik𝐑_{}\xi /z_0}d^2𝐫_{}G(𝐫_{},𝐫_{}\xi ,t,t+\tau )},`$ (10)
where the relative distance between atoms is denoted by $`\xi =𝐫_{}𝐫_{}^{}`$. The function $`I_1`$ describes the signal from the normal component of the gas and $`I_2`$ the signal from the Cooper pairs. In general, $`I_2`$ is much weaker than $`I_1`$ since the averaged density of atoms in the normal component is far larger than that of the pairs.
The averaged density and the quantum pair correlation function can be found using vector quantum field theory . In the off-resonant light field, the degenerate Fermi gas is described by the coupled quantum field equations
$`i\mathrm{}{\displaystyle \frac{\psi _+}{t}}=(H_0\mu _++V_Li\mathrm{}\mathrm{\Gamma }/2)\psi _+\mathrm{\Delta }(𝐫)\psi _{}^{}`$ (11)
$`i\mathrm{}{\displaystyle \frac{\psi _{}^{}}{t}}=(H_0\mu _{}+V_L+i\mathrm{}\mathrm{\Gamma }/2)\psi _{}^{}\mathrm{\Delta }(𝐫)\psi _+,`$ (12)
where $`H_0=\frac{\mathrm{}^2}{2m}^2+\frac{1}{2}m\omega ^2r^2`$ is the free Hamiltonian of the trapped Fermi gas, $`V_L=\mathrm{}\mathrm{\Omega }^2/4\delta `$ is the light-induced potential, $`\mathrm{\Gamma }=\gamma \mathrm{\Omega }^2/4\delta ^2`$ is the rate for spontaneous emission, $`\mu _\pm `$ are the chemical potentials of the two internal states, and $`\mathrm{\Delta }(𝐫)=(4\pi |a|\mathrm{}^2/m)\psi _{}(𝐫)\psi _+(𝐫)`$ is the BCS energy gap function. The Rabi frequency of the light field is $`\mathrm{\Omega }|\mathrm{}𝐄_L/\mathrm{}|`$. For simplicity we consider the simple case $`\mu _+=\mu _{}`$ for equal number of atoms in each spin state and introduce the renormalized chemical potential $`\mu =\mu _+V_L`$. Further, a large laser detuning and a weak intensity allow $`\mathrm{\Gamma }\mu ,\mathrm{\Delta }`$ so that destruction of Cooper pairs by spontaneous emission and interactions involving excited-state atoms can be neglected. Employing an approach similar to that adopted in BCS theory, we approximate the solutions of Eqs. (12) by
$`\psi _\pm (𝐫,t)={\displaystyle \underset{𝐧}{}}\left(u_𝐧(𝐫)\widehat{b}_{𝐧\pm }e^{iE_𝐧t/\mathrm{}}\pm v_𝐧(𝐫)\widehat{b}_𝐧^{}e^{iE_𝐧t/\mathrm{}}\right),`$ (13)
where $`\widehat{b}_{𝐧\pm }`$ are generalized Bogoliubov quasi-particle operators and $`E_𝐧`$ the excitation energy for the mode indexed by $`𝐧`$. The superfluid state of the degenerate Fermi gas is characterized by the BCS ground state $`|\mathrm{\Phi }_{BCS}`$ with the property $`\widehat{b}_{𝐧\pm }|\mathrm{\Phi }_{BCS}=0`$. From Eqs. (12) with the dissipative terms ignored, the transformation coefficients $`\{u_𝐧,v_𝐧\}`$ satisfy the celebrated Bogoliubov equations
$`(H_0\mu )u_𝐧(𝐫)+\mathrm{\Delta }(𝐫)v_𝐧(𝐫)=E_𝐧u_𝐧(𝐫)`$ (14)
$`(H_0\mu )v_𝐧(𝐫)+\mathrm{\Delta }(𝐫)u_𝐧(𝐫)=E_𝐧v_𝐧(𝐫).`$ (15)
The total averaged density can be expressed as $`\widehat{\rho }(𝐫,t)\mathrm{\Phi }_{BCS}|\psi _+^{}\psi _+^{}+\psi _{}^{}\psi _{}^{}|\mathrm{\Phi }_{BCS}=2_𝐧|v_𝐧(𝐫)|^2`$ and the quantum pair function is
$`G(𝐫,𝐫^{},t,t^{})=2{\displaystyle \underset{\mathrm{𝐧𝐦}}{}}u_𝐧(𝐫)v_𝐧(𝐫^{})u_𝐦(𝐫^{})v_𝐦(𝐫)e^{i(E_𝐧+E_𝐦)(tt^{})/\mathrm{}}.`$ (16)
The average density and pair function can be calculated by self-consistently solving Eqs. (15). In the normal degenerate ground state, energy levels below the Fermi level $`E_F`$ are occupied, while those above are empty. The effect of interatomic interactions is to cause scattering between nearby energy levels, which creates an energy shell near $`E_F`$ where normally unoccupied states in the normal ground state acquire an amplitude to be occupied, and states below $`E_F`$ have some amplitude to be unoccupied. The stronger the interatomic interaction is, the wider the energy shell and the more atoms are available to form Cooper pairs. Physically, the coefficients $`u_𝐧`$ and $`v_𝐧`$ in Eqs. (15) determine the amplitudes for atoms to be scattered into the pair states. To evaluate these amplitudes, we expand the coefficients as $`u_𝐧=_𝐪u_{\mathrm{𝐧𝐪}}\varphi _𝐪`$ and $`v_𝐧=_𝐪v_{\mathrm{𝐧𝐪}}\varphi _𝐪`$, in terms of the eigenstates $`\varphi _𝐪`$ of the single-atom Hamiltonian $`H_0`$. In principle the sum over $`\stackrel{}{q}`$ in the coefficients should extend from zero to infinity. However, we should note that in the BCS theory , Equation (13) is a direct result of the Born approximation by replacing the realistic non-local interatomic interaction $`V(\stackrel{}{r})`$ by a local contact potential $`V(\stackrel{}{r})=4\pi \mathrm{}^2a\delta (\stackrel{}{r})/m`$. We know that the Born approximation is only valid for low-energy scattering. The invalidity of the approximation in high-energy scattering regime produces an ultra-violet divergence in the BCS theory. In the case of superconductivity, the ultra-violet divergence naturally vanishes by considering the fact that the phonon-exchange induced interaction between electrons can be cut-off in the Debye frequency. However, in the case of degenerate Fermi gas of atoms, to avoid the ultra-violet divergence, an exact theory for superfluid phase transition must take the realistic shape of the exact non-local triplet potential into account. Recently two independent approaches to remove the ultra-violet divergence in the BCS theory of degenerate Fermi gas of atoms have been proposed . One is to renormalize the interaction potential in term of the Lippman-Schwinger equation and the other is to employ the more exact pseudo-potential approximation . However for a first guess, the Born approximation provides a simple and reasonable way to evaluate the gap energy and the pair correlation if an appropriate momentum cut-off is introduced to remove the ultra-violet divergence. Now the question is how to choose a physically valid momentum cut-off $`\mathrm{}k_c`$. To determine the cut-off range, we must use the fact that Born approximation only gives the correct evaluation in the low-energy scattering regime with $`k|a|<1`$. Hence the validity of the present theory based on Born approximation requires a cut-off $`k_c<|a|^{(1)}`$. For $`{}_{}{}^{6}Li`$ atom, this is in the order of Fermi wave number $`k_F`$. With such a cut-off, we numerically evaluate the energy gap, the total averaged atomic density and the quantum pair function.
To be concrete, we assume that $`N=2\times 10^5`$ <sup>6</sup>Li atoms in each spin state are confined by a magnetic trap with an oscillation frequency $`\omega =2\pi \times 150`$ Hz. With these values, $`E_F100\mathrm{}\omega 740`$ nK, and the peak value of the energy gap is $`\mathrm{\Delta }(0)5\mathrm{}\omega =36`$ nK. For a degenerate Fermi gas in a harmonic trap, the characteristic size of the average density is given by the Fermi radius $`r_F=[2E_F/m\omega ^2]^{1/2}48\mu `$m , while the length scale of the pair correlation function is $`r_ck_{F}^{}{}_{}{}^{1}`$, where $`k_F=(2mE_F/\mathrm{}^2)^{1/2}2\pi \times 6800`$ cm<sup>-1</sup> is the Fermi wavenumber. The numerical result for the correlation function is shown in Fig. 1, along with the spatial variation of the energy gap.
We need emphasize that in the homogeneous gas, the correlation length (pair size) at zero temperature is defined in terms of the so-called coherence length $`\xi _c=\mathrm{}v_F/\mathrm{\Delta }(0)`$. The coherence length determines the region where the pair function extends . However within the region, the pair function still contains shorter oscillation structure which has the scale $`r_c`$. In fact from our numerical result for the trapped gas, we see that the pair function indeed varys with such a length scale. Now we will explain how such a scale can be observed by optical imaging.
Assuming that a plane at $`z_0`$ = 2 cm is imaged with unit magnification and with a transition wavelength $`\lambda =670`$ nm, the image size is $`z_0/kr_F0.09`$ mm for the normal component and $`z_0/kr_c1.9`$ cm for the pair component, differing by a factor of $`2E_F/\mathrm{}\omega `$. The calculated images for a gas below and above the critical temperature for the BCS phase transition are shown in Fig. 2(a) and (b), respectively. It is seen that when the transition occurs, a spatially broadened image appears. The physical situation is depicted in Fig. 3, where the small-scale structure induced by pairing causes light to scatter at a larger angle than that scattering from the cloud itself.
The normal signal is produced by coherent scattering and is therefore proportional to $`(2N)^2`$, as can be verified by reference to Eq. (9). The pair signal, however, arises from spontaneous Raman scattering between pairs above and below the energy gap, and is found using Eq. (10) to be proportional to the number of pairs $`N_p`$. This number is determined by the number of atoms in an energy shell of width $`\mathrm{\Delta }`$ centered on $`E_F`$, so $`N_p3N\mathrm{\Delta }/E_F`$. For the parameters given above, $`N_p3\times 10^4`$ and the ratio of the peak signal intensities is $`I_2(0)/I_1(0)2\times 10^7`$. It is difficult to experimentally measure a signal with such a large dynamic range, but the pair signal can be revealed by using a nearly opaque spatial filter to attenuate the normal signal. If the diameter of the filter is chosen to be approximately equal to the spatial dimension of the normal signal image, it will affect only the central region of the pair signal, and both contributions can be observed with the same intensity scale.
Finally, we calculate the scattered light spectrum. For the normal degenerate ground state, a single spectral line is obtained at the frequency of the incident light. For the superfluid state, the spectrum exhibits a double-peaked structure as shown in Fig. 4. The coherent peak is from scattering by the normal component. The frequency shift of the sideband line is approximately twice the gap energy, confirming that the sideband is due to Raman scattering by pairs. The long oscillating tail of the sideband is due to modulated broadening from the center of mass motion of atoms at the trap frequency. Hence, the presence of the shifted peak provides another effective method to detect the BCS phase transition and can be used to directly determine the gap energy.
The theory presented here was simplified by the neglect of spontaneous emission, permitting, for example, the assumption that $`\mathrm{\Delta }`$ remains constant during probing. However, the pair signal depends on breaking pairs by incoherent spontaneous Raman scattering, and thus requires spontaneous emission. The theory is therefore valid only in the weak-signal limit, where $`\mathrm{\Gamma }\ll T^{-1}`$. Larger signals could be obtained experimentally by allowing $`\mathrm{\Gamma }\sim T^{-1}`$, but quantitative interpretation would then be more difficult.
In conclusion, we have studied off-resonance light scattering by a trapped degenerate Fermi gas. The results show that both spatial imaging and the scattered light spectrum give clear signatures for the BCS phase transition to a gaseous superfluid state.
The work in Australia was supported by the Australian Research Council, and a Macquarie University Research Grant. The work at Rice was supported by the NSF, ONR, NASA, and the Welch Foundation. WZ thanks the atom-cooling group at Rice for their hospitality during his visit and also thanks Karl-Peter Marzlin for his help.
# Geometric (Berry) phases and statistical transmutation in the two-dimensional systems of strongly correlated electrons
## I INTRODUCTION
Since the discovery of high temperature superconductivity in copper oxides, attempts have been made to identify the nature of the charge carriers in these materials. There is some experimental evidence that polaronic charge carriers exist in both the superconducting state and the normal state. Photoinduced absorption measurements in $`\mathrm{La}_2\mathrm{CuO}_4`$ and $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ indicate that self-localized structural distortions are present around photoexcited charge carriers. A recent experiment demonstrated that Sr-induced local lattice distortions occur in $`\mathrm{La}_{2-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{CuO}_4`$ in association with holes donated by the Sr atoms. In order to investigate the nature of the polaronic charge carriers, we study the geometric phases acquired by the electronic wave functions after the transport of a polaron around a closed loop.
A quantum object (a subsystem of fast degrees of freedom) acquires a geometric (Berry) phase in an environment where a parameter (a subsystem of slow degrees of freedom) is slowly transported around a closed circuit in a parameter space. Thus quantum subsystems with two widely separated energy scales (between slow and fast degrees of freedom) exhibit geometric phases. In this paper we investigate the geometric phases acquired by the electronic wave functions as a result of polaron transport around a closed loop in a hole-doped two-dimensional system of strongly correlated electrons, e.g., the copper oxide plane in high $`T_c`$ superconductors. In addition, we investigate the statistical transmutation of polarons by studying the exchange symmetry of polarons varying with the strength of electron correlations.
## II VARIOUS GEOMETRIC (BERRY) PHASES BASED ON MODEL HAMILTONIANS
We first study the geometric phases varying with the strength of antiferromagnetic electron correlations and of the electron-phonon coupling. For this study we introduce the Holstein-Hubbard model Hamiltonian for the hole-doped two-dimensional system on a square lattice,
$$H=-t\sum _{\langle ij\rangle \sigma }(c_{i\sigma }^{\dagger }c_{j\sigma }+\text{c.c.})+U\sum _in_{i\uparrow }n_{i\downarrow }-g\sum _iu_in_i+\frac{K}{2}\sum _i(x_i^2+y_i^2),$$

$`(1\mathrm{a})`$
with the Holstein coordinate $`u_i`$,
$$u_i=\frac{1}{4}(x_{i_x,i_y}-x_{i_x-1,i_y}+y_{i_x,i_y}-y_{i_x,i_y-1}).$$
$`(1\mathrm{b})`$
Here $`c_{i\sigma }^{\dagger }`$ ($`c_{i\sigma }`$) is the creation (annihilation) operator of an electron with spin $`\sigma `$ at a copper site $`i`$ ($`i=(i_x,i_y)`$), and $`n_{i\sigma }=c_{i\sigma }^{\dagger }c_{i\sigma }`$ is the electron number operator. $`x_i`$ and $`y_i`$ denote the displacements of oxygen atoms in the unit cell of the $`\mathrm{CuO}_2`$ plane. $`t`$ represents the electron hopping integral; $`U`$, the on-site Coulomb repulsion energy (correlation strength); $`g`$, the electron-lattice coupling constant; and $`K`$, the spring constant. In our calculations the magnitude of the Holstein coordinate $`u_i`$ is normalized to unity, $`|u_i|=1`$, when a polaron is formed around copper site $`i`$ as a result of hole doping. The energy of the polaron is then given by $`g`$.
Now for the study of geometric phases varying with the Heisenberg coupling constant $`J`$ and the electron-phonon coupling constant $`g`$, Hamiltonians of interest are the Holstein-$`tJ`$ model,
$$H=-t\sum _{\langle ij\rangle \sigma }(\stackrel{~}{c}_{i\sigma }^{\dagger }\stackrel{~}{c}_{j\sigma }+\text{c.c.})+J\sum _{\langle ij\rangle }\mathbf{S}_i\cdot \mathbf{S}_j-g\sum _iu_in_i+\frac{K}{2}\sum _i(x_i^2+y_i^2),$$

(3)
and the Holstein-$`tJ_z`$ model,
$$H=-t\sum _{\langle ij\rangle \sigma }(\stackrel{~}{c}_{i\sigma }^{\dagger }\stackrel{~}{c}_{j\sigma }+\text{c.c.})+J_z\sum _{\langle ij\rangle }S_i^zS_j^z-g\sum _iu_in_i+\frac{K}{2}\sum _i(x_i^2+y_i^2).$$

(5)
Here $`\stackrel{~}{c}_{i\sigma }=c_{i\sigma }(1-n_{i,-\sigma })`$ is the electron annihilation operator at site $`i`$, which excludes double occupancy. $`𝐒_i=\frac{1}{2}c_{i\alpha }^{\dagger }𝝈_{\alpha \beta }c_{i\beta }`$ is the electron spin operator. The Holstein-$`tJ_z`$ Hamiltonian takes into account only the spin $`z`$-component interaction.
The lattice distortions are treated in the adiabatic limit. The electronic wave functions (the fast degrees of freedom) varying with the lattice distortions are obtained from the application of the Lanczos exact diagonalization method to a tilted $`\sqrt{10}\times \sqrt{10}`$ lattice with periodic boundary conditions. The propagation of the local lattice distortion (polaron hopping) is described by
$$u_i(\tau )=u_{A,i}+(u_{B,i}-u_{A,i})\tau $$
(6)
with $`\tau `$ being the dimensionless time lapse with $`0\tau 1`$ for polaron hopping between adjacent lattice sites. For instance, the lattice distortion occurs at copper site $`A`$ at $`\tau =0`$ and moves to an adjacent copper site $`B`$ at $`\tau =1`$. In this manner, a closed path in the parameter space $`\{u_i\}`$ can be defined. We compute the geometric phase factors, $`e^{i\gamma _n(C)}`$ acquired by the electronic wave function after polaron transport around a closed path $`C`$ during time $`T`$, by using
$$e^{i\gamma _n(C)}=\lim _{N\to \mathrm{\infty }}\prod _{k=1}^N\langle n(u(\tau _k))|n(u(\tau _{k-1}))\rangle .$$
(7)
Here $`|n(u(\tau _k))\rangle `$ is the electronic eigenfunction for a given lattice distortion $`u(\tau _k)`$ at time $`\tau _k`$, where $`\tau _k`$ is the $`k`$th discretized time lapse between the initial time $`\tau _0=0`$ and the final time (period) $`\tau _N=T`$. Eq. (7) is derived under the condition that $`|n(u(\tau _k))\rangle `$ is a single-valued complex wave function, that is, $`|n(u(0))\rangle =|n(u(T))\rangle `$. On the other hand, all the electronic wave functions obtained from the diagonalization of the model Hamiltonians above are real-valued. In such cases, geometric phases appear in the form of double-valued real wave functions. Using a local gauge transformation in the parameter space $`u`$, the double-valued real wave function $`|n(u)\rangle _r`$ can be converted into a single-valued complex wave function $`|n(u)\rangle _c`$,
$$|n(u)\rangle _c=e^{i\theta (u)}|n(u)\rangle _r$$
(8)
with $`\theta (u)`$ a differentiable function and $`\theta (u(T))-\theta (u(0))=\pi `$ (see Appendix for details). By introducing the above expression into Eq. (7), one can now obtain the geometric phases of interest.
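To illustrate how Eq. (7) is evaluated in practice, the sketch below (Python) computes the discrete overlap product for a toy two-level system — a spin-1/2 whose field direction is transported around a cone — rather than for the Lanczos eigenstates used in this paper. Because each eigenstate enters the closed-loop product once as a bra and once as a ket, the result is insensitive to the arbitrary phases returned by the diagonalization routine.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def ground_state(theta, phi):
    """Lowest eigenstate of H = -n(theta, phi) . sigma (the fast subsystem)."""
    h = -(np.sin(theta)*np.cos(phi)*sx
          + np.sin(theta)*np.sin(phi)*sy
          + np.cos(theta)*sz)
    return np.linalg.eigh(h)[1][:, 0]

theta0, N = np.pi/3, 2000                     # cone half-angle; discretization steps
states = [ground_state(theta0, phi)
          for phi in np.linspace(0, 2*np.pi, N, endpoint=False)]

# Eq. (7): product of overlaps <n(u(tau_k)) | n(u(tau_{k-1}))> around the closed loop
prod = 1.0 + 0.0j
for k in range(1, N + 1):
    prod *= np.vdot(states[k % N], states[k - 1])

print(np.angle(prod))                         # Berry phase gamma_n(C)
print(-np.pi*(1 - np.cos(theta0)))            # analytic value -Omega/2 (mod 2*pi)
```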
## III COMPUTED BERRY PHASES AND STATISTICAL TRANSMUTATION
First, by using the Holstein-Hubbard model we consider the Berry phase acquired by the electronic wave function as a result of polaron hopping around a closed path. Two types of closed paths, a triangular path and a square path, are displayed in Figs. 1(a)–(b). For simplicity, we chose the same value of the hopping integral for next-nearest-neighbor hopping as for nearest-neighbor hopping. The electronic wave function is predicted to gain a geometric phase of $`\pi `$ (a geometric phase factor of $`-1`$) for polaron hopping around the smallest possible closed path, that is, the triangular path \[Fig. 1(a)\]. On the other hand, the electronic wave function with the polaron transported around the square path \[Fig. 1(b)\] is predicted to gain a phase angle of $`2\pi `$, thus leaving the phase factor of the electronic wave function unchanged. The geometric phase of $`2\pi `$ can be understood from the decomposition of the square path into two triangular paths, as shown in Fig. 2; the geometric phase factor of $`+1`$ for the square path results from the product of two identical geometric phase factors of $`-1`$ for the decomposed triangular paths.
By using both the Holstein-$`tJ`$ model and the Holstein-$`tJ_z`$ model, we now examine the geometric phases which result from the transport of a single polaron. They are shown in Figs. 3(a)–(b) and Figs. 4(a)–(b). The predicted geometric phase factor for the case of the triangular path \[Fig. 3(a)\] is in agreement with that of Schüttler et al. For large $`U`$ values, say, $`U=8t`$, both the Holstein-$`tJ`$ model and the Holstein-Hubbard model yielded identical geometric phase factors for both the triangular path and the square path, as shown in Figs. 1(a)–(b) and Figs. 3(a)–(b). Such a direct comparison between the two model Hamiltonians is valid only for large $`U`$ values, because the $`t`$-$`J`$ model is equivalent to the Hubbard model Hamiltonian only in the large-$`U`$ limit. On the other hand, the Holstein-$`tJ_z`$ model yields a trivial geometric phase factor of $`+1`$ for the triangular path \[Fig. 4(a)\], while the Holstein-$`tJ`$ model Hamiltonian gives rise to the nontrivial geometric phase factor of $`-1`$. It is of note that only the $`t`$-$`J`$ (but not the $`t`$-$`J_z`$) Hamiltonian allows transverse spin fluctuations. Thus we find from comparison of the two Hamiltonians that the generation of the nontrivial geometric phase factor of $`-1`$ is caused by the transverse spin fluctuations (or spin flip-flop fluctuations) which come from interactions involving the $`x`$- and $`y`$-component (but not the $`z`$-component) spins, that is, $`J_{ij}(S_{ix}S_{jx}+S_{iy}S_{jy})=\frac{1}{2}J_{ij}(S_{i+}S_{j-}+S_{i-}S_{j+})`$. The nontrivial geometric phases are also predicted by the Holstein-Hubbard model Hamiltonian, in agreement with the Holstein-$`tJ`$ model Hamiltonian, as shown in Fig. 1(a).
Now we investigate the exchange symmetry of two polarons based on the Holstein-Hubbard model. The numeral label for the square path in Fig. 5 indicates the consecutive order of polaron hopping. Interestingly enough, there exist both trivial and nontrivial geometric phase factors. After the two polarons are exchanged, the electronic wave function is seen to acquire a trivial geometric phase factor of $`+1`$ for the case of weak electron correlations ($`U\lesssim 1`$). This is in sharp contrast with the nontrivial geometric phase factor of $`-1`$ for the case of intermediate electron correlations ($`2\lesssim U\lesssim 4`$). A reentrant behavior of the trivial geometric phase factor of $`+1`$ occurs for the case of strong antiferromagnetic electron correlations ($`U\gtrsim 6`$). It is thus proper to state that polarons behave as hard-core bosons in the case of weakly and strongly correlated electrons, while they act as fermions in the intermediate region of electron correlations. Such statistical transmutation of polarons can be mapped onto the Jordan-Wigner transformation for the two-dimensional lattice,
$$a_ia_j^{\dagger }=\delta _{ij}-e^{i\delta }a_j^{\dagger }a_i.$$
(9)
We find that the predicted phase angle $`\delta `$ depends on $`U`$, the strength of electron correlation (Coulomb repulsion). That is, $`\delta =\pi `$ for small and large $`U`$ values, indicating that polarons behave as bosons and $`\delta =0`$ for intermediate $`U`$ values, indicating that they act as fermions.
We now examine the exchange symmetry of polarons from a different perspective. For the case of weak electron correlations (or the free electron limit, $`U\lesssim 1`$) the predicted exchange symmetry indicates a bosonic nature of polarons with the absence of a fictitious magnetic flux line. This is depicted in Fig. 6(a). As the correlation (repulsive interaction) between electrons increases, a change of the geometric phase is predicted. That is, for the case of intermediate correlations it can be stated that the polarons hop around a fictitious flux of an elementary flux unit (fluxon), thereby exhibiting a fermionic nature (see the solid line in Fig. 6(b)). For the case of strong antiferromagnetic electron correlations, the exchange of the two polarons indicates the presence of two fictitious flux quanta, as depicted in Fig. 6(c). Then the statistical transmutation of a boson into a composite boson occurs. By composite boson we mean that the boson remains a boson with the acquired phase of $`2\pi `$. In short, the number of fictitious flux quanta is seen to vary with the strength of electron correlation and allows the statistical transmutation.
## IV CONCLUSION
By applying the Lanczos exact diagonalization method to the Holstein-Hubbard, Holstein-$`tJ`$, and Holstein-$`tJ_z`$ model Hamiltonians, we have examined the different aspects of geometric phases acquired by electronic wave functions during the transport of a single polaron around a triangular loop. We find that the nontrivial geometric phase factor of $`-1`$ is caused by the presence of transverse spin fluctuations. Further, by examining the exchange symmetry of polarons based on the Holstein-Hubbard model, we discover a statistical transmutation of polarons which varies with the strength of antiferromagnetic electron correlation. It is interesting to note that polarons behave as bosons in the hole-doped two-dimensional system (background) of weakly and strongly correlated electrons and as fermions in the background of intermediate correlation strength. In the future it remains to examine how the statistical transmutation for the exchange of polarons occurs as a function of electron correlation and whether there exists a possibility of an anyonic exchange phase other than $`\delta =0`$ and $`\delta =\pi `$, that is, $`0<\delta <\pi `$.
###### Acknowledgements.
S.-H.S.S. acknowledges the financial support of the Korean Ministry of Education (BSRI-98) and of the Center for Molecular Sciences at KAIST. We are grateful to Dr. Kwangyl Park for discussions.
## APPENDIX
For the case of multi-valued real wave functions we prove here that the Berry phase is $`\gamma (C)=\theta (𝐑(T))-\theta (𝐑(0))`$ by introducing a local gauge transformation in the parameter space $`𝐑(=u)`$. The Berry phase $`\gamma _n(C)`$ is a geometric contribution to the total phase change for the final state, $`|\psi (T)\rangle =\mathrm{exp}[i\gamma _n(C)]\mathrm{exp}\left\{-\frac{i}{\hbar }\int _0^Tdt\,E_n(𝐑(t))\right\}|\psi (0)\rangle `$, where $`T`$ is the time period. For a subsystem of fast degrees of freedom coupled to a subsystem of slow degrees of freedom the Berry phase is obtained from
$$\gamma _n(C)=i\oint \langle n;𝐑|\nabla _R|n;𝐑\rangle \cdot d𝐑,$$
(10)
as is well known.
The equation of motion for the subsystem of fast degrees of freedom is given by
$$H(𝐑(t))|n;𝐑(t)\rangle =E_n(𝐑(t))|n;𝐑(t)\rangle ,$$
(11)
where $`𝐑(t)`$ is the slowly varying parameter. To obtain a nontrivial (non-zero) Berry phase in Eq. (10), the eigenstate $`|n;𝐑(t)\rangle `$ should be complex, non-degenerate, and single-valued. The gauge potential
$$𝐀(𝐑)=i\langle n;𝐑|\nabla _R|n;𝐑\rangle $$
(12)
vanishes for the case of real eigenstates $`|n;𝐑\rangle `$.
It is obvious that the nontrivial (nonzero) ‘magnetic field’ $`𝐁(𝐑)=\nabla \times 𝐀(𝐑)`$ cannot be defined for the case of real eigenstates $`|n;𝐑(t)\rangle _r`$, since the gauge potential $`𝐀(𝐑)`$ vanishes. However, by using the following gauge transformation it is possible to define a nonvanishing Berry phase:
$$|n;𝐑\rangle _c=e^{i\theta (𝐑)}|n;𝐑\rangle _r.$$
(13)
Indeed, this is the local gauge transformation in the parameter space $`𝐑`$, which provides the gauge invariance of the Schrödinger equation, Eq. (11). Using Eq. (13) in Eq. (10) we readily obtain the Berry phase $`\gamma _n(C)`$,
$$\gamma _n(C)=\theta (𝐑(T))-\theta (𝐑(0))$$
(14)
or using the notation used for the parameter space in Section II,
$$\gamma _n(C)=\theta (u(T))-\theta (u(0)).$$
# A comparison of the X-ray line and continuum morphology of Cassiopeia A
## 1 Introduction
The X-ray emission from supernova remnants has generally been understood to be thermal emission (i.e. bremsstrahlung and line emission) for the so-called shell type supernova remnants, such as Tycho, Kepler and Cassiopeia A (Cas A), and synchrotron emission for the plerions such as the Crab nebula. In the latter case the synchrotron emission is caused by cosmic ray electrons accelerated in the magnetosphere surrounding the neutron star. The ideas on shell type remnants have, however, changed dramatically with the publication of the analysis of ASCA data of the remnant of SN1006 by Koyama et al. (1995). They proposed that the featureless X-ray spectra of the rims of the remnant were the result of synchrotron emission (anticipated earlier by Reynolds & Chevalier Reynolds81 (1981)). SN1006 is a shell type remnant and does not contain, as far as we know, a pulsar; hence the synchrotron emission should arise from cosmic ray electrons accelerated in the blast wave of the supernova remnant. This view is supported by the recent discovery of TeV gamma rays coming from the Northeastern rim of SN1006 (Tanimori et al. Tanimori (1998)).
Of course, if there is X-ray synchrotron radiation coming from SN1006, then it is very likely that other (young) shell type remnants also produce synchrotron radiation at some level. It is worth pointing out, though, that SN1006 was probably a type Ia supernova, whereas Cas A is very likely the remnant of a core collapse supernova (Type Ib or II). Recently, a hard X-ray tail was discovered in the spectrum of Cas A, the youngest known galactic supernova remnant (The et al. The (1996), Favata et al. Favata (1997), Allen et al. Allen (1997)). This tail can be attributed to synchrotron radiation, but it could also arise from bremsstrahlung caused by a non-Maxwellian tail to the thermal electron distribution. Note that, like SN1006, Cas A does not seem to contain a pulsar. In the case of Cas A there is no doubt that a substantial part of the emission is thermal in nature, since the remnant displays strong line emission. So, whatever the nature of the hard tail, at lower photon energies this component is intermingled with thermal components.
From a theoretical point of view both synchrotron emission (Reynolds Reynolds98 (1998)) and non-thermal bremsstrahlung can exist (Asvarov et al. Asvarov (1990)). For instance, the cosmic ray electrons responsible for the radio emission of Cas A find their origin in thermal electrons accelerated at the shock. This means that at all energies intermediate between the thermal electron distribution and the relativistic electrons responsible for the radio emission, a population of electrons should exist. The Coulomb equilibration process is slow, so if at some moment the initial electron distribution is non-thermal, the relaxation towards a Maxwellian distribution is likely to evolve slowly (Laming Laming98 (1998)), because the relaxation time scales with electron energy ($`E`$) as $`E^{3/2}`$. To give an example applicable to Cas A ($`n_\mathrm{e}\sim 10`$ cm<sup>-3</sup>): at 1 keV the relaxation time is $`\sim 1`$ yr, at 10 keV it is $`\sim 30`$ yr, and at 50 keV it is of the order of the age of the remnant. Therefore, from say 20 keV to 1 GeV the electron distribution should be a continuously decreasing function and non-thermal bremsstrahlung should exist, but may be less pronounced than the synchrotron emission.
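The quoted numbers are consistent with the $`E^{3/2}`$ scaling; anchoring the relation at the 1 keV value given above (a sketch in Python):

```python
t_1keV = 1.0                                  # ~1 yr at 1 keV for n_e ~ 10 cm^-3
for E in (10.0, 50.0):                        # electron energy in keV
    print(E, "keV:", t_1keV * E**1.5, "yr")   # ~32 yr and ~350 yr; the latter is
                                              # comparable to the remnant's age
```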
So, what can we say at present about the relative contributions of a synchrotron component and a non-thermal bremsstrahlung component to the hard X-ray tail? Ultimately this will be resolved by observing X-ray polarization, which is not feasible presently. Here we attempt to bring the discussion somewhat further by investigating the spatial distribution of the X-ray emission up to 10.5 keV. We do this by presenting deconvolved images in narrow bands obtained by the MECS instrument on board BeppoSAX.
Before discussing our analysis, a short characterization of Cas A. The remnant is at a distance of $`3.4_{-0.1}^{+0.3}`$ kpc (Reed et al. Reed (1995)) and is between 318 and 340 years old. The younger age is based on the possible explosion date of 1680, when Flamsteed observed an unidentified 6th magnitude star near the current position of Cas A (Ashworth Ashworth (1980)). The older age is based on the expansion analysis of optical filaments (van den Bergh & Kamper vdBergh83 (1983)). The current wisdom is that the progenitor was probably an early type Wolf-Rayet star (Fesen et al. Fesen87 (1987)) with an initial mass of about 30 M<sub>⊙</sub> (Garciá-Segura et al. GLM96 (1996)) and a final mass less than 10 M<sub>⊙</sub> (Vink et al. VKB96 (1996)). Recent expansion studies of high resolution X-ray images indicate that the current overall expansion velocity is $`\sim `$3200 km/s for the bright shell and $`\sim `$5200 km/s for the fainter emission associated with the blast wave (Vink et al. Vink98 (1998), Koralesky et al. Koralesky (1998)). This is considerably greater than the expansion of the bright radio ring, which is about 2000 km/s (Anderson & Rudnick AR95 (1995)).
## 2 Observations and analysis
### 2.1 The data
The BeppoSAX satellite (Boella et al. 1997a ) contains four X-ray detectors: the LECS (0.1–10 keV) and MECS (1.5–12 keV), composed of three units, the HPGSPC (4–120 keV) and the PDS (13–300 keV). The overall spectrum of Cas A as observed by BeppoSAX is shown in the paper by Favata et al. (Favata (1997)). Only the LECS and MECS have imaging capabilities. Here we have opted for only using the MECS (Boella et al. 1997b ) instrument, since the supporting structure of the LECS blocks part of the field of view, whereas the MECS has an unobstructed view of Cas A, which simplifies the image analysis considerably. Furthermore, the MECS has a higher sensitivity above 5 keV than the LECS. BeppoSAX observed Cas A five times; four observations during the performance verification phase in August and September 1996 and one observation on November 26, 1997. The total observation time is different for each set of instruments, but the data presented here comprise 161 ks of MECS observation time. The data analyzed here are based on the MECS2 and MECS3 units (labeled together as MECS23) for uniformity: the MECS1 was no longer functioning at the time of the last observation. Comparing the various data sets we found that there were rather large inaccuracies in the aspect solutions. This was most distinct for the last observation ($`\sim 1.4`$′), made after a gyroscope failure, which may have caused the relatively large error in the aspect solution. Using a correlation analysis similar to that employed by Vink et al. (Vink98 (1998)), we shifted the observations in order to match them in position.
### 2.2 Lucy-Richardson deconvolution
The MECS point spread function (PSF) consists of two components, representing the gaussian shaped intrinsic resolution of the MECS instrument and the scattering component originating from the mirrors. These components can be identified in the following parameterization of the PSF as a function of radius, $`r`$, (Boella et al. 1997b ):
$$f(r)=C\{\alpha \mathrm{exp}(-\frac{1}{2}\frac{r^2}{\sigma ^2})+(1.0+\frac{r^2}{R^2})^{-\beta }\};$$
(1)
note that the parameters $`\sigma `$ , $`R`$, $`\alpha `$ and $`\beta `$ are energy dependent (S. Molendi, private communication). $`C`$ is a normalization constant, so that the integration over the total plane equals 1. Fig. 1 shows the PSF of the MECS3 for four different energies. As can be seen the PSF changes with energy, especially the change from 2 to 4 keV is dramatic. Above 4 keV the PSF does not change much anymore. At higher energies the resolution of the MECS increases, but the mirror scattering becomes worse. Fortunately, the photon statistics at 2 keV are far superior to those at 8 keV, so we can compensate for the degraded resolution at 2 keV by performing more iterations of the deconvolution algorithm.
We used the analytical representation of the PSF to deconvolve the narrow band MECS images with the Lucy-Richardson deconvolution algorithm (Lucy Lucy (1974), Richardson Richardson (1972)). One problem associated with this algorithm is the number of iterations that should be used: too many iterations may cause spurious results, while too few do not bring out the full potential of the image. We solved this by statistically comparing the convolved model image with the raw image at each iterative step; when the model improvements were less than $`1\sigma `$ we halted the deconvolution process. The consistency of our results was verified by comparing the deconvolved images of individual datasets with those of the total dataset and, as discussed below, with the results from other X-ray telescopes. We point out that some of the features presented below were already recognizable in the raw images (Maccarone et al. Maccarone (1998)).
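A minimal version of this procedure is sketched below (Python with NumPy/SciPy). The PSF array is a placeholder, and the stopping test is one way of implementing the criterion described above: the $`\chi ^2`$ of the convolved model against the raw image is tracked, and the iteration halts once its improvement drops below $`\sqrt{2n}`$, roughly $`1\sigma `$ for a $`\chi ^2`$ statistic with $`n`$ degrees of freedom.

```python
import numpy as np
from scipy.signal import fftconvolve

def lucy_richardson(image, psf, max_iter=200):
    """Lucy-Richardson deconvolution with a chi^2-based stopping rule."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean())
    prev_chi2 = np.inf
    for _ in range(max_iter):
        model = fftconvolve(estimate, psf, mode="same")      # convolved model image
        ratio = image / np.maximum(model, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        chi2 = np.sum((image - model)**2 / np.maximum(model, 1e-12))
        if prev_chi2 - chi2 < np.sqrt(2.0 * image.size):     # improvement < ~1 sigma
            break
        prev_chi2 = chi2
    return estimate
```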
### 2.3 Comparison with previous measurements
We present here the first analysis of the morphology of Cas A for photon energies in excess of 6 keV. A similar analysis was done with ASCA SIS0 data for energies below 6 keV (Holt et al. Holt (1994)). The advantage of the MECS instrument over ASCA SIS is that it is more sensitive above 6 keV and also the PSF of the MECS is simpler. The core of the MECS PSF is, however, broader than for the ASCA SIS, so that the deconvolved MECS images in the range 1.8 keV to 6 keV do not show as much detail as the deconvolved SIS images. Apart from the lack of detail, the MECS images below 6 keV are in agreement with the images presented by Holt et al. (Holt (1994)).
A comparison of the ROSAT PSPC image with the deconvolved silicon image provides another check on the validity of our results. As can be seen in Fig. 2 the images compare well.
## 3 Results
The deconvolved narrow band images are displayed in Fig. 3; the energy ranges and the number of Lucy-Richardson iterations are listed in Table 1. In Fig. 3 we have ordered the images such that images dominated by line emission are on the left side and continuum images are on the right. For comparison we included an archival VLA image showing synchrotron emission at a wavelength of 21.7 cm. This image has been smoothed to roughly the same resolution as the deconvolved MECS images.
It is immediately clear that the spatial distribution of the continuum emission is different from the line emission, whereas the images relating to the line emission do not differ much from one another. The silicon image is not so bright in the West, but this is due to the fact that this part of the remnant is more absorbed by the interstellar medium than the rest of Cas A (Keohane et al. Keohane98 (1998)). There is also a hint in the Fe K band image that the iron line emission in the Southeast peaks further out from the center of the remnant than the line emission of Si, S and Ar. The displacement is about 20″, as can be seen in Fig. 4. This may indicate that the iron emission in that area is predominantly coming from the shocked circumstellar medium rather than from the shocked ejecta. The lack of variation from one line image to the other indicates that dust depletion cannot play a major role in explaining the line radiation morphology, since silicon and iron can be easily dust depleted, whereas sulphur and argon cannot.
The continuum emission does vary strongly from one energy band to another, but the continuum images differ from the line emission images in that they all peak in the Western region of the remnant, a property they share with the VLA image.
In order to make more quantitative statements we concentrate on the emission above 4 keV. Each image was corrected for the bandwidth and instrument effective area, so that the pixel values correspond to the approximate flux density per pixel (i.e. photons keV<sup>-1</sup>m<sup>-2</sup>s<sup>-1</sup>pixel<sup>-1</sup>, one pixel being 16″ in size). With these images rough spectra at various regions were made (see Fig. 5 for the regions). This is shown in Fig. 6 and useful parameters related to the spectra are listed in Table 2. The statistical errors of the flux measurements are small, so the systematic errors, which are of the order of 5%, dominate. We corrected the iron image for the continuum contribution. Dividing this image by the estimated continuum emission at 6.6 keV we ended up with an image approximating the equivalent width of the iron K-shell emission (Fig. 7). The peak equivalent width in Fig. 7 of 4.7 keV is rather high, but the average equivalent width per region, as determined from the images, agrees with the value in Table 2 and spectral analysis results.
The continuum in the iron band image was estimated by interpolating the 4–6 keV and 7.3–9.0 keV image to 6.6 keV assuming a photon index of 3.0 and averaging those two continuum images. The results, shown in Fig. 6, Fig. 7 and Table 2, confirm that the line emission peaks in the North and Southeast, whereas the continuum peaks in the West. The Western regions becomes relatively brighter at higher energies.
If line emission is an indication of a thermal origin of the emission, is its prominent absence a signature of synchrotron emission? It seems at least the most natural explanation, since other explanations involving steep abundance gradients or anomalous electron distributions are more far-fetched. This argument is supported by the observation that all the line images have rather similar morphologies, so if there are abundance gradients they should be roughly equal for all elements. Non-thermal bremsstrahlung does not explain the difference between the continuum morphology and the line morphology, because the electrons involved in this process should also cause line emission. The line emission does indicate, however, that there is also a contribution of bremsstrahlung to the continuum. The bending of the spectra in some regions shown in Fig. 6 (e.g. the Southeastern and Northeastern regions) indicates that the continua in those regions are probably dominated by thermal bremsstrahlung, whereas the Western and Southern regions are likely to be dominated by synchrotron emission. The trends visible in the continuum images make it very likely that the hard X-ray tail above 20 keV (The et al. The (1996), Favata et al. Favata (1997), Allen et al. Allen (1997)) is coming predominantly from the West of Cas A.
Models for the X-ray synchrotron emission of shell type supernova remnants (e.g. Reynolds Reynolds98 (1998)) predict that the synchrotron spectrum extrapolated from the radio emission should cut off with a term shallower than $`\mathrm{exp}(-(\frac{E}{E_c})^{\frac{1}{2}})`$, with $`E`$ the photon energy and $`E_c`$ the cut-off energy. From this we can derive that the photon index, $`\mathrm{\Gamma }`$, of the X-ray spectrum is given by:
$$\mathrm{\Gamma }=\alpha _R+\frac{1}{2}(\frac{E}{E_c})^{\frac{1}{2}}+1,$$
(2)
with $`\alpha _R`$ the radio spectral index, which is 0.78 for Cas A. For thermal bremsstrahlung the photon index is:
$$\mathrm{\Gamma }=1+\frac{E}{kT_\mathrm{e}},$$
(3)
with $`kT_\mathrm{e}`$ the electron temperature. So we see that the overall value of $`\mathrm{\Gamma }=3`$ at 6.6 keV (cf. Holt et al. Holt (1994)) implies $`E_c=1.1`$ keV or $`kT_\mathrm{e}=3.3`$ keV (in agreement with Vink et al. VKB96 (1996)). The thermal spectrum steepens much faster than the synchrotron spectrum: at 9.0 keV the synchrotron model predicts $`\mathrm{\Gamma }=3.2`$, whereas the bremsstrahlung model predicts $`\mathrm{\Gamma }=3.8`$. So the spectra of most regions fall off too rapidly to be dominated by synchrotron emission; the exceptions are the spectra of the Western and Southern regions.
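The numbers quoted above follow directly from Eqs. (2) and (3); a quick check (Python) reproduces them to within rounding:

```python
alpha_R, E_c, kT_e = 0.78, 1.1, 3.3        # radio index; cut-off and temperature in keV

def gamma_sync(E):                         # Eq. (2): synchrotron cut-off model
    return alpha_R + 0.5*(E/E_c)**0.5 + 1.0

def gamma_brems(E):                        # Eq. (3): thermal bremsstrahlung
    return 1.0 + E/kT_e

for E in (6.6, 9.0):                       # photon energies in keV
    print(E, round(gamma_sync(E), 2), round(gamma_brems(E), 2))
# both models give Gamma ~ 3.0 at 6.6 keV; at 9.0 keV they separate: ~3.2 vs ~3.7
```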
It has been argued (Keohane et al. Keohane96 (1996)) that in the Western region the blast wave is interacting with the molecular cloud seen in absorption towards Cas A (Bieging et al. Bieging (1986)). Such an event could lead to an enhanced cosmic ray production, but this hypothesis implies that the thermal emission should peak in the Western region as well since there should also be more shock heated material. The line images, which trace the thermal component, do not support this. Because the infrared image of Cas A (Lagage et al. Lagage (1996)) does not peak in the Western region, depletion in dust cannot explain the relative lack of emission. In addition, sulphur and argon are not depleted in dust. However, the lack of line emission can be reconciled with an interaction between a molecular cloud and Cas A, if the electron temperature is too low to produce appreciable S, Ar and Fe K line emission in this region. The electron temperature is potentially low if the shock wave decelerates rapidly when entering the dense cloud. The weak line emission originating from the Western region may in that case come from the far side of Cas A, where the blast wave is possibly undisturbed.
Turning the attention now to the line images, we note several interesting features. First of all, little line emission originates from the Southern region. Although the VLA image also shows some lack of emission in that region, the near absence of line emission is rather puzzling. Another feature is that the morphology of the line images indicates that the remnant can be divided into roughly two halves, with the intersection running from Northeast to Southwest. Interestingly, this is also the division made on the basis of X-ray Doppler shift maps (Markert et al. Markert (1983), Holt et al. Holt (1994) and Tsunemi et al. Tsunemi (1996)). Tsunemi et al. (Tsunemi (1996)) report that the Fe K line has larger Doppler shifts than the other elements, although there are some uncertainties, due to the fact that non-equilibrium ionization effects may mimic Doppler shifts. If Doppler shifts are indeed larger for the Fe K lines, then this may be connected to the observation presented here that the Fe K emission in the Southeast peaks further out from the center than the emission from other elements. In this case most Fe K emission in the Southeast presumably originates from the swept up circumstellar medium, which has a larger velocity than the supernova ejecta heated by the reverse shock.
## 4 Conclusion
We have presented narrow band images of Cas A deconvolved with the Lucy-Richardson method. There is a clear distinction between the morphology of the continuum emission and the line emission. The images dominated by line emission resemble each other, but we found that the iron K-shell emission in the Southeast peaks $`\sim `$20″ further out from the center than the emission from other elements.
The continuum emission is increasingly enhanced in the Western region towards harder X-rays, making it likely that the hard X-ray tail comes predominantly from the Western side of the remnant. Furthermore, the difference in morphology between the line and continuum images indicates that part of the continuum emission, especially in the Western region, is probably synchrotron radiation.
The enhanced non-thermal emission from the Western region may be indicative of a collision of Cas A with a molecular cloud. This hypothesis does not seem to be supported by the line emission images, which should also show brightness enhancement in the West. However, a low electron temperature of the shocked molecular cloud material may circumvent this.
Our analysis provides the first images of Cas A at energies above 7 keV, which allows for the separation of the non-thermal and the thermal X-ray emission at angular scales of the order of 1′. In the near future XMM, with its high throughput above 8 keV, will allow extension of this type of research to the arcsecond scale. The excellent photon statistics expected from this X-ray observatory will allow accurate Doppler shift measurements and will potentially reveal new details, which may provide an explanation to some of the peculiarities of the X-ray emission from the Western region.
###### Acknowledgements.
We thank Silvano Molendi for providing us with the parameterization of the MECS PSF. M.C. Maccarone and T. Mineo acknowledge useful discussions with Bruno Sacco. Jacco Vink acknowledges pleasant and useful discussions with Glenn Allen. We thank the referee J. Dickel for some suggestions which helped to improve this article. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center and by the NCSA Astronomical Digital Image Library (ADIL). This work was financially supported by NWO, the Netherlands Organization for Scientific Research.
# La substitution induced linear temperature dependence of electrical resistivity and Kondo behavior in the alloys, Ce2-xLaxCoSi3
## Abstract
The results of electrical resistivity ($`\rho `$), heat-capacity (C) and magnetic susceptibility ($`\chi `$) measurements (above 1.5 K) for alloys of the type Ce<sub>2-x</sub>La<sub>x</sub>CoSi<sub>3</sub> are reported. We find that the S-shaped temperature dependence of $`\rho `$ for the mixed-valent x= 0.0 alloy is not a single-ion property; the most interesting observation is that, for an intermediate concentration (x= 0.25), a linear temperature dependence of $`\rho `$ below 30 K is noted, an observation of relevance to the current trends in the topic of non-Fermi-liquid behavior. Other observations are: (i) Ce exhibits single-ionic heavy-fermion behavior with a moderately enhanced electronic contribution to C; and (ii) the strength of the Kondo interaction, as measured by the Curie-Weiss parameter, is reduced for larger substitutions (x $`>`$ 0.5) of La for Ce.
(Received, January 1999 by C.N.R. Rao)
KEY WORDS: D. heavy fermions; D. Kondo effects; D. Electronic transport; D. heat capacity
The ternary rare-earth (R) compounds of the type R<sub>2</sub>TX<sub>3</sub> (T= transition metals, X= Si, Ga), derived from the AlB<sub>2</sub>-type hexagonal structure, have attracted considerable attention in the recent past \[1-14\]. This class of rare-earth intermetallics is perhaps among the most extensively studied ones next to the ThCr<sub>2</sub>Si<sub>2</sub>-type structure. Among these, the compound Ce<sub>2</sub>CoSi<sub>3</sub> is of special interest, as crystallographic analysis reveals ordering of the cobalt atoms within the CoSi<sub>3</sub> hexagonal layer and Ce appears to be in a mixed-valent state. In this article, we present the results of electrical resistivity ($`\rho `$), heat-capacity (C) and magnetic susceptibility ($`\chi `$) measurements for the alloys Ce<sub>2-x</sub>La<sub>x</sub>CoSi<sub>3</sub>, in order to probe how the lattice expansion caused by La substitution for Ce modifies the properties of this compound, particularly to see whether any of these alloys exhibits non-Fermi liquid (NFL) behavior, considering the current interest in this direction of research in condensed matter physics.
The samples, Ce<sub>2-x</sub>La<sub>x</sub>CoSi<sub>3</sub> (x= 0.0, 0.25, 0.5, 1.0, 1.5 and 2.0), were prepared by melting together stoichiometric amounts of the constituent elements in an arc furnace in an atmosphere of argon. The samples were annealed at 750<sup>o</sup> C for one week and then characterized by x-ray diffraction; the diffraction lines could be indexed on the basis of an AlB<sub>2</sub>-derived structure. It appears that the unit cell along the basal plane is twice that of the AlB<sub>2</sub> structure, and the lattice constants derived for this solid solution (Fig. 1) reveal an expansion of the unit cell with La substitution for Ce, as expected. The $`\rho `$ measurements (1.5-300 K) were performed by a conventional four-probe method, employing silver paint to make electrical contacts between the leads and the samples. The C measurements (2-50 K) were performed by a semi-adiabatic heat-pulse method. The $`\chi `$ data (2 - 300 K) were taken employing a superconducting quantum interference device in a magnetic field (H) of 2 kOe.
The results of the $`\rho `$ measurements are shown in Fig. 2. Not much significance should be attached to the absolute values of $`\rho `$, considering that the samples are very brittle and porous; hence the data are normalised to the respective 300 K values. The $`\rho `$ of the parent Ce compound (x= 0.0) decreases with temperature (T), with a significant but continuous fall with T below about 100 K, approaching a constant value at low temperatures. The shape of the resistivity versus T plot (usually called S-shaped) observed for this composition is typical of mixed-valent Ce compounds like CeSn<sub>3</sub> (Ref. 15). There are qualitative changes in the shape of the $`\rho `$ versus T plots with the substitution of La for Ce. It is interesting to note that small substitutions (x= 0.25) of La for Ce transform the low temperature (below 30 K) $`\rho `$ behavior to a linear T dependence (NFL behavior), retaining a positive T coefficient of $`\rho `$; there is a slight deviation from the low temperature linear behavior around 30 K as the temperature is increased, followed by another linear region in the range 30 - 100 K. For the next higher content of La (x= 0.5), there is a feeble minimum in $`\rho `$ around 50 K (seen only if the low temperature data are plotted on an expanded scale), indicative of the single-ion Kondo effect, though there is a small drop of $`\rho `$ below 10 K the origin of which is not clear. For a further addition of La, a well-defined Kondo minimum in the plot can clearly be seen. Thus, the S-shaped behavior of the plot of $`\rho `$ versus T for x= 0.0 is not a single-ion property, and the data demonstrate a Kondo lattice to single-ion Kondo transformation by La substitution.
The results of the C measurements are shown in Fig. 3. It is to be noted that there is no evidence for the existence of a prominent $`\lambda `$-type anomaly for any of the alloys in the entire temperature range of investigation. This establishes that the parent Ce compound is non-magnetic and that the possible reduction of the Kondo interaction caused by La substitution is not sufficient to drive Ce towards magnetic ordering. For x= 0.0 there is, however, an extremely small peak around 6 K, which was observed even in an earlier work, and it is not clear whether it is intrinsic; it is possible that this peak arises from small traces (estimated to be around 1$`\%`$, below the detection limit of x-ray diffraction) of trivalent Ce oxide. The plot of C/T versus T (Fig. 4) shows an upturn at low temperatures in all cases, reaching values above 100 mJ/Ce mol K<sup>2</sup> at about 2 K, presumably arising from the electronic contribution, thereby indicating the heavy-fermion character of Ce in this chemical environment. It is not clear whether this feature signals an NFL nature of the low temperature state of all these alloys. The value of C/T extrapolated from the linear regime (12 - 20 K) is also large, falling in the range 60 - 80 mJ/Ce mol K<sup>2</sup>. We subtracted the lattice contribution to C from the knowledge of the C values of the La analogue; we observe that the 4f contribution to C thus obtained, divided by T, is independent of T in the range 4 - 20 K, thereby suggesting that there is no influence of possible high temperature Schottky anomalies in this temperature range. Hence we believe that this value represents the true linear term in C. In short, the results characterize these alloys as moderate heavy-fermions.
We could also infer the dependence of the strength of the Kondo interaction on composition from the measurement of $`\chi `$ in the high temperature range (100-300 K). The plot of inverse $`\chi `$ versus T is found to be non-linear in the entire temperature range of investigation, presumably due to a large temperature independent contribution (Pauli susceptibility). Assuming that this contribution is the same as the $`\chi `$ value of La<sub>2</sub>CoSi<sub>3</sub> at 300 K (4 $`\times `$ 10<sup>-4</sup> emu/mol), we subtract this contribution for each composition. The values thus obtained (after normalising to the Ce concentration) are shown in Fig. 5 in the form of a plot of inverse $`\chi `$ versus T. We are now able to clearly see a linear region in the range 100 - 300 K. The effective moment obtained from this range turns out to be very close to 2.5 $`\mu `$<sub>B</sub>, typical of trivalent Ce ions, for all the compositions, and the Curie-Weiss parameter, $`\theta `$<sub>p</sub> ($`\pm `$ 10 K), is found to be -240, -230, -250, -140, and -140 K for x = 0.0, 0.25, 0.5, 1.0 and 1.5 respectively. The trivalency of Ce and the large negative $`\theta `$<sub>p</sub> value for x= 0.0, combined with a large low temperature electronic contribution to C, establish that the parent Ce alloy may be classified as a concentrated Kondo system. It is clear from the values of $`\theta `$<sub>p</sub> that the strength of the Kondo effect at higher temperatures is nearly unaffected by initial substitutions of La ($`<`$1.0), whereas the weakening of the Kondo effect caused by an increase of the unit-cell volume is realised only for higher doping of La. It may be added that there is a deviation from the high temperature Curie-Weiss behavior at lower temperatures, presumably due to crystal-field effects. There is no ferromagnetic impurity contribution at low temperatures, as the isothermal magnetization is found to be a linear function of magnetic field.
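The fitted moment indeed matches the free-ion value for trivalent Ce; a short check (Python) using the Landé factor of the <sup>2</sup>F<sub>5/2</sub> ground term of Ce<sup>3+</sup>:

```python
S, L, J = 0.5, 3.0, 2.5                    # Ce3+ (4f1): 2F5/2 ground term
gJ = 1 + (J*(J+1) + S*(S+1) - L*(L+1)) / (2*J*(J+1))
mu_eff = gJ * (J*(J+1))**0.5               # effective moment in Bohr magnetons
print(gJ, mu_eff)                          # ~0.857 and ~2.54 mu_B
```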
Summarising, the present results establish that the compound Ce<sub>2</sub>CoSi<sub>3</sub> is a non-magnetic concentrated Kondo system. The transformation from Kondo-lattice to single-ion Kondo behavior is investigated by the substitution of La for Ce in this compound. A notable finding is that, for x= 0.25, we observe a linear T-dependence of the low temperature resistivity, a feature which may characterise this composition as an NFL; it is worthwhile to extend the measurements to low temperatures (below 1 K) to probe whether this NFL behavior persists in that range as well. Generally speaking, in non-magnetic Ce-based Kondo lattices, La substitution tends to drive Ce towards magnetic ordering, following the well-known relationship between unit-cell volume and the Kondo effect and magnetic-coupling strengths. In the present (parent) Ce compound, the strength of the Kondo interaction is so large (as indicated by the Curie-Weiss parameter) that the lattice expansion caused by La substitution is not sufficient to cause magnetic ordering. From the observation of NFL behavior in the resistivity for x= 0.25, we infer that this alloy can be located across the border separating the Fermi-liquid and magnetic ordering regimes in Doniach’s magnetic phase diagram. Thus, this composition may be near the quantum critical point and therefore the magnetic fluctuations presumably lead to non-Fermi-liquid properties. We also performed $`\rho `$ measurements with the application of H and we note that H does not influence the linear temperature dependence of the resistivity above 4.2 K, similar to the behavior of U<sub>0.9</sub>Th<sub>0.1</sub>Be<sub>13</sub> (Ref. 18). However, unlike the situation in this U alloy, the magnetoresistance, MR (up to 70 kOe), at 4.2 K is found to be positive with a magnitude of less than 1$`\%`$. It is possible that the positive MR results from Kondo coherence effects in these Ce alloys. It is of interest to extend the MR measurements to very low temperatures ($`<`$1 K) in order to see whether Fermi-liquid behavior can be restored by the application of a magnetic field, information which will be extremely valuable in exploring a possible role of the quadrupolar Kondo effect in producing NFL behavior. We have reported similar quasi-linear T dependence of the resistivity in a stoichiometric compound, CeIr<sub>2</sub>Ge<sub>2</sub>, in the past (Ref. 19). The understanding of such a class of alloys in general is a challenge in condensed matter physics.
Electronic address: sampath@tifr.res.in
1. B. Chevalier, P. Lejay, J. Etourneau, and P. Hagenmuller, Solid State Commun.49, 753 (1984).
2. P. Kostanidis, P.A. Yakinthos, and E. Gamari-seale, J. Magn. Magn. Mater. 87, 199(1990).
3. R.E. Gladyshevskii, K. Cenzual, and E. Parthe, J. Alloys Comp., 189, 221(1992).
4. J.H. Albering, R. Pöttgen, W. Jeitschko, R.-D. Hoffmann, B. Chevalier, and Etourneau, J. Alloys Comp., 206, 133 (1994).
5. R. Pöttgen and D.Kaczorowski, J. Alloys Comp., 201, 157 (1993).
6. R. Pöttgen, P. Gravereau, B. Darnet, B. Chevalier, E. Hickey and J. Etourneau, J. Mater. Chem., 4, 463 (1994).
7. A. Raman, Naturwiss., 54, 560 (1967).
8. R.A. Gordon, C.J. Warren, M.G. Alexander, F.J. DiSalvo and R. Pottgen, J. Alloys and Comp. 248, 24 (1997).
9. I. Das and E.V. Sampathkumaran, J. Magn. Magn. Mater. 137, L239 (1994).
10. R. Mallik, E.V. Sampathkumaran, M. Strecker, G. Wortmann, P.L. Paulose, and Y. Ueda, J. Magn. Magn. Mater.185, L135 (1998).
11. R. Mallik, E.V. Sampathkumaran, M. Strecker, and G. Wortmann, Europhys. Lett., 41, 315 (1998); R. Mallik, E.V. Sampathkumaran, P.L. Paulose, H. Sugawara, and H. Sato, Pramana - J. Phys, 51,, 505 (1998).
12. R. Mallik, E.V. Sampathkumaran, and P.L. Paulose, Solid State Commun. 106, 169 (1998).
13. C. Tien and L. Luo, Solid State Commun., 107, 295 (1998).
14. J.S. Hwang, K.J. Lin, and C. Tien, Solid State Commun., 100, 169 (1996).
15. J.M. Lawrence, P.S. Riseborough and R.D. Parks, Rep. Prog. Phys. 44, 1 (1981).
16. S. Doniach, in “Valence Instabilities and Related Narrow Band Phenomena”, edited by R.D. Parks (Plenum, New York, 1977), p. 169; J.D. Thompson, Physica B 223&224, 643 (1996).
17. M.A. Continentino, Phys. Rep. 239, 179 (1994); Z. Phys. B 101, 197 (1996).
18. R.P. Dickey, M.C. de Andrade, J. Hermann, M.B. Maple, F.G. Aliev and R. Villar, Phys. Rev. B 56, 11 169 (1997).
19. R. Mallik, E.V. Sampathkumaran, P.L. Paulose, J. Dumschat and G. Wortmann, Phys. Rev. B 55, 3627 (1997); R. Mallik, E.V. Sampathkumaran and P.L. Paulose, Physica B 230-232, 169 (1997).
20. F.M. Grosche, P. Agarwal, S.R. Julian, N.J. Wilson, R.K.W. Haselwimmer, S.J.S. Lister, N.D. Mathur, S.V. Carter, S.S. Saxena and G.G. Lonzarich, cond-mat/9812133.
# Laser induced freezing of charge stabilized colloidal system
## Acknowledgments
The authors thank T.V. Ramakrishnan, S. Sengupta, S.S. Ghosh, J. Chakrabarti, R. Pandit, C. DasGupta and S. Ramaswamy for many useful discussions. We thank SERC, IISc for computing resources. CD thanks CSIR, India for financial support.
# Collective motion of organisms in three dimensions
## I Introduction
The collective motion of organisms (birds, for example) is a fascinating phenomenon that often captures our attention when we observe our natural environment. In addition to the aesthetic aspects of collective motion, it has some applied aspects as well: a better understanding of the swimming patterns of large schools of fish can be useful in the context of large scale fishing strategies. In this paper we address the question whether there are some global, perhaps universal transitions in this type of motion when many organisms are involved and parameters such as the level of perturbations or the mean distance between the organisms are changed.
Our interest is also motivated by recent developments in areas related to statistical physics. During the last 15 years or so there has been an increasing interest in the study of far-from-equilibrium systems typical of our natural and social environment. Concepts originating from the physics of phase transitions in equilibrium systems, such as collective behaviour, scale invariance and renormalization, have been shown to be useful in the understanding of various non-equilibrium systems as well. Simple algorithmic models have been helpful in the extraction of the basic properties of various far-from-equilibrium phenomena, like diffusion limited growth, self-organized criticality or surface roughening. Motion and related transport phenomena represent a further characteristic aspect of non-equilibrium processes, including traffic models, thermal ratchets or driven granular materials.
Self-propulsion is an essential feature of most living systems. In addition, the motion of the organisms is usually controlled by interactions with other organisms in their neighbourhood, and randomness plays an important role as well. In Ref. a simple model of self-propelled particles (SPP) was introduced capturing these features with a view toward modelling the collective motion of large groups of organisms such as schools of fish, herds of quadrupeds, flocks of birds, groups of migrating bacteria, the correlated motion of ants, or pedestrians. Our original SPP model represents a statistical physics-like approach to collective biological motion, complementing models which take into account many more details of the actual behaviour of the organisms but, as a consequence, treat only a moderate number of organisms and concentrate less on the large scale transitions.
In this paper the large scale transitions during collective motion in three dimensions are considered for the first time. Interestingly, biological motion is typical in both two and three dimensions, because many organisms move on surfaces (ants, mammals, etc), while others can fly (insects, birds) or swim (fish). In our previous publications we demonstrated that, in spite of its analogies with ferromagnetic models, the transitions in our SPP systems are quite different from those observed in equilibrium models. In particular, in the case of equilibrium systems possessing continuous rotational symmetry the ordered phase is destroyed at finite temperatures in two dimensions. However, in the 2d version of the non-equilibrium SPP model phase transitions can exist at finite noise levels (temperatures), as was demonstrated by simulations and by a theory based on a continuum equation developed by Toner and Tu. Thus, the question of how the ordered phase emerges due to the non-equilibrium nature of the model is of considerable theoretical interest as well.
In section 2 we describe our model. The results are presented in section 3 and the conclusions are given in section 4.
## II Model
The model consists of particles moving in three dimensions with periodic boundary conditions. The particles are characterised by their (off-lattice) location $`\stackrel{}{x}_i`$ and velocity $`\stackrel{}{v}_i`$ pointing in the direction $`\vartheta _i`$. To account for the self-propelled nature of the particles the magnitude of the velocity is fixed to $`v_0`$. A simple local interaction is defined in the model: at each time step a given particle assumes the average direction of motion of the particles in its local neighbourhood $`S(i)`$ with some uncertainty, as described by
$$\vec{v}_i(t+\mathrm{\Delta }t)=\text{N}\left(\text{N}(\langle \vec{v}(t)\rangle _{S(i)})+\vec{\xi }\right),$$

(2)

where $`\text{N}(\vec{u})=\vec{u}/|\vec{u}|`$ and the noise $`\vec{\xi }`$ is uniformly distributed in a sphere of radius $`\eta `$.
The positions of the particles are updated according to
$$\vec{x}_i(t+\mathrm{\Delta }t)=\vec{x}_i(t)+v_0\vec{v}_i(t)\mathrm{\Delta }t,$$
(3)
The model defined by Eqs. (2) and (3) is a transport related, non-equilibrium analogue of the ferromagnetic models. The analogy is as follows: the Hamiltonian tending to align the spins in the same direction in the case of equilibrium ferromagnets is replaced by the rule of aligning the direction of motion of particles, and the amplitude of the random perturbations can be considered proportional to the temperature.
We studied this model by performing Monte-Carlo simulations. Due to the simplicity of the model, only two control parameters should be distinguished: the (average) density of particles $`\varrho `$ and the amplitude of the noise $`\eta `$. In the simulations random initial conditions and periodic boundary conditions were applied.
## III Results
For the statistical characterisation of the configurations, a well-suited order parameter is the magnitude of the average momentum of the system: $`\varphi \equiv \left|\sum _j\stackrel{}{v}_j\right|/N`$. This measure of the net flow is non-zero in the ordered phase, and vanishes (for an infinite system) in the disordered phase.
The simulations were started from a disordered configuration, thus $`\varphi (t=0)\approx 0`$. After some relaxation time a steady state emerges, indicated, e.g., by the convergence of the cumulative average $`(1/\tau )\int _0^\tau \varphi (t)dt`$.
The stationary values of $`\varphi `$ are plotted in Fig. 1 vs $`\eta `$ for $`\varrho =2`$ and various system sizes $`L`$ (indicated in the plot by the number of particles). For weak noise the model displays long-range ordered motion (up to the actual system size $`L`$), disappearing in a continuous manner with increasing $`\eta `$.
These numerical results suggest the existence of a kinetic phase transition as $`L\to \infty `$ described by
$$\varphi (\eta )\sim \{\begin{array}{cc}\left(\frac{\eta _c(\varrho )-\eta }{\eta _c(\varrho )}\right)^\beta \hfill & \text{for }\eta <\eta _c(\varrho )\hfill \\ 0\hfill & \text{for }\eta >\eta _c(\varrho )\hfill \end{array},$$
(4)
where $`\eta _c(\varrho )`$ is the critical noise amplitude that separates the ordered and disordered phases. Due to the nature of our non-equilibrium model it is difficult to carry out simulations on a scale large enough to allow the precise determination of the critical exponent $`\beta `$. We find that the exponent 1/2 (corresponding to the mean field result for equilibrium magnetic systems) fits our results within the errors. This fit is shown as a solid line.
Next we discuss the role of density. In Fig. 2a, $`\varphi (\eta )`$ is plotted for various values of $`\varrho `$ (keeping $`N`$ constant and changing $`L`$). One can observe that the long-range ordered phase is present for any $`\varrho `$, but for a fixed value of $`\eta `$, $`\varphi `$ vanishes with decreasing $`\varrho `$. To demonstrate how much this behaviour differs from that of the diluted ferromagnets we have also determined $`\varphi (\eta )`$ for $`v_0=0`$. In this limit our model reduces to an equilibrium system of randomly distributed "spins" with a ferromagnetic-like interaction. This system is analogous to the three dimensional diluted Heisenberg model. In Fig. 2b we display the results of the corresponding simulations. There is a major difference between the self-propelled and the static models: in the static case the system does not order for densities below a critical value close to 1, which in the units we are using corresponds to the percolation threshold of randomly distributed spheres in 3d.
This situation is demonstrated in the phase diagram shown in Fig. 3. Here the diamonds show our estimates of the critical noise for a given density for the SPP model and the crosses show the same for the static case. The SPP system becomes ordered in the whole region below the curved line connecting the diamonds, while in the static case the ordered region extends only down to $`\rho \approx 1`$.
## IV Discussion
A model (such as the SPP model) based on particles whose motion is biased by fluctuations is likely to have a behaviour strongly dependent on dimensionality around 2 dimensions, since the critical dimension for random walks is 2. Another facet of this aspect of the problem is that in 2d a diffusing particle returns to the vicinity of any point of its trajectory with probability 1, while the probability for the same to occur in 3d is less than 1. In other words, the diffusing particles and clusters of particles are likely to frequently interact in 2d, but in a three dimensional simulation they may not interact frequently enough to ensure ordering.
Our calculations, however, show that in the SPP model there is an ordering for any finite density if the noise is small enough.
On the other hand, in the 3d case it is very difficult to estimate the precise value of the exponent describing the ordering as a function of the noise. The value we get agrees within the errors with the exponent which is obtained for equilibrium systems in the mean field limit. It is possible that the correlations in the direction of motion of the particles spread so efficiently due to their motion that the SPP model behaves already in 3d similarly to an infinite-dimensional static system. Indeed, the motion leads to an effective long-range interaction, since particles moving in opposite directions will soon get close enough to interact.
Finally, these findings indicate that the three dimensional SPP system can be described using the framework of classical critical phenomena, but shows surprising new features when compared to the analogous equilibrium systems. The velocity $`v_0`$ provides a control parameter which switches between the SPP behavior ($`v_0>0`$) and equilibrium type models ($`v_0=0`$).
## Acknowledgments
This work was supported by OTKA F019299 and FKFP 0203/1997.
## 1. Introduction
In 1952, Lee and Yang presented a new approach to questions like the existence and location of critical points$`^\text{9}`$. They proposed to treat the field or fugacity as complex variables and to investigate the zeros of the partition function in the complex plane. Later, the zeros in the complex temperature plane were studied as well; they also yield information about phase boundaries as well as critical exponents. For regular isotropic lattices, these zeros typically lie on simple curves (though they can fill 2D regions in anisotropic cases). But for hierarchical graphs, they generally form fractal structures$`^\text{6}`$. Non-periodic graphs with inflation symmetry may be regarded as a link between the extensively studied regular and hierarchical models. So one expects them to combine aspects of both classes. This is why we study the Ising model on certain aperiodic graphs based on substitution rules.
For the classical 1D Ising model on the Thue-Morse chain we will show the appearance of fractal structures in the patterns of magnetic field zeros – a consequence of non-commuting transfer matrices for this special aperiodic order. In 2D, however, these zeros did not show any interesting structures$`^\text{3}`$. Therefore, we present the temperature zeros of an Ising model on a quasiperiodic graph, namely the so-called Ammann-Beenker tiling. Its symmetry and the technique of corner transfer matrices allow an efficient numerical treatment of quite large patches.
## 2. Ising Model on the Thue-Morse Chain
Let us start with the discussion of a 1D chain of $`N`$ Ising spins $`\sigma _j\in \{\pm 1\}`$ with periodic boundary conditions ($`\sigma _{N+1}=\sigma _1`$). The energy of a configuration $`𝝈`$ reads:
$$E\left(𝝈\right)=-\sum _{j=1}^{N}\left(J_{j,j+1}\sigma _j\sigma _{j+1}+H_j\sigma _j\right).$$
(1)
Here, we consider a system with uniform magnetic field $`H_j=H`$ where the couplings $`J_{j,j+1}`$ take only two different values $`J_a,J_b`$ according to the two letters of the Thue-Morse chain. The latter is obtained through the substitution rule
$$S:\begin{array}{c}a\mapsto ab\\ b\mapsto ba\end{array}$$
(2)
where we consider the successive periodic systems obtained from the words $`a`$, $`ab`$, $`abba`$, $`abbabaab`$ and so on by cyclic closure. The partition function may be written as the trace of a product of $`2\times 2`$ transfer matrices$`^\text{4}`$. Introducing the notation $`z_{a,b}=\mathrm{exp}(2\beta J_{a,b})`$ and $`w=\mathrm{exp}(2\beta H)`$, where $`\beta `$ is the inverse temperature, the two elementary transfer matrices $`T_a`$ and $`T_b`$ read:
$$T_{a,b}=(wz_{a,b})^{-1/2}\left(\begin{array}{cc}wz_{a,b}& w^{1/2}\\ w^{1/2}& z_{a,b}\end{array}\right).$$
(3)
The recursion relation for the chain is the same as that for the transfer matrices, and so the essential part of the partition function is evidently a polynomial$`^{\text{2},\text{3}}`$ in the three variables $`z_a,z_b`$ and $`w`$.
In what follows, we restrict ourselves to the ferromagnetic regime (i.e. $`z_a,z_b\ge 1`$), focusing on the magnetic field zeros for fixed positive temperature. Due to the quite general Lee-Yang theorem$`^\text{8}`$ the magnetic field zeros are restricted to the unit circle. For a periodic chain the zeros can be calculated analytically$`^{\text{10}}`$, and they fill a connected part of the unit circle densely in the thermodynamic limit. The only gap is near the positive real axis, due to the fact that there is no phase transition at finite temperature in 1D.
In Fig. 1 we show the magnetic field zeros of the Thue-Morse chain in the $`w`$-plane in the ferromagnetic case $`z_a=3/2`$, $`z_b=100`$ for the periodic approximant of length $`2^8=256`$. As expected, the gap around the real axis near the point $`w=1`$ is still present. But there is, in fact, an infinite hierarchy of gaps, each with the well-known Lee-Yang edge singularity. It is an interesting property that these gaps (through the definition of a discrete step function along the unit circle) may be related to the gap labeling in the electronic spectrum of the Thue-Morse chain; for details see Refs. 2 and 5.
## 3. Ising Model on the Ammann-Beenker Tiling
Exact results for 2D quasiperiodic models are rather rare and generally restricted to very special cases. Even for systems with an inflation symmetry no exact renormalization is known for electronic systems or Ising models. So, we combine an exact calculation of a finite partition function with an investigation of its complex zeros as an approach to the thermodynamic limit.
For our 2D quasiperiodic Ising model, we have chosen the Ammann-Beenker tiling$`^\text{1}`$. It has only one kind of edge, which suggests a simple choice of equal couplings along all bonds. But it is even more important that the octagonal symmetry allows the application of corner transfer matrices$`^\text{4}`$. A patch may be built by repeating the indicated small sector $`16`$ times. The corresponding (rectangular) corner transfer matrix $`𝑴`$ is easy to calculate. Consequently the partition function $`Z(w,z)`$ is simply given by:
$$Z(w,z)=tr\left((𝑴^t𝑴)^8\right).$$
(4)
This simple structure allows the exact calculation of the partition function for large patches using algebraic manipulation packages. So our calculations were limited by the degree of the resulting polynomial partition function, as the numerical calculation of polynomial roots quickly becomes really involved.
In contrast to 1D, the magnetic field ($`w`$) zeros do not seem to contain any relevant new information: their angular distribution looks astonishingly regular in comparison to the 1D case, and no gap structure is visible. So, we concentrate on the temperature ($`z`$) zeros here and restrict ourselves to the case of zero magnetic field. In Fig. 3, the temperature zeros are shown for a growing sequence of patches with fixed boundary conditions, with Fig. 3c) corresponding to the patch shown in Fig. 2. In general the zeros do not appear to lie on simple curves. But those near the real axis directly contain information about the critical point and indirectly even about critical exponents (for which one would need to know the dependence of the zeros on the magnetic field). Fig. 3 shows alignments of zeros ($`Re(z)>1`$) converging towards two points of the real axis. The numerical values of the zeros closest to the critical ferromagnetic as well as antiferromagnetic couplings are given in Table 1. In the ferromagnetic case they are in very good agreement with the specific heat and center spin magnetization also obtained by numerical calculations. In comparison to the square lattice (where the critical coupling is $`z_c=1+\sqrt{2}`$), the local coordination looks a bit higher (though its average is strictly 4), and the critical coupling shows this by a slight decrease, in agreement with other results$`^\text{7}`$. As our graph is bipartite, the critical antiferromagnetic coupling is just the reciprocal of the ferromagnetic one. Due to the fixed boundary conditions, which are not appropriate for the antiferromagnetic case, we expect our numerical values to show large finite-size effects there.
## 4. Conclusion
Our 1D calculations showed the fractal structure of the magnetic field zeros for 1D non-periodic Ising models, while this structure seems to be absent in 2D. This, and the relation to a gap labeling, is quite similar to the situation for the electronic and vibrational spectra. For 2D Ising-like systems the investigation of partition function zeros yields valuable information about the critical point of models on quasiperiodic graphs. Their distribution is rather more complicated than for regular periodic graphs, but the location of the ferromagnetic phase transition clearly shows up where the zeros "pinch" the real axis.
## 5. References
1. R. Ammann, B. Grünbaum and G. C. Shephard, "Aperiodic Tiles", Discrete Comput. Geom. 8 (1992) 1–25.
2. M. Baake, U. Grimm and D. Joseph, “Trace Maps, Invariants, and Some of their Applications”, Int. J. Mod. Phys. B7 (1993) 1527–50; and: “Practical Gap Labeling”, in preparation.
3. M. Baake, U. Grimm and C. Pisani, “Partition Function Zeros for Aperiodic Systems”, J. Stat. Phys. 78 (1995) 285–97.
4. R. J. Baxter, Exactly Solved Models in Statistical Mechanics, Academic Press, London (1982).
5. J. Bellissard, “Spectral Properties of Schrödinger’s Operator with a Thue-Morse Potential”, in: Number Theory and Physics, eds. J.-M. Luck, P. Moussa and M. Waldschmidt, Springer Proceedings in Physics, vol. 47, Springer, Berlin (1990), p. 140–50.
6. B. Derrida, L. De Seze and C. Itzykson, “Fractal Structure of Zeros in Hierarchical Models”, J. Stat. Phys. 33 (1983) 559–69.
7. D. Ledue, D. P. Landau and J. Teillet, “Static critical behaviour of the ferromagnetic Ising model on the quasiperiodic octagonal tiling”, Phys. Rev. B51 (1995) 12523–30.
8. E. H. Lieb and A. D. Sokal, “A General Lee-Yang Theorem for One-Component and Multicomponent Ferromagnets”, Commun. Math. Phys. 80 (1981) 153–79.
9. T. D. Lee and C. N. Yang, “Statistical Theory of Equations of State and Phase Transitions. II. Lattice Gas and Ising Model”, Phys. Rev. 87 (1952) 410–9.
10. R. K. Pathria, Statistical Mechanics, Pergamon, Oxford (1972).
# UTCCP-P-61 Comparative Study of full QCD Hadron Spectrum and Static Quark Potential with Improved Actions
## I Introduction
With the progress over the last few years of quenched simulations of QCD, it has become increasingly clear that the quenched hadron spectrum shows deviations from experiment if examined at a precision better than 5–10%. For light hadrons the first indication was that the strange quark mass cannot be set consistently from pseudo scalar and vector meson channels in quenched QCD . For heavy quark systems calculations both with relativistic and non-relativistic quark actions have shown that the fine structure of quarkonium spectra cannot be reproduced on quenched gluon configurations. Most recently an extensive calculation by the CP-PACS collaboration found a systematic departure of both the light meson and baryon spectra from experiment . These results raise the question as to whether the discrepancies can be accounted for by the inclusion of dynamical sea quarks. It is therefore timely to study more thoroughly the effects of full QCD in order to answer this question.
Full QCD simulations are, however, computationally much more expensive than those of quenched QCD. Simple scaling estimates coupled with past experience suggest a hundred-fold or larger increase in the amount of computation for full QCD compared to quenched QCD with current algorithms. Since $`32^3\times 64`$ is a typical maximal lattice size for quenched QCD which can be simulated with high statistics on computers with a speed in the 10 GFLOPS range , reliable full QCD results are difficult to obtain on lattice sizes exceeding $`32^3\times 64`$ even with TFLOPS-class computers such as CP-PACS and QCDSP . Recalling that a physical lattice size of $`L\approx 2.5`$–3.0 fm is needed to avoid finite-size effects , the smallest lattice spacing one can reasonably reach at present is therefore $`a^{-1}\approx 2`$ GeV. Hence lattice discretization errors have to be controlled through simulations carried out at inverse lattice spacings below this value, e.g. in the range $`a^{-1}\approx 1`$–2 GeV. It is, however, known that with the standard plaquette and Wilson quark actions discretization errors are already of order 10% even for $`a^{-1}\approx 2`$ GeV. These observations suggest the use of improved actions for simulations of full QCD.
Studies of improved actions have been widely pursued in the last few years. Detailed tests of improvement for the hadron spectrum, however, have been carried out mostly within quenched QCD with only a few full QCD attempts . In particular, a systematic investigation of how gauge and quark action improvement, taken separately, affects light hadron observables has not been carried out in full QCD. Prior to embarking on a large scale simulation, we examine this question as the first subject of the full QCD program on the CP-PACS computer.
For a systematic comparison of action improvement we employ four possible types of action combinations, the standard plaquette or a renormalization-group improved action for the gauge part, and the standard Wilson or the improvement of Sheikholeslami and Wohlert for the quark part. Since effects of improvement are clearer to discern at coarser lattice spacings, we carry out simulations at an inverse lattice spacing of $`a^{-1}\approx 1`$ GeV with quark masses in the range corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.7`$–0.9. Results for the four action combinations are used for comparative tests of improvement on the light hadron spectrum and the static quark potential.
Another limiting factor for full QCD simulations is how close one can approach the chiral limit with present computing power. To investigate this question we take the action in which both gauge and quark parts are improved, and carry out simulations down to a quark mass corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$. In addition to exploring the chiral behavior of hadron masses, this simulation allows an examination of signs of string breaking in the static quark-antiquark potential.
In this article we present results of our study on the two questions discussed above, expounding on the preliminary accounts reported in Refs. . We begin with discussions on our choice of actions for our comparative studies in Sec. II. Details of the full QCD configuration generation procedure and measurements of hadron masses and potential are described in Sec. III. Results for the hadron masses are discussed in Sec. IV where, after a description of the chiral extrapolation or interpolation of our data, we examine the effects of action improvement for the scaling behavior of hadron mass ratios. In Sec. V we turn to discuss the static potential. The influence of action improvement on the restoration of rotational symmetry of the potential is examined, and the consistency of the lattice spacing determined from the vector meson mass and the string tension is discussed. In Sec. VI we report on our effort to approach the chiral limit, where our attempt to observe a flattening of the potential at large distances due to string breaking is also presented. We end with a brief conclusion in Sec. VII. Detailed numerical results on run performances, hadron masses and string tensions are collected at the end in Appendices A, B and C.
## II Choice of action
The discretization error of the standard plaquette gauge action is $`O(a^2)`$ while that of the Wilson quark action is $`O(a)`$. In principle one would only need to improve the quark action to the same order as the gauge action. On the other hand, violations of rotational invariance have been found to be strong for the plaquette gauge action at coarse lattice spacings . Hence improving the gauge action is still advantageous for coarse lattices. In this spirit we employ (besides the standard actions) improved actions both in the gauge and quark sectors in the forms specified below.
Let us denote the standard plaquette gauge action by P. Improving this action requires the addition of Wilson loops with a perimeter of six links or more. The number, the precise form and the coefficients of the added terms differ depending on the principle one follows for the improvement . In this study we test the action determined by an approximate block-spin renormalization group analysis of Wilson loops, denoted by R in the following, which is given by
$$S_g^𝐑=\frac{\beta }{6}\left(c_0W_{1\times 1}+c_1W_{1\times 2}\right),$$
(1)
where the $`1\times 2`$ rectangular shaped Wilson loop $`W_{1\times 2}`$ has the coefficient $`c_1=-0.331`$, and from the normalization condition defining the bare coupling $`\beta =6/g_0^2`$ follows $`c_0=1-8c_1=3.648`$.
The discretization error of the R action is still $`O(a^2)`$. The coefficients of $`O(a^2)`$ terms in physical quantities, however, are expected to be reduced from those of the plaquette action. Indeed, the quenched static quark potential calculated with this action was found to exhibit good rotational symmetry and scaling already at $`a^{-1}\approx 1`$ GeV , and so does the scaling of the ratio $`T_c/\sqrt{\sigma }`$ of the critical temperature of the pure gauge deconfining phase transition and the string tension $`\sigma `$ . The degree of improvement is similar to those observed for tadpole-improved and fixed point actions .
To improve the quark action we adopt the clover improvement proposed by Sheikholeslami and Wohlert , denoted by C in the following and defined by
$$D_{xy}^𝐂=D_{xy}^𝐖-\delta _{xy}c_{\mathrm{SW}}K\sum _{\mu <\nu }\sigma _{\mu \nu }F_{\mu \nu },$$
(2)
where $`D_{xy}^𝐖`$ is the standard Wilson quark matrix given by
$$D_{xy}^𝐖=\delta _{xy}-K\sum _\mu \{(1-\gamma _\mu )U_{x,\mu }\delta _{x+\widehat{\mu },y}+(1+\gamma _\mu )U_{x,\mu }^{\dagger }\delta _{x,y+\widehat{\mu }}\}$$
(3)
and $`F_{\mu \nu }`$ is the lattice discretization of the field strength,
$$F_{\mu \nu }=\frac{1}{8i}(f_{\mu \nu }-f_{\mu \nu }^{\dagger }),$$
(4)
where $`f_{\mu \nu }`$ is the standard clover-shaped combination of gauge links.
The complete removal of $`O(a)`$ errors requires a non-perturbative tuning of the clover coefficient $`c_{\mathrm{SW}}`$. This has been carried out for the plaquette gauge action in both quenched and two-flavor full QCD . A similar analysis for the R gauge action is yet to be made, however. In this study we compare three different choices:
* The tree level value $`c_{\mathrm{SW}}=1`$.
* The mean-field (MF) improved value $`c_{\mathrm{SW}}=P^{-3/4}`$ with $`P`$ the self-consistently determined plaquette average.
* A perturbative mean-field (pMF) improved value $`c_{\mathrm{SW}}=P^{-3/4}`$ with the plaquette $`P`$ calculated in one-loop perturbation theory. For the R gauge action $`P=1-0.8412\beta ^{-1}`$ .
For all three choices the leading discretization error in physical quantities is $`O(g_0^2a)`$. The magnitude of the coefficients of this term should be reduced in the cases of (b) and (c) as compared to (a). The one-loop value of $`c_{\mathrm{SW}}`$ has been recently reported to be $`c_{\mathrm{SW}}=1+0.678(18)/\beta `$ . This value is close to the pMF value $`c_{\mathrm{SW}}^{\mathrm{pMF}}=1+0.631/\beta +\mathrm{\cdots }`$. We also find that the one-loop value of $`P`$ reproduces the measured values from simulations within 10% for the R action. Hence the pMF value of the clover coefficient is similar to the MF value employed in (b). The advantage of the pMF choice is that it does not require a self-consistent tuning of $`c_{\mathrm{SW}}`$ for each choice of $`\beta `$ and $`K`$.
We carry out simulations employing either the plaquette (P) or rectangular action (R) for gluons, combining it with either the Wilson (W) or clover action (C) for quarks.
## III Simulations
### A choice of simulation parameters
We choose the coupling constant $`\beta `$ so that it gives an inverse lattice spacing of $`a^{-1}\approx 1`$ GeV. For each action combination we choose at least two values of $`\beta `$ to allow us to interpolate (or extrapolate) to a desired common lattice spacing.
Simulations are generally carried out at three values of the hopping parameter $`K`$ corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.7`$–0.9. The lattice size employed is $`12^3\times 32`$.
In Table I we give an overview of the calculations performed for the action comparison. Details of the simulation parameters at each run are collated in Appendix A. Our procedure for estimating the critical hopping parameter $`K_c`$, and the physical scale of lattice spacing either from the $`\rho `$ meson mass ($`a_\rho `$) or the string tension ($`a_\sigma `$) will be discussed in Sec. IV A and Sec. V C.
We take the RC<sub>pMF</sub> action at $`\beta =1.9`$ to explore how close one can take the calculation towards the chiral limit. For this study we employ two lattice sizes $`12^3\times 32`$ and $`16^3\times 32`$. In Table II we list the main features of these two runs whereas details can be found in Appendix A.
### B configuration generation and matrix inversion
Simulations are carried out for two flavors of dynamical quarks using the hybrid Monte Carlo (HMC) algorithm. The integration of molecular dynamics (MD) equations is made with the standard leapfrog scheme and with a step size $`\mathrm{\Delta }\tau `$ chosen to yield an acceptance ratio of 70–90% for trajectories of unit length. The actual values chosen for $`\mathrm{\Delta }\tau `$ in each case and the measured acceptance are given in Appendix A.
For the inversion of the fermion matrix we employed the minimal residue (MR) algorithm for our early simulations but switched later to BiCGStab. In both cases we use an even-odd preconditioning of the quark matrix $`D`$. $`D`$ can be decomposed into
$$D(K)=M-K(D_{eo}+D_{oe}),$$
(5)
where $`M`$ is defined on single sites only and the remaining part connects neighboring sites. For the Wilson quark action $`M`$ is a unit matrix, whereas for the clover action it is non-trivial in color and Dirac space. The even-odd preconditioning consists of solving the equation $`AG_e=B_e^{\prime }`$ where $`A=1-K^2M_e^{-1}D_{eo}M_o^{-1}D_{oe}`$ and $`B_e^{\prime }=M_e^{-1}\left(B_e+KD_{eo}M_o^{-1}B_o\right)`$ instead of the equation $`D(K)G=B`$. As an initial guess for the solution vector on even sites, the right-hand-side vector $`G_e=B_e^{\prime }`$ is used. The preconditioning requires the inversion of the local matrix $`M`$, which is trivial for the Wilson quark action. For the clover quark action we precalculate $`M^{-1}`$ and store it before the solver starts.
As a stopping condition for the matrix inversion during the fermionic force evaluation we generally use, on the $`12^3\times 32`$ lattice, the criterion
$$r_1=\|DG-B\|^2\le 10^{-10}$$
(6)
which we found to be approximately equivalent to the condition
$$r_2=\|DG-B\|/\|G\|\le 10^{-8}.$$
(7)
The actual stopping conditions chosen for each run and the number of iterations needed to reach this condition are listed in Appendix A. For the evaluation of the Hamiltonian we choose stricter stopping criteria for $`r_1`$ between $`10^{-14}`$ and $`10^{-18}`$.
A necessary condition for the validity of the HMC algorithm is the reversibility of the MD evolution . The CP-PACS computer, on which the present work is made, employs 64 bit arithmetic for floating point operations. Flipping the sign of momenta after a unit trajectory, with the stopping condition (7) above, we checked that, (i) the gauge link and conjugate momenta variables return to the starting values within a relative error of less than $`10^{-7}`$ on the average, and (ii) the relative error in the evaluation of the Hamiltonian is less than $`10^{-10}`$ (absolute error better than $`10^{-4}`$ for the $`16^3\times 32`$ lattice where the check was made) so that the effects in the accept/reject procedure are far below the level of statistical fluctuations.
At each simulated parameter we first run for 100–200 HMC trajectories of unit length for thermalization and then generate 500–1500 trajectories for measurements. Hadron propagators are measured on configurations separated by 5 trajectories. The static quark potential is measured on a subset of the configurations separated by either 5 or 10 trajectories. The detailed numbers are again given in Appendix A.
### C hadron mass measurement
We calculate quark propagators for the hopping parameter equal to that for the dynamical quarks used in the configuration generation. Two quark propagators are prepared for each configuration, one with the point source and the other with an exponentially smeared source with the smearing function $`\psi (r)=A\mathrm{exp}(-Br)`$. For the latter we fix the gauge configuration to the Coulomb gauge. The choice of the smearing parameters $`A`$ and $`B`$ is guided by previous quenched results for the pion wave function , readjusted by hand so that hadron effective masses reach a plateau as soon as possible.
Hadron propagators are constructed by combining quark propagators for the point (P) or the smeared (S) sources in various ways, but always adopting the point sink. For example, PS represents a meson propagator calculated with the point source for quark and the smeared source for antiquark. In Fig. 1 we show a typical example of effective masses for a variety of source combinations.
In most cases the effective masses for the SS (SSS for baryons) propagators come from below, show the best plateau behavior, and have the smallest statistical errors estimated with the jackknife procedure. We therefore determine hadron masses with a fit to SS (SSS) hadron propagators, supplementing results for other source combinations as a guide in choosing the fit range.
Hadron masses are extracted from propagators by employing a single hyperbolic cosine fit for mesons and a single exponential fit for baryons. We use uncorrelated fits and determine the error with the jackknife method. While our runs of at most 1500 HMC trajectories are not really long enough to carry out detailed autocorrelation analysis, examining the bin size dependence of the estimated error indicates a bin size of 5 configurations or 25 HMC trajectories to be a reasonable choice, which we adopt for all of our error analyses.
The hadron mass results for all our runs are collected in Appendix B.
### D potential measurement
We measure Wilson loops $`W(r,t)`$ both in the on- and off-axis directions in space. The spatial paths of $`W(r,t)`$ are formed by connecting one of the following spatial vectors repeatedly,
$$(1,0,0),(1,1,0),(2,1,0),(1,1,1),(2,1,1),(2,2,1).$$
(8)
We measure $`W(r,t)`$ up to $`r=6`$ and $`t=8`$ on the $`12^3\times 32`$ lattice, while we enlarge the largest spatial size to $`r=4\sqrt{3}`$ on the $`16^3\times 32`$ lattice in order to investigate the large distance behavior of the potential. The smearing procedure of Ref. is applied to the link variables, up to 6 times on the $`12^3\times 32`$ lattice and up to 8 times on the $`16^3\times 32`$ lattice, respectively. The Wilson loop is measured at every smearing step in order to choose the optimal smearing number for each value of $`r`$.
We extract the potential $`V(r)`$ and the overlap function $`C(r)`$ by a fully correlated fit of the Wilson loop to the form
$$W(r,t)=C(r)\mathrm{exp}\left[-V(r)t\right].$$
(9)
The optimum smearing number at each $`r`$ is determined by the condition that the overlap $`C(r)`$ takes the largest value smaller than 1.
Typical results for the effective mass defined by
$$m_{\mathrm{eff}}=\mathrm{ln}\left[W(r,t)/W(r,t+1)\right],$$
(10)
are shown in Fig. 2. We find that noise generally dominates over the signal for $`t>4`$. Thus we set the upper limit of the fitting range to $`t_{\mathrm{max}}=4`$. Since choosing the lower limit $`t_{\mathrm{min}}=1`$ leads to an increase of $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}`$ by 3–10 times compared to the choice $`t_{\mathrm{min}}=2`$ for most values of $`r`$ and simulation parameters, we fix the fitting range to be $`t=2`$–4.
The statistical error of $`V(r)`$ is estimated by the jackknife method. We find that a bin size of 30 HMC trajectories is generally sufficient to ensure stability of errors against bin size. We therefore adopt this bin size for all of our error estimates with potential data.
## IV Hadron spectrum
### A chiral fits
A basic parameter characterizing the chiral behavior of hadron masses is the critical hopping parameter $`K_c`$ at which the pseudo scalar meson mass $`m_{\mathrm{PS}}a`$ vanishes. Results for $`(m_{\mathrm{PS}}a)^2`$ exhibit deviations from a linear function in $`1/K`$, and hence we extract $`K_c`$ by assuming
$$(m_{\mathrm{PS}}a)^2=B_{\mathrm{PS}}\left(\frac{1}{K}-\frac{1}{K_c}\right)+C_{\mathrm{PS}}\left(\frac{1}{K}-\frac{1}{K_c}\right)^2.$$
(11)
The fitted values of the critical hopping parameter are listed in Table I and II.
Another important parameter is the vector meson mass $`m_\mathrm{V}a`$ in the chiral limit $`m_{\mathrm{PS}}a=0`$, which allows us to set the physical lattice spacing. We determine this quantity by a chiral fit of the vector meson mass in terms of the pseudo scalar meson mass, both of which are measured quantities. Our results for this relation show curvature (see Fig. 8 in Sec. VI A for an example), and hence for the fitting function we employ
$$m_\mathrm{V}a=A_\mathrm{V}+B_\mathrm{V}(m_{\mathrm{PS}}a)^2+C_\mathrm{V}(m_{\mathrm{PS}}a)^3,$$
(12)
where the cubic term is inspired by chiral perturbation theory.
A practical problem with this fit is that we have only three data points for most of our runs. We estimate systematic uncertainties in the extrapolation by repeating the fit without the cubic term to the two points of data for lighter quark masses. Results for the vector meson mass in the chiral limit, translated into the lattice spacing through $`a_\rho =A_\mathrm{V}/768\mathrm{M}\mathrm{e}\mathrm{V}`$, are listed in Table I and II.
Results for the nucleon and $`\mathrm{\Delta }`$ also show curvature in terms of $`m_{\mathrm{PS}}a`$. We therefore fit them employing a cubic polynomial without a linear term, as in Eq. (12) for the vector meson mass.
### B scaling of mass ratios
We show in Fig. 3 a compilation of our hadron mass results for the four action combinations in terms of the mass ratios $`m_\mathrm{N}/m_\mathrm{V}`$ and $`m_\mathrm{\Delta }/m_\mathrm{V}`$ as a function of $`(m_{\mathrm{PS}}/m_\mathrm{V})^2`$. In order to avoid overcluttering of points, we include results for only two values of $`\beta `$ per action combination. Furthermore, for the PC action combination the results with $`c_{\mathrm{SW}}=`$ MF are displayed whereas for the RC action results for $`c_{\mathrm{SW}}=`$ pMF are shown.
We observe two features in this figure. In the first instance, for each action combination the baryon to vector meson mass ratio decreases as the coupling decreases. This is a well-known trend of scaling violation for Wilson-type quark actions. Secondly, the magnitude of scaling violation, measured by the distance from the phenomenological curve (solid line in Fig. 3), has the order $`\mathrm{𝐏𝐖}>\mathrm{𝐑𝐖}>\mathrm{𝐏𝐂}>\mathrm{𝐑𝐂}`$. In particular the results for the PC and RC cases show a significant improvement over those for the PW and RW cases in that they lie close to the phenomenological curve even though the lattice spacing is as large as $`a_\rho ^{-1}\approx 1`$–1.3 GeV (see Tables I and II).
A point of caution, however, is that the lattice spacings for the data sets displayed in Fig. 3 do not exactly coincide. In order to disentangle effects associated with action improvement from those of a finer lattice spacing for each action, we need to plot results at the same lattice spacing.
One way to make such a comparison is to take a cross section of Fig. 3 at a fixed value of $`m_{\mathrm{PS}}/m_\mathrm{V}`$ and plot the resulting value of $`m_{\mathrm{N},\mathrm{\Delta }}/m_\mathrm{V}`$ as a function of $`m_\mathrm{V}a`$ at that value of $`m_{\mathrm{PS}}/m_\mathrm{V}`$. This requires an interpolation of hadron mass results, for which we employ the cubic chiral fits described in Sec. IV A and the jackknife method for error estimation.
In Fig. 4 we show results of this analysis for $`m_\mathrm{N}/m_\mathrm{V}`$ and $`m_\mathrm{\Delta }/m_\mathrm{V}`$ at $`m_{\mathrm{PS}}/m_\mathrm{V}=0.8`$. It is interesting to observe that the PW and RW results lie almost on a single curve, while the PC and RC results, respectively using the MF and pMF value of $`c_{\mathrm{SW}}`$, fall on a different, much flatter curve. This clearly shows that the improvement of the gauge action has little effect on decreasing the scaling violation in the baryon masses. The improvement is due to the use of the clover quark action for the PC and RC cases. An apparently better behavior of RW results in Fig. 3 compared to those for the PW case is merely an effect of the finer lattice spacing of the former.
We have commented in Sec. II that the values of $`c_{\mathrm{SW}}`$ for the MF and pMF cases are similar. This would explain why results for the PC action with the MF value of $`c_{\mathrm{SW}}`$ and those for the RC action with the pMF value of $`c_{\mathrm{SW}}`$ lie almost on a single curve. For both MF and pMF choices, the magnitude of $`c_{\mathrm{SW}}`$ is significantly larger than the tree-level value $`c_{\mathrm{SW}}=1`$. As is shown in Fig. 4 with open symbols, the degree of improvement with the tree-level $`c_{\mathrm{SW}}`$ is substantially less than that for the MF and pMF choices.
## V Static quark potential
### A restoration of rotational symmetry
In Fig. 5, we plot our potential data for the four action combinations at a quark mass corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.8`$ and $`a^{-1}\approx 1`$ GeV. We find a sizable violation of rotational symmetry in the PW case at this coarse lattice spacing. Looking at the potential for the PC case, we cannot observe any noticeable restoration of the symmetry. In contrast, a remarkable restoration of rotational symmetry is apparent in the RW and RC cases.
In order to quantify the violation of rotational symmetry and its improvement depending on the action choice, we consider the difference between the on-axis and off-axis potential at a distance $`r=3`$ defined by
$$\mathrm{\Delta }V=\frac{V\left(r=(3,0,0)\right)-V\left(r=(2,2,1)\right)}{V\left(r=(3,0,0)\right)+V\left(r=(2,2,1)\right)}.$$
(13)
We find that the value of $`\mathrm{\Delta }V`$ decreases monotonically as the sea quark mass decreases in most cases. We ascribe this trend to the fact that one effect of dynamical sea quarks is to renormalize the coupling toward a smaller value, which reduces the violation of rotational symmetry.
In order to make a comparison at the same quark mass, we estimate $`\mathrm{\Delta }V`$ at $`m_{\mathrm{PS}}/m_\mathrm{V}=0.8`$ by an interpolation as a linear function of $`(m_{\mathrm{PS}}a)^2`$. In Fig. 6 we plot results for $`\mathrm{\Delta }V`$ obtained in this way against the value of $`m_\mathrm{V}a`$ at $`m_{\mathrm{PS}}/m_\mathrm{V}=0.8`$. This figure confirms the qualitative impression from Fig. 5. Rotational symmetry is badly violated for the PW and PC cases, which is significantly improved by changing the gauge action as demonstrated by the small values of $`\mathrm{\Delta }V`$ for the RW and RC results. In contrast the effect of quark action improvement on the restoration of rotational symmetry appears to be small. This may not be surprising since dynamical quarks affect the static potential only indirectly through vacuum polarization effects.
### B string tension
The static potential in full QCD is expected to flatten at large distances due to string breaking. None of our potential data, which typically extend up to a distance of $`r\approx 1`$ fm, show signs of such a behavior, but rather increase linearly. As we discuss in more detail in Sec. VI this is probably due to a poor overlap of the Wilson loop operator with the state of a broken string. This suggests that we can extract the string tension from the present data for the potential $`V(r)`$ by assuming the form
$$V(r)=V_0-\frac{\alpha }{r}+\sigma r.$$
(14)
In practice we find that the Coulomb coefficient $`\alpha `$ is difficult to determine from the fit, even if we introduce the tree-level correction term corresponding to the one lattice gluon exchange diagram. This may be due to the fact that our potential data taken at coarse lattice spacings do not have enough points at short distance to constrain the Coulomb term. As an alternative we test a two-parameter fitting with a fixed Coulomb term coefficient $`\alpha _{\mathrm{fixed}}=0.1`$, 0.125, …, 0.475, and 0.5, using the fitting range $`r_{\mathrm{min}}`$$`r_{\mathrm{max}}`$ with $`r_{\mathrm{min}}=1`$, $`\sqrt{2}`$, $`\sqrt{3}`$ and $`r_{\mathrm{max}}=5`$–6. We find that the value of $`\chi ^2/\mathrm{d}.\mathrm{o}.\mathrm{f}`$ takes its minimum value around $`\alpha _{\mathrm{fixed}}=0.3`$$`0.4`$ for most fitting ranges and simulation parameters.
Based on this result, we extract the string tension by fitting the potential at large distances, where the linear behavior dominates, to the form (14) with a fixed Coulomb coefficient $`\alpha _{\mathrm{fixed}}=0.35`$. The shift of the fitted $`\sigma `$ over the range $`\alpha =0.3`$–$`0.4`$ is included in the estimate of the systematic error.
The result for the string tension $`\sigma `$ with this two-parameter fit is quite stable against variations of $`r_{\mathrm{max}}`$. It does depend more on $`r_{\mathrm{min}}`$, however. This leads us to repeat the two-parameter fit with $`\alpha _{\mathrm{fixed}}=0.35`$ over the interval of $`r_{\mathrm{min}}`$ listed in Appendix C, and determine the central value of $`\sigma `$ by the weighted average of the results over the ranges. The variance over the ranges is included in the systematic error of $`\sigma `$. We collate the final results for the string tension $`\sigma `$ in Appendix C.
### C consistency in lattice spacings
The scaling violation in the ratio $`m_\rho /\sqrt{\sigma }`$ leads to an inconsistency in the lattice spacings determined from the $`\rho `$ meson mass $`a_\rho `$ and the string tension $`a_\sigma `$ in the chiral limit. Thus, examination of this consistency provides another test of the effectiveness of improved actions. For the physical values we use $`m_\rho =768`$ MeV and $`\sqrt{\sigma }=440`$ MeV. We should note that the latter value is uncertain by about 5–10% since the string tension is not directly measurable in experiment.
The chiral extrapolation of the vector meson mass was already discussed in Sec. IV A. We follow a similar procedure for the chiral extrapolation of the string tension. Namely we fit results to a form
$$\sigma a^2=A_\sigma +B_\sigma (m_{\mathrm{PS}}a)^2+C_\sigma (m_{\mathrm{PS}}a)^3.$$
(15)
In most cases we find a quadratic ansatz ($`C_\sigma =0`$) to be sufficient, which we then adopt for all data sets. Results for the string tension in the chiral limit, converted to the physical scale of lattice spacing $`a_\sigma `$, are listed in Table I and II.
In Fig. 7 we plot $`m_\mathrm{V}a/768\mathrm{M}\mathrm{e}\mathrm{V}`$ and $`\sqrt{\sigma }a/440\mathrm{M}\mathrm{e}\mathrm{V}`$ as a function of $`(m_{\mathrm{PS}}a)^2`$ for the four action combinations with a similar lattice spacing $`a_\rho ^{-1}\approx 1`$–$`1.3`$ GeV determined from the vector meson mass. A distinctive difference between the results for the Wilson and the clover quark action is clear; while results for $`m_\mathrm{V}`$ and $`\sqrt{\sigma }`$ cross each other at heavy quark masses where $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.75`$–$`0.8`$ for the PW and RW cases, leading to a mismatch of $`a_\rho `$ and $`a_\sigma `$ in the chiral limit, the two sets of physical scales converge well toward the chiral limit for the PC and RC cases.
We expect the large discrepancy for the Wilson quark action to disappear closer to the continuum limit. This is supported by the results obtained at $`\beta =5.5`$ with $`a^{-1}\approx 2`$ GeV in Ref. . Our results show that the clover term helps to improve the consistency between $`a_\rho `$ and $`a_\sigma `$ already at $`a^{-1}\approx 1`$ GeV.
## VI Approaching the chiral limit
The analyses presented so far show that the RC action has the best scaling behavior for hadron masses and static quark potential among the four action combinations we have examined. We then take this action and attempt to lower the quark mass as much as possible.
Two runs are made at $`\beta =1.9`$, one on a $`12^3\times 32`$ lattice down to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.5`$, and the other on a $`16^3\times 32`$ lattice down to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$. We discuss results from these runs below.
### A hadrons with small quark masses
In Fig. 8 we plot the results of hadron masses as functions of $`(m_{\mathrm{PS}}a)^2`$. The existence of a curvature is observed, necessitating a cubic ansatz for extrapolation to the chiral limit. The lattice spacing determined from $`m_\rho =768`$ MeV equals $`a_\rho =0.20(2)`$ fm using mass results from the larger lattice. Hence the spatial size equals 2.4 fm ($`12^3\times 32`$) and 3.2 fm ($`16^3\times 32`$) for the two lattice sizes employed.
Finite-size effects are an important issue for precision determinations of the hadron mass spectrum. Our results in Fig. 8 do not show clear signs of such effects down to the second lightest mass, which corresponds to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.5`$. We feel, however, that it is premature to draw conclusions with the present low statistics of approximately 1000 trajectories.
The results for mass ratios are plotted in Fig. 9. While errors are large, and may even be underestimated because of the shortness of the runs, we find it encouraging that the ratios exhibit a trend of following the phenomenological curve toward the experimental points as the quark mass decreases. If we use the chiral extrapolation described above for the results on the $`16^3\times 32`$ lattice, we obtain $`m_\mathrm{N}/m_\mathrm{V}=1.342(25)`$ and $`m_\mathrm{\Delta }/m_\mathrm{V}=1.700(33)`$ at the physical ratio $`m_{\mathrm{PS}}/m_\mathrm{V}=0.1757`$, which are less than 10% off the experimentally observed ratios of 1.223 and 1.603, respectively, despite the coarse lattice spacing of $`a0.2`$ fm.
### B static potential at large distances
We have mentioned in Sec. V that our results for the static potential do not show signs of flattening, indicative of string breaking, up to a distance of $`r\approx 1`$ fm. Similar results have been reported by other groups . A possible reason for these results is that the potential data do not extend to large enough distances where string breaking becomes energetically favorable. Another related possibility is that the dynamical quark masses, which in most cases correspond to $`m_{\mathrm{PS}}/m_\mathrm{V}=0.7`$–0.9, are too heavy. With our runs on the $`16^3\times 32`$ lattice we can examine these points up to a distance of $`r\approx 2`$ fm and for quark masses down to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$.
In Fig. 10 we plot our potential data obtained on the $`16^3\times 32`$ lattice at the lightest sea quark mass corresponding to $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$. We find that the potential increases linearly up to $`r\approx 2`$ fm, without any clear signal of flattening. The situation is similar for our data at heavier sea quark masses.
An interesting and crucial question here is whether the Wilson loop operator has sufficient overlap with the ground state at large $`r`$ so that the potential in that state is reliably measured there . In Fig. 11 we compare results for the overlap function $`C(r)`$ for the full QCD run at $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$ with that obtained in a quenched run with the R gauge action on a $`9^3\times 18`$ lattice at $`\beta =2.1508`$ ($`a^{-1}\approx 1`$ GeV) . For the quenched run the overlap $`C(r)`$ of the smeared Wilson loop operator with the ground string state is effectively 100% at all distances. For full QCD, on the other hand, $`C(r)`$ significantly decreases as $`r`$ increases. Such a behavior of $`C(r)`$ is observed in all of our data including those taken with action choices other than RC. These results may be taken as a tantalizing hint that the Wilson loop operator develops mixings with states other than a single string, possibly a pair of static-light mesons in full QCD. We leave further investigations of this interesting question for future studies.
### C computer time
An important piece of practical information for full QCD is the computer time needed for the approach to the chiral limit. In Table III we assemble the relevant numbers for our runs on the $`16^3\times 32`$ lattice. These runs have been performed on a partition of 256 nodes, which is 1/8 of the CP-PACS computer. For a partition of this size, our full QCD program, written in FORTRAN with the matrix multiplication in the quark solver hand-optimized in the assembly language, sustains about 37% of the peak speed of 75 GFLOPS. Adding the CPU time per trajectory of Table III, we find that accumulating 5000 trajectories for each of the 6 hopping parameters for this lattice size would take about 160 days with the full use of the CP-PACS computer. Carrying out such a simulation is certainly feasible. For larger lattice sizes such as $`24^3\times 48`$, however, we would have to stop at $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.5`$ since the run at $`m_{\mathrm{PS}}/m_\mathrm{V}\approx 0.4`$ alone increases the computer time by a factor two. Let us add that the CPU time for a unit HMC trajectory increases roughly in proportion to $`(1/K-1/K_c)^{-1.6}`$ for the 4 smallest quark masses.
## VII Conclusions
In this paper we have presented a detailed investigation of the effect of improving the gauge and the quark action in full QCD. We have found that the consequence of improving either of the actions is different depending on the observable examined.
For the light hadron spectrum the clover quark action with a mean-field improved coefficient drastically improves the scaling of hadron mass ratios. Improving the gauge action, on the other hand, has almost no influence in this respect. The SW-clover action also has the good property that the physical scales determined from the vector meson mass and the string tension in the chiral limit of the sea quark are consistent already at $`a^{-1}\approx 1`$ GeV, which is not the case with the Wilson quark action.
We have also confirmed that the use of improved gauge actions leads to a significant decrease of the breaking of rotational symmetry of the static quark potential.
Finally, we have made an exploratory simulation toward the chiral limit employing a renormalization-group improved gauge action and a clover-improved quark action.
The results obtained in the present study suggest that a significant step toward a systematic full QCD simulation can be made with the present computing power using improved gauge and quark actions at relatively coarse lattice spacings of $`a^{-1}\approx 1`$–2 GeV.
###### Acknowledgements.
This work was supported in part by the Grants-in-Aid of the Ministry of Education (Nos. 08NP0101, 08640349, 08640350, 08640404, 08740189, 08740221, 09304029, 10640246 and 10640248). Two of us (GB, RB) were supported by the Japan Society for the Promotion of Science.
## A Run Parameters
In this appendix we assemble information about our runs. An overview of the runs has been given in Table I. For the inversion of the quark matrix either the MR algorithm (M) or the BiCGStab algorithm (B) is used with the stopping condition $`r_1^{\mathrm{stop}}`$ defined through Eq. (6). During the HMC update $`D^{\dagger }D`$ has to be inverted. We do this in two steps, first inverting $`D^{\dagger }`$ and then $`D`$. In the tables we quote the number of iterations $`N_{\mathrm{inv}}`$ needed for the first inversion of $`D^{\dagger }`$. Finally we also quote the statistics, giving the number of configurations for spectrum and potential measurements separately. Configurations for the hadron spectrum are separated by 5 HMC trajectories, whereas for the potential the separation is either 5 or 10 trajectories. Unless stated otherwise the lattice size is $`12^3\times 32`$.
## B Hadron Masses
In this appendix we assemble the results of our hadron mass measurements. We quote numbers for pseudo scalar and vector mesons, nucleons and $`\mathrm{\Delta }`$ baryons together with mass ratios against vector mesons. Additionally we quote numbers for the bare quark mass based on the axial Ward identity defined by
$$m_qa=m_{\mathrm{PS}}a\lim _{t\to \mathrm{\infty }}\frac{\sum _\stackrel{}{x}\langle A_4(\stackrel{}{x},t)P\rangle }{\sum _\stackrel{}{x}\langle P(\stackrel{}{x},t)P\rangle },$$
(B1)
where $`A_4`$ is the local axial current and $`P`$ is the pseudo scalar density. Masses are extracted with an uncorrelated fit to the propagator and the errors are determined with the jackknife method with bin size 5.
## C String Tension
# Length and time scale divergences at the magnetization-reversal transition in the Ising model
## Figure Captions
FIG. 1. Schematic time variation of the pulsed field $`h(t)`$ and the corresponding response magnetization $`m(t)`$ for two different cases. The solid line indicates the case of no magnetization-reversal, whereas the dashed line indicates a magnetization-reversal.
FIG. 2. Divergence of the relaxation time in the mean-field limit for $`T=0.8`$ and $`\mathrm{\Delta }t=20`$ (from the numerical solution of eqn. (4)). The solid line indicates the corresponding analytical estimate (eqn. (15)).
FIG. 3. Growth of the pseudo-correlation length $`\stackrel{~}{\xi }`$ for different system sizes in the Monte Carlo study on a square lattice of size $`L\times L`$.
# MULTIFRAGMENTATION OF NON-SPHERICAL NUCLEI: ANALYSIS OF CENTRAL Xe + Sn COLLISIONS AT 50 MeV$`\cdot `$A
Talk given at the XXVII International Workshop on Gross Properties of Nuclei and Nuclear Excitation, Hirschegg (Austria), January 17 - 23, 1999.
A. LE FÈVRE, M. PŁOSZAJCZAK and V.D. TONEEV
Grand Accélérateur National d’Ions Lourds (GANIL)
CEA/DSM – CNRS/IN2P3, BP 5027, F-14076 Caen Cedex 05, France
## Abstract
The influence of shape of expanding and rotating source on various characteristics of the multifragmentation process is studied. The analysis is based on the extension of the statistical microcanonical multifragmentation model. The comparison with the data is done for central $`Xe+Sn`$ collisions at $`50AMeV`$ as measured by INDRA Collaboration.
The multifragmentation process has been studied in a broad range of bombarding energies and for various types of projectiles. The reaction mechanism is often considered in terms of two-step scenario where the first, dynamical step results in the formation of thermalized source which then, in the second step, decays statistically into light particles and intermediate-mass fragments (IMF’s). Assuming that the thermal equilibrium is attained, various statistical multifragmentation models were employed for the second step (see and references quoted therein). These models were so successful that deviations between their predictions and the experimental data have been often taken as an indication for dynamical effects in the multifragmentation. However, one should be aware of several oversimplifying assumptions in the statistical calculations, such as , e.g., the spherical shape of the thermalized source. Indeed, one expects that the spherical shape is perturbed during the dynamical phase and the density evolution can give rise to complicated source forms . Even more important are the angular momentum induced shape instabilities which may cause large fluctuations of both the Coulomb barrier and the surface energy even for moderately high angular momenta ($`L40\mathrm{}`$). In this paper, the non-spherical fragmenting source is considered within the statistical model which is based on the MMMC method of the Berlin group . The observables sensitive to the source shape are discussed and preliminary comparison with the experimental data for $`E_{kin}(Z)`$ is presented.
An explicit treatment of the fragment positions in the occupied spatial volume allows for a direct extension of the MMMC method to the case of non-spherical shapes . The source deformation in this case is considered as an external constraint, similarly as the freeze-out volume. Below we shall discuss axially symmetric ellipsoidal configurations ($`R_x=R_yR_z`$) which are characterized by the ratio : $`=R_x/R_z`$. The freeze-out volume of deformed system is the same as that of an ’equivalent’ spherical system with the radius $`R_{sys}=(R_xR_yR_z)^{1/3}`$. Consequently, the statistical weights in the Metropolis scheme due to the accessible volume for fragments remain the same. The Coulomb energy in our model is calculated exactly for every multifragment configuration of non-spherical nucleus. The general scheme to account for the angular momentum in the calculation of statistical weights is the same as discussed in Ref. . The source deformation will change the moment of inertia of rotating system. In calculating the statistical decay of fragmenting system, the angular velocity of the source is added to the thermal velocity of each fragment. For each spatial configuration of fragments, part of the total energy goes into rotation and hence the temperature of the system will slightly fluctuate. We take also into account fluctuations of the moment of inertia arising from fluctuations in the positions of fragments and light particles.
In calculating all accessible states in the standard MMMC method , the source should be averaged with respect to spatial orientations of its axes. Some of these states will not be accessible if the angular momentum is conserved . We disentangle in the MMMC code the beam direction (the $`z`$-axis) and the rotation axis (the $`x`$-axis, perpendicular to the reaction plane). The rotation energy is then $`𝐋^2/2J_x=L_x^2/2J_x`$, where $`J_x`$ is the rigid-body moment of inertia with respect to the $`x`$ axis. Averaging over the polar angle $`\theta `$ is not consistent with angular momentum conservation. On the contrary, averaging over $`2\pi `$ in the angle $`\varphi `$ corresponds to averaging over the azimuthal angle of the reaction impact parameter and should be included. Averaging over the rotation angle $`\psi `$ around $`𝐋`$ depends on the relationship between the rotation time, $`\tau _{rot}=J_x/L_x`$, and a characteristic lifetime of the source, $`\tau _c`$. For high angular momenta, when $`\tau _{rot}\ll \tau _c`$, the full averaging over $`0\le \psi \le 2\pi `$ should be performed. In the opposite limit, when $`\tau _{rot}\gg \tau _c`$, only states with $`\psi \approx 0`$ are accessible.
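A minimal sketch of this rotational bookkeeping (the physical constants are standard; masses and positions are placeholders invented for illustration) recomputes the rigid-body moment of inertia for each sampled configuration, so that the rotational energy entering the statistical weight fluctuates from event to event:

```python
import numpy as np

HBARC = 197.327                      # MeV fm
AMU = 931.494                        # MeV (c = 1 units)

def moment_of_inertia_x(masses_amu, pos_fm):
    """Rigid-body moment of inertia about the x axis (perpendicular to the
    reaction plane): J_x = sum_i m_i (y_i^2 + z_i^2), in MeV fm^2."""
    m = AMU * np.asarray(masses_amu, dtype=float)
    return np.sum(m * (pos_fm[:, 1] ** 2 + pos_fm[:, 2] ** 2))

def rotation_energy(L_hbar, j_x):
    """E_rot = L^2 / (2 J_x) in MeV, for L given in units of hbar."""
    return (L_hbar * HBARC) ** 2 / (2.0 * j_x)

rng = np.random.default_rng(0)
pos = rng.uniform(-10.0, 10.0, size=(20, 3))   # placeholder positions (fm)
masses = rng.integers(4, 40, size=20)          # placeholder fragment masses (amu)
print(rotation_energy(40.0, moment_of_inertia_x(masses, pos)), "MeV")
```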
In central HI collisions, part of the excitation energy can be stored in the compression energy of the pre-formed source, which is transformed into the kinetic energy of fragments during the collective expansion . To get some insight into the influence of collective flow on the multifragmentation process, we mimic this effect by adding the blast velocity $`v_b`$ to the thermal velocity of each particle/fragment for every event simulated by the Metropolis method. The average collective energy of expansion, like the deformation energy, is not included in the value of the total excitation energy. We assume a simple scaling solution of non-relativistic hydrodynamics, which yields the radial velocity profile :
$$v_b(r)=v_0\left(r/R_0\right)^\alpha ,$$
(1)
where $`v_0`$ and $`R_0`$ are the strength and scale parameters respectively, and the exponent $`\alpha `$ characterizes the power-law profile function. $`R_0`$ corresponds to the system size at the beginning of the scaling expansion regime. Hence, $`R_0<R_{sys}`$, and we take $`R_0=0.7R_{sys}`$ in all studies. Strictly speaking, the profile function (1) describes the radial expansion of a spherical source. For an axially symmetric expansion, the velocity profile may be more complicated. But even in this case, the scaling solution (1) with $`0.5\le \alpha \le 2`$ was successfully applied to describe the velocity profile of the transverse expansion in high energy heavy-ion collisions . In the multifragmentation case, we are dealing with a non-spherical expansion of unstable matter and therefore $`\alpha `$ may be treated as a free parameter.
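As an illustration, the blast can be superposed on the thermal motion as in the sketch below; the positions, thermal velocities (in units of $`c`$) and parameter values are placeholders, not fitted quantities.

```python
import numpy as np

def add_blast(pos, v_thermal, v0, r0, alpha):
    """Add the radial blast profile of eq. (1), directed outward from the
    source centre, to the thermal velocity of each particle/fragment."""
    r = np.linalg.norm(pos, axis=1, keepdims=True)
    return v_thermal + v0 * (r / r0) ** alpha * pos / r

r_sys = 12.8                  # fm, effective source radius
r0 = 0.7 * r_sys              # scale parameter used in the text
rng = np.random.default_rng(1)
pos = rng.normal(scale=5.0, size=(20, 3))      # placeholder positions (fm)
v_th = rng.normal(scale=0.05, size=(20, 3))    # placeholder thermal velocities (c)
v = add_blast(pos, v_th, v0=0.1, r0=r0, alpha=2.0)
```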
Let us now consider the multifragmentation of $`{}_{}{}^{197}Au`$ with angular momentum $`L=40\hbar `$ and thermal excitation energy $`6AMeV`$. These parameters characterize an equilibrated source formed in central $`Xe+Sn`$ collisions at $`50AMeV`$ studied by the INDRA Collaboration . All calculations are carried out at the break-up density $`\rho \simeq \rho _0/6`$, which gives $`R_{sys}=12.8fm`$ for the effective radius of the $`{}_{}{}^{197}Au`$ source. In the following, we consider ellipsoidal forms with the ratio of axes $`\eta =0.6`$ (the prolate shape) and $`\eta =1/0.6=1.667`$ (the oblate shape). We have found that none of the observables related to the fragment-size distribution is sensitive to the source deformation at these high excitation energies. On the contrary, the c.m.s. angular distribution of the largest fragment ($`Z=Z_{max}`$) is a highly sensitive observable (see Fig. 1). In the absence of collective expansion, the angular distribution of $`Z_{max}`$ is isotropic for oblate configurations and has small forward-backward peaks for prolate configurations if the $`\psi `$-averaging is performed over the whole available interval $`0\le \psi \le 2\pi `$. For the ’frozen’ spatial configuration $`(\psi =0)`$, the shape effect is clearly seen: for the prolate form one finds strong forward-backward peaks, while for the oblate form the heaviest fragment is predominantly emitted in the sideward direction ($`\theta _{cm}=\pi /2`$), as in the hydrodynamic splashing scenario. The collective expansion ($`\alpha =2`$) enhances the ’deformation effect’. One may notice a strong increase of the forward and backward peaks in the prolate case and the appearance of a strong peak at $`\theta _{cm}=\pi /2`$ in the oblate case. Similar features can also be seen in the cumulative angular distributions of all IMF’s, but the relative amplitude of the ’deformation effect’ is smaller in that case.
A large sensitivity to the source shape can be seen in an event-by-event analysis using global variables, such as the sphericity, coplanarity, aplanarity and the flow angle $`\mathrm{\Theta }_{flow}`$ . In the latter case, the shape differences manifest themselves even for $`v_b=0`$. The whole effect is extremely sensitive to the collective expansion and is further enhanced for the ’frozen’ configuration ($`\psi =0`$). It should be mentioned that the $`\mathrm{\Theta }_{flow}`$-distribution is the only observable which explicitly depends on the angular momentum .
We have compared the model predictions with the experimental data for central $`Xe+Sn`$ collisions at $`50AMeV`$ . Various statistical multifragmentation codes assuming a spherical source shape (the standard MMMC , the SMM ) have been tried earlier to explain these data. All of them were successful in predicting the fragment partitions which, as stated above, do not provide a constraint on the source shape. On the other hand, these models failed to reproduce the kinematical observables such as the IMF’s angular distributions, the event shapes (sphericity, coplanarity, aplanarity), and the IMF’s average kinetic energies $`E_{kin}(Z)`$. In the latter case, the behaviour of the heaviest fragments could not be reproduced even when including radial expansion . To understand this issue, we have applied our model for different shapes, orientations and expansion profiles (following (1)), starting from a $`{}_{}{}^{197}Au`$ source with $`6AMeV`$ of thermal excitation energy and an angular momentum of $`L=40\hbar `$. All calculated events have been filtered with the INDRA software replica, and then selected with the experimental centrality condition: complete events (i.e., more than 80% of the total charge and momentum is detected) and $`\mathrm{\Theta }_{flow}\ge \pi /3`$. We consider first the average kinetic energies of IMF’s as a function of their charge. Results for different deformations $`\eta =0.6,1.667`$ and expansion parameterizations are compared in Fig. 2 with the experimental data . The calculations have been done for the ’frozen’ configuration ($`\psi `$=0). One can see that the data exclude the radial velocity profile with $`\alpha `$=1 and clearly favour the expanding prolate source with $`\alpha `$=2 and $`\psi \approx 0`$. These source characteristics have also been confirmed by the observables related to the event shape, like the sphericity, coplanarity and aplanarity distributions, for which an excellent fit of the data has been obtained. Finally, we have examined the experimental IMF’s and heaviest-fragment angular distributions, as well as the $`\mathrm{\Theta }_{flow}`$-distribution, which again clearly discriminate between prolate, spherical and oblate source shapes. On the basis of this whole set of observables, we believe we have strong evidence in central $`Xe+Sn`$ collisions at $`50AMeV`$ for the formation of a prolate, slowly rotating source with its orientation aligned with the beam axis.
In conclusion, we have extended the Berlin statistical multifragmentation code in order to include the effects of deformation, angular momentum and collective expansion. Due to the change in the Coulomb energy for the deformed freeze-out configuration, the ’deformation effect’ is clearly seen in the IMF’s angular distributions (the $`Z_{max}`$-angular distribution) as well as in the $`\mathrm{\Theta }_{flow}`$-distribution. A surprising interplay between the effects of non-spherical freeze-out shapes and the memory effects of the nonequilibrium phase of the reaction, such as the rotation and the collective expansion of the source, has been found. The influence of shape on the rotational properties of the system is not reduced merely to the modification of the moments of inertia. The limits on the averaging interval over the angle $`\psi `$ about the rotation axis, which are defined by the time scales involved, strongly affect the angular observables and are able to strongly enhance the ’deformation effect’. These constraints may be important for certain observables used in experimental procedures for selecting a specific class of events. Another striking finding is that the collective expansion allows one to disclose the source shape in the analysis using global variables as well as in the study of the $`Z`$-dependence of $`E_{kin}`$. All these experimental observables, including $`E_{kin}(Z)`$, are well reproduced assuming a strongly deformed ($`\eta =0.6`$), slowly rotating ($`L=40\hbar `$) and expanding source which has the radial velocity profile (1) with $`\alpha \simeq 2`$. Alongside the observables discussed here, it would be interesting to investigate the velocity and angular correlations between fragments, which are sensitive to the source shape at the freeze-out. Such work is now in progress .
## Acknowledgements
We are grateful to D.H.E. Gross for his encouragement and interest in this project. We also thank G. Auger, A. Chbihi and J.-P. Wieleczko for their interest in this work.
# Radial Transport of Molecular Gas to the Nuclei of Spiral Galaxies
## 1 Introduction
Radial transport of gas in galactic disks likely plays an important role in the formation and evolution of bulges. There are two aspects to the effect of gas transfer on bulges, in both of which stellar bars are involved. First, theories predict that bars efficiently transport interstellar gas to the nuclei of spiral galaxies, providing star-forming material to the bulge regions. Second, simulations have shown that the gas accumulation at a galactic center changes the gravitational potential and eventually destroys the bar (c.f., a review by Pfenniger in this workshop). Bulges may grow through this process by gaining stars from disks.
Observationally, evidence for bar-driven gas transport and for bar dissolution has been limited compared to the large amount of theoretical work. The pieces of observational evidence supporting bar-driven gas transport are the estimates of gas inflow rates in two barred galaxies using CO and NIR observations and dynamical models (Quillen et al. 1995; Regan & Vogel 1997), shallower metallicity gradients in barred than in unbarred galaxies (Zaritsky et al. 1994; Martin & Roy 1994), and larger H$`\alpha `$ luminosities in the nuclei of barred galaxies, presumably due to a larger amount of gas in barred nuclei (e.g., Ho et al. 1995). In order to further investigate the relation between bars, gas, and bulges, it is important to observe gas in many galaxies.
The NRO/OVRO CO imaging survey mapped the distribution of molecular gas in the central kiloparsecs of 20 ordinary nearby spirals using the millimeter arrays of the two observatories (Sakamoto et al. 1998, 1999). The 20 northern spiral galaxies were selected on the basis of inclination (face-on), lack of significant dynamical perturbation, and reasonable single-dish CO flux to allow high-resolution observations. No selection was made on starburst, nuclear activity, far-infrared luminosity, or galaxy morphology. The sample contains 10 barred (SB+SAB) and 10 unbarred (SA) spirals with a mean distance of 15 Mpc and with luminosities of order $`L^{}`$. Our aperture synthesis observations have a mean resolution of $`4^{\prime \prime }`$ (= 300 pc at 15 Mpc) and recovered most ($`70\pm 14`$ %) of the single-dish flux. We use the data to set constraints on the above theoretical predictions.
## 2 Central gas condensations
Most galaxies in our sample show strong condensations of CO at their centers. Fig. 1 shows the histogram of CO-derived masses of molecular gas within the central kiloparsec. The central gas masses are mostly in the range of 10<sup>8</sup>–10<sup>9</sup> $`M_{\odot }`$. It thus seems not unusual for a large gas-rich galaxy to have a condensation of such a large amount of gas at its center. The gas condensations generally have radial profiles which peak sharply toward the galactic centers, when observed with sub-kiloparsec resolutions. The distribution of radial scale lengths of CO is also shown in Fig. 1. The central scale length is defined as the radius at which a radial profile falls to $`1/e`$ of its maximum value, and is not much affected by the missing flux (15 % error at most). It is apparent that most galaxies have sub-kiloparsec scale lengths in their nuclear regions. The gas condensations are thus not simple extensions of the outer exponential disks, which usually have scale lengths larger than a few kpc. It is interesting to note that the highest mass of the gas condensations, 10<sup>9</sup> $`M_{\odot }`$, is comparable to the mass needed to destroy bars in simulations.
## 3 Higher gas concentrations in barred galaxies
In order to quantify the degree of gas concentration in disk galaxies, we compare in Fig. 2 the gas surface densities averaged over the central kiloparsec with those averaged over the optical galactic disks (i.e., $`R<R_{25}`$). The former are calculated from our data and the latter from the single-dish mapping data of the FCRAO survey (Young et al. 1995). The ratio of the two surface densities, $`f_{\mathrm{con}}\equiv \mathrm{\Sigma }_{\mathrm{gas}}^{R<500\mathrm{pc}}/\mathrm{\Sigma }_{\mathrm{gas}}^{R<R_{25}}`$, is an indicator of the gas concentration in the central kiloparsec. The concentration factor $`f_{\mathrm{con}}`$ shows a more than 100-fold variation in our sample.
Barred (i.e., SB+SAB) and unbarred (SA) galaxies are plotted with different symbols in Fig. 2. It is apparent that barred galaxies have higher concentration factors than unbarred galaxies. The difference is statistically significant according to the Kolmogorov-Smirnov test; the probability of observing the difference in $`f_{\mathrm{con}}`$ seen in Fig. 2 would be 0.007 if there were no difference between the two classes of galaxies.
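The computation behind this comparison is straightforward, as the following sketch shows; the surface densities below are invented placeholders, not the measured NRO/OVRO or FCRAO values.

```python
import numpy as np
from scipy.stats import ks_2samp

def f_con(sigma_central, sigma_disk):
    """Concentration factor: Sigma_gas(R < 500 pc) / Sigma_gas(R < R25)."""
    return np.asarray(sigma_central) / np.asarray(sigma_disk)

# Placeholder surface densities (Msun/pc^2) -- NOT the survey measurements.
fcon_barred = f_con([900.0, 400.0, 1500.0, 700.0], [8.0, 10.0, 6.0, 9.0])
fcon_unbarred = f_con([60.0, 30.0, 90.0, 45.0], [9.0, 7.0, 11.0, 8.0])

# Two-sample Kolmogorov-Smirnov comparison of the two classes.
stat, p_value = ks_2samp(fcon_barred, fcon_unbarred)
print(f"KS statistic = {stat:.2f}, p-value = {p_value:.4f}")
```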
We note that the conversion factor from CO flux to mass of molecular gas, $`X_{\mathrm{CO}}`$, is unlikely to produce the difference in $`f_{\mathrm{con}}`$ between barred and unbarred galaxies. The ratio $`f_{\mathrm{con}}`$ is independent of $`X_{\mathrm{CO}}`$ if the conversion factor has the same form of radial distribution in all galaxies, e.g., $`X_{\mathrm{CO}}(r)`$ being either $`a`$ (const.), $`ae^{br}`$, or $`ar^b`$ with the same radial scale $`b`$. The multiplier $`a`$ can differ from one galaxy to another without changing $`f_{\mathrm{con}}`$. A systematic difference in the radial profile of $`X_{\mathrm{CO}}`$ between barred and unbarred galaxies may exist if the conversion factor scales with metallicity. However, the shallower metallicity gradients in barred galaxies would make the apparent CO concentrations lower in barred galaxies; thus a correction for metallicity would only enhance the difference in $`f_{\mathrm{con}}`$ between the two types of galaxies. No other cause is known to create a systematic difference in the radial profiles of $`X_{\mathrm{CO}}`$ between barred and unbarred systems.
We used the classifications in the Third Reference Catalogue to distinguish barred and unbarred galaxies. It is possible that the classifications based on optical images missed small nuclear bars or misidentified open spiral arms as a bar. However, nuclear bars naturally have a smaller capacity for gas transport, and spiral arms masquerading as a bar create a global nonaxisymmetry in the gravitational potential as a bar does. Thus the optical classification is a qualitative index of the strength of nonaxisymmetry in the mass distribution and in the gravitational potential. We conclude therefore that galaxies with larger nonaxisymmetries (called ‘barred’ galaxies) have higher gas concentrations than galaxies with smaller nonaxisymmetries (i.e., ‘unbarred’ galaxies). (The gas concentrations in unbarred galaxies are low but $`\gtrsim 1`$. They may be due to bars that have since been destroyed, but they do not necessarily require bar dissolution, because they may also be due to viscous accretion of gas, to weak but finite nonaxisymmetries in those galaxies, or to the centrally peaked distribution of the stars that produce gas.)
## 4 Implications to the bar-dissolution scenario
The higher gas concentrations in barred galaxies are most likely due to radial transport of gas in the barred potentials. However, the transport of gas is not a sufficient condition to cause and maintain the higher gas concentrations in barred galaxies. It is also necessary that the molecular gas funneled to the galactic centers remains there in molecular form and that the bars responsible for the gas transport remain. These requirements set constraints on the relation between the rates of gas inflow and gas consumption, and also on the timescale for the possible bar dissolution.
First, the total amount of gas funneled to the center of a barred galaxy must be larger than the total amount of stars formed in the same region, because otherwise the higher gas concentration in the barred galaxy cannot be sustained. Dividing the total masses by the age of the bar, the above relation translates into the condition that the time-averaged rate of gas inflow must be larger than that of star formation. One may be able to estimate the time-averaged rate of star formation from an ensemble average of star formation rates in the centers of barred galaxies, thereby setting a lower limit on the mean gas inflow rate. If there are other ways of losing molecular gas, such as a gas outflow due to a starburst or accretion onto an active nucleus, then the lower limit becomes higher.
The second condition we can deduce is that the timescale of gas consumption in the central regions is longer than that of the possible bar dissolution. In other words, if bars are to be destroyed by the gas inflow of 10<sup>8</sup>–10<sup>9</sup> $`M_{\odot }`$ to the central kiloparsec, and if the bar dissolution is much quicker than the gas consumption in the central regions, then we would see currently unbarred but previously barred galaxies with the high central gas concentrations that destroyed their bars. The lack of such galaxies (i.e., unbarred spirals with $`f_{\mathrm{con}}\gtrsim 100`$) allows us to set the above condition on the timescale of bar dissolution.
Quantitative evaluation of the above conditions is hampered by the difficulty of accurately estimating star formation rates in galactic centers. The current star formation rates crudely estimated from H$`\alpha `$ in the centers of the sample galaxies are $`0.1`$–1 $`M_{\odot }`$ yr<sup>-1</sup>, which set a lower limit on the mass inflow rate. The consumption time of the gas concentrations is then 10<sup>8</sup>–10<sup>10</sup> yr. The lower value does not contradict the predicted timescale of bar dissolution, which is comparable to the dynamical time, or a few 10<sup>8</sup> yr. If the higher value applies in many spirals, then the bar dissolution must take a longer time than predicted, or will not happen at all for 10<sup>8</sup>–10<sup>9</sup> $`M_{\odot }`$ gas concentrations.
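The arithmetic behind these timescales is simply $`t=M_{\mathrm{gas}}/\mathrm{SFR}`$ over the quoted ranges:

```python
# Consumption timescale t = M_gas / SFR for the ranges quoted in the text.
for m_gas in (1e8, 1e9):            # gas mass in the central kpc (Msun)
    for sfr in (0.1, 1.0):          # central star formation rate (Msun/yr)
        print(f"M = {m_gas:.0e} Msun, SFR = {sfr:.1f} Msun/yr "
              f"-> t = {m_gas / sfr:.0e} yr")
```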
It seems worthwhile to compile more data on gas concentration and star formation to tighten the above constraints on mass transfer in galactic disks and on the fate of stellar bars. The index of gas concentration $`f_{\mathrm{con}}`$ may be usable as a tool to find unbarred galaxies that were once barred, and galaxies with young bars: the former should have concentration factors that are anomalously high for unbarred galaxies, and the latter factors that are anomalously low for barred galaxies. Observations of such galaxies would tell us about the evolution of disks, bars, and bulges.
###### Acknowledgements.
Stimulating conversations at the workshop with Drs. Norman, Pfenniger, Hasan, Regan, and Wada are acknowledged. K.S. was supported by JSPS grant-in-aid.
# Practical Methods for Ab Initio Calculations on Thousands of Atoms
## I Introduction
Over the last thirty years, the use of ab initio electronic structure techniques has become widespread in chemistry and physics. However, all traditional techniques are limited in the system size which they can treat (often defined by the number of atoms, $`N`$) by poor scaling of the computational effort required, which is generally at least $`𝒪(N^2)`$, if not worse. In the context of standard self-consistent field (SCF) quantum chemical methods such as Hartree-Fock theory or density functional theory (DFT), there are two demanding parts of a calculation: first, the build of the Fock matrix (or, in DFT, the Hamiltonian matrix), which can scale as $`𝒪(N^2)`$; second, the solution for eigenvectors of the Fock matrix, which scales as $`𝒪(N^3)`$ if performed as a diagonalisation; this only dominates for large values of $`N`$. An alternative to diagonalisation is iterative minimisation, which has $`𝒪(N^2)`$ scaling (one $`N`$ dependence comes from the eigenvectors spreading over all space, and the other from the number of eigenvectors, which depends on $`N`$); if, as is often the case, the eigenvectors must be orthogonalised to each other, this leads asymptotically to $`𝒪(N^3)`$ scaling. Whatever the cause of poor scaling, it results in a practical limit of a few hundred atoms in conventional ab initio techniques. The desire to model large systems, e.g. biomolecules or nanostructures, has seen a strong push in recent years to achieve linear scaling with system size.
The building of the Fock matrix (and specifically the Coulomb and exchange terms which are the most expensive) with linear scaling has been addressed recently by several groups; however, this aspect will not concern us here. Rather, we will consider linear scaling techniques (which also rely on iterative minimisation) for finding the self-consistent ground state of the system.
Recently, many linear scaling techniques have been proposed, which are all based on the search for the density matrix. They start from the observation that the density matrix between two points in space decays in some manner as those two points increase in separation. This is intuitively clear, as it is well-known that bonding is local. The result is that the electronic structure of an atom depends only on its local environment, so that if the overall size of the system changes, there is no effect on the local electronic structure; thus the effort required to solve for the whole system should be proportional to $`N`$. This is the foundation of all linear scaling electronic structure methods (with a few exceptions which will not concern us here).
The paper is divided up as follows: Section II describes the basic theory behind our $`𝒪(N)`$ DFT method, while Section III presents recent advances we have made in searching for the electronic ground state. The future directions the work will take are shown in Section IV, and the paper is concluded in Section V.
## II $`𝒪(N)`$ density functional theory
The density matrix within density functional theory (DFT) can be written as:
$$\rho (𝐫,𝐫^{\prime })=\sum _if_i\psi _i(𝐫)\psi _i^{\ast }(𝐫^{\prime }),$$
(1)
where $`\psi _i`$ is a Kohn-Sham eigenfunction, and $`f_i`$ is the occupancy of that eigenfunction. The key observation which underpins $`𝒪(N)`$ DFT is that DFT can be formulated in terms of $`\rho (𝐫,𝐫^{\prime })`$, and that the ground state can be found by minimising the total energy, $`E_{\mathrm{tot}}`$, with respect to $`\rho (𝐫,𝐫^{\prime })`$ subject to the condition that $`\rho (𝐫,𝐫^{\prime })`$ is idempotent (these statements are proved elsewhere). Idempotency means that $`\rho \rho =\rho `$, which is equivalent to the eigenvalues of $`\rho `$ (which are the occupation numbers $`f_i`$) being either zero or one. This is a crucial property for the density matrix, as it ensures that the density matrix is a projector – it is the operator which projects onto the space of occupied states.
Another important property of the density matrix is that it decays as the separation between points increases:
$$\rho (𝐫,𝐫^{\prime })\to 0\quad \mathrm{as}\quad |𝐫-𝐫^{\prime }|\to \mathrm{\infty }.$$
(2)
The fundamental reason for this decay is the loss of quantum phase coherence between distant points. The net result is that the local environment is all that is important in determining $`\rho (𝐫,𝐫^{})`$, and thus that the amount of information contained in $`\rho `$ is proportional to $`N`$. This decay property of the density matrix can be used to make the amount of information in the system strictly linear with the system size by imposing a constraint, and setting the density matrix to zero beyond a cutoff:
$$\rho (𝐫,𝐫^{\prime })=0,\quad |𝐫-𝐫^{\prime }|>R_c,$$
(3)
where $`R_c`$ is some cut-off radius. Solving for the energy with this constraint imposed, and that of idempotency, will lead to an upper bound on the ground state energy (the idempotency constraint on $`\rho `$ makes it the projection operator onto the occupied states, so the energy given by these states is the energy of the system under the constraint of a localised $`\rho `$; since DFT is variational, this extra constraint raises the energy above the true ground state energy, i.e. that without localisation, and so gives an upper bound to it); as $`R_c`$ is increased, the bound will converge to the true ground state. Clearly there is a balance to be struck between accuracy, which increases as $`R_c`$ is increased, and the complexity of the computational problem (e.g. the number of variational degrees of freedom associated with $`\rho `$), which also increases with $`R_c`$.
This exact formulation cannot be followed for practical methods, as $`\rho `$ is dependent on two vector positions. Instead, a further approximation is introduced, namely that $`\rho `$ be separable, so that it can be written in the form:
$$\rho (𝐫,𝐫^{\prime })=\sum _{i\alpha ,j\beta }\varphi _{i\alpha }(𝐫)K_{i\alpha ,j\beta }\varphi _{j\beta }(𝐫^{\prime }),$$
(4)
which is equivalent to requiring that $`\rho `$ only have a finite number of non-zero eigenvalues. The functions $`\varphi _{i\alpha }(𝐫)`$ are known as ‘localised orbitals’, where $`i`$ runs over atoms and $`\alpha `$ over localised orbitals on each atom. The matrix $`K_{i\alpha ,j\beta }`$ is the density matrix in the basis set of $`\{\varphi _{i\alpha }(𝐫)\}`$ (and is identical to the density matrix commonly seen in $`𝒪(N)`$ tight binding schemes; indeed many of these schemes can be used to solve for this matrix within $`𝒪(N)`$ DFT).
The localisation of $`\rho `$ is accomplished by setting the localised orbitals to be non-zero only inside a certain radius, $`R_{\mathrm{reg}}`$, centred on the atoms $`i`$. A spatial cutoff (not generally of the same value) must also be imposed on the matrix $`K`$, so that $`K_{i\alpha ,j\beta }=0`$ once atoms $`i`$ and $`j`$ are more than a certain distance apart.
### A A Specific Implementation: CONQUEST
The framework described above is completely general; we will now concentrate on our specific implementation, Conquest. This is based on the pseudopotential approach to DFT (described briefly in Appendix A), and has been constructed so as to be as accurate as conventional ab initio pseudopotential calculations, which use plane waves as a basis set.
In practice, there are various issues which must be addressed: minimising the total energy with respect to $`K_{i\alpha ,j\beta }`$ while maintaining idempotency and spatial localisation; representing the localised orbitals; the cutoffs required to achieve good convergence to the true ground state; and practical questions, including integration and implementation on parallel computers. These have been addressed in detail elsewhere; we present a brief summary here.
The imposition of idempotency on $`K`$ during the minimisation of $`E_{\mathrm{tot}}`$ with respect to $`K_{i\alpha ,j\beta }`$ is the hardest constraint to maintain. There are several proposed means of accomplishing this; the method described here is based on the purification technique of McWeeny, recently used in tight binding calculations by Li, Nunes and Vanderbilt, and described in detail in Section III A. It requires $`K`$ to be written in terms of an ‘auxiliary’ density matrix, $`L`$:
$`K=3LSL-2LSLSL,`$ (5)
where $`S`$ is the overlap matrix:
$$S_{i\alpha ,j\beta }=\int d𝐫\varphi _{i\alpha }(𝐫)\varphi _{j\beta }(𝐫).$$
(6)
The localisation of $`K`$ is then imposed as a spatial cutoff on $`L`$:
$$L_{i\alpha ,j\beta }=0,\quad |𝐑_i-𝐑_j|>R_\mathrm{L},$$
(7)
where $`𝐑_i`$ is the position of atom $`i`$ and $`R_\mathrm{L}`$ is a cutoff radius. The energy is then minimised with respect to the matrix elements $`L_{i\alpha ,j\beta }`$ using the standard conjugate gradients technique. The effect of the purification is to make $`K`$ more nearly idempotent given an $`L`$ which is nearly idempotent.
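A minimal numerical check of this construction (dense random matrices with no spatial cutoff, and an overlap, Hamiltonian and trial $`L`$ invented for the test) verifies that the purified $`K`$ is closer to idempotency in the $`S`$-metric sense, $`KSK=K`$, than the input $`L`$:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, nocc = 30, 10
sym = lambda a: 0.5 * (a + a.T)
S = np.eye(n) + 0.05 * sym(rng.standard_normal((n, n)))   # model overlap (SPD)
H = sym(rng.standard_normal((n, n)))                      # model Hamiltonian
_, C = eigh(H, S)                             # generalised problem H c = e S c
L_exact = C[:, :nocc] @ C[:, :nocc].T         # satisfies L S L = L exactly
L = L_exact + 0.01 * sym(rng.standard_normal((n, n)))     # nearly idempotent trial

K = 3 * L @ S @ L - 2 * L @ S @ L @ S @ L     # eq. (5)
print("|LSL - L| =", np.linalg.norm(L @ S @ L - L))
print("|KSK - K| =", np.linalg.norm(K @ S @ K - K))       # noticeably reduced
```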
The next issue to consider is the representation of the localised orbitals. The lessons learnt from the use of plane waves in conventional pseudopotential calculations are helpful here. First, plane waves offer systematic convergence of the energy with respect to basis set completeness, achieved with a single parameter (the cutoff energy). Second, they are bias free - that is, they are completely flexible, and no knowledge of the kind of bonding in the system is required. If possible, the choice of basis set should reflect these qualities.
Conquest represents the localised orbitals in a real-space basis. There are various possibilities for an efficient, real-space basis. One is to use the spherical equivalent of plane waves, that is spherical Bessel functions $`j_l(𝐫)`$ combined with spherical harmonics $`Y_m^l(𝐫)`$, within each localisation region. This representation has been discussed by Haynes and Payne, but practical results have not yet been reported. Another $`𝒪(N)`$ DFT scheme uses pseudo-atomic orbitals as the basis functions, with considerable success. An alternative is to represent the $`\varphi _{i\alpha }`$ by their values on a grid, and to calculate matrix elements of the kinetic energy by taking finite differences. This technique is well established in conventional first principles calculations, and has been investigated in the context of $`𝒪(N)`$ techniques by us and recently by Hoshi and Fujiwara. At present we use a basis of B-splines, $`\mathrm{\Theta }(𝐫)`$, (also called blip functions) in the expansion:
$$\varphi _{i\alpha }(𝐫)=\sum _sb_{i\alpha s}\mathrm{\Theta }(𝐫-𝐑_{is}),$$
(8)
where the B-splines are piecewise polynomial functions (continuous up to the second derivative) which are strictly localised around the points of a grid (denoted $`𝐑_{is}`$ above) which is rigidly attached to each atom (known as the blip grid). The energy is then minimised with respect to the coefficients of the B-splines, $`b_{i\alpha s}`$.
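A one-dimensional sketch of this expansion is given below; the grid spacing and blip coefficients are arbitrary placeholders, whereas in Conquest the coefficients are variational parameters and the blip grid is three-dimensional.

```python
import numpy as np

def cubic_bspline(x):
    """Standard cubic B-spline kernel on a unit grid: support |x| < 2,
    continuous up to the second derivative."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    inner = ax <= 1.0
    outer = (ax > 1.0) & (ax < 2.0)
    out[inner] = 2.0 / 3.0 - ax[inner] ** 2 + 0.5 * ax[inner] ** 3
    out[outer] = (2.0 - ax[outer]) ** 3 / 6.0
    return out

h = 0.5                                  # blip-grid spacing (arbitrary units)
blip_grid = h * np.arange(-4, 5)         # grid rigidly attached to one atom
b = np.exp(-blip_grid ** 2)              # placeholder blip coefficients
x = np.linspace(-3.0, 3.0, 601)          # fine grid on which phi is evaluated
phi = sum(bs * cubic_bspline((x - xs) / h) for bs, xs in zip(b, blip_grid))
```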
Having described the minimisation of the total energy with respect to both the $`K`$ matrix and the localised orbitals, it is now appropriate to consider the practical performance of the method: are the cutoffs required to achieve convergence to the ground state small enough to be practical ? Tests on a model, local pseudopotential and standard non-local pseudopotentials have already been reported, and show that for reasonable cutoffs, good convergence is obtained. We reproduce some of these results in Figure 1. Fig. 1(a) shows the calculated total energy as a function of $`R_{\mathrm{reg}}`$ for Si. The results show that $`E_{\mathrm{tot}}`$ converges to the correct value extremely rapidly once $`R_{\mathrm{reg}}`$ is greater than 4 Å. For this radius, each localisation region contains 17 neighbouring atoms, and the calculations are perfectly manageable. Fig. 1(b) shows the total energy for $`R_{\mathrm{reg}}`$ = 2.715 Å, as a function of $`R_L`$. Rather accurate convergence to the $`R_\mathrm{L}=\mathrm{}`$ value is obtained for $`R_\mathrm{L}`$ 8 Å, which again is acceptable. No value is shown for exact diagonalisation because of technical difficulties in performing comparisons.
To perform integrations such as $`S_{i\alpha ,j\beta }=\int d𝐫\varphi _{i\alpha }(𝐫)\varphi _{j\beta }(𝐫)`$, numerical integration on a grid is used. This integration grid is generally of different spacing to the blip grid (and normally about twice as fine). Most matrix elements are found by integration on this grid (with the exception of fast-varying quantities which are calculated analytically), and the localised orbitals are projected from the blip grid to the integration grid in a manner similar to a fast Fourier transform (FFT), called a blip-to-grid transform. Once the charge density is calculated on the grid (as $`n(𝐫)=\rho (𝐫,𝐫)`$), the Hartree potential and energy are found using FFTs.
Conquest has been designed with parallel computers in mind; here we summarize the strategy. Each processor has three responsibilities. First, a group of atoms: it holds the blip coefficients, $`b_{i\alpha s}`$, and the derivatives of the energy with respect to them, $`\partial E_{\mathrm{tot}}/\partial b_{i\alpha s}`$, and is responsible for performing the blip-to-grid transforms for these atoms. It also stores the rows of matrices corresponding to these atoms and performs the matrix multiplications for these rows. Second, a domain of integration grid points: it has responsibility for calculating contributions to matrix elements arising from sums over these points, and for holding the electron density and the Kohn-Sham potential on these points. Third, part of the spatial FFT for the Hartree potential: it deals with a set of columns in the $`x,y`$ or $`z`$ directions. All processors switch between tasks in a concerted manner.
To test the efficiency of the scheme, we have extensively tested its scaling properties. There are two completely different kinds of scaling: parallel scaling (i.e. the scaling of CPU time for a given system with varying numbers of processors) and inherent scaling (i.e. the scaling of CPU time for a fixed number of processors as the system size varies). In the present implementation of Conquest, both types of scaling are excellent.
The overall Conquest scheme can be summarised as follows: the ground state energy and density matrix of the system are found by minimising the energy $`E_{\mathrm{tot}}`$ with respect to the elements of the auxiliary density matrix, $`L_{i\alpha ,j\beta }`$, and the localised orbitals, $`\varphi _{i\alpha }(𝐫)`$, subject to the spatial cutoffs $`R_L`$ and $`R_{\mathrm{reg}}`$. This yields an upper bound to the true ground state, which improves as the cutoffs are increased.
## III Strategies for reaching the ground state
Having described the specific manner in which Conquest is implemented, we now consider ways of reaching the ground state efficiently and robustly. At present, the minimisation is carried out in three separate stages: first, the ground state density matrix is found for a given set of localised orbitals; second, self-consistency is achieved between the charge density and the potential (which includes further density matrix minimisation in response to the new potential); finally, the form of the localised orbitals is changed in accordance with the gradient of the energy. The inner loops (density matrix minimisation and self-consistency) are then repeated. Schemes for efficiency and robustness for these three stages are now discussed.
### A Density matrix minimisation
As has already been emphasised, the density matrix of the true electronic ground state is idempotent. This important property is hard to impose during a minimisation; however, a number of schemes which drive the matrix towards idempotency have been suggested. For simplicity, we will consider these schemes in the framework of orthogonal tight binding theory; the extension to the non-orthogonal case and DFT is simple enough. The first scheme was proposed by McWeeny, who noted that if a matrix $`\rho `$ is close to idempotency, then the matrix $`\stackrel{~}{\rho }`$ given by:
$$\stackrel{~}{\rho }=3\rho ^2-2\rho ^3$$
(9)
will be more nearly idempotent. It has the effect of driving the eigenvalues of $`\rho `$ towards zero and one (this can be seen by considering the function $`3\lambda ^2-2\lambda ^3`$, which is shown in Figure 2). If this transformation (often called the McWeeny transformation or purification transformation) is applied iteratively (writing $`\rho _{n+1}=3\rho _n^2-2\rho _n^3`$ for iteration $`n+1`$), then the sequence of matrices generated will converge on an idempotent matrix. In fact, this transformation is quadratically convergent (i.e. if the idempotency error in $`\rho `$ is $`\delta \rho `$, then the idempotency error in $`\stackrel{~}{\rho }`$ is of order $`\delta \rho ^2`$).
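The quadratic convergence is easy to verify numerically. The sketch below uses an orthogonal basis, dense matrices and a randomly perturbed projector as the trial $`\rho `$ (none of which reflects an $`𝒪(N)`$ implementation); the printed idempotency error is roughly squared at each step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
projector = Q[:, : n // 2] @ Q[:, : n // 2].T          # exactly idempotent
rho = projector + 0.01 * (lambda a: 0.5 * (a + a.T))(rng.standard_normal((n, n)))

for it in range(5):
    print(f"step {it}: |rho^2 - rho| = {np.linalg.norm(rho @ rho - rho):.3e}")
    rho = 3 * rho @ rho - 2 * rho @ rho @ rho          # McWeeny transformation
```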
Palser and Manolopoulos have recently suggested using this iterative scheme in an $`𝒪(N)`$ manner. They point out that if the initial density matrix is a linear function of the Hamiltonian, with eigenvalues between zero and one, then the iteration will converge to the correct ground state density matrix (given by $`\theta (\mu -H)`$, where $`\theta (x)`$ is the Heaviside step function ($`\theta =1`$ for $`x>0`$ and $`\theta =0`$ for $`x<0`$) and $`\mu `$ is the chemical potential for electrons, or the Fermi energy), and that the energy, $`E=2\mathrm{Tr}[\rho _nH]`$, will decrease monotonically at each step. This procedure has the advantage of being fast (it only requires two matrix multiplies per iteration) and efficient (it converges quadratically). Unfortunately, when a localisation criterion (also called a truncation) is applied to the density matrix to achieve linear scaling, the monotonic decrease of energy will fail at some point in the iterative search. This can be taken as an indication that truncation errors are dominating the calculation, and that the search should be stopped; indeed, if it is not stopped, there is no guarantee that it will continue to converge towards an idempotent matrix. This is a heuristic criterion for stopping the iteration, and has the drawback that the method will not be variational, so that analytic forces will not be in agreement with the numerical gradient of the energy.
The Li, Nunes and Vanderbilt (LNV) scheme for achieving the ground state density matrix also uses the McWeeny transformation, though in a rather different manner. Here, the energy is written as $`E=2\mathrm{T}\mathrm{r}[\stackrel{~}{\rho }H]`$, with $`\stackrel{~}{\rho }`$ given by equation 9. Then the energy is minimised with respect to the elements of $`\rho `$, typically using a scheme such as conjugate gradients to generate a sequence of search directions. The localisation of the density matrix is achieved by applying a spatial cutoff to the elements of $`\rho `$. This scheme has at least two advantages: first, each line minimisation can be performed analytically, as the energy is cubic in $`\rho `$; second, it is variational, so that the energy found is always an upper bound to the ground state, and forces obey the Hellmann-Feynman theorem and are in exact agreement with the numerical derivative of the energy.
However, there are drawbacks to the LNV technique. First, it is unclear what the best initial value should be for the density matrix; typically, it is taken to be $`\frac{1}{2}`$I, or $`\frac{1}{2}`$S<sup>-1</sup> in a non-orthogonal basis set. Second, as the McWeeny transformation is a cubic, it is unbounded from below, and a poor starting choice for the minimisation can lead to runaway solutions; a sign of this is typically that the cubic has complex extrema. Third, the scheme can be poorly convergent, and is not guaranteed the quadratic convergence of the McWeeny method.
We have recently proposed a hybrid between the McWeeny and LNV schemes which builds on the complementarity between these two methods. It is based on the observation that the sequence of matrices generated during a McWeeny iterative search converges quadratically towards idempotency, and that the LNV search direction maintains idempotency in the density matrix to first order. Thus the McWeeny scheme is used as an initialisation to find an idempotent density matrix (but one which is not the ground state matrix because of truncation errors); this matrix is then used as the input to the LNV scheme, which maintains the idempotency while searching efficiently for the ground state density matrix. The combination of the two methods is both variational and robust - two highly desirable attributes for the inner loop of an $`𝒪(N)`$ DFT method.
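A skeleton of the hybrid scheme, in the orthogonal tight-binding setting of this section, is sketched below. The gapped model Hamiltonian, electron count, step length and iteration counts are all illustrative assumptions; the matrices are dense (no truncation is applied), and the LNV stage uses plain steepest descent on $`2\mathrm{Tr}[\stackrel{~}{\rho }(H-\mu I)]`$ rather than conjugate gradients with analytic line minimisation.

```python
import numpy as np

def mcweeny(rho, nsteps=5):
    """Stage 1: purification drives rho towards idempotency."""
    for _ in range(nsteps):
        rho = 3 * rho @ rho - 2 * rho @ rho @ rho
    return rho

def lnv_step(rho, h_mu, step=5e-3):
    """Stage 2: one steepest-descent step on 2 Tr[(3 rho^2 - 2 rho^3)(H - mu I)]."""
    r2 = rho @ rho
    grad = 2.0 * (3.0 * (rho @ h_mu + h_mu @ rho)
                  - 2.0 * (r2 @ h_mu + rho @ h_mu @ rho + h_mu @ r2))
    return rho - step * grad

rng = np.random.default_rng(1)
n, nocc = 40, 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eps = np.sort(np.concatenate([rng.uniform(-2.0, -1.0, nocc),
                              rng.uniform(1.0, 2.0, n - nocc)]))
H = Q @ np.diag(eps) @ Q.T                    # model Hamiltonian with a gap at 0
mu = 0.0                                      # chemical potential in the gap
h_mu = H - mu * np.eye(n)

lam = 0.5 / np.max(np.abs(eps - mu))          # initial guess linear in H, with
rho = 0.5 * np.eye(n) - lam * h_mu            # eigenvalues in [0, 1] (cf. PM)

rho = mcweeny(rho)                            # stage 1: near-idempotent seed
for _ in range(200):                          # stage 2: LNV minimisation
    rho = lnv_step(rho, h_mu)

E = 2.0 * np.trace((3 * rho @ rho - 2 * rho @ rho @ rho) @ H)
print(E, "vs exact", 2.0 * eps[:nocc].sum())
```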
As an example of the improved speed of convergence given by the hybrid scheme, Figure 3 shows the convergence to the ground state energy in diamond carbon for the LNV stage of the hybrid scheme and pure LNV (initialised from $`\rho =\frac{1}{2}𝐈`$). These results show that the McWeeny stage of the hybrid scheme gets closer to the ground state as the radius is increased, as expected, and that it acts as an excellent initial density matrix for the LNV scheme. The method has also been tested on a vacancy in diamond C, the Si(001) surface and liquid Si, as reported elsewhere.
### B Non-orthogonality
The previous section focuses on the traditional tight binding scheme where the localised orbitals are taken to be orthogonal. However, in our $`𝒪(N)`$ method, the orbitals are non-orthogonal, and this introduces a significant degree of complication. In strict mathematical terms, the metric for the space spanned by the localised orbitals must be chosen with care; this means defining a scalar product in a specific way, as explained in detail in Appendix B, along with other implications of using non-orthogonal orbitals. We will consider a few simple implications of this theory here.
For any given set of non-orthogonal orbitals, {$`\overline{\varphi }_i`$}, an orthogonal set can be defined by using the overlap matrix, $`S`$:
$$\varphi _i=\sum _j(S^{-1/2})_{ij}\overline{\varphi }_j,$$
(10)
where we use an over-bar to indicate the quantities in the non-orthogonal case. If the metric is chosen suitably, then similar transformations between the Hamiltonian and density matrices in the two spaces can be defined:
$`\overline{H}`$ $`=`$ $`\overline{S}^{1/2}H\overline{S}^{1/2}`$ (11)
$`\overline{\rho }`$ $`=`$ $`\overline{S}^{-1/2}\rho \overline{S}^{-1/2}.`$ (12)
There are various points which can be drawn from the above equations. First, the McWeeny transformation (and the other quantities associated with a minimisation, such as the gradient of the energy) will change in the new basis set; in fact the McWeeny transformation becomes $`\stackrel{~}{\overline{\rho }}=3\overline{\rho }S\overline{\rho }-2\overline{\rho }S\overline{\rho }S\overline{\rho }`$. Second, there are different types of matrix in the non-orthogonal case, one of which transforms with $`\overline{S}^{1/2}`$ and the other with $`\overline{S}^{-1/2}`$ (in fact there are two types of orbital and hence four types of matrix); great care must be taken to combine these matrices in the correct fashion, as pointed out by White et al. for the case of the gradient of the energy with the non-orthogonal density matrix. Third, there may be a considerable advantage in choosing the metric so that the transformations shown in equation 12 apply, as this will enable direct comparison with the orthogonal case. The interested reader is referred to Appendix B and references therein for more details.
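These transformations are easy to exercise numerically. The following sketch (with random model matrices, and $`S^{\pm 1/2}`$ built from an eigendecomposition) checks the basis-set independence of the band energy, $`\mathrm{Tr}[\overline{\rho }\overline{H}]=\mathrm{Tr}[\rho H]`$:

```python
import numpy as np

def mat_power(S, p):
    """S^p for a symmetric positive-definite S, via eigendecomposition."""
    w, U = np.linalg.eigh(S)
    return U @ np.diag(w ** p) @ U.T

rng = np.random.default_rng(2)
n = 20
sym = lambda a: 0.5 * (a + a.T)
S = np.eye(n) + 0.1 * sym(rng.standard_normal((n, n)))   # model overlap matrix
H_bar = sym(rng.standard_normal((n, n)))                 # H in non-orthogonal basis
rho_bar = sym(rng.standard_normal((n, n)))               # rho in non-orthogonal basis

H = mat_power(S, -0.5) @ H_bar @ mat_power(S, -0.5)      # inverting eq. (11)
rho = mat_power(S, +0.5) @ rho_bar @ mat_power(S, +0.5)  # inverting eq. (12)
print(np.trace(rho_bar @ H_bar), np.trace(rho @ H))      # identical traces
```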
### C Charge mixing and self-consistency
The question of achieving self-consistency between the charge density and the potential has been examined in great detail over many years, and much is known about efficient implementation. Within Conquest, the direct inversion of the iterative subspace (DIIS) method of Pulay is used. At each iteration, a residual can be defined as:
$$R[\rho ^{\mathrm{in}}]=\rho [\rho ^{\mathrm{in}}]-\rho ^{\mathrm{in}},$$
(13)
where $`\rho [\rho ^{\mathrm{in}}]`$ is the output charge density: that is, the potential arising from $`\rho ^{\mathrm{in}}`$ is found (consisting of Hartree and exchange-correlation parts), the Schrödinger equation is solved, and the output charge density is generated from the resultant wavefunctions. Clearly, the aim is to reduce $`R[\rho ^{\mathrm{in}}]`$ to as close to zero as possible in the fewest iterations. The simplest possible technique is to use the output charge density from one cycle ($`\rho _n^{\mathrm{out}}=\rho [\rho _n^{\mathrm{in}}]`$) as the input for the next cycle: $`\rho _{n+1}^{\mathrm{in}}=\rho _n^{\mathrm{out}}`$; however, this is potentially rather slow, and prone to the phenomenon known as ‘charge sloshing’, where long wavelength variations of the charge in the unit cell dominate the self-consistency procedure. There are in fact cases where this simple method fails to work at all, and self-consistency is never reached. This is clearly unacceptable.
Better than this is to perform a linear mix of the two previous charge densities, so that:
$$\rho _{n+1}^{\mathrm{in}}=(1-\lambda )\rho _{n-1}^{\mathrm{out}}+\lambda \rho _n^{\mathrm{out}}.$$
(14)
The value of $`\lambda `$ can be found easily. If the residuals (defined above in equation 13) are treated as vectors (considering the value at each spatial position as an entry in the vector), then scalar products can be formed between residuals, and the norm of a residual can be found as $`\left(R[\rho _{n+1}^{\mathrm{in}}]\cdot R[\rho _{n+1}^{\mathrm{in}}]\right)^{1/2}`$. The optimum value of $`\lambda `$ is found by minimising this norm with respect to $`\lambda `$. This gives:
$$\lambda =\frac{R[\rho _{n-1}^{\mathrm{in}}]\cdot \left(R[\rho _{n-1}^{\mathrm{in}}]-R[\rho _n^{\mathrm{in}}]\right)}{\left(R[\rho _n^{\mathrm{in}}]-R[\rho _{n-1}^{\mathrm{in}}]\right)\cdot \left(R[\rho _n^{\mathrm{in}}]-R[\rho _{n-1}^{\mathrm{in}}]\right)}.$$
(15)
This procedure can be generalised to more than two previous densities, which can give significant benefits, as described elsewhere. It is often important to mix in a small amount of the input charge densities, as well as performing the mixing given above.
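A sketch of this two-density mixing with the optimal $`\lambda `$ of equation 15, applied to a toy linear ‘SCF map’ $`\rho ^{\mathrm{out}}=A\rho ^{\mathrm{in}}+b`$ that is invented purely to exercise the mixing, is:

```python
import numpy as np

def optimal_lambda(r_prev, r_curr):
    """lambda minimising |(1 - lambda) R[rho_{n-1}] + lambda R[rho_n]|, eq. (15)."""
    diff = r_curr - r_prev
    return float(r_prev @ (r_prev - r_curr) / (diff @ diff))

rng = np.random.default_rng(3)
n = 8
A = 0.6 * rng.standard_normal((n, n)) / np.sqrt(n)   # contractive toy map
b = rng.standard_normal(n)
rho_out = lambda rho_in: A @ rho_in + b              # stands in for an SCF cycle

rho_prev = np.zeros(n)
rho_curr = rho_out(rho_prev)
for cycle in range(8):
    r_prev = rho_out(rho_prev) - rho_prev            # R[rho_{n-1}^in]
    r_curr = rho_out(rho_curr) - rho_curr            # R[rho_n^in]
    lam = optimal_lambda(r_prev, r_curr)
    mixed = (1.0 - lam) * rho_out(rho_prev) + lam * rho_out(rho_curr)  # eq. (14)
    rho_prev, rho_curr = rho_curr, mixed
    print(f"cycle {cycle}: |R| = {np.linalg.norm(r_curr):.2e}")
```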
As well as combining charge densities in the optimum manner, it is important to suppress the phenomenon of ‘charge sloshing’; an ideal way to do this is to use Kerker preconditioning. Here a scaling is applied in reciprocal space to the residual:
$$\stackrel{~}{R}[\rho _j^{\mathrm{in}}]=R[\rho _j^{\mathrm{in}}]\times \frac{q^2}{q^2+q_0^2},$$
(16)
where $`q`$ is a reciprocal space vector, $`\stackrel{~}{R}`$ is the preconditioned residual, and $`q_0`$ is chosen suitably (a value close to $`2\pi /a_0`$, where $`a_0`$ is the lattice constant, is appropriate). This scaling is an approximation to the inverse dielectric function, and enables fast and robust iteration to a self-consistent charge density and potential.
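A sketch of this preconditioning for a residual sampled on a cubic grid is given below; the cell size and the value of $`q_0`$ are placeholders.

```python
import numpy as np

def kerker(residual, cell, q0):
    """Scale the real-space residual by q^2/(q^2 + q0^2) in reciprocal space,
    suppressing the long-wavelength (small q) components that cause sloshing."""
    shape = residual.shape
    axes_q = [2.0 * np.pi * np.fft.fftfreq(m, d=cell / m) for m in shape]
    qx, qy, qz = np.meshgrid(*axes_q, indexing="ij")
    q2 = qx ** 2 + qy ** 2 + qz ** 2
    r_q = np.fft.fftn(residual) * q2 / (q2 + q0 ** 2)   # zeroes the q = 0 part
    return np.fft.ifftn(r_q).real

cell = 10.0                        # cubic cell side (arbitrary length units)
q0 = 2.0 * np.pi / cell            # close to 2*pi/a0, as suggested in the text
resid = np.random.default_rng(4).standard_normal((16, 16, 16))
precond = kerker(resid, cell, q0)
```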
### D Pre-conditioning localised orbital variation
Now that we have described the robust and efficient search for the ground state density matrix (for a given set of localised orbitals) and the fast iteration to a self-consistent charge density and potential, we must consider the variation of the localised orbitals.
As is the case for minimisation problems in many areas of science, Conquest suffers from ill conditioning in the search for the ground state when varying the localised orbitals. Ill conditioning occurs if the function being minimised has a wide range of curvatures. For a general function $`f(x_1,x_2,\mathrm{\dots },x_N)`$, dependent upon variables $`\{x_i\}`$, the curvature matrix (or Hessian) can be defined as $`C_{ij}=\partial ^2f/\partial x_i\partial x_j`$. If the eigenvalues $`\lambda _n`$ of $`C_{ij}`$ span a wide range, then the surfaces of constant $`f`$ are elongated (illustrated in Figure 4), and conventional techniques such as conjugate gradients will become very inefficient. It is known that the number of iterations required by conjugate gradients is proportional to $`(\lambda _{\mathrm{max}}/\lambda _{\mathrm{min}})^{1/2}`$, where $`\lambda _{\mathrm{max}}`$ and $`\lambda _{\mathrm{min}}`$ are the maximum and minimum eigenvalues of $`C_{ij}`$.
While ill conditioning is a widespread problem, the solution depends on the specific situation. Conventional first principles calculations which use plane waves as a basis set have been recognised for many years to suffer from ill conditioning, and it turns out that ill conditioning found in $`𝒪(N)`$ calculations is closely related to this; we will review the plane wave ill conditioning before describing the $`𝒪(N)`$ ill conditioning.
The plane wave energy functional (eq. A4) has large curvatures associated with high wavevector G, because of the form of the kinetic energy:
$$E_{\mathrm{kin}}=2\sum _if_i\sum _𝐆\frac{\hbar ^2G^2}{2m}|c_{i𝐆}|^2,$$
(17)
so that the energy is proportional to $`G^2`$. This first type of ill conditioning is easily cured (essentially by scaling $`c_{i𝐆}`$ by a factor of $`(1+G^2/G_0^2)^{1/2}`$), and is referred to as ‘length scale ill conditioning’, as it comes from the variation of curvature with length scale.
Another type of ill conditioning seen in conventional techniques is associated with the invariance of $`E_{\mathrm{tot}}`$ under a unitary transformation of the orbitals. If the occupation numbers $`f_i`$ are all either zero or one, then $`E_{\mathrm{tot}}`$ is exactly invariant under transformations such as:
$$\psi _i^{\prime }=\sum _jU_{ij}\psi _j,$$
(18)
where $`U_{ij}`$ is a unitary matrix. If the occupancies deviate slightly from zero or one, however, the exactness of the invariance is broken, and the energy changes slightly. Some of the eigenvalues of the Hessian will go from exactly zero (under the exact transformation) to very small, leading to poor conditioning. We shall refer to this as ‘superposition ill conditioning’. In conventional techniques this is cured by performing a rotation of the wave functions so that the Hamiltonian becomes diagonal in the subspace spanned by the occupied states.
A final type of ill conditioning found in conventional methods arises with variable occupation numbers, and is associated with eigenvalues whose energies are well above the Fermi energy, which will have very small occupation numbers. Variations of the $`\psi _i`$ associated with these small occupation numbers will have little effect on the value of $`E_{\mathrm{tot}}`$, and lead to small values of the curvature. Since the variations of these eigenvalues are almost redundant in the minimisation, we refer to this as ‘redundancy ill conditioning’.
All three of these forms can cause difficulties within $`𝒪(N)`$ techniques, though typically the specific solution will vary. For instance, it is clear that variations of the localised orbitals, $`\{\varphi _{i\alpha }\}`$, will have different length scales, and will suffer from length scale ill conditioning. This is easily cured in the same way as for plane waves, as has been recently demonstrated, though the methodology is somewhat different. As a demonstration of the efficacy of this preconditioning, Figure 5 shows the convergence of Conquest with and without length scale preconditioning for three different region radii for the localised orbitals. Clearly this problem becomes significantly worse for larger regions.
Superposition ill conditioning is associated with the linear mixing of localised orbitals. It is easily shown that linear mixing of the functions on the same atom leaves $`E_{\mathrm{tot}}`$ unchanged, and so will not cause ill conditioning. Variations of the localised orbital $`\varphi _{i\alpha }`$ such as:
$$\varphi _{i\alpha }^{\prime }=\varphi _{i\alpha }+\sum _{j\beta ,j\ne i}c_{j\beta }\varphi _{j\beta }$$
(19)
are rather more interesting. Strictly, these are not possible, as the localised orbitals are constrained to be zero outside their localisation regions. However, once the region radii become large, there will be variations which almost fulfil this criterion. It is the small eigenvalues of the Hessian of $`E_{\mathrm{tot}}`$ associated with this almost perfect mixing which will cause superposition ill conditioning. It is perfectly possible to cure this, however, as the form of the variations can be written down. We have developed a method to precondition these variations, and are testing it. It will be described in a future publication.
Finally, we come to redundancy ill conditioning. Just as in conventional calculations this occurs when the occupation numbers $`f_i`$ are very small, this may occur in $`𝒪(N)`$ techniques when the number of localised orbitals $`\varphi _{i\alpha }`$ is more than half the number of electrons. It is desirable, if not essential, to be able to work with an extended number of orbitals; for instance, in group IV elements, the natural basis will consist of four orbitals, roughly corresponding to one $`s`$ and three $`p`$ orbitals. (It is worth noting that Kim, Mauri and Galli have found that it is essential to have more orbitals than filled bands to avoid local minima in the energy functional in a related $`𝒪(N)`$ scheme.) As before, we believe that this form of ill conditioning can be removed by suitable preconditioning, but detailed techniques have yet to be formulated.
## IV Future directions
Having summarised the techniques involved in Conquest, it is appropriate to look to the near future, and consider the directions in which the project is going.
### A Forces
The issue of forces is a key one for any electronic structure technique; if relaxation of ions or molecular dynamics is to be performed, then the analytical forces must agree with the gradient of the energy. Conquest has been specifically constructed so that, provided small Pulay-type corrections are included, the forces are guaranteed to be consistent with the energy. Pulay corrections are required because the B-spline basis functions move with the atoms, and are easily found, as will be described elsewhere. This means that the relaxation of the system to mechanical equilibrium and the generation of time-dependent ionic trajectories will be feasible in $`𝒪(N)`$ DFT calculations.
### B Efficient choices for representation of localised orbitals
The present choice of basis for representing the localised orbitals has been described in Section II A. However, there are two good reasons for changing this basis somewhat. First, there is the problem of ripples in the energy caused by the numerical integration; this is due to a lack of translational invariance with respect to the integration grid. If the rapidly varying parts of the localised orbitals (i.e. the core regions, which do not alter greatly during a calculation) could be represented in a more efficient form, then the integrals could be performed analytically, significantly reducing ripples. Second, there would be great value in being able to perform ab initio tight binding calculations with the code (or even to have certain parts of a supercell treated with full DFT, while others were treated with ab initio tight binding). For these reasons, we are planning to move to a mixed basis (seen recently in conventional pseudopotential calculations) which combines pseudo-atomic orbitals (possibly of the form of Sankey and Niklewski) with a coarser blip grid. This will suppress the ripples with respect to the integration grid, and give the flexibility to model different parts of the system with appropriate accuracy.
### C Finding the density matrix for metals
Metals are much harder to model than insulators or semiconductors with the $`𝒪(N)`$ methods described above, simply because the density matrix is more delocalised in metals, meaning that the cutoff applied to $`L`$ (and hence to $`K`$) must be much larger to obtain accurate results. If the metal is close packed, as is frequently the case, this entails a rapid increase in the number of elements in the density matrix, and a slowing down of variational techniques; this is discussed elsewhere.
One approach is to reduce the range of the density matrix in metals by introducing an artificial electronic temperature, which broadens the Fermi occupation function, and localises the density matrix. The drawback is a potentially large electronic entropy contribution to the energy; however, there is a scheme for extrapolating the results back to zero electronic temperature. It has been shown that even with such a scheme the variational density matrix method described above is inefficient; the hybrid method described in Section III A improves the efficiency. More efficient are recursion-based methods, such as the Fermi Operator Expansion or the Bond Order Potential. However, these have the disadvantage of not being strictly variational.
A further possibility which has emerged recently is to use a series of nested Hilbert spaces to find the exact zero temperature density matrix. This method has the advantage that no approximation is being made to remove the electronic entropy contributions, and allows high precision calculations on metals. It is, however, still in development, and practical demonstrations have yet to be published.
## V Conclusions
The recent developments which have been outlined above show that the future of $`𝒪(N)`$ ab initio techniques is extremely bright. We have shown that the localisation of the density matrix gives the framework within which these methods can be constructed, and have given details of the implementation of one such code, Conquest. The examples presented show that this method is practical, and that the spatial cutoffs required for accuracy are small enough to make the calculations perfectly feasible. The search for the ground state has been addressed, and methods for making this search more robust and efficient have been discussed. The remaining tasks for the Conquest code have been described, and the way forward for all of them is clear. The most important conclusion to draw from this body of work is that $`𝒪(N)`$ DFT methods actually work. Indeed, these methods are being demonstrated in practical calculations. Our group is working towards practical application of the Conquest code to large-scale problems.
## Acknowledgements
We are happy to acknowledge useful discussions with David Manolopoulos, Peter Haynes, Chris Goringe and Ed Hernández. Conquest has been developed within the framework of the UK Car-Parrinello consortium, which is supported by the EPSRC grant GR/M01753. The work of MJG is financially supported by CCLRC and GEC.
## A The pseudopotential method
As Conquest uses the standard pseudopotential method, it is important to recall the salient facts within this formalism; there are many excellent reviews elsewhere which give more detail.
In the pseudopotential method, only the valence electrons are considered, and their interaction with the ionic cores is replaced by a pseudopotential, $`v(r)`$. This means that in fact the solutions of the Schrödinger equation are pseudo-wavefunctions, and the charge density, $`n(𝐫)`$, is the pseudo-density of the valence electrons. The energy arising from the interaction between the cores and the valence electrons is given by:
$$E_{ei}=\int d𝐫\,V(𝐫)n(𝐫),$$
(A1)
where $`V(𝐫)`$ is found as the sum over the ionic pseudopotentials:
$$V(𝐫)=\underset{i}{}v(𝐫𝐑_i),$$
(A2)
with $`𝐑_i`$ the core positions. In general, the pseudopotential is non-local, that is $`v(𝐫,𝐫^{})`$.
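As a much simplified illustration of Eqs. (A1) and (A2), the sketch below assembles $`V(𝐫)`$ from a model *local* pseudopotential on a real-space grid and integrates it against a toy pseudo-density. The Gaussian form of $`v(r)`$, the geometry and all numbers are invented for illustration; real calculations use tabulated (and generally non-local) pseudopotentials.

```python
import numpy as np

# Evaluate E_ei = \int dr V(r) n(r) on a uniform real-space grid for a
# model local pseudopotential. All values below are placeholders.

L, N = 10.0, 32                      # box size (bohr) and grid points per side
x = np.linspace(0.0, L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
dV = (L / N) ** 3                    # volume element of the grid

def v(r):
    """Model local ionic pseudopotential (smooth, finite at r = 0)."""
    return -4.0 * np.exp(-(r / 1.2) ** 2)

R_ions = np.array([[5.0, 5.0, 5.0], [3.0, 5.0, 5.0]])   # core positions R_i

# V(r) = sum_i v(|r - R_i|)
V = sum(v(np.sqrt((X - Rx)**2 + (Y - Ry)**2 + (Z - Rz)**2))
        for Rx, Ry, Rz in R_ions)

# Toy valence pseudo-density n(r), normalised to the electron count.
n = np.exp(-((X - 4.0)**2 + (Y - 5.0)**2 + (Z - 5.0)**2) / 2.0)
n *= 4.0 / (n.sum() * dV)            # say, 4 valence electrons

E_ei = np.sum(V * n) * dV
print(f"E_ei = {E_ei:.4f} hartree")
```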
In conventional pseudopotential ab initio techniques, the wavefunctions are often expanded in terms of plane waves:
$$\psi _i=\underset{𝐆}{}c_{i𝐆}\mathrm{exp}(i𝐆𝐫),$$
(A3)
where $`𝐆`$ is a reciprocal lattice vector. The total energy is then minimised with respect to the coefficients $`c_{i𝐆}`$. Frequently, particularly in metals, variable occupation numbers are allowed, so that the wave function $`\psi _i`$ has an occupation $`f_i`$. This gives a total energy function:
$$E_{\mathrm{tot}}=E_{\mathrm{tot}}(\{c_{i𝐆}\},\{f_i\}).$$
(A4)
## B Metrics and minimisation in non-orthogonal basis sets
When working with a non-orthogonal basis, care must be taken with notation. It is common to use raised and lowered indices to distinguish between vectors and matrices which transform differently; this was introduced into electronic structure calculations by Ballentine and Kolář, who also describe the general formalism.
The eigenfunctions of the Hamiltonian are expanded in a set of non-orthogonal, localised, atom-centred orbitals $`\{\varphi _\alpha (𝐫)\}`$, where $`\alpha `$ runs over all orbitals on all atoms. These orbitals span a Hilbert space $`𝒱`$, and define an overlap matrix which is given by $`S_{\alpha \beta }=\langle \varphi _\alpha |\varphi _\beta \rangle `$. The inverse overlap matrix, $`\left(S^{-1}\right)^{\alpha \beta }`$, is defined by the relation:
$`{\displaystyle \sum _\beta }\left(S^{-1}\right)^{\alpha \beta }S_{\beta \gamma }`$ $`=`$ $`\delta _\gamma ^\alpha `$ (B1)
$`=`$ $`1\text{ if }\alpha =\gamma `$ (B2)
$`=`$ $`0\text{ if }\alpha \neq \gamma .`$ (B3)
A dual space, $`𝒱^{}`$, exists which is spanned by the orbitals $`\varphi ^\alpha =\sum _\beta \left(S^{-1}\right)^{\alpha \beta }\varphi _\beta `$. These two sets of vectors are bi-orthogonal, that is $`\langle \varphi ^\alpha |\varphi _\beta \rangle =\delta _\beta ^\alpha `$. It is important to note that contraction can only be carried out over indices which are opposed, while addition can only be carried out between tensors for which indices agree. The vectors in the original space are called covariant, while the vectors in the dual space are contravariant.
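A minimal numerical sketch of the overlap matrix, its inverse, and the bi-orthogonal duals; random vectors stand in for the localised orbitals $`\varphi _\alpha `$:

```python
import numpy as np

# Overlap, inverse overlap and bi-orthogonal dual orbitals for a small
# non-orthogonal basis; dimensions and vectors are arbitrary stand-ins.

rng = np.random.default_rng(1)
phi = rng.standard_normal((5, 12))      # rows: 5 covariant orbitals on a grid

S = phi @ phi.T                         # S_{ab} = <phi_a|phi_b>
S_inv = np.linalg.inv(S)                # (S^{-1})^{ab}

phi_dual = S_inv @ phi                  # phi^a = sum_b (S^{-1})^{ab} phi_b

# Bi-orthogonality: <phi^a|phi_b> = delta^a_b
print(np.allclose(phi_dual @ phi.T, np.eye(5)))   # True
```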
A convenient choice for the metric of $`𝒱`$ is $`S^{-1}`$, so that $`\sum _\beta \langle \varphi _\alpha |\left(S^{-1}\right)^{\gamma \beta }|\varphi _\beta \rangle =\delta _\alpha ^\gamma `$. An equivalent choice of metric for $`𝒱^{}`$ is $`S_{\alpha \beta }`$. These operate to change a vector in one space to the vector in another; thus a proper scalar product within a space can be formed by incorporation of the metric. A covariant operator can be represented as an outer product:
$$\widehat{A}=\sum _{\alpha ,\beta }|\varphi _\alpha \rangle A^{\alpha \beta }\langle \varphi _\beta |.$$
(B4)
Then the scalar product of two covariant operators is written:
$`(𝐀,𝐁)`$ $`=`$ $`{\displaystyle \sum _{\alpha ,\beta ,\gamma ,\delta }}\langle \varphi _\delta |\varphi _\beta \rangle A^{\beta \alpha }\langle \varphi _\alpha |\varphi _\gamma \rangle B^{\gamma \delta },`$ (B5)
$`=`$ $`{\displaystyle \sum _{\alpha ,\beta ,\gamma ,\delta }}S_{\delta \beta }A^{\beta \alpha }S_{\alpha \gamma }B^{\gamma \delta }`$ (B6)
$`=`$ $`\mathrm{Tr}[A^{\dagger }SBS].`$ (B7)
It is important to note that the product of a covariant and a contravariant pair (such as $`H`$ and $`\rho `$) is invariant with basis set. Traditionally, the Hamiltonian is taken as covariant and the density matrix as contravariant.
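This invariance is easy to check numerically. In the sketch below, random matrices stand in for a covariant $`H`$ and a contravariant $`\rho `$, and $`M`$ is an arbitrary invertible change of basis:

```python
import numpy as np

# Tr of a covariant/contravariant pair (e.g. Tr[H rho]) is basis independent:
# covariant components transform with M, contravariant ones with M^{-1}.

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))          # invertible basis transformation
H = rng.standard_normal((n, n))          # covariant (indices down)
rho = rng.standard_normal((n, n))        # contravariant (indices up)

H_new = M.T @ H @ M
rho_new = np.linalg.inv(M) @ rho @ np.linalg.inv(M).T

print(np.isclose(np.trace(H @ rho), np.trace(H_new @ rho_new)))   # True
```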
### 1 Variation of L
The innermost part of Conquest consists of the minimisation of the energy with respect to the elements of the matrix $`L^{i\alpha j\beta }`$ with fixed support functions. This is achieved by performing line minimisations along directions supplied by the conjugate gradients algorithm (with, on occasion, a correction for maintaining the electron number constant). As pointed out recently by White et al., the gradient of the energy with respect to $`L^{i\alpha j\beta }`$, $`\mathrm{\Omega }`$, (which is used as the search direction in the minimisation) is actually covariant, while $`L^{i\alpha j\beta }`$ is contravariant; this means that the gradient must be transformed to a contravariant matrix, $`(S^{-1})\mathrm{\Omega }(S^{-1})`$, before being combined with the density matrix. But there is more to the problem of minimisation than just this; conjugate gradients assumes the following relations:
$`𝐠_{i+1}`$ $`=`$ $`-\nabla f(𝐏_{i+1}),`$ (B8)
$`𝐡_{i+1}`$ $`=`$ $`𝐠_{i+1}+\gamma _i𝐡_i,`$ (B9)
$`\gamma _i`$ $`=`$ $`{\displaystyle \frac{𝐠_{i+1}\cdot 𝐠_{i+1}}{𝐠_i\cdot 𝐠_i}},`$ (B10)
where $`𝐠_i`$ is the gradient of the function at step $`i`$ and $`𝐡_i`$ is the search direction (which is conjugate to the previous search directions) at step $`i`$. Clearly in the formation of $`\gamma _i`$, care must be taken to ensure that the product is tensorially correct, otherwise the choice of the new search direction will be wrong, so the correct formula for $`\gamma _i`$ is:
$$\gamma _i=\frac{\mathrm{Tr}[𝐠_{i+1}(S^{-1})𝐠_{i+1}(S^{-1})]}{\mathrm{Tr}[𝐠_i(S^{-1})𝐠_i(S^{-1})]}.$$
(B11)
(This is discussed at greater length in Section 2.7 of Ref., where the bi-conjugate gradient method is described; that degree of complexity is not needed here.) Similar care, both with the correct tensor character of the gradients and with the search directions, must be taken when varying the localised orbitals.
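Schematically, the tensorially correct bookkeeping might look as follows (a sketch with our own function names, assuming the Fletcher–Reeves form of $`\gamma `$ and writing the metric scalar product as a trace):

```python
import numpy as np

# Metric-aware conjugate-gradient helpers for a covariant gradient g:
# the scalar product uses the metric S^{-1}, and the search direction is
# the index-raised (contravariant) matrix S^{-1} g S^{-1}.

def gamma_fr(g_new, g_old, S_inv):
    """Fletcher-Reeves gamma with the non-orthogonal metric (illustrative)."""
    num = np.trace(g_new @ S_inv @ g_new @ S_inv)
    den = np.trace(g_old @ S_inv @ g_old @ S_inv)
    return num / den

def contravariant(g, S_inv):
    """Raise both indices of the covariant gradient before updating L."""
    return S_inv @ g @ S_inv
```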
no-problem/9902/hep-ph9902342.html
CERN-TH/99-14
# Universal Pion Freeze-out Phase-Space Density
D. Ferenc<sup>a,b</sup>, U. Heinz<sup>a,c</sup>, B. Tomášik<sup>a</sup>, U.A. Wiedemann<sup>a,d</sup>
<sup>a</sup>Institut für Theoretische Physik, Universität Regensburg,
D-93040 Regensburg, Germany
<sup>b</sup>Physics Department, University of California, Davis, CA 95616-8677,USA
<sup>c</sup>Theory Division, CERN, CH-1211 Geneva 23, Switzerland
<sup>d</sup>Physics Department, Columbia University, New York, NY 10027, USA
and J.G. Cramer
Nuclear Physics Laboratory, University of Washington, Seattle, WA 98195, USA
Abstract: Results on the pion phase-space density at freeze-out in sulphur-nucleus, Pb-Pb and $`\pi `$-p collisions at the CERN SPS are presented. All heavy-ion reactions are consistent with the thermal Bose-Einstein distribution $`f=[\mathrm{exp}(E/T)-1]^{-1}`$ at $`T\approx 120`$ MeV, modified for radial expansion. $`\pi `$-p data are also consistent with $`f`$, but at $`T\approx 180`$ MeV and without radial flow.
Introduction. In ultrarelativistic heavy-ion collisions, the pion freeze-out phase-space density determines the importance of multiparticle pion correlations and of dilepton production from $`\pi ^+\pi ^{-}`$ annihilation. G. Bertsch suggested a way of estimating this quantity by combining measurements of one-particle momentum spectra with two-particle correlations, thereby testing the thermal equilibrium of the pion source created in the collision. In local thermal equilibrium at temperature $`T(x)`$ the pion energy distribution is given by the Bose-Einstein function
$$f(x,p)=\frac{1}{e^{p\cdot u(x)/T(x)}-1},$$
(1)
where $`u(x)`$ is the 4-velocity of the local rest frame at point $`x`$ in the observer frame. The coordinate space average of this function is the quantity to be measured:
$$\langle f\rangle (p)=\frac{\int f^2(x,p)p^\mu d^3\sigma _\mu (x)}{\int f(x,p)p^\mu d^3\sigma _\mu (x)}.$$
(2)
Here $`d^3\sigma (x)`$ is the normal vector on a space-like space-time hypersurface $`\sigma (x)`$. According to Liouville’s theorem, $`\sigma `$ is arbitrary as long as its time arguments are later than the time $`t_\mathrm{f}(𝐱)`$ at which the last pion passing the surface at point $`𝐱`$ was produced.
If the measured single-particle $`p_T`$-spectrum is parameterized by an exponential with inverse slope parameter $`T_{\mathrm{eff}}(y)`$,
$$\frac{dn^{-}}{dy\,p_Tdp_T}=\frac{dn^{-}}{dy}\frac{1}{T_{\mathrm{eff}}^2(y)}\mathrm{exp}\left(-\frac{p_T}{T_{\mathrm{eff}}(y)}\right),$$
(3)
and the two-particle correlation function by a Gaussian ,
$$C(q;p_T,y)=1+\lambda (p_T,y)\mathrm{exp}\left(-\sum _{i=o,s,l}R_i^2(p_T,y)q_i^2-2R_{ol}^2(p_T,y)q_oq_l\right),$$
(4)
where the subscripts $`o,s,l`$ refer to the usual out-side-long coordinate system and $`\lambda (p_T,y)`$ accounts for unresolvable contributions from long-lived resonances , one obtains for the spatially averaged phase-space density at freeze-out
$$\langle f\rangle (p_T,y)=\sqrt{\lambda (p_T,y)}\frac{\frac{dn^{-}}{dy}\frac{1}{2\pi T_{\mathrm{eff}}^2(y)}e^{-p_T/T_{\mathrm{eff}}(y)}}{\pi ^{3/2}E_pR_s(p_T,y)\sqrt{R_o^2(p_T,y)R_l^2(p_T,y)-R_{ol}^4(p_T,y)}}.$$
(5)
Here $`E_p=\sqrt{m^2+𝐩^2}=m_T\mathrm{cosh}y`$, with $`m_T=\sqrt{m^2+p_T^2}`$. The numerator (with experimental input $`dn^{-}/dy,T_{\mathrm{eff}}(y)`$) gives the momentum-space density at freeze-out while the denominator (involving the measured two-pion Bose-Einstein correlation radii $`R_s,R_o,R_l,R_{ol}`$) reflects the space-time structure of the source at freeze-out and can be interpreted as its covariant homogeneity volume for particles of momentum $`p`$. The factor $`\sqrt{\lambda }`$ ensures that only the contributions of pions from the decays of short-lived resonances, which happen close to the primary production points, are included in the average phase-space density.
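For orientation, here is Eq. (5) as one would evaluate it numerically. Every input number below is a placeholder chosen only to illustrate the units and the structure of the formula, not a measured value from any of the experiments:

```python
import numpy as np

# Sketch of Eq. (5): spatially averaged phase-space density from the
# measured spectrum and HBT radii. All numbers are placeholders.

m_pi = 0.1396                        # pion mass, GeV

def f_avg(pT, y, dndy, Teff, lam, Rs, Ro, Rl, Rol, hbarc=0.1973):
    """<f>(pT,y); radii given in fm are converted to GeV^-1 via hbar*c."""
    Rs, Ro, Rl, Rol = (R / hbarc for R in (Rs, Ro, Rl, Rol))
    mT = np.sqrt(m_pi**2 + pT**2)
    Ep = mT * np.cosh(y)             # reduces to ~mT in the LCMS (y ~ 0)
    num = np.sqrt(lam) * dndy * np.exp(-pT / Teff) / (2 * np.pi * Teff**2)
    den = np.pi**1.5 * Ep * Rs * np.sqrt(Ro**2 * Rl**2 - Rol**4)
    return num / den

print(f_avg(pT=0.2, y=0.0, dndy=150, Teff=0.18, lam=0.7,
            Rs=5.0, Ro=5.5, Rl=6.0, Rol=1.0))
```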
We have calculated $`\langle f\rangle (p_T,y)`$ for the S-S, S-Cu, S-Ag, S-Au, S-Pb and Pb-Pb data from the experiments NA35 , NA49 , and NA44 , and for the $`\pi `$-p data from the NA22 experiment (all at the CERN SPS). The projectile energies were 200 GeV per nucleon in S-nucleus collisions, 158 GeV per nucleon in Pb-Pb collisions and 250 GeV in $`\pi `$-p collisions, which correspond to projectile rapidities 6, 5.8 and 8.2, respectively. Our results will also be compared with the average phase-space density in Au-Au collisions at projectile momentum 10.8 GeV/c, published by the E877 collaboration at the AGS . In all cases the analysis was done in the LCMS (longitudinally comoving system) where the longitudinal momentum of the pion pair vanishes.
Calculation of the phase-space density. The experimental input into Eq. (5) is partly incomplete. This concerns particularly the intercept parameter $`\lambda `$ which was measured in all the considered experiments but not always published in the final corrected form. However, the necessary information is still available, and in the following we explain how we used it. The following experimental effects are responsible for the uncertainties in the measurement of the $`\lambda `$ parameter:
1. Finite momentum resolution reduces the correlation intercept, since it leads to a smearing or widening of the correlation peak.
2. Corrections for Coulomb repulsion of like-sign pions play an important role in the Bose-Einstein correlation analysis, in particular in the measurement of $`\lambda `$. Certain sophisticated Coulomb correction methods have been applied , but not to all the data sets, as we shall discuss below.
3. If pions are not positively identified, as is the case in NA35 and NA49, the direct-pion sample is contaminated with kaons, converted electrons, protons and other particles. NA35 and NA49 have performed a contamination correction based on detailed Monte Carlo simulations , but unfortunately the resulting corrected results for $`\lambda `$ have not been directly published. However, since the contamination levels were published, as well as the uncorrected results for $`\lambda `$, we were able to estimate the corrected values ourselves. For example, in the NA49 Pb-Pb collisions, in the rapidity interval 3.4$`<y<`$3.9, the fraction of pure $`\pi ^{-}\pi ^{-}`$ pairs is $`x\simeq 55\%`$, and the uncorrected measured $`\lambda _{meas}`$ is between 0.4 and 0.5 . The contamination-corrected value is then $`\lambda _{corr}=\lambda _{meas}/x\approx `$ 0.73–0.91.
4. Pions originating from weak decays also contaminate the direct pion sample when the decay takes place unresolvably close to the main interaction point. In the rejection of decayed pions, the experiments NA35 and NA49, with continuous tracking detectors covering the region close to the target, have an advantage over the NA44 spectrometer, with tracking devices placed only far away from the target. Since there are more hyperons ($`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }`$, $`\mathrm{\Xi }`$) than antihyperons in the fireball, and these produce more negative than positive pions, the measurement of $`\lambda `$ with positive pions is less affected and should be more reliable. Note that NA44 indeed reported significantly different results: $`\lambda ^{--}\approx 0.52\pm 0.03`$ and $`\lambda ^{++}\approx 0.66\pm 0.04`$, the difference being mostly reproduced in a detector simulation with the RQMD event generator, and thereby traced to the weak decay asymmetry.
NA44 and NA49 both reported an approximate independence of $`\lambda `$ from $`p_T`$, valid to about $`\pm `$10%. Regarding the absolute values, for S-Pb data NA44 has reported $`\lambda ^{++}(\mathrm{NA44})\approx `$ 0.52–0.59 (with negligible non-pion contamination), consistent with the lower half of the NA35 interval, $`\lambda ^{--}(\mathrm{NA35})\approx `$ 0.55–0.7 (contamination corrected). The latter may be read off directly from the correlation functions presented in Fig. 2 of Ref. . That figure also provides an insight into the reliability of the $`\lambda `$ measurement, since it presents the evolution of the correlation function through different correction steps.
In the case of Pb-Pb collisions the NA44 results are again consistent with the lower end of the NA49 interval, see Fig. 1: $`\lambda ^{--}`$(NA49)$`\approx `$0.7–0.9, obtained with “our” correction for contamination, and $`\lambda ^{++}`$(NA44)$`\approx `$0.66–0.69, without a correction for the contamination due to weak decays. Since positive pions are also contaminated by weak decay products, the NA44 result is certainly an underestimate, unlike the NA49 result, which has been corrected for all sources of contamination, including those weak decays which could not be filtered out by vertex cuts.
If only the contamination by weak decay products were to explain the difference between $`\lambda ^{--}`$(NA44) and $`\lambda ^{++}`$(NA44), as suggested by NA44 , then the contamination effect itself on $`\lambda ^{++}`$(NA44) should be quite strong, probably on the order of 10%. After shifting the NA44 results up by this ad hoc 10% correction, as shown in Fig. 1 by full lines, the NA44 and NA35/NA49 intervals already fully overlap.
In comparing S-nucleus and Pb-Pb results one has to take into account also a difference in the Coulomb correction methods. Sophisticated Coulomb correction methods lead to higher $`\lambda `$ measurements than the “old” Gamow correction. This effect seems to be around 5–15% for Pb-Pb data , and should be lower for S-nucleus data , probably 0–10%, depending on the way the normalization has been done . Note that the Pb-Pb data have been Coulomb-corrected in the appropriate way, while the S-nucleus data were only Gamow corrected, which might be the reason for the discrepancy seen in Fig. 1. Assuming that the S-nucleus data indeed need to be corrected by 10% to account for the systematic error due to the inappropriate Gamow correction, one arrives at the results presented by dotted lines in Fig. 1, which are in much better agreement with the Pb-Pb results. Our assumption is, however, rather doubtful and should serve as an illustration of the systematic uncertainties rather than as a quantitative result. Note that also the correlation radii should in principle increase due to a proper Coulomb correction, which would partly cancel or even reverse the effect of an increased $`\lambda `$, see Eq. (5).
It is beyond our ability to check all these points in further detail, since this would require detailed Monte Carlo simulations of the detector response, with a realistic event generator as input. It is quite obvious that with the quality of the presently available data an accurate determination of the factor $`\sqrt{\lambda }`$ to be used in Eq. (5) is not possible. We have therefore proceeded in a rather pragmatic way, taking the most probable interval for $`\lambda `$ for all the data sets from Fig. 1, based on a consideration of the corrected data presented by full (and to some extent also dotted) lines in Fig. 1. The most likely value for all these data sets seems to be $`\lambda `$= 0.7, with a relatively wide error $`\pm `$0.1. This value has been used in our analysis.
Fortunately, $`\lambda `$ enters Eq. (5) under a square root, and a systematic error of e.g. 15% in $`\lambda `$ will result in only a 7% error in the final result, which raises the confidence in our simple approach.
In the introduction we stated already that we shall use an exponential function, Eq. (3), to parametrize the $`p_T`$-spectrum. One of the reasons for our choice was the general accessibility of $`T_{\mathrm{eff}}`$ for all the data sets; $`T_{\mathrm{eff}}`$ equals $`\langle p_T\rangle /2`$, a quantity which is usually quoted together with the data. The best method would be to use directly the measured $`dn^{-}/d^2p_T`$, but most of the data have not been published in a useful form. The pion rapidity density $`dn^{-}/dy`$ was derived from the NA35 and NA49 measurements of the negative hadron rapidity density $`dn^{h^{-}}/dy`$, by a simple scaling with a factor 0.9 .
One should also specify the $`p_T`$ value representative for a given $`p_T`$-interval, both in order to calculate $`\langle f\rangle `$ and to allow for a meaningful model comparison, i.e. to properly place the calculated points in the plot of $`\langle f\rangle (p_T)`$. Fortunately, the result is almost insensitive to systematic errors in the choice of the representative $`p_T`$ value. To estimate the effect of this error we have shifted each measured point by 10% up and down in $`p_T`$ from the average $`p_T`$ for the interval, as shown in Fig. 2. The resulting smearing is elongated along the actual shape of the entire distribution. The systematic error in the choice of the $`p_T`$ value representative for a given $`p_T`$ interval is therefore essentially harmless.
With the S-nucleus collision data from NA35 and NA44 and with the $`\pi `$-p collision data from NA22 we had yet another problem: the two-pion correlation functions of these data have not been fitted with the proper functional form which includes the out-longitudinal cross term $`R_{ol}`$ . For those data the resulting volume term in the denominator of Eq. (5) reduces to $`R_sR_oR_l`$. To estimate the systematic error arising from the omission of the cross term we considered the ratio of the expressions for $`\langle f\rangle `$ with and without the cross term:
$$\frac{\langle f\rangle }{\langle f\rangle (R_{ol}=0)}=\frac{1}{\sqrt{1-\frac{R_{ol}^4}{R_o^2R_l^2}}}.$$
(6)
With the explicit expressions given in and the Cauchy inequality one shows that $`R_{ol}^4<R_o^2R_l^2`$, such that this ratio is always well-defined. It must be 1 at midrapidity since there $`R_{ol}=0`$ by symmetry. Larger values of 1.2–1.3 were only found for data far away from mid-rapidity, for which the measured cross term was comparable to $`R_o`$ or $`R_l`$. In this publication we have considered data for which no cross term was included in the analysis only near mid-rapidity, where $`R_{ol}`$ is small. We estimate that this procedure limits the (downward) systematic error on $`\langle f\rangle `$ due to this effect to less than 5%.
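The correction factor of Eq. (6) is trivial to evaluate; a short sketch with invented radii:

```python
import numpy as np

def crossterm_ratio(Ro, Rl, Rol):
    """Eq. (6): downward bias on <f> when the out-longitudinal cross term
    R_ol is dropped from the Gaussian fit (radii in any common unit)."""
    return 1.0 / np.sqrt(1.0 - Rol**4 / (Ro**2 * Rl**2))

# The ratio is 1 at midrapidity (R_ol = 0) and grows once R_ol becomes
# comparable to R_o or R_l.
print(crossterm_ratio(5.0, 6.0, 0.0))   # 1.0
print(crossterm_ratio(5.0, 6.0, 4.0))   # ~1.18
```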
Results. In Figures 3–6 we show the average phase-space density as a function of $`p_T`$, for different systems and rapidity bins, as extracted from Eq. (5). The error bars reflect the statistical errors of the single-particle spectra and correlation radii, but not the systematic uncertainties discussed above (including the dominant uncertainty in the intercept $`\lambda `$). We also plot for comparison the Bose-Einstein distribution (1) for a static system ($`u(x)=0`$) and different temperatures $`T`$. In doing so we set $`E\approx \sqrt{m^2+p_T^2}`$, which is justified if the longitudinal pair momentum can be neglected. Since the correlation data which we used were analyzed in the so-called “fixed longitudinal co-moving system” (FLCMS), in which the pair rapidity corresponding to the center of the rapidity bin vanishes, this approximation holds as long as the considered rapidity interval is narrow. Rapidity intervals of 0.5 (as in the NA49 Pb-Pb data) and 1 (as for the NA35 S-nucleus data) result in a systematic overestimate of the theoretical distribution function by approximately 2% and 7%, respectively.
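The static reference curves follow directly from the Bose-Einstein form with $`E\approx \sqrt{m^2+p_T^2}`$; a minimal sketch over an illustrative temperature range:

```python
import numpy as np

# Static Bose-Einstein reference curves: f = 1/(exp(E/T) - 1).
m_pi = 0.1396                                   # GeV
pT = np.linspace(0.05, 0.8, 6)                  # GeV
for T in (0.100, 0.120, 0.140):                 # GeV
    f = 1.0 / np.expm1(np.sqrt(m_pi**2 + pT**2) / T)
    print(T, np.round(f, 4))
```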
From the results presented in Figures 3–6, one may draw the following conclusions:
1. Universal average phase-space density at freeze-out:
Even though the heavy-ion data span about an order of magnitude in multiplicity density ($`dn^{-}/dy`$(S-S)=22, $`dn^{-}/dy`$(S-Cu)=24, $`dn^{-}/dy`$(S-Ag)=28, $`dn^{-}/dy`$(S-Au)=37, and $`dn^{-}/dy`$(Pb-Pb)=40–185, depending on rapidity), the resulting average phase-space densities vary by much less. Given the large error bars, all the nuclear collision data at mid-rapidity from the SPS experiments in Figs. 3,4 are almost indistinguishable. Freeze-out happens in all cases at comparable values of the average phase-space density.
2. Rough agreement with the Bose-Einstein distribution:
The dashed lines in Fig. 3 indicate thermal Bose-Einstein distributions for static sources at various freeze-out temperatures. A rough comparison with the SPS heavy-ion data indicates consistency with temperatures in the range of 100-140 MeV. Thermal freeze-out temperatures in this domain were recently obtained in analyses of the measured spectra and correlation functions .
3. Multiparticle symmetrization effects are small:
Irrespective of the collision system and $`p_T`$, we find phase-space densities smaller than 0.5. For $`f\ll 1`$ the Bose-Einstein phase-space enhancement is dominated by two-particle symmetrization effects, and multiparticle symmetrization effects are weak . This is an important consistency check for the current practice of calculating the two-particle correlation function from the two-particle symmetrized contributions only. For $`f<0.5`$ the system is far away from a pion laser . We find no sign of a striking pion excess or a pion condensate.
4. Radial flow:
Looking in more detail, in particular applying a logarithmic scale as in Fig. 4, one finds that the data indicate a somewhat slower decrease with increasing $`p_T`$ than the Bose-Einstein curve. Particularly strong differences are seen for the NA49 Pb-Pb data at mid-rapidity ($`2.9<y<3.4`$). These data can be fitted with the function $`\mathrm{exp}[1.1(1)-p_T/0.31(6)\mathrm{GeV}]`$ ($`\chi ^2`$/ndf=0.2), presented in Fig. 4 by a dotted line, which has a considerably flatter slope than the Bose-Einstein distribution. This behaviour can be reproduced by a model for the emission function which includes radial collective expansion and whose parameters are adjusted to reproduce the single- and two-particle spectra . The (strong) radial expansion adds extra transverse momentum to the particles (Doppler blue shift), i.e. the local $`f`$ values appear in the lab frame at a higher $`p_T`$ than in the rest frame of the effective source. A detailed study will be published elsewhere .
5. Decoupling at high temperature in $`\pi `$-p collisions:
In contrast to freeze-out in nuclear collisions, which takes place in two steps (chemical freeze-out of particle abundances at around $`T_{\mathrm{chem}}\approx `$ 170–180 MeV , thermal freeze-out of momentum spectra at around $`T_{\mathrm{therm}}\approx 120`$ MeV ), pion production in $`\pi `$-p collisions is essentially immediate, without the second evolution stage, and therefore common chemical and thermal freeze-out temperatures of around 170–180 MeV should be expected. The data on $`\langle f\rangle `$ are indeed consistent with this expectation, as seen in Fig. 5.
6. Rapidity dependence:
A certain departure from the universal scaling is seen for the data at rapidities close to the projectile rapidity, see Fig. 6, both at the AGS and the SPS; but again the results at these two widely different beam energies are mutually consistent and agree with the expectations from a thermalized expanding source. The sources at AGS and SPS energies show strong longitudinal expansion at freeze-out, but with maximal longitudinal flow rapidities well below the beam rapidity ($`\eta _{\mathrm{flow},\mathrm{max}}\approx 1.7`$ at the SPS and $`\approx 1.1`$ at the AGS ). Pions with CMS rapidities larger than $`\eta _{\mathrm{flow},\mathrm{max}}`$ thus come from the tail of the thermal longitudinal momentum distribution . The decrease of $`\langle f\rangle `$ seen near midrapidity as a function of $`p_T`$ is thus mirrored near the beam rapidity as a function of $`y`$.
Acknowledgement: This work was initiated during the Heavy Ion Workshop at the Institute for Nuclear Theory (Seattle) in March 1998, and U.H. would like to thank the INT for its hospitality. We also acknowledge support by DAAD, DFG, GSI, BMBF, and the US Department of Energy.
no-problem/9902/hep-ph9902448.html
# Comment on “Majoron emitting neutrinoless double beta decay in the electroweak chiral gauge extensions”
## Abstract
We point out that if the majoron-like scheme is implemented within a 331 model, there must exist at least three different mass scales for the scalar vacuum expectation values in the model.
preprint: IFT-P.020/99 February 1999
In a recent paper by Pisano and Sharma the majoron scheme was implemented in a 331 model . In that paper two different scales of vacuum expectation values (VEV) in the scalar sector have been considered: one related to the electroweak symmetry breaking and the other to the $`SU(3)`$ breaking. Here we show that the model is consistent with the experimental value of the $`\rho `$ parameter only if three mass scales are introduced.
It is a well known fact that Higgs triplets under the standard $`SU(2)_L\otimes U(1)_Y`$ gauge group must have vacuum expectation values which are smaller than the electroweak scale in order not to spoil the agreement between the theoretical and the experimental value of the electroweak $`\rho `$ parameter ($`\rho =M_W^2/(M_Z^2c_W^2)`$). This is due to the fact that triplets and doublets give different contributions to the $`W^\pm `$ and $`Z`$-boson masses. This result does not depend on the hypercharge of the Higgs triplet. For instance, a Higgs doublet and a Higgs triplet with $`Y=2`$ with spontaneous (Majoron Model ) or explicit (non-Majoron Model ) lepton number violation give:
$$M_W^2=\frac{g^2}{4}(v_D^2+2v_T^2),M_Z^2=\frac{g^2}{4c_W^2}\left(v_D^2+4v_T^2\right)$$
(1)
where $`v_D`$ and $`v_T`$ denote the VEVs of the doublet and the triplet, respectively. Notice that the condition $`v_D=v_T`$ violates the $`\rho =1`$ condition. We cannot even use $`v_D^2+2v_T^2=(\text{246 GeV})^2`$. The only way to avoid this problem is to have $`v_T\lesssim 5.5`$ GeV (if $`v_D=246`$ GeV), using the present experimental value for the $`\rho `$-parameter (see below). Thus, we see that this sort of model has two different mass scales: $`v_T`$ and $`v_D`$.
Next, let us consider a similar situation in the context of the 331 model . In that model, in order to give mass to all the fermions, it is necessary to introduce three Higgs triplets and a Higgs sextet. Two of the triplets and the sextet have the neutral component in a doublet of the subgroup $`SU(2)`$; we denote the respective VEVs by $`v_\eta ,v_\rho `$ and $`v_{DS}`$. The other triplet has its neutral component transforming as a singlet under $`SU(2)`$, and the sextet has another neutral field transforming as a triplet under $`SU(2)`$. Let us denote their respective VEVs by $`v_\chi `$ (the VEV which is in control of the $`SU(3)`$ breaking) and $`v_{TS}`$. The $`W^\pm `$ and $`Z`$-boson masses, neglecting terms of the order $`v_iv_j/v_\chi ^2`$ ($`i,j=\eta ,\rho ,DS,TS`$), are given by:
$$M_W^2\frac{g^2}{4}(v_\eta ^2+v_\rho ^2+2v_{DS}^2+4v_{TS}^2),$$
(2)
and
$$M_Z^2\frac{g^2}{4c_W^2}\left(v_\eta ^2+v_\rho ^2+2v_{DS}^2+8v_{TS}^2\right),$$
(3)
respectively.
We see from Eqs. (2) and (3) that, as in the case of the $`SU(2)_L\otimes U(1)_Y`$ model given in Eq. (1), the triplet $`v_{TS}`$ contributes in a different way to the $`W^\pm `$ and $`Z`$-boson masses. We can estimate the order of magnitude of $`v_{TS}`$ by assuming that $`v_\eta \approx v_\rho \approx v_{DS}\approx \stackrel{~}{v}`$ and using the experimental value of the $`\rho `$-parameter: $`\rho =0.9998\pm 0.0008`$ . From Eqs. (2) and (3) we have
$$\rho =\frac{1+r}{1+2r},\sqrt{r}=\frac{v_{TS}}{\stackrel{~}{v}},$$
(4)
which implies the upper limit $`r\lesssim 0.001`$, or $`v_{TS}\lesssim 3.89`$ GeV for $`\stackrel{~}{v}^2=(246/2)^2\mathrm{GeV}^2`$. If we make $`v_{TS}=v_\eta =v_\rho =v_{DS}`$, as has been done in Ref. , we violate the $`\rho =1`$ condition (it gives $`\rho =2/3`$). In conclusion, the model must have at least three different mass scales in the scalar vacuum expectation values: $`v_{TS},\stackrel{~}{v}`$ and $`v_\chi `$.
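The quoted numbers follow from a one-line inversion of Eq. (4); a minimal sketch (variable names are ours):

```python
import numpy as np

# Solve rho = (1+r)/(1+2r) for r at the 1-sigma lower edge of the measured
# rho, then translate into v_TS with vtilde = 246/2 GeV.

rho_min = 0.9998 - 0.0008
r_max = (1.0 - rho_min) / (2.0 * rho_min - 1.0)
vtilde = 246.0 / 2.0                       # GeV
v_TS_max = np.sqrt(r_max) * vtilde
print(f"r <~ {r_max:.4f},  v_TS <~ {v_TS_max:.2f} GeV")   # ~0.001, ~3.89 GeV
```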
###### Acknowledgements.
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de Ciência e Tecnologia (CNPq) and by Programa de Apoio a Núcleos de Excelência (PRONEX). One of us (CP) would like to thank Coordenadoria de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for financial support.
no-problem/9902/hep-ph9902404.html
The size and energy dependence of the hadron helicity-flip amplitude is an interesting problem in the study of asymptotic properties and of the role of the spin-dependent interaction, and it is also an important issue in the polarimetry studies based on the Coulomb-Nuclear Interference (CNI) . If the hadronic part of the proton-proton interaction is helicity conserving, the CNI analysing power would then be due to the interference between a real electromagnetic helicity-flip amplitude and an imaginary hadronic helicity-nonflip amplitude.
However, the hadronic interaction may not conserve helicity in small angle scattering. Helicity conservation does not follow from QCD in a region where chiral symmetry is spontaneously broken. In Regge theory the Pomeron is usually assumed to be helicity conserving. However, this is merely an assumption which seems to be wrong; recent experimental results on the central production of mesons and the observation of a nontrivial azimuthal dependence demonstrate a nonzero helicity transfer by the effective Pomeron . Earlier it was shown that unitarity generates different phases in the helicity-flip and nonflip Pomeron contributions and leads to a nonzero analysing power .
The CNI analysing power would also certainly change if there were any nonzero hadronic single-helicity-flip amplitude.
The above issues are considered in the recent survey . Among the new results, the most interesting one is the unexpected bound for the single-helicity-flip amplitude $`F_5`$ of elastic $`pp`$-scattering, i.e. the function
$$\widehat{F}_5(s,0)\equiv \frac{mF_5(s,t)}{\sqrt{-t}}|_{t=0}$$
cannot increase at $`s\to \infty `$ faster than
$$cs\mathrm{ln}^3s,$$
while for the helicity nonflip amplitudes there is the Froissart-Martin bound $`cs\mathrm{ln}^2s`$.
In this note we show that, in fact, a stronger bound for the function $`\widehat{F}_5(s,0)`$ can be obtained if one takes unitarity into account explicitly, e.g. using a unitarization method based on the $`U`$-matrix approach .
This method is based on the unitary representation for helicity amplitudes of elastic $`pp`$-scattering:
$$F_{\lambda _1,\lambda _2,\lambda _3,\lambda _4}(s,b)=U_{\lambda _1,\lambda _2,\lambda _3,\lambda _4}(s,b)+i\rho (s)\underset{\lambda ^{},\lambda ^{\prime \prime }}{}U_{\lambda _1,\lambda _2,\lambda ^{},\lambda ^{\prime \prime }}(s,b)F_{\lambda ^{},\lambda ^{\prime \prime },\lambda _3,\lambda _4}(s,b),$$
(1)
where the $`\lambda `$’s are the initial and final proton helicities. The $`F_i`$ are the helicity amplitudes in the standard notation, i.e.
$$F_1\equiv F_{1/2,1/2,1/2,1/2},F_2\equiv F_{1/2,1/2,-1/2,-1/2},F_3\equiv F_{1/2,-1/2,1/2,-1/2}$$
and
$$F_4\equiv F_{1/2,-1/2,-1/2,1/2},F_5\equiv F_{1/2,1/2,1/2,-1/2}.$$
The kinematical function $`\rho (s)\to 1`$ at $`s\gg 4m^2`$ and will be replaced by unity in the following.
The functions $`U_i(s,b)`$ can be treated similarly to the eikonal, i.e. they can be considered as input amplitudes.
The explicit solution of Eqs. (1) has the following form:
$$F_1(s,b)=\frac{\stackrel{~}{U}_1(s,b)[1-iU_1(s,b)]-i\stackrel{~}{U}_2(s,b)U_2(s,b)}{[1-iU_1(s,b)]^2-[U_2(s,b)]^2},$$
$$F_3(s,b)=\frac{\stackrel{~}{U}_3(s,b)[1-iU_3(s,b)]-i\stackrel{~}{U}_4(s,b)U_4(s,b)}{[1-iU_3(s,b)]^2-[U_4(s,b)]^2},$$
where
$$\stackrel{~}{U}_i(s,b)=U_i(s,b)+2U_5(s,b)F_5(s,b)$$
and
$$F_5(s,b)=\frac{U_5(s,b)}{[1-iU_1(s,b)-iU_2(s,b)][1-iU_3(s,b)-iU_4(s,b)]-4U_5^2(s,b)}.$$
We consider the two cases. First, we suppose that the helicity nonflip functions $`U_1(s,b)`$ and $`U_3(s,b)`$ are the dominant ones.
In this case one can get
$$F_5(s,t)=\frac{s}{\pi ^2}\int _0^{\infty }b\,db\,\frac{U_5(s,b)}{[1-iU_1(s,b)][1-iU_3(s,b)]}J_1(b\sqrt{-t}).$$
(2)
Unitarity requires that Im $`U_{1,3}(s,b)\geq 0`$. The functions $`U_{1,3}(s,b)`$ could be different. For our purposes, however, it is safe to assume that they are the same, $`U_1(s,b)=U_3(s,b)=U(s,b)`$. For the function $`U(s,b)`$ we use a simple form
$$U(s,b)=gs^\mathrm{\Delta }e^{-\mu b}.$$
(3)
This is a rather general parameterization for $`U(s,b)`$ which provides correct analytical properties in the complex $`t`$–plane, i.e. it is consistent with the spectral representation for the function $`U(s,b)`$ :
$$U(s,b)=\frac{\pi ^2}{s}\int _{t_0}^{\infty }\rho (s,t)K_0(b\sqrt{t})\,dt.$$
(4)
We do not use here model features and do not consider the detailed structure of the helicity functions $`U_i`$, but appeal to reasonable arguments of a general nature. To maximize the function $`U_5(s,b)`$ we take it in the form $`U_5(s,b)=aU(s,b)`$, where $`|a|<1`$. Then from Eq. (2) it follows that at $`s\to \infty `$:
$$|\widehat{F}_5(s,0)|\leq cs\mathrm{ln}^2s.$$
(5)
This means that the magnitude of the ratio
$$r_5(s,0)\equiv 2\widehat{F}_5(s,0)/[F_1(s,0)+F_3(s,0)]$$
cannot increase with energy and will not exceed a constant at $`s\to \infty `$.
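The logarithmic growth behind the bound (5) can be illustrated numerically. With the input form (3) and $`U_5=aU`$, the leading small-$`\sqrt{-t}`$ behaviour of Eq. (2) reduces $`\widehat{F}_5(s,0)`$ to an impact-parameter integral whose modulus grows like $`\mathrm{ln}^2s`$; the parameter values in this sketch are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

# Using J_1(x) ~ x/2, F5_hat(s,0) is proportional (up to constant factors)
# to s * |int_0^inf b^2 a*U / (1 - i U)^2 db| with U = g * s**Delta * exp(-mu*b).
g, Delta, mu, a = 1.0, 0.12, 1.0, 0.5    # invented parameters

def integrand(b, s, part):
    U = g * s**Delta * np.exp(-mu * b)
    val = b**2 * a * U / (1.0 - 1j * U)**2
    return val.real if part == "re" else val.imag

for s in (1e2, 1e4, 1e6, 1e8):
    re = quad(integrand, 0.0, 60.0, args=(s, "re"), limit=200)[0]
    im = quad(integrand, 0.0, 60.0, args=(s, "im"), limit=200)[0]
    I = abs(re + 1j * im)
    print(f"s = {s:.0e}: |integral| = {I:.3f}, ratio to ln^2 s = {I/np.log(s)**2:.4f}")
```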
This result is of a general nature and holds also in the opposite case, i.e. in the case when the function $`U_5(s,b)`$ is the dominant one. We have for the amplitude $`F_5(s,t)`$ the following representation
$$F_5(s,t)=\frac{s}{\pi ^2}\int _0^{\infty }b\,db\,\frac{U_5(s,b)}{1-4U_5^2(s,b)}J_1(b\sqrt{-t}).$$
(6)
Using for $`U_5(s,b)`$ the functional dependence in the form of Eq. (3), it can easily be shown that the same bound, Eq. (5), holds for the single-helicity-flip amplitude $`\widehat{F}_5`$.
Thus we can state that general principles do not allow a rising behavior of $`|r_5(s,0)|`$. The experimental data as well as most of the model predictions are consistent with this bound . This result allows us to hope that the contribution of the single-helicity-flip amplitude can be kept under control and that an effective use of the CNI polarimeter will be possible.
The above results were obtained in the impact parameter representation for simplicity. They can easily be reproduced using the partial wave expansion.
It is the accurate account of unitarity for the helicity amplitudes that leads to Eq. (5): due to unitarity the amplitude $`F_5(s,b)`$ has a peripheral dependence on the variable $`b`$ at high energy and
$$|F_5(s,b=0)|\to 0$$
at $`s\to \infty `$. This is a consequence of the explicit unitarity representation for the helicity amplitudes, and it means that the assumption $`F_5(s,b)=constant`$ at $`b<R(s)`$ appears to be inadequate (however, it remains good for the helicity-nonflip amplitudes).
Thus, as was shown, we have the asymptotic bound
$$|r_5(s,0)|\leq \mathrm{constant}$$
at $`s\to \infty `$.
To conclude, it is worth noting that only model-dependent estimations exist for the magnitude of the ratio $`|r_5(s,0)|`$, but a rise of $`|r_5(s,0)|`$ at $`s\to \infty `$ can be excluded on unitarity grounds.
no-problem/9902/hep-th9902050.html
Facts and Notation:
* O \- octonions: nonassociative, noncommutative, basis $`\{1=e_0,e_1,\ldots ,e_7\}`$;
* Q \- quaternions: associative, noncommutative, basis $`\{1=q_0,q_1,q_2,q_3\}`$;
* C \- complex numbers: associative, commutative, basis $`\{1,i\}`$;
* R \- real numbers.
* $`\text{K}_L,\text{K}_R`$ \- the adjoint algebras of left and right actions of an algebra K on itself.
* K(2) - 2x2 matrices over the algebra K (to be identified with Clifford algebras);
* $`𝒞(p,q)`$ \- the Clifford algebra of the real spacetime with signature (p+,q-);
* $`{}_{}{}^{2}\text{K}`$ \- 2x1 matrices over the algebra K (to be identified with spinor spaces);
* $`\text{O}_L`$ and $`\text{O}_R`$ are identical, isomorphic to R(8) (8x8 real matrices),
64-dimensional bases are of the form $`1,e_{La},e_{Lab},e_{Labc}`$, or $`1,e_{Ra},e_{Rab},e_{Rabc}`$, where, for example, if $`x\in \text{O}`$, then $`e_{Lab}[x]\equiv e_a(e_bx)`$, and $`e_{Rab}[x]\equiv (xe_a)e_b`$ (see );
* $`\text{Q}_L`$ and $`\text{Q}_R`$ are distinct, both isomorphic to Q, bases
$`\{1=q_{L0},q_{L1},q_{L2},q_{L3}\}`$ and $`\{1=q_{R0},q_{R1},q_{R2},q_{R3}\}`$;
* $`\text{C}_L`$ and $`\text{C}_R`$ are identical, both isomorphic to C (so we only need use C itself);
* P = $`\text{C}\otimes \text{Q}`$, 8-dimensional;
* $`\text{P}_L`$ = $`\text{C}_L\otimes \text{Q}_L`$, isomorphic to $`\text{C}(2)\simeq 𝒞(3,0)\simeq \text{C}\otimes 𝒞(0,2)`$ (P is the spinor space of $`\text{P}_L`$, consisting of a pair of Pauli spinors; the doubling is due to the internal action of $`\text{Q}_R`$, which commutes with $`\text{P}_L`$ actions);
* T = $`\text{C}\otimes \text{Q}\otimes \text{O}`$, 64-dimensional;
* $`\text{T}_L`$ = $`\text{C}_L\otimes \text{Q}_L\otimes \text{O}_L`$, isomorphic to $`\text{C}(16)\simeq 𝒞(0,9)\simeq \text{C}\otimes 𝒞(0,8)`$ (as was the case with P, the algebra T is the spinor space of $`\text{T}_L`$, its dimension twice what is expected due to the internal action of $`\text{Q}_R`$, the only part of $`\text{T}_R`$ missing from $`\text{T}_L`$);
* $`\text{P}_L(2)\simeq \text{C}(4)\simeq \text{C}\otimes 𝒞(1,3)`$, the Dirac algebra of (1,3)-spacetime (the major difference being that the spinor space, $`{}_{}{}^{2}\text{P}`$, contains an extra internal $`SU(2)`$ degree of freedom associated with $`\text{Q}_R`$);
* $`\text{T}_L(2)\simeq \text{C}(32)\simeq \text{C}\otimes 𝒞(1,9)`$, the Dirac algebra of (1,9)-spacetime (spinor space $`{}_{}{}^{2}\text{T}`$; one internal $`SU(2)`$).
Some Lie algebras and their bases:
* $`so(7)`$ \- {$`e_{Lab}`$: a,b = 1,…,7};
* $`so(6)`$ \- {$`e_{Lpq}`$: p,q = 1,…,6};
* $`LG_2`$ \- {$`e_{Lab}-e_{Lcd}`$: $`e_ae_b-e_ce_d=0`$, a,b,c,d = 1,…,7};
* $`LG_2`$ explicitly ($`LG_2`$ is the 14-d Lie algebra of $`G_2`$, the automorphism group of O):
$`\begin{array}{cc}e_{L24}-e_{L56},\hfill & e_{L56}-e_{L37};\hfill \\ e_{L35}-e_{L67},\hfill & e_{L67}-e_{L41};\hfill \\ e_{L46}-e_{L71},\hfill & e_{L71}-e_{L52};\hfill \\ e_{L57}-e_{L12},\hfill & e_{L12}-e_{L63};\hfill \\ e_{L61}-e_{L23},\hfill & e_{L23}-e_{L74};\hfill \\ e_{L72}-e_{L34},\hfill & e_{L34}-e_{L15};\hfill \\ e_{L13}-e_{L45},\hfill & e_{L45}-e_{L26};\hfill \end{array}`$
* $`su(3)`$ \- {$`e_{Lpq}-e_{Lmn}`$: $`e_pe_q-e_me_n=0`$, p,q,m,n = 1,…,6};
* $`su(3)`$ explicitly (this is the Lie algebra of $`SU(3)`$, the stability group of $`e_7`$, a subgroup of $`G_2`$):
$`\begin{array}{cc}e_{L24}-e_{L56};\hfill & \\ e_{L35}-e_{L41};\hfill & \\ e_{L46}-e_{L52};\hfill & \\ e_{L12}-e_{L63};\hfill & \\ e_{L61}-e_{L23};\hfill & \\ e_{L34}-e_{L15};\hfill & \\ e_{L13}-e_{L45},\hfill & e_{L45}-e_{L26}.\hfill \end{array}`$
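These pairings can be checked mechanically once a multiplication table for the imaginary units is fixed. The sketch below assumes the index convention $`e_ae_{a+1}=e_{a+3}`$ (indices mod 7), under which every pair listed above satisfies $`e_pe_q=e_me_n`$:

```python
# Octonion imaginary-unit products from the seven quaternionic triples of
# the assumed convention e_a e_{a+1} = e_{a+3} (indices mod 7).
triples = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
           (5, 6, 1), (6, 7, 2), (7, 1, 3)]

def mul(a, b):
    """Return (sign, index) with e_a e_b = sign * e_index (index 0 means 1)."""
    if a == b:
        return (-1, 0)                      # e_a^2 = -1
    for t in triples:
        if a in t and b in t:
            c = next(x for x in t if x not in (a, b))
            # cyclic order within the triple fixes the sign
            sign = 1 if (t.index(b) - t.index(a)) % 3 == 1 else -1
            return (sign, c)

# Check the su(3) pairings e_{Lpq} - e_{Lmn} with e_p e_q = e_m e_n:
pairs = [((2, 4), (5, 6)), ((3, 5), (4, 1)), ((4, 6), (5, 2)),
         ((1, 2), (6, 3)), ((6, 1), (2, 3)), ((3, 4), (1, 5)),
         ((1, 3), (4, 5)), ((4, 5), (2, 6))]
print(all(mul(*pq) == mul(*mn) for pq, mn in pairs))   # True
```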
The spinor space of $`\text{T}_L(2)\simeq \text{C}\otimes 𝒞(1,9)`$ is $`{}_{}{}^{2}\text{T}`$. This can be interpreted as the direct sum of a family of leptons and quarks, and its antifamily (Dirac spinors, including the righthanded neutrino). The standard symmetry, $`U(1)\times SU(2)\times SU(3)`$, was derived. Here we shall rederive this symmetry from a different and more accessible direction. The steps taken will be:
1. Reduce the spinor space T to P (equivalent to reducing (1,9)-Dirac spinors to (1,3)-Dirac spinors);
2. Carry this reduction to $`\text{T}_L(2)`$ and see what symmetries (bivectors) survive. To make things easier, and because it’s all that is necessary, we’ll do this by focusing on the “Pauli” algebras, $`\text{T}_L\simeq \text{C}\otimes 𝒞(0,8)`$ and $`\text{P}_L\simeq \text{C}\otimes 𝒞(0,2)`$, and their respective spinor spaces, T and P.
Projection operators and some of their actions:
$$\rho _\pm =\frac{1}{2}(1\pm ie_7);\rho _{L\pm }=\frac{1}{2}(1\pm ie_{L7});\rho _{R\pm }=\frac{1}{2}(1\pm ie_{R7});$$
(1)
(note that $`\rho _+\rho _{}=0;\rho _++\rho _{}=1`$);
$$\rho _{L\pm }[e_7]=\rho _\pm e_7=\rho _{R\pm }[e_7]=e_7\rho _\pm =\mp i\rho _\pm ;$$
(2)
$$\rho _\pm e_p\rho _\pm =\rho _{L\pm }\rho _{R\pm }[e_p]=0,p=1,\ldots ,6.$$
(3)
In the context developed in , the spinor spaces $`\rho _+\text{T}`$ and $`\rho _{-}\text{T}`$ are the family and antifamily parts of $`\text{T}=\rho _+\text{T}+\rho _{-}\text{T}`$. This partial reduction of T is insufficient: $`\rho _\pm \text{T}\neq \text{P}`$. We need one further step (performed on the family half of T):
$$\rho _+\text{T}\rho _+=\rho _{L+}\rho _{R+}[\text{T}]=\rho _+\text{P}.$$
(4)
The corresponding reduction on the Pauli algebra $`\text{T}_L`$ is:
$$\text{T}_L\to \rho _{R+}\rho _{L+}\text{T}_L\rho _{L+}\rho _{R+}.$$
(5)
A 1-vector basis for $`𝒞(0,8)`$ in $`\text{T}_L`$, and this same set after reduction, is:
$$\{ie_{L7}q_{Lr},e_{Lp}:r=1,2,p=1,\ldots ,6\}\to \rho _{L+}\rho _{R+}\{q_{Lr}:r=1,2\},$$
(6)
a 1-vector basis for $`𝒞(0,2)`$ in $`\text{P}_L`$. In the broader Dirac context this is equivalent to reducing the 1-vectors of $`𝒞(1,9)`$ to those of $`𝒞(1,3)`$.
The 2-vectors of $`𝒞(1,9)`$ form a basis for $`so(1,9)`$, the Lie algebra of the Lorentz group of (1,9)-spacetime. Using the 1-vector basis in (6), a 2-vector basis for $`𝒞(0,8)`$ (basis for $`so(8)`$) is:
$$\{q_{L3},e_{Lpq},ie_{Lp7}q_{Lr}:r=1,2,p,q=1,\ldots ,6\}.$$
(7)
We’ll reduce this set in steps. First,
$$\rho _{L+}\{q_{L3},e_{Lpq},ie_{Lp7}q_{Lr}:r=1,2,p,q=1,\ldots ,6\}\rho _{L+}=\rho _{L+}\{q_{L3},e_{Lpq}\},$$
(8)
a basis for $`so(2)\times so(6)`$. The next step is:
$`\rho _{R+}\rho _{L+}\{q_{L3},e_{Lpq}:p,q=1,\ldots ,6\}\rho _{R+}=\rho _{R+}\rho _{L+}\{q_{L3},\mathrm{?}\}.`$
In particular we need to look at
$$\rho _{R+}e_{Lpq}\rho _{R+}.$$
(9)
We’ll look at some examples, first with $`e_pe_q\neq e_7`$, then $`e_pe_q=e_7`$, which will give us the general result. Consider $`\rho _{R+}e_{L12}\rho _{R+}`$. Because $`\rho _{L+}`$ commutes with $`e_{Lpq},p,q\neq 7`$, $`\rho _{L+}e_{Lpq}\rho _{L+}=\rho _{L+}e_{Lpq}`$. But $`\rho _{R+}`$ does not commute with these $`e_{Lpq}`$. To see what it does we’ll re-express our chosen element $`e_{L12}`$ as
$$e_{L12}=\frac{1}{2}(-e_{R12}+e_{R4}+e_{R63}+e_{R57})$$
(10)
(see ). $`\rho _{R+}`$ commutes with $`e_{R12}`$ and $`e_{R63}`$, but becomes $`\rho _{R-}`$ when commuted with $`e_{R4}`$ and $`e_{R57}`$ (recall: $`\rho _{R+}\rho _{R-}=0`$). Therefore,
$$\rho _{R+}e_{L12}\rho _{R+}=\frac{1}{2}\rho _{R+}(-e_{R12}+e_{R63})=\frac{1}{2}\rho _{R+}(e_{L12}-e_{L63})$$
(11)
(see for the last equality).
Finally we need to look at the three terms $`e_{Lpq}`$ for which $`e_pe_q=e_7`$. These can be re-expressed:
$$\begin{array}{c}e_{L13}=\frac{1}{2}(-e_{R13}+e_{R7}+e_{R26}+e_{R45}),\\ e_{L26}=\frac{1}{2}(+e_{R13}+e_{R7}-e_{R26}+e_{R45}),\\ e_{L45}=\frac{1}{2}(+e_{R13}+e_{R7}+e_{R26}-e_{R45}).\end{array}$$
(12)
All terms in (12) commute with $`\rho _{R+}`$, so
$$\rho _{R+}\{e_{L13},e_{L26},e_{L45}\}\rho _{R+}=\rho _{R+}\{e_{L13},e_{L26},e_{L45}\}.$$
(13)
Another basis for the space spanned by the set $`\rho _{R+}\{e_{L13},e_{L26},e_{L45}\}`$ is:
$$\rho _{R+}\{e_{L13}-e_{L26},e_{L26}-e_{L45},e_{L13}+e_{L26}+e_{L45}\}.$$
(14)
But $`e_{R7}=\frac{1}{2}(-e_{L7}+e_{L13}+e_{L26}+e_{L45})`$, so
$$e_{L13}+e_{L26}+e_{L45}=e_{L7}+2e_{R7},$$
(15)
which commutes with all the other surviving elements, and therefore generates a copy of $`U(1)`$.
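Equation (15) can be verified directly in the 8×8 real matrix representation of the left and right multiplication maps, again assuming the $`e_ae_{a+1}=e_{a+3}`$ (mod 7) convention:

```python
import numpy as np

# Check e_{L13} + e_{L26} + e_{L45} = e_{L7} + 2 e_{R7} as 8x8 matrices,
# using the assumed triple convention for the octonion products.
triples = [(1, 2, 4), (2, 3, 5), (3, 4, 6), (4, 5, 7),
           (5, 6, 1), (6, 7, 2), (7, 1, 3)]

def mul(a, b):                       # e_a e_b = sign * e_c (0 denotes 1)
    if a == 0: return (1, b)
    if b == 0: return (1, a)
    if a == b: return (-1, 0)
    for t in triples:
        if a in t and b in t:
            c = next(x for x in t if x not in (a, b))
            return (1 if (t.index(b) - t.index(a)) % 3 == 1 else -1, c)

def eL(a):                           # matrix of x -> e_a x in the basis e_0..e_7
    M = np.zeros((8, 8))
    for b in range(8):
        s, c = mul(a, b)
        M[c, b] = s
    return M

def eR(a):                           # matrix of x -> x e_a
    M = np.zeros((8, 8))
    for b in range(8):
        s, c = mul(b, a)
        M[c, b] = s
    return M

lhs = eL(1) @ eL(3) + eL(2) @ eL(6) + eL(4) @ eL(5)   # e_{L13}+e_{L26}+e_{L45}
rhs = eL(7) + 2 * eR(7)
print(np.allclose(lhs, rhs))                           # True
```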
Taken all together and generalized over all $`e_{Lpq}`$ these results imply
$$\begin{array}{c}\rho _{R+}\rho _{L+}\{q_{L3},e_{Lpq},ie_{Lp7}q_{Lr}:r=1,2,p,q=1,\ldots ,6\}\rho _{L+}\rho _{R+}\hfill \\ =\rho _{R+}\rho _{L+}\{q_{L3},e_{Lpq}\}\rho _{R+}\hfill \\ =\rho _{L+}\rho _{R+}\{q_{L3},e_{L7}+2e_{R7},e_{Lpq}-e_{Lmn}:p,q,m,n=1,\ldots ,6,e_pe_q-e_me_n=0\}.\hfill \end{array}$$
(16)
This is a basis for (see ”Some Lie algebras and their bases” above)
$$so(2)\times u(1)\times su(3).$$
(17)
The presence of the $`U(1)`$ generator $`e_{L7}+2e_{R7}`$ ensures that the quark charges will be $`-\frac{1}{3}`$ and $`+\frac{2}{3}`$, given an electron charge of $`-1`$. The $`so(2)`$ above is part of $`so(1,3)`$, a Lorentz generator.
Including the $`su(2)`$ from $`\text{Q}_R`$, the total symmetry reduction is
$$so(1,9)\times su(2)\to so(1,3)\times (u(1)\times su(2)\times su(3)).$$
(18)
As mentioned, with respect to the ”internal” symmetry $`u(1)\times su(2)\times su(3)`$, the spinor space $`{}_{}{}^{2}\text{T}`$ transforms as the direct sum of a family and antifamily of quarks and leptons (see for the derivation of charges).
Final notes:
* $`u(1)\times su(2)\times su(3)`$ is an internal symmetry, meaning it commutes with $`so(1,3)`$. However, $`su(3)`$ is a subalgebra of $`so(1,9)`$. It is a spacetime symmetry in this larger context; $`su(2)`$ is not.
* There are two other important distinctions to be briefly mentioned. A full resolution of the identity of T contains four members (the $`\rho _\pm `$ resolve the identity of $`\text{C}\otimes \text{O}`$). With these, T can be reduced to the $`su(2)`$ vector level, but only to the $`su(3)`$ multiplet level.
* And the idempotents of the resolution are $`su(3)`$ invariant, but not $`su(2)`$ invariant.
These distinctions are implicated in the fact that $`su(2)`$ is broken and chiral, while $`su(3)`$ is exact and nonchiral.
References:
G.M. Dixon, Division Algebras: Octonions, Quaternions, Complex Numbers, and the Algebraic Design of Physics (Kluwer, 1994).
G.M. Dixon, www.7stones.com/Homepage/history.html
no-problem/9902/astro-ph9902211.html
# No Increase of the Red-Giant-Branch Tip Luminosity Toward the Center of M31
## 1 Introduction
For more than a decade, the nature of the stellar population towards the center of M31 has been the subject of considerable debate. In the first such studies using modern CCD detectors, Mould (1986) and Mould & Kristian (1986) observed four fields located on the South-East minor axis of M31, at 20 kpc, 12 kpc, 7 kpc, and 5 kpc from the center, respectively. Rich & Mighell (1995) subsequently reached more central regions, into the inner 500 pc of the galaxy. Summarizing their own work based on WFPC1 data along with some previous work based on ground-based data, they confirmed that the first-ascent red giant branch (hereafter referred to as RGB) tip seemed to brighten towards the center of M31. At 7 kpc, the RGB tip appears to have the same luminosity as that observed for typical Galactic globular clusters ($`I\sim 20.5`$ mag at the distance of M31), while it becomes significantly brighter near the center of M31. Rich & Mighell (1995) mentioned the apparent contradiction between such an RGB tip luminosity brightening and its predicted theoretical dimming, due to a decrease in luminosity with increasing metallicity, as expected when going from the halo to the more metal-rich bulge stellar population. See, e.g., Bica et al. (1990) for the metal-rich nature of the M31 bulge. As also illustrated by Bica et al. (1991), higher metallicity populations normally have fainter RGBs, due to TiO blanketing in the $`I`$-band.
Renzini (1993, 1998) discussed in detail the simpler possibility that the apparently brightest RGB stars in the bulge of M31 were merely the result of image crowding, and concluded that this was indeed likely. Further support for this conclusion came from the work of Depoy et al. (1993) who obtained ground-based $`K`$-band, $`2.2\mu `$m photometry of a 604 $`\mathrm{arcmin}^2`$ field in Baade’s Window and carried out careful artificial star experiments. They simulated the appearance of Baade’s Window at the distance of M31 and obtained an artificial brightening, due to crowding, of more than 1 mag, reproducing the most luminous stars found by Rich & Mould (1991) and Davies et al. (1991).
This letter presents the analysis of our HST WFPC2 observations. These high-resolution images in the central part of the bulge of M31 provide us with an ideal means by which to construct deep color-magnitude diagrams (CMDs) of the stellar populations near the center of this galaxy. In addition, they allow us to quantitatively assess and take into account the degree of crowding.
## 2 Observations and Data Reduction
We obtained Hubble Space Telescope WFPC2 images during Cycles 5 and 6 with the F555W ($`V`$) and F814W ($`I`$) filters; our targets were three fields centered on super-metal-rich star clusters in the bulge of M31 (Jablonka et al. 1992). Two fields, around the star clusters G170 and G177, are located SW along the major axis of M31, respectively at 6.1 and 3.2 arcmin from the galaxy nucleus; the third field, around the cluster G198, is located NE along the major axis at 3.7 arcmin (Huchra et al. 1991). Adopting 1 arcmin = 250 pc, as in Rich & Mighell (1995), these angular separations correspond to projected distances of about 1.55, 0.80, and 0.92 kpc, respectively. In Figure 1, we show the location of the Rich & Mighell (1995) fields (squares) and our fields (circles). The region shown is a 13.5 $`\times `$ 13.5 arcmin box centered on the M31 nucleus, and the squares/circles are $`\sim `$ 80 arcsec in size/diameter.
Each cluster was centered on the PC chip and exposed for 4800 seconds in $`I`$ and 5200 seconds in $`V`$, in a series of 1100- and 1300-second subexposures.
After standard recalibration of our data, we used the DAOPHOT/ALLSTAR/ALLFRAME software package for crowded-field photometry (Stetson 1994), along with the PSFs kindly provided to us by the Cepheid-Distance-Scale HST key project. The F555W and F814W instrumental magnitudes were finally converted to Johnson $`V`$ and Cousins $`I`$ magnitudes, using the zero points and color terms given by Hughes et al. (1998). See Jablonka et al. (1999) for further details of our observations and data reduction.
We have not applied any aperture corrections to our photometry. There are simply no sufficiently bright, isolated stars in our very crowded frames to allow us to obtain our own aperture corrections. It is not possible to use aperture corrections from other WFPC2 data, because there are variations of the aperture corrections with time (HST breathing, etc; see Suchkov & Casertano 1997). However, previously published studies based on the same PSFs as those used herein (e.g. Holland et al. 1996; Holland et al. 1997; Hill et al. 1998; Hughes et al. 1998) find that the aperture corrections are rarely larger than $`\sim `$0.05 mag. In addition, our own photometric study of the M31 globular cluster G1 (Jablonka et al. 1999; Meylan et al. 1999) confirms this fact. As a result, we estimate that our assumption of zero aperture correction introduces an uncertainty of $`\pm `$ 0.05 mag into our photometry.
## 3 Results and Discussion
We constructed a ($`I`$,$`V-I`$) CMD for each of our three fields. Between $`\sim `$35,000 and $`\sim `$55,000 stars were detected in the Planetary Camera, depending on the field. The high density of stars superposed upon the bright, unresolved bulge background in these fields prevents the detection of faint stars. This induces incompleteness due to a rather bright detection limit.
Figure 2 shows the CMD of the field around the cluster G170 in the PC chip. The mean photometric errors, representing the frame-to-frame dispersion in the measured magnitudes, are indicated; these correspond to 0.3 mag at $`I=26`$ mag, 0.08 mag at $`I=24`$ mag and 0.05 mag at $`I=22`$ mag. A total of 53,036 stars are detected in this field. A red clump at $`I`$ = 24.5–25.0 mag can be seen in Figure 2, although rather hidden by the high density of points; it is more clearly visible in the luminosity function. Similar red clumps, at the same $`I`$ mag, are observed in the M31 globular cluster G1 (Rich et al. 1996; Jablonka et al. 1999). The CMDs of our other two fields are similar to the one presented in Figure 2: all have well defined red giant branches.
Figure 3 displays the mean loci of the CMDs for our three bulge fields (PC frames). The points were obtained by splitting the data into intervals of 0.65 mag in $`I`$ and 0.4 mag in $`V-I`$. In each interval, the mean value was calculated using the Numerical Recipes function moment, which gives a proper measure of the absolute deviation of a distribution. The resulting mean loci are clearly the signature of very metal-rich stellar populations (cf. Fig. 1 of Bica et al. 1991), as is the red clump seen in Figure 2. Within the errors, the three fields exhibit the same mean stellar population. No gradient of the stellar population is expected, given our photometric precision.
We estimate, from Figure 3, the apparent $`I`$ band magnitude of the RGB tip of the mean stellar population. This yields $`I(tip)=22.39\pm 0.38`$ mag for the G170 field, $`I(tip)=22.31\pm 0.44`$ mag for the G177 field, and $`I(tip)=22.27\pm 0.48`$ mag for the G198 field. The uncertainties in these quantities are the results of combining, in quadrature, the error in the measurement of this quantity along with an estimated error of $`\pm 0.05`$ mag in the aperture corrections and $`\pm 0.05`$ mag in the Holtzman et al. (1995) photometric transformations. Schlegel et al. (1998) find an average reddening of E($`B-V`$) = 0.062 towards M31. Applying E($`V-I`$) = 1.3 E($`B-V`$) and $`A_I`$ = 1.46 E($`V-I`$), and adopting the Cepheid distance modulus of 24.43 mag (Freedman & Madore 1990), we obtain a mean location for the RGB tips in our three fields of $`M_I\sim -2.5\pm 0.4`$ mag; this places the mean metallicity of these fields between those of the Galactic bulge globular clusters NGC 6553/NGC 6528 and Terzan 1, if we follow the ranking of Bica et al. (1991). Interpreted within the context of the abundances published by Barbuy et al. (1998), this suggests that the mean M31 bulge metal abundance in these fields is approximately solar.
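The arithmetic of this paragraph is reproduced below; note that this bare combination of the quoted numbers gives $`M_I\approx -2.2`$, consistent within the quoted $`\pm 0.4`$ uncertainty with the value above:

```python
# Apparent RGB-tip magnitude to absolute magnitude, using the quoted
# reddening, extinction coefficients and Cepheid distance modulus.

EBV = 0.062
EVI = 1.3 * EBV          # E(V-I)
A_I = 1.46 * EVI         # I-band extinction
mu0 = 24.43              # Cepheid distance modulus of M31

for field, I_tip in (("G170", 22.39), ("G177", 22.31), ("G198", 22.27)):
    M_I = I_tip - A_I - mu0
    print(field, round(M_I, 2))   # ~ -2.2 to -2.3 for all three fields
```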
In order to allow for the comparison of our results with the previous work summarized by Rich & Mighell (1995), we have determined the magnitude of the upper envelopes of the RGBs in our color-magnitude diagrams. (According to an anonymous referee, the quantity measured by Rich & Mighell is more closely akin to the upper envelope of the RGB rather than the RGB tip.) We find $`I(env)=21.0\pm 0.2`$ mag for the G170 field, $`I(env)=20.8\pm 0.2`$ mag for the G177 field, and $`I(env)=20.9\pm 0.2`$ mag for the G198 field. Although these values seem to indicate a slight brightening towards the galaxy center, we do not consider this significant, as (i) the values are the same within the error bars, and (ii) crowding becomes severe with decreasing radius and induces an artificial brightening, as will be described below.
In Figure 4, the crosses indicate the previous determinations of the apparent RGB upper envelope as a function of the projected distance (in kpc) from the nucleus of M31. All of these values come from ground-based and pre-refurbishment HST WFPC1 data. Note that the point at 7 kpc is taken from the work of Mould & Kristian (1986). The three filled hexagons, presented with their error bars, are our new results. We see that our PC values are $``$1.5 mag fainter than WFPC1 and ground-based measurements at similar radii. We interpret this difference as due to severe crowding problems in all previous data at these radii. A similar conclusion was reached by Grillmair et al. (1996) in their WFPC2 study of the M32 bulge population. They did not find the very luminous AGB stars present in previous ground-based studies, and experiments with their WFPC2 frames degraded to ground-based spatial resolution showed that many of these ‘stars’ are in fact the result of image blends due to crowding. We are confident that the point at 7 kpc has not been significantly affected by crowding as we note that Mould & Kristian (1986) performed their stellar photometry using two techniques - aperture photometry and PSF fitting. They point out that their results were not significantly affected by the choice of reduction method. We take this as suggestive evidence indicating that crowding was not a problem.
We ran numerous artificial star experiments in order to check the validity of our photometry in all of our fields. We limit this description to the tests performed on the PC and the WF2 frames of the globular cluster G170. The WF2 frame is selected among the three WF frames since, because of its orientation, the mean surface brightness in this frame is closest to that observed in the PC frame; WF2 thus provides a good location to isolate and analyse the effect of a change in spatial resolution (pixel size). We added 210 stars to the PC and 419 stars to the WF2 frames, all of them distributed along the fiducial RGB as shown in Panel (a) of Figure 5. Of these stars, 193 (91%) are recovered in the PC frame and 310 (73%) in the WF2 frame. Panels (c) and (d) of Figure 5 display the resulting photometry. In the PC frame, the input sequence is very well recovered, even though a small, constant error makes it slightly shallower. This is not the case for the WF2 frame, where a clear spurious increase in luminosity is detected, an effect which is larger for fainter magnitudes, as can be seen in panel (b) of Figure 5. This induces, along the y-axis, a shrinking of the color-magnitude diagram with a general shift towards brighter magnitudes. In a forthcoming paper, we will discuss at length the fact that this crowding effect in the WF2 frame, caused by the degraded spatial resolution relative to the PC frame, is equivalent to keeping the PC resolution but observing an increased stellar density. In their artificial star experiments, Rich & Mighell (1995) take as input a narrow stellar luminosity distribution, centered on an already rather bright magnitude. Thus, as the genuine bright stars are hardly affected by the crowding, they indeed recover their stars. However, their experiments do not test whether fainter stars (absent from their input sample) would blend into bright agglomerates in their frames. From our own experiments, it becomes clear that only the PC frames can be reliably used for the study of stellar populations in the M31 bulge.
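The recovery fractions quoted above follow directly from the artificial-star counts; a trivial check, with binomial uncertainties attached, is:

```python
import numpy as np

def completeness(n_added, n_recovered):
    """Recovery fraction and its binomial standard error."""
    f = n_recovered / n_added
    return f, np.sqrt(f * (1.0 - f) / n_added)

print(completeness(210, 193))   # PC frame
print(completeness(419, 310))   # WF2 frame
```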
As described above, the RGB tip becomes fainter as one approaches the nucleus of M31, from a value of I$``$20.5–21 mag at a radius of 7 kpc to I$``$22.3 mag inside of 2 kpc. The work of Da Costa & Armandroff (1990) showed that the first ascent RGB tip of a typical Galactic globular with $`[Fe/H]<-0.7`$ has $`M_I\approx -4.0`$ mag. For an age larger than about 8 Gyr and abundances in the range $`-1.7<[M/H]<-0.7`$, the isochrones of Bertelli et al. (1994) corroborate the assertion of Da Costa & Armandroff (1990) regarding the constancy of the $`I`$-band RGB tip luminosity at $`M_I\approx -4.0`$ mag. Adopting the M31 distance modulus and reddening given above, this translates to I$``$20.5 mag, which is similar to the RGB tip of M31 at 7 kpc. The simplest conclusion we can make based on this apparent consistency is that the M31 stellar population at 7 kpc from the center is similar in metallicity and age to the Galactic globulars.
Earlier, we showed that the M31 bulge at a mean distance of 1.1 kpc from that galaxy’s center has an RGB tip magnitude that is consistent with an old stellar population of approximately solar metal abundance. Thus, our results confirm the steep metallicity increase from halo to bulge stars in M31 and, given the probable uncertainty in our metallicity estimate, are in line with the analysis of the spectral features of the M31 nucleus by Bica et al. (1990).
## 4 Conclusions
Since none of our observations probe the nucleus of M31, there remains the possibility that a rare stellar population could be present in the nucleus as claimed by Rich & Mighell (1995; see also Davidge et al. 1997); however, in our view, the results of our observations and analysis make this possibility unlikely.
The observations presented in this study strongly support the idea that very bright stars were likely the result of spurious detections of blended stars due to crowding in WFPC1 and ground-based images, as also suggested by Renzini (1993, 1998), Depoy et al. (1993), and Grillmair et al. (1996). Only the refurbished HST with the WFPC2 Camera can cope with projected stellar densities as high as 55,000 stars within a 36<sup>′′</sup> $`\times `$ 36<sup>′′</sup> area.
In addition, based upon the absolute $`I`$ magnitude of the first ascent RGB tip, we conclude that the M31 bulge at 7 kpc from the center is consistent with an old intermediate-to-metal-poor (i.e. $`[Fe/H]`$ $`<`$ –0.7) population. This is in stark contrast with the bulge stars inside 2 kpc from the center, which are also old but have a mean metal abundance that is approximately solar.
Ata Sarajedini was supported by the National Aeronautics and Space Administration (NASA) grants HF-01077.01-94A, GO-05907.01-94A, and GO-06477.02-95A from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# Tests of the Electroweak Symmetry Breaking Sector at the Tevatron
(Lecture given at the XXVIth International Meeting on Fundamental Physics, La Toja, Spain, June 1998.)
## 1 Introduction
For many years the standard model has been remarkably successful at explaining and predicting experimental data. However the details of the electroweak symmetry breaking (EWSB) sector of the theory remain largely unknown, and one of the primary goals of present and future colliders is to uncover the mechanism responsible for the symmetry breaking.
In the standard model and many extensions to it, the electroweak symmetry is spontaneously broken by introducing fundamental scalar particles into the theory. These are eventually identified with $`W_L`$, $`Z_L`$, and one or more physical Higgs bosons . In fact the standard model incorporates the simplest implementation, with just one scalar field doublet. This leaves a single observable scalar particle, the Higgs boson, with unknown mass but fixed couplings to other particles. It should be noted that this simple model explains several striking facts, and any complication (even if possible) tends to weaken it .
Alternatively, the electroweak symmetry may be broken dynamically. This is the hallmark of technicolor (TC) theories in which a new strong gauge force (technicolor) and new fermions (technifermions) are introduced. The technicolor force is modelled after QCD, scaled up to the TeV scale, with the technifermions being the analogs of ordinary quarks. Technicolor acts between the technifermions to form bound states (technihadrons). In particular, the technipions include the longitudinal weak bosons, $`W_L`$ and $`Z_L`$, as well as the pseudo-Goldstone bosons of dynamical symmetry breaking. Thus the dynamics of the technifermions assume the role of the scalar Higgs fields in theories with spontaneous symmetry breaking.
Both models coincide in that they can be extended to generate all masses in the theory, notably the fermion masses. In the first case, explicit Yukawa couplings are used. In the second, the first TC model was augmented by Extended Technicolor (ETC) . In fact, it is attractive to assume that a connection between the physics of the EWSB and the physics of flavour exists even more generally than the two previous examples. The important point is that both models now predict that the scalars (fundamental or composite) will have couplings to ordinary fermions that are proportional to mass, i.e., these scalars will decay predominantly in channels involving heavy quarks and heavy leptons. The experimental consequence of this is to provide a general frame to look for the particles involved in the EWSB, namely, to look for resonances decaying into heavy quarks and/or heavy leptons produced in pairs or in association with a $`W`$ or $`Z`$.
In the following we describe three experimental searches recently published by the CDF Collaboration using $`100`$ pb<sup>-1</sup> of $`p\overline{p}`$ collisions at the Tevatron :
* Search for $`X\to b\overline{b}`$ in association with $`W\to \mathrm{}\nu `$.
* Search for $`X\to b\overline{b}`$ in association with $`V\to q\overline{q}`$, where $`V=W,Z`$.
* Search for events including a $`\tau ^+\tau ^{}`$ pair and two extra jets.
No signal beyond standard model expectations has been observed in the current data sample. So in the rest of the article we will investigate the implications of this null result on the models cited at the beginning of the section, emphasizing the first-hand information on the performance of the different experimental techniques and direct evaluation of involved backgrounds. In particular we use searches (1) and (2) to constrain the production of a light standard model Higgs, and search (3) to investigate technicolor models containing a technifamily.
There are other analyses not discussed here for lack of space that can be used to find information about the EWSB sector: top physics, charged Higgs searches, etc. The reader will find information in references .
## 2 Light Standard Model Higgs
The possible range for the mass of the standard model Higgs extends from a lower bound of about 88 GeV$`/c^2`$, set by the LEP experiments, up to $`𝒪`$(1) TeV. The present analysis has to be restricted to light masses for two reasons:
* The production cross sections at the Tevatron are small and decrease rapidly as a function of $`M_{H^0}`$. In $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV, the Higgs production mechanism with the most promising detection possibilities is $`p\overline{p}\to V+H^0`$, with $`V=W,Z`$. In the framework of the standard model, the production cross section in this channel is 1.3 to 0.11 pb for Higgs masses between 70 and 140 GeV/$`c^2`$.
* The technique used is to look for resonances in the $`b\overline{b}`$ channel. $`H^0b\overline{b}`$ is the dominant decay mode of the Higgs boson only up to $`M_{H^0}`$ 130 GeV/$`c^2`$.
However there are two important motivations to emphasize the region of light Higgs masses:
* Precision electroweak experiments suggest that the Higgs boson mass may lie at the lower end of the open range .
* A similar symmetry breaking mechanism occurs in the minimal supersymmetric extension of the standard model, where several observable scalar states are predicted, the lightest of which is expected to decay predominantly into $`b\overline{b}`$ states and to have a mass below 135 GeV$`/c^2`$ .
The cross sections expected in the standard model are out of the scope of the present analysis, using $`100`$ pb<sup>-1</sup> of $`p\overline{p}`$ collisions. We therefore report on a general search for a Higgs scalar produced in association with a vector boson with unknown cross section $`\sigma _{VH^0}`$. We look for $`H^0`$ decays to a $`b\overline{b}`$ pair with unknown branching ratio $`\beta `$, and for two possible decays of the vector boson: ($`i`$) $`W\mathrm{}\nu `$ with $`\mathrm{}=e,\mu `$ and ($`ii`$) $`Vq\overline{q}`$. Finally, the limits obtained in both channels are combined.
### 2.1 Leptonic Analysis
The experimental signature considered is $`WH^0`$ with $`We\nu `$ or $`\mu \nu `$, and $`H^0b\overline{b}`$, giving final states with one high-$`p_T`$ lepton, large missing transverse energy ($`\overline{)}E_T`$) due to the undetected neutrino, and two $`b`$ jets . The ability to tag $`b`$ jets with high efficiency and a low mistag rate is vital for searching for the decay of $`H^0b\overline{b}`$. We use the secondary vertex (SECVTX) and soft-lepton (SLT) $`b`$-tagging algorithms developed for the top quark discovery .
A three-level trigger selects events that contain an electron or muon for this analysis. The event selection starts with the requirement of a primary lepton, either an isolated electron with $`E_T>20`$ GeV or an isolated muon with $`p_T>20`$ GeV$`/c`$, in the central region $`|\eta |<1.0`$. A $`W`$ boson sample is selected by requiring $`\overline{)}E_T>20`$ GeV. Events which contain a second, same flavour lepton with $`p_T>10`$ GeV$`/c`$ are removed as possible $`Z`$ boson candidates if the reconstructed $`ee`$ or $`\mu \mu `$ invariant mass is between 75 and 105 GeV$`/c^2`$. The events must also not be accepted by the CDF top dilepton analysis . To further reduce the dilepton backgrounds, we reject events with an additional high-$`p_T`$ isolated track with opposite charge to that of the primary lepton. The remaining events are classified according to jet multiplicity. Jets are defined as localized energy depositions in the calorimeters and are reconstructed using an iterative clustering algorithm with a fixed cone of radius $`\mathrm{\Delta }R=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}=0.4`$ in $`\eta \varphi `$ space . Jet energies are then corrected for energy losses in uninstrumented detector regions, energy falling outside the clustering cone, contributions from underlying event and multiple interactions, and calorimeter nonlinearities.
The $`W+2`$ jet bin is expected to contain most of the signal, while the other bins are used to check the background calculation. In order to enhance the signal in the $`W+2`$ jet bin, we require that one or both of the jets be identified (’tagged’) as coming from a $`b`$ hadron. We require at least one jet to be tagged by the SECVTX algorithm, which has a higher signal-to-noise ratio than the SLT algorithm. For the single-tag analysis, the other jet must not be tagged, while for the double-tag analysis the second jet must be tagged by either SECVTX or SLT.
The SECVTX tagging algorithm begins by searching for secondary vertices that contain three or more displaced tracks. If none are found, the algorithm searches for two-track vertices using more stringent track criteria. A jet is tagged if the secondary vertex transverse displacement from the primary one exceeds three times its uncertainty.
The SLT tagging algorithm identifies electrons and muons from semileptonic $`b`$ decays by matching tracks with $`p_T>2`$ GeV$`/c`$ with clusters of electromagnetic energy in the calorimeters or tracks in the muon chambers. To gain additional background rejection, we require the track to lie within a cone $`\mathrm{\Delta }R<0.4`$ of the axis of a jet and to be displaced in the transverse plane from the primary vertex by at least two standard deviations in the jet direction. This latter requirement reduces lepton misidentifications by a factor of five while retaining 65% of the efficiency.
The total acceptance is calculated as a product of the kinematic and geometric acceptance, trigger, lepton identification, and $`b`$-tagging efficiencies, and the $`W`$ leptonic branching ratios. The acceptance for identifying $`WH^0`$ is calculated from data and a standard model simulation of Higgs production, where the Higgs is forced to decay into $`b\overline{b}`$ with 100% branching ratio. The acceptance increases monotonically from $`0.53\pm 0.13\%`$ ($`0.17\pm 0.04\%`$) to $`1.1\pm 0.3\%`$ ($`0.42\pm 0.11\%`$) for single (double) tagging as $`M_{H^0}`$ increases from 70 to 120 GeV$`/c^2`$. A 25% systematic uncertainty in the acceptance comes from uncertainties in the modeling of initial and final state radiation, jet energy, and $`b`$-tag, trigger, and lepton identification efficiencies.
Background events come predominantly from the direct production of $`W`$ bosons in association with heavy quarks (estimated using the HERWIG Monte Carlo program), mistags (from generic jet data), and $`t\overline{t}`$ production (normalized to the CDF measured cross section $`\sigma _{t\overline{t}}=7.6_{-1.5}^{+1.8}`$ pb). Other small backgrounds are estimated from a combination of Monte Carlo simulations and data.
The numbers of observed single-tagged and double-tagged events and the corresponding background estimates are shown in Table 1. By construction, data and expectations are in reasonably good agreement in the $`W+3`$ jet bins, which, along with other $`t\overline{t}`$ decay channels, were used to measure the $`t\overline{t}`$ production cross section. The number of $`b`$-tags in the $`W+2`$ jet bin can be compared to the background calculation. This bin shows a small excess of events corresponding to one standard deviation.
To increase the sensitivity of the search we look for a resonant mass peak in the reconstructed two-jet invariant mass distribution using the 4-momenta of the jets as measured by the calorimeter. The expected two-jet invariant mass shape for $`WH^0`$ production is shown in Figure 1 (left) for $`M_{H^0}=110`$ GeV$`/c^2`$. The distributions for the data are shown in Figure 1 (right), along with the background expectation.
We set an upper limit on the production cross section times branching ratio of $`p\overline{p}\to WH^0`$ as a function of $`M_{H^0}`$, by fitting the number of events in the $`W+2`$ jet samples and the shape of the two-jet mass distributions. The fit yields $`\sigma _{WH^0}\beta (H^0\to b\overline{b})`$ in the range from $`0.2_{-0.0}^{+4.7}`$ to $`5.7_{-3.0}^{+4.2}`$ pb for a new particle of mass between 70 and 120 GeV$`/c^2`$, statistically compatible with no signal. From the 95% C.L. limits on $`\sigma (p\overline{p}\to WH^0)`$, the corresponding limits on $`VH^0`$ production were calculated. We used the program PYTHIA to compute the standard model prediction for the ratio $`\sigma (ZH^0)/\sigma (WH^0)`$. The leptonic analysis efficiency for $`ZH^0`$ events relative to that for $`WH^0`$ events was estimated to be $`(10\pm 2)`$%. The limits are summarized in Table 2.
### 2.2 Hadronic Analysis
The experimental signature considered is four jets in the final state, with two of them identified as $`b`$ jets . The hadronic channel described here has the advantage of a larger branching ratio, and is sensitive to both $`WH^0`$ and $`ZH^0`$ production ($`\sigma (ZH^0)/\sigma (WH^0)0.6`$), but suffers from a larger QCD background.
The data sample was recorded with a trigger which requires four or more clusters of contiguous calorimeter towers, each with transverse energy $`E_T\geq 15`$ GeV, and a total transverse energy $`E_T\geq 125`$ GeV. Offline, events are required to have four or more jets with uncorrected $`E_T>15`$ GeV and $`|\eta |<2.1`$. After this initial selection the sample contains $`207,604`$ events. In addition, we require that at least two among the four highest-$`E_T`$ jets in the event be tagged by the SECVTX algorithm.
There are 764 events with four or more jets and two or more $`b`$-tags. In these events, only the four highest-$`E_T`$ jets are considered for the mass reconstruction: the two highest-$`E_T`$ $`b`$-tagged jets are assigned to the Higgs boson, and the other two to the vector boson. The $`b\overline{b}`$ invariant mass distribution in signal events contains a Gaussian core with a sigma of $`0.14\times M_{H^0}`$. The tails of the distribution are dominated by the cases (25-30%) where the jet assignment in the mass reconstruction is incorrect. In most of these cases, one of the jets assigned to the Higgs is a heavy quark jet from the decay of the $`V`$ boson.
The challenge of this analysis is to understand the sample composition. The main source of background events is QCD heavy flavor production. The heavy flavor content of QCD hard processes has been modelled with PYTHIA. We generated all QCD jet production channels and retained the events that contained a heavy quark produced either in the hard scattering or in the associated radiation process. Events with a heavy quark are conventionally classified in three groups: direct production, gluon splitting, and flavor excitation. Direct production events are characterized by a high value of the invariant mass, $`M_{b\overline{b}}`$, and a low value of the transverse momentum of the $`b\overline{b}`$ system, $`p_T(b\overline{b})`$. The same is true for flavor excitation events. The kinematics of final state gluon splitting events favor a relatively smaller invariant mass value and a large $`p_T(b\overline{b})`$, since both jets tend to be emitted along the same direction. In the ($`M_{b\overline{b}}`$, $`p_T(b\overline{b})`$) plane, the Higgs signal shows a greater tendency towards large $`M_{b\overline{b}}`$ and $`p_T(b\overline{b})`$ values. A cut on $`p_T(b\overline{b})\geq 50`$ GeV/$`c`$ is $`80\%`$ efficient for the signal and strongly discriminates against direct production and flavor excitation of heavy quarks. After the $`p_T(b\overline{b})`$ requirement is applied to the data, 589 events remain.
Other backgrounds are $`t\overline{t}`$ production, $`Z`$ + jets events with $`Z\to b\overline{b}/c\overline{c}`$, and fake double-tags. The first two are estimated from Monte Carlo and the last one from data. Using the CDF measured $`t\overline{t}`$ production cross section and a top quark mass of $`M_t=175`$ GeV/$`c^2`$, HERWIG predicts $`26\pm 7`$ $`t\overline{t}`$ events in the data, after trigger, kinematic and $`b`$-tag requirements. The same generator predicts $`17\pm 4`$ $`Z`$ + jets background events. Fake double-tags are defined as events in which at least one of the two tagged jets contains a false secondary vertex in a light quark or gluon jet. Fake tag probabilities are parameterized by measuring, in several inclusive jet data samples, the proportion of jets in which a secondary vertex is reconstructed on the wrong side of the primary vertex with respect to the jet direction. The current data set is estimated to contain $`89\pm 11`$ fake double-tag events. Finally, other minor sources of background account for less than 1% of the total number of events, have a broad invariant mass distribution, and are neglected in the final fit.
The total signal detection efficiency is defined as the product of the trigger efficiency, the kinematical and geometrical acceptances, the double $`b`$-tagging efficiency, the $`p_T(b\overline{b})`$ cut efficiency, and the $`V`$ hadronic branching fractions. The total efficiency increases linearly from $`0.6\pm 0.1\%`$ to $`2.2\pm 0.6\%`$ for Higgs masses ranging from 70 GeV/$`c^2`$ to 140 GeV/$`c^2`$.
The shape of the observed $`b`$-tagged dijet invariant mass distribution is fit, using a binned maximum-likelihood method, to a combination of signal, fake double-tag events, and QCD, $`t\overline{t}`$ and $`Z`$ + jets backgrounds. The QCD and signal normalizations are left free in the fit while the normalizations of the $`t\overline{t}`$, $`Z`$ + jets and fakes are constrained by Gaussian functions to their expected values and uncertainties.
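A minimal sketch of such a fit (flat placeholder templates and toy data; the constrained yields and uncertainties are the ones quoted above) could look as follows:

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, data, sig, qcd, constrained):
    """Binned negative log-likelihood with Gaussian-constrained backgrounds."""
    n_sig, n_qcd = params[:2]
    mu = n_sig * sig + n_qcd * qcd
    penalty = 0.0
    for (tmpl, n0, dn), n in zip(constrained, params[2:]):
        mu = mu + n * tmpl
        penalty += 0.5 * ((n - n0) / dn) ** 2        # Gaussian constraint term
    mu = np.clip(mu, 1e-9, None)
    return np.sum(mu - data * np.log(mu)) + penalty  # Poisson terms per bin

nbins = 30
flat = np.ones(nbins) / nbins                        # placeholder template shape
data = np.random.poisson(20.0, nbins).astype(float)  # toy dijet mass spectrum
constrained = [(flat, 26.0, 7.0),                    # t-tbar
               (flat, 17.0, 4.0),                    # Z + jets
               (flat, 89.0, 11.0)]                   # fake double-tags
x0 = [10.0, 400.0, 26.0, 17.0, 89.0]                 # signal and QCD left free
res = minimize(nll, x0, args=(data, flat, flat, constrained),
               method="Nelder-Mead")
print(res.x[:2])                                     # fitted signal and QCD yields
```

In the real analysis the signal and background shapes come, of course, from the simulated templates described in the text rather than from flat placeholders.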
The fit yields $`\sigma _{VH^0}\beta =44\pm 42`$ pb for $`M_{H^0}=70`$ GeV/$`c^2`$, statistically compatible with zero signal. For larger masses, zero signal contribution is preferred. Table 2 shows the result of the fits as a function of the Higgs mass. Figure 2 (left) shows the $`b`$-tagged dijet invariant mass distribution for the data compared to the results of the fit for $`M_{H^0}80`$ GeV/$`c^2`$.
Since the observed distribution is consistent with standard model background expectations, we place limits on $`p\overline{p}VH^0`$ production. Systematic uncertainties on the 95% C.L. limits arise from luminosity, jet energy scale, double $`b`$-tagging efficiencies, QCD radiation, limited Monte Carlo statistics, and background normalizations and shapes. The total systematic uncertainty is in the range $`26\%30\%`$. The 95% C.L. limits are summarized in Table 2 and Figure 2 (right). The resulting bounds fall rapidly from 117 pb at $`M_{H^0}=70`$ GeV/$`c^2`$ to values between 15 and 20 pb for $`M_{H^0}>105`$ GeV/$`c^2`$.
### 2.3 Combined results
To combine the two results presented above, the data from both channels were then fitted simultaneously . Correlations between systematic uncertainties due to luminosity, QCD radiation, and $`b`$-tagging efficiency were taken into account. All other systematic uncertainties were considered uncorrelated. The 95% C.L. limits range from 16 to 24 pb and are shown in Table 2 and Figure 2 (right).
## 3 Technicolor
Now we turn to the second mechanism of EWSB discussed in the Introduction . Particularly interesting from the present experimental point of view are TC models containing a technifamily, i.e. a set of technifermions with the same structure and quantum numbers of a complete standard model generation of quarks and leptons, and carrying an additional TC quantum number. By convention, technifermions which are color-triplets of ordinary QCD are called techniquarks, and color-singlet technifermions are called technileptons. The particle spectrum of these models includes color-singlet, -triplet and -octet technipions. The technipions ($`\pi _T`$) decay via ETC interactions. Since these are also responsible for the fermion masses, technipions are expected to have Higgs-boson-like couplings to ordinary fermions, i.e. to decay preferentially to third-generation quarks and leptons. In particular, the color-triplet technipions are an example of scalar third-generation leptoquarks ($`\pi _{LQ}`$). In this section, we use the results of a search for third-generation leptoquarks by CDF, in order to explore TC models containing a technifamily. Other experimental constraints on these models come from precision electroweak measurements at LEP , and from measurements of the $`bs\gamma `$ decay rate .
Let’s consider technicolor models containing a family of color-singlet technileptons and color-triplet techniquarks. In these models, there is a color-octet vector resonance, called technirho ($`\rho _T`$), with the quantum numbers of the gluon. Leptoquarks are assumed to be pair produced via gluon-gluon fusion and $`q\overline{q}`$ annihilation. In $`q\overline{q}`$ and $`gg`$ collisions, the $`\rho _T`$ couples to the gluon propagator enhancing s-channel reactions (Fig. 3), analogously to the vector-meson-dominance description of the process $`e^+e^{}\pi ^+\pi ^{}`$ . Two decay modes may exist for the technirho : $`\rho _Tq\overline{q},gg`$ and $`\rho _T\pi _T\overline{\pi }_T`$. If the $`\rho _T`$ mass is less than twice the $`\pi _T`$ mass, only the $`q\overline{q},gg`$ decay mode is possible, resulting in resonant dijet production. A search result for the dijet signal of $`\rho _T`$ has already been reported by CDF. The CDF-measured dijet mass spectrum excludes $`\rho _T`$ with masses in the range $`260<M(\rho _T)<480`$ GeV$`/c^2`$ at the 95% C.L. . If the $`\rho _T`$ mass is larger than twice the $`\pi _T`$ mass, the $`\rho _T`$ decays preferentially into $`\pi _T`$ pairs.
The technipion spectrum of the technifamily model was estimated in . It contains color-singlet, -triplet and -octet ($`\pi _8`$) technipions. The octets are heavier than the triplets, and these are heavier than the singlets. We make the simplifying assumption that there is no mass splitting among the different octet and triplet technipions. We consider the class of color-triplet technipions decaying via $`\pi _{LQ}\overline{b}\tau ^{}`$ ($`\overline{\pi }_{LQ}b\tau ^+`$) with branching fraction $`\beta `$.
The leading-order leptoquark pair production cross section depends only on the technirho mass ($`M(\rho _T)`$), the leptoquark mass ($`M(\pi _{LQ})`$), and the technirho width ($`\mathrm{\Gamma }(\rho _T)`$). $`M(\pi _{LQ})`$ and $`M(\rho _T)`$ are treated as independent free parameters. $`\mathrm{\Gamma }(\rho _T)`$ can be calculated as a function of four more basic quantities, $`\mathrm{\Gamma }(\rho _T)=\mathrm{\Gamma }(M(\rho _T),M(\pi _{LQ}),\mathrm{\Delta }M,N_{TC})`$, where $`\mathrm{\Delta }M=M(\pi _8)M(\pi _{LQ})`$, and $`N_{TC}`$ is the number of technicolors. We consider $`M(\rho _T)`$, $`M(\pi _{LQ})`$, $`\mathrm{\Delta }M`$, and $`N_{TC}`$ as the four continuous parameters of the theory. We set limits in the $`M(\pi _{LQ})M(\rho _T)`$ plane. We probe the dependence of the production cross section on $`\mathrm{\Gamma }(\rho _T)`$ by fixing $`N_{TC}=4`$, while allowing $`\mathrm{\Delta }M`$ to take one expected and two limiting values. ETC and QCD corrections to $`M(\pi _8)`$ and $`M(\pi _{LQ})`$ are responsible for $`\mathrm{\Delta }M`$, analogously to the QED corrections to $`M(\pi ^o)`$ and $`M(\pi ^\pm )`$. $`\mathrm{\Delta }M`$ is expected to be around 50 GeV$`/c^2`$ . We take $`\mathrm{\Delta }M=0`$ and $`\mathrm{\Delta }M=\mathrm{}`$ as two extreme values. The resulting variation in $`\mathrm{\Gamma }(\rho _T)`$ could also have been obtained changing $`N_{TC}`$ by a factor of 4, for a fixed $`\mathrm{\Delta }M=50`$ GeV$`/c^2`$.
The experimental signature considered is $`\tau ^+\tau ^{}`$ plus two jets in the final state, in the case where one $`\tau `$ decays leptonically and the other decays hadronically. The analysis selects a 110 pb<sup>-1</sup> data set containing an isolated electron or muon in the region $`|\eta |<1`$ with $`p_T>20`$ GeV$`/c`$, and an isolated, highly-collimated hadronic jet consistent with a hadronic tau decay. Hadronic $`\tau `$ candidates ($`\tau `$-jets) are selected from jets that have an uncorrected total transverse energy of $`E_T>15`$ GeV in the region $`|\eta |<1`$. The associated charged particles with $`p_T>`$ 1 GeV/c in a 10<sup>o</sup> cone around the jet direction must satisfy the following requirements: ($`i`$) the $`\tau `$-jet must have one or three charged particles; ($`ii`$) if there are three, the scalar sum $`p_T`$ must exceed 20 GeV/c and the invariant mass must be smaller than 2 GeV/c<sup>2</sup>; and ($`iii`$) the leading charged particle must have $`p_T>`$ 10 GeV/c and must point to an instrumented region of the calorimeter. The efficiency of the $`\tau `$-jet identification criteria grows from 32% for $`\tau `$-jets in the range $`15<E_T<20`$ GeV to a plateau value of 59% for $`E_T>40`$ GeV. Isolated $`\tau `$-jets must have no charged particles with $`p_T>`$ 1 GeV/c in the annulus between 10<sup>o</sup> and 30<sup>o</sup> around the jet axis. Events where the high-$`p_T`$ lepton is consistent with originating from a $`Z\to ee`$ or $`Z\to \mu \mu `$ decay are removed. In addition, the analysis uses the missing transverse energy characteristic of neutrinos from tau decays. The requirement $`\mathrm{\Delta }\mathrm{\Phi }<50^\mathrm{o}`$, where $`\mathrm{\Delta }\mathrm{\Phi }`$ is the azimuthal separation between the directions of the missing transverse energy $`\overline{)}E_T`$ and the lepton, distinguishes $`\tau ^+\tau ^{}`$ events from backgrounds such as $`W`$ + jets. Figure 4 (left) shows the jet multiplicity in $`\tau ^+\tau ^{}`$ candidate events. The agreement with the standard model background prediction is excellent. Finally, two or more jets with $`E_T>`$ 10 GeV and $`|\eta |<`$ 4.2, assumed to originate from $`b`$ quark hadronization, are required. One leptoquark pair candidate event survives these selection criteria. The observed yield is consistent with the $`2.4_{-0.6}^{+1.2}`$ expected background events from standard model processes, dominated by $`Z\to \tau \tau +`$ jets production ($`2.1\pm 0.6`$) with the remainder from diboson and $`t\overline{t}`$ production.
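Schematically, the tau-jet criteria listed above amount to the following selection (hypothetical input arrays; the calorimeter-fiducial requirement on the leading track is omitted):

```python
import numpy as np

def is_tau_jet(jet_et, jet_eta, trk_pt, trk_angle, mass3=None):
    """Hadronic-tau candidate selection.

    trk_pt, trk_angle : pT (GeV/c) and angle to the jet axis (degrees)
                        of charged tracks with pT > 1 GeV/c.
    mass3             : invariant mass (GeV/c^2) of a 3-prong system.
    """
    if jet_et <= 15.0 or abs(jet_eta) >= 1.0:
        return False
    cone = trk_pt[trk_angle < 10.0]
    annulus = trk_pt[(trk_angle >= 10.0) & (trk_angle < 30.0)]
    if annulus.size > 0:                # isolation: empty 10-30 degree annulus
        return False
    if cone.size not in (1, 3):         # one- or three-prong topology
        return False
    if cone.size == 3 and (cone.sum() <= 20.0 or mass3 is None or mass3 >= 2.0):
        return False
    return bool(cone.max() > 10.0)      # leading track pT > 10 GeV/c
```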
The detection efficiencies for the signal are determined using a full leading-order matrix element calculation for technipion pair production (continuum, resonant, and interference terms are included) and embedded in the PYTHIA Monte Carlo program to model the full $`p\overline{p}`$ event structure. The generated events are passed through a detector simulation program and subjected to the same search requirements as the data. The total efficiency increases from 0.3% for $`M(\rho _T)=`$ 200 GeV$`/c^2`$ and $`M(\pi _{LQ})=`$ 100 GeV$`/c^2`$, to 1.8% for $`M(\rho _T)=`$ 700 GeV$`/c^2`$ and $`M(\pi _{LQ})=`$ 300 GeV$`/c^2`$. The systematic errors in the efficiencies were estimated as described in , including uncertainties in the modelling of gluon radiation, in the calorimeter energy scale, in the dependence on renormalization scales, and in the luminosity measurement. They range from 15% for $`M(\rho _T)=`$ 200 GeV$`/c^2`$ and $`M(\pi _{LQ})=`$ 100 GeV$`/c^2`$, to 10% for $`M(\pi _{LQ})\geq `$ 125 GeV$`/c^2`$.
We place limits on the leptoquark pair production cross section times branching ratio squared within the framework of the technicolor model described above. Table 3 lists the leptoquark 95% confidence level upper limits on the production cross section times branching ratio squared as a function of $`M(\pi _{LQ})`$ and $`M(\rho _T)`$, for $`\mathrm{\Delta }M=50`$ GeV$`/c^2`$. These numbers differ by at most 1 pb from the corresponding limits for $`\mathrm{\Delta }M=0`$ and $`\mathrm{\Delta }M=\mathrm{}`$ when $`M(\pi _{LQ})<175`$ GeV$`/c^2`$. For larger values of $`M(\pi _{LQ})`$ the differences are negligible. Comparing to the theoretical expectations for $`\sigma (p\overline{p}\to \pi _{LQ}\overline{\pi }_{LQ})\beta ^2`$, we place bounds in the $`M(\pi _{LQ})`$–$`M(\rho _T)`$ plane. Figure 4 (right) shows the 95% C.L. mass exclusion regions. The upper part of the plot corresponds to the kinematically forbidden region where $`M(\rho _T)<2M(\pi _{LQ})`$. The bottom region is the exclusion area from the continuum leptoquark analysis, $`M(\pi _{LQ})\leq 99`$ GeV$`/c^2`$. The three shaded areas from left to right correspond to technipion mass splitting values of $`\mathrm{\Delta }M=0`$, 50 GeV$`/c^2`$ and $`\mathrm{}`$, respectively. Although more information is presented in Figure 4, it is useful to summarize our technirho excluded region using a single number. For $`\mathrm{\Delta }M=0`$ (50, $`\mathrm{}`$) and $`M(\pi _{LQ})<M(\rho _T)/2`$, we exclude color octet technirhos with mass less than 465 (513, 622) GeV$`/c^2`$ at 95% confidence level.
## 4 Discussion and Prospects
We have described several attempts to explore the phenomenology of the EWSB sector at the Tevatron. A first kind of analyses tried to reconstruct $`b\overline{b}`$ resonances. From the experimental point of view, this involved the study and development of a number of techniques of broad interest for the hadron collider experiments of the next decade. These include jet spectroscopy, $`b`$-tagging algorithms, and quantitative evaluation of challenging new backgrounds as QCD production of heavy quarks in a multijet environment.
No signal has been found yet, and the results have been used to constrain the production of a light standard model Higgs. The Higgs searches have not yet provided any Higgs mass bound. The sensitivity of the present search is limited by statistics to a cross section approximately two orders of magnitude larger than the predicted cross section for standard model Higgs production. It should also be noted that, because these limits were derived from a shape fit, they only apply to a very restricted region of parameter space in the minimal supersymmetric extension of the standard model.
For the next Tevatron run CDF hopes for an approximately twenty-fold increase in the total integrated luminosity and a factor of two improvement in the double $`b`$-tagging efficiency. D0 will be as good as CDF in this respect. However, this is still insufficient to reach, say, a 120 GeV$`/c^2`$ Higgs mass, unless the total detection efficiency is improved by one order of magnitude. The viability of this improvement is at present under study. Plans include the installation of dedicated Higgs triggers, systematic inclusion of all relevant channels (notably $`ZH^0`$ production, with $`Z\to \mathrm{}\mathrm{}`$ and $`Z\to \nu \nu `$), finer mass resolution, and additional, more efficient selection algorithms (e.g., neural networks).
Another analysis described a virtually background-free search for resonances decaying into $`\tau `$ leptons. The four objects appearing in the selected events can be arranged in two ways. Looking for $`\tau `$-jet leptoquark resonances, the analysis can be used to directly constrain TC models including one technifamily. The expected factor $`\times `$20 in luminosity will help to push the technirho mass limits closer to the TeV region. Interpreted as containing a $`\tau ^+\tau ^{}`$ resonance, the same events can be used to effectively test the large tan($`\beta `$) region of the SUSY extension of the standard model.
In conclusion, we are just at the very beginning of this kind of physics at hadron colliders. Most importantly we need data to look at, and for that reason the CDF and D0 Collaborations are looking forward to Run 2 with great anticipation.
## Acknowledgments
I would like to thank Prof. B. Adeva and collaborators for the stimulating atmosphere of the meeting and warm hospitality.
## References
# Unusual radio variability in the BL Lac object 0235+164
## 1 Introduction
The radio source AO 0235+164 was identified by Spinrad & Smith (spinrad75 (1975)) as a BL Lac object due to its almost featureless optical spectrum at the time of the observation, and due to its pronounced variability. Long-term flux density monitoring in the radio and optical regimes has revealed strong variations and repeated outbursts with large amplitudes and timescales ranging from years down to weeks (e.g. Chu et al. chu96 (1996), O’Dell et al. odell88 (1988), Teräsranta et al. teraesranta92 (1992), Schramm et al. schramm94 (1994), Webb et al. webb88 (1988), this paper, Fig. 2). Furthermore, intraday variability in the radio (Quirrenbach et al. quirrenbach92 (1992), Romero et al. romero97 (1997)), in the IR (Takalo et al. takalo92 (1992)), and in the optical regime (Heidt & Wagner heidt96 (1996), Rabbette et al. rabbette96 (1996)) has also been observed in this object. In the high energy regime, 0235+164 was detected with EGRET on board of the CGRO (v. Montigny et al. montigny95 (1995)), showing variability between the individual observations. Madejski et al. (madejski96 (1996)) report variability by a factor of 2 in the soft X-rays during a ROSAT PSPC observation in 1993. VLBI observations (e.g. Shen et al. shen97 (1997), Chu et al. chu96 (1996), Bååth baath84 (1984), Jones et al. jones84 (1984)) reveal a very compact structure and superluminal motion with extremely high apparent velocities (perhaps up to $`\beta _{\mathrm{app}}\sim 30`$).
Three distinct redshifts have been measured towards 0235+164 (e.g. Cohen et al. cohen87 (1987)). Whereas the emission lines at $`z=0.940`$ have been attributed to the object itself, two additional systems are present in absorption ($`z=0.851`$) and in emission and absorption ($`z=0.524`$). Smith et al. (smith77 (1977)) observed a faint object located about $`2^{\prime \prime }`$ south of 0235+164, and measured narrow emission lines at a redshift of $`z=0.524`$. Continued studies of the field of 0235+164 have revealed a number of faint galaxies, mostly at a redshift of $`z=0.524`$, including an object located 1.3$`^{\prime \prime }`$ to the east and 0.5$`^{\prime \prime }`$ to the south (e.g. Stickel et al. stickel88 (1988), Yanny et al. yanny89 (1989)). Recently, Nilsson et al. (nilsson96 (1996)) investigated 0235+164 during a faint state and found prominent hydrogen lines at the object redshift of $`z=0.940`$. They note that 0235+164 – at least when in a faint state – shows the spectral characteristics of an HPQ. Furthermore, through HST observations of 0235+164 and its surrounding field, Burbidge et al. (burbidge96 (1996)) discovered about 30 faint objects around 0235+164 and broad QSO-absorption lines in the southern companion, indicating that the latter is an AGN-type object. Due to the presence of several foreground objects, gravitational microlensing might play a role in the characteristics of the variability in 0235+164, as was suggested by Abraham et al. (abraham93 (1993)).
In this paper, we further investigate the radio variability of 0235+164, and attempt to determine the most likely physical mechanisms behind the observed flux density variations. The plan of the paper is as follows. In Section 2 we describe the observations and the data reduction; subsequently we analyze the lightcurves and point out some of their special properties. In Section 3 we explore different scenarios which could explain the variability: we discuss relativistic shocks, a precessing beam model, free-free-absorption, interstellar scattering, and gravitational microlensing. Finally, in Section 4 we conclude with a summary of the failures and successes of these models.
Throughout the paper we assume a cosmological interpretation of the redshift; we use $`H_0=100h`$ km/(s Mpc) and $`q_0=0.5`$, which gives for the redshift of 0235+164 a luminosity distance of 3280 $`h^{-1}`$ Mpc; 1 mas corresponds to 4.2 $`h^{-1}`$ pc. The radio spectral index is defined by $`S_\nu \propto \nu ^\alpha `$.
## 2 Observations and data reduction
### 2.1 Radio observations
From Oct 2 to Oct 23, 1992, we observed 0235+164 with a five-antenna subarray of the VLA (the Very Large Array, New Mexico, is operated by Associated Universities, Inc., under contract with the National Science Foundation) during and after a reconfiguration of the array from D to A. The aim of these observations was to search for short-timescale variations in several sources. The complete data set will be presented elsewhere (Kraus et al., in preparation). In parallel, optical observations were performed in the R-band (see Section 2.2). Data for 0235+164 were taken at 1.49, 4.86, and 8.44 GHz ($`\lambda =`$ 20, 6, 3.6 cm) every two hours around transit, i.e., six times per day. These three sets of receivers have the lowest system temperatures and highest aperture efficiencies of those available at the VLA (see Crane & Napier crane89 (1989)); and data in these bands are less susceptible to problems with poor tropospheric phase stability than those at higher frequencies. In addition, intraday variability of compact flat-spectrum radio sources appeared most markedly in this frequency range in previous observations (Quirrenbach et al. quirrenbach92 (1992)). During the first week, the antennae included in our subarray were changed repeatedly due to the ongoing reconfiguration; however, an attempt was made to maintain an approximately constant set of baselines. Since 0235+164 and the calibrator sources used are extremely compact (cf. VLA calibrator list), the effect of the ongoing reconfiguration on the measurements is negligible.
After correlation and elimination of erroneous data intervals, we performed phase calibration first. Subsequently, a (one-day) mean amplitude gain factor was derived using non-variable sources such as 1311+678, which have been linked to an absolute flux density scale (Baars et al. 1977, Ott et al. 1994) by frequent observations of 3C 286 and 3C 48. After a second pass of editing spurious sections of the data, the visibilities of each scan were incoherently averaged over time, baselines, polarization, and IFs. Because of the point-like structure of the sources, the mean source visibility is proportional to the flux density. Eventually, systematic elevation and time-dependent effects in the lightcurves were removed, using polynomial corrections derived from observations of the calibrator sources 0836+710 and 1311+678.
The errors are composed of the statistical errors from the averaging and a contribution from the residual fluctuations of the non-variable sources 3C 286, 1311+678 and 0836+710. The level of these fluctuations was estimated from a running standard deviation of the calibrator measurements over a two-day period. Over the full three-week period, the standard deviations were found to be 0.5, 0.5, and 0.7 % of the mean value at 1.5, 4.9, and 8.4 GHz respectively (with no significant difference between the three non-variable sources).
The resulting lightcurves for the three frequencies are displayed in the top panels of Fig. 1. The mean flux densities are 1.57 Jy, 4.05 Jy, and 5.22 Jy for $`\nu =1.49,4.86,8.44`$ GHz, respectively. Therefore, 0235+164 had a highly inverted spectrum at the time of the observations, with spectral indices $`\alpha _{4.9\mathrm{GHz}}^{1.5\mathrm{GHz}}=0.80`$ and $`\alpha _{8.4\mathrm{GHz}}^{4.9\mathrm{GHz}}=0.46`$.
### 2.2 Optical observations
The radio data were supplemented by observations at 650 nm (R-band filters) taken at the following telescopes: 0.7 m Telescope, Landessternwarte Heidelberg, Germany; 1.2 m Telescope, Observatoire de Haute Provence, France; 1.2 m Telescope, Calar Alto, Spain; 2.1 m Telescope, Cananea, Mexico.
Owing to limited observing time per source and to weather constraints, the optical data are sampled more sparsely. They cover the first week of the radio observations, leave a gap of ten days, and continue for a total of thirty days, thus ending ten days after the radio monitoring. After the usual CCD reduction process (see e.g. Heidt & Wagner 1996), we performed relative photometry, referencing the measurements to three stars within the field. The corresponding lightcurve is plotted in the bottom panel of Fig. 1. The measurement errors are smaller than the symbol size. In addition, we include three data points taken from the long-term monitoring by Schramm et al. (1994); these are marked by triangles.
### 2.3 Lightcurve analysis
As evident from Fig. 1, 0235+164 is variable in all three radio bands and in the optical. A major flare around JD 2448905 can be identified throughout the radio frequencies, and may be tentatively connected with the optical maximum at the beginning of the observation. We note, however, that the exact position of the latter cannot be determined precisely due to the sparse sampling of the optical data. Therefore, we consider the connection between radio and optical variations as possible, but not definitive.
In addition, a second flare towards the end is present in the 21 cm-data, possibly corresponding to the increase at 6 cm, and the sharp peak by a factor of two in the optical. A corresponding feature at 3.6 cm would be expected well inside the observation period but is definitely not present. The lightcurve at 6 cm shows additional faster variations which have no corresponding features at the other wavelengths. These faster variations (which are not shown by the calibrator sources and therefore are real) could for example be caused by scattering in the ISM as we will discuss later. But we note that the “global” behavior is very similar at all three radio wavelengths.
We focus in this paper on the first flare (JD $`\lesssim `$ 2448910), which is pronounced in all three radio frequencies and could be connected to the optical increase around JD 2448900. We assume that all four observed lightcurves are caused by the same physical event in the source. In order to describe this major feature, we fit a linear background and one Gaussian component to the radio lightcurves (using all data points before JD 2448910) according to
$$S(t)=a_0+a_1t+a_2\mathrm{exp}\left(\frac{(ta_3)^2}{a_4^2}\right),$$
(1)
where S(t) is the measured flux density. The parameters and estimated errors are listed in Table 1.
The fits reveal three properties which make the radio variability quite unusual. First, the relative amplitude of the flare becomes larger with increasing wavelength. Second, the duration of the event (i.e., the width of the Gaussian given by the parameter $`a_4`$) decreases with increasing wavelength. And third, no monotonic wavelength dependence of the time of the peak can be found. Including the sparse optical data, the sequence is rather: 650 nm $`\to `$ 20 cm $`\to `$ 6/3.6 cm (the peaks at 6 cm and 3.6 cm are simultaneous within the errors).
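A fit of this form can be reproduced with a standard least-squares routine; the sketch below uses synthetic data in place of the real lightcurve, with times in days from the start of the run, and the starting values are purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def flare(t, a0, a1, a2, a3, a4):
    """Linear background plus one Gaussian flare, Eq. (1)."""
    return a0 + a1 * t + a2 * np.exp(-((t - a3) / a4) ** 2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 12.0, 72)                       # ~6 scans/day for 12 days
s = flare(t, 4.0, 0.0, 0.5, 7.0, 2.0) + rng.normal(0.0, 0.03, t.size)
ds = np.full(t.size, 0.03)                           # measurement errors (Jy)

p0 = [4.0, 0.0, 0.3, 6.0, 1.5]                       # illustrative initial guess
popt, pcov = curve_fit(flare, t, s, p0=p0, sigma=ds, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                        # 1-sigma errors on a0..a4
```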
We determine the time lags between the peaks by deriving Cross Correlation Functions for the radio data sets (again using only data before JD 2448910.0). The CCFs were computed using an interpolation method (e.g. White & Peterson white94 (1994) and references therein). Afterwards, the time lags were determined by the calculation of a weighted mean of the CCF (i.e., the center of mass point) using all values $`\geq 0.5`$. The resulting time lags are:
$`\tau _{6\mathrm{cm}}^{20\mathrm{cm}}`$ $`=`$ $`0.84\text{ days},`$
$`\tau _{3.6\mathrm{cm}}^{20\mathrm{cm}}`$ $`=`$ $`0.71\text{ days},`$
$`\tau _{3.6\mathrm{cm}}^{6\mathrm{cm}}`$ $`=`$ $`0.24\text{ days},`$
with an error of about 0.2 days in each case. The differences between the time lags derived from the Gaussian fits and the CCF are within the errors and probably due to the fact that the flares are not perfectly Gaussian (this explains also that $`\tau _{6\mathrm{cm}}^{20\mathrm{cm}}+\tau _{3.6\mathrm{cm}}^{6\mathrm{cm}}\tau _{3.6\mathrm{cm}}^{20\mathrm{cm}}`$). The deviation from the Gaussian shape is particularly obvious in the lightcurve at 6 cm. Nevertheless, the CCF analysis confirms the result that the sequence of the flares is unusual, since the 20 cm peak clearly precedes the peaks in the other bands, while the time lag between the 6 and the 3.6 cm-data does not appear to be significant.
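A simplified, one-sided version of this interpolation CCF and the centroid-lag estimate can be sketched as follows (edge effects and the usual averaging of the two interpolation directions are ignored; the lightcurve arrays are hypothetical):

```python
import numpy as np

def ccf_centroid_lag(t1, f1, t2, f2, lags):
    """Interpolation CCF of two lightcurves; centroid lag over CCF >= 0.5."""
    r = np.empty(lags.size)
    for i, tau in enumerate(lags):
        g = np.interp(t1 + tau, t2, f2)     # series 2 shifted onto grid 1
        a, b = f1 - f1.mean(), g - g.mean()
        r[i] = np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))
    sel = r >= 0.5
    return np.sum(lags[sel] * r[sel]) / np.sum(r[sel])

# e.g.: lags = np.arange(-3.0, 3.0, 0.05)
#       tau_20_6 = ccf_centroid_lag(t20, f20, t6, f6, lags)
```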
To check the significance of the time lags between the maxima, we carried out Monte-Carlo-Simulations for the Cross-Correlations between the radio frequencies. As a start model for the lightcurves we used the Gaussian fit parameters with the original sampling and added Gaussian noise by a random process. In a second step, we allowed the sampling pattern to be shifted in time randomly and independently for every single simulation. This procedure confirmed that the peak at 20 cm significantly precedes the other two.
### 2.4 Long-term variability
In Fig. 2 we present radio data at 4.8, 8.0 and 14.5 GHz obtained within the Michigan Monitoring Program (e.g. Aller aller99 (1999) and references therein) from January 1991 to November 1995. Our VLA observations (indicated by the arrow in the bottom panel of Fig. 2) coincide with the peak of a large flux density outburst.
A maximum at the time of the VLA observations can also be seen in the mm and cm radio data published by Stevens et al. (stevens94 (1994)), at least at 22, 37, 90 and 150 GHz. The optical data presented by Schramm et al. (schramm94 (1994)) also give a clear indication of an outburst at visible wavelengths right before our VLA observations. Three data points close to our observations are included in our R-band lightcurve (Fig. 1, marked as triangles). The long-term monitoring implies that our observations took place when 0235+164 was in a very bright state.
## 3 Discussion
### 3.1 Problems
On the basis of the collected data, and the analysis in the previous section, we note several properties of the observed flare which are unusual and require an explanation in the framework of physical models:
* The sequence of the flares is rather unusual. The 20 cm maximum precedes the maxima at 3.6 cm and 6 cm. The first optical maximum – if connected to the radio events – is about four days earlier.
* The peaks become narrower and stronger with increasing radio wavelength – a unique behavior which is not seen in other sources and not easily explained in any of the “standard” physical models.
* In case of an intrinsic origin of the variability, one can derive the corresponding source brightness temperature from the duration of the event (e.g. Wagner & Witzel 1995); a worked version of this estimate is given after this list. For $`\lambda =20`$ cm this yields $`T_B\approx 7\times 10^{17}`$ K, far in excess of the inverse Compton limit (Kellermann & Pauliny-Toth kellermann69 (1969)).
* Our observations show that variations are present at both radio and optical wavelengths, with very similar timescales. The gaps in the optical lightcurve do not allow us to establish a one-to-one correspondence between individual events in both wavelength ranges, but it seems plausible that they are caused by a common physical mechanism. This is a severe difficulty for models that attribute the variations to strongly wavelength-dependent propagation effects (free-free absorption and interstellar scintillation).
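For the brightness-temperature point in the list above, a minimal sketch of the underlying estimate: causality limits the angular size of a variable component, and the Rayleigh-Jeans relation then gives

$$T_\mathrm{B}\approx \frac{2c^2S_\nu }{\pi k_\mathrm{B}\nu ^2\theta ^2},\qquad \theta \lesssim \frac{c\,\mathrm{\Delta }t}{(1+z)\,D_\mathrm{A}},$$

where $`\mathrm{\Delta }t`$ is the observed variability timescale and $`D_\mathrm{A}`$ the angular-diameter distance (source-frame corrections beyond the $`(1+z)`$ size factor are omitted). With the variable flux and the few-day timescale of the 20 cm flare, and the distance adopted in Section 1, this reproduces the order of magnitude quoted above.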
In the following, we discuss various models which could describe the variations and take into consideration at least some of the peculiar properties mentioned above.
### 3.2 Relativistic shocks
Propagation of a relativistic shock front through the jet is commonly accepted as one of the possible causes of flux-density variability in AGN (e.g. Blandford & Königl 1979, Marscher & Gear 1985). The time scales usually involved in these models are of the order of weeks to months (corresponding to source sizes in the range of light-weeks to light-months), and are consequently significantly longer than the ones observed here. Following Marscher & Gear (marscher85 (1985)), the characteristics of the flux density evolution in the case of a moving shock within the jet can be described as follows. Starting at high frequencies (in the sub-mm regime) the outburst propagates to longer wavelengths while the peak of the synchrotron spectrum follows a very specific path in the $`S_\mathrm{m}`$-$`\nu _\mathrm{m}`$ plane. This path can be described by three power laws $`S_\mathrm{m}\propto \nu _\mathrm{m}^k`$ (with different exponents $`k`$), distinguishing three different stages of the evolution (see also Marscher marscher90 (1990)). During the synchrotron or the adiabatic expansion stages, which are likely to be found in this wavelength range, the spectral maximum is expected to move from higher to lower frequencies, with the peak flux density being either constant or decreasing with decreasing frequency. Thus, for this “standard model”, we expect that the flux density reaches its maximum at higher frequencies first, and that the amplitude of the peak decreases towards lower frequencies. This is contrary to our observations.
In contrast, the canonical behavior for a shock-in-jet model is seen in the long-term lightcurve (Fig. 2): the amplitude increases with increasing frequency, resulting in a strongly inverted spectrum during the outburst, and the sequence of the peaks (determined from CCFs) follows the expectations: 14.5 GHz $`\to `$ 8.0 GHz $`\to `$ 4.8 GHz.
It should be noted that the model of Marscher & Gear (1985) is based on three assumptions: (i) the instantaneous injection of relativistic electrons, (ii) the assumption that the variable component is optically thick at the beginning of the process, and (iii) that the jet flow is adiabatic. Therefore, this model describes a transition from large ($`\tau >1`$) to small ($`\tau <1`$) optical depths for each frequency. It is possible, however, that 0235+164 is initially optically thin at our observing wavelengths, and that the optical depth increases with time, e.g. due to continuous injection of electrons or field magnification, or through compression. In this case, $`\tau `$ may reach unity – and the flux density its maximum – at lower frequencies earlier than at higher ones (e.g. Qian et al. qian96 (1996)), as observed. A similar behavior was discussed for CTA 26 by Pacholczyk (pacholczyk77 (1977)) – although on longer time scales. In this model, we expect the maximum at 4.9 GHz to precede the one at 8.4 GHz or that they are reached at the same time. The latter may be true within the uncertainty. However, the different amplitudes and the durations of the event cannot be explained without additional assumptions.
Alternatively, the observed variations may be explained with a thin sheet of relativistic electrons moving along magnetic field lines with a very high Lorentz factor ($`\gamma 20`$–25). In this case, a slight change of the viewing angle (e.g. from 0 to 2–3) may give rise to dramatic variations of the aberration angle and therefore of the observed synchrotron emission (Qian et al., in preparation). Additionally, this should cause significant changes in the linear polarization (strength and position angle), which may be studied in future observations.
### 3.3 Precessing beam model
We now investigate a scenario in which the observed effect is caused by the variable Doppler boosting of an emitting region moving along a curved three-dimensional path. If the observed turnover frequency of such a region falls between 1.5 and 8.4 GHz, the peaks in the lightcurves can be displaced relative to each other. The Doppler factor variations required to reproduce the observed time lags may be caused by a perturbed relativistic beam (cf. Roland et al. 1994, see also Camenzind & Krockenberger 1992). The jet is assumed to consist of an ultra-relativistic ($`\gamma \sim 10`$) beam surrounded by a thermal outflow with speed $`\beta \sim 0.4`$. The relativistic beam precesses with period $`P_0`$ and opening angle $`\mathrm{\Omega }_0`$. The period of the precession may vary from a few seconds to hundreds of days. Roland et al. (1994) show that this model can explain the observed short-term variability of 3C 273, and that it also makes plausible predictions about the kinematics of superluminal features in parsec-scale jets. We use a similar approach to describe the flux evolution of 0235+164. The trajectory of an emitting component inside the relativistic beam is determined by collimation in the magnetic field of the perturbed beam, and can be described by a helical path. In the coordinate system (x,y,z) with the z-axis coinciding with the rotational axis of the helix, the component’s position is given by
$$\{\begin{array}{ccc}\hfill x& =& r(z)\mathrm{cos}(\omega t-kz+\varphi _0)\hfill \\ \hfill y& =& r(z)\mathrm{sin}(\omega t-kz+\varphi _0)\hfill \\ \hfill z& =& z(t),\hfill \end{array}$$
(2)
where $`r(z)`$ describes the amplitude of the helix, and can be approximated as $`r(z)=r_0z/(a_0+z)`$. For a precessing beam, $`a_0=r_0/\mathrm{tan}\mathrm{\Omega }_0`$. The form of the function $`z(t)`$ should be determined from the evolution of the velocity $`\beta _\mathrm{b}`$ of the relativistic component. $`\beta _\mathrm{b}`$ can be conveniently expressed as a function of $`z`$, and in the simplest case is assumed to be constant. Then, under the condition of instantaneous acceleration of the beam ($`dz/dt>0`$ for $`z\to 0_+`$), the component trajectory is determined by
$$t(z)=t_0+\int _{z_0}^z\frac{1}{\dot{z}}𝑑z=t_0+\int _{z_0}^z\frac{C_2(z)}{k\omega r^2(z)+C_3(z)}𝑑z,$$
(3)
with $`C_1(z)=[r^{\prime }(z)]^2+1`$, $`C_2(z)=C_1(z)+k^2r^2(z)`$, and $`C_3(z)=[C_2(z)\beta _\mathrm{b}^2-\omega ^2r^2(z)C_1(z)]^{1/2}`$. (This follows directly from Equation (7) in Roland et al. (1994).) Generally, both $`\omega `$ and $`k`$ can also vary. Their variations should then be expressed as functions of $`z`$, and $`\omega (z)`$ and $`k(z)`$ used in Equation (3).
We describe the emission of the perturbed beam by a homogeneous synchrotron spectrum with spectral index $`\alpha =0.5`$ and rest frame turnover frequency $`\nu _\mathrm{m}^{\prime }=150`$ MHz. The beam precession period is $`P_0=200`$ days, and $`\mathrm{\Omega }_0=5.7^{\circ }`$. $`r(z)`$ is described by $`r_0=0.1`$ pc and $`a_0=1`$ pc. The corresponding lightcurves are plotted in Figure 3.
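Equation (3) can be integrated numerically. The minimal sketch below uses the parameters quoted above, together with assumed values for the wavenumber $`k`$ and the beam speed $`\beta _\mathrm{b}`$, neither of which is quoted in the text; it returns coordinate travel times in the host frame, without the Doppler compression that sets the observed variability timescales.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the trajectory integral, Eq. (3), for the parameters above.
# k and beta_b are assumptions for illustration only.
C_PC_YR = 0.3066                     # speed of light [pc/yr]
r0, a0 = 0.1, 1.0                    # helix amplitude parameters [pc]
omega = 2 * np.pi / (200 / 365.25)   # precession: P0 = 200 d -> [1/yr]
k = 2 * np.pi / 0.1                  # assumed perturbation wavenumber [1/pc]
v = 0.995 * C_PC_YR                  # assumed beam speed (gamma ~ 10) [pc/yr]

r = lambda z: r0 * z / (a0 + z)
drdz = lambda z: r0 * a0 / (a0 + z) ** 2

def inv_zdot(z):                     # integrand 1/zdot of Eq. (3)
    C1 = drdz(z) ** 2 + 1.0
    C2 = C1 + (k * r(z)) ** 2
    C3 = np.sqrt(C2 * v**2 - (omega * r(z)) ** 2 * C1)
    return C2 / (k * omega * r(z) ** 2 + C3)

t_travel, _ = quad(inv_zdot, 0.0, 2.0)
print(f"time to reach z = 2 pc: ~{t_travel:.1f} yr")
```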
We can see that the model is capable of reproducing the observed time lag between 1.4 GHz and the higher radio frequencies. One can speculate that a more complex physical setting (e.g. spectral evolution of the underlying emission, or inhomogeneity of the emitting plasma) may be required for explaining the apparent discrepancy between the modeled and observed widths of the flare.
### 3.4 Free-free absorption by a foreground medium
Here we consider the effect of free-free absorption in a foreground medium, either in the host of the BL Lac object itself, or in one of the intervening redshift systems. To keep the discussion simple, we neglect the cosmological redshift, i.e., factors $`(1+z)`$. The optical depth for free-free absorption in a plasma is approximately given by (see e.g. Lang 1974):
$$\tau =8.235\times 10^{-2}\,T^{-1.35}\,\nu ^{-2.1}\int N^2𝑑l,$$
(4)
where $`T`$ is the electron temperature in K, $`\nu `$ is measured in GHz, and the emission measure $`\int N^2𝑑l`$ in pc cm<sup>-6</sup>. Thus, the absorption of radiation by a foreground medium can be described by a factor $`e^{-c\lambda ^2}`$, where $`c`$ is a constant. We assume the following scenario: the source is moving with transverse speed $`v`$ behind a patchy foreground medium, so that changes in the emission measure towards the source produce variable absorption. To lowest order, we describe gaps between the clouds by
$$\tau =k^2\lambda ^2x^2$$
(5)
($`x`$ being the axis perpendicular to the line of sight). Since the observed flux density of a point source is given by $`S_{\mathrm{obs}}=Se^{-\tau }`$, a moving source seen through such a gap in the foreground medium will show peaked lightcurves with roughly Gaussian shape. The width of the peaks will decrease with increasing wavelength. However, there are two major problems that need to be addressed. Firstly, in this model the maxima for all frequencies are reached at the same time, and secondly, the observed durations (i.e., the widths of the Gaussians fitted according to Equation (1)) do not follow the expected behavior $`a_4\propto \lambda ^{-1}`$.
The first of these problems can be resolved for an extended source: the time lags between the peaks of the observed lightcurves can be explained by a slight, frequency-dependent shift of the brightness center.
To deal with the second problem, we assume that the source is not point-like, but has a circular Gaussian shape, with the source size proportional to the wavelength. Thus, the flux density is given by
$$S(x,t)=S_0\mathrm{exp}\left(-\frac{(x-vt)^2}{\sigma _\lambda ^2}\right),$$
(6)
with $`\sigma _\lambda =\sigma _0\lambda `$ (i.e., $`\sigma _0`$ is the source size at 1 m wavelength).
Assuming that the angular size of the variable region is much smaller than the antenna beam, the observed flux density is given by the integral
$$S_{\mathrm{obs}}(t)=\int _{-\infty }^{+\infty }S(x,t)e^{-\tau (x)}𝑑x$$
(7)
Evaluating the above integral gives (since $`S_{\mathrm{obs}}`$ is the integral of a product of two Gaussians)
$$S_{\mathrm{obs}}(t)=S^{\prime }\mathrm{exp}\left(-\frac{v^2t^2}{\sigma _\lambda ^2+1/k^2\lambda ^2}\right)$$
(8)
with a new normalization constant $`S^{\prime }`$. Therefore, the square of the width of the Gaussian is
$$a_4^2=\frac{1}{v^2}\left(\sigma _\lambda ^2+\frac{1}{k^2\lambda ^2}\right)=\frac{\sigma _0^2}{v^2}\lambda ^2+\frac{1}{k^2v^2}\frac{1}{\lambda ^2},$$
(9)
and it should depend on wavelength like $`A\lambda ^2+B\lambda ^{-2}`$. By adjusting the parameters $`A`$ and $`B`$ to fit the measured values of $`a_4^2`$ at the three observing wavelengths, we derive values for $`\sigma _0^2/v^2`$ and $`k^2v^2`$. We assume here that the transverse speed is dominated by superluminal motion with $`v/c=\beta _{\mathrm{app}}`$ and obtain a source size of $`0.0067\beta _{\mathrm{app}}`$ pc corresponding to an angular size of $`1.6\beta _{\mathrm{app}}\mu `$as at $`\lambda =1`$ m. We note that such a small source diameter even for $`\beta _{\mathrm{app}}=10`$ results in a brightness temperature of about $`10^{15}`$ K, and therefore violates the inverse Compton limit. For higher velocities – as observed in this source (e.g. Chu et al. 1996) – the observed size can be larger. However, to reconcile our observational findings with the inverse Compton limit, Doppler factors of the order of 100 are needed.
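Equation (8), and hence the width formula of Equation (9), can be cross-checked by evaluating the integral in Equation (7) numerically; the parameter values in the sketch below are arbitrary illustrations, not the fitted values of this section.

```python
import numpy as np

# Numerical check of Eq. (8): convolving the Gaussian source profile of
# Eq. (6) with the gap transmission e^{-tau}, tau from Eq. (5), again
# gives a Gaussian lightcurve whose 1/e half width obeys Eq. (9).
v, k, lam, sigma0, S0 = 1.0, 2.0, 0.5, 0.8, 1.0   # arbitrary illustration
sigma_lam = sigma0 * lam

x, dx = np.linspace(-30.0, 30.0, 60001, retstep=True)
transmission = np.exp(-(k * lam * x) ** 2)        # e^{-tau(x)}, Eq. (5)

def S_obs(t):
    source = S0 * np.exp(-((x - v * t) / sigma_lam) ** 2)   # Eq. (6)
    return np.sum(source * transmission) * dx               # Eq. (7)

ts = np.linspace(0.0, 5.0, 1001)
curve = np.array([S_obs(t) for t in ts])
a4_numeric = ts[np.argmin(np.abs(curve - curve[0] / np.e))]
a4_analytic = np.sqrt(sigma_lam**2 + 1.0 / (k * lam) ** 2) / v  # Eq. (9)
print(a4_numeric, a4_analytic)    # both ~1.08 for these parameters
```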
The second term in Equation (9) gives the size of the gap in the foreground medium, i.e., the distance between the points where $`\tau =1`$. Since we assumed $`\tau =k^2\lambda ^2x^2`$, this distance is $`\mathrm{\Delta }x=2/(k\lambda )`$, which is about $`2.3\times 10^{-4}\beta _{\mathrm{app}}`$ pc at $`\lambda =1`$ m. (Note that this is true only for the case where the absorber is at the redshift of the BL Lac object; the ratio of the angular diameter distances of emitter and screen has to be applied as a correction factor in the case of an intervening absorber.)
We still have to check whether Equation (4) gives a sufficient optical depth for reasonable choices of electron temperature and emission measure. The strongest constraints come from the data at 3.6 cm: to explain the observed amplitude of 0.24 Jy at a source flux of 5 Jy, $`\tau `$ must be at least 0.05 at this wavelength. For an electron temperature of 5000 K, an emission measure of $`5\times 10^6`$ pc cm<sup>-6</sup> is needed. The thickness of the absorber cannot be much larger than the transverse scale derived above, which is 0.06 pc at 3.6 cm for $`\beta _{\mathrm{app}}=10`$; this gives an electron density of $`10^4`$ cm<sup>-3</sup>. These values are within the range found in Galactic H II regions and planetary nebulae.
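These numbers can be verified directly from Equation (4); the short check below uses only the values quoted in this paragraph.

```python
import numpy as np

# Consistency check of the absorber parameters above, using Eq. (4)
# at nu = 8.4 GHz (lambda = 3.6 cm) and T = 5000 K.
T, nu = 5.0e3, 8.4
tau_needed = -np.log(1.0 - 0.24 / 5.0)   # 1 - e^{-tau} = 0.24/5 -> ~0.05
EM = tau_needed / (8.235e-2 * T**-1.35 * nu**-2.1)
n_e = np.sqrt(EM / 0.06)                 # absorber depth 0.06 pc (see text)
print(f"EM ~ {EM:.1e} pc cm^-6, n_e ~ {n_e:.0e} cm^-3")   # ~5e6, ~1e4
```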
We conclude that this model can explain the observed shorter duration of the flares at longer wavelengths, and – under the assumption of slightly different spatial locations of the brightness center at the observed wavelengths – also the sequence of the peaks. It predicts that the amplitude of the peaks increases more strongly with wavelength than observed, but it is consistent with the data when an underlying non-variable component is taken into account. However, in the possible case of a connection between the radio and the optical variations this model fails, since the optical radiation would not be affected by free-free absorption.
### 3.5 Interstellar scattering (ISS)
Scattering processes in the interstellar medium are well known to cause flux density variations at radio frequencies (e.g. Rickett 1990). In this section we investigate the possibility that ISS is the cause of the variations seen in our observations. We will follow mainly the considerations and notations of Rickett et al. (1995). For a point-like source, the spatial scale of flux density variations caused by refractive ISS (RISS) is given by ($`L`$ is the path length through the medium, $`\theta _{\text{scat}}`$ the scattering angle)
$$r_{0,\lambda }\approx 0.25L\theta _{\text{scat}},$$
(10)
which is proportional to $`\lambda ^{2.2}`$ for a Kolmogorov-type medium (Rickett et al. 1984), and therefore also $`\theta _{\text{scat}}\propto \lambda ^{2.2}`$ (cf. Cordes et al. 1984). The spatial scale of an extended source (assuming a Gaussian shape of $`\sigma _\lambda `$ in width) is then given by
$$r_{\theta ,\lambda }=\sqrt{r_{0,\lambda }^2+(0.5L\sigma _\lambda )^2}.$$
(11)
Then, the scintillation index $`m_{\theta ,\lambda }`$ and the variability timescale $`\tau _{\theta ,\lambda }`$ for the extended source can be derived by
$`m_{\theta ,\lambda }`$ $`=`$ $`m_{0,\lambda }{\displaystyle \frac{r_{0,\lambda }}{r_{\theta ,\lambda }}},`$ (12)
$`\tau _{\theta ,\lambda }`$ $`=`$ $`{\displaystyle \frac{r_{\theta ,\lambda }}{V}},`$
where $`V`$ is the velocity of the Earth (i.e., the observer) relative to the scattering medium, and $`m_{0,\lambda }`$ is the (wavelength dependent) scintillation index of a point source.
We assume a source diameter which is proportional to $`\lambda `$ as we did in the previous section, thus $`\sigma _\lambda =\sigma _0\lambda `$, and use $`\theta _{\text{scat}}=\theta _0\lambda ^{2.2}`$ (see above). This gives
$`m_{\theta ,\lambda }`$ $`=`$ $`{\displaystyle \frac{m_{0,\lambda }\theta _0\lambda ^{2.2}}{\sqrt{\theta _0^2\lambda ^{4.4}+4\sigma _0^2\lambda ^2}}}\text{and}`$ (13)
$`\tau _{\theta ,\lambda }`$ $`=`$ $`{\displaystyle \frac{L}{4V}}\sqrt{\theta _0^2\lambda ^{4.4}+4\sigma _0^2\lambda ^2}.`$
Therefore, it is clear that – independent of the wavelength dependence of $`m_{0,\lambda }`$ – the timescales of the variations become shorter for decreasing wavelengths. This is contrary to our observational findings (see Table 1), implying that this simple model is unlikely to explain the observations. Additionally, interstellar scattering cannot cause variability in the optical regime. Hence, in this case again, a possible connection of the optical and the radio variations would rule out ISS as the only cause of the observed variability.
However, owing to the small source diameters involved here, ISS can be present as an additional effect. As an example, we calculate the scintillation index and the timescales under the following assumptions. Following Rickett (1986), the path length in the interstellar medium of our Galaxy is $`L\approx 500\mathrm{pc}\mathrm{csc}|b|\approx 788\mathrm{pc}`$ (the source galactic latitude is $`-40^{\circ }`$). With $`\sigma _0=1.2`$ mas (which corresponds to $`T_\mathrm{B}\approx 10^{12}`$ K), $`\theta _0=60`$ mas, and a typical velocity (of the observer) of $`V=50`$ km/s, this yields:
| $`\lambda `$ \[cm\] | $`m_{\theta ,\lambda }`$ | $`\tau _{\theta ,\lambda }`$ \[d\] |
| --- | --- | --- |
| 20 | 0.48 | 12.2 |
| 6 | 0.32 | 1.28 |
| 3.6 | 0.21 | 0.64 |
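The tabulated values follow from Equations (10)–(13); the sketch below reproduces them if one adopts a constant point-source scintillation index $`m_{0,\lambda }\approx 0.5`$ at all three wavelengths. This value is an assumption, since $`m_{0,\lambda }`$ is not quoted in the text, but it recovers the table entries to within rounding.

```python
import numpy as np

# Reproducing the RISS estimates tabulated above from Eqs. (10)-(13).
PC, MAS = 3.086e18, 4.848e-9            # cm per pc, rad per mas
L = 788.0 * PC                           # path length through the medium
V = 50.0e5                               # observer speed [cm/s]
sigma0, theta0, m0 = 1.2, 60.0, 0.5      # mas at 1 m wavelength; assumed m0

for lam in (0.20, 0.06, 0.036):          # wavelengths [m]
    theta_scat = theta0 * lam**2.2 * MAS           # scattering angle [rad]
    sigma_lam = sigma0 * lam * MAS                 # source size [rad]
    r0 = 0.25 * L * theta_scat                     # Eq. (10)
    r_theta = np.hypot(r0, 0.5 * L * sigma_lam)    # Eq. (11)
    m = m0 * r0 / r_theta                          # Eq. (12)
    tau_d = r_theta / V / 86400.0                  # Eq. (12), in days
    print(f"{100*lam:5.1f} cm: m = {m:.2f}, tau = {tau_d:5.2f} d")
```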
Therefore, the faster variations which are clearly seen at higher frequencies (especially in the 6 cm lightcurve) may be due to ISS.
### 3.6 Gravitational Microlensing
Another possible explanation for the origin of the observed variations is gravitational microlensing (ML) by stars in a foreground galaxy. ML effects have been unambiguously observed in the multiple QSO 2237+0305 (Irwin et al. 1989, Houde & Racine 1994), and most likely also in other multiply-imaged QSOs (see Wambsganss 1993, and references therein). The possibility that ML can cause AGN variability has long been predicted (Paczyński 1986, Kayser et al. 1986, Schneider & Weiss 1987), but it remains unclear whether ML causes a substantial fraction of the observed variability in QSOs (e.g. Schneider 1993).
0235+164 has a foreground galaxy ($`z=0.524`$) situated within two arcseconds of the line of sight (Spinrad & Smith 1975), and an additional galaxy 0″.5 away from the source (Stickel et al. 1988, see also the additional components reported in Yanny et al. 1989). Additionally, a nearby absorption system was observed at $`\lambda =21`$ cm by Wolfe, Davis & Briggs (1982). All three objects may host microlenses affecting the emission from 0235+164. Thus, for 0235+164 the probability for ML is expected to be high (Narayan & Schneider 1990), so that ML events should occasionally be present in the lightcurves.
We will now show how ML can modulate the underlying long-term lightcurve and explain faster variations of the long-wavelength flux compared to the short-wavelength radiation, even when the longer wavelength radiation comes from a larger source (component). Since the available data do not permit a detailed account of possible ML situations, the attention here is restricted to two simple situations: an isolated point-mass lens in the deflector, and a cusp singularity formed by an ensemble of microlenses (Schneider & Weiss 1987, Wambsganss 1990). In fact, both cases yield similar predicted ML lightcurves. The scales of the source size and the lens mass necessary to yield a flux variation of the observed kind can be estimated for both cases together.
We assume an elliptically shaped emitting feature that moves relativistically in a direction roughly coinciding with the minor axis of the ellipse. Such a component can be formed by relativistic electrons which are locally accelerated by a shock front inside a superluminal jet. The shape of the source component and its orientation are then determined by the flow inside the jet. A Gaussian brightness profile is assumed, with the component size proportional to $`\lambda `$ (see Fig. 4 for details). We postulate that the emission peaks at all three wavelengths are displaced relative to each other, but that the peaks of the shorter-wavelength components are situated within the half-intensity contour of the longer-wavelength components.
Let $`\beta c`$ be the apparent effective transverse velocity of the source component; using the redshift $`z_s=0.94`$ of the object, this corresponds to an angular velocity of $`v_\mathrm{a}=2\beta h\times 10^{-4}`$ mas/day. If a source component moves along a track in the source plane, and the component size is much smaller than the minimum angular separation $`d_\mathrm{a}`$ from the singularity, as indicated in Fig. 4 (solid ellipse), then the timescale of variation is given roughly by the ratio $`d_\mathrm{a}/v_\mathrm{a}`$. On the other hand, if a strongly elongated source component moves so that parts of it cross the line of sight to the singularity (as indicated in Fig. 4, dashed ellipse), then the shortest possible timescale is roughly the ratio between the transverse angular source size $`ra_\lambda `$ (the minor semi-axis at wavelength $`\lambda `$, where $`a_\lambda `$ denotes the semi-major axis) and the angular velocity. Now assume that the former case approximates the 3.6 cm source and the latter case approximates the 20 cm source. If $`\mathrm{\Delta }t_{3.6}\approx 4`$ days, $`\mathrm{\Delta }t_6\approx 3`$ days and $`\mathrm{\Delta }t_{20}\approx 2`$ days are the variability timescales for the three wavelengths considered, we have
$$d_\mathrm{a}\approx v_\mathrm{a}\mathrm{\Delta }t_{3.6}\approx 8\beta h\times 10^{-4}\mathrm{mas},$$
(14)
and
$$ra_{20}\approx 4\beta h\times 10^{-4}\mathrm{mas},$$
(15)
where $`r\le 1`$ is the axis ratio of the Gaussian source component. In order for the 20 cm source to experience appreciable variations, the closest separation of its center from the singularity cannot be larger than its major semi-axis, i.e., $`a_{20}\gtrsim d_\mathrm{a}`$, and this inequality can be satisfied for $`r\lesssim 0.5`$.
Since the relative contribution of the moving component to the total flux of the source is unknown, we cannot use the observed lightcurves to determine the magnification of the component emission. The magnification of a point source at separation $`\theta `$ from the point singularity is
$$\mu _\mathrm{p}=\frac{x^2+2}{x\sqrt{x^2+4}},$$
(16)
where $`x=\theta /\theta _0`$, and $`\theta _0`$ is the angular scale induced by a point mass lens of mass $`M`$:
$$\theta _0=\sqrt{\frac{4GM}{c^2}\frac{D_{\mathrm{ds}}}{D_\mathrm{s}D_\mathrm{d}}},$$
(17)
where $`D_\mathrm{d}`$, $`D_\mathrm{s}`$, and $`D_{\mathrm{ds}}`$ denote, respectively, the angular diameter distances to the lens, to the source, and from the lens to the source, and $`m=M/M_{\odot }`$ is the lens mass in units of the solar mass. Assuming that the lens is situated at $`z=0.524`$,
$$\theta _0=1.87\sqrt{mh}\times 10^{-3}\mathrm{mas}.$$
(18)
Approximating the point-source magnification by $`\mu _\mathrm{p}\approx 1/x`$ (for $`x\ll 1`$), and assuming as before that the size $`a_{3.6}`$ is much smaller than the closest separation of the source from the point-like singularity, the maximum magnification of this source component becomes
$$\mu _{3.6,\mathrm{max}}\approx \frac{\theta _0}{d_\mathrm{a}}\approx 2.34m^{1/2}h^{-1/2}\beta ^{-1}.$$
(19)
Hence, a solar-mass star would yield a magnification of the order of 2 for the smallest source component moving at roughly the speed of light and in general can produce lightcurves similar to the observed variations.
In Fig. 5, we plot numerically determined ML lightcurves for a moving source with an axis ratio $`r=0.4`$, minimum separation $`d_\mathrm{a}=8\beta h\times 10^{-4}`$ mas, and a semi-major axis of the 20 cm source component of $`a_{20}=\beta h\times 10^{-3}`$ mas. The lens mass is $`m=0.4\beta ^2h`$. The source sizes are chosen to be proportional to wavelength, and the brightness peaks of the 6 cm and 20 cm components are displaced relative to the peak of the 3.6 cm component by 0.4 of their corresponding sizes. As can be seen from the modeled lightcurves, the variability timescale of the 20 cm component is considerably shorter than that of the shorter wavelength components, in accordance with our analytical estimates. In addition, the observed occurrence of the brightness peak at 20 cm before those at smaller wavelengths can be accounted for in our model by a slight tilt of the direction of motion of the source relative to the minor axis of the surface brightness ellipses, in the sense of the large component crossing the caustic point before the closest approach of the 3.6 cm component to that point. Nevertheless, we note that the small source sizes needed (in the range of $`\mu `$as) will result in brightness temperatures of the order of $`10^{15}`$ K, i.e., three orders of magnitude above the inverse Compton limit.
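The qualitative behavior can be sketched by averaging the point-lens magnification of Equation (16) over moving elliptical Gaussian components. In the sketch below, the component sizes and the lens parameters follow the text where given, while the exact track geometry, the peak displacements, and the time sampling are assumptions; it is an illustration of the mechanism, not the computation behind Fig. 5.

```python
import numpy as np

# Point-lens magnification (Eq. 16) averaged over elliptical Gaussian
# source components moving past the lens.  Angles are in units of
# 10^-4 beta h mas.
def mu_point(x):
    return (x**2 + 2.0) / (x * np.sqrt(x**2 + 4.0))      # Eq. (16)

theta0 = 11.8                       # Einstein radius for m = 0.4 beta^2 h
d_a, r_axis = 8.0, 0.4              # closest approach (3.6 cm), axis ratio
a = {"3.6": 1.8, "6": 3.0, "20": 10.0}       # semi-major axes ~ lambda
shift = {"3.6": 0.0, "6": 1.2, "20": 4.0}    # 0.4 of size, toward the lens

rng = np.random.default_rng(1)
u = rng.standard_normal((10000, 2)) / np.sqrt(2.0)   # Gaussian source cloud

times = np.linspace(-30.0, 30.0, 241)
for lam in a:
    cx = d_a - shift[lam]                 # displaced impact parameter
    sx = u[:, 0] * a[lam]                 # spread along the major axis
    sy = u[:, 1] * a[lam] * r_axis        # spread along the motion
    mu = [np.mean(mu_point(np.hypot(cx + sx, t + sy) / theta0))
          for t in times]
    print(lam, "cm: peak magnification %.2f" % max(mu))
```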
A more detailed modeling of the lightcurves by a microlensing scenario is not warranted at this stage, given the large number of degrees of freedom. Nevertheless, the above considerations have demonstrated that the basic qualitative features can be understood in the microlensing picture without very specific assumptions.
## 4 Conclusions
We have observed the BL Lac object 0235+164 at three radio wavelengths and in the optical R-band and found rapid variations in all frequency bands. One single event that can be identified at all radio wavelengths shows very peculiar properties. The brightness peak is reached first at 20 cm wavelength, and afterwards at 3.6 and 6 cm. The amplitudes of the flares decrease from longer to shorter radio wavelengths, and the timescales become longer. The event in the radio regime might be connected to the bright peak in the optical lightcurve, although this connection remains questionable due to the sparse sampling of the R-band data. In the previous sections, we have discussed some models and to what extent they can explain the observed variations.
While the conventional application of the shock-in-jet model has difficulties in reproducing the observations, the assumption of an increasing optical depth (e.g. due to continuous injection of relativistic electrons) can cause a delay of the maximum at high frequencies with respect to the lower frequencies, and therefore explain at least one of the special features.
Variable Doppler boosting can cause simultaneous short-term variability in all observed wave bands. Fairly pronounced time lags between the different frequencies can be caused by turnover frequency variations in the observed spectrum of a moving source. However, broader peaks are expected at longer wavelengths.
Free-free absorption and interstellar scattering are only capable of explaining radio variations, not variability in the optical regime. Therefore, if the connection between the optical and the radio variability is real, these models are ruled out as the only cause for the variations. Furthermore, the dependence of the timescales on wavelength argues against an explanation of the flare by interstellar scattering. The absorption by a patchy foreground medium can easily describe the shape and the widths of the flares (in the radio) and can – if we assume different locations for the brightness center – also explain the time sequence of the brightness peaks.
Gravitational Microlensing – in combination with a wavelength-dependent source size and a slight displacement of the brightness peak – provides a possible explanation for the observed variations in the radio regime. One would also expect fairly strong variability in the visible range, because of the much smaller source size. Microlensing thus appears to be a viable explanation of the observations, which is also quite attractive because of the known foreground objects.
It is quite remarkable that these attempts to explain the rapid radio variability in 0235+164 – different as they are – all imply that the intrinsic source size is very small. To reconcile the observations with the 10<sup>12</sup> K inverse Compton limit, a Doppler factor substantially higher than the “canonical” value of 10 (see e.g. Ghisellini et al. 1993, Zensus 1997) is required. Most scenarios that we have investigated imply $`𝒟\sim 100`$. In this context it is interesting to note that circumstantial evidence for superluminal motion with $`\beta _{\mathrm{app}}\sim 30`$ has been found in this source (Chu et al. 1996). The variations in 0235+164 are also among the strongest and fastest of all sources in the Michigan monitoring program (e.g. Hughes et al. 1992). This suggests that the distribution of Doppler factors in compact radio sources has a tail extending to $`𝒟\sim 100`$, and that 0235+164 – and perhaps more generally the sources showing strong intraday radio variability – belongs to this tail. The implied extremely small source size can allow rapid intrinsic variations, and at the same time favors propagation effects. It is therefore plausible that the observed variability is caused by a superposition of both mechanisms.
###### Acknowledgements.
We thank I. Pauliny-Toth and E. Ros for critically reading the manuscript, the referee, J.R. Mattox, for valuable comments, C.E. Naundorf and R. Wegner for help with the observations, and B.J. Rickett for stimulating discussions. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under a cooperative agreement by Associated Universities, Inc. This research has made use of data from the University of Michigan Radio Astronomy Observatory which is supported by the National Science Foundation and by funds from the University of Michigan.
# GRB990123: Evidence that the Gamma Rays Come from a Central Engine
## 1 INTRODUCTION
On January 23, 1999, the Robotic Optical Transient Search Experiment (ROTSE) discovered strong optical emission (9th mag) during a gamma-ray burst (Akerlof et al. 1999a; Akerlof et al. 1999b). Remarkably, such extraordinary behavior was predicted a few weeks before (Sari & Piran 1999a). This event, GRB990123, had a location from the BeppoSAX satellite (Piro (1999); Heise (1999)) and a measured redshift of the optical transient of $`z=1.6`$ (Kelson et al. (1999); Hjorth et al. (1999)). The Burst and Transient Source Experiment (BATSE) observed the burst (Kippen (1999)), as did the Comptel experiment (Conners et al. (1999)).
The detection in Comptel implies that the burst had a typical hard gamma-ray burst (GRB) spectrum. GRB spectra often extend to very high energies with no indication of attenuation by photon-photon interactions. This implies substantial relativistic bulk motion of the radiating material, with Lorentz factors in the range of $`10^2`$ to $`10^3`$. Two classes of models have arisen that explain various aspects of the observations. In the “external” shock models (Mészáros & Rees (1993)), the release of energy is very quick, and a relativistic shell forms that expands outward for a long period of time ($`10^5`$ to $`10^7`$ s). At some point, interactions with the external medium (hence the name) cause the energy of the bulk motion to be converted to gamma-rays. Although the shell might produce gamma-rays for a long period of time, the shell keeps up with the photons such that they arrive at a detector over a relatively short period of time. If the shell has a velocity $`v=\beta c`$, with a corresponding bulk Lorentz factor $`\mathrm{\Gamma }=(1-\beta ^2)^{-1/2}`$, then photons emitted over a period $`t`$ arrive at a detector over a much shorter period, $`T=(1-\beta )t\approx t/(2\mathrm{\Gamma }^2)`$. Although this model is consistent with the short energy release time expected for a compact object merger and the observed long time scale of GRBs, we have argued that it cannot explain gamma-ray emission with gaps. It can only explain the rapid time variability if the shell is very narrow, has angular structure much smaller than $`\mathrm{\Gamma }^{-1}`$, and cannot be decelerating (Fenimore, Madras, & Nayakshin (1996); Ramirez-Ruiz & Fenimore (1999)).
The alternative theory is that a central site releases energy in the form of a wind or multiple shells over a period of time commensurate with the observed duration of the GRB (Rees & Mészáros (1994)). The gamma-rays are produced by the internal interactions within the wind; hence these scenarios are often referred to as internal shock models, although they might actually involve the Blandford-Znajek effect (Mészáros & Rees (1997); Paczyński (1997)). The discovery of x-ray afterglows lasting hours (Costa et al. (1997)), optical afterglows lasting weeks to months (Metzger et al. (1997)), and radio afterglows lasting many months (Frail et al. (1997)) strongly indicated that external shocks were involved in some aspects of GRB emission. The observed power law decay of the afterglows is expected from many external shock models (Wijers, Rees, & Mészáros (1997); Reichart (1997); Tavani (1997); Waxman, Kulkarni, & Frail (1998); Mészáros, Rees, & Wijers (1998); Rees & Mészáros (1998); Sari, Piran, & Narayan (1998)).
Piran & Sari (1997) suggested that the initial gamma-ray phase is due to internal shocks from a relativistic wind (or multiple shells) that merge into a single relativistic shell, which then produces the afterglows in a manner similar to the external shock models. More recently, Sari & Piran (1999a) predicted bright early optical afterglows resulting from a reverse shock in the shell. Their detailed calculations showed behavior much like that observed in GRB990123: a rapid rise of the optical emission to very bright levels soon after the burst starts, and a power law decay as the shell decelerates.
Using kinematic arguments, the rapid time variability of many GRBs indicates that only a small fraction (typically 0.005) of the shell surface can be involved in the gamma-ray emission (Fenimore, Madras, & Nayakshin (1996); Sari & Piran (1997); Fenimore et al. (1999)). There might be ways to obtain filling factors as large as 0.1 (Dermer & Mitman (1999)). However, the average pulse width is remarkably constant (to within a few percent) and, thus, shows no sign of any deceleration (Ramirez-Ruiz & Fenimore (1999)). As a result, the only kinematically allowed external shock models for the gamma-ray phase have been forced to involve very narrow shells and no deceleration.
The early optical emission of GRB990123 was discovered by ROTSE (Akerlof et al. 1999a; Akerlof et al. 1999b), which consists of four 11.1 cm aperture telephoto lenses with unfiltered CCD cameras on a single rapidly slewing mount. It is capable of responding rapidly to alerts from the Gamma-ray Coordinates Network (GCN). In the case of GRB990123, the initial trigger was sent out on GCN well before the explosive rise in the gamma rays. Optical exposures were actually taken while the gamma-ray emission was still increasing. The ROTSE experiment detected extremely strong optical emission that increased rapidly from 12th to 9th magnitude, then faded with a power law decay during the burst (Akerlof et al. 1999b). Other optical observations many hours later saw a continued power law decline beyond the initial ROTSE data (Bloom et al. (1999)).
Figure 1 shows the ROTSE optical observations overlaid on the BATSE time history of the gamma-ray emission. Both are plotted log-log, an unusual way to present gamma-ray time histories. This presentation shows the power law decay of the optical data, and it allows one to see how the gamma-ray data track the optical emission. The gamma-ray envelope would also show a power law decay if it, too, were due to a relativistic shell. Interestingly, the gamma-ray emission does have an envelope that is similar to the optical envelope. However, one must be careful because the ROTSE observations were short compared to their separation, so only a few points within the time history are sampled. In fact, the optical emission could have closely mirrored the three main gamma-ray releases of energy (at 25, 37, and 80 s) and ROTSE would not have resolved them. The potential similarity of the optical and gamma-ray envelopes means one cannot necessarily argue from the envelopes alone for a different origin.
The purpose of this paper is to analyze the BATSE data to show that the fine time structure of the gamma-ray emission does not change during the burst as it should if the gamma rays originate on the decelerating shell as indicated by the optical emission. We find there is effectively no apparent deceleration in the gamma-ray source. Thus, we eliminate the only remaining external shock model for the gamma-rays and conclude that the gamma-rays in this source come from a central engine while the optical emission arises from decelerating external shocks.
## 2 VARIATION IN PULSE WIDTH IN GRB990123
In the early phase of the shell’s expansion, $`\mathrm{\Gamma }`$ is effectively constant ($`=\mathrm{\Gamma }_0`$). Eventually, the shell begins to decelerate as it sweeps up the interstellar medium (ISM). Without detailed knowledge of the physical process that generates the gamma rays, it is not clear if the expansion is adiabatic or dominated by radiation losses. However, both solutions lead to a power law decay of the Lorentz factor (Sari, Piran, & Narayan (1998); Mészáros, Rees, & Wijers (1998); Rees & Mészáros (1998)). For typical parameters, the decay is $`\mathrm{\Gamma }(T)\propto T^{-3/8}`$, although other indexes are possible (see Sari, Piran, & Narayan (1998); Mészáros, Rees, & Wijers (1998); Rees & Mészáros (1998)). Sari & Piran (1999a) predicted that the very early deceleration could be somewhat slower, $`\mathrm{\Gamma }(T)\propto T^{-1/4}`$, in a long plateau before the $`\mathrm{\Gamma }\propto T^{-3/8}`$ phase, although this depends on the initial conditions and was not observed in this burst.
The proper time for a gamma-ray pulse in our rest frame, $`\mathrm{\Delta }t`$, should be measured by clocks placed in our rest frame at all emitting sites, clearly an impossible task. Rather, we have one clock (i.e., the BATSE detector) that measures when the photons arrive at a single location in our rest frame. We denote the arrival time as $`T`$. Rather than a Lorentz transformation, these two “times” are related by $`\mathrm{\Delta }T=(1-\beta \mathrm{cos}\theta )\mathrm{\Delta }t`$, where $`\theta `$ is the angle between our line of sight and the region on the shell which is emitting. The relationship between time measured by a clock moving with the shell ($`\mathrm{\Delta }t^{\prime }`$) and our rest frame time is a Lorentz transformation: $`\mathrm{\Delta }t^{\prime }=\mathrm{\Gamma }^{-1}\mathrm{\Delta }t`$. Thus, BATSE time is related to time measured in the rest frame of the shell as
$$\mathrm{\Delta }T=\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )\mathrm{\Delta }t^{\prime }.$$
$`(1)`$
Here, we have ignored cosmological terms because they do not introduce any differential effects.
The average profile of GRBs displays a fast rise and a slower decay. This profile is often abbreviated as a “FRED” (fast rise, exponential decay), although the actual average decay is linear (Fenimore (1999)). The fast rise indicates that the shell emits only over a small range of radii, and one is observing emission from near the line of sight early in the burst. The slow decay is due to the late arrival of emission from regions off the line of sight. Using Equation (1) evaluated at $`\theta =0`$ and $`\theta =\mathrm{\Gamma }^{-1}`$, the later emission from a shell ought to have pulses that are about twice as long as the pulses at the beginning of the burst. If, in addition, the shell is slowing down, the $`\mathrm{\Gamma }`$ dependency would cause a commensurate increase in the pulse width. An analysis of 53 bright BATSE bursts showed that the pulse width is remarkably constant (to within a few percent) throughout the gamma-ray emitting phase of a GRB. That constancy strongly indicates that the only kinematically acceptable external shock model for the gamma-ray burst phase is one that has no deceleration and comes from a range of angles that is much smaller than $`\mathrm{\Gamma }^{-1}`$ (Ramirez-Ruiz & Fenimore (1999)).
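The factor-of-two statement follows directly from Equation (1) and can be checked numerically; the sketch below evaluates the kinematic stretch $`\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )`$ on axis and at $`\theta =\mathrm{\Gamma }^{-1}`$.

```python
import numpy as np

# The kinematic stretch Gamma(1 - beta cos theta) at theta = 1/Gamma is
# about twice its on-axis value, nearly independently of Gamma.
for gamma in (1.0e2, 3.0e2, 1.0e3):
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    on_axis = gamma * (1.0 - beta)
    off_axis = gamma * (1.0 - beta * np.cos(1.0 / gamma))
    print(f"Gamma = {gamma:6.0f}: ratio = {off_axis / on_axis:.3f}")
```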
The GRB990123 observations eliminate that last remaining external shock model. In this burst, we see an optical signal that peaks and decays during the gamma-ray emitting phase; either behavior would be a clear indication that the shell has started to decelerate. In our previous work, we showed the lack of time variability in two ways. First, we showed that the average of the aligned peaks of many bursts is virtually identical throughout the $`T_{90}`$ period. (Here, $`T_{90}`$ is the duration containing 90% of the gamma-ray photons.) Second, we used the fits to pulses by Norris et al. (1996) to show that there is no trend in individual bursts for peaks to be wider late in the gamma-ray phase. Since GRB990123 is an individual burst, it would be best to fit the individual peaks until all variations have been accounted for and then determine whether a trend is present. However, the complexity of the overlap between peaks in GRB990123 would probably prevent fitting every peak (see Norris et al. (1996)).
To determine the variability of the time scale in GRB990123, we have analyzed four regions, labeled A–D in Figure 1. In each region, we first removed the envelope of emission by subtracting the time history smoothed with a boxcar function (width = 2 s). An autocorrelation of the residuals (see Fig. 2) has a width that is related to the average time structure in each region. If the pulses were, on average, wider later in the burst, one would observe autocorrelation functions for A–D that progressively get wider. Rather, sometimes A is wider, sometimes D is wider. The maximum spread of $`\mathrm{\Delta }T_\mathrm{D}/\mathrm{\Delta }T_\mathrm{A}`$, where the autocorrelation is greater than 0.5, is only 1.15. However, where the autocorrelation function is 0.25, $`\mathrm{\Delta }T_\mathrm{D}`$ is actually narrower than $`\mathrm{\Delta }T_\mathrm{A}`$, by a factor of $`1/1.15`$ (and $`\mathrm{\Delta }T_C`$ and $`\mathrm{\Delta }T_B`$ are even narrower than $`\mathrm{\Delta }T_\mathrm{A}`$). On average, $`\mathrm{\Delta }T_\mathrm{D}/\mathrm{\Delta }T_\mathrm{A}`$ is about unity, and we are confident that its value is less than the maximum observed value of 1.15.
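The procedure is simple enough to outline in a few lines. The sketch below applies it to a synthetic time history that merely stands in for the BATSE record; the 64 ms binning, the pulse parameters, and the pulse count are assumptions for illustration.

```python
import numpy as np

# Subtract a 2 s boxcar-smoothed envelope, autocorrelate the residuals,
# and read off the width where the autocorrelation drops to 0.5.
dt = 0.064                                   # assumed bin size [s]
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(0)
rate = sum(np.exp(-0.5 * ((t - t0) / 0.13) ** 2)   # ~0.3 s FWHM pulses
           for t0 in rng.uniform(0.0, 20.0, 40))

box = int(round(2.0 / dt))                   # 2 s boxcar
envelope = np.convolve(rate, np.ones(box) / box, mode="same")
resid = rate - envelope

acf = np.correlate(resid, resid, mode="full")[len(resid) - 1:]
acf /= acf[0]
half_width = dt * np.argmax(acf < 0.5)       # first lag below 0.5
print(f"autocorrelation half width ~ {half_width:.2f} s")
```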
There are two sources of pulse width from an external shell: angular effects and deceleration. The pulse widths at two times A and D are related as
$$\frac{\mathrm{\Delta }T_\mathrm{D}}{\mathrm{\Delta }T_\mathrm{A}}=\frac{\mathrm{\Gamma }_\mathrm{D}(1-\beta \mathrm{cos}\theta _\mathrm{D})}{\mathrm{\Gamma }_\mathrm{A}(1-\beta \mathrm{cos}\theta _\mathrm{A})}$$
$`(2)`$
where $`\theta _\mathrm{D}`$ and $`\theta _\mathrm{A}`$ are the angles responsible for the emission. If the emission is from a relativistic shell that turns on for a short period after expanding away from the central site, then (from an equation in Fenimore, Madras, & Nayakshin (1996)):
$$\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta _\mathrm{D})=\frac{T_\mathrm{D}}{2\mathrm{\Gamma }T_0}$$
$`(3)`$
where $`T_\mathrm{D}`$ must be measured from when the shell left the central site, and $`T_0`$ is the time of the peak of the emission. During the afterglow, $`\mathrm{\Gamma }(T)\propto T^{-3/8}`$, so
$$\frac{\mathrm{\Delta }T_\mathrm{D}}{\mathrm{\Delta }T_\mathrm{A}}=\left[\frac{T_\mathrm{D}}{T_\mathrm{A}}\right]^{11/8}.$$
$`(4)`$
It is likely that the shell started to leave the central site at about the time when the first gamma rays were emitted. BATSE detected emission quite early in this burst, so we will use the BATSE time for $`T`$. Time period A is at $`\sim 45`$ s after the start, and period D is at $`\sim 82`$ s. Based on this, we expect the pulse widths to increase by about a factor of 2.3, much larger than observed, and very easy to detect if it were present. Dermer & Mitman (1999) performed detailed simulations of GRB pulses produced by external shocks on a decelerating shell. These simulations (e.g., Fig. 2 in Dermer & Mitman (1999)) show pulses that are, indeed, more than a factor of 2 wider late in the burst. This is not observed in most GRBs, and not in GRB990123.
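The factor of 2.3 quoted above follows directly from Equation (4) with the interval times just given:

```python
# Expected pulse-width stretch between intervals A and D (Eq. 4),
# measuring T from the BATSE start time as argued above.
T_A, T_D = 45.0, 82.0
print((T_D / T_A) ** (11.0 / 8.0))   # ~2.3, versus the observed <1.15
```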
## 3 SURFACE FILLING FACTOR FOR GRB990123
Another argument against a shell model is given by an analysis of the “surface filling factor”. We define the surface filling factor as the fraction of the shell’s surface that becomes active. Let $`A_N`$ be the area of an emitting entity and $`N_N`$ be the number of entities that (randomly) become active during an observation period, $`T_{\mathrm{obs}}`$. If $`A_{\mathrm{obs}}`$ is the area of the shell that can contribute during $`T_{\mathrm{obs}}`$, then the surface filling factor is
$$f=N_N\frac{A_N}{A_{\mathrm{obs}}}=N_N\frac{A_N}{\eta A_S}$$
$`(5)`$
where $`\eta `$ is the fraction of the visible area of the shell ($`A_S`$) that contributes during the interval $`T_{\mathrm{obs}}`$ (approximately 1). The rapid variations in GRB time histories imply emitting entities of size $`\mathrm{\Delta }R_{\perp }\approx c\mathrm{\Gamma }\mathrm{\Delta }T_p`$. Assuming a single expanding shell, these entities must form on a much larger surface of size $`c\mathrm{\Gamma }T`$.
There are three cases. Case a is the constant $`\mathrm{\Gamma }`$ phase, case b is the initial deceleration when the size of the shell ($`\mathrm{d}\mathrm{\Omega }`$) exceeds the radiation beaming angle, and case c is when the deceleration reduces the beaming such that the shell’s angular size is no longer larger than the beaming angle. For these cases, the filling factor is related to the observations by
$$f=\{\begin{array}{cc}N_N\left[\frac{\mathrm{\Delta }T_p}{T}\right]^2\frac{1}{k\eta }\hfill & \text{case a: }\mathrm{\Gamma }(T)=\mathrm{\Gamma }_0\text{,}\hfill \\ N_N\mathrm{d}\mathrm{\Omega }^{-1}10^{-6}\left[\frac{4\pi E_{54}}{\rho _0\mathrm{d}\mathrm{\Omega }}\right]^{1/4}\frac{\mathrm{\Delta }T_p^2}{k\eta T^{5/4}}\hfill & \text{case b: }\mathrm{d}\mathrm{\Omega }>2\pi (1-\mathrm{cos}\mathrm{\Gamma }^{-1})\text{,}\hfill \\ N_N\frac{0.07}{k\eta }\left[\frac{\mathrm{\Delta }T_p}{T}\right]^2\hfill & \text{case c: }\mathrm{d}\mathrm{\Omega }<2\pi (1-\mathrm{cos}\mathrm{\Gamma }^{-1})\text{,}\hfill \end{array}$$
$`(6)`$
where $`k`$ is 16 if the pulses are the result of the shell interacting with ambient objects (e.g., clouds) and 13 for entities that undergo causally connected growth (e.g. shocks, see Fenimore et al. (1999) for details). Here, the term $`\frac{4\pi E_{54}}{\rho _0\mathrm{d}\mathrm{\Omega }}`$ relates the characteristics of the shell to how fast it slows down. $`E_{54}`$ is the kinetic energy of the shell in units of $`10^{54}`$ erg and $`\rho _0`$ is the density of the material the shell runs into.
Basically, $`N_N`$ is the number of individual peaks in the time history but, because peaks often overlap, one cannot just count the number of peaks. Rather, the number of entities can be estimated from the fluctuations in the time history under the assumption that the (non-counting statistics) fluctuations are due to a randomly varying number of underlying entities. In any random process, the square of the mean divided by the root-mean-square is approximately the rate of occurrence, $`\mu _N`$. We remove the envelope of emission (by either fitting a function or smoothing the time history), and determine $`\mu _N`$ from the fluctuations in the residuals. Then, $`N_N`$ is $`\mu _N\mathrm{\Delta }T_p/T`$, where $`\mathrm{\Delta }T_p`$ is the typical pulse width for which we use 0.3 s (from Fig. 2).
In Figure 3 we show the distribution of surface filling factors as a function of burst duration $`T_{50}`$ based on BATSE bursts. The solid squares are FRED-like bursts, and the open squares are long complex bursts. Although some of the smooth FRED-like bursts can have surface filling factors near unity, most bursts have values on the order of $`5\times 10^3`$.
The solid circle is the filling factor for GRB990123. We have used equation (6a) even though the optical emission indicates that the shell is decelerating. Equations (6b and c) are for decelerating shells and would give even smaller values. GRB990123 has a very typical value for the filling factor (0.008) implying that there are many fewer emitting entities than the minimum number of possible emitting sites. This forces one to conclude that the gamma-ray emitting shell must have angular structure much smaller than $`\mathrm{\Gamma }^1`$ and that only a portion of the shell becomes gamma-ray active. It does not, however, necessarily indicate that more energy is needed in the reservoir (Fenimore et al. (1999)).
## 4 DISCUSSION
Although 9th magnitude optical emission seems incredible from an object at $`z=1.6`$, it was predicted a few weeks before the event (Sari & Piran 1999a). The low time resolution of the ROTSE data prevents detailed comparisons to the theory, but the agreement is remarkable. Sari & Piran (1999a) predicted early optical emission of up to 7th magnitude, and ROTSE saw at least 8.95 mag in this burst. They also predicted a fast rise with a power law slope of up to 3.7, and the first two ROTSE points can be connected by a power law with a 3.5 index. The ROTSE optical peak is at 45 s from the start of the event, and Sari & Piran (1999a) predicted 30 to 50 seconds, depending on the initial Lorentz factor. The agreement is not perfect, however: they predict peak times of 30–50 s for short bursts, whereas this is a long burst. Much better agreement is achievable with the proper selection of parameters (Sari & Piran 1999b). Nevertheless, we feel that the points of agreement that exist with the theory are additional evidence that there is a decelerating external shock during the gamma-ray phase.
If the gamma rays are produced by an external shock, the resulting pulse width should increase. Analyses of previous bursts showed no such trend, forcing one to conclude that the only viable external shock model had to involve very narrow shells and no deceleration (Ramirez-Ruiz & Fenimore (1999)). GRB990123 appears to be a normal GRB, with normal time variations, as witnessed by a normal filling factor (see Fig. 3). From the optical and gamma-ray data, we estimate that the pulse width should increase by about a factor of 2.3 from a combination of angular and deceleration effects. Averages from many bursts show an increase of only a few percent (Ramirez-Ruiz & Fenimore (1999)). From this single burst, we find changes of less than 15% (see Fig. 2). We conclude that the gamma-rays are not coming from the decelerating shell but from the central site.
We have shown that the source region for the gamma rays must be small to explain the variability. Since our limitation is based on kinematics, we cannot comment on the physical process that generates the gamma-ray phase. In particular, we cannot say that internal shocks are the origin of the gamma rays. However, internal shocks from a wind seem to require large variations in the $`\mathrm{\Gamma }`$ of the shells that collide ($`\sim `$ a factor of 2). It is surprising that such a large variation produces no effect on the resulting pulse widths. Thus, we conclude that the source is small (a “central engine”) but not that it is necessarily powered by internal shocks.
We thank Galen Gisler and Hui Li for useful comments on this manuscript. Gerry Fishman and the BATSE team provided the GRB990123 time history. This work was done under the auspices of the US Department of Energy.
Figure 1
Figure 2
Figure 3
# Effective widths and integrated cross sections for two-photon transitions in 178Hf
(23 February 1999)
The possibility of increasing the activity of an isomeric sample by irradiating the sample with a laser pulse, and the x-ray pumping of nuclear transitions, have been discussed some time ago. We have recently studied the rates for two-photon transitions from isomeric states to given final states, induced by an incident flux of photons having a continuous energy distribution.
In this work we have calculated integrated cross sections for two-photon processes in <sup>178</sup>Hf. The calculations are based on tabulated nuclear data. We have assumed that the nucleus is initially in a state $`|i\rangle `$ of energy $`E_i`$ and spin $`J_i`$, which in general is different from the isomeric state. By absorbing an incident photon of energy $`E_{ni}`$, the nucleus makes a transition to a higher intermediate state $`|n\rangle `$ of energy $`E_n`$, spin $`J_n`$ and half-life $`t_n`$. The state $`|n\rangle `$ then decays into a lower state $`|l\rangle `$ of energy $`E_l`$ and spin $`J_l`$ by the emission of a gamma-ray photon having the energy $`E_{nl}`$ or by internal conversion. In some cases the state $`|l\rangle `$ may be situated above the initial state $`|i\rangle `$, and in these cases the transition $`|n\rangle \to |l\rangle `$ is followed by further gamma-ray transitions to lower states.
We have analyzed two-photon transitions in <sup>178</sup>Hf for which there is an intermediate state $`|n\rangle `$ of known energy $`E_n`$, spin $`J_n`$ and half-life $`t_n`$, for which the intermediate state is connected by a known gamma-ray transition to the initial state $`|i\rangle `$ and to a lower state $`|l\rangle `$, and for which the relative intensities $`R_{ni},R_{nl},R_{nl^{\prime }}`$ of the transitions to lower states are known.
The cross sections are summed over all lower states $`|l\rangle `$ which are possible for a given pair of initial and intermediate states $`|i\rangle `$, $`|n\rangle `$. The integrated cross sections are calculated as
$$\sigma _{int}^{tot}=\frac{2J_n+1}{2J_i+1}\frac{\pi ^2c^2\hbar ^2}{E_{ni}^2}\hbar \mathrm{\Gamma }_{eff}^{tot},$$
(1)
where
$$\mathrm{\Gamma }_{eff}^{tot}=\sum _{l\ne i}\mathrm{\Gamma }_{eff},$$
(2)
and where $`E_{ni}=E_n-E_i`$. The quantity $`\mathrm{\Gamma }_{eff}`$ is the effective width of the two-photon transition,
$$\mathrm{\Gamma }_{eff}=F_R\mathrm{ln}2/t_n,$$
(3)
where $`t_n`$ is the half-life of the intermediate state $`|n\rangle `$, and the dimensionless quantity $`F_R`$ has the expression
$$F_R=\frac{(1+\alpha _{nl})R_{ni}R_{nl}}{\left[(1+\alpha _{ni})R_{ni}+(1+\alpha _{nl})R_{nl}+\sum _{l^{\prime }}(1+\alpha _{nl^{\prime }})R_{nl^{\prime }}\right]^2}.$$
(4)
In Eq. (4), $`\alpha _{ni}`$, $`\alpha _{nl}`$, and $`\alpha _{nl^{\prime }}`$ are the internal conversion coefficients for the transitions $`|n\rangle \to |i\rangle `$, $`|n\rangle \to |l\rangle `$, and $`|n\rangle \to |l^{\prime }\rangle `$, with $`l^{\prime }\ne i,l`$.
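Equations (1)–(4) are straightforward to evaluate. The sketch below does so for a single hypothetical intermediate state; all nuclear inputs (spins, energy, half-life, branching intensities, conversion coefficients) are illustrative placeholders, not entries of Table I.

```python
import numpy as np

# Evaluation of Eqs. (1)-(4) for one hypothetical intermediate state.
HBAR_KEV_S = 6.582e-19          # hbar [keV s]
HBARC2 = 3.894e-16              # (hbar c)^2 [keV^2 cm^2]

J_i, J_n = 0.0, 2.0             # assumed spins
E_ni = 1200.0                   # assumed transition energy [keV]
t_n = 1.0e-12                   # assumed half-life [s]
R = {"ni": 50.0, "nl": 100.0}   # relative intensities (no further l')
alpha = {"ni": 0.01, "nl": 0.02}

den = sum((1.0 + alpha[key]) * R[key] for key in R) ** 2
F_R = (1.0 + alpha["nl"]) * R["ni"] * R["nl"] / den        # Eq. (4)
Gamma_eff = F_R * np.log(2.0) / t_n                        # Eq. (3), [1/s]
sigma_int = ((2 * J_n + 1) / (2 * J_i + 1)
             * np.pi**2 * HBARC2 / E_ni**2
             * HBAR_KEV_S * Gamma_eff)                     # Eq. (1)
print(f"F_R = {F_R:.3f}, sigma_int = {sigma_int:.2e} cm^2 keV")
```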
The effective widths $`\mathrm{\Gamma }_{eff}^{tot}`$ and the integrated cross sections $`\sigma _{int}^{tot}`$ are given in Table I. The multipolarities of the transitions are given in columns $`ni`$ and $`nl`$. The largest integrated cross section found among the listed 24 two-photon processes in <sup>178</sup>Hf has the value $`1.65\times 10^{-26}`$ cm<sup>2</sup> keV. If the initial state is the isomeric state at $`E_i`$=2446 keV, the nuclear parameters required for the calculation of the two-photon integrated cross section are known at present only for the intermediate state at $`E_n`$=2573 keV.
# Study of supernarrow dibaryon production in the process $`pd\to pX`$
## Abstract
The reactions $`p+d\to p+p(\gamma n)`$ and $`p+d\to p+d(\gamma )`$ at 305 MeV are studied with the aim of searching for supernarrow dibaryons. The experiments were carried out at the Moscow Meson Factory using the spectrometer TAMS, which detected two charged particles at various angles. Narrow structures in the missing-mass spectra at 1905 and 1924 MeV have been observed. An analysis of the angular dependence of the experimental data shows that the resonance at $`M`$=1905 MeV most likely corresponds to the production of an isovector supernarrow dibaryon.
Recently, a number of works have appeared in which narrow dibaryons were searched for in experiments on nucleon collisions with few-body systems at intermediate energies.
Earlier, we carried out measurements of missing-mass spectra in the reactions $`pd\to pX\to pd(\gamma )`$ and $`pd\to pX\to pp(n\gamma )`$ at an incident proton energy of 305 MeV using a double-arm spectrometer. A narrow structure in the missing-mass spectrum at 1905 MeV, with a width equal to the experimental resolution of 7 MeV, was observed in this experiment.
Now, we present further results of the study of the reactions under consideration with the improved facility. The experiments were performed with the proton accelerator of the Moscow Meson Factory at 305 MeV. A proton beam alternately bombarded CD<sub>2</sub> and <sup>12</sup>C targets. The $`pd`$-reaction contribution was determined by subtraction of the <sup>12</sup>C spectrum from the CD<sub>2</sub> one. The two-arm spectrometer TAMS detected the scattered proton in coincidence with the second charged particle ($`p`$ or $`d`$).
The left, movable spectrometer arm, a single $`\mathrm{\Delta }E`$–$`\mathrm{\Delta }E`$–$`E`$ telescope, was used to measure the energy and time of flight of the scattered proton at $`\theta _L=72.5^{\circ }`$ (or $`70^{\circ }`$ in another run). The right, fixed arm detected the proton or the deuteron from the expected dibaryon decay. It consisted of three telescopes, which were located at $`\theta _R=33^{\circ }`$, $`35^{\circ }`$, and $`37^{\circ }`$. These angles correspond to the directions of motion of the produced dibaryons for the chosen mass ranges. Each telescope included a full-absorption detector and two thin plastic $`\mathrm{\Delta }E`$ detectors for a time-of-flight measurement. A trigger was generated by a coincidence of the $`\mathrm{\Delta }E`$ detector signals of the left arm with those of any right-arm telescope. Selected by a coincidence, the $`E`$-signals of the scattered proton form its energy spectrum and, accordingly, a missing-mass spectrum. The spectrometer was calibrated using the peak of elastic $`pd`$ scattering.
The experimental missing-mass spectra obtained on the targets of deuterated polyethylene and carbon are shown in Figs. 1a–1c. Each spectrum corresponds to a certain combination of outgoing angles of the scattered proton and the second charged particle. These combinations in Figs. $`1b`$ and $`1c`$ are consistent with the change in the emission angle $`\theta _R`$ of a dibaryon with the given mass when the angle $`\theta _L`$ is equal to $`70^{\circ }`$ or $`72.5^{\circ }`$. As evident from Fig. 1, resonance-like behavior of the spectra is observed in two mass regions for the CD<sub>2</sub> target, while the spectra for the carbon target are smooth.
There are $`58\pm 13`$ events in the peak at $`1905\pm 2`$ MeV, which is shown in Fig. $`1b`$. The statistical significance of this resonance is 4.5 standard deviations. The spectrum in Fig. $`1a`$ shows the other peak, with mass $`M=1924\pm 2`$ MeV, containing $`79\pm 16`$ events. The statistical significance of this structure is 4.7 S.D. The widths of both observed peaks correspond to the experimental resolution (3 MeV). The peak at 1924 MeV was obtained only for one spectrum, close to the upper limit of the missing mass. In the other cases, this mass position was out of the range of measurement. Therefore, in the present work, we restrict ourselves to an analysis of the peak at 1905 MeV.
The experimental missing-mass spectra in the range of 1895–1913 MeV, after subtracting the carbon contributions, are shown in Figs. 2a–2c.
As seen from Figs. 1 and 2, the resonance behavior of the cross section manifests itself only in a limited angular region.
If the observed structure at $`M=1905`$ MeV corresponds to a dibaryon decaying mainly into two nucleons, then the expected angular cone of the emitted nucleons would be about $`50^{\circ }`$. Moreover, the angular distributions of the emitted nucleons are expected to be very smooth in the angle region under consideration. Thus, even assuming the dibaryon production cross section to be equal to the elastic scattering one (40 $`\mu `$b/sr), their contribution to the missing-mass spectra in Figs. 2a–2c would be nearly the same and would not exceed a few events. Hence, the found peaks can hardly be interpreted as a manifestation of the formation and decay of such states.
Supernarrow dibaryons have been considered whose decay into two nucleons is suppressed by the Pauli exclusion principle. Such states with a mass $`M<2m_N+m_\pi `$ ($`m_N`$ and $`m_\pi `$ are the masses of the nucleon and the pion) can decay mainly by photon emission.
Using a Monte Carlo simulation, we estimated the contribution of supernarrow dibaryons with different quantum numbers and $`M`$=1905 MeV to the mass spectra at various angles of the left and right arms of our setup. The production cross section and branching ratios of these states were taken from previous work. The obtained results are listed in the table.
This calculation showed that the angular cone of charged particles emitted from a certain dibaryonic state can be quite narrow. The axis of this cone is aligned with the direction of the dibaryon emission. Therefore, by placing the right spectrometer arm at the expected angle of the dibaryon emission, we substantially increase the signal-to-background ratio.
In Figs. 2a–2c, the experimental spectra are compared with the predicted yields normalized to the maximum of the measured signal in Fig. $`2b`$. The solid and dashed curves in this figure correspond to states with isospin $`T`$=1 and $`T`$=0, respectively.
As seen from this figure and the table, the ratios of the calculated contributions to the given spectra are expected to be $`0.3:1:0.7`$ if the state at 1905 MeV is interpreted as an isovector dibaryon [$`D(T=1,J^P=1^+)`$ or $`D(1,1^{-})`$]. This is in agreement with our experimental data within the errors. In contrast, the signals from isoscalar dibaryons [$`D(0,0^+)`$ or $`D(0,0^{-})`$] would be observed in Figs. 2b and 2c with the same probability.
The following conclusions can be drawn: 1) as a result of the study of the reactions $`pd\to pd(\gamma )`$ and $`pd\to pp(\gamma n)`$, two narrow structures at 1905 and 1924 MeV with widths of less than 3 MeV were observed in the missing-mass spectra; 2) the analysis of the angular dependence of the experimental and theoretical yields of the reactions under consideration showed that the observed peak at 1905 MeV can be explained as a manifestation of supernarrow dibaryons, whose decay into two nucleons is suppressed by the Pauli exclusion principle; 3) the observed state most likely has an isospin equal to 1.
# Unusual Burst Emission from the New Soft Gamma Repeater SGR1627-41
## 1 Introduction
Repeating soft gamma-ray bursts were discovered 20 years ago (Golenetskii, Ilyinskii, & Mazets, 1984; Mazets, Golenetskii, & Guryan, 1979; Atteia et al. 1987). For a long time only three soft gamma repeaters were known (Norris, Hertz, & Wood 1991), suggesting a rarity of this class of astrophysical objects (Kouveliotou et al. 1998). Two of them, SGR 1806-20 and SGR 1900+14, have exhibited reactivation phases after long periods of silence. Precise localizations and searches for optical counterparts revealed that all three SGRs are associated with rather young supernova remnants (Cline et al. 1982; Kulkarni et al. 1994; Hurley et al. 1999a), favoring the suggestion that SGRs are neutron stars. Quiescent soft X-ray sources and emission periodicity were discovered associated with SGR 1806-20 (Murakami et al. 1994), SGR 1900+14 (Hurley et al. 1999b) and SGR 0526-66 (Rothschild, Kulkarni & Lingenfelter 1994). A spectacular, huge periodic flare from SGR 1900+14 on August 27, 1998 turned out to be strikingly similar to the famous event of March 5, 1979. It demonstrated that such giant outbursts are an intrinsic, though less common, characteristic of SGRs (Cline, Mazets & Golenetskii 1998). The observational data accumulated so far on SGRs find their most complete explanation in the magnetar model of Thompson and Duncan (Thompson & Duncan 1995), which proposes that SGRs are young, slowly rotating neutron stars with superstrong magnetic fields of $`10^{15}G`$.
In June 1998, BATSE announced the observation of a fourth soft gamma repeater, SGR 1627-41 (Kouveliotou et al. 1998), confirmed by Ulysses (Hurley et al. 1998a), BeppoSAX (Feroci et al. 1998), RXTE (Smith & Levine 1998), and Konus-Wind (Hurley et al. 1998b) data. This SGR was precisely localized by IPN/BeppoSAX (Hurley et al. 1999d; Woods et al. 1999). Its position coincides with the SNR G337.0-0.1. Some evidence was obtained for a possible periodicity of 6.7 s (Dieters et al. 1998).
In this letter we report the temporal and spectral properties of SGR 1627-41, as well as unusual behavior of this source recorded in Konus-Wind observations.
## 2 Observations
The cosmic gamma-ray burst spectrometer Konus aboard the GGS WIND spacecraft observed 34 bursts from the new SGR in the period June 17 – July 12, 1998. Some events were too weak to trigger the instrument; they were recorded in a background mode with a time resolution of 2.94 s, which is too coarse to study processes lasting a small fraction of a second. The bursts from the new source were strongly bunched in time: on two days, June 17 and 18, 29 bursts were observed. This high rate created an additional problem. The information on a burst which triggered the instrument is read out by the spacecraft telemetry system over a period of about one hour. If another burst occurs in this time interval, it cannot trigger the instrument and is recorded only among housekeeping data with 3.86 s resolution. As a result, we obtained triggered records of only 13 bursts, which nevertheless give a good idea of the temporal and spectral behavior of this SGR. Regrettably, as we can see from the untriggered data, several very interesting events were not caught with high time resolution.
In Figure 1 we show some examples of time histories of bursts recorded with a time resolution of 2 ms at photon energies $`E_\gamma >15`$ keV. One can see that the temporal structures are rather complex, with rise and fall times of only a few milliseconds. Energy spectra, as expected, are soft with an exponential cutoff, $`kT\simeq 25`$ keV. Spectral variability is apparent in the course of many bursts. For example, the time histories G1 and G2 of the burst of June 18, 16229.0 s UT, recorded in the low-energy window G1 (15–55 keV) and the middle window G2 (55–250 keV), as well as the hardness ratio G2/G1, are shown in Figure 2. Detailed energy spectra for this event, accumulated after the trigger time $`T_o`$ in subsequent 64 ms long time intervals, are presented in Figure 3. These photon spectra can be fitted by the expression $`dN/dE\propto E^{-0.5}e^{-E/kT}`$. The weak tail of the burst exhibits a fading power-law spectrum with an index of $`-2.8`$. The data obtained for the 13 events are summarized in Table 1, which contains values of kT, fluences, and peak fluxes, as well as the energy output and maximum luminosity of the source assuming a 5.8 kpc distance (Case & Bhattacharya 1998).
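For illustration, the fitted spectral shape directly fixes the hardness ratio shown in Fig. 2. The minimal sketch below integrates the fitted photon spectrum over the two instrument windows; the detector response is ignored, so the numbers are indicative only.

```python
import numpy as np
from scipy.integrate import quad

def dn_de(E, kT):
    """Photon spectrum dN/dE ~ E^(-0.5) exp(-E/kT); E and kT in keV."""
    return E**-0.5 * np.exp(-E / kT)

# Hardness ratio G2/G1 for the two instrument windows quoted in the text,
# G1 = 15-55 keV and G2 = 55-250 keV, ignoring the detector response.
for kT in (25.0, 50.0):
    g1, _ = quad(dn_de, 15.0, 55.0, args=(kT,))
    g2, _ = quad(dn_de, 55.0, 250.0, args=(kT,))
    print(f"kT = {kT:.0f} keV: G2/G1 = {g2 / g1:.2f}")
```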
One event in this Table, GRB 980618a at 6153.5 s UT, stands out dramatically in intensity compared to the other bursts. Its intensity is so high that the count rates in the low and middle energy windows approached the saturation level. High count rates appear in the high-energy window G3 (250–1000 keV) due to the pile-up of light pulses in the scintillator and photomultiplier. The instrument response to very high fluxes with various incident photon spectra was studied thoroughly in a laboratory simulation using a spare unit (Mazets et al. 1999). This enables a reliable deconvolution of the Konus-Wind input fluxes. The time history of the giant outburst, after correction for dead-time and pile-up effects, is shown in Figure 4. Possible errors in the peak-flux region can be represented by a scale factor of 0.5–1.5.
Figure 5 presents energy-loss spectra accumulated during four adjacent 64 ms long intervals and one 256 ms interval. The very hard spectrum in the 64–128 ms interval is partially the result of the pile-up of light flashes in the NaI crystal. From the laboratory testing data, however, it is evident that in this case the actual incident photon spectrum is also much harder than the spectrum for the first interval, $`T-T_o`$ = 0–64 ms. Its kT can be as high as 100–150 keV, whereas the unaffected spectra A, D, and E correspond to kT = 50, 50, and 35 keV, respectively.
## 3 Discussion and conclusion
The fluence and peak flux of this event exceed by several hundred times the values for the other bursts of the series. At the assumed source distance of 5.8 kpc, they indicate an energy output of $`3\times 10^{42}\mathrm{erg}`$ and a peak luminosity of $`8\times 10^{43}\mathrm{erg}\mathrm{s}^{-1}`$. These quantities approach the huge energy releases and maximum luminosities of the giant outbursts of March 5, 1979 and August 27, 1998 (Hurley et al. 1999c). From this point of view, the burst of June 18, 1998, 6153.5 s UT is also a giant outburst, seen in a third SGR! But this burst differs strikingly from them: it exhibits no fast rise time, no long tail, and no pulsations. It is similar to the ordinary repeating bursts from SGR 1627-41, but much stronger.
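The conversion from fluence to energy output is simple geometry. A minimal sketch, assuming isotropic emission: the uncorrected fluence of about 2×10<sup>-4</sup> erg cm<sup>-2</sup> quoted below already gives the right order of magnitude, while the 3×10<sup>42</sup> erg figure presumably reflects the dead-time and pile-up corrected fluence.

```python
import math

KPC_CM = 3.086e21   # one kiloparsec in centimeters

def isotropic_energy(fluence, distance_kpc):
    """E = 4*pi*d^2*S for isotropic emission; fluence S in erg/cm^2."""
    d = distance_kpc * KPC_CM
    return 4.0 * math.pi * d**2 * fluence

# Uncorrected fluence ~2e-4 erg/cm^2 at the assumed 5.8 kpc distance.
print(f"E ~ {isotropic_energy(2e-4, 5.8):.1e} erg")
```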
It is now widely accepted that the normal activity of an SGR is the result of starquakes on a neutron star, leading to the liberation of a large amount of the energy contained in superstrong magnetic fields of $`10^{15}G`$ (Thompson & Duncan 1995). These authors have proposed that giant outbursts like the March 5 event are created by large-scale reconnection instabilities of a huge stellar magnetic field, while the more frequent, weaker repeating bursts are generated during fractures of the neutron star crust. It was suggested that the energy of starquake-related bursts (i.e., their observed fluences) may depend on the size/length of the fractures (Duncan, 1998; Golitsin, 1998). In this picture, an upper limit on the burst energy output is set by the maximum length of a fracture crossing the whole surface of the neutron star. It seems that the burst of June 18, 1998 may be the first observed example of an event close to such an upper limit.
Rise times of repeating bursts are expected to scale with their fluence S as $`\tau \propto S^{1/3}`$ (Golitsin, 1998). For weak bursts of (3–6)$`\times 10^{-7}`$ erg cm<sup>-2</sup>, rise times are $`\sim 10`$ ms. For the giant event with $`S\sim 2\times 10^{-4}`$ erg cm<sup>-2</sup>, the rise time is $`\sim 10^2`$ ms, in accord with the prediction.
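This scaling is easy to check numerically with the values just quoted:

```python
# Check of the tau ~ S^(1/3) scaling: scale the ~10 ms rise time of weak
# bursts (S ~ 3-6e-7 erg/cm^2; mid-range used below) to the giant-burst
# fluence of ~2e-4 erg/cm^2.
tau_weak_ms = 10.0
s_weak = 4.5e-7
s_giant = 2e-4

tau_giant = tau_weak_ms * (s_giant / s_weak) ** (1.0 / 3.0)
print(f"predicted rise time of the giant burst ~ {tau_giant:.0f} ms")
```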
There is some evidence that such energetic bursts may be quite common. Among the bursts detected with Konus-Wind in the untriggered background mode there are at least two events, June 18 14360 s UT and June 18 14661 s UT, with very high fluences. If their durations were short compared to the 3.86 s time resolution, then their fluences, corrected for dead time, could be comparable with that of the strong June 18 burst under discussion.
These results provide an important confirmation of the SGR starquake model predictions.
On the Russian side, this work was supported by an RSA contract.
# A single defect approximation for localized states on random lattices
## Abstract
Geometrical disorder is present in many physical situations that give rise to eigenvalue problems. The simplest case, diffusion on a random lattice with fluctuating site connectivities, is studied analytically and by exact numerical diagonalizations. Localization of eigenmodes is shown to be induced by geometrical defects, that is, sites with abnormally low or high connectivities. We present a “single defect approximation” (SDA) scheme founded on this mechanism that provides an accurate quantitative description of both the extended and the localized regions of the spectrum. We then present a systematic diagrammatic expansion allowing the use of SDA for finite-dimensional problems, e.g. to determine the localized harmonic modes of amorphous media.
Since Anderson’s fundamental work , physical systems in the presence of disorder have been well known to exhibit localization effects . While most attention has so far been paid to Hamiltonians with random potentials (e.g. stemming from impurities), there are situations in which disorder also originates from geometry.
Of particular interest among these are the harmonic vibrations of amorphous materials such as liquids, colloids, and glasses around random particle configurations. Recent experiments on sound propagation in granular media have stressed the possible presence of localization effects, highly correlated with the microscopic structure of the sample. The existing theoretical framework for calculating the density of harmonic modes in amorphous systems was developed in liquid theory. In this context, microscopic configurations are not frozen, but instantaneous normal modes (INM) give access to the short-time dynamics . Wu and Loring and Wan and Stratt have calculated good estimates of the density of INM for Lennard-Jones liquids, averaged over instantaneous particle configurations. However, the localization-delocalization properties of the eigenvectors have not been considered.
Diffusion on random lattices is another problem where geometrical randomness plays a crucial role . The long-time dynamics is deeply related to the small eigenvalues of the Laplacian on the lattice and therefore to its spectral dimension. Campbell suggested that diffusion on a random lattice could also mimic the dynamics taking place in a complicated phase space, e.g. for glassy systems . From this point of view, sites on the lattice represent microscopic configurations and edges represent allowed moves from one configuration to another. At low temperatures, most edges correspond to very improbable jumps and may be erased. The tail of the density of states of the Laplacian on random graphs was studied by means of heuristic arguments by Bray and Rodgers . Localized eigenvectors, closely related to metastable states, are of particular relevance for the asymptotic dynamics.
Remarkably, the above examples lead to the study of the spectral properties of random symmetric matrices $`𝐖`$ sharing common features. In amorphous media, the elastic energy is a quadratic function of the displacements of the particles from their instantaneous “frozen” positions. The INM are the eigenmodes of the stiffness matrix $`𝐖`$. As for diffusion on random lattices, $`𝐖`$ simply equals the Laplacian operator. In both cases, each row of $`𝐖`$ comprises a small (with respect to the size $`N`$ of the matrix) and random number of non-zero coefficients $`W_{ij}`$ and, most importantly, the diagonal elements fluctuate: $`W_{ii}=-\sum _{j(\ne i)}W_{ij}`$ .
In this letter, we present a quantitative approach to explain the spectral properties, and especially the localization effects, of such a random matrix $`𝐖`$ in the simplest case, that is, when all off-diagonal elements of $`𝐖`$ are independent random variables. Our analytical approximation is corroborated by exact numerical diagonalizations. We then present a systematic diagrammatic expansion allowing for the study of more realistic models in the presence of correlated $`W_{ij}`$’s.
The spectral properties of $`𝐖`$ can be obtained through knowledge of the resolvent $`G(\lambda +iϵ)`$, that is, the trace of $`((\lambda +iϵ)\mathrm{𝟏}-𝐖)^{-1}`$ . Denoting the average over disorder by $`\overline{()}`$, the mean density of states reads
$$p(\lambda )=-\frac{1}{\pi }\underset{ϵ\to 0^+}{\mathrm{lim}}\text{Im}\overline{G(\lambda +iϵ)}.$$
(1)
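Before turning to the field-theoretic treatment, note that Eq. (1) is straightforward to evaluate for a single disorder sample by keeping $`ϵ`$ small but finite; a minimal sketch (with the trace normalized per site) in Python:

```python
import numpy as np

def spectral_density(W, lam, eps=1e-2):
    """Single-sample estimate of Eq. (1): p(lambda) = -(1/pi) Im G, with
    G the per-site trace of ((lambda + i eps) 1 - W)^(-1), at finite eps."""
    n = len(W)
    z = (lam + 1j * eps) * np.eye(n) - W
    return -np.trace(np.linalg.inv(z)).imag / (np.pi * n)
```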
The averaged resolvent is then written as the propagator of a replicated Gaussian field theory
$`\overline{G(\lambda +iϵ)}`$ $`=`$ $`\underset{n\to 0}{\mathrm{lim}}{\displaystyle \frac{-i}{Nn}}{\displaystyle \int \underset{i}{\prod }d\stackrel{}{\varphi }_i\underset{k=1}{\overset{N}{\sum }}\stackrel{}{\varphi }_k^{\mathrm{\hspace{0.33em}2}}\underset{i}{\prod }z_i\overline{\underset{i<j}{\prod }(1+u_{ij})}}`$ (2)
$`\text{where}z_i`$ $`\equiv `$ $`z(\stackrel{}{\varphi }_i)=\mathrm{exp}\left({\displaystyle \frac{i}{2}}(\lambda +iϵ)\stackrel{}{\varphi }_i^{\mathrm{\hspace{0.33em}2}}\right)`$ (3)
$`u_{ij}`$ $`=`$ $`\mathrm{exp}\left({\displaystyle \frac{i}{2}}W_{ij}(\stackrel{}{\varphi }_i-\stackrel{}{\varphi }_j)^2\right)-1.`$ (4)
Replicated fields $`\stackrel{}{\varphi }_i`$ are $`n`$-dimensional vector fields attached to each site $`i`$. To lighten the notation, we have restricted (4) to the scalar case. We shall later focus on $`𝐖`$ having an internal dimension, as happens in the INM problem.
In the uncorrelated case, the $`W_{ij}`$’s ($`i<j`$) are independently drawn from a probability law $`𝒫`$. To take into account geometrical randomness only, we focus on the distribution $`𝒫\left(W_{ij}\right)=\left(1-\frac{q}{N}\right)\delta \left(W_{ij}\right)+\frac{q}{N}\delta \left(W_{ij}-w\right)`$ . Such a bimodal law merely defines a random graph: $`i`$ and $`j`$ can be said to be connected if and only if $`W_{ij}`$ does not vanish. Due to the scaling of the edge probability $`\frac{q}{N}`$, the mean site connectivity $`q`$ remains finite for large sizes $`N`$. We rescale the eigenvalues by choosing $`w=-\frac{1}{q}`$ to ensure that the support of the spectrum is positive and bounded when $`q\to \mathrm{\infty }`$ .
Numerical diagonalizations of the random Laplacian $`𝐖`$ have been carried out for different sizes, up to $`N=3200`$. To each eigenvector $`\psi _{i,\ell }`$ of eigenvalue $`\lambda _\ell `$, normalized to unity, is associated the inverse participation ratio $`w_\ell ^4=\sum _i|\psi _{i,\ell }|^4`$. We then define $`w^4(\lambda )d\lambda `$ as the sum of $`w_\ell ^4`$ over all $`\psi _{i,\ell }`$ lying in the range $`\lambda \le \lambda _\ell \le \lambda +d\lambda `$, divided by the number $`Np(\lambda )d\lambda `$ of such eigenvectors. Fig. 1 displays $`p(\lambda )`$ and $`w^4(\lambda )`$ for a mean connectivity $`q=20`$ . The central part of the spectrum ($`\lambda _{-}<\lambda <\lambda _+`$) has a smooth bell shape and corresponds to extended states. For increasing sizes $`N`$ and at fixed $`\lambda `$, $`w^4`$ vanishes as $`1/N`$, and the breakdown of this scaling identifies the mobility edges: $`\lambda _{-}\simeq 0.47\pm 0.01`$ and $`\lambda _+\simeq 1.67\pm 0.03`$. Outside the central region, that is, for small or large eigenvalues, the eigenstates become localized and the density exhibits successive regular peaks.
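A small-scale version of this numerical experiment is easy to reproduce. The sketch below builds one disorder sample of $`𝐖`$ in the conventions above (edge weights $`w=-1/q`$, written in the equivalent positive-spectrum D - A Laplacian form) and computes the inverse participation ratios:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_laplacian(n, q):
    """Laplacian of an Erdos-Renyi random graph with mean connectivity q
    and edge weights 1/q, in the positive-spectrum (D - A) convention."""
    a = np.triu(rng.random((n, n)) < q / n, k=1).astype(float)
    a = (a + a.T) / q
    return np.diag(a.sum(axis=1)) - a

W = random_laplacian(800, 20)
lam, psi = np.linalg.eigh(W)      # eigenpairs of one sample
ipr = (psi**4).sum(axis=0)        # inverse participation ratios
# Extended states have ipr ~ 1/N; localized states in the tails of the
# spectrum have ipr of order one, independent of N.
```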
We have measured for each eigenvector $`\psi _{i,\ell }`$ the connectivity $`c_\ell `$ of its center, that is, the site $`i_0`$ with maximum component $`|\psi _{i_0,\ell }|`$. The mean connectivity $`c(\lambda )`$ of the centers of eigenvectors having eigenvalue $`\lambda `$ is plotted in Fig. 1. It is a smooth, monotonic function of $`\lambda `$ in the extended part of the spectrum. In the localized region, $`c(\lambda )`$ is constant over a given peak and integer-valued ($`c\le c_{-}=10`$ on the left side of the spectrum, $`c\ge c_+=33`$ on the right side); the center connectivity jumps abruptly when $`\lambda `$ crosses the borders between peaks. Furthermore, Table 1 shows the good agreement between the weight of the peak associated with connectivity $`c`$ and the fraction of sites having $`c`$ neighbors, given by a Poisson law of parameter $`q`$ .
Therefore, the numerics indicate that localized eigenvectors are centered on geometrical defects, that is, on sites whose number of neighbors is much smaller or much larger than the average connectivity. To support this observation, it is instructive to consider a simpler model including a unique defect, i.e. a Cayley tree with connectivity $`c`$ for the central site and $`q+1`$ for all other points (locally, a random graph is equivalent to a tree since no loops of finite length are present). Looking for a localized state $`\psi _i`$ with radial symmetry, $`\psi _i=\psi _{d(i)}`$ where $`d(i)`$ is the distance between site $`i`$ and the center, the eigenvalue equations read $`c(\psi _0-\psi _1)=q\lambda \psi _0`$ and $`(q+1)\psi _d-q\psi _{d+1}-\psi _{d-1}=q\lambda \psi _d`$ for $`d\ge 1`$ . The eigenvalue problem reduces to the search for a solution of a homogeneous linear difference equation of order two (the equation for $`d\ge 1`$) fulfilling a boundary condition (the equation for $`d=0`$). After a little algebra we have found that strong defects, such that $`|q-c|>\sqrt{q}`$, give rise to localized states around the central site with an eigenvalue $`\lambda =\frac{c}{q}(1-\frac{1}{q-c})`$. The predicted connectivities at the mobility edges ($`c_{-}=15`$ and $`c_+=25`$ for $`q=20`$) are in poor agreement with the numerical findings. A more refined picture requires taking into account the connectivity fluctuations of the neighbours of the central site. We have thus considered a Cayley tree with a coordination number $`c`$ for the central site, $`c^{}`$ for the nearest neighbours, and $`q+1`$ for all other points. We have found that localized states due to a weak (respectively strong) central connectivity $`c`$ can disappear and become extended if the connectivity of the neighbors $`c^{}`$ reaches large (resp. small) values. In other words, a defect can be screened by an opposite connectivity fluctuation of its surrounding neighbors. The numerics support this scenario. As $`\lambda `$ varies, $`w^4(\lambda )`$ exhibits oscillations (interpreted as finite-size contributions coming from extended states) of rapidly decreasing amplitude with increasing $`N`$. These oscillations are correlated (positively for small $`c`$ and negatively for large $`c`$) with the fluctuations of the neighbor connectivity $`c^{}(\lambda )`$ around its mean value $`q+1`$, see the inset of Fig. 1.
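The single-defect predictions are simple to evaluate; the sketch below reproduces the mobility-edge connectivities quoted above and gives the eigenvalues for the extreme defects seen numerically ($`c=10`$ and $`c=33`$):

```python
import math

q = 20

def defect_eigenvalue(c, q):
    """Localized-state eigenvalue lambda = (c/q)(1 - 1/(q - c)) for a
    central site of connectivity c, valid for |q - c| > sqrt(q)."""
    return (c / q) * (1.0 - 1.0 / (q - c))

# Least extreme integer connectivities still satisfying |q - c| > sqrt(q),
# i.e. the predicted mobility-edge values c_- and c_+.
c_minus = math.floor(q - math.sqrt(q))   # -> 15
c_plus = math.ceil(q + math.sqrt(q))     # -> 25
print(c_minus, c_plus)
print(defect_eigenvalue(10, q), defect_eigenvalue(33, q))  # 0.45, ~1.78
```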
Let us see how the above results may be recovered from theory. Due to the statistical independence of the $`W_{ij}`$’s, the $`u_{ij}`$ interactions (4) can be averaged out separately . The resulting theory is of course invariant under any relabelling of the sites $`i`$ and depends on the fields $`\stackrel{}{\varphi }_i`$ only through the density $`\rho (\stackrel{}{\varphi })`$ of sites $`i`$ carrying fields $`\stackrel{}{\varphi }_i=\stackrel{}{\varphi }`$ . The functional order parameter $`\rho (\stackrel{}{\varphi })`$ is found by optimizing the “free-energy”
$`\mathrm{ln}\mathrm{\Xi }[\rho ]`$ $`=`$ $`{\displaystyle \int 𝑑\stackrel{}{\varphi }\rho (\stackrel{}{\varphi })\left[\mathrm{ln}z(\stackrel{}{\varphi })-\mathrm{ln}\rho (\stackrel{}{\varphi })+1\right]}`$ (5)
$`+`$ $`{\displaystyle \frac{q}{2}}{\displaystyle \int 𝑑\stackrel{}{\varphi }𝑑\stackrel{}{\psi }\left(e^{-i(\stackrel{}{\varphi }-\stackrel{}{\psi })^2/2q}-1\right)\rho (\stackrel{}{\varphi })\rho (\stackrel{}{\psi })},`$ (6)
under the normalization constraint $`\int 𝑑\stackrel{}{\varphi }\rho (\stackrel{}{\varphi })=1`$; $`z(\stackrel{}{\varphi })`$ has been defined in (3). This order parameter is simply related to the original random matrix problem through
$$\rho (\stackrel{}{\varphi })=\frac{1}{N}\overline{\underset{i=1}{\overset{N}{\sum }}C_i\mathrm{exp}\left(\frac{i\stackrel{}{\varphi }^2}{2[(\lambda +iϵ)\mathrm{𝟏}-𝐖]_{ii}^{-1}}\right)},$$
(7)
The $`C_i`$ are normalization constants going to unity as $`n`$ vanishes. Therefore, the averaged resolvent reads $`\overline{G(\lambda +iϵ)}=-i\underset{n\to 0}{\mathrm{lim}}\int 𝑑\stackrel{}{\varphi }\rho (\stackrel{}{\varphi })(\varphi ^1)^2`$.
Finding an exact solution to the maximization equation $`\delta \mathrm{ln}\mathrm{\Xi }/\delta \rho (\stackrel{}{\varphi })=0`$ seems to be a hopeless task. This is a general situation, which arises in the study of the physics of dilute systems (for the case of sparse random matrices see, for example, ).
Identity (7) may however be used as a starting point for an effective medium approximation (EMA). In the extended part of the spectrum, we expect all matrix elements appearing in (7) to be of the same order of magnitude and thus $`\rho (\stackrel{}{\varphi })`$ to be roughly Gaussian. EMA is therefore implemented by inserting the Gaussian Ansatz
$$\rho ^{EMA}(\stackrel{}{\varphi })=\left(2\pi ig(\lambda )\right)^{-\frac{n}{2}}\mathrm{exp}\left(\frac{i\stackrel{}{\varphi }^2}{2g(\lambda )}\right),$$
(8)
into the functional $`\mathrm{\Xi }`$ (6). The average EMA resolvent $`g`$ is then obtained through optimization of $`\mathrm{ln}\mathrm{\Xi }[g(\lambda )]`$. The resulting spectrum, given by the imaginary part of $`g`$ divided by minus $`\pi `$, is shown in Fig. 1. As expected, EMA gives a sensible estimate of the spectral properties in the extended region and of the mobility edges: $`\lambda _{-}^{EMA}=0.468`$, $`\lambda _+^{EMA}=1.732`$. However, EMA is intrinsically unable to reflect geometry fluctuations and thus the presence of localized states .
To do so, we start by writing the extremization condition of $`\mathrm{ln}\mathrm{\Xi }`$ over $`\rho `$ as
$$\rho (\stackrel{}{\varphi })=\mathcal{F}[\rho ](\stackrel{}{\varphi }),$$
(9)
where the functional $`\mathcal{F}`$ may be expanded as
$$\mathcal{F}[\rho ](\stackrel{}{\varphi })=hz(\stackrel{}{\varphi })\underset{k=0}{\overset{\mathrm{\infty }}{\sum }}\frac{e^{-q}q^k}{k!}\left[\int 𝑑\stackrel{}{\psi }\rho (\stackrel{}{\psi })e^{-i(\stackrel{}{\varphi }-\stackrel{}{\psi })^2/2q}\right]^k$$
(10)
$`h`$ is a multiplicative factor equal to unity in the $`n\to 0`$ limit. Equations (9,10) describe an elementary lattice of one central site connected to $`k`$ neighbors according to a Poisson distribution of mean $`q`$. The neighbors carry information about the random matrix elements through the order parameter $`\rho (\stackrel{}{\psi })`$ (7) and interact with the central site via the kernel $`\mathrm{exp}(-i(\stackrel{}{\varphi }-\stackrel{}{\psi })^2/2q)`$ (4,10). Self-consistency requires that the resulting order parameter at the central site equal $`\rho `$ . Bearing in mind the localization mechanism unveiled in the previous paragraphs, we propose a single defect approximation (SDA). SDA amounts to making the central site interact with $`k`$ neighbours belonging to the effective medium defined above. Since EMA precisely washes out any local geometrical fluctuation, we partially reintroduce these fluctuations by allowing the connectivity $`k`$ of the central site (the defect) to vary. The SDA order parameter is thus obtained through an iteration of equation (9)
$$\rho ^{SDA}(\stackrel{}{\varphi })=\mathcal{F}[\rho ^{EMA}](\stackrel{}{\varphi }).$$
(11)
Using the EMA resolvent $`g`$ (8), we have computed the SDA spectrum for $`q=20`$. As shown in Fig. 1, the SDA extended part is in better agreement with the numerical results than EMA. The improvement is even more spectacular for the localized states, which were absent within EMA. We have found Dirac peaks whose weights and eigenvalues are listed in Table 1; the agreement with the numerical results is quite good. We have verified analytically that the SDA peaks do correspond to localized states by calculating $`lim_{ϵ\to 0}[ϵ\overline{G(\lambda +iϵ)G(\lambda -iϵ)}]`$ using SDA with two groups of replicas. This quantity also gives access to $`w^4(\lambda )`$, whose value seems slightly higher than the numerical measurements close to the mobility edges.
Starting from any sensible $`\rho `$, successive iterations of equation (9) would provide more and more accurate descriptions of the “fractal” structure of the localized peaks, at the price of heavier and heavier calculations. Besides being theoretically founded, SDA has the advantage that a single iteration from EMA (which is easily computable) succeeds in capturing the localized states in a quantitative way .
In general, the components of $`𝐖`$ are correlated and the average over disorder requires an expansion in terms of the connected correlation functions of the $`u_{ij}`$ interactions
$$\overline{\underset{i<j}{\prod }(1+u_{ij})}=\mathrm{exp}\left(\underset{i<j}{\sum }\overline{u_{ij}}+\frac{1}{2}\underset{i<j,k<l}{{\sum }^{}}\overline{u_{ij}u_{kl}}^c+\mathrm{\cdots }\right),$$
(12)
where the prime indicates that the sum runs over distinct pairs of sites. The free-energy (6) corresponds to the case where all terms in (12) but the first one vanish. The presence of these cumulants (of order $`2,3,\mathrm{\dots }`$ in $`u`$) results in the addition of cubic, quartic, … interaction terms in $`\rho (\stackrel{}{\varphi })`$ to $`\mathrm{ln}\mathrm{\Xi }`$. Though the calculations become more difficult, the existence of a variational free-energy $`\mathrm{ln}\mathrm{\Xi }`$ is preserved. This is all that is needed to derive EMA and the optimization equation (11).
Let us see how SDA can be implemented to determine the INM spectrum of amorphous media. We shall restrict ourselves to liquids, but our approach could also be applied to glasses using the formalism recently developed by Mézard and Parisi . Particles $`i`$ are specified by their positions $`𝐱_i`$ and interact through a two-body potential $`V(𝐱_i-𝐱_j)`$ (hereafter bold letters denote vectors in the $`D`$-dimensional real space). For a given microscopic configuration, the INM are the eigenmodes of the $`D\times N`$-dimensional matrix $`W_{ij}=\partial ^2V(𝐱_i-𝐱_j)/\partial 𝐱_i\partial 𝐱_j`$ ($`i\ne j`$). The calculation of the spectrum and the average over particle configurations (with the equilibrium Boltzmann measure at inverse temperature $`\beta `$) can be performed at the same time by introducing a generalized liquid . Each particle is assigned a “position” $`𝐫_i=(𝐱_i,\stackrel{}{\varphi }_i)`$ and the generalized fugacity reads $`z^{}(𝐫_i)=yz(\stackrel{}{\varphi }_i)`$, where $`y`$ is the liquid fugacity and $`z`$ is defined in (3). The grand-canonical partition function $`\mathrm{\Xi }`$ may then be diagrammatically expanded in powers of the Mayer bond $`b(𝐫_i,𝐫_j)=\mathrm{exp}(-\beta V(𝐱_i-𝐱_j)+\frac{i}{2}W_{ij}(\stackrel{}{\varphi }_i-\stackrel{}{\varphi }_j)^2)-1`$ (the summation over the $`D^2`$ internal indices of $`𝐖`$ is not written explicitly for the sake of simplicity). With these notations, $`\mathrm{\Xi }`$ coincides with formula (3.21) of Ref. . It is now straightforward to take advantage of the variational formulation of the diagrammatic virial expansion by Morita and Hiroike . The generalized density $`\rho (𝐫)=\rho (𝐱,\stackrel{}{\varphi })`$ of particles optimizes
$$\mathrm{ln}\mathrm{\Xi }[\rho ]=\int 𝑑𝐫\rho (𝐫)\left[\mathrm{ln}z^{}(𝐫)-\mathrm{ln}\rho (𝐫)+1\right]+𝒮,$$
(13)
where $`𝒮`$ is the sum of all diagrams composed of bonds $`b(𝐫,𝐫^{})`$ and vertices weighted with $`\rho (𝐫)`$ that cannot be split by the removal of a single vertex; see equation (4.6) of Ref. . Due to translational invariance in real space, $`\rho `$ does not depend on $`𝐱`$, and we are left with a variational functional $`\mathrm{\Xi }`$ of the density $`\rho (\stackrel{}{\varphi })`$. Note that (13) contains (6) as a special case, when $`𝒮`$ includes only the simplest single-bond diagram. The random graph model we have studied in this letter may be seen as a physical system for which keeping only the first coefficient of the virial expansion is exact.
To our knowledge, Morita and Hiroike’s work has not been used so far in the context of INM theory as a shortcut to avoid tedious diagrammatic calculations. In addition, the variational formulation of allows one to implement SDA in a practical way. We are currently attempting to apply the present formalism to characterize localized eigenstates in two- and three-dimensional granular media.
Acknowledgements : We are deeply indebted to D.S. Dean for numerous and thorough discussions on this work. We also thank D.J. Thouless for an enlightening discussion, particularly about the Cayley tree argument.
# Force Distribution in a Granular Medium
## Introduction
Granular materials exhibit a rich set of unusual behaviors which prevents them from being simply categorized as either solids or fluids . Even the simplest granular system, a static assembly of noncohesive, spherical particles in contact, holds a number of surprises. Particles within this system are under stress, supporting the weight of the material above them in addition to any applied load. The inter-particle contact forces crucially determine the bulk properties of the assembly, from its load-bearing capability to sound transmission or shock propagation . Only in a crystal of identical, perfect spheres is there uniform load-sharing between particles. In any real material, the slightest amount of disorder, due to variations in the particle sizes as well as imperfections in their packing arrangement, is amplified by the inherently nonlinear nature of inter-particle friction forces and the particles’ nearly hard-sphere interaction. As a result, stresses are transmitted through the material along “force chains” that make up a ramified network of particle contacts and involve only a fraction of all particles .
Force chains and spatially inhomogeneous stress distributions are characteristic of granular materials. A number of experiments on 2D and 3D compression cells have imaged force chains by exploiting stress-induced birefringence . While these experiments have given qualitative information about the spatial arrangement of the stress paths inside the granular assembly, the quantitative determination of contact forces in three dimensional bead packs is difficult with this method. Along the confining walls of the assembly, however, individual force values from all contacting particles can be obtained. Liu et al.’s experiments showed that the spatial probability distribution, $`P(F)`$, for finding a normal force of magnitude $`F`$ against a wall decays exponentially for forces larger than the mean, $`\overline{F}`$. This result is remarkable because, compared to a Gaussian distribution, it implies a significantly higher probability of finding large force values $`F\overline{F}`$.
A number of fundamental questions remain, however. While several model calculations , computer simulations as well as experiments on shear cells and 2D arrays of rods have corroborated the exponential tail of $`P(F)`$ in the limit of large $`F`$, other functional forms have so far not been ruled out . Furthermore, there has been no consensus with regard to the shape of the distribution for forces smaller than the mean. The original “q-model” by Coppersmith et al. and Liu et al. predicted power-law behavior with $`P(F)\propto F^\alpha `$ and $`\alpha \simeq 2`$ for small $`F`$, while recent simulations by Radjai et al. and Luding found $`\alpha \simeq 0`$. So far, experiments have lacked the range or sensitivity required for a firm conclusion. The roles of packing structure and history, identified in much recent work as important factors in determining stresses in granular media, have not yet been explored experimentally in this system. Finally, the existence of correlations between forces remains unclear. Shear cell data by Miller et al. have been interpreted as an indication of correlations between forces against the cell bottom surface.
In this paper we present results from a set of systematic experiments designed to address these issues. We have refined the carbon paper method for determining the force of each bead against the constraining surface and are now able to measure force values accurately over two orders of magnitude. With this improvement we are able to ascertain the existence of the exponential behavior and to obtain close bounds on its decay constant in the regime $`F>\overline{F}`$. For $`F<\overline{F}`$ we find that $`P(F)`$ flattens out and approaches a constant value. In addition, our experiments investigated the effects of the packing history. We also studied both the influence of the boundary conditions posed by the vertical container walls on the distributions of forces $`P(F)`$ as well as the spatial correlations in the arrangement of beads due to crystallization near a wall during system preparation. None of these variations on the experiment are found to influence $`P(F)`$ significantly. Finally, we have also measured the lateral correlations between forces on different beads and find that no correlations exist.
## Experimental Method
The granular medium studied was a disordered 3D pack of 55,000 soda lime glass spheres with diameter $`d=3.5\pm 0.2`$ mm. The beads were confined in an acrylic cylinder of 140 mm inner diameter. The top and bottom surfaces were provided by close-fitting pistons made from 2.5 cm thick acrylic disks rigidly fixed to steel rods. The height of the bead pack could be varied, but experiments described in this paper were performed with a height of 140 mm. Once the cell was filled with beads, a load, typically 7600 N, was applied to the upper piston using a pneumatic press while the lower piston was held fixed. In most experimental runs, the outside cylinder wall was not connected to either piston so that the cylinder was supported only by friction with the bead pack (see Fig. 1). We shall refer to this as the “floating wall” method. The system could also be prepared with the bottom piston rigidly attached to the cylinder wall, which we shall refer to as the “fixed wall” method. To estimate the bead-bead and bead-wall static friction coefficients, we glued beads to a plate resting on another glass or acrylic plate and inclined the plates until sliding occurred. We found the static coefficient of friction to be close to 0.2 for both glass-glass and glass-acrylic contacts.
As the beads were loaded into the cell, they naturally tended to order into a 2D polycrystal along the lower piston. The beads against the upper piston, by contrast, were irregularly packed. We were able to enhance ordering on the lower piston by carefully loading the system, or disturb it by placing irregularly shaped objects against the surface which were later removed. For some experiments, the cell was inverted during or after loading with beads. By varying the experiment in these ways, we probed the effect of system history on the distribution of forces.
Contact forces were measured using a carbon paper technique . With this method, all constraining surfaces of the system were lined with a layer of carbon paper covering a blank sheet of paper. For the blank sheet we used color copier paper, which is smoother, thicker, and has a more uniform appearance than standard copier paper. Beads pressed the carbon onto the paper in the contact region and left marks whose darkness and area depended on the force on each bead. After the load had been applied to the bead packing, the system was carefully disassembled and the marks on the paper surface were digitized on a flatbed scanner for analysis. A region from a typical data set taken from the area over one of the pistons is shown in Fig. 1. Each experiment yielded approximately 3,800 data points over the interior cylinder wall and between 800 and 1,100 points for each of the piston surfaces, depending on how the system was prepared. The position of each mark was identified and the thresholded area and integrated darkness were calculated. At the scan resolution used, marks ranged from several pixels to several hundred pixels in area.
The force was determined by interpolating the measured area and darkness on calibration curves that were obtained by pressing a single bead with a variable, known force onto the carbon paper. This was achieved by slowly lowering a known mass through a spring onto a single bead. The spring was essential as it greatly reduced the otherwise large impulse which occurs when a bead makes contact with the carbon paper and quickly comes to rest. Both area and darkness of the mark left on the copier paper were found to increase monotonically with the normal component of the force exerted by each bead, as seen in Fig. 2. Note that the only requirement is that these curves are monotonic; we do not assume any particular functional relationship. With this carbon paper technique, we were able to measure forces between 0.8 N and 80 N with an error of less than 15%. We ensure that the beads do not slide relative to the carbon paper during an experiment by measuring the eccentricity of each mark. We find that the eccentricities $`ϵ`$ are narrowly distributed with a mean of 0.1, corresponding to a ratio of major to minor axis $`\frac{a}{b}=\frac{1}{\sqrt{1-ϵ^2}}`$ of 1.005 for both piston surfaces and container walls.
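Since the calibration curves are only required to be monotonic, the force lookup reduces to a one-dimensional interpolation. A minimal sketch with made-up calibration numbers (the real table comes from the single-bead calibration described above):

```python
import numpy as np

# Hypothetical calibration table (mark area in pixels vs applied normal
# force in newtons), standing in for the measured curves of Fig. 2.
cal_area = np.array([5.0, 20.0, 60.0, 150.0, 300.0, 500.0])
cal_force = np.array([0.8, 3.0, 10.0, 25.0, 50.0, 80.0])

def force_from_area(area):
    """Interpolate a measured mark area on the monotonic calibration curve."""
    return np.interp(area, cal_area, cal_force)

print(force_from_area(100.0))   # force estimate in newtons
```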
We find that for forces less than approximately 0.8 N, little or no mark is left on the copier paper. A consequence, visible in Fig. 1, is that there are regions where there may have been one or more contacts with normal force less than 0.8 N, or which, alternatively, may have had no bead in contact with the surface. This ambiguity presents a problem for the precise determination of the mean force $`\overline{F}`$. To estimate the number of contacts below our resolution, we could fill the voids with the maximum possible number of additional beads, using a simple computer routine. However, this overestimates the number of actual contacts with the carbon paper. Instead, we used the following method: the average number of beads touching a piston surface was measured by placing double-sided tape on the piston and lowering it onto the pack. The tape was sufficiently sticky that the weight of a single bead would affix it to the tape. Subtracting the average number of contacts with $`F>0.8`$ N from this number, we found that 6.4% of the beads on the lower piston and 4.3% of the beads on the upper piston have $`F<0.8`$ N. The upper piston had fewer points below 0.8 N because the total number of beads in contact with that piston was typically smaller than on the bottom, raising the mean force and decreasing the fraction of beads with $`F<0.8`$ N. The weight supported by the walls was calculated by subtracting the net weight on the two pistons. For experiments performed with floating walls, we verified that the pistons had equal net force (since the weight of the walls can be neglected with respect to the applied force).
## Results
While we conducted experiments with both fixed walls and floating walls, most experiments were performed with the walls floating to reduce asymmetry. In this configuration the cylindrical wall of the system was suspended solely by friction with the bead pack. Since the applied load was much greater than the weight of the system, any remaining asymmetry between the top and bottom of the system must have come primarily from system preparation, and not from gravity. In Fig. 3 we show the resulting force distributions $`P(f)`$ (where $`f\equiv F/\overline{F}`$ is the normalized force) for all system surfaces, averaged over fourteen experimental runs performed under identical, floating-wall conditions. We find that, within experimental error, the distributions $`P(f)`$ for the upper and lower piston surfaces are identical and, in fact, independent of floating or fixed wall conditions. Note that the lowest bin contains forces from 0 N to roughly 1 N, which includes both measured forces and an estimated number of undetectable contacts, giving it a greater uncertainty than the other bins. For forces greater than the mean ($`f>1`$), the probability of a bead carrying a given force decays exponentially, $`P(f)\propto e^{-\beta f}`$, with $`\beta =1.5\pm 0.1`$.
Also shown in Fig. 3 is a curve corresponding to the functional form
$$P(f)=a(1-be^{-f^2})e^{-\beta f}.$$
(1)
An excellent fit to the data is obtained for $`a`$=3, $`b`$=0.75, and $`\beta =1.5`$. This functional form captures the exponential tail at large $`f`$, the flattening out of the distribution near $`f\simeq 1`$, and the slight increase in $`P(f)`$ as $`f`$ decreases towards zero.
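A quick consistency check on these fit values: because $`f`$ is the force normalized by its mean, $`P(f)`$ should integrate to unity and have unit mean. The sketch below verifies both to within a few percent.

```python
import numpy as np
from scipy.integrate import quad

def p_of_f(f, a=3.0, b=0.75, beta=1.5):
    """Empirical force distribution of Eq. (1) with the quoted fit values."""
    return a * (1.0 - b * np.exp(-f * f)) * np.exp(-beta * f)

norm, _ = quad(p_of_f, 0.0, np.inf)
mean, _ = quad(lambda f: f * p_of_f(f), 0.0, np.inf)
print(norm, mean)   # both come out close to 1, as expected for f = F/<F>
```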
For the mean force against the side wall we observe a dependence on the depth $`z`$ into the pile, measured from the top piston, which strongly depends on the boundary conditions (Fig. 4). For fixed-wall boundary conditions (solid symbols), the angle-averaged wall force, $`\overline{F}_w(z)`$, is greatest near the upper piston, decaying with increasing depth into the pile. For floating-wall conditions (open symbols), on the other hand, $`\overline{F}_w(z)`$ stays roughly constant. Using $`\overline{F}_w(z)`$ we compute the set of normalized forces, $`f_{w,i}\equiv F_{w,i}/\overline{F}_w(z_i)`$, exerted by individual beads $`i`$ against the side walls. We find that the probability distribution $`P(f_w)`$ is independent of $`z`$ within our experimental resolution and is practically identical to that found on the upper and lower piston surfaces, with a decay constant $`\beta _w=1.5\pm 0.2`$ in the regime $`f_w>1`$. This distribution is shown in Fig. 3 by the solid symbols. Since along the walls we were unable to determine directly the number of contacts with force less than 0.8 N, we estimated it to be 4.3%, based on our result for the disordered piston. The uncertainty in $`\beta _w`$ is predominantly due to the uncertainty in this estimate. Note that within the resolution of our measurements, the probability distributions in Fig. 3 are the same for all surfaces.
In contrast to observations reported previously , we observe that the mean force on any portion of the piston is independent of position. The radial dependence of the mean force against the pistons found previously was an artifact of the compression method, and does not occur if the load is applied using a pneumatic press with carefully aligned pistons.
The first few layers of monodisperse beads coming into contact with the lower piston tend to order in a hexagonal packing while farther into the system a random packing is observed. To probe the effect of boundary-induced crystallization, the degree of bead ordering was varied in some experiments. We used the measured positions of the marks left on the copier paper to compute the radial distribution function,
$$g(r)=\frac{1}{Nn_o\pi r}\underset{i=1}{\overset{N}{\sum }}\underset{j=i+1}{\overset{N}{\sum }}\delta (r_{ij}-r)$$
(2)
where $`n_o`$ is the average density of points and $`r_{ij}`$ is the distance between the centers of marks $`i`$ and $`j`$. If filled from the bottom up without container inversion, the packing structure over the lower piston surface clearly exhibits a larger degree of crystalline order than that touching the top piston surface, as seen in Fig. 5a,b. Vertical lines are drawn to indicate peaks expected in $`g(r)`$ for a 2D hexagonal packing. The radial distribution function for the lower piston in an experiment where ordering along this piston is disturbed is shown in Fig. 5c. Despite the significant differences in degree of ordering evident from Fig. 5a-c, no significant effect on $`P(f)`$ was observed.
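Eq. (2), suitably discretized, is straightforward to evaluate from the digitized mark positions; a minimal sketch (here `density` is the average areal density of marks, $`n_o`$):

```python
import numpy as np

def g_of_r(points, density, r_max, n_bins=100):
    """Discretized Eq. (2): histogram of pair separations, normalized by
    N * n_o * pi * r * dr so that g(r) -> 1 for uncorrelated 2D points."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    seps = d[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(seps, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])
    dr = edges[1] - edges[0]
    return r, hist / (n * density * np.pi * r * dr)
```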
Since beads generally move downward as the cell is loaded, friction forces tend to be oriented upward. The process of adding beads to fill the cell, therefore, breaks the symmetry of the system by building an overall directionality into the force network. With different packing histories, however, such as inverting the system once or more during or after loading, we systematically disrupted this directionality. Again no measurable effect on $`P(f)`$ was found.
Our experiments also allowed for a direct calculation of correlations between normal forces impinging on a given container surface. We computed the lateral force-force pair correlations
$$K_n(r)=\frac{\underset{i=1}{\overset{N}{\sum }}\underset{j=i+1}{\overset{N}{\sum }}\delta (r_{ij}-r)f_i^nf_j^n}{\underset{i=1}{\overset{N}{\sum }}\underset{j=i+1}{\overset{N}{\sum }}\delta (r_{ij}-r)}$$
(3)
over both piston surfaces and the walls. As an example, Fig. $`5d`$ shows the first-order correlation, $`K_1(r)`$, for the lower piston in experiments where ordering was not disrupted (corresponding to $`g(r)`$ in Fig. $`5b`$). The featureless shape of $`K_1(r)`$ is characteristic of all cases examined ($`n\in \{1,2,3\}`$) and indicates no evidence for force correlations.
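Eq. (3) is evaluated analogously to $`g(r)`$, replacing the simple pair count by a bin average of the force products; a sketch:

```python
import numpy as np

def k_n(points, f, n, r_max, n_bins=50):
    """Discretized Eq. (3): bin-average of f_i^n * f_j^n over all pairs of
    marks separated by r, for an array f of normalized forces and power n."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    prod = np.outer(f**n, f**n)[iu]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    idx = np.digitize(d[iu], edges) - 1
    keep = (idx >= 0) & (idx < n_bins)
    num = np.bincount(idx[keep], weights=prod[keep], minlength=n_bins)
    den = np.bincount(idx[keep], minlength=n_bins)
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, np.where(den > 0, num / np.maximum(den, 1), np.nan)
```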
## Discussion
The key features of the data in Fig. 3 are the nearly constant value of the probability distribution for $`f<1`$ and the exponential decay of $`P(f)`$ for larger forces. No comprehensive theory exists at present that would predict this overall shape for $`P(f)`$. The exponential decay for forces above the mean is predicted by the scalar q-model as a consequence of force randomization throughout the packing . In this mean-field model, the net weight on a given particle is divided randomly between $`N`$ nearest neighbors below it, each of which carries a fraction of the load. Only one scalar quantity is conserved, namely the sum of all force components along the vertical axis. Randomization has an effect analogous to the role played by collisions in an ideal gas . The result is a strictly exponential distribution $`P(f)\propto e^{-Nf}`$ for the normal forces across the contact between any two beads.
The calculations for the original q-model were done for an infinite system without walls. If one assumes that each particle at a container boundary has $`N`$ neighbors in the bulk and a single contact with the wall, then the net force transmitted against the wall is a superposition of $`N`$ independent contact forces on each bead, so that the probability distribution for the net wall force is modified by a prefactor $`f^{N-1}`$, much in the way a phase-space argument gives rise to the power-law prefactor in the Maxwell-Boltzmann distribution. Thus, the original q-model predicts a non-monotonic behavior of $`P(f)`$, with vanishing probability as $`f\to 0`$. Such a “dip” at small force values has also been found in recent simulations by Eloy and Clement . It is, however, in contrast to the data in Fig. 3 and to recent simulation results on 2D and 3D random packings by Radjai and coworkers . These simulations indicated that the distribution of normal contact forces anywhere, and at any orientation, in the packing did not differ from that found for the subset of beads along the walls. In fact, for both normal and tangential contact forces inside and along the surfaces of the packings, Radjai et al. observed distributions that were well described by
$$P(f)\propto \{\begin{array}{cc}f^{-\alpha }\hfill & f<1\hfill \\ e^{-\beta f}\hfill & f>1\hfill \end{array}$$
(4)
with $`\alpha `$ close to zero and positive and $`1.0<\beta <1.9`$, depending on which quantity was being computed, the dimension of the system, and the friction coefficient. While we were unable to measure forces experimentally below about $`f\simeq 0.1`$, the simulation data of Radjai and coworkers extend to $`f\simeq 0.0001`$. Power-law behavior with $`\alpha >0`$ in Eq. 4, if indeed correct, would lead to a divergence in $`P(f)`$ as $`f\to 0`$. However, we observe that our empirical function, Eq. 1, which does not diverge, provides a fit essentially indistinguishable from a power law $`f^{-\alpha }`$ over the range $`0.001<f<1`$ as long as $`\alpha `$ is positive and close to zero. We can thus equally well fit the simulation data for normal forces in Refs. , over their full range, with Eq. 1. For the case of 3D simulations and friction coefficients close to 0.2, this is possible using the same coefficients as for the experimental data in Fig. 3.
We point out that the fitting function in Eq. 1 is purely empirical. In particular, we do not have a model that would predict the $`(1-be^{-f^2})`$ prefactor of the main exponential. It may be possible to think of this prefactor, in some type of modified q-model, as arising from considerations similar to phase-space arguments. The fact that it clearly differs from the usual $`f^N`$ dependence expected for $`N`$ independent vector components would then point to the existence of correlations between the contact forces on each bead. Such correlations obviously exist, in the form of constraints; yet how these constraints conspire to give rise to the specific functional form of Eq. 1 for $`P(f)`$ remains unclear. Eloy and Clement have attempted to take into account some of the correlations that might apply to forces acting locally on a given bead. Using a modified q-model, they include the possibility of a bias in the distribution of $`q`$’s, leading to a screening of small contact forces by larger ones. The resulting $`P(f)`$, nevertheless, still tends to zero as $`f\to 0`$.
Finally, we note that a “dip” in $`P(f)`$ for small forces can always be introduced by averaging our data over areas large enough to contain several pressure marks. Data by Miller et al. on shear cells, using stress transducers of various sizes, similarly show an increasingly pronounced “dip” for the larger transducers. They did not, however, observe the pronounced narrowing of the distribution that is expected in the limit of sufficiently large areas and attributed this to possible force correlations. Our data for the force pair correlations in Fig. 5 indicate that no simple correlations exist between forces within the plane of any of the confining walls. This result is in accordance with the q-model .
## Conclusion
We have found that the distribution of forces, shown in Fig. 3, is a robust property of static granular media under uniaxial compression. Its shape turns out to be identical, within experimental uncertainties, for all interior container surfaces, and furthermore appears to be unaffected by changes in the boundary conditions or in the preparation history of the system. The exponential decay for forces above the mean emerges as a key characteristic of the force distribution. The exponential tail of the distribution can be understood on the basis of a scalar model (the q-model), where it emerges as the result of a randomization process that occurs as forces are transmitted through the bulk of the bead pack. The consequences of the vector nature of the contact forces for the distribution, however, remain unclear. A second key aspect of the measured distribution is the absence of either a “dip” or a power-law divergence for small forces; instead, our data are most consistently fit by a functional form that approaches a finite value as $`f\to 0`$. This empirical fitting form, Eq. 1, provides an excellent fit over the full range of forces for our experimental data, as well as for simulation results on 3D packings obtained by Radjai et al. and for simulations performed by Thornton.
## Acknowledgments
We would like to thank Sue Coppersmith, John Crocker, David Grier, Hans Herrmann, Chu-heng Liu, Onuttom Narayan, Farhang Radjai, David Shecter, and Tom Witten for many useful discussions. This work was supported by the NSF under Award CTS-9710991 and by the MRSEC Program of the NSF under Award DMR-9400379.
Figure 1: Typical diagrams of $`W`$ boson production at HERA: resolved, direct and DIS parts.
WUE-ITP-98-058
hep-ph/9902296
February 1999
A Note on $`W`$ Boson Production at HERA<sup>*</sup><sup>*</sup>*Contribution to the 3rd UK Phenomenology Workshop on HERA Physics, Durham, 20-25 Sep. 1998.
P. Nason<sup>1</sup>, R. Rückl<sup>2</sup> and M. Spira<sup>3</sup>
<sup>1</sup>INFN, Sezione di Milano, Milan, Italy
<sup>2</sup> Institut für Theoretische Physik, Universität Würzburg, D-97074 Würzburg, Germany
<sup>3</sup>II. Institut für Theoretische Physik, Universität Hamburg, D-22761 Hamburg, Germany
## Abstract
We discuss $`W`$ boson production at HERA including NLO QCD corrections.
The production of $`W`$ bosons at $`ep`$ colliders is mediated by photon, $`Z`$ and $`W`$ exchange between the electron/positron and the hadronic part of the process . In practice, it is convenient to distinguish two regions: a small-$`Q^2`$ region (the photoproduction region), which can be treated using the Weizsäcker-Williams photon spectrum convoluted with the cross section for $`\gamma q\to q^{}W`$, and a large-$`Q^2`$ region (the DIS region).
While the treatment of the DIS region is straightforward \[a typical contribution is shown in the third diagram of Fig. 1\], the small $`Q^2`$ region requires the inclusion of the contribution of the hadronic component of the photon, in which the photon, behaving like a hadron, produces the $`W`$ in the collision with the proton via the standard Drell-Yan mechanism \[first diagram of Fig. 1\]. This component is, in fact, the dominant one for this process, and full NLO corrections should include both the NLO correction to the Drell-Yan process , and the leading hard photon contribution \[a typical contribution is shown in the second diagram of Fig. 1\], suitably subtracted for collinear singularities.
In the following, we separate the DIS from the small $`Q^2`$ region using an angular cut of 5<sup>o</sup> on the outgoing lepton, corresponding to a cut $`Q_{max}^2`$ in terms of the initial lepton energy .
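For orientation, the photoproduction piece involves the standard leading-logarithmic Weizsäcker-Williams spectrum, with $`Q_{max}^2`$ fixed by this angular cut. A sketch of that flux is given below; the actual NLO analysis may use a form including non-logarithmic terms.

```python
import math

ALPHA = 1.0 / 137.036
ME = 0.511e-3   # electron mass in GeV

def ww_flux(x, q2_max):
    """Leading-log Weizsaecker-Williams photon spectrum, 0 < x < 1:
    f(x) = (alpha/2pi) (1 + (1-x)^2)/x * ln(Q2_max/Q2_min), with the
    kinematic lower limit Q2_min = m_e^2 x^2 / (1 - x)."""
    q2_min = ME**2 * x**2 / (1.0 - x)
    return (ALPHA / (2.0 * math.pi) * (1.0 + (1.0 - x)**2) / x
            * math.log(q2_max / q2_min))
```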
For the resolved part we have evaluated the QCD corrections in the $`\overline{\mathrm{MS}}`$ scheme. Defining the cross section at the values $`\mu _R=\mu _F=M_W`$ of the renormalization and factorization scales, the QCD corrections enhance the resolved contribution by about 40% for both $`W^+`$ and $`W^{-}`$ production. In order to demonstrate the theoretical uncertainties, the renormalization/factorization scale dependence of the individual contributions to the process $`e^+p\to W^++X\to \mu ^+\nu _\mu +X`$ \[including the branching ratio $`BR(W^+\to \mu ^+\nu _\mu )=10.84\%`$\] is presented in Fig. 2 for HERA conditions. It is clearly visible that the scale dependence of the sum of direct and resolved contributions is significantly reduced once the NLO corrections to the resolved part are included. The full curve shows the total sum of the NLO resolved, LO direct and LO DIS contributions, i.e. the total $`W^+`$ production cross section after including the QCD corrections. The residual scale dependence is about 5%. Since the remaining dependence on $`Q_{max}^2`$ is of the same size, the total theoretical uncertainty is estimated to be less than about 10%.
Acknowledgements.
We would like to thank G. Altarelli, U. Baur and D. Zeppenfeld for helpful discussions.
# Probing the MSSM Higgs Sector via Weak Boson Fusion at the LHC
## I Introduction
The search for the Higgs boson and the origin of spontaneous breaking of the electroweak gauge symmetry is one of the main tasks of the CERN Large Hadron Collider (LHC). Within the Standard Model (SM), a combination of search strategies will allow a positive identification of the Higgs signal : for small masses ($`m_H\lesssim 140`$ GeV) the Higgs boson can be seen as a narrow resonance in inclusive two-photon events and in associated production in the $`t\overline{t}H`$, $`b\overline{b}H`$ and $`WH`$ channels with subsequent decay $`H\to \gamma \gamma `$ . For large Higgs masses ($`m_H\gtrsim 130`$ GeV), the search in $`H\to ZZ^{(*)}\to 4\mathrm{}`$ events is promising. Additional modes have been suggested recently: the inclusive search for $`H\to WW^{(*)}\to \mathrm{}\mathrm{}/p_\mathrm{T}`$ , and the search for $`H\to \gamma \gamma `$ or $`\tau \tau `$ in weak boson fusion events . With its two forward quark jets, weak boson fusion possesses unique characteristics which allow identification with a very low level of background at the LHC. At the same time, reconstruction of the $`\tau \tau `$ invariant mass is possible; a modest luminosity, of order 30 fb<sup>-1</sup>, should suffice for a $`5\sigma `$ signal.
In the minimal supersymmetric extension of the SM the situation is less clear . The search is open for two CP even mass eigenstates, $`h`$ and $`H`$, for a CP odd state, $`A`$, and for a charged Higgs boson $`H^\pm `$. For large $`\mathrm{tan}\beta `$, the light neutral Higgs boson may couple much more strongly to the $`T_3=-1/2`$ members of the weak isospin doublets than its SM analogue. As a result, the total width can increase significantly compared to a SM Higgs boson of the same mass. This comes at the expense of the branching ratio $`B(h\to \gamma \gamma )`$, the cleanest Higgs discovery mode, possibly rendering it unobservable and forcing the consideration of alternative search channels. Even when discovery in the inclusive $`\gamma \gamma `$ channel is possible, observation in alternative production and decay channels is needed to measure the various couplings of the Higgs resonance and thus identify the structure of the Higgs sector .
In this Letter we explore the reach of weak boson fusion with subsequent decay to $`\tau \tau `$ for Higgs bosons in the MSSM framework. We will show that, except for the low $`\mathrm{tan}\beta `$ region which is being excluded by LEP2, the weak boson fusion channels are most likely to produce significant $`h`$ and/or $`H`$ signals.
## II Neutral Higgs Bosons in the MSSM
Some relevant features of the minimal supersymmetric Higgs sector can be illustrated in a particularly simple approximation : including the leading contributions with respect to $`G_F`$ and the top flavor Yukawa coupling, $`h_t=m_t/(vs_\beta )`$. The qualitative features remain unchanged in a more detailed description. All our numerical evaluations make use of a renormalization group improved next-to-leading order calculation . The inclusion of two loop effects is not expected to change the results dramatically . Including the leading contributions with respect to $`G_F`$ and $`h_t`$, the mass matrix for the neutral CP even Higgs bosons is given by
$`\mathcal{M}^2`$ $`=`$ $`m_A^2\left(\begin{array}{cc}s_\beta ^2& -s_\beta c_\beta \\ -s_\beta c_\beta & c_\beta ^2\end{array}\right)+m_Z^2\left(\begin{array}{cc}c_\beta ^2& -s_\beta c_\beta \\ -s_\beta c_\beta & s_\beta ^2\end{array}\right)+\epsilon \left(\begin{array}{cc}0& 0\\ 0& 1\end{array}\right),`$ (7)
$`\epsilon `$ $`=`$ $`{\displaystyle \frac{3m_t^4G_F}{\sqrt{2}\pi ^2}}{\displaystyle \frac{1}{s_\beta ^2}}\left[\mathrm{log}{\displaystyle \frac{M_{\mathrm{SUSY}}^2}{m_t^2}}+{\displaystyle \frac{A_t^2}{M_{\mathrm{SUSY}}^2}}\left(1-{\displaystyle \frac{A_t^2}{12M_{\mathrm{SUSY}}^2}}\right)\right].`$ (8)
Here $`s_\beta ,c_\beta `$ denote $`\mathrm{sin}\beta ,\mathrm{cos}\beta `$. The bottom Yukawa coupling as well as the higgsino mass parameter have been neglected ($`\mu M_{\mathrm{SUSY}}`$). The orthogonal diagonalization of this mass matrix defines the CP even mixing angle $`\alpha `$. Only three parameters govern the Higgs sector: the pseudo-scalar Higgs mass, $`m_A`$, $`\mathrm{tan}\beta `$, and $`\epsilon `$, which describes the corrections arising from the supersymmetric top sector. For the scan of SUSY parameter space we will concentrate on two particular values of the trilinear mixing term, $`A_t=0`$ and $`A_t=\sqrt{6}M_{\mathrm{SUSY}}`$, which commonly are referred to as no mixing and maximal mixing.
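Equations (7)–(8) are simple enough to evaluate directly. The sketch below (our own illustration, not code from the original analysis) diagonalizes the mass matrix numerically; the input values $`m_t=175`$ GeV and $`m_Z=91.19`$ GeV and the sign convention for $`\alpha `$ are assumptions, and this leading-log approximation overestimates $`m_h`$ relative to the renormalization group improved calculation used in the text:

```python
import numpy as np

G_F = 1.16637e-5   # Fermi constant in GeV^-2

def cp_even_higgs(mA, tan_beta, MS=1000.0, At=0.0, mt=175.0, mZ=91.19):
    """m_h, m_H and the mixing angle alpha from the 'epsilon approximation'
    of Eqs. (7)-(8).  All masses in GeV."""
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    eps = (3.0 * mt**4 * G_F / (np.sqrt(2.0) * np.pi**2 * sb**2)
           * (np.log(MS**2 / mt**2)
              + (At**2 / MS**2) * (1.0 - At**2 / (12.0 * MS**2))))
    M2 = (mA**2 * np.array([[sb**2, -sb * cb], [-sb * cb, cb**2]])
          + mZ**2 * np.array([[cb**2, -sb * cb], [-sb * cb, sb**2]])
          + np.array([[0.0, 0.0], [0.0, eps]]))
    mH2, mh2 = np.sort(np.linalg.eigvalsh(M2))[::-1]
    alpha = 0.5 * np.arctan2(2.0 * M2[0, 1], M2[0, 0] - M2[1, 1])
    return np.sqrt(mh2), np.sqrt(mH2), alpha

# maximal-mixing plateau: large m_A, tan(beta) = 30, A_t = sqrt(6) M_SUSY
print(cp_even_higgs(mA=1000.0, tan_beta=30.0, At=np.sqrt(6.0) * 1000.0))
```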
Varying the pseudoscalar Higgs boson mass, one finds saturation for very large and very small values of $`m_A`$ – either $`m_h`$ or $`m_H`$ approach a plateau:
$`m_h^2`$ $`\simeq `$ $`m_Z^2(c_\beta ^2-s_\beta ^2)^2+s_\beta ^2\epsilon `$ $`\text{for }m_A\to \infty `$ (9)
$`m_H^2`$ $`\simeq `$ $`m_Z^2+s_\beta ^2\epsilon `$ $`\text{for }m_A\to 0.`$ (10)
For large values of $`\mathrm{tan}\beta `$ these plateaus meet at $`m_{h,H}^2\approx m_Z^2+\epsilon `$. Smaller $`\mathrm{tan}\beta `$ values decrease the asymptotic mass values and soften the transition region between the plateau behavior and the linear dependence of the scalar Higgs masses on $`m_A`$. These effects are shown in Fig. 1, where the variation of $`m_h`$ and $`m_H`$ with $`m_A`$ is shown for $`\mathrm{tan}\beta =4,30`$. The small $`\mathrm{tan}\beta `$ region will be constrained by the LEP2 analysis of $`Zh,ZH`$ associated production, essentially imposing lower bounds on $`\mathrm{tan}\beta `$ if no signal is observed.<sup>*</sup><sup>*</sup>*Although the search for MSSM Higgs bosons at the Tevatron is promising we only quote the $`Zh,ZH`$ analysis of LEP2 which is complementary to the LHC processes under consideration. The LEP2 reach is estimated by scaling the current limits for $`\mathcal{L}=158`$ pb<sup>-1</sup> and $`\sqrt{s}=189`$ GeV to $`\mathcal{L}=100`$ pb<sup>-1</sup> and $`\sqrt{s}=200`$ GeV.
The theoretical upper limit on the light Higgs boson mass, to two loop order, depends predominantly on the mixing parameter $`A_t`$, the higgsino mass parameter $`\mu `$ and the soft-breaking stop mass parameters, which we treat as being identical to a supersymmetry breaking mass scale: $`m_Q=m_U=M_{\mathrm{SUSY}}`$ . As shown in Fig. 1, the plateau mass value hardly exceeds $`\sim `$130 GeV, even for large values of $`\mathrm{tan}\beta `$, $`M_{\mathrm{SUSY}}=1`$ TeV, and maximal mixing . Theoretical limits arising from the current LEP and Tevatron squark searches as well as the expected results from $`Zh,ZH`$ production at LEP2 assure that the lowest plateau masses are well separated from the $`Z`$ mass peak.
The production of the CP even Higgs bosons in weak boson fusion is governed by the $`hWW,HWW`$ couplings, which, compared to the SM case, are suppressed by factors $`\mathrm{sin}(\beta -\alpha ),\mathrm{cos}(\beta -\alpha )`$, respectively . In the $`m_h`$ plateau region (large $`m_A`$), the mixing angle approaches $`\alpha =\beta -\pi /2`$, whereas in the $`m_H`$ plateau region (small $`m_A`$) one finds $`\alpha \approx -\beta `$. This yields asymptotic MSSM coupling factors of unity for $`h`$ production and $`|\mathrm{cos}(2\beta )|\gtrsim 0.8`$ for the $`H`$ channel, assuming $`\mathrm{tan}\beta \gtrsim 3`$. As a result, the production cross section of the plateau states in weak boson fusion is essentially of SM strength. In Fig. 1 the SUSY suppression factors for $`\sigma (qq\to qqh/H)`$, as compared to a SM Higgs boson of equal mass, are shown as a function of $`m_A`$. The weak boson fusion cross section is sizable mainly in the plateau regions, and here the $`h`$ or $`H`$ masses are in the interesting range where decays into $`\overline{b}b`$ and $`\tau ^+\tau ^{-}`$ are expected to dominate.
Crucial for the observability of a Higgs boson are the $`\tau \tau `$ or $`bb`$ couplings of the two resonances. Splitting the couplings into the SM prediction and a SUSY factor, they can be written as
$`h_{bbh}`$ $`={\displaystyle \frac{m_b}{v}}\left(-{\displaystyle \frac{\mathrm{sin}\alpha }{\mathrm{cos}\beta }}\right)`$ $`={\displaystyle \frac{m_b}{v}}\left(\mathrm{sin}(\beta -\alpha )-\mathrm{tan}\beta \mathrm{cos}(\beta -\alpha )\right),`$ (11)
$`h_{bbH}`$ $`={\displaystyle \frac{m_b}{v}}{\displaystyle \frac{\mathrm{cos}\alpha }{\mathrm{cos}\beta }}`$ $`={\displaystyle \frac{m_b}{v}}\left(\mathrm{cos}(\beta -\alpha )+\mathrm{tan}\beta \mathrm{sin}(\beta -\alpha )\right)`$ (12)
and analogously for the $`\tau `$ couplings. Since for effective production of $`h`$ and $`H`$ by weak boson fusion we need $`\mathrm{sin}^2(\beta -\alpha )\approx 1`$ and $`\mathrm{cos}^2(\beta -\alpha )\approx 1`$, respectively, the coupling of the observable resonance to $`\overline{b}b`$ and $`\tau \tau `$ is essentially of SM strength. The SUSY factors for the top and charm couplings are obtained by replacing $`\mathrm{tan}\beta \to -1/\mathrm{tan}\beta `$ in the final expressions above. They are not enhanced for $`\mathrm{tan}\beta >1`$. This leads to $`\overline{b}b`$ and $`\tau \tau `$ branching ratios very similar to the SM results. In fact, in the plateau regions they somewhat exceed the SM branching ratios for a given mass.
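These coupling modifications are simple to tabulate. The sketch below (our own summary of Eqs. (11)–(12) and their up-type analogues, not code from the original analysis) returns the SUSY factors relative to the SM; signal rates in weak boson fusion scale with the square of the $`hWW`$ or $`HWW`$ factor times the correspondingly modified branching ratio:

```python
import numpy as np

def susy_factors(alpha, beta):
    """SUSY multiplicative factors, relative to the SM, for the h and H
    couplings (Eqs. (11)-(12) and their up-type analogues)."""
    sba, cba, tb = np.sin(beta - alpha), np.cos(beta - alpha), np.tan(beta)
    return {
        "hWW": sba,            "HWW": cba,
        "hbb": sba - tb * cba, "Hbb": cba + tb * sba,   # same for tau tau
        "htt": sba + cba / tb, "Htt": cba - sba / tb,   # tan(b) -> -1/tan(b)
    }

# In the m_h plateau (alpha ~ beta - pi/2) all h factors approach unity:
print(susy_factors(alpha=np.arctan(30.0) - np.pi / 2, beta=np.arctan(30.0)))
```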
The $`\tau \tau h`$ and $`\tau \tau H`$ couplings vanish for $`\mathrm{sin}\alpha =0`$ and $`\mathrm{cos}\alpha =0`$, respectively, or $`\mathrm{sin}(2\alpha )=0`$. In leading order, as well as in the simple $`\epsilon `$-approximation given in eq.(8), this only happens in the unphysical limits $`\mathrm{tan}\beta =0,\mathrm{}`$. Including further off-diagonal contributions to the Higgs mass matrix might introduce a new parameter region for the mixing angle $`\alpha `$: the off-diagonal element of the Higgs mass matrix and thereby $`\mathrm{sin}(2\alpha )`$ can pass zero at finite $`m_A`$ and $`\mathrm{tan}\beta `$. Indeed, by also considering the dominant contribution with respect to $`(\mu /M_{\mathrm{SUSY}})`$, one finds
$`\left(\mathcal{M}^2\right)_{12}`$ $`=-m_A^2s_\beta c_\beta -m_Z^2s_\beta c_\beta \left[1-{\displaystyle \frac{\mathrm{tan}\beta }{8\pi ^2}}{\displaystyle \frac{h_t^4}{g^2}}{\displaystyle \frac{\mu A_t^3}{M_{\mathrm{SUSY}}^4}}\right],`$ (13)
$`\mathrm{sin}(2\alpha )`$ $`=2{\displaystyle \frac{\left(\mathcal{M}^2\right)_{12}}{m_H^2-m_h^2}},`$ (14)
and $`\mathrm{sin}(2\alpha )`$ may vanish in the physical region. The exact trajectory $`\mathrm{sin}(2\alpha )=0`$ in parameter space depends strongly on the approximation made in the perturbative expansion; we observe this behavior for large $`A_t\sim 3M_{\mathrm{SUSY}}`$, i.e. in part of the non-mSUGRA parameter space. If the observed Higgs sector turns out to be located in this parameter region, the vanishing coupling to $`bb,\tau \tau `$ would render the total widths small. This can dramatically increase the $`h/H\to \gamma \gamma `$ branching ratio, even though $`\mathrm{\Gamma }(h/H\to \gamma \gamma )`$ may be suppressed compared to the SM case. This situation is shown in Fig. 2, where the scalar masses and the $`\tau \tau `$ and $`\gamma \gamma `$ rates are shown as a function of $`A_t`$: the vanishing of the $`\tau \tau `$ rate is associated with a very large increase of $`\sigma B(\gamma \gamma )`$. Note that the variation of Higgs masses and decay properties with $`A_t`$ is quite mild in general, apart from this $`\mathrm{sin}(2\alpha )=0`$ effect.
## III Higgs Search in Weak Boson Fusion
Methods for the isolation of a SM Higgs boson signal in the weak boson fusion process ($`qq\to qqh`$, $`qq\to qqH`$ and crossing related processes) have been analyzed for the $`H\to \gamma \gamma `$ channel and for $`H\to \tau \tau `$ . The analysis for the MSSM is completely analogous: backgrounds are identical to the SM case and the changes for the signal, given by the SUSY factors for production cross sections and decay rates, have been discussed in the previous section.
For the $`h,H\to \gamma \gamma `$ signal, the backgrounds considered are $`\gamma \gamma jj`$ production from QCD and electroweak processes, and via double parton scattering . It was found that the backgrounds can be reduced to a level well below that of the signal, by tagging the two forward jets arising from the scattered (anti)quarks in weak boson scattering, and by exploiting the excellent $`\gamma \gamma `$ invariant mass resolution expected for the LHC detectors , of order 1 GeV.
For $`h,H\to \tau \tau `$ decays, only the semileptonic decay channel of the $`\tau `$ leptons, $`\tau \tau \to \ell ^\pm h^{\mp }\not p_\mathrm{T}`$, is considered, assuming the $`\tau `$-identification efficiencies and procedures described by ATLAS for the inclusive $`H,A\to \tau \tau `$ search . According to the ATLAS study, hadronic $`\tau `$ decays, producing a $`\tau `$ jet of $`E_\mathrm{T}>40`$ GeV, can be identified with an acceptance of 26% while rejecting hadronic jets with an efficiency of 99.75%. In weak boson fusion, and with the $`\tau `$ identification requirements of Refs. which ask for substantial transverse momenta of the charged $`\tau `$ decay products ($`p_\mathrm{T}(\ell ^\pm )>20`$ GeV and $`p_\mathrm{T}(h^{\mp })>40`$ GeV), the Higgs boson is produced at high $`p_\mathrm{T}`$. In the collinear $`\tau `$ decay approximation, this allows reconstruction of the $`\tau ^\pm `$ momenta from the directions of the decay products and the two measured components of the missing transverse momentum vector . Thus, the Higgs boson mass can be reconstructed in the $`\tau \tau `$ mode, with a mass resolution of order 10%, which provides for substantial background reduction as long as the Higgs resonance is not too close to the $`Z\to \tau \tau `$ peak.
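The collinear reconstruction itself reduces to a 2×2 linear system: the two measured components of the missing transverse momentum, together with the two visible directions, fix the momentum fractions $`x_1,x_2`$ carried by the visible decay products. A minimal sketch (our own; the input conventions are assumptions, and resolution effects and the singular back-to-back configuration are ignored):

```python
import numpy as np

def m_tautau_collinear(p1, p2, met):
    """tau-pair mass in the collinear approximation.
    p1, p2 : (px, py, pz, E) of the two visible tau decay products
    met    : (mex, mey) missing transverse momentum
    Returns m_tautau, or None for unphysical momentum fractions."""
    p1, p2, met = np.asarray(p1), np.asarray(p2), np.asarray(met)
    # met = (1/x1 - 1) p1T + (1/x2 - 1) p2T : solve for 1/x1, 1/x2
    A = np.array([[p1[0], p2[0]], [p1[1], p2[1]]])
    r1, r2 = np.linalg.solve(A, met + p1[:2] + p2[:2])
    if r1 < 1.0 or r2 < 1.0:          # x = 1/r must lie in (0, 1]
        return None
    x1, x2 = 1.0 / r1, 1.0 / r2
    m_vis2 = (p1[3] + p2[3]) ** 2 - np.sum((p1[:3] + p2[:3]) ** 2)
    return np.sqrt(max(m_vis2, 0.0) / (x1 * x2))
```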
With these $`\tau `$-identification criteria, and by using double forward jet tagging cuts similar to the $`h,H\to \gamma \gamma `$ study, the backgrounds can be reduced below the signal level, for SM Higgs boson masses between 105 and 150 GeV and within a 20 GeV invariant mass bin. Here, irreducible backgrounds from ‘$`Zjj`$ events’ with subsequent decay of the (virtual) $`Z,\gamma `$ into $`\tau `$ pairs, as well as reducible backgrounds with isolated hard leptons from $`Wj+jj`$ and $`b\overline{b}jj`$ events, have been considered. Moreover, it was shown that a further background reduction, to a level of about 10% of the signal, can be achieved by a veto on additional central jets of $`E_\mathrm{T}>20`$ GeV between the two tagging jets. This final cut makes use of the different gluon radiation patterns in the signal, which proceeds via color singlet exchange in the $`t`$-channel, and in the QCD backgrounds, which prefer to emit additional partons in the central region .
Using the SUSY factors of the last section for production cross sections and decay rates, one can directly translate the SM results into a discovery reach for supersymmetric Higgs bosons. The expected signal rates, $`\sigma B(h/H\to \tau \tau ,\gamma \gamma )`$, are shown in Figs. 1,2. They can be compared to SM rates, within cuts, of $`\sigma B(H\to \tau \tau )=0.35`$ fb and $`\sigma B(H\to \gamma \gamma )=2`$ fb for $`m_H=120`$ GeV. Except for the small parameter region where the $`\tau \tau `$ signal vanishes, and for very large values of $`m_A`$ (the decoupling limit), the $`\gamma \gamma `$ channel is not expected to be useful for the MSSM Higgs search in weak boson fusion. The $`\tau \tau `$ signal, on the other hand, compares favorably with the SM expectation over wide regions of parameter space. The SUSY factors for the production process determine the structure of $`\sigma B(h/H\to \tau \tau )`$. Apart from the typical flat behavior in the asymptotic plateau regions they strongly depend on $`\beta `$, in particular in the transition region, where all three neutral Higgs bosons have similar masses and where mixing effects are most pronounced.
Given the background rates determined in Ref. , which are of order 0.03 fb in a 20 GeV mass bin, except in the vicinity of the $`Z`$-peak, the expected significance of the $`h/H\tau \tau `$ signal can be determined. 5 $`\sigma `$ contours for an integrated luminosity of 100 fb<sup>-1</sup> are shown in Fig. 3, as a function of $`\mathrm{tan}\beta `$ and $`m_A`$. Here the significances are determined from the Poisson probabilities of background fluctuations . Weak boson fusion, followed by decay to $`\tau `$-pairs, provides for a highly significant signal of at least one of the CP even Higgs bosons. Even in the low $`\mathrm{tan}\beta `$ region, where LEP2 would discover the light Higgs boson, the weak boson fusion process at the LHC will give additional information. Most interesting is the transition region, where both $`h`$ and $`H`$ may be light enough to be observed via their $`\tau \tau `$ decay. A possible $`\tau \tau `$ invariant mass spectrum for this scenario, with backgrounds, is shown in Fig. 4. The observation of a triple peak, corresponding to $`Z`$, $`h`$ and $`H`$ decays to $`\tau \tau `$, requires very specific SUSY parameters, of course. Fig. 4 illustrates the cleanness of the weak boson fusion signal, however.
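For illustration, a Poisson significance of this kind can be computed as follows (our own sketch; the event numbers are simply $`\sigma B\times \mathcal{L}`$ for the SM point quoted above, before any further efficiency corrections):

```python
from scipy import stats

def poisson_significance(s, b):
    """Gaussian-equivalent significance of seeing n = s + b events when only
    b are expected, from the Poisson probability of an upward fluctuation."""
    p_value = stats.poisson.sf(s + b - 1, b)   # P(n >= s + b | b)
    return stats.norm.isf(p_value)

# SM point quoted above: 0.35 fb signal, ~0.03 fb background, 100 fb^-1
print(poisson_significance(35, 3))
```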
## IV Summary
We have shown that the production of CP even MSSM Higgs bosons in weak boson fusion and subsequent decay to $`\tau `$ pairs gives a significant ($`>5\sigma `$) signal at the LHC. This search, with $``$100 fb<sup>-1</sup> of integrated luminosity, and supplemented by the search for $`h/H\gamma \gamma `$ in weak boson fusion, should cover the entire MSSM parameter space left after an unsuccessful LEP2 search, with a significant overlap of LEP2 and LHC search regions. The two CERN searches combined provide a no-lose strategy by themselves for seeing a MSSM Higgs boson. At the very least, the weak boson fusion measurements provide valuable additional information on Higgs boson couplings.
Our analysis here and in Ref. should be considered as a proof of principle, not as an estimate of the ultimate sensitivity of the LHC experiments. A variety of possible improvements need to be analyzed further.
* For a Higgs resonance close to the $`Z`$ peak ($`m_h\lesssim 110`$ GeV) a shape analysis is needed to estimate the significance of the Higgs contribution. Our sensitivity estimates are solely based on event counting in a 20 GeV invariant mass bin.
* A trigger on the forward jets in weak boson fusion events might allow a reduction of the transverse momentum requirement for the $`\tau `$ decay lepton. A lower lepton $`p_\mathrm{T}`$ threshold would significantly increase the signal rate.
* The $`\tau `$ identification criteria and the rejection of the $`\overline{b}b`$ background has been optimized for the inclusive $`A/H\tau \tau `$ search , not for the weak boson fusion events considered here. Because of the lower backgrounds to the weak boson fusion process, some of the requirements can be relaxed, leading to a larger signal rate.
* Our analysis is based on parton level simulations. A full parton-shower analysis, including hadronization and detector effects, should be performed to optimize the cuts, and to assess efficiencies.
The present analysis relies only on the typical mixing behavior of the CP even mass eigenstates, and on the observability of a SM Higgs boson, of mass up to $``$150 GeV, in weak boson fusion. This suggests that the search discussed here might also cover an extended Higgs sector as well as somewhat higher plateau masses, e.g. for very large squark soft-breaking mass parameters. Because decays into $`\tau `$ pairs are tied to the dominant decay channel of the intermediate mass range Higgs boson, $`h/H\overline{b}b`$, the search for a $`\tau \tau `$ signal in weak boson fusion is robust and expected to give a clear Higgs signal in a wide class of models.
###### Acknowledgements.
This research was supported in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation and in part by the U. S. Department of Energy under Contract No. DE-FG02-95ER40896.
# Orbital Degree of Freedom and Phase Separation in Ferromagnetic Manganites at Finite Temperatures
## I Introduction
Doped perovskite manganites and their related compounds have attracted much attention, since they show not only the colossal magnetoresistance (CMR) but also many interesting phenomena such as a wide variety of magnetic structures, charge ordering and structural phase transitions. Although the ferromagnetic phase commonly appears in the manganites, its origin still remains to be clarified. Almost a half-century ago, the double exchange (DE) interaction was proposed to explain the close connection between the appearance of ferromagnetism and the metallic conductivity. In this scenario, the Hund coupling between carriers and localized spins is stressed. It has been recognized that the ferromagnetic metallic state in the highly doped region of La<sub>1-x</sub>$`A_x`$MnO<sub>3</sub> ($`x\gtrsim 0.3`$) with $`A`$ being a divalent ion is understood based on this scenario, where the compounds show a wide band width.
On the contrary, the DE scenario does not apply to the lightly doped region $`(x<0.2)`$ where the CMR effect is observed. In this region, the degeneracy of the $`e_g`$ orbitals in a $`\mathrm{Mn}^{3+}`$ ion exists and affects the physical properties. This degeneracy is called the orbital degree of freedom. Taking into account the orbital degree of freedom together with the electron correlation, an additional ferromagnetic interaction, namely the ferromagnetic superexchange (SE) interaction, is derived. This is associated with the alternate alignment of the orbitals, termed antiferro(AF)-type orbital ordering. The SE interaction dominates the ferromagnetic spin alignment observed in the $`ab`$-plane in $`\mathrm{LaMnO}_3`$ and the quasi two-dimensional dispersion relation of the spin wave in it. When holes are introduced into the insulating $`\mathrm{LaMnO}_3`$, successive transitions occur in the magnetic and transport phase diagrams; with increasing $`x`$, one observes in La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> the sequence: almost two dimensional ferromagnetic (A-type AF) insulator $`\to `$ isotropic ferromagnetic insulator $`\to `$ ferromagnetic metal. The first order phase transition between two ferromagnetic states recently discovered in La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> with $`x\simeq 0.12`$ indicates that the orbital state also changes at the transition. In order to understand the dramatic change of electronic states in the lightly doped region and its relation to CMR, it is indispensable to study the mutual relation between the two ferromagnetic interactions, i.e., DE and SE.
In this paper, we investigate the spin and orbital phase diagram as a function of temperature $`(T)`$ and hole concentration $`(x)`$. We focus on the competition and cooperation between the two ferromagnetic interactions SE and DE. We show that the SE and DE interactions dominate the ferromagnetic phases in the low and high concentration regions of doped holes, respectively, and favor the different orbital structures each other. Between the two phases, the phase separation (PS) appears in the wide range of $`x`$ and $`T`$. It is shown that the phase separation is promoted by the anisotropy in the orbital space. The spin and orbital phase diagram at $`T=0`$ was obtained by the Hartree-Fock theory and interpreted in terms of the SE and DE interactions in Ref. . The PS state between two ferromagnetic phases driven by the DE interaction and the Jahn-Teller distortion at $`T=0`$ was discussed in Ref. . In this paper, we obtain the PS state based on the model with strong correlation of electrons at finite $`T`$.
In Sect. II, the model Hamiltonian, where the electron correlation and the orbital degeneracy are taken into account, is introduced. In Sect. III, formulation to calculate the phase diagram at finite $`T`$ and $`x`$ is presented. Numerical results are shown in Sect. IV and the last section is devoted to summary and discussion.
## II Model
Let us consider the model Hamiltonian which describes the electronic structure in perovskite manganites. We set up a cubic lattice consisting of manganese ions. Two $`e_g`$ orbitals are introduced at each ion and the $`t_{2g}`$ electrons are treated as a localized spin $`(\vec{S}_{t_{2g}})`$ with $`S=3/2`$. Between $`e_g`$ electrons, three kinds of Coulomb interaction, that is, the intra-orbital Coulomb interaction ($`U`$), the inter-orbital one ($`U^{\prime }`$) and the exchange interaction ($`I`$), are taken into account. There also exist the Hund coupling ($`J_H`$) between $`e_g`$ and $`t_{2g}`$ spins and the electron transfer $`t_{ij}^{\gamma \gamma ^{\prime }}`$ between site $`i`$ with orbital $`\gamma `$ and site $`j`$ with $`\gamma ^{\prime }`$. Among these energies, the Coulomb interactions are the largest. Therefore, by excluding the doubly occupied states at each site, we derive the effective Hamiltonian describing the low energy spin and orbital states:
$$\mathcal{H}=\mathcal{H}_t+\mathcal{H}_J+\mathcal{H}_H+\mathcal{H}_{AF}.$$
(1)
The first and second terms correspond to the so-called $`t`$\- and $`J`$-terms in the $`tJ`$-model for $`e_g`$ electrons, respectively. These are given by
$`\mathcal{H}_t=\sum_{\langle ij\rangle \gamma \gamma ^{\prime }\sigma }t_{ij}^{\gamma \gamma ^{\prime }}\tilde{d}_{i\gamma \sigma }^{\dagger }\tilde{d}_{j\gamma ^{\prime }\sigma }+\mathrm{H.c.},`$ (2)
and
$`\mathcal{H}_J=-2J_1\sum_{\langle ij\rangle }\left({\displaystyle \frac{3}{4}}n_in_j+\vec{S}_i\cdot \vec{S}_j\right)\left({\displaystyle \frac{1}{4}}-\tau _i^l\tau _j^l\right)`$ (3)
$`-2J_2\sum_{\langle ij\rangle }\left({\displaystyle \frac{1}{4}}n_in_j-\vec{S}_i\cdot \vec{S}_j\right)\left({\displaystyle \frac{3}{4}}+\tau _i^l\tau _j^l+\tau _i^l+\tau _j^l\right),`$ (4)
where
$$\tau _i^l=\mathrm{cos}\left(\frac{2\pi }{3}n_l\right)T_{iz}-\mathrm{sin}\left(\frac{2\pi }{3}n_l\right)T_{ix},$$
(5)
and $`(n_x,n_y,n_z)=(1,2,3)`$. $`l`$ denotes the direction of the bond connecting sites $`i`$ and $`j`$. $`\tilde{d}_{i\gamma \sigma }`$ is the annihilation operator of an $`e_g`$ electron at site $`i`$ with spin $`\sigma `$ and orbital $`\gamma `$, with double occupancy excluded. $`\vec{S}_i`$ is the spin operator of the $`e_g`$ electron and $`\vec{T}_i`$ is the pseudo-spin operator for the orbital degree of freedom defined as $`\vec{T}_i=(1/2)\sum_{\sigma \gamma \gamma ^{\prime }}\tilde{d}_{i\gamma \sigma }^{\dagger }(\vec{\sigma })_{\gamma \gamma ^{\prime }}\tilde{d}_{i\gamma ^{\prime }\sigma }`$. $`J_1=t_0^2/(U^{\prime }-I)`$ and $`J_2=t_0^2/(U^{\prime }+I+2J_H)`$ where $`t_0`$ is the transfer intensity between $`d_{3z^2-r^2}`$ orbitals in the $`z`$-direction, and the relation $`U=U^{\prime }+I`$ is assumed. The orbital dependence of $`t_{ij}^{\gamma \gamma ^{\prime }}`$ is estimated from the Slater-Koster formulas.
$$\mathcal{H}_H=-J_H\underset{i}{\sum }\vec{S}_{t_{2g}i}\cdot \vec{S}_i,$$
(6)
and
$$\mathcal{H}_{AF}=J_{AF}\underset{\langle ij\rangle }{\sum }\vec{S}_{t_{2g}i}\cdot \vec{S}_{t_{2g}j}.$$
(7)
The detailed derivation of the Hamiltonian is presented in Ref.. Main features of the Hamiltonian are summarized as follows: 1) This is applicable to the doped manganites, as well as the undoped insulator. 2) Since $`J_1>J_2`$, the ferromagnetic state associated with the AF-type orbital order is stabilized by $`_J`$. Therefore, two kinds of the ferromagnetic interaction, that is, SE and DE are included in the model. 3) As seen in $`_J`$, the orbital pseudo-spin space is strongly anisotropic unlike the spin space.
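The anisotropy in point 3) can be made explicit in a few lines. The sketch below (our own numerical check of Eq. (5), not part of the original paper) constructs the three bond operators $`\tau _i^l`$ and verifies that they sum to zero, reflecting the cubic symmetry, while involving only $`T_z`$ and $`T_x`$, so no continuous rotation in the pseudo-spin space leaves $`\mathcal{H}_J`$ invariant:

```python
import numpy as np

Tx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Tz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def tau(l):
    """Bond-dependent orbital operator of Eq. (5); l = 1, 2, 3 for x, y, z."""
    phi = 2.0 * np.pi * l / 3.0
    return np.cos(phi) * Tz - np.sin(phi) * Tx

# the three bond operators sum to zero (cubic symmetry) ...
assert np.allclose(sum(tau(l) for l in (1, 2, 3)), 0.0)
# ... and the z-bond operator is pure T_z, as expected for e_g hopping:
assert np.allclose(tau(3), Tz)
```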
## III Formulation
In order to calculate the spin and orbital states at finite temperatures and investigate the phase separation, we generalize the mean field theory proposed by de Gennes. Hereafter, the spin ($`\vec{S}`$) and pseudo-spin ($`\vec{T}`$) variables are denoted by $`\vec{u}`$ in a unified fashion. In this theory, the spin and orbital pseudo-spin are treated as classical vectors as follows:
$$(S_i^x,S_i^y,S_i^z)=\frac{1}{2}(\mathrm{sin}\theta _i^s\mathrm{cos}\varphi _i^s,\mathrm{sin}\theta _i^s\mathrm{sin}\varphi _i^s,\mathrm{cos}\theta _i^s),$$
(8)
and
$$(T_i^x,T_i^y,T_i^z)=\frac{1}{2}(\mathrm{sin}\theta _i^t,0,\mathrm{cos}\theta _i^t),$$
(9)
where the motion of the pseudo-spin is assumed to be confined in the $`xz`$-plane. $`\theta _i^t`$ in Eq. (9) characterizes the orbital state at site $`i`$ as
$$|\theta _i^t\rangle =\mathrm{cos}(\theta _i^t/2)|d_{3z^2-r^2}\rangle +\mathrm{sin}(\theta _i^t/2)|d_{x^2-y^2}\rangle .$$
(10)
$`t_{2g}`$ spins are assumed to be parallel to the $`e_g`$ one. The thermal distributions of the spin and pseudo-spin are described by the distribution function which is a function of the relative angle between $`\stackrel{}{u}_i`$ and the mean field $`\stackrel{}{\lambda }_i^u`$,
$$w_i^u(\vec{u}_i)=\frac{1}{\nu ^u}\mathrm{exp}(\vec{\lambda }_i^u\cdot \vec{m}_i^u),$$
(11)
where $`\vec{m}_i^u\,(\equiv \vec{u}_i/|\vec{u}_i|)`$ is termed the spin (pseudo-spin) magnetization and the normalization factor is defined by
$$\nu ^s=\int _0^\pi d\theta \int _0^{2\pi }d\varphi \,\mathrm{exp}(\lambda ^s\mathrm{cos}\theta ),$$
(12)
and
$$\nu ^t=\int _0^{2\pi }d\theta \,\mathrm{exp}(\lambda ^t\mathrm{cos}\theta ).$$
(13)
The mean fields are assumed to be written as $`\stackrel{}{\lambda }_i^u=\lambda ^u(\mathrm{sin}\mathrm{\Theta }_i^u,0,\mathrm{cos}\mathrm{\Theta }_i^u)`$. By utilizing the distribution functions defined in Eq. (11), the expectation values of operators $`A_i(\stackrel{}{S})`$ and $`B_i(\stackrel{}{T})`$ are obtained as
$$\langle A_i\rangle _s=\int _0^\pi d\theta ^s\int _0^{2\pi }d\varphi ^s\,w_i^s(\vec{S}_i)A(\vec{S}),$$
(14)
and
$$\langle B_i\rangle _t=\int _0^{2\pi }d\theta ^t\,w_i^t(\vec{T}_i)B(\vec{T}),$$
(15)
respectively. In this scheme, the free energy is represented by summation of the expectation values of the Hamiltonian and the entropy of spin and pseudo-spin as follows:
$`\mathcal{F}=\langle \mathcal{H}\rangle -NT(𝒮^s+𝒮^t).`$ (16)
$`N`$ is the number of Mn ions and $`𝒮^u`$ is the entropy calculated by
$`𝒮^u=-\langle \mathrm{ln}w^u(\vec{u})\rangle _u.`$ (17)
By minimizing $`\mathcal{F}`$ with respect to $`\lambda _i^u`$ and $`\mathrm{\Theta }_i^u`$, the mean field solutions are obtained. It is briefly noted that the above formulation gives unphysical states at very low temperatures ($`T<T_{neg}\sim J_{1(2)}/10`$) where the entropy becomes negative. Therefore, we restrict our calculation to the region above $`T_{neg}`$. However, at $`T=0`$, the spin and orbital states are calculated without any trouble in the entropy, under the assumption of full polarization of spin and pseudo-spin.
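The single-site thermal averages behind Eqs. (11)–(15) are elementary functions of $`\lambda ^u`$. A short sketch (our own illustration; for the spin average we assume the solid-angle measure $`\mathrm{sin}\theta \,d\theta \,d\varphi `$, which yields the classical Langevin function, while the planar pseudo-spin average is a ratio of modified Bessel functions):

```python
import numpy as np
from scipy import special

def m_spin(lam):
    """<cos theta> for the classical spin: the Langevin function, assuming
    the solid-angle measure sin(theta) d(theta) d(phi)."""
    lam = np.asarray(lam, dtype=float)
    return np.where(lam < 1e-6, lam / 3.0, 1.0 / np.tanh(lam) - 1.0 / lam)

def m_pseudospin(lam):
    """<cos theta> for the planar orbital pseudo-spin of Eq. (13)."""
    return special.i1(lam) / special.i0(lam)

# both order parameters grow from 0 (disorder) to 1 (saturation):
for lam in (0.1, 1.0, 5.0, 20.0):
    print(lam, m_spin(lam), m_pseudospin(lam))
```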
Next, we concentrate on the calculation of $`\langle \mathcal{H}\rangle `$ in Eq. (16). As shown in Eq. (4), $`\mathcal{H}_J`$ is represented by $`\vec{S}`$ and $`\vec{T}`$. By introducing the rotating frame in the spin (pseudo-spin) space, the $`z`$-component of the spin (pseudo-spin) in this frame is given by
$$\stackrel{~}{u}_i^z=\mathrm{cos}\mathrm{\Theta }_i^uu_i^z+\mathrm{sin}\mathrm{\Theta }_i^uu_i^x,$$
(18)
which is parallel to the mean field $`\vec{\lambda }^u`$. Thus, $`\langle \tilde{u}_i^z\rangle _u`$ is adopted as the order parameter, which satisfies $`\langle \tilde{u}_i^z\rangle _u=\frac{1}{2}\langle \tilde{m}^{uz}\rangle _u`$. The spin part in $`\mathcal{H}_J`$ is rewritten by using $`\langle \tilde{m}^{sz}\rangle _s`$ and the relative angle in the spin space as $`\langle \tilde{m}^{sz}\rangle _s^2\mathrm{cos}(\mathrm{\Theta }_i^s-\mathrm{\Theta }_j^s)`$. On the other hand, the orbital part includes the term $`\mathrm{cos}(\mathrm{\Theta }_i^t+\mathrm{\Theta }_j^t)`$, which originates from the anisotropy in the orbital space. $`\mathcal{H}_{AF}`$ is also rewritten by using $`\langle \tilde{m}^{sz}\rangle _s`$ and $`\mathrm{\Theta }^s`$, under the relation between $`\langle \vec{S}\rangle _s`$ and $`\langle \vec{S}_{t_{2g}}\rangle _s`$ implied by their parallel alignment. As for the transfer term $`\mathcal{H}_t`$, we introduce the rotating frame and decompose the electron operator as $`\tilde{d}_{i\gamma \sigma }=h_i^{\dagger }z_{i\sigma }^sz_{i\gamma }^t`$ where $`h_i^{\dagger }`$ is a spin-less and orbital-less fermion (holon) creation operator and $`z_{i\sigma }^s`$ and $`z_{i\gamma }^t`$ are the elements of the unitary matrices ($`U^s`$, $`U^t`$) in the spin and pseudo-spin frames, respectively. These are defined by
$$U^u=\left(\begin{array}{cc}z_{i\uparrow }^u& -z_{i\downarrow }^{u\ast }\\ z_{i\downarrow }^u& z_{i\uparrow }^{u\ast }\end{array}\right),$$
(19)
with $`z_{i\uparrow }^s=\mathrm{cos}(\theta _i^s/2)e^{-\mathrm{i}\varphi _i^s/2}`$ and $`z_{i\downarrow }^s=\mathrm{sin}(\theta _i^s/2)e^{\mathrm{i}\varphi _i^s/2}`$ for spin, and $`z_{i\uparrow }^t=\mathrm{cos}(\theta _i^t/2)`$ and $`z_{i\downarrow }^t=\mathrm{sin}(\theta _i^t/2)`$ for orbital. By using this form, $`\mathcal{H}_t`$ is rewritten as
$$\mathcal{H}_t=\underset{\langle ij\rangle }{\sum }t_{ij}^st_{ij}^th_ih_j^{\dagger }+\mathrm{H.c.},$$
(20)
with $`t_{ij}^s=\sum_\sigma z_{i\sigma }^{s\ast }z_{j\sigma }^s`$, and $`t_{ij}^t=\sum_{\gamma \gamma ^{\prime }}z_{i\gamma }^tt_{ij}^{\gamma \gamma ^{\prime }}z_{j\gamma ^{\prime }}^t`$. The former gives $`e^{\mathrm{i}(\varphi _i^s-\varphi _j^s)/2}\mathrm{cos}(\theta _i^s/2)\mathrm{cos}(\theta _j^s/2)+e^{-\mathrm{i}(\varphi _i^s-\varphi _j^s)/2}\mathrm{sin}(\theta _i^s/2)\mathrm{sin}(\theta _j^s/2)`$, as expected from the double exchange interaction. By diagonalizing the energy in momentum space, $`\mathcal{H}_t`$ is given by
$$\mathcal{H}_t=\sum_{\vec{k}}\sum_{l=1}^{N_l}\epsilon _{\vec{k}}^lh_{l\vec{k}}^{\dagger }h_{l\vec{k}},$$
(21)
where $`l`$ indicates the band of $`h_{l\stackrel{}{k}}`$ and $`N_l`$ is the number of the bands. $`\epsilon _\stackrel{}{k}^l`$ corresponds to the energy of the $`l`$-th band. As a result, the expectation value of $`_t`$ per site is obtained by
$$E_t=\frac{1}{N}\sum_{\vec{k}}\sum_{l=1}^{N_l}\epsilon _{\vec{k}}^lf_F(\epsilon _{\vec{k}}^l-\epsilon _F),$$
(22)
which is a function of the spin and pseudo-spin angles at each site, $`\{\mathrm{\Theta }_i^s\}`$ and $`\{\mathrm{\Theta }_i^t\}`$, and the amplitudes of the mean fields, $`\lambda ^s`$ and $`\lambda ^t`$. $`\epsilon _F`$ in Eq. (22) is the Fermi energy of the $`h_{l\vec{k}}`$ fermions, determined by the equation
$`x={\displaystyle \frac{1}{N}}\sum_{\vec{k}}\sum_{l=1}^{N_l}f_F(\epsilon _{\vec{k}}^l-\epsilon _F),`$ (23)
where $`f_F(\epsilon )`$ is the Fermi distribution function.
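Since the filling is monotonic in $`\epsilon _F`$, Eq. (23) is a one-dimensional root-finding problem, conveniently solved by bisection. A minimal sketch (our own; the simple-cubic test band and mesh size are assumptions made only for the example):

```python
import numpy as np

def fermi_level(eps, x, n_sites, T, n_iter=200):
    """Solve Eq. (23) for eps_F by bisection: the thermally occupied holon
    states per site must equal the hole concentration x.
    eps : flat array of all band energies eps_k^l on the k-mesh."""
    def filling(mu):
        z = np.clip((eps - mu) / T, -500.0, 500.0)   # avoid exp overflow
        return np.sum(1.0 / (np.exp(z) + 1.0)) / n_sites
    lo, hi = eps.min() - 50.0 * T, eps.max() + 50.0 * T
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if filling(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

# example: a simple-cubic tight-binding band on an 8x8x8 mesh (units of t0)
k = 2.0 * np.pi * np.arange(8) / 8.0
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
band = -2.0 * (np.cos(kx) + np.cos(ky) + np.cos(kz))
eF = fermi_level(band.ravel(), x=0.2, n_sites=band.size, T=0.05)
```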
## IV Numerical results
### A. Phase diagram at $`T=0`$
In this subsection, we show the numerical results at $`T=0`$. For examining both spin and orbital orderings, two kinds of sublattice are introduced. We assume ferromagnetic (F)-type and three kinds of antiferro (AF)-type spin (pseudo-spin) orderings, which are layer (A)-type, rod (C)-type and NaCl (G)-type.
In Fig. 1(a), the ground state energy $`(E_{GS})`$ is shown as a function of hole concentration ($`x`$) for several values of $`J_{AF}/t_0`$. Double or multiple minima appear in the $`E_{GS}`$-$`x`$ curve depending on the value of $`J_{AF}/t_0`$. Therefore, the homogeneous phase is not stable against the phase separation. This feature is remarkable in the region of $`0.1<x<0.4`$. In Fig. 1(b), $`E_{GS}`$ is decomposed into $`\langle \mathcal{H}_t\rangle `$ and $`\langle \mathcal{H}_J\rangle `$ for $`J_{AF}/t_0=0`$. By drawing a tangent line in the $`E_{GS}`$-$`x`$ curve as shown in Fig. 1(a), the phase separation is obtained. By using the so-called Maxwell construction, the phase diagram at $`T=0`$ is obtained in the plane of $`J_{AF}`$ and $`x`$ (Fig. 2). The parameter values are chosen to be $`J_1/t_0=0.25`$ and $`J_2/t_0=0.0625`$. $`J_{AF}/t_0`$ for manganites is estimated from the Néel temperature in $`\mathrm{CaMnO}_3`$ to be $`0.001-0.01`$. Let us consider the case of $`J_{AF}/t_0=0.004`$. With doping of holes, the magnetic structure is changed as A-AF $`\to `$ PS(A-AF/F<sub>1</sub>) $`\to `$ F<sub>1</sub> $`\to `$ PS(F<sub>1</sub>/F<sub>2</sub>) $`\to `$ F<sub>2</sub>, where PS(A/B) implies phase separation between the A and B phases. The canted spin structure does not appear. F<sub>1</sub> and F<sub>2</sub> are the two kinds of ferromagnetic phase discussed below in more detail. Between the F<sub>1</sub> and F<sub>2</sub> phases, the PS state appears and dominates a large region of the phase diagram. For example, at $`x=0.2`$, the F<sub>1</sub> and F<sub>2</sub> phases coexist with volume fractions of $`60\%`$ and $`40\%`$, respectively. We also find the PS state between the A-AF and F<sub>1</sub> phases in the region of $`0.0<x<0.03`$.
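The Maxwell construction used above amounts to replacing $`E_{GS}(x)`$ by its lower convex hull: compositions where the curve lies above the hull separate into the two nearest hull points. A compact sketch (our own implementation; $`x`$ and $`E`$ are assumed to be sampled over homogeneous-phase solutions):

```python
import numpy as np

def lower_hull(x, E):
    """Indices of the lower convex hull of E(x) (Maxwell construction).
    Points off the hull are unstable against phase separation."""
    hull = []
    for i in np.argsort(x):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            turn = ((x[i2] - x[i1]) * (E[i] - E[i1])
                    - (E[i2] - E[i1]) * (x[i] - x[i1]))
            if turn <= 0.0:     # non-convex kink -> drop the middle point
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

# any composition between consecutive hull points that is itself off the
# hull lies in a phase-separated region (the tangent lines of Fig. 1(a)).
```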
Now we focus on the two kinds of ferromagnetic phase and the PS state between them. The F<sub>1</sub> and F<sub>2</sub> phases originate from the SE interaction between $`e_g`$ orbitals and the DE one, respectively. The two interactions favor different types of orbital ordering, as shown in Fig. 2. These are the C-type with $`(\theta _A^t/\theta _B^t)=(\pi /2,3\pi /2)`$ and the A-type with $`(\theta _A^t/\theta _B^t)=(\pi /6,\pi /6)`$, respectively, where $`\theta _{A(B)}^t`$ is the angle in the orbital space in the $`A(B)`$ sublattice. It is known that the AF-type orbital ordering obtained in the F<sub>1</sub> phase is favorable to the ferromagnetic SE interaction through the coupling between spin and orbital degrees of freedom in $`\mathcal{H}_J`$. On the other hand, the F-type orbital ordering promotes the DE interaction by increasing the gain of the kinetic energy. To show the relation between the orbital ordering and the kinetic energy, we present the density of states (DOS) of the spin-less and orbital-less fermions in the F<sub>1</sub> and F<sub>2</sub> phases in Fig. 3(a) and (b), respectively. It is clearly shown that the band width in the F<sub>2</sub> phase is larger than that in the F<sub>1</sub> phase. In addition, the DOS in the F<sub>2</sub> phase has a broad peak around $`-2<\omega /t_0<-0.8`$ which results from the quasi-one dimensional orbital ordering. Because of this structure in the DOS, the kinetic energy decreases further in the F<sub>2</sub> phase than in the F<sub>1</sub> phase.
In order to investigate the stability of the PS state appearing between the F<sub>1</sub> and F<sub>2</sub> phases, the ground state energy is decomposed into the contributions from the SE interaction ($`\langle \mathcal{H}_J\rangle `$) and the DE one ($`\langle \mathcal{H}_t\rangle `$) (see Fig. 1(b)). We find that with increasing $`x`$, $`\langle \mathcal{H}_J\rangle `$ increases and $`\langle \mathcal{H}_t\rangle `$ decreases. Several kinks appear in the $`\langle \mathcal{H}_J\rangle `$-$`x`$ and $`\langle \mathcal{H}_t\rangle `$-$`x`$ curves, which imply a discontinuous change of the state with changing $`x`$. The PS(F<sub>1</sub>/F<sub>2</sub>) state shown in Fig. 2 corresponds to the region where the two ferromagnetic interactions compete with each other and the discontinuous changes appear in the $`\langle \mathcal{H}_{J(t)}\rangle `$-$`x`$ curves. In Fig. 4, we present the $`x`$ dependence of the orbital state assuming the homogeneous phase. It is clearly shown that the discontinuous change of the $`\langle \mathcal{H}_{J(t)}\rangle `$-$`x`$ curve is ascribed to that of the orbital state. In particular, in the phase-I and -II, the symmetry of the orbital is lower than that in the F<sub>1</sub> and F<sub>2</sub> phases and the stripe-type (quasi one dimensional) and sheet-type (two dimensional) charge disproportionation is realized, respectively. These remarkable features originate from the anisotropy in the orbital pseudo-spin space. We also note that because of the anisotropy, the orbital state does not change continuously from F<sub>1</sub> to F<sub>2</sub>. In summary, the main origin of the PS state in the ferromagnetic state is 1) the existence of two kinds of ferromagnetic interaction which favor different types of orbital state, and 2) the discontinuous change of the orbital state due to the anisotropy in the orbital space, unlike the spin case.
### B. Phase diagram at finite $`T`$
In this subsection, we show the numerical results at finite $`T`$ and discuss how the PS state changes with $`T`$. As the order parameter of spin, we assume the ferromagnetic ordering and focus on the F<sub>1</sub> and F<sub>2</sub> phases and the PS state between them. We consider the G- and F-type orbital orderings which are enough to discuss the orbital state in the ferromagnetic state of the present interest.
In Fig. 5, the phase diagram at finite $`T`$ is presented, where the homogeneous phase is assumed. Parameter values are chosen to be $`J_{AF}/t_0=0.004`$, $`J_1/t_0=0.25`$ and $`J_2/t_0=0.0625`$. At $`x=0.0`$, the orbital ordering temperature ($`T_{OO}`$) is higher than the ferromagnetic Curie temperature ($`T_C`$), because the interaction between orbitals $`(3J_1/2)`$ in the paramagnetic state is larger than that between spins $`(J_1/2)`$ in the orbital disordered state, as seen in the first term in $`\mathcal{H}_J`$. With increasing $`x`$, $`T_C`$ monotonically increases. On the other hand, $`T_{OO}`$ decreases and takes its minimum around $`x\simeq 0.25`$. This is the consequence of the change of orbital ordering from G-type to F-type. The G- and F-type orbital orderings are favorable to the SE and DE interactions, respectively, so that the orderings occur in the lower and higher $`x`$ regions. In Fig. 6(a), we present the free energy as a function of $`x`$ at several temperatures. For $`T/t_0<0.025`$, the double minima around $`x=0.1`$ and $`0.4`$ exist, as discussed in the previous subsection for $`T=0`$. With increasing $`T`$, the double minima are gradually smeared out and a new local minimum appears around $`x=0.3`$. It implies that another phase becomes stable around $`x=0.3`$ and two different kinds of PS state appear at this temperature. With further increasing temperature, several shallow minima appear in the $`\mathcal{F}`$-$`x`$ curve. Finally, the fine structure disappears and the homogeneous phase becomes stable in the whole region of $`x`$. In Fig. 6(b), the free energy is decomposed into the contributions from $`-T𝒮`$, $`\langle \mathcal{H}_t\rangle `$ and $`\langle \mathcal{H}_J\rangle `$ at $`T/t_0=0.04`$.
By applying the Maxwell construction to the free energy presented in Fig. 6(a), the PS states are obtained and presented in Fig. 7. The PS states dominate a large area of the $`x`$-$`T`$ plane. A variety of PS states appears with several types of spin and orbital states. Each PS state is represented by the combination of spin and orbital states, such as PS(spin-P, orbital-G/spin-F, orbital-P) for PS-III and PS(spin-F, orbital-G/spin-F, orbital-F)=PS(F<sub>1</sub>/F<sub>2</sub>) for PS-VII. Here, P indicates the paramagnetic (spin) or disordered (orbital) state. It is mentioned that the phase diagram in Fig. 7 has much in common with that of eutectic alloys. For example, let us focus on the region below $`T/t_0=0.05`$. Here, the F<sub>1</sub> and F<sub>2</sub> phases and PS-VII correspond to two kinds of homogeneous solid phase, termed A and B, and the PS state between them (PS(A/B)) in binary alloys, respectively. In the case of binary alloys, the liquid (L) phase becomes stable at high temperatures due to the entropy. Thus, with increasing temperature, the successive transitions PS(A/B) $`\to `$ PS(L/A(B)) $`\to `$ L occur. The states L, PS(L/A) and PS(L/B) correspond to the (spin-F, orbital-P) phase, PS-V, and PS-VI in Fig. 7, respectively. By the analogy between the two systems, the point at $`T/t_0=0.025`$ and $`x=0.27`$ corresponds to the eutectic point. In the $`\mathcal{F}`$-$`x`$ curve shown in Fig. 6, the above three states are reflected in the three minima observed at $`T/t_0=0.04`$. By decomposing the free energy into the three terms $`\langle \mathcal{H}_J\rangle `$, $`\langle \mathcal{H}_t\rangle `$ and $`-T𝒮`$, we confirm that the middle part, corresponding to the (spin-F, orbital-P) phase, is stabilized by the entropy.
In Fig. 8, we present the effects of the magnetic field ($`B`$) on the phase diagram. The magnitude of the applied magnetic field is chosen to be $`g\mu _BB/t_0=0.02`$ which corresponds to $`\sim 50`$ Tesla for $`t_0=0.3\,\mathrm{eV}`$ and $`g=2`$. We find that the PS state shrinks in the magnetic field. The remarkable change is observed in PS-II and III where the spin-F and -P phases coexist. The magnetic field stabilizes the ferromagnetic phase so that these PS states are replaced by PS-V and the uniform ferromagnetic state. The region of PS-VII (PS(F<sub>1</sub>/F<sub>2</sub>)) is also suppressed in the magnetic field. Because the magnitude of the magnetization in the F<sub>1</sub> phase is smaller than that in the F<sub>2</sub> phase, the magnetic field increases the magnetization and stabilizes the F<sub>1</sub> phase.
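The field scale quoted here follows directly from $`g\mu _BB=0.02\,t_0`$; a one-line check (our own arithmetic, with the standard value of the Bohr magneton assumed):

```python
mu_B = 5.788e-5          # Bohr magneton in eV/T
t0, g = 0.3, 2.0         # eV and g-factor, as assumed in the text
B = 0.02 * t0 / (g * mu_B)
print(B)                 # ~52 T, i.e. about 50 Tesla
```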
## V summary and discussion
In this paper, we study the spin and orbital phase diagram for perovskite manganites at finite $`T`$ and $`x`$. In particular, we pay our attention to two kinds of ferromagnetic phase appearing at different hole concentrations. The SE and DE interactions dominate the ferromagnetic phases in the lower and higher $`x`$ and favor the AF- and F-type orbital orderings, respectively. Between the phases, the two interactions compete with each other and the phases are unstable against the phase separation. The PS states at finite $`T`$ have much analogy with that in the binary alloys.
It is worthwhile to compare PS(F<sub>1</sub>/F<sub>2</sub>) with PS(AF/F). As shown in Fig. 2 ($`J_{AF}/t_0=0.004`$), PS(F<sub>1</sub>/F<sub>2</sub>) appears in a region of higher $`x`$ than PS(A-AF/F). This originates from the following sequential change of the state with doping of holes: I:(spin-A, orbital-G) $`\to `$ II:(spin-F, orbital-G) $`\to `$ III:(spin-F, orbital-F). The orbital state changes at higher $`x`$ than the spin state. As a result, PS(A-AF/F) and PS(F<sub>1</sub>/F<sub>2</sub>) appear between I and II, and between II and III, respectively. This is because 1) at $`x=0`$, the ferromagnetic interaction between spins is weaker than the AF one between orbitals, as mentioned in Sect. IV B, and 2) at $`x=0`$, the AF interaction along the $`c`$-axis is much weaker than the ferromagnetic one in the $`ab`$-plane. We also notice in Fig. 2 that PS(F<sub>1</sub>/F<sub>2</sub>) dominates a larger region of the phase diagram than PS(A-AF/F). This mainly results from the anisotropy in the orbital pseudo-spin space. As shown in Fig. 4, $`\theta _{A(B)}^t`$ indicating the orbital state changes discontinuously with $`x`$ in the region of $`0.06<x<0.41`$. A continuous change from F<sub>1</sub> to F<sub>2</sub> is prevented by the anisotropy in the orbital space. This is in sharp contrast to the spin case, where the incommensurate and/or flux states associated with a continuous change of the spin angle become more stable than some PS states. The anisotropy in the orbital space also stabilizes the homogeneous state in the region of $`x<0.06`$. On the other hand, PS(A-AF/F) appears upon doping of infinitesimal holes. Furthermore, the microscopic charge segregation appearing in the phase-I and -II (Fig. 4) is also due to the orbital degree of freedom. Here, the stripe- or sheet-type charge disproportionation is realized and the SE and DE interactions dominate different microscopic regions (bonds). These unique phases are ascribed to the dimensionality control of charge carriers through the orbital orderings. It is mentioned that when the orbital degree of freedom is taken into account, the PS(AF/F) discussed in the double exchange model is suppressed. This is because A-AF is realized at $`x=0`$ instead of G-AF and the ratio of the band widths between A-AF and F is $`W_{AF}/W_F=2/3`$. This ratio is much larger than that between G-AF and F, which is of the order of $`O(t_0/J_H)`$. Therefore, the PS region, where the compressibility $`\kappa =(\partial \mu /\partial x)^{-1}`$ is negative, shrinks. The $`(d_{3x^2-r^2}/d_{3y^2-r^2})`$-type orbital ordering expected from the lattice distortion in $`\mathrm{LaMnO}_3`$ further enhances $`W_{AF}/W_F`$, because the transfer intensity along the $`c`$-axis is reduced in this ordering.
It should be noticed that the following effects may suppress the phase separation discussed in this paper. In the present calculation, the order parameters for spin and orbital are restricted to a cube consisting of $`2\times 2\times 2`$ Mn ions. Other types of ordering become candidates for solutions with lower energy, especially in the lightly doped region. However, orbital ordering with a long periodicity is less important in comparison with that in the spin case. Orbital ordering associated with a continuous change of the pseudo-spin is prohibited by the anisotropy, as discussed above. Neither the quantum fluctuations neglected in the mean field theory nor the long range Coulomb interaction favor the phase separation. When these effects are taken into account, the area of PS in the $`x`$-$`T`$ plane shrinks and certain regions will be replaced by homogeneous phases. In this case, it is expected that the phases with microscopic charge segregation, such as the phase-I and -II shown in Fig. 4, remain, instead of the macroscopic phase separation.
For observation of the PS(F<sub>1</sub>/F<sub>2</sub>) state proposed in this paper, the most direct probe is the resonant x-ray scattering which has recently been developed as a technique to observe orbital ordering. Here, detailed measurements at several orbital reflection points are required to confirm PS where different orbital orderings coexist. Observation of the inhomogeneous lattice distortion is also considered as one of the evidences of PS(F<sub>1</sub>/F<sub>2</sub>), although it is an indirect one. Several experimental results have reported an inhomogeneity in the lattice degree of freedom. In La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, two kinds of Mn-O bond with different lengths are observed by pair distribution function analyses. These values are almost independent of $`x`$, although the averaged orthorhombicity decreases with $`x`$. Since the two kinds of bond are observed far below $`T_C`$ where the magnetization is almost saturated, PS(AF/F) is excluded and PS with different orbital orderings explains the experimental results. More direct evidence of PS was reported by synchrotron x-ray diffraction in $`\mathrm{La}_{0.88}\mathrm{Sr}_{0.12}\mathrm{MnO}_3`$. Below 350K, some of the diffraction peaks split and a minor phase with 20$`\%`$ volume fraction appears. This phase shows a larger orthorhombic distortion than the major one in the region of $`105\mathrm{K}<T<350\mathrm{K}`$. Thus, the experimental data are consistent with the existence of PS(F<sub>1</sub>/F<sub>2</sub>) where the major and minor phases correspond to the F<sub>2</sub> and F<sub>1</sub> phases, respectively. In this compound, the first order phase transition from ferromagnetic insulator to ferromagnetic metal occurs at $`T=145\mathrm{K}`$. Through systematic experiments, it has been revealed that this magnetic transition is ascribed to the transition between orbital ordering and disordering. The experimental results strongly suggest that the two different interactions, i.e., SE and DE, are involved in the transition, and the unconventional experimental results are understood in terms of these interactions. Further experimental and theoretical investigations are desired to clarify the roles of the PS state in these unconventional phenomena.
###### Acknowledgements.
Authors would like to thank Y. Endoh, K. Hirota and H. Nojiri for their valuable discussions. This work was supported by a Grant-in-Aid from the Ministry of Education, Science and Culture of Japan, CREST and NEDO. S.O. acknowledges the financial support of the JSPS Research Fellowships for Young Scientists. Part of the numerical calculation was performed on the HITAC S-3800/380 supercomputing facilities in IMR, Tohoku University.
# EeV Neutrinos
## Acknowledgments
We are indebted to Máximo Ave, Gonzalo Parente and Ricardo Vázquez who have collaborated with work reported here and we thank CICYT (AEN96-1773) and Xunta de Galicia (XUGA-20602B98) for supporting this research. J. A. thanks the Xunta de Galicia for financial support.
# THE PROPERTIES OF THE CRITICAL POINT IN ELECTROWEAK THEORY

*Presented at SEWM’98, Copenhagen, 2.–5.12.1998. Based on Ref. [1].*
## Acknowledgments
The work reported here was done in collaboration with K. Kajantie, M. Laine, K. Rummukainen and M. Shaposhnikov . This work was partly supported by the TMR network Finite Temperature Phase Transitions in Particle Physics, EU contract no. FMRX-CT97-0122, and by the Russian Foundation for Basic Research.
# Many-electron tunneling in atoms
## 1 Introduction
Many-electron ionization of atoms by a laser field was first observed by Suran and Zapesochny in alkaline-earth atoms (for a review of that work, as well as some earlier ones, see ). At present, such studies form one of the main directions in the physics of strong-field interaction with atoms .
A number of theoretical models were proposed to interpret the accumulated experimental data. Some models deal with the direct influence of laser radiation on the atomic electrons , while others attribute highly-stripped ion formation to inelastic scattering of previously emitted electrons on the parent ion . These models explain a number of observed features of the phenomenon . Nevertheless, there are difficulties in the theoretical description of highly-stripped ion formation in a laser field that is not related to inelastic collisions . Due to these difficulties, the above mechanisms cannot fully explain the experiments.
At the same time, it is a well-known fact that single-charged ion formation by a laser field in the tunnelling regime can be satisfactorily described by the relatively simple formulae of the ADK theory . An empirical generalization of the ADK formulae describing highly-stripped ion formation was proposed in . It is therefore reasonable to generalize the available theory of tunnelling in atoms to the case of non-sequential multiple ionization of an atom. The solution of this problem is the objective of the present work.
Obviously, the Josephson effect can be considered a solid-state analogue of the phenomenon under consideration. Some considerations on the difference between one- and many-particle tunnelling are mentioned in reference . Comparison of these considerations with the results of the present work shows that this difference, for tunnelling in atoms, is not as trivial as described in .
## 2 Asymptotics of the many-electron wave function
Let us recall some facts that make the main proposed concepts easier to understand. To describe optical transitions in complex atoms, Bates and Damgaard modified the Slater method . Basically, the nodeless character of the Slater orbitals was retained. Unlike the Slater method, the effective nuclear charge ceases to be a fitting parameter for valence electrons in the atom, since it coincides with the residual ion charge. The effective principal quantum number, in turn, is uniquely determined by the electron coupling energy. Thus the asymptotic region of electron motion is considered, where the atomic potential has a Coulomb shape. The high accuracy of oscillator-strength calculations using the Bates–Damgaard method, and its clear physical justification, allow the method to be used in calculations of other atomic characteristics determined by large electron–nucleus distances.
The tunnelling probability is also determined by large electron–nucleus distances, where the energy of the electron interaction with the external field becomes comparable with the attraction energy of the residual ion. So the Bates–Damgaard method can be used to describe the tunnelling effect. Such a procedure was developed in a recent work for tunnelling calculations in Rydberg molecules. That work also presents estimates of the applicability conditions of the method.
Let $`N`$ equivalent (i.e. belonging to the same atomic shell) electrons be removed from the atom via tunnelling. Then the asymptotic behaviour of the radial part of the $`N`$-electron wavefunction in the Bates–Damgaard approximation is determined by the product of properly symmetrized one-electron function asymptotics:
$`\psi _{\nu lm}(𝒓)\simeq C_{\nu l}b^{-3/2}\left({\displaystyle \frac{r}{b}}\right)^{\nu -1}\mathrm{exp}\left(-{\displaystyle \frac{r}{b}}\right)Y_{lm}\left({\displaystyle \frac{𝒓}{r}}\right),`$ (1)
$`C_{\nu l}=(2\pi \nu )^{-1/2}\left({\displaystyle \frac{2}{\nu }}\right)^\nu L(\epsilon ),\qquad L(\epsilon )=\left({\displaystyle \frac{1-\epsilon }{1+\epsilon }}\right)^{\frac{1}{2}(l+1/2)}(1-\epsilon ^2)^{-\nu /2}.`$
Here $`b=a\nu /Z`$, $`Z`$ is the residual ion charge, $`a=\hbar ^2/\mu e^2`$ is the Bohr radius, $`\mu `$, $`e`$ are the mass of the electron and the absolute value of its charge, and $`\epsilon =(l+\frac{1}{2})/\nu `$. The $`C_{\nu l}`$ constant in (1) is determined in the quasiclassical approximation without imposing the condition $`l\ll \nu `$, which was required in . This results in the appearance of the $`L(\epsilon )`$ function, with $`L(\epsilon )\to 1`$ as $`\epsilon \to 0`$. After passage to this limit the expression (1) for the $`C_{\nu l}`$ constant turns into the formula (11) of the reference (with an inaccuracy corrected: the number $`\mathrm{e}=2.718\mathrm{}`$ should be omitted).
The expression (1) for $`C_{\nu l}`$ is obtained under $`\epsilon <1`$. For $`\epsilon >1`$, the quasiclassical approximation is not valid, so calculation of $`C_{\nu l}`$ requires numerical approaches (see, e. g., ).
The principal quantum number $`\nu `$ is determined by the electron coupling energy. Denoting the first, second, etc. ionization potentials of the atom as $`E_1/e`$, $`E_2/e,\mathrm{}`$, the principal quantum number of the $`j`$-th removed electron is
$$\nu _j=\left(\frac{2aE_j}{Z^2e^2}\right)^{-1/2}.$$
If the electrons are equivalent and are *simultaneously* removed from the atom, then for all the electrons
$$\nu =\left(\frac{2aE_N}{NZ^2e^2}\right)^{-1/2},$$
(2)
where
$$E_N=\underset{j=1}{\overset{N}{}}E_j$$
is the coupling energy of $`N`$ electrons. Note that in framework of the considered model, the asymptotic behaviour of the bound electron wave function (1) *depends* on the number of the removed electrons. So a partial account is provided for many-electron effects in the initial state.
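For orientation, Eq. (2) is easily evaluated from tabulated ionization potentials. A minimal sketch in atomic units (our own illustration; the magnesium potentials are quoted only as an example, and the two-electron removal leaves $`Z=2`$):

```python
HARTREE_EV = 27.2114

def nu_bundle(potentials_eV, Z):
    """Effective principal quantum number of Eq. (2) for the simultaneous
    removal of the N electrons whose ionization potentials are listed."""
    E_N = sum(potentials_eV) / HARTREE_EV     # total coupling energy, a.u.
    N = len(potentials_eV)
    return (2.0 * E_N / (N * Z**2)) ** -0.5

# two-electron tunnelling from Mg (7.65 eV and 15.04 eV):
print(nu_bundle([7.65, 15.04], Z=2))          # ~2.2
```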
Now we consider $`N`$-electron ionization as the removal of an $`N`$-electron “bundle” – a peculiar kind of quasiparticle of mass $`N\mu `$ and charge $`Ne`$. In the region which determines the ionization process, we consider the distances between the electrons in the bundle to be much less than the separation between the atomic core and the center of mass of the bundle. Denoting the distance between the $`i`$-th and $`j`$-th electrons as $`𝒙_{ij}`$, and the position of the center of mass as $`𝑹`$, we write the corresponding inequality:
$$x_{ij}\ll R.$$
(3)
Since the atom–laser radiation interaction is considered in the dipole approximation, the influence of the field on the $`N`$ individual electrons is completely equivalent to its influence on a quasiparticle of charge $`Ne`$ located at the point $`𝑹`$. As for the interaction of this quasiparticle with the Coulomb field of the core, the corresponding relative error is of order $`(x_{ij}/R)^2`$, which is small due to the accepted inequality (3).
For the mathematical description of the considered model, one should solve a problem analogous to that occurring, e.g., in the theory of nuclear $`\alpha `$-decay. This problem is to construct the quasiparticle wave function $`\mathrm{\Psi }_{\{\nu lm\}}^{(N)}(𝑹,\{𝒙_i\})`$ at large distances from the residual system, using the one-particle wave functions of the system in the initial state. Symbols in the braces denote sets of quantum numbers or coordinates of individual particles. To solve this problem we consider the asymptotics of the function $`\mathrm{\Psi }_{\{\nu lm\}}^{(N)}`$ at $`R\to \infty `$, which is a product of the one-electron function asymptotics (1). It is easy to see that the radial dependencies of the functions (1) bring the factor
$$\mathrm{exp}\left(-\frac{NR}{b}\right)\left(\frac{R}{b}\right)^{N(\nu -1)}$$
into the asymptotics of $`\mathrm{\Psi }_{\{\nu lm\}}^{(N)}`$. To obtain the angular dependence, the meaning of the variables $`𝑹,\{𝒙_i\}`$ should be specified in more detail. Since the problem has axial symmetry for the linearly polarized field, the projections of the orbital angular momenta of the non-interacting electrons onto the polarization direction are conserved. So it is convenient to leave the azimuth angles $`\phi _i`$ the same as in the original spherical coordinate system centered at the atomic nucleus. The change of variables will affect only the absolute values $`\{r_i\}`$ and the polar angles $`\{\theta _i\}`$. At $`\theta _i\to 0`$, the behaviour of the Legendre polynomials involved in the spherical functions (1) is determined by
$$P_l^{|m|}(\mathrm{cos}\theta _i)\simeq (-1)^{|m|}\frac{\mathrm{sin}^{|m|}\theta _i}{2^{|m|}|m|!}=(-1)^{|m|}\frac{(r_i^2-r_{iz}^2)^{|m|/2}}{2^{|m|}|m|!r_i^{|m|}}$$
Substituting here $`r_i\simeq R`$, $`r_{iz}\simeq R_z`$ and introducing the parabolic coordinates $`\xi =R+R_z`$, $`\eta =R-R_z`$ for the center of mass of the bundle, the asymptotics of the $`N`$-electron function at $`\xi \gg \eta `$ can be written in the form
$`\mathrm{\Psi }_{\{\nu lm\}}^{(N)}(𝑹,\{𝒙_i\})=B\varphi (\xi ,\eta )\chi (\{r_i,\theta _i\}){\displaystyle \prod _{j=1}^{N}}{\displaystyle \frac{1}{\sqrt{2\pi }}}\mathrm{exp}(\mathrm{i}m_j\phi _j),`$
$`B=a^{-3/2}C_{\nu l}^N\left({\displaystyle \frac{Z}{\nu }}\right)^{3N/2}(2l+1)^N{\displaystyle \prod _{j=1}^{N}}{\displaystyle \frac{(-1)^{|m_j|}}{|m_j|!}}\left[{\displaystyle \frac{(l+|m_j|)!}{(l-|m_j|)!}}\right]^{1/2},`$ (4)
$`\varphi (\xi ,\eta )\propto \mathrm{exp}\left[-{\displaystyle \frac{N(\xi +\eta )}{2b}}\right]\left({\displaystyle \frac{\xi }{2b}}\right)^{N(\nu -1)}\left({\displaystyle \frac{\eta }{\xi }}\right)^{M/2},M={\displaystyle \sum _{j=1}^{N}}|m_j|.`$
Here $`\chi `$ is the unit-normalized wave function of the internal motion of the electrons in the bundle. Note that only $`2(N-1)`$ of the $`2N`$ variables $`\{r_i,\theta _i\}`$ are independent. The function $`\varphi (\xi ,\eta )`$ describes the motion of the center of mass of the bundle.
## 3 Tunnelling probability
The further calculation of the tunnelling probability follows the standard technique , taking into account that the electron bundle has mass $`N\mu `$ and charge $`Ne`$. Substituting the function $`\varphi (\xi ,\eta )`$ from (4) into the Schrödinger equation
$$\frac{\mathrm{d}}{\mathrm{d}\xi }\left(\xi \frac{\mathrm{d}\varphi }{\mathrm{d}\xi }\right)+\left(\beta -\frac{E_NN\mu }{2\hbar ^2}\xi \right)\varphi =0,$$
describing the motion with respect to the parabolic coordinate $`\xi `$ at $`\xi \to \infty `$, we obtain the separation constant of the variables:
$$\beta =\frac{N}{b}\left[N(\nu -1)-\frac{M-1}{2}\right].$$
(5)
The centrifugal potential is neglected since it vanishes rapidly at $`\xi \to \infty `$.
Now we consider the external field $`F(t)`$ to be slowly varying, and use the quasiclassical approximation for the wave function $`\varphi _F(\xi ,\eta )`$ which describes the motion of the center of mass of the bundle in the field. In the sub-barrier domain
$`\varphi _F(\xi ,\eta )=\varkappa (\xi |p(\xi )|/\hbar )^{-1/2}\mathrm{exp}\left(-{\displaystyle \frac{1}{\hbar }}{\displaystyle \int _{\xi _1}^\xi }|p(\xi )|d\xi \right),`$ (6)
$`p(\xi )=\hbar \left(-{\displaystyle \frac{E_NN\mu }{2\hbar ^2}}+{\displaystyle \frac{\beta }{\xi }}+{\displaystyle \frac{1}{4\xi ^2}}+{\displaystyle \frac{N^2e\mu }{4\hbar ^2}}F\xi \right)^{1/2},`$ (7)
where $`\xi _1`$ is the greater root of the equation $`p(\xi )=0`$. Comparing the expression (6) with the function $`\varphi (\xi ,\eta )`$ from (4) at the point $`\xi _0`$ lying in the region
$$\frac{2\hbar ^2\beta }{E_NN\mu }\sim b\ll \xi _0\ll \frac{2E_N}{NeF}=\frac{eZ}{b\nu F},$$
(8)
we obtain the $`\varkappa `$ value:
$$\varkappa (\eta ;\xi _0)\simeq \left(\frac{N\xi _0}{2b}\right)^{1/2}\mathrm{exp}\left(-\frac{1}{\hbar }\int _{\xi _0}^{\xi _1}|p(\xi )|d\xi \right)\varphi (\xi _0,\eta ).$$
(9)
The condition of existence of the region (8) leads to the following restriction on the external field:
$$F\ll F_a\equiv \frac{eZ}{b^2\nu }=\frac{e}{a^2}\left(\frac{Z}{\nu }\right)^3,$$
(10)
which differs from the condition arising in the one-electron tunnelling description only by the definition of $`\nu `$. It should be noted that for $`\nu `$ essentially greater than 1 (which holds, e. g., for Rydberg states) the inequality (10) is replaced by a stronger one:
$$F<\frac{Z^3e}{16\nu ^4a^2},$$
(11)
which is deduced from the condition of existence of the potential barrier .
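For orientation, the two field scales can be evaluated directly: in atomic units $`F_a=(Z/\nu )^3`$, and one atomic unit of field strength is about $`5.14\times 10^9`$ V/cm. The sketch below (with the helium bundle parameters assumed earlier) is illustrative only.

```python
# Field scales of eqs. (10) and (11) in atomic units.
AU_FIELD_V_PER_CM = 5.142e9  # 1 a.u. of field strength in V/cm

def F_a(Z, nu):
    """Atomic field of eq. (10): F_a = (e/a^2)(Z/nu)^3 -> (Z/nu)^3 in a.u."""
    return (Z / nu) ** 3

def F_barrier_limit(Z, nu):
    """Bound of eq. (11): F < Z^3 e/(16 nu^4 a^2) -> Z^3/(16 nu^4) in a.u."""
    return Z ** 3 / (16.0 * nu ** 4)

Z, nu = 2, 1.17  # helium two-electron bundle, as estimated earlier
print(F_a(Z, nu) * AU_FIELD_V_PER_CM)              # ~2.6e10 V/cm
print(F_barrier_limit(Z, nu) * AU_FIELD_V_PER_CM)  # ~1.4e9 V/cm
```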
The formulae (6) and (9) determine the function $`\varphi _F(\xi ,\eta )`$ outside the barrier. Taking into account the inequality (8), its squared absolute value is
$`|\varphi _F(\xi ,\eta )|^2`$ $`=`$ $`{\displaystyle \frac{\hbar N\xi _0}{2b\xi p(\xi )}}\left({\displaystyle \frac{\xi _0}{2b}}\right)^{2N(\nu -1)}\left({\displaystyle \frac{\eta }{\xi _0}}\right)^M`$
$`\times `$ $`\mathrm{exp}\left[-{\displaystyle \frac{N\eta }{b}}-{\displaystyle \frac{16\hbar ^2}{3N^2\mu eF}}\left({\displaystyle \frac{E_NN\mu }{2\hbar ^2}}\right)^{3/2}-\beta \left({\displaystyle \frac{2\hbar ^2}{E_NN\mu }}\right)^{1/2}\mathrm{log}{\displaystyle \frac{NeF\xi _0}{8E_N}}\right].`$
Using (2) and (5), it is easy to see that the dependence on the arbitrary parameter $`\xi _0`$ actually disappears in (12):
$$|\varphi _F(\xi ,\eta )|^2=\frac{\hbar N(\eta /b)^M}{2^M\xi p(\xi )}\left(\frac{2F_a}{F}\right)^{2N(\nu -1)-M+1}\mathrm{exp}\left(-\frac{N\eta }{b}-\frac{2NF_a}{3F}\right).$$
(13)
The ionization probability is determined by the flux of the probability density (13) through a plane perpendicular to the $`z`$-axis :
$$W_{\nu l}^{(N)}(F)=2\pi \int _0^{\infty }v_z|\varphi _F(\xi ,\eta )|^2\rho d\rho ,v_z=\frac{2p(\xi )}{N\mu },\rho =\sqrt{\xi \eta },\mathrm{d}\rho \simeq \frac{1}{2}\sqrt{\frac{\xi }{\eta }}\mathrm{d}\eta .$$
Substituting here the formulae (4) and (13), we obtain:
$`W_{\nu l}^{(N)}(F)`$ $`=`$ $`{\displaystyle \frac{\pi \hbar }{a^2\mu }}{\displaystyle \frac{M!(2l+1)^NC_{\nu l}^{2N}}{2^{M-2}N^{M+1}}}\left({\displaystyle \frac{Z}{\nu }}\right)^{3N-1}{\displaystyle \prod _{j=1}^{N}}{\displaystyle \frac{(l+|m_j|)!}{(|m_j|!)^2(l-|m_j|)!}}`$ (14)
$`\times `$ $`\left({\displaystyle \frac{2F_a}{F}}\right)^{2N(\nu -1)-M+1}\mathrm{exp}\left(-{\displaystyle \frac{2NF_a}{3F}}\right).`$
This formula determines the $`N`$-electron tunnelling probability in a dc field within a factor accounting for the overlap of the wave functions of the electrons remaining in the atom with the wave functions of the same electrons in the initial state. Obviously, this factor cannot exceed 1, and its more accurate evaluation can be performed only numerically. Note that the multiplier $`N`$ in the exponent in (14) by no means gives an exhaustive account of the dependence of this exponent on $`N`$, as was assumed in . Due to the formulae (2), (10), this dependence is significantly more complicated and is determined by the spectrum of the particular atom. We present below (figure 1) a numerical example illustrating this statement.
Now we consider that
$$F(t)=F_0\mathrm{cos}\omega t,$$
(15)
where $`\omega `$ is the laser field frequency. It is a well-known fact that tunnelling in a laser field is possible for small values of the Keldysh parameter
$$\gamma =\frac{\sqrt{2\mu E_1}}{eF}\omega ,$$
where $`E_1`$ is the coupling energy of one electron. Following the technique developed in for a “particle” of mass $`N\mu `$ and charge $`Ne`$, it is easy to see that $`N`$-electron tunnelling is possible for small values of the parameter
$$\gamma _N=\frac{\sqrt{2\mu E_N/N}}{eF}\omega .$$
(16)
Since the coupling energy increases for each subsequent electron, direct $`N`$-electron tunnelling is possible at field values lower than those required for the $`N`$-step tunnelling cascade.
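In atomic units eq. (16) reads $`\gamma _N=\omega \sqrt{2E_N/N}/F`$. A minimal numerical sketch, with an assumed near-infrared frequency and field (these particular numbers are not from the paper):

```python
# Keldysh-type parameter of eq. (16) in atomic units.
import math

def gamma_N(E_N_hartree, N, omega_au, F_au):
    return omega_au * math.sqrt(2.0 * E_N_hartree / N) / F_au

# Assumed illustration: 800 nm light (omega ~ 0.057 a.u.), field F = 0.1 a.u.,
# helium bundle with E_N ~ 2.90 hartree.
print(gamma_N(2.90, N=2, omega_au=0.057, F_au=0.1))  # ~0.97, near the tunnelling regime
```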
Substituting (15) into (14), we average the result over the time interval $`t\in [-\pi /2\omega ,\pi /2\omega ]`$ <sup>1</sup><sup>1</sup>1The values $`t\in [\pi /2\omega ,3\pi /2\omega ]`$ lead to $`F(t)<0`$, and the tunnelling takes place in the direction of the negative $`z`$ semiaxis.. Due to the inequality (10), the arising integral can be calculated using the saddle-point method. Provided the condition (11) is fulfilled, the saddle point is $`t=0`$, and the final formula is:
$`W_{\nu l}^{(N)}(F_0)`$ $`=`$ $`{\displaystyle \frac{\sqrt{3\pi }\hbar }{a^2\mu }}{\displaystyle \frac{M!(2l+1)^NC_{\nu l}^{2N}}{2^{M-3/2}N^{M+3/2}}}\left({\displaystyle \frac{Z}{\nu }}\right)^{3N-1}{\displaystyle \prod _{j=1}^{N}}{\displaystyle \frac{(l+|m_j|)!}{(|m_j|!)^2(l-|m_j|)!}}`$ (17)
$`\times `$ $`\left({\displaystyle \frac{2F_a}{F_0}}\right)^{2N(\nu -1)-M+1/2}\mathrm{exp}\left(-{\displaystyle \frac{2NF_a}{3F_0}}\right).`$
Remember that the dependence of the exponent in (17) on $`N`$ is not reduced to the factor $`N`$ written explicitly.
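Since the field dependence of (17) is dominated by its last line, a small sketch evaluating that factor alone may be useful; the parameter values below are assumed for illustration and are not taken from figure 1.

```python
# Field-dependent part of the cycle-averaged rate, eq. (17):
# (2*Fa/F0)**(2*N*(nu-1) - M + 1/2) * exp(-2*N*Fa/(3*F0)).
import math

def field_factor(F0, Fa, N, nu, M):
    power = 2.0 * N * (nu - 1.0) - M + 0.5
    return (2.0 * Fa / F0) ** power * math.exp(-2.0 * N * Fa / (3.0 * F0))

# Assumed illustration: helium bundle (N=2, nu=1.17, all m_j = 0 so M = 0)
# with Fa ~ 5.0 a.u. and peak fields F0 = 0.1-0.3 a.u.
for F0 in (0.1, 0.2, 0.3):
    print(F0, field_factor(F0, Fa=5.0, N=2, nu=1.17, M=0))
```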
## 4 Numerical examples
Unfortunately, the obtained formulae cannot be immediately related to experiment because, along with the direct formation of $`N`$-fold ions, there are a number of cascade processes, as well as other ionization mechanisms due to inelastic collisions of electrons and ions . To relate the theory to experiment, the corresponding kinetic equations must be solved, which should be the subject of a separate work. So only some illustrative examples are considered in this section.
Figure 1 presents the ratio of the probabilities of 3-fold ion formation in the noble gases resulting from two 2-step cascade processes: $`\mathrm{A}\to \mathrm{A}^+\to \mathrm{A}^{3+}`$ and $`\mathrm{A}\to \mathrm{A}^{2+}\to \mathrm{A}^{3+}`$. These probabilities are denoted as $`W(1;2)`$ and $`W(2;1)`$ correspondingly. They have a similar dependence on the laser pulse duration. As can be seen, the ratio $`W(1;2)/W(2;1)`$ is not equal to 1, contrary to what follows from the results of reference .
The following result seems curious. The 2-electron tunnelling probabilities for neutral atoms can be greater than the one-electron tunnelling probabilities in the corresponding singly charged ions. E. g., for the Ar atom the 2-electron tunnelling probability exceeds the 1-electron process probability for the $`\mathrm{Ar}^+`$ ion at intensities $`I>10^{14.88}\mathrm{W}/\mathrm{cm}^2`$. The same result takes place for Kr at $`I>10^{14.76}\mathrm{W}/\mathrm{cm}^2`$, and for Xe at $`I>10^{14.34}\mathrm{W}/\mathrm{cm}^2`$. At the same time, for the light noble gas atoms He and Ne, the probabilities of one-electron tunnelling in singly charged ions are approximately two orders of magnitude greater than the probabilities of the two-electron process in the corresponding neutral atoms at $`I\sim 10^{15}\mathrm{W}/\mathrm{cm}^2`$. These facts show the wide range of experimental situations arising in the multiphoton tunnelling effect.
This work was stimulated by the report . The author is grateful to Professor W. Sandner for his interest in this work, and to the WE–Heraeus-Stiftung for the opportunity to participate in the seminar. I also express my deep gratitude to Professor N. B. Delone and to the participants of his seminar at IOF RAN for helpful discussions. This work was partially supported by the Russian Foundation for Basic Research (grant no. 97-02-18035).
# Gamma-ray Bursts Produced by Mirror Stars

Presented at the XXVII ITEP Winter School, Snegiri, Feb. 16 – 24, 1999; Proceedings of the School to be published by Gordon & Breach.
## 1 Introduction
The spectacular discovery of GRB afterglows made it possible to measure the redshift, and hence the distance, of some of them. The energy output of up to $`3.4\times 10^{54}`$ ergs $`\simeq 1.9M_{\odot }c^2`$ for GRB990123 (Kulkarni et al. 1999) poses extremely hard questions to theorists who try to explain these superpowerful events. Even if beaming is invoked, it can reduce the energy budget by perhaps two orders of magnitude, but this is still too high for conventional models.
The extraordinary situation requires a revolutionary approach to the modeling of GRBs. In this Lecture I suggest a scenario which may seem a bizarre one at first glance, but in fact it has a reasonable theoretical basis, and observational evidence in favour of this scenario is ever growing: I believe that, observing GRBs at cosmological distances, we are witnessing catastrophic deaths of stars made of the so called “mirror” matter.
## 2 Problems in GRB modeling
For general recent reviews on GRBs see e.g. Piran (1998a,b), Tavani (1998) and Postnov (1999).
It is well known that assuming high values of the Lorentz factor $`\mathrm{\Gamma }`$ of the GRB ejecta is necessary to solve the compactness problem (Guilbert, Fabian & Rees 1983, Paczyński 1986, Goodman 1986, Krolik & Pier 1991, Rees & Mészáros 1992, Piran 1996). The typical time-scale of the variability of the gamma-ray emission, $`\mathrm{\Delta }t\sim 10^{-2}`$ seconds, implies a size of the emitting region $`R<c\mathrm{\Delta }t`$ as small as $`10^3`$ km. The enormous number of gamma photons in such a small volume should produce electron-positron pairs which make the emitting region optically thick. This conflicts with the observed nonthermal spectra unless one supposes that the emitting region moves towards the observer at a relativistic speed with Lorentz factor $`\mathrm{\Gamma }`$; then its size would be $`\mathrm{\Gamma }^2c\mathrm{\Delta }t`$, and the optical depth correspondingly smaller. The low optical depth and the ultrarelativistic motion require that the fireball should be very clean (not heavily contaminated with baryons), yet the models suggested up to now produce ‘dirty’ fireballs.
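The arithmetic of the compactness argument is elementary and can be checked directly (the Lorentz factor below is an assumed illustrative value):

```python
# Compactness estimate: the variability dt bounds the emitter size by
# R < c*dt for a source at rest, relaxed to ~ Gamma^2 * c * dt for bulk motion.
c = 3.0e10           # cm/s

dt = 1.0e-2          # s, typical variability time-scale quoted above
R_static = c * dt
print(R_static / 1.0e5, "km")             # ~3e3 km

Gamma = 100.0        # assumed illustrative Lorentz factor
print(Gamma**2 * R_static / 1.0e5, "km")  # effective size ~3e7 km
```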
E.g., the possibility that a GRB appears during a bare core collapse in a binary system was suggested by Dar et al. (1992). The latter model assumed a GRB to be a result of neutrino-antineutrino pair creation and annihilation (Goodman et al. 1987) during the accretion-induced collapse of a white dwarf in a close binary system. Although the idea of neutrino annihilation is very compelling for producing GRBs, the model should be rejected on the grounds of being too contaminated by baryonic load, see e.g. Woosley (1993).
Another plausible way of forming GRBs at cosmological distances involves binary neutron star merging (originally proposed by Blinnikov et al. 1984; see more recent references and statistical arguments in favour of this model in Lipunov et al. 1995). However, as detailed hydrodynamical calculations currently demonstrate, this mechanism also fails to produce powerful clean fireballs (Janka and Ruffert 1996, Ruffert et al. 1997). On GRB models with a moderately high baryon load see Woosley (1993), Ruffert & Janka (1998), Kluźniak & Ruderman (1998), Fuller & Shi (1998), Fryer & Woosley (1998), Popham, Woosley, & Fryer (1998). Vietri & Stella (1998) and Spruit (1999) suggest models that probably have a small contamination, but it is unlikely that the huge energy required by the most recent GRB observations can be derived from them.
A very interesting idea was put forward by Kluźniak (1998). He suggested that the ordinary neutrinos can oscillate into sterile ones, travel out to regions relatively free of baryons, and then convert back into ordinary neutrinos. For this model the difficulty is the same: if the oscillation length is too short, then the baryon contamination is unavoidable. If it is too long, then only a very small number of neutrinos will annihilate.
Here I point out the possibility of dramatically extending the latter model. Sterile neutrinos are naturally produced by the mirror matter during collapses or mergers of mirror stars, made of mirror baryons. If they oscillate into ordinary neutrinos, they do this in space practically free of ordinary baryons and can give birth to a powerful gamma-ray burst.
## 3 The concept of mirror matter
The concept of the mirror particles stems from the idea of Lee & Yang (1956), who suggested the existence of new particles with the reversed sign of the mirror asymmetry observed in our world. Lee and Yang believed that the new particles (whose masses are degenerate with the masses of ordinary particles) could participate in the ordinary interactions. Later, Kobzarev, Okun & Pomeranchuk (1966) showed that this conjecture was not correct, and that the ordinary strong, weak and electromagnetic interactions are forbidden for the mirror particles by experimental evidence; only gravity and super-weak interactions are allowed for their coupling to the ordinary matter. But if they really mirror the properties of ordinary particles, this means that there must exist mirror photons, gluons etc., coupling the mirror fermions to each other, like in our world. Thus the possibility of the existence of the mirror world was postulated first by Kobzarev, Okun & Pomeranchuk (1966), and the term “mirror” was coined in that paper. The particle mass pattern and particle interactions in the mirror world are quite analogous to those in our world, but the two worlds interact with each other essentially through gravity only.
Later the idea was developed in a number of papers, e.g. Okun (1980), Blinnikov & Khlopov (1983), and interest in it has revived recently in attempts to explain all the puzzles of neutrino observations: Foot & Volkas (1995), Berezhiani & Mohapatra (1995), Berezhiani et al. (1996), Berezhiani (1996). It is shown in the cited papers that a world of mirror particles can coexist with our, visible, world, and some effects that should be observed are discussed.
It was shown by Blinnikov & Khlopov (1983) that ordinary and mirror matter are most likely well mixed on the scale of galaxies, but not in stars, because of different thermal or gasdynamic processes like SN shock waves which induce star formation. It was predicted that star counts by HST must reveal a deficit of local luminous matter if the mirror stars do really exist in numbers comparable to ordinary stars and contribute to the gravitational potential of the galactic disk. Recent HST results (Gould et al. 1997) show the reality of the luminous matter deficit: e.g., instead of 500 stars expected from the Salpeter mass function in the HST fields investigated for the range of absolute visual magnitudes $`14.5<M_V<18.5`$, only 25 are actually detected. It is found that the Salpeter slope does not continue down to the hydrogen-burning limit but has a maximum near $`M\sim 0.6M_{\odot }`$, so lower mass stars do not contribute much to the total luminous mass, as was thought previously. The total column density of the galactic disk, $`\mathrm{\Sigma }\sim 40M_{\odot }\mathrm{pc}^{-2}`$, is a factor of two lower than published estimates of the dynamical mass of the disk (Gould et al. 1997). It should be remembered that here we discuss a contribution of invisible stars to the gravity of the galactic disk, which has more to do with the local Oort limit (see e.g. Oort 1958) than with the halo dark matter. Other references on the subject can be found in Mohapatra & Teplitz (1999).
Okun (1980), Blinnikov & Khlopov (1983) and Berezhiani (1996) have pointed out that mirror objects can be observed by the effect of gravitational lensing. After the discovery of MACHO microlensing events, I discussed their interpretation as mirror stars at the Atami meeting in 1996 (Blinnikov 1998). Recently, this interpretation has been developed by Foot (1999) and Mohapatra & Teplitz (1999).
The mirror world that interacts with ordinary matter exclusively via gravity follows quite naturally from some models in superstring theory (closed strings), but those models are too poor to be useful in our problem. Especially interesting for explaining GRBs are the models that predict the existence of a light sterile neutrino that can oscillate into an ordinary one. The development of the idea can be traced through the following references.
Foot et al. (1991) showed that the mirror symmetry is compatible with the standard model of particle physics. Here it was assumed that the neutrinos are massless, and it was shown that there are only two possible ways, in addition to gravity, that the mirror particles can interact with the ordinary ones, i.e. through photon-mirror photon mixing \[this had already been discussed independently and earlier (in a slightly different context) by Glashow (1985)\] and through Higgs-mirror Higgs mixing.
In the next paper, Foot et al. (1992) showed that if neutrinos have mass then the mirror idea can be tested by experiments searching for neutrino oscillations and can explain the solar neutrino problem. The same idea can also explain the atmospheric neutrino anomaly (recently confirmed by SuperKamiokande data), which suggests that the muon neutrino is maximally mixed with another species. Parity symmetry suggests that each of the three known neutrinos is maximally mixed with its mirror partner (if neutrinos have mass). This was pointed out by Foot (1994). Finally, the idea is also compatible with the LSND experiment, which suggests that the muon and electron neutrinos oscillate with small angles with each other, see Foot & Volkas (1995). Berezhiani & Mohapatra (1995) extended the latter work to a slightly different model with parity symmetry spontaneously broken. In this model the mirror particles have masses on all scales differing by a common factor from the masses of their ordinary counterparts.
## 4 The GRB model
Now I am ready to formulate very briefly the scenario of my model.
If the properties of mirror matter are very similar to the properties of particles of the visible world, then events like neutron star mergers, failed supernovae (with a collapse to a rotating black hole) etc. must occur in the mirror world. These events can easily produce sterile (for us) neutrino bursts with energies up to $`10^{54}`$ ergs, and the duration and beaming of the mirror neutrinos arise naturally as in the standard references given above. Neutrino oscillations then take place which transform them at least partly into ordinary neutrinos, but without the presence of large amounts of visible baryons. Some number of ordinary baryons is needed, like $`10^{-5}M_{\odot }`$ (Piran 1998b), for producing standard afterglows etc. This number is easily accreted by mirror stars during their life from the uniform ordinary interstellar matter (cf. Blinnikov and Khlopov 1983). The oscillation length required in this scenario must be less than the size of the system (10 – 100 km) multiplied by the number of scatterings of the mirror neutrinos in the body of the mirror neutron star, $`\sim 10^5`$.
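The resulting bound is easy to evaluate; the following order-of-magnitude sketch uses only the numbers quoted above.

```python
# Bound on the oscillation length: the mirror neutrinos scatter ~1e5 times
# in the mirror neutron star, so the available path is roughly
# (system size) x (number of scatterings).
n_scatter = 1.0e5

for size_km in (10.0, 100.0):         # source size range quoted in the text
    print(size_km * n_scatter, "km")  # L_osc must be below ~1e6 - 1e7 km
```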
A variety of properties of GRBs can be explained as suggested by Kluźniak & Ruderman (1998) for ordinary matter.
Taking into account the magnetic moment of standard neutrinos can help in producing a larger variety of GRB variability due to neutrino interaction with the turbulent magnetic field inevitably generated in the fireball. This is good for temporal features similar to the observed fractal or scale-invariant properties found in gamma-ray light curves of GRBs (Shakura et al. 1994; Stern and Svensson 1996). Another extension of the model is possible if heavier neutrinos can decay into lighter ones, producing photons directly (see e.g. Jaffe & Turner 1997).
## 5 Conclusion: arguments in favour of mirror matter models
Summarizing, here are the arguments in favour of the proposed scenario.
1. The mirror matter is aesthetically appealing, because it restores the parity symmetry of the world (at least partly).
2. It allows one to explain the neutrino anomalies.
3. It explains the missing mass in the Galactic disk, and in some models the dark matter in general.
4. It explains the MACHO microlensing events.
5. For GRBs it provides a model with a low baryon load.
6. The available baryon load, on the scale of the mass of a small planet, is exactly what is needed for fireball models.
7. All host galaxies of GRB optical transients (OTs) are strange ones. This may be an indication of the gravitational interaction of the ordinary galaxy with the mirror one in which it can be immersed.
Acknowledgements. I am very grateful to Lev Okun, Konstantin Postnov, Ilya Tipunin, Mikhail Prokhorov, Darja Cosenco, Aleksandra Kozyreva, Elena Sorokina for stimulating discussions and assistance, and to Robert Foot for interesting correspondence.
# A note on the thermal component of the equation of state of solids
(Letter to the Editor)
Vladan Celebonovic
Institute of Physics,Pregrevica 118,11080 Zemun-Beograd,Yugoslavia
(received: 15 November 1990)

reference: Earth, Moon and Planets, 54, pp. 145–149 (1991).
Abstract: A simple method for determining the thermal component of the EOS of solids under high pressure is proposed. Application to the interior of the Earth gives results in agreement with recent geophysical data.
1. Introduction
Imagine a specimen of any chemical substance. In the case of a thermomechanical system, its equation of state (EOS) will have the general form $`f(p,V,T)=0`$, where $`p,V,T`$ denote, respectively, the pressure to which the system is subjected, its volume and its temperature.
Proposing a realistic EOS for a given class of physical systems is, generally speaking, a highly non-trivial problem (see, for instance, Eliezer et al., 1986; Schatzman and Praderie, 1990 for details). One usually starts from an assumed interparticle potential, determines the form of the Hamiltonian, and from it the thermodynamical functions and the EOS. In order to render the calculations more tractable, the EOS is usually determined in the form of isotherms in the $`p`$–$`V`$ plane. The thermal component is often introduced at a later stage (for example, Holian, 1986; Eliezer et al., 1986).
In laboratory studies, the EOS of various systems can be determined experimentally (examples of recent reviews concerning the subject are Jayaraman, 1986; Jeanloz, 1989; Drickamer, 1990). The situation is much more complicated in astrophysics, because planetary and stellar structure is inaccessible to direct experiments. What is observable, however, are the consequences, on the surfaces and in the vicinity of these objects, of the processes occurring in their interiors. Progress in the understanding of the Earth’s interior has, for example, been provoked by a combination of observations in geophysics and seismology with laboratory studies of the relevant materials performed under high pressure and high temperature (Jeanloz, 1990a).
The purpose of this letter is to propose a simple method for the determination of the thermal component of the EOS of solids subjected to high pressure. Details of the calculations, and an application to the interior of the Earth, are presented in the following section, while the third part contains a discussion of various factors influencing our results, as well as the possibilities of their improvement. Physically, our calculations are based on combining known results of solid-state physics with a particular semiclassical theory of dense matter proposed by Savic and Kasanin (1962/65; for a recent application, see Celebonovic, 1990b, and references given there). Compared with recent work on the subject (such as Renero et al., 1990; Kumari and Dass, 1990a,b), the approach proposed in this letter has the advantage of physical and mathematical simplicity, but the disadvantage of being applicable only to solids.
2. Calculations
One of the general characteristics of solid bodies is that the atoms (or ions) in them perform small vibrations about their equilibrium positions. Let $`N`$ be the number of molecules in the body, and denote by $`\nu `$ the number of atoms per molecule. The total number of atoms in the system is $`N\nu `$ and the number of vibrational degrees of freedom is $`3N\nu `$ (strictly speaking, it is $`3N\nu -6`$, but $`3N\nu \gg 6`$).
A system of $`3N\nu `$ mutually independent vibrational degrees of freedom is equivalent to an ensemble of independent oscillators. The free energy of such an ensemble can be expressed as
$$F=Nϵ_0+k_BT\sum _\alpha \mathrm{ln}(1-\mathrm{exp}(-\beta \hbar \omega _\alpha ))$$
(1)
where the first term on the right side represents the interaction energy of all the atoms in the system in the equilibrium state, and the summation is carried over all the normal vibrations, indexed by $`\alpha `$.
At low temperatures, only the low frequency terms (i.e., sound waves) with $`k_BT\gtrsim \hbar \omega _\alpha `$ give an important contribution to eq. (1). It can be shown (for example, Landau and Lifchitz, 1959) that the number of vibrations $`dN`$ per interval of frequency $`d\omega `$ is
$$\frac{dN}{d\omega }=\frac{3V\omega ^2}{2\pi ^2\overline{u}^3}$$
(2)
The volume of the system is denoted by $`V`$, and $`\overline{u}`$ is the mean velocity of sound. Changing from summation to integration in eq. (1), one finally obtains
$$F=Nϵ_0+\frac{3k_BTV}{2\pi ^2\overline{u}^3}\int _0^{\infty }\omega ^2\mathrm{ln}(1-\mathrm{exp}(-\hbar \omega /k_BT))d\omega $$
(3)
If we apply the standard thermodynamical identity $`E=F-T\frac{\partial F}{\partial T}`$ it follows from eq. (3) that the energy of the system is
$$E=Nϵ_0+V\pi ^2(k_BT)^4/10(\hbar \overline{u})^3$$
(4)
The mean velocity of sound can be approximated by the Bohm-Staver formula
$$\overline{u}^2=(1/3)\left(Zm/M\right)v_F^2$$
(5)
(Bohm and Staver, 1951; Ashcroft and Mermin, 1976), where $`m`$ and $`M`$ denote, respectively, the masses of the electrons and ions in the solid, $`v_F`$ is the Fermi velocity and $`Z`$ is the charge of the ions.
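As a numerical illustration (the input values below are assumed, not taken from the letter), eq. (5) may be combined with the free-electron expression for $`v_F`$:

```python
# Bohm-Staver estimate, eq. (5): u^2 = (1/3)(Z m / M) v_F^2, with v_F taken
# from the free-electron gas. CGS units; all input values are assumed.
import math

HBAR = 1.0546e-27  # erg s
M_E = 9.109e-28    # g, electron mass
M_U = 1.6605e-24   # g, atomic mass unit

def v_fermi(n_e):
    """Free-electron Fermi velocity for an electron density n_e in cm^-3."""
    return HBAR * (3.0 * math.pi**2 * n_e) ** (1.0 / 3.0) / M_E

def u_bohm_staver(Z, A, n_e):
    return math.sqrt(Z * M_E / (3.0 * A * M_U)) * v_fermi(n_e)

# e.g. an iron-like medium with Z = 3, A = 56 and n_e ~ 4e23 cm^-3:
print(u_bohm_staver(3, 56, 4.0e23))  # ~8e5 cm/s
```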
A detailed discussion of the basic ideas of the theory of Savic and Kasanin (the SK theory, for short) has recently been published (Celebonovic, 1989d). Within the framework of this theory, the energy per unit volume of a solid is given by
$$E=2e^2Z(N_A\rho /A)^{4/3}$$
(6)
where $`A`$ is the mass number and $`N_A`$ denotes Avogadro’s constant.
Using equations (4)-(6), after some algebra one obtains the following expression for the temperature of a solid as a function of its density $`\rho `$:
$$T=1.4217\times 10^5(\rho /A)^{7/12}(m/M)^{3/8}Z^{7/8}$$
(7)
The thermal component of the EOS of a solid can be obtained by multiplying this result by the density (expressed in suitable units).
As an astrophysical test, eq. (7) was applied to the model of the Earth discussed previously within the SK theory (Savic, 1981 and earlier work). The following results were obtained:
TABLE I

Depth (km)            0–39      39–2900      2900–4980      4980–6371
$`\rho _{\mathrm{max}}`$ (g cm⁻³)      3.0      6.0      12.0      19.74
$`Z`$      2      3      3      4
$`T_{\mathrm{max}}`$ (K)      1300      2700      4100      7000

where the values of the temperatures have been rounded to the nearest $`\pm 100`$ K.
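Eq. (7) can be checked against Table I directly. Since the mean mass number $`A`$ of each layer is not tabulated in this letter, the sketch below simply assumes $`A=25`$ for all layers; with that assumption it reproduces the tabulated temperatures to within roughly ten percent.

```python
# Eq. (7): T = 1.4217e5 * (rho/A)**(7/12) * (m/M)**(3/8) * Z**(7/8),
# with rho in g/cm^3 and the ion mass M = A * m_u. The value A = 25 is an
# assumption made here purely for illustration.
EL_TO_PROTON = 1.0 / 1836.15  # electron-to-proton mass ratio

def T_thermal(rho, A, Z):
    m_over_M = EL_TO_PROTON / A
    return 1.4217e5 * (rho / A) ** (7.0 / 12.0) * m_over_M ** (3.0 / 8.0) * Z ** (7.0 / 8.0)

for rho, Z in ((3.0, 2), (6.0, 3), (12.0, 3), (19.74, 4)):
    print(rho, Z, round(T_thermal(rho, A=25.0, Z=Z)))  # ~1350, 2900, 4340, 7460 K
```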
3. Discussion
The distribution of temperature with depth within the Earth, or any other celestial body, is not directly measurable. In order to draw conclusions about the thermodynamics of our planet’s interior, geophysicists are bound to combine the examination of rock samples from the outer $`\sim `$ 200 km of the Earth with high pressure-high temperature experiments (Jeanloz, 1990a). It has thus been shown, for example, that $`T=(4500\pm 500)K`$ at the base of the mantle, and that $`T=(6900\pm 1000)K`$ in the center (Jeanloz, 1990a).
These experiments were performed on materials known to exist in a thin layer beneath the Earth’s surface, and assumed to represent its composition in bulk. One could conjecture that, such an assumption not being directly verifiable by experiments, it has a strong influence on the results. Some influence it certainly has, but it can not be extremely important. Namely, it is possible within the SK theory to determine the bulk chemical composition of an astronomical object (i.e., the mean value of the mass number of the mixture of materials that it is made of); the only input data needed for such a calculation are the mass and radius of the object. It turns out that the value of $`A`$ obtained in this way corresponds closely to the value of $`A`$ for the materials used in experiments.
The calculations discussed in this letter are strongly influenced by a combination of several factors from the domain of solid-state physics.
The upper limit of integration in eq. (3) is infinite, which simplifies the mathematics but renders the physics unrealistic - there are no infinite frequencies. The difficulty could, at first sight, be circumvented by introducing a suitable cut-off frequency, in the spirit of Debye’s model. However, one would then encounter the following obstacle: the density dependence of the cut-off frequency (and the corresponding cut-off temperature). Solving this problem demands a knowledge of the elastic constants and inter-particle potentials of the material (a solution for some types of crystal lattices within Debye’s model is given in de Launay, 1953, 1954).
It is clear that attempting to perform such a calculation for the case of the interior of the Earth would quickly render the results questionable, due to the accumulation of various approximations. Another ”solid-state” factor influencing our results is the method of calculation of the speed of sound. The use of the Bohm-Staver formula amounts to taking into account only the main term in the calculation of the band-structure energy of a solid (see Harrison, 1989 for details), which is, in turn, used in determining the value of the velocity of sound. The formula is known to give qualitatively correct results when applied to materials under standard conditions, but its applicability can be expected to increase for materials subjected to high pressure.
Finally, a few words about one more ”influential” factor. It could be questioned whether it is correct to compare equations which do not contain parameters of the same physical nature: equation (4) contains the temperature, while eq. (6) does not. It is true that the SK theory has been developed for the case $`T=0`$ K. However, this should be understood just as a simplifying assumption, whose physical meaning is that this theory is applicable to materials at small temperatures and subjected to high pressure, for which $`ϵ_F/k_BT\gg 1`$ (see Eliezer et al., 1986 for details).
4. Conclusions
In this letter we have discussed a simple method for determining the thermal component of the EOS of solids under high pressure, thus correcting a small error made in a similar discussion (Celebonovic, 1982). Application to the interior of the Earth gives results in good agreement with geophysical data. The possible influence of several factors on the values obtained has been described in some detail. Using this method in laboratory high pressure work would necessitate an improvement in the calculation of the velocity of sound, and the introduction of a suitable density-dependent cut-off frequency in equation (3).
References
Ashcroft, N. W. and Mermin, D. N.: 1976, Solid State Physics, Holt, Rinehart and Winston, London.
Bohm, D. and Staver, T.: 1951, Phys. Rev., 84, 836.
Celebonovic, V.: 1982, in W. Fricke and G. Teleki (eds.), Sun and Planetary System, D. Reidel Publ. Comp., Dordrecht, Holland.
Celebonovic, V.: 1989d, Earth, Moon and Planets, 45, 291.
Celebonovic, V.: 1990b, High Pressure Research, 5, 693.
de Launay, J.: 1953, J. Chem. Phys., 21, 1975.
de Launay, J.: 1954, J. Chem. Phys., 22, 1676.
Drickamer, H. G.: 1990, Ann. Rev. Mater. Sci., 20, 1.
Eliezer, S., Ghatak, A. and Hora, H.: 1986, An Introduction to Equations of State: Theory and Applications, Cambridge University Press, Cambridge, UK.
Harrison, W. A.: 1989, Electronic Structure and the Properties of Solids, Dover Publications Inc., New York.
Holian, K. S.: 1986, J. Appl. Phys., 59, 149.
Jayaraman, A.: 1986, Rev. Sci. Instr., 57, 1013.
Jeanloz, R.: 1989, Ann. Rev. Phys. Chem., 40, 237.
Jeanloz, R.: 1990a, Ann. Rev. Earth Planet. Sci., 18, 357.
Jeanloz, R.: 1990b, preprint, to appear in Gibbs Symposium Proceedings.
Kumari, M. and Dass, N.: 1990a, J. Phys.: Condens. Matt., 2, 3219.
Kumari, M. and Dass, N.: 1990b, ibid, 7891.
Landau, L. and Lifchitz, E.: 1959, Statistical Physics, Pergamon Press, Oxford.
Renero, C., Prieto, F. E. and de Icaza, M.: 1990, J. Phys.: Condens. Matt., 2, 295.
Savic, P. and Kasanin, R.: 1962/65, The Behaviour of Materials Under High Pressure I-IV, Ed. SANU, Beograd.
Savic, P.: 1981, Adv. Space Res., 1, 131.
Schatzman, E. and Praderie, F.: 1990, Astrophysique: Les Etoiles, InterEditions/Editions du CNRS, Paris.
# Crystallization of a classical two-dimensional electron system: Positional and orientational orders
## Abstract
Crystallization of a classical two-dimensional one-component plasma (electrons interacting with the Coulomb repulsion in a uniform neutralizing positive background) is investigated with a molecular-dynamics simulation. The positional and the orientational correlation functions are calculated, to the best of our knowledge, for the first time. We have found an indication that the solid phase has a quasi-long-range (power-law) positional order along with a long-range orientational order. This indicates that, although the long-range Coulomb interaction is outside the scope of Mermin’s theorem, the absence of ordinary crystalline order at finite temperatures applies to the electron system as well. The ‘hexatic’ phase, which is predicted between the liquid and the solid phases by the Kosterlitz-Thouless-Halperin-Nelson-Young theory, is also discussed.
Wigner pointed out in the 1930’s that an electron system should crystallize due to the Coulomb repulsion for low enough densities. Although quantum effects are essential, the concept of electron crystallization can be generalized to classical cases. In fact, Grimes and Adams succeeded in observing a liquid-to-solid transition in a classical two-dimensional (2D) electron system on a liquid-helium surface in 1979. In this system electrons obey classical statistics because the Fermi energy is much smaller than $`k_BT`$. The thermodynamic properties of the classical electron system are wholly determined by the dimensionless coupling constant $`\mathrm{\Gamma }`$, the ratio of the Coulomb energy to the kinetic energy. Here $`\mathrm{\Gamma }\equiv (e^2/4\pi ϵa)/k_BT`$, where $`a=(\pi n)^{-1/2}`$ is the mean distance between the electrons, $`n`$ the density of electrons, $`e`$ the charge of an electron, and $`ϵ`$ is the dielectric constant of the substrate. For $`\mathrm{\Gamma }\ll 1`$ the system will behave as a gas, while for $`\mathrm{\Gamma }\gg 1`$ as a solid. Grimes and Adams found that the phase transition occurs at $`\mathrm{\Gamma }_c=137\pm 15`$.
On the other hand, Mermin proved rigorously, more than thirty years ago, that no true long-range crystalline orders are possible in the thermodynamic limit at finite temperatures in 2D. In the proof, short-range interactions are assumed, while the $`1/r`$ Coulomb interaction is too long-ranged to apply Mermin’s arguments (see Ref. for more precise mathematical conditions). Although there have been some theoretical attempts to extend the theorem to the long-range Coulomb case, no rigorous proof is attained up to now. Numerically, Gann et al. investigated the problem with a Monte Carlo method by calculating the root-mean-square displacement. However they found it difficult to rule out the possibility that the root-mean-square displacement approaches a constant value in the thermodynamic limit, which is a conventional definition of a solid.
Another intriguing problem concerning 2D systems is the melting mechanism, where the Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) theory has predicted the existence of the ‘hexatic’ phase between the liquid and the solid phases (see Refs. for reviews). The hexatic phase is characterized by a short-range positional order and a quasi-long-range orientational order. Numerical calculations have been carried out by several authors with a molecular-dynamics (MD) method. Most of the authors have obtained the melting point in good agreement with the Grimes-Adams experiment. However the results are rather controversial on the verification of the hexatic phase. Morf concluded, from a MD result for the shear modulus, that the result agrees well with the KTHNY theory. On the other hand, Kalia et al. observed hysteresis in the temperature dependence of the total energy in their MD simulation to conclude that the melting is a first-order phase transition, which is incompatible with the KTHNY theory. The Monte Carlo study by Gann et al. is also indicative of a first-order phase transition.
In addressing both of the above problems, i.e., the range of the ordering in the solid phase and the nature of the melting, the most direct way is to investigate the positional and the orientational correlation functions, since the phases should be characterized in terms of them. This is exactly the purpose of the present paper, which is done, to the best of our knowledge, for this system for the first time.
In order to accurately incorporate the temperature, we have employed Nosé-Hoover’s canonical MD method for the classical 2D electron system, while the previous calculations were done for the micro-canonical ensemble. Electrons are confined to a rectangle with a rigid uniform neutralizing positive background. We impose periodic boundary conditions. An electron then interacts with infinite arrays of the periodic images with the long-range $`1/r`$ potential. We employ the Ewald summation method to take care of this. The rectangle is chosen to be close to a square to minimize surface effects. The aspect ratio of the rectangle is taken to be $`L_y/L_x=2/\sqrt{3}`$, which can accommodate a perfect triangular lattice with $`N=4M^2`$ ($`M`$: an integer) particles. The equations of motion are integrated numerically with Gear’s predictor-corrector algorithm. We adopt a time step of $`1.0\times 10^{-12}`$ sec, which guarantees six-digit accuracy in the energy conservation after several tens of thousands of steps.
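A minimal sketch (assumed details, not the authors' code) of such an initial configuration, i.e. a perfect triangular lattice of $`N=4M^2`$ sites in a periodic rectangle with $`L_y/L_x=2/\sqrt{3}`$:

```python
# Triangular lattice of N = 4*M^2 sites: columns spaced sqrt(3)*a/2 in x,
# sites spaced a in y, alternate columns shifted by a/2. The resulting box
# has aspect ratio L_y/L_x = 2/sqrt(3) and is compatible with periodic
# boundary conditions.
import math

def triangular_lattice(M, a=1.0):
    nx = ny = 2 * M
    L_x = nx * math.sqrt(3.0) * a / 2.0
    L_y = ny * a
    pts = []
    for i in range(nx):
        shift = 0.5 * a if i % 2 else 0.0
        for j in range(ny):
            pts.append((i * math.sqrt(3.0) * a / 2.0, (j * a + shift) % L_y))
    return pts, (L_x, L_y)

pts, box = triangular_lattice(M=15)  # N = 900, as in the runs reported here
print(len(pts), box[1] / box[0])     # 900, 1.1547... = 2/sqrt(3)
```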
We have performed the simulation both from a typical liquid ($`\mathrm{\Gamma }=60`$) and from a typical solid ($`\mathrm{\Gamma }=200`$). The initial conditions are set as follows: The electrons are placed randomly in the liquid phase or placed at the perfect triangular lattice points in the solid phase. The velocities of the electrons are assigned according to the Maxwell-Boltzmann distribution in either case. The lowest energy configurations are sought with a simulated annealing method. Namely, the positions and the velocities of the electrons are updated for a certain time interval. Once a thermal equilibrium sets in, the temperature is raised or lowered by a small amount. The latest positions and velocities are used as the initial conditions for the simulation at the new temperature. This procedure temporarily puts the system out of equilibrium, but updating the positions and velocities for a certain time interval equilibrates the system. This annealing process is repeated from a liquid phase to a solid phase (or vice versa) across the transition. We take care that the system is well equilibrated, especially near and after the transition, by allowing large numbers of time steps. The results presented in this paper are for $`N=900`$ electrons with MD runs from 30,000 to 110,000 time steps for each value of $`\mathrm{\Gamma }`$. The correlation functions are calculated for the last 20,000 time steps.
Following Cha and Fertig, we define the positional and the orientational correlation functions from which we identify the order in each phase. First, the positional correlation function is defined by
$`C(r)`$ $`\equiv `$ $`\langle \rho _𝐆^{}(𝐫)\rho _𝐆(\mathrm{𝟎})\rangle `$ (1)
$`=`$ $`{\displaystyle \frac{{\displaystyle \sum _{i,j}}\delta (r-|𝐫_i-𝐫_j|)\frac{1}{6}{\displaystyle \sum _𝐆}e^{\mathrm{i}𝐆\cdot (𝐫_i-𝐫_j)}}{{\displaystyle \sum _{i,j}}\delta (r-|𝐫_i-𝐫_j|)}},`$ (2)
where $`𝐆`$ is a reciprocal vector of the triangular lattice, and $`\rho _𝐆(𝐫)=\mathrm{exp}(\mathrm{i}𝐆\cdot 𝐫)`$. The angular brackets in Eq. (1) stand for both the summation over particles and the thermal average. In Eq. (2) the summation is taken over the six reciprocal vectors $`𝐆`$ that give the first peaks of the structure factor. In practice, the $`\delta `$-function must be broadened so that it can be handled numerically.
The orientational correlation function is defined by
$`C_6(r)`$ $`\equiv `$ $`\langle \psi _6^{}(𝐫)\psi _6(\mathrm{𝟎})\rangle `$ (3)
$`=`$ $`{\displaystyle \frac{{\displaystyle \sum _{i,j}}\delta (r-|𝐫_i-𝐫_j|)\psi _6^{}(𝐫_i)\psi _6(𝐫_j)}{{\displaystyle \sum _{i,j}}\delta (r-|𝐫_i-𝐫_j|)}},`$ (4)
where $`\psi _6(𝐫)=\frac{1}{n_c}\sum _\alpha ^{\mathrm{n}.\mathrm{n}.}e^{6\mathrm{i}\theta _\alpha (𝐫)}`$, and $`\theta _\alpha (𝐫)`$ is the angle of the vector connecting an electron at $`𝐫`$ and its $`\alpha `$-th nearest neighbor with respect to a fixed axis, say, the $`x`$-axis. The summation is taken over the $`n_c`$ nearest neighbors, which are determined by the Voronoi diagram , or equivalently by its dual mapping, the Delaunay triangulation.
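A sketch of how $`\psi _6`$ can be computed in practice (an illustration, not the authors' code; it uses a Delaunay triangulation with open boundaries, whereas the simulation described here is periodic):

```python
# Local orientational order parameter psi_6: for each particle, average
# exp(6i*theta) over the Delaunay (Voronoi) nearest neighbours.
import numpy as np
from scipy.spatial import Delaunay

def psi6(points):
    """Complex psi_6 for each row of an (N, 2) array of positions."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    psi = np.zeros(len(points), dtype=complex)
    for k in range(len(points)):
        nbrs = indices[indptr[k]:indptr[k + 1]]
        d = points[nbrs] - points[k]
        theta = np.arctan2(d[:, 1], d[:, 0])
        psi[k] = np.exp(6j * theta).mean()
    return psi

pts = np.random.rand(400, 2)       # disordered points: |psi_6| well below 1
print(np.abs(psi6(pts)).mean())
```

$`C_6(r)`$ then follows by correlating $`\psi _6`$ over particle pairs binned in separation $`r`$, as in eq. (4).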
We first look at the positional and the orientational correlation functions for $`\mathrm{\Gamma }=200`$ and $`\mathrm{\Gamma }=160`$, typical solid phases, in Fig. 1. Although $`\mathrm{\Gamma }`$ is high ($`T`$ is low) enough, the positional correlation is seen to decay slowly in both cases, indicating an algebraic decay at large distances. The round-off in the correlation function around half of the linear dimension of the system is considered to be an effect of the periodic boundary conditions. The algebraic decay of the positional correlation function implies that the 2D electron solid has only a quasi-long-range positional order at finite temperatures. Thus we have obtained a numerical indication that Mermin’s theorem applies to the electron system as well, which is consistent with the analytical (but not rigorous) results obtained in Refs. .
By contrast, the orientational order is seen to be long-ranged. Therefore while the 2D electron solid has no true long-range crystalline order, it is topologically ordered. The triangular structure is seen as the peaks in both the positional and the orientational correlation functions (see the inset of Fig. 1).
We have plotted the Delaunay triangulation of a snapshot of the electron configuration for a solid ($`\mathrm{\Gamma }=160`$) or for a liquid ($`\mathrm{\Gamma }=90`$) in Fig. 2. We can in particular look at the topological defects, i.e., five-fold and seven-fold coordinated electrons. From the result the defects are seen to appear in isolated pairs, or more precisely in quartets, in the solid phase, which explains how a quasi-long-range positional order is compatible with a long-range orientational one. On the other hand, defects appear with a high density in the liquid phase.
We now focus on the orientational correlation function near the crystallization in Fig. 3. The result is obtained by cooling the system from a liquid to a solid. The orientational order is short-ranged for $`\mathrm{\Gamma }\le 120`$ and long-ranged for $`\mathrm{\Gamma }\ge 140`$. Around the liquid-solid boundary ($`\mathrm{\Gamma }=130`$), the orientational correlation function, plotted on a double logarithmic scale in Fig. 3, indicates an algebraic decay (while the positional order is short-ranged). The power evaluated from the data is approximately equal to unity, which is greater than the upper bound of $`1/4`$ predicted by the KTHNY theory. Large statistical errors near the transition, however, prevent us from drawing any definite conclusion on the existence of the hexatic phase. In fact, the correlation functions behave like those of a solid at $`\mathrm{\Gamma }=130`$ when the system is heated from a solid to a liquid, with no indication of the hexatic phase. This may be due to a finite-size effect, where a solid can be pinned in a melting process. For a small system, $`N=100`$, a solid phase in fact persists down to $`\mathrm{\Gamma }=120`$ when heated.
Another finite-size effect is that, when the system crystallizes, the crystal axes can tilt from the unit cell axes of the finite system. The crystallization does occur in a tilted way in the present simulation. The misalignment causes a long relaxation time for the system to reach the lowest energy state. However the fact that the system crystallizes with tilted axes shows in itself that the $`N=900`$ system is sufficiently large in that boundary effects are not too strong. By contrast, we found that the crystal axes are always aligned to the unit cell for $`N=100`$. For the $`N=100`$ system, which is the size employed by Kalia et al. , we found no indication of the hexatic phase, either. A numerical difficulty in MD simulations also arises from finite time steps. Finite time effects might result in insufficient equilibration, especially near a continuous transition. Even if the system is well-equilibrated, it would be difficult to tell a slow exponential decay from an algebraic decay in the correlation function for a finite system.
In summary, we have performed a molecular-dynamics simulation to investigate the ordering of a classical 2D electron system. From the positional and the orientational correlation functions we have found an indication that there is a quasi-long-range positional order and a long-range orientational order in the solid phase, which implies that Mermin’s theorem is not spoiled even for the long-range $`1/r`$ interaction. On the other hand, we have obtained only an indication, not a conclusive result, for the existence of the hexatic phase predicted by the KTHNY theory, which thus remains an open question.
Although we basically have electrons on a liquid-helium surface in mind, a planar classical one-component plasma has recently been realized as laser-cooled ions trapped in a disk region. The disk has a finite thickness, for which stable crystalline phases are observed. If the ions could be trapped completely in 2D, the present picture would be applicable. Conversely, it is an interesting theoretical problem to extend the present line of approach to planar systems with finite thicknesses.
We wish to thank Kazuhiko Kuroki, Hiroshi Imamura, Katsunori Tagami, and Naruo Sasaki for valuable discussions. The numerical calculations were mainly done with Fujitsu VPP500 at the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
# On a Quantum Equivalence Principle.
## 1 Introduction.
One of the most ambitious programs in Modern Physics comprises the quantization of General Relativity . Nevertheless, the proposed solutions to this old conundrum have, up to now, in one way or another, failed .
The solutions that already exist in this context always assume the logical consistency (at least in some limiting case) of a theory involving simultaneously the postulates of Quantum Theory (QT) and of General Relativity (GR). In connection with this issue, one of the thorniest points is related to the validity in QT of the Minimal Coupling Principle (MCP). It is true that there are claims which state that the usual MCP is valid even in QT . Nevertheless, recently some works have appeared which contemplate the possible inconsistency of MCP on the quantum level .
In this work we will analyze this last claim more carefully and try to answer the following questions:
1) Which postulate(s) of GR can be held responsible for the aforementioned incompleteness?
2) Could we define a logically consistent Quantum Minimal Coupling Principle (QMCP)?
Concerning the first question, we will give an argument which claims that this inconsistency stems from the non–local character of the information that is required in order to determine at every point of spacetime the corresponding wave function. This fact will be explained by constructing two Gedankenexperimente. One of them will be the usual two–slit experiment with the whole measuring device situated in a region containing a nonvanishing gravitational field, in such a way that the whole experimental apparatus lies completely within the validity region of a locally flat coordinate system (LFCS). The second one will be based upon Ahluwalia’s flavor–oscillation clock . These two Gedankenexperimente will be analyzed using Feynman’s formulation in terms of path integrals. As a consequence of MCP, the experimental outputs of the first case imply that curvature has no effect on the contribution to the propagator of those trajectories which do not lie completely within the validity region of the LFCS, whereas the second one asserts that curvature does affect this propagator. It will be shown that this inconsistency could emerge from the fact that the geometrical information required to calculate the probability of finding a particle at any point of the respective manifold does not lie in a region of finite volume. This does not happen in classical mechanics, where the geometrical information required to describe the movement between two points can always be enclosed in a region of finite volume.
Regarding the second question, we will try to obtain a logically consistent description of QT in the context of GR by means of a new QMCP. The original idea here consists in restricting the integration domain of the involved path integral. In other words, this new QMCP is expressed in terms of the so called Restricted Path Integral Formalism (RPIF). Afterwards, using this idea we will calculate the case of a free particle and recover, as a limit situation, Feynman’s propagator for a free particle. The physical conditions under which this happens are also obtained.
Finally, recalling the relation between RPIF and the Decoherence Model (DM), we will be able to connect the issue of a logically consistent QMCP with the problem of the collapse of the wave function. In other words, with this proposal we could relate the topic of a QMCP to the claim that the gravitational field could be one of the physical entities driving the collapse of the wave function .
At this point it is also important to add that there are already attempts to formulate a quantum equivalence principle in the context of Feynman’s idea . This has been done starting from a path integral in a flat space parametrized with euclidean coordinates. Afterwards, a non–holonomic coordinate transformation is carried out and in this way a path integral is obtained, and the involved coordinate transformation is claimed to be a quantum equivalence principle which allows us to generalize the Feynman path integral formula in cartesian coordinates to non–euclidean spaces . Nevertheless, it is very important to distinguish between geometrical effects and accelerative effects . If we had a flat space and transformed its euclidean coordinates in a non–holonomic manner, then we would go to an accelerated reference frame, but this transformation does not endow our initial flat manifold with nonvanishing curvature, i.e., there is still no gravitational field. What has then been obtained is the description in an accelerated reference frame of the corresponding path integral. But if we insist on interpreting the resulting path integral as the propagator in a curved manifold (as a consequence of the equivalence between gravity and accelerated frames), then we may convince ourselves very easily that this proposal can not render the correct path integral in an arbitrary curved manifold. We may understand this point better by noting that this non–holonomic coordinate transformation ($`dx^i\to c_\mu ^i(q)dq^\mu `$, where $`c_\mu ^i(q)=\frac{\partial x^i}{\partial q^\mu }`$ are the so called basis triads ) can be contemplated from a different point of view, namely we begin with a curved manifold in which it is possible to define a globally flat coordinate system. This last condition imposes a very stringent geometrical restriction on our curved manifold. Indeed, we already know that in the most general case this condition is not fulfilled; the presence of tidal forces allows the definition of locally flat coordinate systems but will not in general allow the definition of a globally flat coordinate system. In other words, this quantum equivalence principle does not render the path integral of a particle in an arbitrary curved manifold because from the very beginning it assumes the absence of tidal forces, i.e., the absence of nonuniformities in the gravitational field.
## 2 Path Integrals and Minimal Coupling Principle.
To understand this incompleteness argument a little better , let us at this point consider an arbitrary nonvanishing gravitational field, and pick out a certain point $`P`$ in this manifold. The geometrical properties of GR allow us to define a LFCS whose origin coincides with the point $`P`$ and which is valid only for points “sufficiently close” to $`P`$. Take now a second point $`A`$; the only condition that we impose on this point is that it has to be located in the validity region of the LFCS.
At this point let us be a little more explicit about the meaning of the phrase “validity region of the locally flat coordinate system”. A correct geometrical definition of a LFCS is given by the so called Fermi Normal Coordinates , accurate to second order. The deviation of $`g_{\mu \nu }`$ from the flat case is proportional to $`R_{\mu l\nu m}x^lx^m`$, i.e., the local metric takes the form $`g_{\mu \nu }dx^\mu dx^\nu `$ with $`g_{\mu \nu }=\eta _{\mu \nu }+\alpha R_{\mu l\nu m}x^lx^m+O(|x^j|^3)`$; here $`|\alpha |\in [0,4/3]`$, $`x^l`$ are the local space–like Lorentz coordinates, and $`R_{\mu l\nu m}`$ are the components of the Riemann tensor along the world line $`x^j=0`$. With this metric we may estimate the size of the validity region of the LFCS, i.e., it comprises those points which satisfy the condition $`|R_{\mu l\nu m}x^lx^m|\ll 1`$.
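For orientation, the condition $`|R_{\mu l\nu m}x^lx^m|\ll 1`$ can be turned into a rough number; the weak-field estimate and the Earth parameters below are assumptions introduced only for illustration.

```python
# Size of the validity region: |R * x^2| << 1 gives x << R**(-1/2), with
# the curvature scale R ~ G*M/(c^2 * r^3) in the weak-field limit.
G = 6.674e-8   # cm^3 g^-1 s^-2
c = 2.998e10   # cm/s
M = 5.97e27    # g, Earth's mass
r = 6.37e8     # cm, Earth's radius

R_curv = G * M / (c**2 * r**3)       # ~1.7e-27 cm^-2
print(R_curv ** -0.5 / 1.0e5, "km")  # validity scale ~2e8 km
```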
Assume now that a freely falling quantum mechanical particle was at point $`P`$, and let us ask for the probability of finding this particle at $`A`$. From MCP we have that in the LFCS the particle is described by the free particle Schrödinger equation (here we restrict ourselves to the low-velocity limit of Dirac’s equation). Up to this point everything seems to be logically consistent. But we know that the Schrödinger formulation of QT is completely equivalent to Feynman’s . Therefore we must be able to find this probability using Feynman’s idea; otherwise we would have a breakdown of MCP. In this formulation, the probability of finding our particle at $`A`$ is constructed from the contribution of all the trajectories that join $`P`$ and $`A`$. At this very point we already face a conceptual problem: in order to perform this sum (integration) we must consider not only trajectories which can be described by the LFCS, but also trajectories beyond the validity region of this coordinate system. Therefore the perspective obtained from Feynman’s formulation suggests that the LFCS and the laws of physics of Special Relativity may not suffice to obtain a complete description of the aforementioned probability. This argumentation already supports the incompleteness of GR in the context of QT that has been pointed out .
Of course, if we did not have to sum (integrate) over all possible trajectories and had to sum only over those trajectories which can be described by the LFCS, then this inconsistency would disappear. Clearly, that would also require the introduction of a weight functional in Feynman’s path integral formulation, since otherwise we could not obtain, at least as a limiting case, Feynman’s result. We may understand this point better by noting that if the components of the Riemann tensor increase, then the validity region of the LFCS becomes smaller, and therefore fewer trajectories appear in the path integral. But we still expect to recover, under some conditions (for example, when the length of the classical trajectory is much smaller than the validity region of the LFCS), Feynman’s case, which considers all possible trajectories. This requirement could be fulfilled with the introduction of a weight functional.
At this point let us underline, by means of two Gedankenexperimente, the logical inconsistency of a theory containing the postulates of QT and the usual MCP.
Firstly, consider the usual two–slit experiment , but this time let us also assume that the whole experimental device is immersed in a region containing a nonvanishing gravitational field, such that the apparatus lies completely inside the validity region of a locally flat coordinate system. Then the interference pattern that would appear on the corresponding detecting screen is, as a consequence of MCP, the same interference pattern that emerges in the case without a gravitational field. This result implies that the contribution to the interference pattern coming from those trajectories not lying completely in the validity region of the locally flat coordinate system is the same as in the case in which we had no gravitational field.
At this point it is noteworthy that this last Gedankenexperiment is not the experimental setup of Colella et al. . Indeed, their apparatus is not located within the validity region of a LFCS, since it is precisely the effect of gravity on the interference pattern of two neutron beams that is analyzed there.
Secondly, let us take up Ahluwalia’s most important result . He asserts that the frequency of a “flavor–oscillation clock” $`\Omega ^F`$ in a freely falling frame in Earth’s gravity and the same frequency $`\Omega ^{\infty }`$ in a gravity–free region satisfy the condition $`\Omega ^F<\Omega ^{\infty }`$, i.e., $`1/\Omega ^{\infty }<1/\Omega ^F`$. Such clocks are constructed as a quantum mechanical superposition of different mass eigenstates, for instance two neutrinos from different lepton generations, $`|F_a>=cos(\theta )|m_1>+sin(\theta )|m_2>`$ and $`|F_b>=-sin(\theta )|m_1>+cos(\theta )|m_2>`$.
In this last argument the gravitational system is composed of the Earth and the local cluster of galaxies, the so called Great Attractor. We must also comment that the aforementioned effect emerges because the gravitational potential of this system, $`\varphi _{effe.}`$, is for points near the Earth’s surface given by two contributions, $`\varphi _{effe.}=\varphi _E+\varphi _{GA}`$. The first one, $`\varphi _E`$, stems from Earth’s mass, while the second one, $`\varphi _{GA}`$, comes from the Great Attractor. This second term is constant up to one part in about $`10^{11}`$. Therefore, if we now go to a freely falling reference frame near Earth’s surface, we may get rid (as a consequence of MCP) of all gradients of the gravitational potential; nevertheless its constant part will survive, i.e., $`\varphi _E`$ disappears but $`\varphi _{GA}`$ is preserved. In other words, gravity–induced accelerations vanish, but the constant part of the potential has a physical effect (via $`\varphi _{GA}`$–dependent gravitationally induced phases), something similar to the Aharonov–Bohm effect .
We may now measure time with these clocks. Thus, if we consider the clock situated in the freely falling frame, assume that we started with the flavor state $`|F_a>`$ and ask for the probability of having the flavor state $`|F_b>`$ at a proper time $`\tau =1/\Omega ^{\infty }`$, then we find that the result does not match the probability of the gravity–free case. Clearly, we are allowed to suppose that this second Gedankenexperiment takes place within the validity region of a LFCS of an adequately chosen curved manifold.
We may see that in the context of Feynman’s formulation Ahluwalia’s result seems to claim that the contribution to the corresponding probability coming from those trajectories that are not located within the validity region of LFCS is not the same as in the case in which we had no gravitational field. This conclusion clashes with that coming from the first Gedankenexperiment.
At this point there is an additional argument which deserves a short remark. Usually, one of the doubts around the possibility of a consistent definition of MCP in the quantum realm concerns the fact that in QT physics is described by fields, which, of course, have a non–local character. This assertion is not very precise. Indeed, if we consider the description of a simple fluid in the context of GR, then we encounter a system also described by fields (velocity field, pressure field, etc.), which in consequence shares this non–local character. Nevertheless, the theory of hydrodynamics in curved spacetimes does not have the logical inconsistencies that beset QT in the context of GR.
If we take up the path integral formulation of QT, we may easily see that the probability of finding a particle at a certain point depends on geometrical information associated with all trajectories that join the initial and final points. Clearly, we can never find a finite neighborhood around either of these two points containing all these trajectories. This last fact does not appear in the context of classical mechanics, in which we may always find a finite neighborhood, around the starting point or the final one, containing all the needed information. In other words, in QT the geometrical information that renders the value of the probability at the final point has a non–local character, and this nonlocality is incompatible with MCP, which is a postulate based on locality.
## 3 Alternative definition of Minimal Coupling in Quantum Theory.
A possible solution to this conceptual problem could be the modification of the integration domain of the corresponding path integral in the presence of a nonvanishing gravitational field. In other words, in order to evaluate the probability of finding our particle in $`A`$, knowing that it was previously in $`P`$, we could integrate only over those trajectories which lie completely inside the validity region of our locally flat coordinate system. This restriction would then allow us to evaluate the required probability by resorting only to those points of the corresponding manifold which lie within the validity region of the LFCS. Of course, as was commented above, if we want to obtain, at least as a limiting case, Feynman’s propagator for a free particle, it seems also unavoidable that a weight functional has to be included.
Clearly, this weight functional must depend on the geometrical structures $`G`$ of the corresponding spacetime.
Therefore we propose the following
Quantum Minimal Coupling Principle.
The probability of finding a spinless particle in $`A`$, knowing that it was previously in $`P`$, is given by $`|U_G|^2`$, where
$$U_G(A,P)=\int_\Omega \omega_G[x(\tau)]\,d[x(\tau)]\,\exp\left(iS[x(\tau)]/\hbar\right).$$
(1)
Here $`\Omega `$ denotes the set of all trajectories joining $`A`$ and $`P`$, $`S`$ is the classical free particle action, $`\omega _G`$ is a weight functional that depends on the geometrical structures $`G`$ of the corresponding manifold and also on the trajectory (for instance, it vanishes for those trajectories not lying completely within the validity region of the LFCS around $`P`$), and finally $`\tau `$ is any parametrization of the respective trajectories. Of course, in the absence of gravity $`\omega _G[x(\tau )]=1`$ for all trajectories.
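A minimal numerical sketch of expression (1) may help to fix ideas. The snippet below time-slices the restricted propagator for a free particle with a gaussian weight functional centered on a reference trajectory $`a(\tau )`$; all parameter values (units with $`\hbar =m=1`$, grid sizes, packet width) are hypothetical choices of ours, made only to keep the computation manageable, and the highly oscillatory kernel makes this illustrative rather than precise.

```python
import numpy as np

# Time-sliced evaluation of the restricted propagator (Eq. 1), illustrative only.
hbar, m = 1.0, 1.0
T, nt = 1.0, 200                    # total time and number of Trotter slices
dt = T / nt
x = np.linspace(-10.0, 10.0, 601)   # spatial grid
dx = x[1] - x[0]
da2 = 4.0                           # (size of the LFCS validity region)^2, hypothetical
a = lambda t: 0.0                   # reference classical trajectory (here at rest)

# Short-time free-particle kernel; the weight factor is applied slice by slice.
X1, X0 = np.meshgrid(x, x, indexing="ij")
K = np.sqrt(m / (2j * np.pi * hbar * dt)) * np.exp(1j * m * (X1 - X0) ** 2 / (2 * hbar * dt))

psi = np.exp(-x**2 / 0.5).astype(complex)          # narrow packet starting at P = 0
psi /= np.sqrt(np.trapz(np.abs(psi) ** 2, x))
for k in range(nt):
    w = np.exp(-2.0 * dt / (T * da2) * (x - a(k * dt)) ** 2)  # gaussian weight
    psi = (K @ (w * psi)) * dx                                # one Trotter step

# Norm after weighting: probability of staying inside the monitored corridor.
print("survival probability:", np.trapz(np.abs(psi) ** 2, x))
```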
As was mentioned before, this definition has the characteristic that in order to calculate the needed probability it suffices to have information about the geometrical structure of the corresponding manifold (it enters in the definition of $`\omega _G[x(\tau )]`$) and also the laws of Physics in Special Relativity. This non–local character of the required information in the context of QT implies that now we must know the geometry of the manifold in order to calculate this probability.
At this point it is noteworthy to mention how this new QMCP differs from, and at the same time coincides with, the usual MCP. To begin, let us comment that if the probability coincides (under the adequate conditions) with Feynman’s case, then we are in the spirit of MCP. Nevertheless, there is an additional point in which this proposal differs radically from the usual spirit of MCP: the usual principle claims that it suffices to have the laws of Special Relativity in order to know the result of any local experiment, i.e., it is unnecessary to have any kind of information concerning the geometry of the corresponding manifold. Here we have a different situation, because in this proposal we do need information about the involved geometry in order to know the result of any “local” quantum experiment.
Let us now underline the mathematical similarity between this QMCP definition and RPIF , which is one of the formulations that already exist in the context of DM and tries to solve the so called quantum measurement problem. In other words, decoherence can be mathematically described in terms of RPIF .
This last remark allows us to interpret expression (1) as stating that the geometrical structures of our manifold act on particles as an ever-present measuring device, and in consequence it could render a geometrical explanation of the collapse of the wave function. This conclusion not only matches an old claim (gravity should play a fundamental role in approaches which could modify the formalism of QT ), but also connects the problem of the logical consistency of MCP in the context of QT with the old conundrum around the collapse of the wave function.
On the one hand, this model coincides with several proposals that introduce the gravitational field as one of the physical entities that could explain this collapse . On the other hand, we must also stress that this model has a fundamental difference with respect to these ideas which also use the gravitational field as an agent behind the collapse. If we take a look, for instance, at Diosi’s work , we immediately notice that in it the density operator acquires a stochastic behavior because the gravitational field does have fluctuations (of quantum origin) around the classical newtonian potential. In our case there are no spacetime fluctuations at all, and the gravitational field has a completely classical behavior.
## 4 Free Particle Propagator.
In this section we will calculate the propagator of a free particle in the context of the QMCP proposed here and recover, under the adequate conditions, Feynman’s propagator .
The first problem that we face in this model concerns the correctness of the theoretical predictions of RPIF. In this respect we must say that even though there are already theoretical results which could render a feasible framework against which these predictions could be confronted, the problem still remains open. The reason for this lies in the fact that the required experiments (for example, the continuous measurement of the position of a particle in a Paul trap ) have not yet been carried out.
The second problem is related to the choice of the weight functional appearing in expression (1). From RPIF we cannot deduce the precise form of the correct weight functional; the exact expression depends on the measuring device . In other words, in this particular case it depends on the gravitational field that we could have. But we do not know how a specific gravitational field defines its corresponding weight functional. Nevertheless, in a first approach we may accept a functional that gives the correct order of magnitude of the involved effects. Hence, knowing that in the first two cases in which this formalism was used the results coming from a Heaviside weight functional and those coming from a gaussian one coincide up to the order of magnitude , we are allowed to consider a gaussian weight functional. This form has already been used to analyze the response of a gravitational wave antenna of Weber type , the measuring process of a gravitational wave in a laser–interferometer , or even to explain the emergence of the classical concept of time . But a sounder justification of this choice comes from the fact that there are measuring processes in which the weight functional has precisely a gaussian form . In consequence we could think of a curved manifold whose weight functional is very close to the gaussian form.
In order to simplify the calculation we will consider the case of a one–dimensional harmonic oscillator subject to the action of a gaussian weight functional. The restriction on the dimensionality of the system does not mean any loss of generality, since the general case can be obtained from the one–dimensional situation: the three–dimensional propagator is just the product of three one–dimensional ones. The restriction to a harmonic oscillator will also disappear, because we will take the limit of vanishing frequency.
Therefore our starting point is the propagator of a particle with mass $`m`$ and frequency $`w`$
$$U_G(A,P)=\int_\Omega \omega_G[x(\tau)]\,d[x(\tau)]\,\exp\left(\frac{i}{\hbar}S[x(\tau)]\right),$$
(2)
where we have that
$$\omega_G[x(\tau)]=\exp\left\{-\frac{2}{T\Delta a^2}\int_{\tau^{\prime}}^{\tau^{\prime\prime}}[x(\tau)-a(\tau)]^2\,d\tau\right\}.$$
(3)
Here $`T=\tau ^{\prime \prime }-\tau ^{\prime }`$ and $`\Delta a`$ represents the size of the validity region of the LFCS.
At this point it is noteworthy to mention that the Weak Equivalence Principle (WEP) is still valid. In order to recover it from expression (2) (which means that in the classical limit the motion of a free particle is given by geodesics), we have introduced in the weight functional the classical trajectory of the free case, $`a(\tau )`$. Indeed, the classical behavior appears when $`S/\hbar \gg 1`$, and we also know that $`S`$ does not change, at least to first order, in the vicinity of the classical trajectory. Hence, if $`\omega _G[x(\tau )]`$ changes much more slowly than the phase in the case $`S/\hbar \gg 1`$, then the main contribution to the propagator comes from an infinitesimal strip around the classical path (which in the case of a free particle is a geodesic). In consequence only the classical trajectory has a nonvanishing probability, and in this way we recover WEP.
The validity of WEP imposes a very strong condition on the set of possible weight functionals. Indeed, only those $`\omega _G[x(\tau )]`$ whose rate of change is much slower than that of the phase (in the limit $`S/\hbar \gg 1`$) are allowed. But this restriction also yields a theoretical argument against which this QMCP could be confronted: if we could calculate the weight functional of any curved manifold, then, in order to have the survival of WEP, the resulting $`\omega _G[x(\tau )]`$ must fulfill this requirement; otherwise we would have manifolds in which WEP loses its validity.
In expression (2) $`S`$ is the action of a harmonic oscillator
$$S[x(\tau)]=\int_{\tau^{\prime}}^{\tau^{\prime\prime}}L(x,\dot{x})\,d\tau,$$
(4)
$$L(x,\dot{x})=\frac{1}{2}m\dot{x}^2-\frac{1}{2}mw^2x^2.$$
(5)
This case is readily calculated
$$U_G(A,P)=\sqrt{\frac{m\tilde{w}}{2\pi i\hbar\sin(\tilde{w}T)}}\,\exp\left(-2\frac{\langle a^2\rangle}{\Delta a^2}+\frac{i}{\hbar}\tilde{S}_c\right).$$
(6)
In this last expression a few symbols need a short explanation. Here $`\tilde{w}^2=w^2-i\frac{4\hbar }{mT\Delta a^2}`$, $`\langle a^2\rangle =\frac{1}{T}\int_{\tau ^{\prime }}^{\tau ^{\prime \prime }}a^2(\tau )d\tau `$, and finally $`\tilde{S}_c`$ is the classical action of the fictitious complex oscillator defined by $`m\ddot{x}+m\tilde{w}^2x=0`$.
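The origin of the complex frequency can be traced in one line (a sketch of the derivation, not spelled out above): the gaussian weight functional simply adds an imaginary quadratic term to the potential in the effective action,

$$\frac{i}{\hbar}S[x]-\frac{2}{T\Delta a^2}\int_{\tau^{\prime}}^{\tau^{\prime\prime}}(x-a)^2\,d\tau=\frac{i}{\hbar}\int_{\tau^{\prime}}^{\tau^{\prime\prime}}\left[\frac{m}{2}\dot{x}^2-\frac{m}{2}\left(w^2-\frac{4i\hbar}{mT\Delta a^2}\right)x^2+(\text{terms linear in }x)\right]d\tau,$$

so that $`\tilde{w}^2=w^2-4i\hbar /(mT\Delta a^2)`$, while the terms linear in $`x`$ only shift the classical action to $`\tilde{S}_c`$.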
Let us now consider the situation of a free particle, in other words, let us now take the case $`w=0`$. Under this condition expression (6) becomes now
$$U_G(A,P)=\sqrt{\frac{m}{2\pi i\hbar T}\,\frac{\sqrt{-i\frac{4T\hbar}{m\Delta a^2}}}{\sin\left(\sqrt{-i\frac{4T\hbar}{m\Delta a^2}}\right)}}\,\exp\left(-2\frac{\langle a^2\rangle}{\Delta a^2}+\frac{i}{\hbar}\tilde{S}_c\right).$$
(7)
Expression (7) is then the propagator of a free particle in this model. It is similar to the propagator of a one–dimensional free particle whose coordinate is being continuously measured . It contains an exponential damping term which depends on the ratio between $`\langle a^2\rangle `$ and the size of the validity region of the LFCS. If $`\langle a^2\rangle `$ is much smaller than $`\Delta a^2`$, then the damping plays no role at all in the dynamics of the particle.
In order to recover Feynman’s propagator, let us consider the limit $`\sqrt{-i\frac{4T\hbar }{m\Delta a^2}}\to 0`$, in which expression (7) becomes
$$U_G(A,P)=\sqrt{\frac{m}{2\pi i\hbar T}}\,\exp\left(\frac{i}{\hbar}\frac{m}{2T}l^2\right)\exp\left(-2\frac{\langle a^2\rangle}{\Delta a^2}\right).$$
(8)
Here $`l`$ is the distance between the points $`A`$ and $`P`$. The imposed condition is fulfilled if $`\frac{T\hbar }{m}\ll \Delta a^2`$.
Let us now analyze expression (8). The first two factors on the right hand side are identical to Feynman’s free particle propagator. The last factor is a new contribution; it has a damping character and is a direct consequence of the measuring role that the degrees of freedom of the involved manifold play in this QMCP. If we also have $`\langle a^2\rangle /\Delta a^2\ll 1`$, then (8) becomes
$$U_G(A,P)=\sqrt{\frac{m}{2\pi i\hbar T}}\,\exp\left(\frac{i}{\hbar}\frac{m}{2T}l^2\right).$$
(9)
In order to estimate, at least very roughly, how good these two last approximations are at points near the Earth’s surface, let us take a weak field description of Earth’s gravitational potential $`\varphi `$ near its surface. Under this condition we have $`R_{0k0}^j\approx \frac{\partial ^2\varphi }{\partial x^j\partial x^k}`$, which implies $`\Delta a\approx 10^{13}`$ cm (this is no surprise at all; indeed, if we consider a different but related case, namely the size of the validity region of the coordinate system of a uniformly accelerated observer whose acceleration equals the magnitude of gravity on Earth’s surface, then we find that this region has a size of approximately 1 light–year $`\approx 10^{18}`$ cm ). Then for a free electron $`\frac{T\hbar }{m}\ll \Delta a^2`$ breaks down only if $`T\approx 10^{25}`$ sec., and $`\langle a^2\rangle /\Delta a^2\ll 1`$ is no longer fulfilled when $`l\approx 10^{12}`$ cm. In other words, on Earth’s surface the model proposed here cannot be distinguished from Feynman’s case.
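These orders of magnitude are easy to check. The short script below reproduces them for a free electron, taking the text’s estimate $`\Delta a\approx 10^{13}`$ cm as input (CGS units):

```python
# Order-of-magnitude check of the two validity conditions for a free electron
# near Earth's surface; Delta a ~ 1e13 cm is the estimate quoted in the text.
hbar = 1.055e-27   # erg s
m_e  = 9.109e-28   # g
da   = 1.0e13      # cm, size of the LFCS validity region

T_max = m_e * da**2 / hbar          # time at which T*hbar/m ~ Delta a^2
print(f"T hbar/m << da^2 breaks near T ~ {T_max:.1e} s")

# For a straight trajectory of length l, <a^2> is of order l^2, so the damping
# condition <a^2>/da^2 << 1 fails when l approaches Delta a.
for l in (1e11, 1e12, 1e13):
    print(f"l = {l:.0e} cm : <a^2>/da^2 ~ {(l / da)**2:.1e}")
```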
## 5 Diatomic Interstellar Molecules.
An interesting case could be the analysis, in this model, of the motion of simple interstellar molecules. The reason for this stems from the fact that it was recently claimed that QT could participate in the determination of the structure and size of galaxies . Therefore we may wonder what the motion of a simple interstellar molecule looks like in this proposal. The idea here is to better understand the differences (with respect to the usual case) in the behaviour of this kind of matter and to see if some new effects could emerge.
It is already known that the so called giant molecular clouds are an important component of the interstellar medium . These clouds are its coolest components, with temperatures in the range from 10 to 100 K, and contain several diatomic molecules, for instance CO, CH, CN, CS, or C<sub>2</sub> .
In the case of a diatomic molecule the “effective” Hamiltonian contains a potential term $`V(R)`$ (where $`R`$ is the separation between the nuclei) which includes not only the Coulomb repulsion of the nuclei but also the effective potential due to the electron configuration. This attractive potential can be approximated, for small values of $`R`$, with a linear oscillator. This approximation is much better for heavy molecules (better for a Xe–Xe molecule than for a Ne–Ne molecule) . A better description is obtained by means of a Lennard–Jones potential.
If we consider only the vibrational description for the nuclei (rotational degrees of freedom are neglected), then we may reduce the whole problem to the analysis of a harmonic oscillator, at least in a first approach.
If one of these diatomic molecules is located in a region in which a nonvanishing gravitational field exists, then the propagator of its associated harmonic oscillator can be approximated with expression (6).
The relative probability for these systems is, in the case of large $`T`$ ($`\frac{4\hbar }{mTw^2\Delta a^2}\ll 1`$),
$$P=\frac{mw}{2\pi\hbar\sin(wT)}\,\exp\left(-4\frac{\langle a^2\rangle}{\Delta a^2}\right)$$
(10)
Clearly, the wave function shows a spreading with time which does not appear in the usual case, i.e., in $`\frac{mw}{2\pi \hbar \sin(wT)}`$. Therefore it seems that, in a first approach, some simple diatomic interstellar molecules would not be as strongly localized as in the common situation. We may wonder if this new spreading of the wave function of some components of the interstellar matter could imply some change in the current models that seek an explanation for the appearance of cosmic structures.
Assume now the limit $`\frac{4\hbar }{mTw^2\Delta a^2}\gg 1`$. Hence the relative probability for this system becomes
$$P=\frac{mw}{2\pi\hbar}\sqrt{\frac{2}{\cosh\left(2\sqrt{\frac{2\hbar T}{m\Delta a^2}}\right)-\cos\left(2\sqrt{\frac{2\hbar T}{m\Delta a^2}}\right)}}\,\exp\left(-4\frac{\langle a^2\rangle}{\Delta a^2}\right)$$
(11)
Here the relative probability diminishes, not only as a consequence of the purely geometrical term $`\exp \left(-4\frac{\langle a^2\rangle }{\Delta a^2}\right)`$, but also as a consequence of the time $`T`$.
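For concreteness, the two limiting expressions (10) and (11) can be coded in a few lines. The parameter values in the example call are hypothetical placeholders of ours (a CO-like reduced mass and vibrational frequency, and the $`\Delta a`$ of the previous section); only the functional dependence on $`T`$ matters here:

```python
import numpy as np

hbar = 1.055e-27                       # erg s

def P_large_T(m, w, T, a2_over_da2):
    """Eq. (10), valid when 4*hbar/(m*T*w^2*da^2) << 1."""
    return m * w / (2 * np.pi * hbar * np.abs(np.sin(w * T))) * np.exp(-4 * a2_over_da2)

def P_small_T(m, w, T, da2, a2_over_da2):
    """Eq. (11), valid when 4*hbar/(m*T*w^2*da^2) >> 1."""
    u = 2 * np.sqrt(2 * hbar * T / (m * da2))
    return (m * w / (2 * np.pi * hbar)
            * np.sqrt(2.0 / (np.cosh(u) - np.cos(u)))
            * np.exp(-4 * a2_over_da2))

# Purely illustrative values (CGS): reduced mass, vibration frequency, da^2.
m, w, da2 = 1.14e-23, 4.1e14, 1.0e26
print(P_large_T(m, w, 1.0e3, 1e-4), P_small_T(m, w, 1.0e3, da2, 1e-4))
```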
## 6 Conclusions.
Ahluwalia has pointed out that the general–relativistic description of gravitation in the quantum realm cannot be considered complete. In this work we have tried to show that the geometrical information that we need to calculate probabilities (à la Feynman) at any point of a curved manifold is not enclosed in a region with finite volume, a non–local characteristic. Gravity–induced non–locality has already been analyzed and can be interpreted in the context of spinors as a gravity–induced CP violation, which renders a dynamical explanation of the collapse of a neutron star into a black hole and of the involved loss of information. Therefore a more profound analysis of this gravity–induced non–locality could be important.
In relation with this non–locality of QT we have introduced a new QMCP, which comprises two important differences with respect to the usual standpoint in GR, namely:
(1) We must now abandon an old postulate, which states: the probability of finding a particle in $`A`$, knowing that it was previously in $`P`$ (both points within the validity region of LFCS), can be calculated without having any kind of information about the geometrical structure of the corresponding manifold.
(2) The second difference concerns the introduction of a restriction in the integration domain of the corresponding path integral. Mathematically this restriction is expressed by means of a weight functional, which contains information about the geometrical structure of the involved manifold.
The price paid, in connection with the need of knowing the geometry of spacetime, allows us to build a bridge between two topics that up to now have not been related, namely DM and QMCP. In this proposal the degrees of freedom of geometry play the role of a measuring device which acts permanently on a quantum particle. This idea coincides, at least partially, with the claim that the gravitational field could be one of the physical entities behind the collapse of the wave function .
Finally, we calculated the propagator of a free particle and found that it contains, under the appropriate conditions, Feynman’s result. The difference comprises an exponentially decaying factor, which depends only on the length of the classical trajectory and on the size of the validity region of the LFCS. Only if the displacement has an order of magnitude similar to $`\Delta a`$ does the damping term come into play. This new contribution to the propagator depends only on geometrical parameters (there is no dependence on the mass), and in consequence it is the same for all kinds of particles. We have also shown that near the Earth’s surface the new effects that appear in this model are completely negligible. In other words, this proposal coincides with the usual predictions in any terrestrial experiment.
As was commented in section three, in this model gravity could drive the collapse of the wave function (because geometry acts as a measuring device), but the role that it plays is not the same as in other models that already exist in this direction. Indeed, if we take a look at Diosi’s work , we see that the density operator acquires a stochastic behavior, which stems from fluctuations (of quantum origin) of the newtonian gravitational field.
An approximate expression for the propagator of the harmonic oscillator associated with a simple diatomic molecule situated in interstellar space has also been derived. It has been shown that this case exhibits a spreading with time, which emerges as a consequence of the measuring role that geometry plays in this model.
Let us also mention that the description of the problem of measurement in QT has at least five different approaches, which are mathematically equivalent . One of them is precisely RPIF, and at this point we may wonder whether we might express this new QMCP in the formalism of the group approach to the master equation , as a nonlinear stochastic differential equation , or even in terms of the remaining two mathematical models.
Some points that could be interesting to analyze in the context proposed here are the introduction of spin in expression (1), which could allow us to deduce the incompleteness inequality that appears in connection with flavor–oscillation clocks in , as well as the generalization of (1) in order to include the case of Dirac’s equation.
It is noteworthy that we have introduced a modification of QT which has its origin in the degrees of freedom of geometry, and which could therefore shed some light on the problem of a quantum theory of gravity. Indeed, there are some claims stating that not only does GR have to be modified in order to have a quantum theory of gravity, but that QT has to be modified as well .
Acknowledgments.
The author would like to thank A. Camacho–Galván and A. A. Cuevas–Sosa for their help, and D.-E. Liebscher for the fruitful discussions on the subject. The hospitality of the Astrophysikalisches Institut Potsdam is also kindly acknowledged. This work was supported by CONACYT Posdoctoral Grant No. 983023.
# The Micro Wire Detector
## 1 Introduction
A variety of micropattern gas detectors have emerged recently, in close relation with the introduction of advanced printed circuit technology. The possibility of kapton etching has allowed new geometries like the Gas Electron Multiplier (GEM) , the Micro Groove Detector , the Well Detector , and the Micro Slit Gas Detector (MSGD) . These avalanche detectors have provided significant improvements towards the construction of a gas detector suitable for the high radiation environment of the LHC. We present here, as a result of a collaboration between the University of Santiago and the CERN CMT and SMT groups, a new idea that arises as an improvement with respect to the MSGD, which has good charge collection properties but a limited gain (typically around 2000). The openings in the kapton foil have been reduced so as to have a pattern very similar to a GEM on one side of the detector layer. The other side, however, is made of metal strips, or wires, running across the kapton holes. The better mechanical stability of this setup, having less suspended length than in the case of the MSGD, allowed the production of thinner strips. In this way we obtain a single–stage, high gain, proportional device, combining the focused electric field in the micro–hole with the standard wire amplification and charge collection. This is why we call this device a Micro Wire Detector ($`\mu `$WD). The tests of the first prototypes presented here have shown very promising results, making this scheme a real option for a high rate tracking detector with a very low amount of material.
## 2 Detector description
Two 10 x 10cm<sup>2</sup> prototypes of the $`\mu `$WD have been built and tested. The first of them consists of a 50$`\mu `$m thick kapton foil, copper metallized (5 $`\mu `$m) on both sides. On one side a pattern of 70 x 70$`\mu `$m<sup>2</sup> square holes has been lithographically etched. On the opposite side, 25$`\mu `$m wide strips are also etched, ensuring that they run in the middle of the square hole pattern. The kapton is then removed in such a way that just an insulating mechanical joint between the anode (strips) and the cathode (mesh of square holes) remains (see Figure 1). The actual setup can be appreciated in the electron microscope photographs shown in Figure 2. The second detector has been built in the same way, with 60 x 60$`\mu `$m<sup>2</sup> cathode apertures and 25$`\mu `$m kapton thickness. The pitch of the anode strips is 100$`\mu `$m. In this design, the strips are joined in groups of two at the detector end. The chamber is finally assembled by enclosing the detector foil between two 3mm high Vectronite frames, sealed with two metallized kapton foils. The foil in front of the cathode provides the drift field while the other is set to ground.
The main differences with respect to other micropattern gas detectors are that the anodes are suspended, so that no substrate is needed for them, and that these anodes run aligned with the holes in the cathode metallic mesh. The detector foil material represents only 0.037% of a radiation length (in the complete detector, the contributions of the drift electrode and gas should be added). Moreover, this design allows the construction of a mirror cathode device (see Figure 3), using a second kapton foil with another cathode mesh. This scheme would imply a faster operation by improving the charge collection time as a consequence of the reduced drift gap. Also, this configuration would be less sensitive to the Lorentz angle in a magnetic field.
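A back-of-the-envelope check of the quoted material budget is straightforward. The sketch below uses the geometry given above for the 50$`\mu `$m prototype; the cathode hole pitch (taken equal to the 100$`\mu `$m strip pitch) and the fraction of kapton spacers left after etching are assumptions of ours, so the result is only indicative:

```python
# Rough material budget of the 50 um detector foil, to compare with 0.037% X0.
X0_Cu, X0_kapton = 1.436, 28.6          # radiation lengths in cm
t_Cu, t_kapton   = 5e-4, 50e-4          # layer thicknesses in cm

f_cathode = 1 - (70.0 / 100.0) ** 2     # metal left after etching 70x70 um holes
f_anode   = 25.0 / 100.0                # 25 um strips on a 100 um pitch
f_kapton  = 0.1                         # assumed fraction of kapton spacers left

x_over_X0 = (t_Cu * f_cathode + t_Cu * f_anode) / X0_Cu + t_kapton * f_kapton / X0_kapton
print(f"foil thickness ~ {100 * x_over_X0:.3f} % X0")
```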
## 3 Detector performance
The first prototypes have been tested in mixtures of Ar–DME 50–50%. In Figure 4 we present the gain dependence on the cathode and drift voltages, as measured by a charge sensitive preamplifier (ORTEC 142PC) integrating the avalanche charge produced by 5.9 keV X-rays from a <sup>55</sup>Fe radioactive source. It can be seen that gains exceeding 15000 are achievable thanks to the highly non–uniform electric field in the detector foil (computed with the MAXWELL 3D Parameter Extractor program), as shown in Figure 5.
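As in other proportional devices, the gain of the $`\mu `$WD grows roughly exponentially with the cathode voltage, so a straight-line fit to the log-gain summarizes curves like those of Figure 4. The (voltage, gain) pairs below are invented placeholders, not values read from the figures:

```python
import numpy as np

# Exponential gain growth: ln(G) is approximately linear in the cathode voltage.
V = np.array([420.0, 450.0, 480.0, 510.0])   # cathode voltage (V), hypothetical
G = np.array([1.5e3, 3.5e3, 8.0e3, 1.8e4])   # measured gain, hypothetical

slope, intercept = np.polyfit(V, np.log(G), 1)
print(f"gain doubles every {np.log(2) / slope:.0f} V")
```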
In Figure 6 we show the gain dependence on the cathode voltage for the two different prototypes. These results were obtained from X-ray signals from a Cr anode tube. Although the $`\mu `$WD with a kapton thickness of 25$`\mu `$m exhibits higher gains, in the present development stage, the mechanical and electrical robustness of the 50$`\mu `$m foil provided more reliable working conditions. The operation of the device in non flammable mixtures (like Ar-CO<sub>2</sub> 50–50) is also possible as shown in the same figure, although with reduced values of the gain factor.
In order to study the rate capability and current distributions, the detector was again irradiated with a high intensity Cr X-ray tube. The rate capability was tested directly using a current amplifier (ORTEC VT120) at high rate, where the peak value of the current spectra was monitored. The gain variations are less than 5% up to rates as high as 4$`\times `$10<sup>5</sup> Hz mm<sup>-2</sup> (Figure 7).
Beginning from a cold start, charging up of the kapton spacers affects the gain by less than 10%, as shown in Figure 8. The uniformity of the gain was also measured in 2mm steps over a length of 5cm along the strips. The variations with respect to the mean were less than 10% (Figure 9). These results show that the cathode and anode planarity and the thickness uniformity of the kapton spacers are good enough.
In Figure 10 we show the anode, cathode and drift currents versus the cathode voltage (a) and versus the drift field (b) while the detector was irradiated with a high intensity X-ray beam with a 2mm diameter collimation. It is possible to obtain field configurations in which 90% of the ions migrate to the cathode producing a fast charge collection device. In Figure 11 we show the fast avalanche signal from a current amplifier (VT 120) originated by a 5.9 keV photon.
## 4 Conclusions
We have presented a new gas proportional device, the Micro Wire Detector. This high granularity (100$`\mu `$m pitch) position sensitive detector exhibits excellent performance characteristics: high rate capability (up to 4$`\times `$10<sup>5</sup> Hz/mm<sup>2</sup>), a very low amount of interposed material (0.037% X<sub>0</sub>) and a high gain factor ($`\sim `$10<sup>4</sup>). Although further tests and improvements are needed, we consider it a very promising new kind of micropattern gas device.
## 5 Acknowledgements
One of us (J.C. Labbé) would like to thank A. Placci (CERN/EP/TA1) for his encouragement and assistance in the technical development of the detector.
We thank A. Monfort, M. Sánchez (CERN EP/PES/LT Bureau d’ etudes Electroniques), L. Mastrostefano and D. Berthet (CERN EST/SM Section des Techniques Photomécaniques). Also we acknowledge A. Gandi, responsible of the Section des Techniques Photomécaniques, for his logistic support.
We are also grateful to Luciano Sánchez, from LADICIM (Universidad de Cantabria), for the excellent electron microscope pictures.
Figure Captions
| Figure 1: | Design of the Micro Wire Detector foil. |
| --- | --- |
| Figure 2: | Electron microscope image of the detector foil as seen from the cathode side (a), and the same foil seen from the anode side (b). |
| Figure 3: | Schematic view of the double cathode $`\mu `$WD proposed in the article. |
| Figure 4: | Gain of the 50$`\mu `$m $`\mu `$WD as a function of the cathode voltage in Ar-DME 50-50, obtained from the pulse height spectra of a <sup>55</sup>Fe source (a). Gain of the same prototype as a function of drift field (b). |
| Figure 5: | Electric field configuration for one detector cell in the plane transverse to the anode direction (across the middle of one hole). Lines correspond to equal electric field intensity in kV/cm (V<sub>anode</sub>=0V, V<sub>cathode</sub>=-500V, V<sub>drift</sub>=-3000V). Straight lines indicate the limits of the kapton spacer not intersected by the chosen xz plane. |
| Figure 6: | Comparison of the calibrated gain factor between the 50$`\mu `$m and 25$`\mu `$m prototypes tested in Ar-DME 50–50. The gain of the 50$`\mu `$m prototype in Ar–CO<sub>2</sub> 50–50 is also shown. |
| Figure 7: | Relative values of the peak of the current spectra (from the signal of a VT120 amplifier) versus the photon interaction rate from a Cr anode X-ray tube. |
| Figure 8: | Relative variations of the gain as a function of time, measured every 30s, beginning from a cold start. |
| Figure 9: | Relative variations of the detector gain measured in 2mm steps over a length of 5cm along the strips. |
| Figure 10: | Anode, drift and cathode currents versus: (a) cathode voltage and (b) drift field, under high intensity X-ray irradiation (2mm diameter collimator). |
| Figure 11: | Avalanche signal of a 5.9 keV photon interaction from a <sup>55</sup>Fe source obtained with the VT120 ORTEC amplifier. Note that the horizontal scale is 10ns/division and vertical scale is 20mV/division. |
# Far-UV and deep surveys: bursting dwarfs versus normal galaxies
## 1 Introduction
The apparent excess of the number of galaxies at faint magnitudes in the blue, relative to predictions of non-evolving models, is a long-standing problem of cosmology (Koo & Kron 1992). Pure luminosity evolution (PLE) scenarios fitting the colors of nearby galaxies reproduce the optical and near-infrared observations for an open Universe (Guiderdoni & Rocca-Volmerange 1990; Pozzetti et al. 1996; Fioc 1997) or for a flat, $`\mathrm{\Lambda }`$-dominated, one (Fukugita et al. 1990; Fioc 1997). To solve the problem of the blue excess in a flat, $`\mathrm{\Lambda }=0`$, Universe, various solutions were proposed, such as a strong number density evolution of galaxies via merging (Rocca-Volmerange & Guiderdoni 1990; Broadhurst et al. 1992), a very steep luminosity function (Driver et al. 1994) or a population of fading dwarf galaxies (Babul & Rees 1992; Babul & Ferguson 1996; Campos 1997).
Determining the distance of these galaxies is crucial to understand their nature: they should be relatively high-redshift, intrinsically bright galaxies according to PLE scenarios, and low-redshift, intrinsically faint galaxies in other cases. Despite the very faint magnitudes now reached in the optical by the Hubble Space Telescope (Hubble Deep Field (HDF), Williams et al. 1996) or in the near-infrared (e.g. Moustakas et al. 1997), and the advent of complete, deep spectroscopic surveys (Lilly et al. 1995; Cowie et al. 1996), the question of the blue excess is still open (Ellis 1997).
Far-UV studies can throw new light on this problem. By analyzing bright data at 2000 Å from the balloon experiment FOCA2000, Armand & Milliard (1994) showed that the galaxy number-magnitude counts predicted with classical optical luminosity functions and typical UV-optical colors as a function of morphology are deficient by a factor of 2 relative to the observations. Moreover, the color distribution indicates that this excess is due to very blue galaxies ($`UV-B\lesssim -2`$). A question immediately arises: is the UV excess related to the blue excess? Gronwall & Koo (1995) introduced non-evolving populations of faint very blue galaxies, contributing significantly to faint counts, in order to find the luminosity function (LF) that best fitted the observational constraints. A similar population was proposed by Pozzetti et al. (1996), but with a much smaller contribution, in order to fit the $`B-R`$ color distributions. However, very blue colors require that individual galaxies are bursting and are therefore rapidly evolving. By modelling the spectral evolution of these galaxies taking into consideration post-burst phases, Bouwens & Silk (1996) concluded that the LF adopted by Gronwall & Koo (1995) leads to a strong excess of nearby galaxies in the redshift distribution and that such very blue galaxies (hereafter vBG) may not be the main explanation of the blue excess. Such a conclusion was also drawn by Koo (1990).
In this paper we combine the findings of recent deep surveys with the new UV data (Sect. 2) in order to develop a model of vBG and to determine their luminosity function (Sect. 3). We then discuss their contribution to faint galaxy counts (Sects. 4 and 5), depending on the cosmology.
## 2 Observational evidence for very blue galaxies
In contrast with the so-called ‘normal’ galaxies of the Hubble sequence, which are supposed to form at high redshift with definite star formation timescales as in FRV, bursting galaxies evolve rapidly without clear timescales. In the red post-burst phases they might be indistinguishable from normal slowly evolving galaxies. However, they can be identified during their bluest, bursting phase, allowing us to determine their evolution and number density.
The existence of a population of galaxies much bluer than normal and classified as starbursts has been recently noted at optical wavelengths by Heyl et al. (1997, hereafter HCEB). At fainter magnitudes ($`B=22.5`$–24), the Cowie et al. (1996) deep survey has revealed two populations of blue ($`B-I<1.6`$) galaxies (Figs. 1 and 7).
Normal star forming galaxies, as predicted by standard PLE models, are observed at high redshift ($`z>0.7`$), but another clearly distinct population of blue galaxies is identified at $`0<z<0.3`$. Some of these galaxies are very blue. This second population was previously observed in the brighter survey ($`b_\mathrm{j}<22.5`$) of Colless et al. (1993) and was thought to be the cause of the blue excess, in the absence of high-redshift galaxies. More recently, Roche et al. (1997) observed two kinds of blue galaxies at $`I<24`$: a population with a number density in agreement with PLE models, and a subset of vBG with small angular sizes.
The best constraint on vBG comes from the far-UV (2000 Å) bright counts observed with the balloon experiment FOCA2000 (Armand & Milliard 1994). Using a standard optical LF, the authors obtain a strong deficit of predicted galaxies in the UV counts across the magnitude range $`UV=14`$–18 and argue in favor of a LF biased toward later-type galaxies.
In FRV, we used the LF of Marzke et al. (1994), while here we prefer to adopt the LF of HCEB, as it contains a higher fraction of star-forming galaxies, favoring a better fit to the data. Even with this new LF, the star formation scenarios proposed in FRV predict the same factor of 2 deficit in the UV counts (Fig. 2, dashed line) as that found by Armand & Milliard (1994), despite the rather high normalization of the LF to the bright multispectral galaxy counts of Gardner et al. (1996) and Bertin & Dennefeld (1996) (see the discussion in FRV). Moreover, the predicted $`UV-B`$ color distributions show a clear lack of blue galaxies, notably of those with $`UV-B<-1.5`$ (Fig. 2, dashed lines). A 10 Gyr-old galaxy forming stars at a constant rate would only have $`UV-B\simeq -1.2`$. Although lowering the metallicity of the models may lead to bluer colors (Fioc & Rocca-Volmerange 1998, in preparation), they will still be too red to explain the data. A population of bursting galaxies is clearly needed to explain the UV counts and the Cowie et al. (1996) data.
## 3 Modelling very blue galaxies
### 3.1 Star formation scenario
Galaxies can have very blue colors either if they are very young or if they are undergoing enhanced star formation. Two kinds of models have been advanced by Bouwens & Silk (1996) to maintain a population of vBG over a wide range of redshifts: in the first one, new blue galaxies form continuously, leaving red fading remnants, whereas in the second, star formation occurs recurrently. The second scenario has both observational and theoretical support and is adopted in the following. Smecker-Hane et al. (1996) inferred an episodic star formation rate (SFR) from the analysis of the stellar populations observed in the Carina dwarf spheroidal galaxy. Recurrent star formation may also provide an attractive explanation of the existence of various types of dwarf galaxies, as proposed by Davies & Phillips (1988): blue compact dwarfs may correspond to bursting phases and dwarf irregulars to quiescent ones. According to the stochastic self propagation star formation theory (Gerola et al. 1980), episodic star formation may be a common phenomenon (Comins 1984). Such a behavior should be more frequent in dwarf galaxies than in giant galaxies, since the probability of propagation of star formation increases with galaxy mass (Coziol 1996). The feedback of massive stars on the interstellar medium via supernovae and winds may also lead to oscillations of the SFR (Wiklind 1987; Firmani & Tutukov 1993; Li & Ikeuchi 1988). Finally, in the models where the SFR is controlled by the interactions with the environment (Lacey et al. 1993), the lower frequency of interactions of dwarf galaxies may result in an episodic star formation.
For the sake of simplicity, we assume that all bursting galaxies form stars periodically. In each period, a burst phase with a constant SFR $`\tau `$ and the initial mass function (IMF) of Rana & Basu (1992) used in FRV for normal galaxies is followed by a quiescent phase without star formation. We do not apply any extinction to these galaxies. Because they are dwarfs, we expect them to have a low metallicity and, therefore, to suffer little extinction (Heckman et al. 1996). Moreover, the extinction law (Calzetti et al. 1995), the respective distributions of the dust and stars (Gordon et al. 1997) and the ability of dwarf galaxies to retain the metals expelled by dying stars (e.g. Dekel & Silk 1986), as well as the dilution factor of these ejecta in the interstellar medium, are still under debate. Neglecting the extinction is clearly a critical assumption, but also a conservative one, since any extinction would redden the spectrum and force us to adopt more extreme star formation parameters to recover very blue colors. For the same reason, we do not consider the hypothesis of a low upper mass cut-off of the IMF, which has been suggested by Doyon et al. (1992), but is controversial (Lançon & Rocca-Volmerange 1996, Vacca et al. 1995). Conversely, an IMF enriched in massive stars, either because of a flatter slope or because of a high lower mass cut-off, as advocated for some starbursts (Puxley 1991, Rieke et al. 1993), is more attractive, since burst phases will then be bluer. Recent studies (Calzetti 1997, Scalo 1997, Massey & Armandroff 1995) seem actually to favor a rather standard IMF, though maybe slightly flatter at high mass than the one used in FRV. Such an IMF, e.g. Salpeter (1955), may indeed provide a better fit to the UV data (Fig. 2, dotted line) than our standard IMF (Rana & Basu 1992, solid line). The Salpeter IMF, however, gives very similar results for other galaxy counts and even slightly overpredicts the $`F300W-F450W`$ color distribution of HDF galaxies in the blue. For this reason, and also to preserve the consistency with that of normal galaxies, we prefer in the following to adopt the Rana & Basu IMF for all galaxies.
A good agreement with observational constraints is obtained with 100 Myr-long burst phases occurring every 1 Gyr. Surprisingly, similar values were adopted by Olson & Peña (1976) to fit the colors of the Small Magellanic Cloud. One should however be aware that these values are very uncertain, depending particularly on the metallicity of stars, and are simply mean values aimed at reproducing the $`UVB`$ color distribution.
### 3.2 Luminosity function
Because bursting galaxies evolve very rapidly and have thus a large variety of luminosities and colors at every redshift, we distribute them in subtypes as a function of the time elapsed since the beginning of the last burst. The LF of each subtype is derived from the SFR function $`\mathrm{\Phi }(\tau )`$ that we parameterize, by analogy with the LF of normal galaxies, as
$$\mathrm{\Phi }(\tau )\mathrm{d}\tau =\varphi _\tau ^{}\mathrm{exp}\left(\frac{\tau }{\tau ^{}}\right)\left(\frac{\tau }{\tau ^{}}\right)^{\alpha _\tau }\frac{\mathrm{d}\tau }{\tau ^{}}.$$
The lack of vBG at $`z0.4`$ in the Cowie et al. (1996) redshift distribution (Fig. 7) strongly constrains the LF. This deficit may be interpreted in two ways. Either bursting galaxies formed only at low redshifts ($`z<0.4`$), or the lack of vBG may correspond to the exponential cut-off of the adopted Schechter function. Physical arguments for low redshifts of galaxy formation are weak. Scenarios invoking a large population of blue dwarf galaxies, as proposed by Babul & Rees (1992), generally predict a higher redshift of formation ($`z1`$). Adopting the second explanation, we get $`M_{\mathrm{b}_\mathrm{j}}^{}17`$ (for $`H_0=100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$) for the overall LF of bursting galaxies at $`z=0`$ and can constrain the other parameters. A steep LF extending to very faint magnitudes leads to a large local ($`z<0.1`$) excess in the redshift distribution (Bouwens & Silk 1996; Driver & Phillips 1996). A steep slope ($`\alpha <1.8`$) is only necessary to reconcile predicted number counts with observations in a flat Universe. A shallower slope is possible in other cosmologies. In the following, we adopt $`\alpha _\tau =1.3`$ for bursting galaxies. Note that this translates into a steeper slope of the LF ($`\alpha =1.73`$), as already noticed by Hogg & Phinney (1997) for bursts. The LF is normalized to agree with the UV counts and the Cowie et al. (1996) redshift distribution. Table 1 gives the parameters of the SFR function and those of a Schechter fit to the corresponding luminosity function from $`b_\mathrm{j}=19`$ to $`13`$.
## 4 Galaxy counts
Predictions of galaxy counts, with the bursting population added to normal galaxies, have been computed in three cosmologies: an open Universe ($`\mathrm{\Omega }_0=0.1,\mathrm{\Lambda }_0=0,H_0=65\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$), a flat Universe ($`\mathrm{\Omega }_0=1,\mathrm{\Lambda }_0=0,H_0=50\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$), and a flat, $`\mathrm{\Lambda }`$-dominated, Universe ($`\mathrm{\Omega }_0=0.3,\mathrm{\Lambda }_0=0.7,H_0=70\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$). The value of $`H_0`$ is chosen in each cosmology in order to obtain a 13 Gyr-old Universe.
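The "13 Gyr" normalization is readily checked by integrating the Friedmann equation; the snippet below does so for the three cosmologies quoted above:

```python
import numpy as np
from scipy.integrate import quad

# Age of the Universe t0 = (1/H0) * Int dz / [(1+z) E(z)].
def age_gyr(omega_m, omega_l, h0):
    omega_k = 1.0 - omega_m - omega_l
    E = lambda z: np.sqrt(omega_m * (1 + z) ** 3 + omega_k * (1 + z) ** 2 + omega_l)
    t, _ = quad(lambda z: 1.0 / ((1 + z) * E(z)), 0.0, np.inf)
    return t * 977.8 / h0            # 1/H0 in Gyr for H0 in km/s/Mpc

for om, ol, h0 in [(0.1, 0.0, 65.0), (1.0, 0.0, 50.0), (0.3, 0.7, 70.0)]:
    print(f"Omega0={om}, Lambda0={ol}, H0={h0}: t0 = {age_gyr(om, ol, h0):.1f} Gyr")
```

All three cases return ages close to 13 Gyr (13.0 for the flat, $`\mathrm{\Lambda }=0`$, case, slightly above for the other two).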
The spectral evolution of normal galaxies is modelled as in FRV (a constant SFR is assumed for Sd-Im galaxies) and takes into account the stellar emission, the nebular emission and the extinction. A star formation scenario fitting the observed optical to near-infrared spectral energy distribution of nearby templates is adopted for each Hubble type, with a star formation timescale increasing from the early to the late types. The age of each scenario determines the redshift of formation ($`z_{\mathrm{for}}\simeq 5`$–10 for spheroidals and early spirals, and $`z_{\mathrm{for}}\simeq 2`$ for late spirals).
Especially important for predicting galaxy counts (e.g. Pozzetti et al. 1998) is the fact that all the light shortward of the Lyman break is absorbed, either by the gas or by the dust surrounding star-forming regions, in agreement with the observations of the Lyman continuum by Leitherer et al. (1995). For high-$`z`$ galaxies, this feature is redshifted into the optical and produces UV-dropouts (see Fig. 3) similar to those observed in the HDF (Steidel et al. 1996, Madau et al. 1996). Although such UV-dropouts are also caused by the intergalactic medium (Madau 1995), the predicted counts will essentially be the same. We neglect the depression of the spectrum between 912 Å and 1215 Å due to the blanketing opacity of the Lyman series lines. At $`z<3`$, the absorption shortward of 912 Å is by far more important, as evidenced by figure 3 of Madau et al. (1996). Only $`\sim `$20% of the galaxies are at higher redshift at $`b_\mathrm{j}=28`$, according to our predictions, and neglecting the additional absorption in the \[912 Å, 1215 Å\] wavelength range should not change the predictions significantly.
To ensure consistency with HCEB’s LF, the LF of ‘normal’ galaxies is constructed in the following way: bursting galaxies are classified into the three broad spectral types defined by HCEB, according to their \[O ii\] equivalent width. For each class, the LF of ‘normal’ galaxies is then added to that of bursting galaxies and its parameters (assuming $`\alpha =-1`$) are fitted to the local step-wise LF of HCEB. We prefer to use the step-wise LF, as the Schechter parameterization of HCEB as a function of redshift poorly fits the data near $`M^{*}`$ at $`z\simeq 0`$. The characteristics of the finally adopted LF are given in Table 1.
Galaxy counts, color and redshift distributions at bright UV magnitudes are plotted in Figs. 2 and 4 and compared to the data of Armand & Milliard (1994) and the preliminary results of Treyer et al. (1998). An open Universe is adopted for the bright counts, which anyway depend weakly on the cosmology. Though faint in the blue, bursting galaxies contribute strongly to the UV bright counts thanks to their blue $`UV-B`$ colors, improving significantly the fits to the UV number-magnitude counts and color distributions (Fig. 2, solid lines). The agreement with the redshift distribution of Treyer et al. (1998) is also reasonably good (Fig. 4).
The geometry of the Universe becomes important at fainter magnitudes. Galaxy counts in $`b_\mathrm{j}`$, $`U`$, $`F300W`$, $`I`$ and $`K`$, and the redshift and color distributions, are plotted in Figs. 5, 6, 7, 8 and 9. The contribution of bursting galaxies to counts at longer wavelengths is much smaller than in the UV. They represent less than 10 per cent of the total number of galaxies at $`B=22.5`$–24 in the Cowie et al. (1996) redshift survey and cannot be the main explanation of the excess of faint blue galaxies observed over the no-evolution predictions. High-redshift, intrinsically bright galaxies forming stars at a higher rate in the past are the most likely explanation. They correspond to the $`z>1`$ tail of the Cowie et al. (1996) redshift distribution and are well modelled with PLE scenarios (Fig. 7). In an open or a flat, $`\mathrm{\Lambda }`$-dominated, Universe, PLE scenarios reproduce the $`b_\mathrm{j}`$, $`U`$, $`I`$ and $`K`$ counts (Fig. 5), assuming a normalization of the LF to the bright counts of Gardner (1996) as discussed in FRV. They also fit the redshift and color distributions (Figs. 6 to 9).
The agreement with the blue counts in the Hubble Deep Field (Williams et al. 1996) is notably satisfying. Though a small deficit may be observed in the $`F300W`$ band (3000 Å), the $`F300W-F450W`$ (3000 Å–4500 Å) color distribution is well reproduced (Fig. 9).
The fraction of vBG at these faint magnitudes is small; they are therefore not the main reason for the agreement with HDF data. Since the volume element at high redshift is lower in a flat, $`\mathrm{\Lambda }=0`$, Universe, the contribution of bursting galaxies relative to normal galaxies is higher, but, as modelled here, they are unable to reproduce the faint counts in any of the four bands (dotted and dot-dashed lines, Fig. 5).
## 5 The angular correlation function
The angular correlation function might be a useful constraint on bursting galaxies, as it is directly related to the redshift distribution. In a $`B_\mathrm{J}=20`$–23.5 sample, Landy et al. (1996) recently obtained an unexpected increase of the amplitude ($`A_\mathrm{w}`$) of the angular correlation function for galaxy colors $`U-R_\mathrm{F}<-0.5`$, and suggested that this might be due to a population of vBG located at $`z<0.4`$. We have computed $`A_\mathrm{w}`$ from our redshift distributions, assuming a classical power law ($`\xi (r,z)=(r_0/r)^\gamma (1+z)^{-(3+ϵ)}`$) for the local spatial correlation function and no evolution of the intrinsic clustering in proper coordinates ($`ϵ=0`$). A slope $`\gamma =1.8`$ and a single correlation length $`r_0=5.4\,h^{-1}\,\mathrm{Mpc}`$ (see Peebles 1993) have been adopted for all types. The increased correlation in the blue arises naturally from our computations (Fig. 10) and is due to the population of bursting galaxies in our model. The interval of magnitudes, the faint $`M^{*}`$, and the color criterion conspire to select galaxies in a small range of redshift. In spite of the simplicity of our computation of $`A_\mathrm{w}`$, the trend we obtain is very satisfying. Increasing the complexity of the model might afford a better fit to the $`A_\mathrm{w}`$–color relation, but at the expense of a higher number of parameters.
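A minimal sketch of such a computation is given below. It uses one common small-angle Limber form of $`A_\mathrm{w}`$ for a power-law $`\xi `$ with the clustering evolution above; the spatially flat Einstein–de Sitter geometry and the toy gaussian $`N(z)`$ are assumptions of ours for illustration, not the distributions actually used in the paper:

```python
import numpy as np
from math import gamma as Gamma

# Small-angle Limber amplitude: w(theta) = A_w * theta^(1-gamma), for
# xi(r,z) = (r0/r)^gamma (1+z)^-(3+eps) in proper coordinates, i.e.
# (1+z)^(gamma-3-eps) in comoving ones.
c, H0 = 2.998e5, 100.0                  # km/s, km/s/Mpc (h = 1)
gam, eps, r0 = 1.8, 0.0, 5.4            # slope, clustering evolution, h^-1 Mpc

z = np.linspace(1e-3, 2.0, 2000)
Hz = H0 * (1 + z) ** 1.5                # Einstein-de Sitter expansion rate
x = np.cumsum(c / Hz) * (z[1] - z[0])   # comoving distance (crude quadrature)

Nz = np.exp(-0.5 * ((z - 0.25) / 0.1) ** 2)   # toy N(z) peaked at z ~ 0.25

pref = np.sqrt(np.pi) * Gamma((gam - 1) / 2) / Gamma(gam / 2)
num = np.trapz(Nz**2 * (1 + z) ** (gam - 3 - eps) * x ** (1 - gam) * Hz / c, z)
A_w = pref * r0**gam * num / np.trapz(Nz, z) ** 2
print(f"A_w = {A_w:.2e} (theta in radians)")
```

A narrower $`N(z)`$, as produced by the low-redshift bursting population, raises $`\int N^2`$ relative to $`(\int N)^2`$ and hence $`A_\mathrm{w}`$, which is the trend discussed above.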
## 6 Conclusion
We have modelled a vBG population prominent in the bright UV counts by assuming that these are galaxies undergoing cycles of star-formation activity. A cycling star formation rate leads to very blue colors in a more physical way than the assumption of a population of unevolving galaxies. Our modelling fits the 2000 Å bright counts (Armand & Milliard 1994), the redshift survey of Cowie et al. (1996) and the angular correlation function of Landy et al. (1996). The nature of vBG is poorly constrained by the data studied here, but we tentatively identify them, from their typical luminosities and $`\mathrm{H}\alpha `$ equivalent widths ($`200`$ Å), with H ii galaxies (Coziol 1996). Such a conclusion is reinforced by observations of vBG at $`z0.1`$–$`0.35`$ with low luminosities, but high surface brightnesses and intense emission lines (Koo et al. 1994). Multispectral observations of individual galaxies, from the UV to the infrared (e.g. Almoznino & Brosch 1998), should help to determine the possible contribution of the underlying population of old stars, and hence the star formation history.
Very blue galaxies, as modelled in this paper, account for only a small fraction of the galaxies predicted at faint magnitudes in the visible and are not the main reason for the excess of faint blue galaxies, although they may cause some confusion in the interpretation of the faint surveys. In an open or a flat, $`\mathrm{\Lambda }`$-dominated, Universe, the population of normal, high-redshift star-forming galaxies, even with a nearly flat LF, reproduces fairly well the counts down to the faintest magnitudes observed by the Hubble Space Telescope.
As is now well established, this population is unable to explain the excess of faint blue galaxies in a flat Universe (e.g. Metcalfe et al. 1996). Recent morphological surveys have revealed, at faint magnitudes, a higher number of irregular/peculiar galaxies (e.g. Glazebrook et al. 1995; Odewahn et al. 1996), with smaller sizes, than expected from PLE models. Although these results depend sensitively on the evolution of the apparent UV morphology (Abraham et al. 1996), which may make galaxies look more patchy (O’Connell 1997) and even split them into several pieces at high redshift (Colley et al. 1996), they seem to favor bursting dwarfs as a possible explanation of the blue excess in a flat Universe. Simply changing the local luminosity function may, however, not be the solution, since a significant steepening of the slope would lead to an excess of galaxies at very low redshift, whereas a much higher normalization would be in disagreement with the UV counts. An alternative is to produce stronger bursts at high redshift. This would brighten the galaxies, but would also raise the number of old stars in them and make it more difficult to obtain very blue colors at low redshift. If the blue excess is really due to bursting galaxies, the hypothesis of the conservation of the number of galaxies must be relaxed: either new galaxies are continually formed, or galaxies have merged. In the first case, many old red remnants must have been produced and might be detected by future near-infrared surveys, while in the second case, many merging galaxies should be observed in the far-infrared by the Infrared Space Observatory.
###### Acknowledgements.
We are particularly grateful to Malcolm Bremer for carefully reading the manuscript. We acknowledge fruitful discussions with Bruno Milliard and José Donas and we thank them for providing us with details on the FOCA experiment. We also thank Joe Silk and Gus Evrard for their remarks and recommendations regarding the continuation of this work. M. F. acknowledges partial support from the National Research Council through the Resident Research Associateship Program.