| id | source | format | text |
|---|---|---|---|
no-problem/9901/astro-ph9901073.html
|
ar5iv
|
text
|
# LiBeB, Cosmic Rays and Gamma-Ray Line Astronomy<sup>1</sup>
<sup>1</sup>Conference was held in Paris, France in December 1998. Proceedings will be edited by R. Ramaty, E. Vangioni-Flam, M. Cassé, & K. Olive and published in the ASP Conference Series
Conference Highlights
Reuven Ramaty<sup>1</sup>, Elisabeth Vangioni-Flam<sup>2</sup>, Michel Cassé <sup>2</sup> and Keith Olive<sup>3</sup>
<sup>1</sup>NASA Goddard Space Flight Center
<sup>2</sup>Institut d’Astrophysique, Paris
<sup>3</sup>University of Minnesota
The light elements Li, Be and B (LiBeB) play a unique role in astrophysics. The Li abundance of old halo stars is a key diagnostic of Big Bang nucleosynthesis (BBN), along with <sup>2</sup>H and <sup>4</sup>He. The essentially constant Li abundance (Li/H $`\approx 2\times 10^{-10}`$, Spite plateau) as a function of metallicity \[Fe/H\] for low metallicity stars (\[Fe/H\]$`<`$-1) is believed to be the primordial abundance resulting from BBN. Here \[Fe/H\] $`\equiv`$ log(Fe/H) $`-`$ log(Fe/H)$`_{\odot}`$, where Fe/H is the Fe abundance by number relative to H and (Fe/H)$`_{\odot}`$ is the solar system value.
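As a purely illustrative numerical aside, the bracket notation works as follows; the solar iron-to-hydrogen ratio used below is an approximate, assumed value, not one quoted in this summary.

```python
import math

def fe_h_bracket(fe_h_star, fe_h_sun=3.2e-5):
    """[Fe/H] = log10(Fe/H)_star - log10(Fe/H)_sun; fe_h_sun is an
    approximate solar iron-to-hydrogen number ratio, assumed here
    only for illustration."""
    return math.log10(fe_h_star) - math.log10(fe_h_sun)

# A star with 1/100 of the solar iron abundance sits at [Fe/H] = -2,
# i.e. on the metal-poor side of the Spite plateau ([Fe/H] < -1).
print(fe_h_bracket(3.2e-7))   # -> -2.0
```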
The rare and fragile LiBeB nuclei are not generated in the normal course of stellar nucleosynthesis and are, in fact, destroyed in stellar interiors, a characteristic that is reflected in their very low abundances. Cosmic-ray interactions contribute to their production, but only <sup>6</sup>Li, <sup>9</sup>Be and <sup>10</sup>B are entirely cosmic-ray produced. Neutrino induced spallation, <sup>12</sup>C($`\nu ,\nu ^{\prime }`$p)<sup>11</sup>B, appears to play an important role in the origin of B by producing the excess <sup>11</sup>B needed to account for the B isotopic ratio in meteorites, which exceeds the predictions of all viable cosmic-ray scenarios. While reactions on metals (primarily C and O) contribute to all of the LiBeB nuclei, reactions of fast $`\alpha `$ particles on ambient He produce both <sup>7</sup>Li and <sup>6</sup>Li, and are the dominant source of the latter. Nucleosynthesis in a variety of other Galactic objects, including Type II supernovae, novae and giant stars, produces the bulk of the <sup>7</sup>Li at epochs when \[Fe/H\] exceeds about -1.
Traditionally, the cosmic-ray role in LiBeB evolution was investigated by assuming that at all epochs of Galactic evolution cosmic rays with energy spectra similar to those observed in the current epoch are accelerated out of the average interstellar medium and interact in the interstellar medium (ISM), mostly with C, N and O. This GCR paradigm, however, appears to be in conflict with recent measurements of Be and B abundances in low metallicity halo stars, achieved with the 10 meter Keck telescope and the Hubble Space Telescope. The GCR paradigm predicts a quadratic correlation of Be and B vs. Fe, as opposed to the data which show a quasi-linear correlation. As a consequence, the paradigm has been modified (Cassé et al. 1995, Nature, 373, 318; Ramaty et al. 1996, ApJ, 456, 525) by augmenting the cosmic rays accelerated out of the average ISM with a metal enriched component confined predominantly to low energies ($`\lesssim`$100 MeV/nucleon) and thought to be accelerated out of the winds of Wolf-Rayet stars and the ejecta of supernovae. More recently, it was suggested (Lingenfelter et al. 1998, ApJ, 500, L153; Higdon et al. 1998, ApJ, 509, L33) that the cosmic rays themselves are accelerated mostly out of supernova ejecta rather than the average ISM, implying that the source material of the cosmic rays would be metal enriched at all epochs of Galactic evolution. Both of these models now converge towards acceleration by shocks in superbubbles, but they differ in the employed particle energy spectra, a distinction that could be tested by nuclear gamma-ray line observations. However, the effect is only marginally detectable by present generation gamma-ray telescopes.
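The scaling behind the quadratic versus quasi-linear predictions can be sketched in one line (a schematic argument, suppressing all details of Galactic chemical evolution): if the Be production rate tracks the CNO content of the ISM, which itself grows roughly in step with Fe, then

$$\frac{d(\mathrm{Be/H})}{dt}\propto \left(\frac{\mathrm{O}}{\mathrm{H}}\right)_{\mathrm{ISM}}\propto \frac{\mathrm{Fe}}{\mathrm{H}}\Rightarrow \mathrm{Be}\propto \mathrm{Fe}^2\quad \mathrm{(secondary\ production)},$$

whereas a metal-enriched, low energy cosmic-ray component makes the production rate roughly independent of the ambient metallicity,

$$\frac{d(\mathrm{Be/H})}{dt}\approx \mathrm{const}\Rightarrow \mathrm{Be}\propto \mathrm{Fe}\quad \mathrm{(primary\ production)},$$

assuming Fe/H grows roughly linearly with time.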
Light element research thus impacts several important astrophysical problems, specifically BBN, the origin of cosmic rays, Galactic chemical evolution, and gamma-ray astronomy. These were then the topics of the Conference and they will be covered in detail in the upcoming Proceedings. Here we summarize some of the highlights.
Critical considerations of the flatness of the Spite plateau were presented by Paolo Molaro. These are essential for establishing the primordial Li abundance. An important issue in this context is the amount of Li destruction (if any) in the observed stars. Marc Pinsonneault and Sylvie Vauclair addressed this problem. Another avenue for establishing the primordial nature of Li, in connection with binaries, was discussed by Francois Spite. The relationship of the light element data to BBN was reviewed by Keith Olive.
The <sup>6</sup>Li observations in low metallicity stars were reviewed by Lewis Hobbs. There are now good indications that <sup>6</sup>Li is present in such stars, with an abundance of a few percent relative to <sup>7</sup>Li and a factor of several tens relative to Be. The abundance ratio relative to Be, compared with the expected ratio from the various cosmic-ray scenarios, implies that <sup>6</sup>Li could not have been severely depleted in the stars where it is detected. Consequently, since <sup>6</sup>Li is more fragile than <sup>7</sup>Li, the <sup>7</sup>Li depletion should also be small. The abundance ratio relative to <sup>7</sup>Li shows that cosmic-ray interactions could not have made a significant contribution to the Li/H of the Spite plateau. All of these reinforce the finding that the plateau value indeed represents the correct primordial abundance. <sup>6</sup>Li so far has been detected in only two stars. As its production history could be quite different from that of Be (being very efficiently produced in interactions involving only He, unlike Be which requires the spallation of metals), future observations over a broad range of metallicities could lead to interesting surprises.
The very important new data on O abundances in low metallicity stars were presented by Ramon Garcia Lopez (see Israelian et al. 1998, ApJ, 507, 805). Contrary to previous data, the new observations, if confirmed, show that O/Fe increases monotonically with decreasing metallicity, reaching values that exceed the solar ratio by a factor of $`\sim`$4 at \[Fe/H\]$`=-1.5`$ and by a full order of magnitude at \[Fe/H\]$`=-3`$. Some of this increase is due to the absence of Type Ia supernovae in the early Galaxy. The additional increase is not well understood; it could be due to low Fe yields relative to O in the first generation of core collapse supernovae, or possibly due to mixing effects since, as pointed out by Audouze and Silk (1995, ApJ, 451, L49), the ISM of the early Galaxy, being metal enriched by only a small number of core collapse supernovae, could be quite inhomogeneous. In any case, the enhanced early Galactic O abundance makes cosmic-ray acceleration out of the average ISM more efficient. This effect, coupled with the possible lower Fe yield per supernova, allowed Brian Fields and Keith Olive to show that cosmic-ray acceleration out of the average ISM, hitherto believed untenable, could be viable. Their model also implies a decrease of <sup>6</sup>Li/Be as a function of increasing metallicity, a result which appears to be consistent with the fact that the early Galactic ratio mentioned above probably exceeds the meteoritic ratio at solar metallicity.
A critical discussion of the NLTE effects, which are essential for the abundance determinations, particularly that of B, was given by Dan Kiselman. Douglas Duncan reviewed the B observations and Dieter Hartmann discussed the neutrino induced processes in core collapse supernovae, which in particular lead to the production of <sup>11</sup>B. As already mentioned, this process provides a plausible explanation for the excess <sup>11</sup>B measured in meteorites. Stellar evolution, another very important ingredient necessary for understanding the implications of the light element data, was discussed by Marc Pinsonneault. The status of Galactic nuclear gamma-ray line observations, showing that the previously reported observations of Orion are no longer valid, was reviewed by Hans Bloemen. In the absence of nuclear gamma-ray data, the detection of broad soft X-ray lines (particularly the lines of O just below 1 keV) resulting from electron capture and excitation on fast ($`\sim`$1 MeV/nucleon) ions could provide independent information on the existence of low energy cosmic rays. This topic was discussed by Vincent Tatischeff. The capabilities of the gamma-ray imaging and spectroscopic mission INTEGRAL, to be launched soon, were discussed by Volker Schönfelder and Bertrand Cordier.
Current epoch cosmic-ray observations of the electron capture radioisotope <sup>59</sup>Ni and its decay product <sup>59</sup>Co, with an instrument on the currently active ACE mission, were presented by Robert Binns. <sup>59</sup>Ni decays by electron capture with a half life of $`7.6\times 10^4`$ years. However, the decay is suppressed if the acceleration time scale is shorter than the lifetime because the atom is stripped as it is accelerated (Cassé and Soutoul 1975, ApJ, 200, L75). The fact that much more <sup>59</sup>Co than <sup>59</sup>Ni is observed suggests a delay ($`\gtrsim`$10<sup>5</sup> years) between nucleosynthesis and acceleration. This makes it unlikely that supernovae accelerate their own ejecta, but still allows cosmic-ray acceleration from metal enriched superbubbles, as in the Higdon et al. model mentioned above.
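For orientation, a one-line estimate (using only the half life quoted above and simple exponential decay, i.e. ignoring the stripping that freezes the decay once acceleration starts) shows how the delay translates into a surviving <sup>59</sup>Ni fraction:

```python
import math

HALF_LIFE_NI59 = 7.6e4  # years, electron-capture half life quoted above

def surviving_fraction(delay_years):
    """Fraction of 59Ni left undecayed after a given delay between
    nucleosynthesis and acceleration (pure exponential decay)."""
    return math.exp(-math.log(2.0) * delay_years / HALF_LIFE_NI59)

for delay in (1e4, 1e5, 1e6):
    print(f"{delay:.0e} yr -> {surviving_fraction(delay):.3f}")
# 1e+04 yr -> 0.913, 1e+05 yr -> 0.402, 1e+06 yr -> 0.000 (about 1e-4)
```

A <sup>59</sup>Co dominated composition therefore points to a delay of at least roughly a half life or more, consistent with the $`\gtrsim`$10<sup>5</sup> year estimate above.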
Several theoretical papers on cosmic-ray origin and acceleration mechanisms were presented. Jean-Paul Meyer and Donald Ellison reviewed their previously published model (Meyer et al. 1997, ApJ, 487, 182; Ellison et al. 1997, ApJ, 487, 197) in which the current epoch cosmic rays originate from an average ISM of solar composition and interstellar dust plays an important role in determining the abundances. They also discussed the shortcomings of the recently proposed model (Lingenfelter et al., 1998, ApJ, 500, L153) in which each supernova accelerates its own freshly produced refractory metals. Maurice Shapiro reviewed his previously proposed model based on the preacceleration of the cosmic rays by coronal mass ejection driven shocks on low mass, cool stars. Acceleration in superbubbles was discussed by Andrei Bykov and Etienne Parizot, who emphasized that the conditions in the superbubbles would lead to cosmic rays with hard energy spectra at low energies, up to a cutoff at an energy which is still nonrelativistic. These are the low energy cosmic rays which have been postulated to produce the bulk of the Be at low metallicities (see Vangioni-Flam et al. 1996, A&A, 468, 199). On the other hand, as pointed out in the publication of Higdon et al., since these giant superbubbles are thought to fill up a large fraction of the ISM, they are the most likely site for the acceleration of the cosmic rays, which of course show no cutoff up to very high ultrarelativistic energies. Thus, it is still not clear whether the postulated Galaxy wide low energy cosmic-ray component exists, a question that should be resolved by future gamma-ray line observations.
In summary, LiBeB research indeed spans a broad range of interesting problems that will be covered in the planned Proceedings.
|
no-problem/9901/cond-mat9901192.html
|
ar5iv
|
text
|
# A Solvable Model of Interacting Fermions in Two Dimensions
B. Sriram Shastry (E-mail address: bss@physics.iisc.ernet.in)
Department of Physics,
Indian Institute of Science, Bangalore 560012, India
Diptiman Sen (E-mail address: diptiman@cts.iisc.ernet.in)
Centre for Theoretical Studies,
Indian Institute of Science, Bangalore 560012, India
## Abstract
We introduce and study an exactly solvable model of several species of fermions in which particles interact pairwise through a mutual magnetic field; the interaction operates only between particles belonging to different species. After a unitary transformation, the model reduces to one in which each particle sees a magnetic field which depends on the total numbers of particles of all the other species; this may be viewed as the mean-field model for a class of anyonic theories. Our model is invariant under charge conjugation $`C`$ and the product $`PT`$ (parity and time reversal). For the special case of two species, we examine various properties of this system, such as the Hall conductivity, the wave function overlap arising from the transfer of one particle from one species to another, and the one-particle off-diagonal density matrix. Our model is a generalization of a recently introduced solvable model in one dimension.
PACS number: 71.10.Pm, 71.27.+a
Exactly solvable models of interacting particles have often been very useful in illustrating some general concepts in many-body physics. While there is a large variety of such models available in one dimension, many of which fall into the class of Tomonaga-Luttinger liquids , there are few models known in two dimensions which are completely solvable. In this paper, we introduce and study a model of several species of fermions which interact with each other through a magnetic field term which depends on the coordinates of pairs of particles belonging to two different species. The model can be solved by a unitary transformation which reduces it to a model of fermions in a magnetic field which depends on the total numbers of fermions belonging to the other species. Our model is a direct generalization of the recent reinterpretation of the well-known model of Luttinger in one dimension . The one-dimensional model also has pairwise “gauge” interactions depending on the coordinates of the particles; the model is exactly solvable because the interactions can be unitarily gauged away at the cost of modifying the boundary conditions in a non-trivial way. As we will see, in our two-dimensional model the interactions cannot be gauged away in the bulk of the system; the unitary transformation leaves behind a static magnetic field.
Let us consider $`\nu `$ species of fermions in two dimensions (say, the $`\widehat{x}`$–$`\widehat{y}`$ plane), with the charge and number of fermions of type $`\alpha `$ being denoted by $`q_\alpha `$ and $`N_\alpha `$ respectively. The coordinates of the particles will be denoted by $`\vec{r}_{i,\alpha }`$, where $`1\le i\le N_\alpha `$ and $`1\le \alpha \le \nu `$. We will consider the Hamiltonian
$$\mathcal{H}=\sum _{i,\alpha }\frac{1}{2m_\alpha }\left(\vec{p}_{i,\alpha }-\frac{q_\alpha }{c}\vec{A}_{i,\alpha }-\frac{q_\alpha }{c}\vec{\mathcal{A}}_{i,\alpha }\right)^2, \qquad (1)$$
$$\vec{A}_{i,\alpha }=\frac{1}{2}B_0\widehat{z}\times \vec{r}_{i,\alpha }, \qquad (2)$$
where $`c`$ is the velocity of light. $`B_0\widehat{z}`$ is an external magnetic field pointing in a direction perpendicular to the two-dimensional plane; we have chosen the symmetric gauge for its vector potential $`\vec{A}_{i,\alpha }`$ in order to explicitly maintain invariance under rotations of the plane. The other vector potential $`\vec{\mathcal{A}}_{i,\alpha }`$ arises from two-body interactions; it will be taken to have the following form which is natural in two dimensions,
$$\vec{\mathcal{A}}_{i,\alpha }=\eta _\alpha \sum _{j,\beta }\xi _{\alpha \beta }\widehat{z}\times (\vec{r}_{i,\alpha }-\vec{r}_{j,\beta }),$$
(3)
where $`\eta _\alpha `$ and $`\xi _{\alpha \beta }`$ are some constants to be fixed below. Note that the Hamiltonian (2) is invariant under translations in the plane.
We may now perform a unitary transformation on the Hamiltonian of the form
$$\tilde{\mathcal{H}}=U\mathcal{H}U^{-1}, \qquad (4)$$
$$U=\mathrm{exp}\left[\frac{iq}{\hbar c}\sum _{\alpha <\beta }\sum _{i,j}\xi _{\alpha \beta }\widehat{z}\cdot \vec{r}_{i,\alpha }\times \vec{r}_{j,\beta }\right]. \qquad (5)$$
where $`q`$ is the charge of an electron. (We note that the phase factor in $`U`$ only depends on the total coordinates $`\vec{R}_\alpha =\sum _i\vec{r}_{i,\alpha }`$ of the various species of fermions). This gives the transformed Hamiltonian
$$\tilde{\mathcal{H}}=\sum _{i,\alpha }\frac{1}{2m_\alpha }\left(\vec{p}_{i,\alpha }-\frac{q_\alpha }{c}\vec{A}_{i,\alpha }-\frac{q_\alpha }{c}\vec{a}_{i,\alpha }\right)^2, \qquad (6)$$
$$\vec{a}_{i,\alpha }=\frac{1}{2}\left(\eta _\alpha \sum _{\beta \ne \alpha }\xi _{\alpha \beta }N_\beta \right)\widehat{z}\times \vec{r}_{i,\alpha }, \qquad (7)$$
provided that
$$\xi _{\alpha \beta }=-\xi _{\beta \alpha }, \qquad (8)$$
$$\mathrm{and}\quad q_\alpha \eta _\alpha =q\quad \mathrm{for\ all}\ \alpha . \qquad (9)$$
The antisymmetry of $`\xi _{\alpha \beta }`$ implies that the two-particle magnetic interaction can only act between particles belonging to two different species.
It is interesting to consider the effects of some discrete symmetries such as time reversal ($`T`$), parity ($`P`$) and charge conjugation ($`C`$). Let us first set the external magnetic field $`B_0=0`$. Under $`T`$, the wave functions and factors of $`i`$ are complex conjugated (thus, the momentum operators $`\vec{p}_{i,\alpha }\to -\vec{p}_{i,\alpha }`$) and the time coordinate $`t\to -t`$; the space coordinates $`x,y`$ and the various parameters $`q_\alpha ,\eta _\alpha `$ and $`\xi _{\alpha \beta }`$ remain unchanged. Under $`P`$, one of the space coordinates, say, $`x\to -x`$, while $`y`$, $`t`$ and all the parameters remain unchanged. We therefore see that the model is not invariant under $`P`$ and $`T`$ separately, but it is invariant under the combined operation $`PT`$. Under charge conjugation, we demand that $`q_\alpha \to -q_\alpha `$ and $`\xi _{\alpha \beta }\to -\xi _{\alpha \beta }`$, while $`\eta _\alpha `$ and the space-time coordinates remain unchanged; thus the model is invariant under $`C`$ and therefore under $`CPT`$. Finally, if the external magnetic field $`B_0`$ is nonzero, the model is again invariant under $`C`$ and $`PT`$, but not under $`P`$ and $`T`$ separately; this is because a magnetic field (which must be produced by some external currents) changes sign under $`C`$, $`P`$ and $`T`$ separately.
It may be useful to point out here that our model has some resemblance to the mean field theory of several species of anyons. In the usual theories of anyons, the wave function is assumed to pick up a phase $`\theta _{ij}`$ whenever particle $`i`$ is taken in an anticlockwise loop around particle $`j`$, no matter what the size and shape of the loop is. This is often modeled by treating each particle as a point-like composite of charge and magnetic flux; when one particle encircles another, the wave function picks up an Aharonov-Bohm phase. In understanding the many-body properties of such a system, a fruitful approach has been to begin with a mean field theory in which the magnetic flux of each anyon is smeared out over the entire plane. Thus each particle sees a magnetic field proportional to the average density of particles, which is similar to our situation. Of course, the analysis of anyons then goes beyond mean field theory to study the fluctuations about the average magnetic field, while our simplified model has no fluctuations. It is worth remarking that our model has no counterpart for the most popular anyon model which has only one species; we need a minimum of two species.
To continue, the total magnetic field seen by a particle of type $`\alpha `$ in our model is given by $`B_\alpha \widehat{z}`$, where
$$B_\alpha =B_0+\eta _\alpha \sum _{\beta \ne \alpha }\xi _{\alpha \beta }N_\beta .$$
(10)
In order to have a well-defined thermodynamic limit $`N_\alpha \to \infty `$, the $`\xi _{\alpha \beta }`$ must be taken to scale as $`1/A`$, where $`A`$ is the area of the system; thus the magnetic field strengths $`B_\alpha `$ in (10) remain of order $`1`$ as $`A\to \infty `$ with the densities $`\rho _\alpha =N_\alpha /A`$ held fixed. We then expect Landau levels to form for each species. It is well-known that each Landau level has a macroscopic degeneracy equal to $`A|q_\alpha B_\alpha |/(2\pi \hbar c)`$. The filling fraction of fermions of type $`\alpha `$ is given by
$$f_\alpha =\rho _\alpha \frac{2\pi \hbar c}{|q_\alpha B_\alpha |}.$$
(11)
If $`f_\alpha `$ is not equal to an integer for one or more values of $`\alpha `$, the ground state of the system is highly degenerate.
For computational purposes, it is convenient to break this degeneracy in one of two ways. We can either add a simple harmonic confining potential to the Hamiltonians (2) and (7) of the form
$$\mathcal{H}_{sh}=\frac{k}{2}\sum _{i,\alpha }\vec{r}_{i,\alpha }^2,$$
(12)
and take the limit $`k\to 0`$ at the end of the calculation, or we can simply impose a hard wall boundary condition at some large radius $`R`$. Analytically, it is easier to work with the first method since the problem of free particles in a combination of a uniform magnetic field and a simple harmonic confinement is exactly solvable as we will now discuss. (Let us drop the species label $`\alpha `$ in the rest of this paragraph and in the next). Since the problem has rotational symmetry, the energies and wave functions are specified by two quantum numbers, a radial quantum number $`n=0,1,2,\ldots `$ and the angular momentum $`l=0,\pm 1,\pm 2,\ldots `$. If only a magnetic field is present (with, say, the product $`qB`$ being positive), the single-particle states have energies which only depend on the integer $`n`$ which counts the number of nodes in the radial direction; thus
$$E_{n,l}=\hbar \omega _c\left(n+\frac{1}{2}\right), \qquad (13)$$
$$\omega _c=\frac{qB}{mc}. \qquad (14)$$
In the lowest Landau level (LLL), $`n=0`$ while $`l`$ can only take non-negative values; all states have the energy $`E_{0,l}=\hbar \omega _c/2`$ independent of $`l`$. The normalized wave functions in the LLL are given in terms of the complex coordinates $`z=x+iy`$ and $`z^{*}=x-iy`$ as
$$\psi _{0,l}(z,z^{*})=\left(\frac{qB}{2\hbar c}\right)^{(l+1)/2}\frac{z^l}{\sqrt{l!\pi }}\mathrm{exp}\left[-\frac{qB}{4\hbar c}zz^{*}\right],$$
(15)
where $`l=0,1,2,\ldots `$. The amplitudes of these wave functions are peaked on circles of various radii centered about the origin $`\vec{r}=\vec{0}`$; the radii of these “ring” states are given by $`r_l=\sqrt{2l\hbar c/(qB)}`$. (If $`qB`$ is negative, the LLL wave functions are given by Eq. (15) with $`z`$ replaced by $`z^{*}`$. Then the angular momentum only takes non-positive values).
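For instance, the quoted ring radius follows directly from Eq. (15): the probability density behaves as

$$|\psi _{0,l}|^2\propto r^{2l}\mathrm{exp}\left[-\frac{qB}{2\hbar c}r^2\right],\qquad \frac{d}{dr}|\psi _{0,l}|^2=0\Rightarrow r_l^2=\frac{2l\hbar c}{qB},$$

which reproduces $`r_l`$ above.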
If we now add a weak simple harmonic potential $`m\omega ^2\stackrel{}{r}^2/2`$ for all the particles, the energies of the ring states in the LLL become
$$E_l=\frac{\hbar }{2}\left[\sqrt{\omega _c^2+4\omega ^2}+\left(\sqrt{\omega _c^2+4\omega ^2}-\omega _c\right)|l|\right],$$
(16)
which increase from the origin outwards as $`|l|`$ increases from zero. In the many-particle ground state, therefore, the fermions fill up the individual ring states from the origin outwards. In the following discussion, we will assume this order of filling in the LLL, without explicitly mentioning the simple harmonic confinement which justifies it.
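For orientation, expanding Eq. (16) for a weak confinement, $`\omega \ll \omega _c`$, gives

$$E_l\simeq \frac{\hbar \omega _c}{2}+\frac{\hbar \omega ^2}{\omega _c}\left(1+|l|\right),$$

so the LLL degeneracy is lifted linearly in $`|l|`$ with a slope that vanishes as $`k=m\omega ^2\to 0`$, which is what justifies the order of filling described above.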
We will now specialize to the case of two species of fermions to illustrate some properties of our model. Let us take the masses equal to $`m`$ for both species, the charges equal to $`q_1=q_2=q`$ (thus, $`\eta _1=\eta _2=1`$), and the numbers of particles equal to $`N_1`$ and $`N_2`$ for the two species respectively. We will also set
$$\xi _{12}=-\xi _{21}=\frac{\gamma }{A},$$
(17)
where $`\gamma `$ is a number of order $`1`$. After the unitary transformation in (5), the two species see uniform magnetic fields equal to
$$B_1=B_0+\gamma \frac{N_2}{A} \qquad (18)$$
$$\mathrm{and}\quad B_2=B_0-\gamma \frac{N_1}{A} \qquad (19)$$
respectively. If the number of particles $`N_1=N_2`$, the model is invariant under the exchange of the species labels $`1\leftrightarrow 2`$ and $`\gamma \to -\gamma `$; this is in addition to the discrete symmetries $`C`$ and $`PT`$ discussed in general before.
One of the properties of interest for such a model is the Hall conductivity. In the absence of impurities and any other interactions (such as Coulomb repulsion), what is the Hall conductivity of this system if the filling fractions $`f_1`$ and $`f_2`$ are both integers? It is fairly easy to see that the answer is
$$\sigma _{xy}=[f_1\mathrm{sign}(B_1)+f_2\mathrm{sign}(B_2)]\frac{q^2}{2\pi \hbar }.$$
(20)
This can be derived from the usual formula for the frequency-dependent conductivity
$$\sigma _{xy}=\frac{i}{\omega }\sum _{a\ne 0}\left[\frac{\langle 0|J_x|a\rangle \langle a|J_y|0\rangle }{\omega -E_a+E_0+i\eta }-\frac{\langle 0|J_y|a\rangle \langle a|J_x|0\rangle }{\omega +E_a-E_0+i\eta }\right],$$
(21)
where $`|0\rangle `$ is the ground state of the many-body system, and the sum over $`|a\rangle `$ runs over all the excited states; $`\eta `$ is an infinitesimal positive number. The current $`\vec{J}`$ is given by the second-quantized expression
$$\vec{J}=-c\frac{\delta \mathcal{H}}{\delta \vec{A}}=\frac{q}{2mi}\sum _{\alpha =1}^2\int d^2\vec{r}\left[\mathrm{\Psi }_\alpha ^{\dagger }\left(\vec{p}-\frac{q_\alpha }{c}\vec{A}-\frac{q_\alpha }{c}\vec{\mathcal{A}}_\alpha \right)\mathrm{\Psi }_\alpha -\mathrm{hermitian\ conjugate}\right].$$
(22)
If we now perform the unitary transformation in (5) on both the current and the states, then (22) reduces to the conventional expression for the current operator of two species of fermions placed in the magnetic fields given by (19). Eq. (21) can then be evaluated in the usual way ; in the zero frequency limit, we obtain the expression given in (20). The Hall conductivity will remain unchanged if we make our model more realistic by including Coulomb repulsion between the particles.
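A minimal numerical sketch of Eqs. (11), (18)–(20) may be useful here; the particle numbers, area, $`B_0`$ and $`\gamma `$ below are made-up values (chosen so that both filling fractions come out integer), in units with $`\hbar =c=q=1`$:

```python
import math

hbar = c = q = 1.0
N1 = N2 = 400
A = 100.0
rho = N1 / A                    # common density of the two species
B0 = 6.0 * math.pi              # external field (illustrative)
gamma = 0.5 * math.pi           # coupling, chosen so gamma*rho = 2*pi

B1 = B0 + gamma * N2 / A        # Eq. (18) -> 8*pi
B2 = B0 - gamma * N1 / A        # Eq. (19) -> 4*pi

f1 = rho * 2 * math.pi * hbar * c / abs(q * B1)   # Eq. (11) -> 1.0
f2 = rho * 2 * math.pi * hbar * c / abs(q * B2)   # Eq. (11) -> 2.0

sign = lambda x: math.copysign(1.0, x)
sigma_xy = (f1 * sign(B1) + f2 * sign(B2)) * q**2 / (2 * math.pi * hbar)  # Eq. (20)
print(f1, f2, sigma_xy)         # ~1.0, ~2.0, ~0.477 = 3/(2*pi)
```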
Another object of interest in this model is the matrix element of the “hopping” operator
$$M(\vec{r})=c_1^{\dagger }(\vec{r})c_2(\vec{r})$$
(23)
between the ground state of the system with $`(N_1,N_2)`$ particles and all possible states of the system with $`(N_1+1,N_21)`$ particles. \[The calculation of this overlap is of interest in connection with the “orthogonality” catastrophe which is known to occur in Luttinger liquids in one dimension. It may also be useful in the context of a two-layer quantum Hall system in which electrons can hop from one layer to the other\]. Since our original Hamiltonian (2) is translation invariant, it is sufficient to compute the matrix element of $`M(\stackrel{}{0})`$ located at the origin. This simplifies the computation for the following reason. In a second quantized form, the annihilation operator for any species is given by
$$c(\vec{r})=\sum _{n,l}\psi _{n,l}(\vec{r})c_{n,l},$$
(24)
where the sum runs over all one-particle states $`(n,l)`$ with wave functions $`\psi _{n,l}`$, and $`c_{n,l}`$ annihilates a fermion in the state $`(n,l)`$. Since only the zero angular momentum states have non-vanishing wave functions at the origin, $`c(\stackrel{}{0})`$ gets a contribution from only $`l=0`$ but all possible radial quantum numbers $`n`$. Thus
$$c(\vec{0})=\sum _n\psi _{n,0}(\vec{0})c_{n,0},$$
(25)
where
$$|\psi _{n,0}(\vec{0})|^2=\frac{qB}{2\pi \hbar c}$$
(26)
for all $`n`$. (This follows from the normalization of the Laguerre polynomials given in Refs. 7 and 8). We can now compute the frequency-dependent hopping function
$$\mathcal{M}(\omega )=\sum _a|\langle a;N_1+1,N_2-1|M(\vec{0})|0;N_1,N_2\rangle |^2\,2\pi \delta (\hbar \omega -E_a+E_0),$$
(27)
where $`|0;N_1,N_2\rangle `$ denotes the ground state of the system with $`(N_1,N_2)`$ particles, while $`|a;N_1+1,N_2-1\rangle `$ denotes all possible states of the system with $`(N_1+1,N_2-1)`$ particles.
For simplicity, let us consider the case in which the filling fractions $`f_1`$ and $`f_2`$ are both less than $`1`$, and $`N_1=N_2=N`$. Then the ground state $`|0;N,N\rangle `$ is one in which both the type $`1`$ and type $`2`$ particles occupy the LLL states with $`n=0`$ and angular momentum $`l=0,1,2,\ldots ,N-1`$. Upon acting on this state with the operator $`M(\vec{0})`$ in (23), we get a state $`|a;N+1,N-1\rangle `$ in which a type $`2`$ particle has been removed from the state $`(0,0)`$, and a type $`1`$ particle has been added to the state $`(n,0)`$, where $`n\ne 0`$ due to the Pauli exclusion principle. Hence the energy difference is
$$E_a-E_0=\frac{\hbar q}{mc}\left[\left(n+\frac{1}{2}\right)|B_1|-\frac{1}{2}|B_2|\right],$$
(28)
where $`n=1,2,3,\mathrm{}`$. This gives the locations of the $`\delta `$-functions on the right hand side of (27). We now have to find the weights. We can use (26) to show that for species $`2`$,
$$|\langle a_2;N-1|c_2(\vec{0})|0;N\rangle |^2=\frac{q|B_2|}{2\pi \hbar c},$$
(29)
where $`|a_2;N-1\rangle `$ represents the state in which a particle of type $`2`$ has been removed from the state $`(0,0)`$. Similarly, for species $`1`$, we have
$$|\langle a_1;N+1|c_1^{\dagger }(\vec{0})|0;N\rangle |^2=\frac{q|B_1|}{2\pi \hbar c},$$
(30)
where $`|a_1;N+1\rangle `$ represents the state in which a particle of type $`1`$ has been added to the state $`(n,0)`$ where $`n\ge 1`$. To use the results (29) and (30) for evaluating the matrix elements in (27), we now perform the unitary transformation in (5). At this point, we have to worry about two things. Firstly, the $`N+1`$ particles of type $`1`$ in the states $`|a\rangle `$ see a slightly different magnetic field than the $`N`$ particles of type $`1`$ in the state $`|0\rangle `$, since the number of type $`2`$ particles differ by one in the two cases. However, the difference in the two magnetic fields is of order $`1/A`$ which vanishes in the thermodynamic limit. Thus the wave functions in the two cases look almost the same since the magnetic field $`B`$ appearing in (15) differs only slightly in the two cases; so when we use Eq. (30), it does not matter much if we set $`N_2`$ equal to $`N`$ or $`N-1`$ to determine the value of $`B_1`$ given by Eq. (19). Secondly, we have to worry about the phase factor appearing in $`U`$ which depends on the total coordinates $`\vec{R}_{N,\alpha }=\sum _i\vec{r}_{i,\alpha }`$; recall Eq. (5). At this point, another advantage of locating the hopping operator at the origin $`\vec{r}=\vec{0}`$ becomes apparent. Namely, we see that the phase factor $`\widehat{z}\cdot \vec{R}_{N+1,1}\times \vec{R}_{N-1,2}`$ appearing in the states $`|a\rangle `$ cancels with the phase factor $`\widehat{z}\cdot \vec{R}_{N,1}\times \vec{R}_{N,2}`$ appearing in the state $`|0\rangle `$, since the two states only differ by the addition or removal of particles at the origin; this does not change the total coordinate $`\vec{R}_\alpha `$ of either species. Putting Eqs. (29-30) and (28) together, we see that the hopping function is given by
$$\mathcal{M}(\omega )=\frac{q^2|B_1B_2|}{(2\pi \hbar c)^2}\sum _{n=1}^{\infty }2\pi \delta \left(\hbar \omega -\frac{\hbar q}{mc}\left(n+\frac{1}{2}\right)|B_1|+\frac{\hbar q}{2mc}|B_2|\right).$$
(31)
We thus get an infinite sequence of $`\delta `$-functions with equal weight.
Finally, let us compute the one-particle off-diagonal density matrix for, say, species $`1`$. We assume again that $`N_1=N_2=N`$ and both the filling fractions $`f_\alpha `$ are less than $`1`$. We have to evaluate
$$\rho (\vec{r},\vec{r}^{\prime })=\prod _{i=2}^N\int d^2\vec{r}_{i,1}\prod _{j=1}^N\int d^2\vec{r}_{j,2}\,\psi ^{*}(\vec{r},\vec{r}_{2,1},\ldots ,\vec{r}_{N,1};\vec{r}_{1,2},\vec{r}_{2,2},\ldots ,\vec{r}_{N,2})\,\psi (\vec{r}^{\prime },\vec{r}_{2,1},\ldots ,\vec{r}_{N,1};\vec{r}_{1,2},\vec{r}_{2,2},\ldots ,\vec{r}_{N,2}),$$
(32)
where we assume that the particles fill up the states $`l=0,1,2,\ldots ,N`$ in the LLL. We again perform the unitary transformation (5). The integrand in (32) then becomes the product of a phase factor
$$\mathrm{exp}\left[\frac{iq\gamma }{\hbar cA}\widehat{z}\cdot (\vec{r}-\vec{r}^{\prime })\times \vec{R}_2\right]$$
(33)
(where $`\vec{R}_2=\sum _j\vec{r}_{j,2}`$ and we have used Eq. (17)), four Van der Monde determinants which typically look like $`\prod _{k<l}(z_{k,\alpha }-z_{l,\alpha })`$ and its complex conjugate for both the species, and the Gaussian factor
$$\mathrm{exp}\left[-\frac{q|B_1|}{4\hbar c}\left(\vec{r}^2+\vec{r}^{\prime 2}+2\sum _{i=2}^N\vec{r}_{i,1}^2\right)-\frac{q|B_2|}{2\hbar c}\sum _{j=1}^N\vec{r}_{j,2}^2\right].$$
(34)
Since the Van der Monde determinants are invariant under translations, we can immediately integrate over the $`N-1`$ independent relative coordinates (i.e., $`\vec{r}_{k,2}-\vec{r}_{l,2}`$) of the type $`2`$ particles. The total coordinate $`\vec{R}_2`$ of species $`2`$ then remains in the form
$$\mathrm{exp}\left[\frac{iq\gamma }{\hbar cA}\widehat{z}\cdot (\vec{r}-\vec{r}^{\prime })\times \vec{R}_2-\frac{q|B_2|}{2\hbar c}\frac{\vec{R}_2^2}{N}\right].$$
(35)
When we integrate over $`\vec{R}_2`$, we get a Gaussian of the form $`\mathrm{exp}[-d(\vec{r}-\vec{r}^{\prime })^2/A]`$, where $`d`$ is a number of order $`1`$. In the thermodynamic limit, we can set this equal to $`1`$ since we can assume that the separation $`|\vec{r}-\vec{r}^{\prime }|`$ is much smaller than the size of the system. We are now left with only the coordinates $`\vec{r}_{i,1}`$, with $`i=2,3,\ldots ,N`$, to integrate over. We finally get
$$\rho (\vec{r},\vec{r}^{\prime })=\frac{1}{N}\sum _{l=0}^N\left(\frac{qB_1}{2\hbar c}\right)^{l+1}\frac{(z^{*}z^{\prime })^l}{l!\pi }\mathrm{exp}\left[-\frac{qB_1}{4\hbar c}(zz^{*}+z^{\prime }z^{\prime *})\right],$$
(36)
where we have assumed that $`qB_1`$ is positive. If we now take the limit $`N\to \infty `$, we find that the off-diagonal density matrix is the product of a Gaussian times a phase,
$$\rho (\vec{r},\vec{r}^{\prime })\to \frac{qB_1}{2\pi \hbar c}\mathrm{exp}\left[-\frac{qB_1}{4\hbar c}|z-z^{\prime }|^2+\frac{qB_1}{4\hbar c}(z^{*}z^{\prime }-zz^{\prime *})\right],$$
(37)
which is the usual result for a single species of particles in the LLL.
To summarize, we have introduced and solved a two-dimensional multi-species Fermi system with mutual interactions of a particular type. The interaction can be converted via a unitary transformation into a static magnetic field whose strength depends on the density of particles. The model is quite simple; after all, the exact solvability of a Hamiltonian which has a quadratic form should surprise no one. Yet the physics of the model is quite interesting. We end up with a strongly non-Fermi liquid system; further, the elements of the orthogonality catastrophe, i.e., a readjusting of all states in response to the addition of a single particle, also carry over from the one-dimensional physics of the Luttinger model. Our model may thus serve some purpose in understanding the physics of non-Fermi liquids in higher dimensions.
We would like to dedicate this paper to the memory of Heinz Schulz.
|
no-problem/9901/physics9901041.html
|
ar5iv
|
text
|
# II. LONGITUDINAL ELECTRON SPIN POLARISATION AT 27.5 GEV IN HERA<sup>a</sup>
<sup>a</sup>A talk presented at the 15th ICFA Advanced Beam Dynamics Workshop: “Quantum Aspects of Beam Physics”, Monterey, California, U.S.A., January 1998. Also in DESY Report 98–096, September 1998.
## Summary
An integral part of the design of the HERA $`ep`$ collider has been the provision of longitudinally spin polarised electrons for the high energy physics experiments at the interaction points. At HERA, electrons or positrons of energy up to $`30\,\mathrm{GeV}`$ are brought into collision with $`820\,\mathrm{GeV}`$ protons.
As outlined in Article I, stored electron beams can become vertically polarised by the Sokolov–Ternov effect. The maximum ST polarisation achievable is $`92.4\%`$, corresponding to a planar ring. To provide longitudinal polarisation at an interaction point the naturally occurring vertical polarisation in the arcs must be rotated into the longitudinal direction just before the interaction point (IP) and back to the vertical just after the IP using special magnet configurations called spin rotators.
At HERA the spin rotators consist of strings of interleaved horizontal and vertical bending magnets each of which deflects the orbit by no more than about $`20`$ milliradians. Dipole rotators exploit the prediction of Eq. (4) in Article I that for motion transverse to the magnetic field, the spin precesses around the field at a rate which is $`a\gamma `$ times faster than the rate of rotation of the orbit direction. At HERA energies $`a\gamma `$ is between 60 and 68. So it can be arranged that small commuting deflections of the orbit can result in large noncommuting precessions of the polarisation vector which can be utilised to rotate the polarisation from the vertical to the longitudinal direction and vice versa. For HERA, the Mini-Rotator design of Buon and Steffen was adopted. The first pair of these spin rotators was installed at the East straight section for the HERMES experiment.
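For orientation, $`a\gamma `$ follows directly from the electron anomalous magnetic moment and the beam energy; a quick estimate with standard values of the constants reproduces the 60–68 range quoted above:

```python
a_e = 0.00115965      # electron anomalous magnetic moment (g-2)/2
m_e = 0.000510999     # electron mass in GeV
for energy_gev in (26.5, 27.5, 30.0):
    print(f"E = {energy_gev:4.1f} GeV -> a*gamma = {a_e * energy_gev / m_e:.1f}")
# E = 26.5 GeV -> 60.1, E = 27.5 GeV -> 62.4, E = 30.0 GeV -> 68.1
```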
Synchrotron radiation not only generates polarisation but can also cause depolarisation (Article I). This is especially the case in the presence of spin rotators. Furthermore the ratio: (depolarisation rate/polarisation rate) increases strongly with energy. However, the depolarising effects can in principle be minimised by special choice of the optic called ‘spin matching’ . But owing to the difficulty of obtaining reliable numerical predictions of the polarisation in the presence of rotators throughout the preparatory stage of the project and because of the initially very pessimistic predictions, it was by no means clear that longitudinal polarisation could be obtained even after spin matching.
Nevertheless, on the first attempt with the rotators switched on at the chosen energy of $`27.5GeV`$, a longitudinal electron polarisation of about $`56\%`$ was attained. This is to be compared with the $`65\%`$ polarisation attained immediately beforehand with the rotators turned off.
This was the first time in the history of high energy storage ring physics that longitudinal polarisation had been attained. Space limitations prevent my giving more details here but complete information and diagrams can be found in .
Subsequently longitudinal polarisation levels of about $`70\%`$ for periods of up to ten hours for positrons in collision with tens of milliamps of high energy protons have been achieved. Furthermore by measuring the polarisation of individual positron bunches the influence of the beam–beam interaction on the polarisation has been observed and found to be exotic. For example positron bunches in collision with protons can have a higher polarisation than non–colliding bunches and the polarisation of both groups of bunches is very sensitive to the tunes of the machine. Obviously, complicated resonance phenomena are at work .
## Outlook
In the year 2000 two more pairs of spin rotators will be installed so that three HERA experiments can work with longitudinally polarised electrons or positrons.
|
no-problem/9901/quant-ph9901041.html
|
ar5iv
|
text
|
# Average local values and local variances in quantum mechanics
J.G. Muga, J.P. Palao and R. Sala
Departamento de Física Fundamental y Experimental, Facultad de Física, Universidad de La Laguna, Tenerife, Spain
Abstract
Several definitions for the average local value and local variance of a quantum observable are examined and compared with their classical counterparts. An explicit way to construct an infinite number of these quantities is provided. It is found that different classical conditions may be satisfied by different definitions, but none of the quantum definitions examined is entirely consistent with all classical requirements.
Ref: Physics Letters A 238(1998)90, Electronic version with permission of Elsevier
e-mail address: JMUGA@ULL.ES
Fax number: 34-22-603684
PACS 03.65S - Semiclassical theories and applications.
PACS 03.65 - Formalism.
Keywords: Phase space, quantization rules, classical-quantum correspondence, foundations of quantum mechanics.
Many simple concepts of standard classical statistical mechanics are not easy to translate into quantum mechanics. The average local value and the local spread of a dynamical variable belong to this group. For an ensemble of particles in one dimension characterized by a joint distribution $`F(q,p)`$ of position $`q`$ and momentum $`p`$ the average local value of the dynamical variable $`a(q,p)`$ is given by
$$\overline{a}(q)=\frac{1}{P(q)}\int \int a(q^{\prime },p)\delta (q^{\prime }-q)F(q^{\prime },p)dq^{\prime }dp,$$
(1)
where $`P(q)\equiv \int F(q,p)dp`$, and the local spread, or local variance, by
$$\sigma _{a|q}^2=\overline{a^2}(q)-[\overline{a}(q)]^2,$$
(2)
where $`\overline{a^2}`$ is the local average of the square of $`a`$, or second order “local moment”. Of course an arbitrary power of $`a`$, $`b(q,p)\equiv a^n(q,p)`$, is also a function of $`(q,p)`$, so the expression (1), mutatis mutandis, gives also the local moments of arbitrary order,
$$\overline{b}(q)=\frac{1}{P(q)}\int \int b(q^{\prime },p)\delta (q^{\prime }-q)F(q^{\prime },p)dq^{\prime }dp \qquad (3)$$
$$\phantom{\overline{b}(q)}=\overline{a^n}(q)=\frac{1}{P(q)}\int \int a^n(q^{\prime },p)\delta (q^{\prime }-q)F(q^{\prime },p)dq^{\prime }dp. \qquad (4)$$
This observation may appear trivial at this point but it will become important when discussing the quantum case. Also, the following marginal, joint, and conditional probabilities can be defined,
$$P(a)=\int \int \delta [a-a(q,p)]F(q,p)dqdp \qquad (5)$$
$$P(a,q)=\int \int \delta [a-a(q^{\prime },p)]\delta (q^{\prime }-q)F(q^{\prime },p)dq^{\prime }dp \qquad (6)$$
$$P(a|q)=P(a,q)/P(q) \qquad (7)$$
where the last equation is simply Bayes’ rule. In terms of these latter quantities $`\overline{a}`$ and $`\sigma _{a|q}^2`$ take the form
$$\overline{a}(q)=\int aP(a|q)da=\frac{1}{P(q)}\int aP(a,q)da$$
(8)
and
$$\sigma _{a|q}^2=\frac{1}{P(q)}\int (a-\overline{a})^2P(a,q)da.$$
(9)
The total (or global) average is simply the “$`q`$-average” \[i.e. an average over $`q`$ of a $`q`$-dependent function weighted by $`P(q)`$\] of the local averages,
$$\langle a\rangle =\int \int a(q,p)F(q,p)dqdp=\int \overline{a}(q)P(q)dq,$$
(10)
whereas the total variance takes the form,
$$\sigma _a^2\equiv \int \int \left(a-\langle a\rangle \right)^2P(a,q)dadq \qquad (11)$$
$$\phantom{\sigma _a^2}=\int \sigma _{a|q}^2P(q)dq+\int (\overline{a}(q)-\langle a\rangle )^2P(q)dq. \qquad (12)$$
$`\sigma _a^2`$ is the $`q`$-average of the local variances plus the “$`q`$-variance” of the local averages. This is an appealing decomposition, and it seems reasonable to seek definitions of quantum local moments that preserve its structure.
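Before turning to the quantum case, a minimal numerical sketch may make the decomposition concrete. It assumes a made-up classical ensemble (a correlated Gaussian $`F(q,p)`$) and the observable $`a=p`$; conditioning is done on finite $`q`$-bins, for which the identity (12) holds exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, corr = 200_000, 0.6
q, p = rng.multivariate_normal([0.0, 0.0], [[1.0, corr], [corr, 1.0]], n).T

# Partition q into bins and compute local weights, means and variances of p.
bins = np.linspace(q.min() - 1e-9, q.max() + 1e-9, 61)
idx = np.digitize(q, bins)
w, m_loc, v_loc = [], [], []
for k in range(1, len(bins)):
    pk = p[idx == k]
    if pk.size:
        w.append(pk.size / n); m_loc.append(pk.mean()); v_loc.append(pk.var())
w, m_loc, v_loc = map(np.array, (w, m_loc, v_loc))

lhs = p.var()                                                  # total variance
rhs = (w * v_loc).sum() + (w * (m_loc - p.mean())**2).sum()    # Eq. (12)
print(lhs, rhs)   # identical up to floating-point rounding
```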
In a recent publication in this journal, L. Cohen has argued that it is natural to interpret the following quantities as the local value of the observable associated with the hermitian operator $`\widehat{A}`$ and its local spread,
$$\overline{A}^S(q)\equiv \left(\frac{\langle q|\widehat{A}|\psi \rangle }{\langle q|\psi \rangle }\right)_{\mathrm{Re}} \qquad (13)$$
$$[\sigma _{A|q}^2]^C\equiv \left(\frac{\langle q|\widehat{A}|\psi \rangle }{\langle q|\psi \rangle }\right)_{\mathrm{Im}}^2. \qquad (14)$$
The superscripts $`S`$ and $`C`$ will distinguish these quantities from others to be defined later (the reason for using different letters will become clear soon), and the subscripts $`\mathrm{Re}`$ and $`\mathrm{Im}`$ indicate “real” and “imaginary” parts. (This interpretation of (13) has also been proposed in \[2-4\].) These quantities obey a relation with the form of Eq. (12),
$$\sigma _A^2=\int \sigma _{A|q}^2|\psi (q)|^2dq+\int (\overline{A}^S(q)-\langle \widehat{A}\rangle )^2|\psi (q)|^2dq,$$
(15)
and the analogy with the classical equation has been invoked to understand the significance of the two terms in (15). In this letter we shall explore how far the classical-quantum analogy goes. We shall see in particular that the definitions (13) and (14) do not always lead to results consistent with the form of equations (2) and (3). To this end it is useful to introduce an operator that symmetrizes $`\widehat{A}`$ and $`\delta (\widehat{q}-q)=|q\rangle \langle q|`$,
$$\widehat{A_q}=\frac{1}{2}\left[\widehat{A}\delta (\widehat{q}-q)+\delta (\widehat{q}-q)\widehat{A}\right],$$
(16)
and express (13) in terms of it as,
$$\overline{A}^S(q)=\frac{\langle \widehat{A_q}\rangle }{\varrho (q)},$$
(17)
where $`\varrho (q)=|\psi (q)|^2`$ is the probability density and $`\langle \widehat{A_q}\rangle \equiv \langle \psi |\widehat{A_q}|\psi \rangle `$. Note that the expectation value of $`\widehat{A_q}`$ is a “local density of $`A`$”, i.e., a quantity that integrated over $`q`$ provides the global average,
$$\int \langle \psi |\widehat{A_q}|\psi \rangle dq=\langle \widehat{A}\rangle .$$
(18)
(We shall see below that other definitions also satisfy this condition.) For consistency, the definition for the local average should apply to any operator, and in particular to $`\widehat{A}^2`$. The local average of $`\widehat{A}^2`$ is accordingly given by
$$\overline{A^2}^S(q)=\frac{\langle (\widehat{A}^2)_q\rangle }{\varrho (q)}=\frac{1}{2\varrho (q)}\langle \psi |[\widehat{A}^2\delta (\widehat{q}-q)+\delta (\widehat{q}-q)\widehat{A}^2]|\psi \rangle .$$
(19)
Following (2), a quantum local variance for $`\widehat{A}`$ is “naturally” defined as
$$[\sigma _{A|q}^2]^S=\overline{A^2}^S(q)-[\overline{A}^S(q)]^2 \qquad (20)$$
$$\phantom{[\sigma _{A|q}^2]^S}=\frac{1}{2\varrho (q)}\langle \psi |[\widehat{A}^2\delta (\widehat{q}-q)+\delta (\widehat{q}-q)\widehat{A}^2]|\psi \rangle -[\overline{A}^S(q)]^2 \qquad (21)$$
where the superscript $`S`$ reminds us that only symmetrized operators are used. This definition for the local variance also satisfies the decomposition (12), but it is different from (14). To see this in more detail let us write (14) as
$$[\sigma _{A|q}^2]^C=\left(\frac{\langle q|\widehat{A}|\psi \rangle }{\langle q|\psi \rangle }\right)_{\mathrm{Im}}^2=\left|\frac{\langle q|\widehat{A}|\psi \rangle }{\langle q|\psi \rangle }\right|^2-\left(\frac{\langle q|\widehat{A}|\psi \rangle }{\langle q|\psi \rangle }\right)_{\mathrm{Re}}^2$$
$$\phantom{[\sigma _{A|q}^2]^C}=\frac{1}{\varrho (q)}\langle \psi |\widehat{A}\delta (\widehat{q}-q)\widehat{A}|\psi \rangle -[\overline{A}^S(q)]^2$$
Clearly the two procedures imply two different interpretations of the local density for the square of $`\widehat{A}`$. In general,
$$\langle \psi |\widehat{A}\delta (\widehat{q}-q)\widehat{A}|\psi \rangle \ne \frac{1}{2}\langle \psi |[\widehat{A}^2\delta (\widehat{q}-q)+\delta (\widehat{q}-q)\widehat{A}^2]|\psi \rangle .$$
(23)
However their integrals over $`q`$ are both equal to $`\langle \widehat{A}^2\rangle `$. Eq. (20) provides a local variance consistent with the definition given for the local average and it is in agreement with the classical expressions (12) and (2), but it is not positive semidefinite. Its literal interpretation as a variance is therefore impossible. This resembles the status of the Wigner function and other phase space quasi-distribution functions. While not interpretable as probability distributions they can be used to correctly evaluate expectation values and to investigate the classical limit in a classical-like phase space language.
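A small numerical sketch makes the difference between the two definitions explicit for $`\widehat{A}=\widehat{p}`$. It assumes a Gaussian wave packet, sets $`\hbar =1`$, and evaluates the derivatives by finite differences on a grid; both local variances, inserted into a decomposition of the form (15), reproduce the same total $`\sigma _p^2=\hbar ^2/(4\sigma ^2)`$, even though $`[\sigma _{p|q}^2]^S`$ becomes negative in the tails of the packet:

```python
import numpy as np

hbar = 1.0
q = np.linspace(-20.0, 20.0, 4001)
dq = q[1] - q[0]
sigma, k0 = 1.5, 0.7                      # assumed packet width and momentum
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-q**2/(4*sigma**2) + 1j*k0*q)

dpsi = np.gradient(psi, dq)
d2psi = np.gradient(dpsi, dq)
rho = np.abs(psi)**2

A_S   = hbar * np.imag(dpsi / psi)                       # Eq. (13) for A = p
var_C = hbar**2 * np.real(dpsi / psi)**2                 # Eq. (14) for A = p
A2_S  = -hbar**2 * np.real(d2psi * np.conj(psi)) / rho   # Eq. (19) for A = p
var_S = A2_S - A_S**2                                    # Eq. (20)

p_mean = (A_S * rho).sum() * dq
for var_local in (var_C, var_S):
    total = ((var_local + (A_S - p_mean)**2) * rho).sum() * dq
    print(total, "vs", hbar**2 / (4*sigma**2))           # both ~ 0.111
print(var_S.min() < 0, var_C.min() >= 0)                 # True True: S is not positive
```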
In fact there are infinitely many decompositions of the total variance compatible with (12) and (2). They can be found by means of the phase space formalisms described within the general framework provided also by Cohen. Each of these formalisms is associated with a particular function $`f(\theta ,\tau )`$ of auxiliary variables $`\theta `$ and $`\tau `$. The density operator and the operator $`\widehat{A}`$ are related, respectively, to a quasi distribution function $`F(q,p;[f])`$ and a phase space representation $`\stackrel{~}{A}(q,p;[f])`$ \[both depend functionally on $`f`$; note that $`\stackrel{~}{A}(q,p;[f])`$ is not necessarily equal to the classical function $`a(q,p)`$\] in such a way that the expectation value of $`\widehat{A}`$ is given by the phase space integral, $`\langle \widehat{A}\rangle =\int \int \stackrel{~}{A}(q,p;[f])F(q,p;[f])dqdp`$. These formalisms are also closely related to the “quantization rules”, which define mappings from phase space functions to operators. The rules are generally used to associate a quantum operator with the classical function $`a(q,p)`$.
Local values can be defined using any of these phase space formalisms, see the classical expression (8), as
$$\overline{A}(q;[f])=\frac{1}{\varrho (q)}\int \stackrel{~}{A}(q,p;[f])F(q,p;[f])dp.$$
(24)
Their $`q`$-average is the global average, $`\langle \widehat{A}\rangle =\int \overline{A}\varrho dq`$, and the variance can be decomposed in agreement with (12) by writing
$$\sigma _A^2=\int \left[\int \frac{\stackrel{~}{A^2}(q,p;[f])F(q,p;[f])}{\varrho (q)}dp\right]\varrho (q)dq-\int \overline{A}^2(q;[f])\varrho (q)dq \qquad (25)$$
$$\phantom{\sigma _A^2}+\int \overline{A}^2(q;[f])\varrho (q)dq-\langle \widehat{A}\rangle ^2,$$
where $`\stackrel{~}{A^2}(q,p;[f])`$ is the phase space representation of $`\widehat{A}^2`$.
The first two terms in (25) may be regarded as the $`q`$-average of the “local variance”
$$\sigma _{A|q}^2[f]\equiv \overline{A^2}(q;[f])-\overline{A}^2(q;[f]),$$
(26)
while the last two terms take the form of a $`q`$-variance of local averages. An important point is that $`\stackrel{~}{A^2}(q,p;[f])`$ is in general different from the square $`\stackrel{~}{A}^2(q,p;[f])`$ so it is not necessarily a positive function. Unless $`f`$ is suitably chosen $`F`$ is not positive either, so that, contrary to the classical local variance, and in spite of the “square” used in the notation, $`\sigma _{A|q}^2[f]`$ is not necessarily positive. Of course for a given kernel $`f`$, $`\sigma _{A|q}^2[f]`$ can be positive for particular observables and, similarly, for a given observable a family of kernels will give positive local variances. In the case of momentum, $`\widehat{A}=\widehat{p}`$, Cohen has noted that (26) is positive for a family of kernel functions $`f`$ that lead precisely to the choices (13) and (14). In general, however, each observable requires a different analysis.
We shall next elaborate on the momentum observable and its powers using two of these formalisms because of their close relation to the previously discussed choices and to classical expressions. The Rivier-Margenau-Hill (RMH) formalism is obtained by taking $`f=\mathrm{cos}(\theta \tau \hbar /2)`$. We shall denote the corresponding representation of the state, namely the Margenau-Hill function, as $`F^{MH}`$. The subscript $`MH`$ will also be used for the local averages and variances, (24) and (26), defined within this framework. The corresponding mapping from phase space representatives to operators is the “Rivier rule”. When applied to a phase space function with the factorized form $`g(q)h(p)`$ it gives the symmetrized operator $`[g(\widehat{q})h(\widehat{p})+h(\widehat{p})g(\widehat{q})]/2`$. Note that for an arbitrary product of phase space functions $`AB`$, this symmetrization is not always equal to the simple symmetrization rule $`(\widehat{A}\widehat{B}+\widehat{B}\widehat{A})/2`$, where $`\widehat{A}`$ and $`\widehat{B}`$ are quantum operators assigned to $`A`$ and $`B`$ by some prescription (that could be in fact Rivier’s rule). However, if $`A(q)`$ and $`B(p)`$ are, respectively, functions of $`q`$ and $`p`$ only, and the associated operators are $`A(\widehat{q})`$ and $`B(\widehat{p})`$, then the two symmetrization procedures agree. This implies in particular that $`\overline{p^n}^S(q)=\overline{p^n}^{MH}(q)`$ and $`[\sigma _{p|q}^2]^S=[\sigma _{p|q}^2]^{MH}`$.
It has been argued that the interpretation of (13) as the local average leads to the Margenau-Hill function. This connection can indeed be made, but, remarkably, the local values obtained by means of the RMH formalism in general differ from (13). To understand how this may happen let us briefly review the derivation of the Margenau-Hill function. The idea is to define a conditional probability density $`P^S(p|q)`$ making use of the characteristic function concept, i.e., by inverting
$$G(\tau ,q)\equiv \overline{e^{i\tau p}}^S(q)=\int e^{i\tau p}P^S(p|q)dp$$
(27)
$$P^S(p|q)=\frac{1}{2\pi }\int e^{-i\tau p}\overline{e^{i\tau p}}^S(q)d\tau $$
(28)
Expanding the exponential in $`\overline{e^{i\tau p}}^S`$, there results a series where the coefficients are the local averages of the powers of $`p`$, $`\overline{p^n}^S`$. Using the coordinate representation for the momentum operator the Taylor series for $`\psi (q\pm \hbar \tau )`$ can be recognized,
$$\overline{e^{i\tau p}}^S(q)=\sum _n\frac{(i\tau )^n}{n!}\overline{p^n}^S(q)=\frac{\psi (q+\hbar \tau )}{2\psi (q)}+\frac{\psi ^{*}(q-\hbar \tau )}{2\psi ^{*}(q)}.$$
(29)
If in addition one defines a quasi-joint probability distribution by the product $`\varrho (q)P^S(p|q)`$, i.e., by formally adopting the structure of Bayes’ theorem, this quasi-probability turns out to be the Margenau-Hill function,
$$F^{MH}(q,p)=[\langle p|\psi \rangle \langle \psi |q\rangle \langle q|p\rangle ]_{\mathrm{Re}}.$$
(30)
But, as discussed before, in general $`\overline{A}^S\ne \overline{A}^{MH}`$. The symmetrized operator $`(\widehat{A}^2)_q`$ is equal to the one provided by Rivier’s rule if $`A`$ is only a function of $`p`$, but will differ in general. The use of the structure of Bayes’ theorem in the derivation of (30) is the key point that explains this seeming contradiction. This is a theorem valid for actual probabilities, but $`P^S(p|q)`$ and $`F^{MH}(q,p)`$ are not. In fact they can be negative.
Finally, the comparison with the classical equations satisfied by local average and the variance is completed here by studying the time dependence, in particular the equations for time derivatives of the local averages of the powers of $`p`$, i.e., the equations of hydrodynamics. In this context the Weyl-Wigner (WW) phase space formalism is privileged. This formalism is associated with the simplest choice, $`f=1`$ . The phase space representative of the state is the Wigner function, $`F^W(q,p)`$, and the corresponding rule is the Weyl quantization rule. A superscript $`W`$ will denote the local quantities (24) and (26) calculated with this formalism.
Cohen’s prescription, Eq. (13), and the phase space formalisms RMH and WW lead to the same equation for the local density, namely the continuity equation,
$$\frac{\partial \varrho (q)}{\partial t}=-\frac{\partial }{\partial q}[\varrho (q)\overline{p}(q)],$$
(31)
where $`\overline{p}=\overline{p}^S=\overline{p}^{MH}=\overline{p}^W`$. For the first local moment, the Wigner function leads to an “equation of motion” with exactly the same form as the classical one ,
$$\frac{\partial \overline{p}^W}{\partial t}=-\frac{\overline{p}^W}{m}\frac{\partial \overline{p}^W}{\partial q}-\frac{\partial V(q)}{\partial q}-\frac{1}{m\varrho (q)}\frac{\partial \left(\varrho (q)[\sigma _{p|q}^2]^W\right)}{\partial q},$$
(32)
$`m`$ being the mass, and $`V(q)`$ the potential function. (Even though the form is the same, the numerical values differ in general between the classical and the quantum cases, and the local variance $`[\sigma _{p|q}^2]^W`$ is not positive semidefinite.) However, in the other approaches the local variances are different,
$$[\sigma _{p|q}^2]^W=[\sigma _{p|q}^2]^{MH}+\frac{1}{4\varrho (q)}\langle 2\widehat{p}\delta (\widehat{q}-q)\widehat{p}-\widehat{p}^2\delta (\widehat{q}-q)-\delta (\widehat{q}-q)\widehat{p}^2\rangle \qquad (33)$$
$$\phantom{[\sigma _{p|q}^2]^W}=[\sigma _{p|q}^2]^C-\frac{1}{4\varrho (q)}\langle 2\widehat{p}\delta (\widehat{q}-q)\widehat{p}-\widehat{p}^2\delta (\widehat{q}-q)-\delta (\widehat{q}-q)\widehat{p}^2\rangle , \qquad (34)$$
and there appear extra terms which are not present in the classical equation (for the RMH formalism they were studied by Sonego). At least for several model potentials studied in ref. the WW formalism is not only closer to classical mechanics formally in this context; it is also numerically closer. The local kinetic energy densities $`\varrho (q)\overline{p^2}^W(q)/(2m)`$, $`\varrho (q)\overline{p^2}^{MH}(q)/(2m)`$, and $`\langle \widehat{p}\delta (\widehat{q}-q)\widehat{p}\rangle /(2m)`$ were evaluated, and the one provided by the WW formalism was clearly the closest (numerically) to the classical values. Determining the extent of this agreement is an interesting open question for a separate study.
In summary, we have examined different definitions of “quantum local averages and variances”, and their similarities and differences with various classical expressions. None of them satisfies all the relations valid classically. It is useful to maximize the agreement with classical mechanics for examining the classical limit, and to gain physical insight in certain applications, but it should be noted that different classical criteria are satisfied by different quantum definitions. For specific applications one of the definitions may turn out to be the most convenient, but in fact each of them contains a piece of information which is only partially analogous to the corresponding classical quantity.
Acknowledgments
We acknowledge the referee for very interesting comments. Support by Gobierno Autónomo de Canarias (Spain) (Grant PI2/95) and by Ministerio de Educación y Ciencia (Spain) (PB 93-0578) is acknowledged. JPP acknowledges an FPI fellowship from Ministerio de Educación y Ciencia.
References
1. L. Cohen, Phys. Lett. A 212 (1996) 315
2. P. R. Holland, The quantum theory of motion (Cambridge Univ. Press, Cambridge, 1993)
3. K. K. Wan and P. Summer, Phys. Lett. A 128 (1988) 458
4. R. I. Sutherland, J. Math. Phys. 23 (1982) 2389
5. L. Cohen, J. Math. Phys. 7 (1966) 781.
6. R. Sala, J. P. Palao and J. G. Muga, Phys. Lett. A, accepted
7. L. Cohen, Found. Phys. 20 (1990) 1455; L. Cohen, Time-Frequency analysis (Prentice-Hall, Englewood Cliffs, New Jersey, 1995)
8. D. C. Rivier, Phys. Rev. 83 (1957) 862.
9. H. Margenau and R. N. Hill, Progr. Theoret. Phys. (Kyoto) 26 (1961) 722; G. C. Summerfield and P. F. Zweifel, J. Math. Phys. 10 (1969) 233
10. E. T. García Álvarez and A. D. González, Am. J. Phys. 59 (1991) 279
11. E. Wigner, Phys. Rev. 40 (1932) 749; J. E. Moyal, Proc. Cambridge Philos. Soc. 45 (1949) 99; H. Weyl, The Theory of Groups and Quantum Mechanics (Dover, New York, 1950)
12. J. G. Muga, R. Sala and R. F. Snider, Physica Scripta 47 (1993) 732
13. S. Sonego, Phys. Rev. A 42 (1990) 3733
14. C. C. Real, J. G. Muga and S. Brouard, Am. J. Phys. 65 (1997) 157
# Interfacial tension behavior of binary and ternary mixtures of partially miscible Lennard-Jones fluids: A molecular dynamics simulation.
## I Introduction
The investigation of interfacial tension behavior in complex fluids is of relevance from the theoretical as well as from the practical point of view . In particular, the nature of the liquid-vapor interface has been extensively studied both analytically and numerically, using Monte Carlo and Molecular Dynamics algorithms. On the other hand, the properties of the liquid-liquid interface, which are of importance in different biological systems as well as in various technological applications , have received significantly less attention. One reason may be the complexity of the topology of the phase diagram for liquid mixtures. It is known that the intermolecular properties of a single component fluid have no important effect on the topology of its thermodynamic phase diagram leading to the concept of corresponding states. By contrast, for liquid mixtures the interplay between the molecular properties of the components as well as the type of the intermolecular potential between different species leads to a highly non-trivial topology of the thermodynamic phase diagram. Recently, the properties of liquid-liquid planar interfaces have begun to be studied by means of molecular dynamics simulations using Lennard-Jones (LJ) interactions, by Monte Carlo simulations of benzene-water interface and hexanol-water as well as by density functional theory. In addition, there have also been molecular dynamics simulations for the interface of water-alcohols using more complicated intermolecular potentials that include the relevant molecular degrees of freedom. . The density and pressure profiles of a symmetrical binary mixture of LJ fluids at the reduced temperature $`T^{}=0.827`$ was studied in a region of the phase diagram where the system is separated into two phases. It was found that as the system becomes more miscible there is more diffusion of particles from one phase to the other and, as a consequence, the interfacial tension decreases. Some dynamic properties of this system were investigated at two reduced densities, $`\rho ^{}=`$0.8 and 1.37 for several temperatures from which a crude sketch of the phase diagram was suggested. A third type of particle representing a simple model of amphiphilic molecule was introduced finding that for low concentrations (of the order of 6%) there is no important effect on the binary fluid interface. In recent papers the structural properties of the above mentioned binary mixture was investigated at low temperatures and high pressures finding a stable oscillatory behavior in the density profiles close to the interface. These oscillations decrease as the temperature increases. It is also suggested that as pressure grows the interfacial tension increases. A recent density functional investigation of a binary fluid mixture shows that the interfacial tension as a function of temperature can have a maximum.
With the purpose of getting a more complete understanding of the thermodynamic and structural properties of this liquid-liquid interface we have performed extensive equilibrium molecular dynamics simulations. The main results found in this work are: (i) the non-monotonic behavior of the interfacial tension as a function of temperature and (ii) the monotonic decay of this quantity as the concentration of a third type of particle, amphiphilic-like, increases. The layout of this paper is as follows: In section II we introduce the model and details of the simulations. In section III we present the results for the thermodynamic as well as structural properties of the binary and ternary systems. Finally, in section IV the conclusions of this investigation are presented.
## II Model and simulations.
We consider a symmetrical binary mixture of partially miscible Lennard-Jones fluids. The intermolecular interactions $`F_{_{AA}},F_{_{BB}}`$ and $`F_{_{AB}}`$ between particles A-A, B-B and A-B are described by modified LJ potentials that yield the forces,
$$F_{_{XY}}(r)=\{\begin{array}{cc}\frac{24}{r}ϵ\left[2\left(\frac{\sigma }{r}\right)^{12}-\alpha _{_{XY}}\left(\frac{\sigma }{r}\right)^6\right]\hfill & \text{ if }r\le R_c\hfill \\ 0\hfill & \text{ if }r>R_c\hfill \end{array}$$
(1)
where $`ϵ`$ and $`\sigma `$ are the same for all interactions and the parameter $`\alpha _{_{XY}}`$ controls the miscibility of the two fluids. For a partially miscible binary fluid we choose $`\alpha _{_{AA}}=\alpha _{_{BB}}=1`$ and $`0\le \alpha _{_{AB}}<1`$. The interaction between particles of the same type is energetically more favourable (stronger binding) than the A-B interaction between different particles. For this reason one would expect a separation of the species at lower temperature, where entropy loses out against the potential energy. With the aim of understanding the process by which a “surfactant” weakens the interfacial tension and leads to the mixed state, a third species C is introduced to emulate a surfactant-like particle. In this case we consider the same parameters in Eq. 1 and $`\alpha _{_{CC}}=\alpha _{_{AC}}=\alpha _{_{BC}}=1`$.
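A minimal sketch of the interaction of Eq. (1) is given below (not the simulation code used here). The underlying pair potential u(r), with the miscibility parameter scaling only the attractive term, is our reading of Eq. (1): its radial derivative reproduces the quoted force.

```python
# Sketch of the modified Lennard-Jones interaction of Eq. (1), in reduced units.
import numpy as np

def lj_force(r, alpha, eps=1.0, sigma=1.0, r_cut=3.0):
    """Magnitude of the pair force F_XY(r) of Eq. (1); zero beyond the cutoff."""
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r)**6
    f = 24.0 * eps / r * (2.0 * sr6**2 - alpha * sr6)
    return np.where(r <= r_cut, f, 0.0)

def lj_potential(r, alpha, eps=1.0, sigma=1.0, r_cut=3.0):
    """Assumed pair potential whose derivative gives the force above."""
    r = np.asarray(r, dtype=float)
    sr6 = (sigma / r)**6
    u = 4.0 * eps * (sr6**2 - alpha * sr6)
    return np.where(r <= r_cut, u, 0.0)

# The A-B attraction (alpha = 0.5) is weaker than the A-A / B-B one (alpha = 1):
r = np.linspace(0.9, 3.0, 200)
print(lj_potential(r, 1.0).min(), lj_potential(r, 0.5).min())   # approx -1.0 vs -0.25
```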
We have carried out extensive equilibrium molecular dynamics (MD) simulations using the (N,V,T) ensemble for the particular miscibility parameter value $`\alpha _{_{AB}}=0.5`$. The range of the inter-molecular potential was set equal to three times the particle diameter $`\sigma `$ unless otherwise stated. In most of the simulations a total of N=$`1728`$ particles were used. Nonetheless, to check for finite size effects we also carried out simulations with N=$`2592`$ particles. These particles are placed in a parallelepiped of volume $`L_x\times L_y\times L_z`$, with $`L_x=L_y`$ and $`L_z=2L_x`$, applying periodic boundary conditions in the $`x,y`$ and $`z`$ directions. The particles were initially placed on the sites of an FCC lattice forming a perfect planar interface, that is, all particles of type A are on the left side of the box while those of type B are on the opposite side. In this way one can obtain a minimum of two interfaces due to periodic boundary conditions. If one starts instead from a statistical mixture, usually more than two demixed regions develop, giving rise to more than two interfaces.
The initial configuration of the three-component system was chosen from an equilibrated separated binary system by picking at random $`N_c/2`$ particles from type A and $`N_c/2`$ particles from type B and replacing them by particles of type C. This way of putting the third species in the system keeps the total density constant. It is customary to carry out the simulations using the following reduced units for the distance $`r^{*}=\frac{r}{\sigma }`$, particle linear momentum $`p^{*}=\frac{p}{\sqrt{mϵ}}`$ and time $`t^{*}=\frac{t}{\sigma }\sqrt{ϵ/m}`$. In these definitions $`m`$ is the mass of each particle, which is taken to be the same for all particles, $`\sigma `$ is the particle diameter and $`ϵ`$ is the depth of the LJ potential. Similarly, one can define reduced thermodynamic quantities as follows: $`T^{*}=\frac{k__BT}{ϵ}`$, which represents the reduced temperature with $`k__B`$ Boltzmann’s constant, and $`\rho ^{*}=\rho \sigma ^3`$ for the reduced density, with $`\rho =N/V`$. The equations of motion were integrated using a fourth order predictor-corrector algorithm with an integration step-size of $`\mathrm{\Delta }t^{*}\approx 0.005`$, which in standard units is of the order of $`10^{-5}`$ nanoseconds in the scale of argon. The particles’ initial velocities were assigned from a Boltzmann distribution. The equilibration times for most of the simulations were of the order of $`10^4`$ time steps. Thermodynamic quantities were measured every 50 time-step iterations up to a total of 5$`\times 10^5`$ to $`10^6`$ measurements from which averages were evaluated. This amounts to a simulation time between 5-10 nanoseconds in the scale of argon. At the start the reduced homogeneous density in the simulation cell was set equal to $`\rho ^{*}=0.844`$, which is close to the density of the triple point of argon. Due to the inhomogeneity which develops in the system around the interface, the densities of the bulk phases are later slightly higher than this starting density. The bulk densities are evaluated when the system has been equilibrated and they depend on the conditions of the system and on $`T^{*}`$.
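For orientation, the conversion between reduced and argon units can be checked with the short sketch below. The argon Lennard-Jones parameters used there (epsilon/k_B = 119.8 K, sigma = 3.405 Å, m = 39.948 u) are standard literature values, not taken from this paper; with them the quoted temperature range and time-step scale are reproduced.

```python
# Back-of-the-envelope reduced-unit <-> argon-unit conversion (a consistency check).
eps_over_kB = 119.8            # K      (standard argon LJ value, an assumption here)
sigma = 3.405e-10              # m
mass = 39.948 * 1.66054e-27    # kg
kB = 1.380649e-23              # J/K

tau = sigma * (mass / (eps_over_kB * kB))**0.5    # LJ time unit
print("time unit   :", tau, "s")                  # ~2.2e-12 s
print("dt* = 0.005 :", 0.005 * tau * 1e9, "ns")   # ~1e-5 ns, as quoted in the text
for T_star in (0.6, 0.827, 1.4, 3.0):
    print(f"T* = {T_star:5.3f} -> T = {T_star * eps_over_kB:6.1f} K")
```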
## III Results and discussion
### A Binary mixture
Since our interest is to study the structural and thermodynamic properties of the interface, we have carried out a set of simulations for a sequence of temperatures below the critical demixing temperature for two system sizes, namely, $`N=1728`$ and $`N=2592`$ particles. We have considered these values of $`N`$ to find out about possible finite size effects. All the quantities studied show qualitatively the same tendency for the two system sizes. The interfacial behavior is investigated by calculating the density profiles, the pressure, and the interfacial tension at different temperatures. The relevant parameters of the investigated systems are summarized in Tables I and II where the bulk-values of the density and the pressure are specified. To emphasize the separated nature of the system, the values of the reduced total density in the bulk A-rich phase $`\rho ^{\mathrm{bulk}\mathrm{A}}`$, the reduced density of particles A, $`\rho __A^{\mathrm{bulk}\mathrm{A}}`$, and particles B, $`\rho __B^{\mathrm{bulk}\mathrm{A}}`$, are given. Due to the symmetry of the interactions, the B-rich phase is symmetric to the A-rich phase.
At sufficiently low temperatures the reduced density of B particles in the phase of A is very small indicating that the system is fully separated. As temperature increases the total density of the bulk region decreases slightly because the diffusion of particles through the interface increases, making the inhomogeneity smaller and driving the system towards a mixed state. This can be seen in Figures 1 and 2 where the density profiles along the $`z`$ direction (longer side of the box) are shown for the $`N=1728`$ and 2592 systems, respectively. The continuous line corresponds to a temperature of $`T^{}=0.827`$, which can be considered low, (systems 2 in Tables I and II) while the dashed line corresponds to a higher temperature $`T^{}=1.4`$ (systems 4 in Tables I and II).
When looking at the density profiles for the two system sizes studied, Figures 1 and 2, we observe that at $`T^{}=0.827`$ these quantities exhibit an oscillatory structure at the interfaces. This reminds us about the behavior of the density profile in front of a hard wall, as discussed in reference . This kind of structure is also found in the liquid-vapor system, but the oscillations are significantly stronger in the liquid-liquid interface. As expected, one also sees from these figures that the oscillations in the bulk region are less pronounced for the larger system.
The values of the reduced normal pressure $`P_n^{*}`$ shown in the sixth column of Tables I and II follow an almost linear behavior as a function of temperature in the range studied. The normal and transverse pressure profiles $`P_n(z)`$, $`P_t(z)`$ were calculated using the definition of the Irving-Kirkwood pressure tensor, which for a planar interface is given by
$$P_n(z)=\rho (z)k_{_B}T-\frac{1}{2A}\sum _{ij}\frac{z_{_{ij}}^2u^{\prime }(r_{_{ij}})}{r_{_{ij}}|z_{_{ij}}|}\theta \left(\frac{z-z_{_i}}{z_{_{ij}}}\right)\theta \left(\frac{z_{_j}-z}{z_{_{ij}}}\right),$$
(3)
$$P_t(z)=\rho (z)k_{_B}T-\frac{1}{4A}\sum _{ij}\frac{[x_{_{ij}}^2+y_{_{ij}}^2]u^{\prime }(r_{_{ij}})}{r_{_{ij}}|z_{_{ij}}|}\theta \left(\frac{z-z_{_i}}{z_{_{ij}}}\right)\theta \left(\frac{z_{_j}-z}{z_{_{ij}}}\right).$$
(5)
In these expressions the first term corresponds to the ideal gas contribution while the second comes from the intermolecular forces. For a dense system and long ranged intermolecular potentials the latter term yields the larger contribution. On the left side of Figure 3 we show the normal and tangential components of the pressure tensor as functions of $`z`$ for $`T^{}=0.827`$ and $`T^{}=1.4`$. The first feature to notice is that $`P_n^{}(z)`$ remains constant for both temperatures as one would expect for a system in thermodynamic equilibrium. The pressures $`P_n^{}`$ range from 0.25 kbar to 5.18 kbar in the scale of argon for the systems of Tables I and II. On the right side of the same figure the ideal as well as the configurational parts of $`P_n^{}(z)`$ and $`P_t^{}(z)`$ are plotted. At the lower temperature $`T^{}=0.827`$ the pressure profiles clearly show the oscillations related to the oscillatory behavior of the density profiles. One can learn that the configurational contribution is much larger than the ideal gas part since we are dealing with a dense system. These profiles are needed to evaluate the interfacial tension by means of the mechanical definition.
$$\gamma =\int _{\mathrm{bulk}_\mathrm{A}}^{\mathrm{bulk}_\mathrm{B}}[P_n(z)-P_t(z)]\,dz.$$
(6)
Also, a straightforward evaluation of $`\gamma `$ can be done using the Kirkwood-Buff formula
$$\gamma =\frac{1}{4A}\sum _{i<j}\left(1-\frac{3z_{_{ij}}^2}{r_{_{ij}}^2}\right)r_{_{ij}}u^{\prime }(r_{_{ij}}).$$
(7)
In this latter equation there appears an additional factor of $`\frac{1}{2}`$ which comes from having two interfaces in the system. From the calculational point of view it is better to use the Kirkwood-Buff formula rather than the mechanical definition since fluctuations in $`P_n(z)`$ and $`P_t(z)`$ may introduce important inaccuracies in the evaluation of $`\gamma `$ according to Eq. 6. As a matter of fact, we calculated the interfacial tension using both expressions and found consistency in the values within the statistics of the simulations. The range of temperatures, in reduced units, in which $`\gamma `$ has been studied is $`0.6\le T^{*}\le 3.0`$, which in the scale of argon corresponds to $`72\,\mathrm{K}\le T\le 359.5\,\mathrm{K}`$.
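A minimal sketch of the Kirkwood-Buff estimator, Eq. (7), for a single stored configuration is given below (not the production code used for the results that follow). It assumes reduced units, a plain O(N^2) pair loop with the minimum-image convention, and the interaction convention of this paper (alpha = 1 for all pairs except A-B); the prefactor 1/(4A) already carries the extra factor 1/2 for the two interfaces mentioned above. In practice the returned value would be averaged over many configurations.

```python
# Sketch of the Kirkwood-Buff formula, Eq. (7), for one configuration (reduced units).
import numpy as np

def gamma_kirkwood_buff(pos, species, box, alpha_AB=0.5, r_cut=3.0):
    """pos: (N,3) array; species: (N,) array of 'A'/'B'/'C'; box: (Lx,Ly,Lz)."""
    box = np.asarray(box, dtype=float)
    area = box[0] * box[1]
    total = 0.0
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= np.rint(d / box) * box                       # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        mask = r <= r_cut
        sj = species[i + 1:][mask]
        is_AB = ((species[i] == 'A') & (sj == 'B')) | ((species[i] == 'B') & (sj == 'A'))
        alpha = np.where(is_AB, alpha_AB, 1.0)            # all C pairs keep alpha = 1
        sr6 = 1.0 / r[mask]**6
        du_dr = -24.0 / r[mask] * (2.0 * sr6**2 - alpha * sr6)   # u'(r) = -F(r) of Eq. (1)
        total += np.sum((1.0 - 3.0 * (d[mask, 2] / r[mask])**2) * r[mask] * du_dr)
    return total / (4.0 * area)
```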
Unlike for the liquid-vapor interface we find a non-monotonic behavior of $`\gamma (T)`$ for the liquid-liquid interface. In Figure 4 we show the behavior of $`\gamma ^{*}`$ as a function of $`T^{*}`$ for systems with $`N=1728`$ and $`N=2592`$ particles. The main feature of this graph is the maximum of the interfacial tension at about $`T^{*}\approx 1.1`$. Such a maximum has been reported recently in the context of density functional theory. It can be understood physically as follows: A stronger mixing near the interface introduces weaker A-B bonds and raises the potential energy. This leads to an increase of $`\gamma `$, the free energy of the interface, until at high temperatures the entropy contribution drags it down. We have calculated $`\gamma (T)`$ for these two systems to check for possible finite size effects. In doing so we have to ensure that the bulk thermodynamic quantities are approximately the same for both systems. In particular, we monitored very closely the bulk density since small variations in this quantity lead to significant changes in the pressure of the system at low temperatures. So, the key point is to have approximately the same pressure for both systems. This can be achieved by adjusting the side $`L_z`$ of the simulation box for the larger system, maintaining its cross section constant. In this way we eliminate possible variations of $`\gamma (T)`$ due to variations in the cross section area. We found that the optimal average value of $`L_z^{*}`$ was 30 for the temperature range studied. In Figure 4 and Tables I and II one also sees that at low temperatures, the values of $`\gamma ^{*}(T^{*})`$ for the two system sizes differ by at most 4%, while at higher temperatures ($`T^{*}>1.1`$) the values are much closer to each other. One also observes that small variations in the bulk density lead to important changes in the pressure at low temperatures. However, at higher temperatures the variations in pressure are smaller. This is the expected behavior for a single LJ fluid. Therefore, differences in $`\gamma (T)`$ are due to the variations in pressure, although the former are much smaller than the latter. This happens because the interfacial tension is obtained from the difference between the normal and tangential components of the pressure tensor. This explains both the differences and the consistencies in the values of $`\gamma ^{*}(T^{*})`$.
### B Ternary mixture
In this section we investigate the role that a third species plays in the interface properties of a demixed binary system. A third species — C-particles — is introduced in the system and the behavior of the interfacial tension is studied as a function of concentration. The interactions of the C-particles $`F_{_{CC}}=F_{_{CA}}=F_{_{CB}}`$ are all assumed equal to the strong interactions $`F_{_{AA}}=F_{_{BB}}=F_{_{CC}}`$. When placed between the A and B, C particles avoid the weak A-B bonds and lower the potential energy. It is found that as $`N__C`$ increases $`\gamma `$ decays monotonically. This is reminiscent of surfactant-like behavior in some ternary systems. In Figure 5 we plot the density profiles of the ternary system as well as the reduced total density for different concentrations of C particles at $`T^{}=1.4`$. These profiles yield a clear evidence that C-particles like to be at the interface position rather than in the bulk phases. In this way the C-particles diminish (screen) the energetically unfavourable interactions between particles A-B. In Table III a summary of the reduced bulk densities of particles A, B and C in the A-rich phase for $`T^{}=0.827`$ and $`T^{}=1.4`$ is given. As $`N__C`$ increases the density of B in the A-rich phase also increases. This means that the C particles help the B particles to diffuse into the A-phase because the B particles can be solvated by C and therefore avoid the weak A-B interactions. As $`N__C`$ becomes sufficiently large this mechanism drives the system to a mixed state. Therefore, the C particles act like a surfactant or emulgator. This mechanism may have potential technological implications when trying to design compatibilizers. In Figure 6 the reduced interfacial tension is plotted as a function of $`N__C`$ for two different temperatures. With increasing bulk concentrations of C-particles we see a monotonic decay. This low concentration behavior is consistent with that found in reference where amphiphile like-particles C are introduced. In such a model $`\gamma ^{}`$ as a function of $`N__C`$ shows a linear decay in the small amphiphile concentration range.
In this same figure we also observe that the curve $`\gamma ^{}(N__C)`$ at $`T^{}=1.4`$ is slightly above the curve of $`\gamma ^{}(N__C)`$ at $`T^{}=0.827`$. This is an unexpected result since usually $`\gamma ^{}`$ decreases when $`T^{}`$ increases. Nonetheless, we should recall that these two points lie in the region where $`\gamma ^{}`$ has its maximum as a function of $`T^{}`$ for the binary system. This fact might also be responsible for the shoulder in the curve for $`\gamma ^{}`$ at $`T^{}=1.4`$ in the region $`200<N__C<300`$. We would like to emphasize that this structure is outside of the statistics of the simulations since the error bars are smaller than the difference between the points. We have also evaluated the excess $`\mathrm{\Gamma }__C`$ of C-particles given by Equation 8 and plot this quantity as a function of $`N__C`$ in the inset of Figure 6.
$$\mathrm{\Gamma }__C=\int _{\mathrm{bulk}_\mathrm{A}}^{\mathrm{bulk}_\mathrm{B}}\left(\rho __C(z)-\rho __C^{B}\right)dz$$
(8)
We find that for all values of $`N__C`$, $`\mathrm{\Gamma }__C(T^{*}=0.827)>\mathrm{\Gamma }__C(T^{*}=1.4)`$, which means that there are more particles of type C located at the interface at the lower temperature; therefore they screen more effectively the energetically unfavourable interactions between A and B particles, diminishing the interfacial tension even more. In the range $`200\le N__C\le 400`$ the excess of C particles at the interface at $`T^{*}=1.4`$ increases more slowly than at $`T^{*}=0.827`$, giving rise to a shoulder in $`\gamma ^{*}`$ as shown in Figure 6.
Since in the above results not the pressure but the overall density $`\rho ^{}=0.844`$ was kept constant, in Figure 7 we plot the behavior of $`P_n^{}`$ as a function of $`N__C`$ for the two temperatures indicated above. One notes that the hydrostatic pressure decreases slowly as $`N__C`$ increases. This is due to the fact that $`N__C/2`$ particles from fluid A and the same amount from fluid B were switched to particles C, replacing the weak A-B attraction by the stronger C-A and C-B attraction. This increases the cohesion and reduces the pressure. One could be tempted to obtain the upper curve from the lower by simply adding the ideal gas term $`\rho ^{}\mathrm{\Delta }T^{}`$, however, this is not the case because, as noted above with regard to Figure 3, the contribution of the configurational part to the pressure is the more relevant one.
## IV Conclusions
Molecular dynamics simulations for a mixture of two kinds of molecules A and B lead to a phase separation with an A-rich and a B-rich phase and an interface between the phases. This happens if the attractive potential between A-B is weaker than that between A-A and B-B and if density and temperature are in the two-phase region of demixing. The interface between the demixed phases has also been investigated. We have calculated density profiles at different temperatures, the components of the pressure tensor across the interface and have evaluated the interfacial tension $`\gamma `$ for a series of temperatures. The important result of the first part of this investigation is the maximum in $`\gamma (T)`$ (Figure 4). We explain this result as a consequence of a strong mixing near the interface which yields weaker A-B bonds, thus raising the potential energy, which in turn leads to an increase in $`\gamma (T)`$, until at higher temperatures the interfacial free energy decreases due to an increase of the entropy term.
Then we have added a third kind of particle, “surfactant-like”, and have studied the interfacial tension as a function of its concentration. We found that these particles reduce the interfacial tension $`\gamma `$ as their concentration increases. This happens because these particles screen the unfavorable A-B interactions and drive the system toward a mixed state.
## Acknowledgments
We thank S. Iatsevitch and M. Swiderek for helpful discussions. F.F. would like to thank the Foundation Sandoval Vallarta for financial support and the colleagues at the Physics Department of UAM-I for the grand hospitality during his visit. The calculations were carried out on the Power-Challenge computer at the UAM-I and at DGSCA-UNAM. This work is supported by CONACYT research grants Nos. L0080-E and 25298-E.
# IMPURITY DYNAMICS IN A BOSE CONDENSATE
Siu A. Chin and Harald A. Forbert
Center for Theoretical Physics and Department of Physics
Texas A&M University, College Station, TX 77843 USA
1. INTRODUCTION
Bose-Einstein condensation is the macroscopic occupation of a single quantum state. In bulk liquid Helium, most theoretical calculations agree that, in the limit of zero temperature, roughly 10% of the Helium atoms are condensed in the ground state. For recently observed atomic condensates in <sup>87</sup>Rb , <sup>7</sup>Li and <sup>23</sup>Na , nearly 100% condensation is expected because of their low density. This collective occupation essentially magnifies ground state properties by a factor $`N`$ equal to the number of condensed atoms. For macroscopic $`N10^{24}`$, this exceedingly large factor can be exploited to detect minute changes in the condensate ground state. Thus aside from its intrinsic interest, BEC is potentially a very sensitive probe.
In this work, we will explore changes in the condensate ground state induced by a “sizable” impurity. By “sizable”, we mean an impurity whose size is within a few orders of magnitude of the trap size, and not necessarily atomic in scale. A sizable impurity will “drill” a hole in the condensate wave function and alter its energy. The question is whether this microscopic change can be detected macroscopically because of the Bose-Einstein condensation effect. This is described in the next section. We discuss the effects of interaction in Section 3 and the case of two or more impurities in Section 4.
2. IMPURITY EXPULSION
The effect we seek to describe is generic. We will therefore consider the problem in its simplest conceptual context. We imagine a non-interacting Bose gas of mass $`m`$ confined to a spherical cavity (an infinite square well) of radius $`b`$. We introduce an impurity, which to first order approximation, can be regarded as a hard sphere of radius $`a`$. When the impurity is placed inside the cavity, the wave function of the Bose gas must vanish on the surface of both the cavity and the hard sphere. Since the cavity only serves to confine the Bose gas, it may or may not also trap the impurity. Classically, since there is no interaction between the cavity and the hard sphere, the hard sphere is free to be anywhere inside the cavity. However, when the cavity is filled by a Bose gas, the position of the impurity is dictated by the ground state energy of the Bose gas. Thus there is a quantum mechanically induced interaction between the hard sphere and the cavity. Figure 1. The Bose gas is confined in a spherical cavity of radius $`b`$ and excluded from an off-center impurity, which is a hard sphere of radius $`a`$.
The situation is as shown in Fig. 1, where the hard sphere is displaced by a distance $`d`$ from the cavity center. The impurity is assumed to be sufficiently massive that the Born-Oppenheimer approximation is adequate. The effective potential for the impurity is then just $`V(d)=NE_0(d)`$, where $`E_0(d)`$ is the ground state energy of a single particle confined in a spherical cavity with an off-center hole:
$$-\frac{\hbar ^2}{2m}\nabla ^2\psi (𝐫)=E(d)\psi (𝐫),\quad \psi (𝐫)=0\text{ at }|𝐫|=a\text{ and }|𝐫+𝐝|=b.$$
$`(1)`$
When $`d=0`$, both the ground state wave function and the energy are known analytically
$$\psi _0(r)=\frac{1}{r}\mathrm{sin}\left[\frac{\pi (r-a)}{(b-a)}\right],$$
$`(2)`$
$$E_0=\frac{\hbar ^2}{2m}\frac{\pi ^2}{(b-a)^2}.$$
$`(3)`$
When the impurity is off-center, spherical symmetry is broken and the problem is non-trivial. Instead of solving the problem exactly, we gauge this off-center effect by a variational calculation using the following trial function
$$\varphi (𝐫)=\left(1-\frac{a}{r}\right)(b-|𝐫+𝐝|).$$
$`(4)`$
The first factor is the zero energy solution (Laplace’s equation) satisfying the Dirichlet condition at $`|𝐫|=a`$. The second factor forces the wave function to vanish at the displaced cavity surface. For $`d=0`$, the resulting variational energy simply replaces the factor $`\pi ^2`$ in (3) by 10, which is an excellent approximation. For finite $`d`$, the resulting energies are as shown in Fig. 2. The angular integration is sufficiently cumbersome that we computed the variational energy by the method of Monte Carlo sampling. The energy is lower as the impurity moves off center. This result is easy to understand once one has seen it. The impurity “drills” a hole in the cavity’s ground state wave function. It is more costly to drill a hole near the wave function’s maximum (the center) than at its minimum (the edge). Thus a sizable impurity will be expelled by a Bose condensate. Figure 2. Left: The ground state energy of a particle in a spherical cavity of radius $`b`$ with a displaced hard sphere of radius $`a`$. The energy as a function of the displacement for various impurity sizes is normalized by dividing by the exact on-center energy. Right: Similar calculations for the case of an impurity in a harmonic well. In this case $`b`$ is the harmonic length and the energy is only normalized by dividing by the variational on-center energy. In both cases, the impurity is not assumed to be trapped by the cavity or the well.
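The variational estimate for the cavity case can be reproduced with the crude Monte Carlo sketch below (our own illustration, not the authors' code). It evaluates the ratio of integrals in the Rayleigh quotient for the trial function of Eq. (4) by uniform rejection sampling, with hbar = m = 1; unlike the figure, which is normalized by the exact on-center energy, this sketch normalizes by the variational d = 0 value, which is enough to show the energy decreasing as the impurity moves off center.

```python
# Monte Carlo sketch of the variational energy for trial function (4):
# phi(r) = (1 - a/|r|) (b - |r + d|), impurity of radius a at the origin,
# cavity of radius b centred at -d; E_var = (1/2)<|grad phi|^2>/<phi^2>, hbar = m = 1.
import numpy as np

def e_var(a, b, d, n_samples=200_000, seed=1):
    rng = np.random.default_rng(seed)
    centre = np.array([0.0, 0.0, -d])
    pts = rng.uniform(-b, b, size=(4 * n_samples, 3)) + centre   # cube around the cavity
    pts = pts[np.linalg.norm(pts - centre, axis=1) < b]          # inside the cavity
    r = np.linalg.norm(pts, axis=1)
    pts = pts[r > a]                                             # outside the impurity
    r = np.linalg.norm(pts, axis=1)
    t = pts + np.array([0.0, 0.0, d])                            # r + d
    tn = np.linalg.norm(t, axis=1)
    phi = (1.0 - a / r) * (b - tn)
    # grad phi = (a/r^3) r_vec (b - |t|) - (1 - a/r) t_vec/|t|
    grad = (a / r**3)[:, None] * pts * (b - tn)[:, None] - ((1.0 - a / r) / tn)[:, None] * t
    return 0.5 * np.mean(np.sum(grad**2, axis=1)) / np.mean(phi**2)

a, b = 0.2, 1.0
e0 = e_var(a, b, 0.0)
for d in (0.0, 0.2, 0.4, 0.6):
    print(d, e_var(a, b, d) / e0)   # the ratio drops below 1 as the impurity moves off centre
```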
Similar results are obtained if one replaces the spherical cavity by a spherical harmonic oscillator. The size paramenter $`b`$ can be identified as the harmonic length via
$$\hbar \omega =\frac{\hbar ^2}{m}\frac{1}{b^2}.$$
$`(5)`$
The harmonic oscillator ground state wave function is then
$$\psi _0(𝐫)=\mathrm{exp}[-\frac{1}{2}(\frac{r}{b})^2].$$
$`(6)`$
When an impurity is placed in this harmonic well, the ground state wave function is again altered. We now do a variational calculation with the trial function
$$\varphi (𝐫)=\left(1-\frac{a}{r}\right)\mathrm{exp}[-\frac{1}{2}(\frac{|𝐫+𝐝|}{\alpha })^2],$$
$`(7)`$
where $`\alpha `$ is a variational parameter. The angular integral can be done exactly when we displace the harmonic oscillator rather than the impurity. The remaining radial integral can then be computed by numerical quadrature. The results are also shown in Fig. 2. In order to compare this with the cavity case, we must take at least $`2b`$, and not just $`b`$, as the equivalent cavity radius. Since the exact on-center energy in this case has no simple analytic form, we normalize by dividing by the variational on-center energy.
To get an order-of-magnitude estimate of the expulsion force, consider the case of lithium atoms with $`\frac{\hbar ^2}{m}\approx 7`$ K Å<sup>2</sup> and $`b\approx 3\times 10^4`$ Å. The characteristic energy scale is
$$\hbar \omega =\frac{\hbar ^2}{m}\frac{1}{b^2}\approx \frac{7}{(3\times 10^4)^2}\approx 10^{-8}\mathrm{K}.$$
The variation in energy is over some fraction of the trap radius. Thus the expulsion force due to each atom is
$$F\approx \frac{10^{-8}\mathrm{K}}{10^4\mathrm{\AA }}\approx \frac{10^{-8}\mathrm{K}\times 10^{-23}\mathrm{J}/\mathrm{K}}{10^4\mathrm{\AA }\times 10^{-10}\mathrm{m}/\mathrm{\AA }}\approx 10^{-25}\mathrm{Newton}.$$
For $`N\approx 10^{24}`$ Bose condensed atoms, the expulsion force IS macroscopic. Currently, the maximum $`N`$ achieved in any of the atomic trap experiments is only about $`10^6`$. Such a small force may still be detectable if the impurity is sufficiently light.
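The arithmetic behind this estimate is summarized in the short sketch below; the energy variation is taken over the 10<sup>4</sup> Å length appearing in the expression above, and Boltzmann's constant supplies the K-to-J conversion.

```python
# Order-of-magnitude check of the expulsion force per atom and in total.
kB = 1.380649e-23              # J/K
dE_per_atom = 1e-8 * kB        # ~hbar*omega expressed in J
dx = 1e4 * 1e-10               # 10^4 Angstrom in m
F_per_atom = dE_per_atom / dx
print(F_per_atom)              # ~1e-25 N per condensed atom
print(1e24 * F_per_atom)       # ~0.1 N: macroscopic for N ~ 10^24 atoms
print(1e6 * F_per_atom)        # ~1e-19 N for the N ~ 10^6 of current traps
```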
3. THE EFFECT OF INTERACTION
To gauge the effect of interaction among the Bose condensed atoms, we do a variational Hartree calculation. For interacting Bosons confined in a harmonic well, the Hamiltonian is
$$H=\frac{\hbar ^2}{2m}\underset{i}{\sum }\left(-\nabla _i^2+\frac{r_i^2}{b^4}\right)+\underset{i>j}{\sum }v(𝐫_{ij}).$$
$`(8)`$
In the low density limit, the two-body potenial can be replaced by the scattering length approximation
$$v(𝐫_{ij})\approx 4\pi \frac{\hbar ^2}{m}a_{sc}\delta ^3(𝐫_{ij}).$$
$`(9)`$
For a Hartree wave function consisting of a product of normalized single particle states $`\varphi (𝐫_i)`$
$$\mathrm{\Psi }(𝐫_1,𝐫_2\mathrm{\dots }𝐫_n)=\underset{i=1}{\overset{n}{\prod }}\varphi (𝐫_i),$$
$`(10)`$
the variational energy is given by
$$\frac{E_V}{N}=\frac{\hbar ^2}{2m}\left[d^3r\varphi (𝐫)\left(-\nabla ^2+\frac{r^2}{b^4}\right)\varphi (𝐫)+(N-1)4\pi a_{sc}d^3r\varphi ^2(𝐫)\varphi ^2(𝐫)\right].$$
$`(11)`$
Minimizing this with respect to $`\varphi (𝐫)`$ yields the Gross-Pitaevskii equation. To account for the hard sphere impurity, one must again require $`\varphi (𝐫)`$ to vanish on the impurity’s surface. Instead of solving this problem exactly, we simply take
$$\varphi (𝐫)=\frac{1}{\sqrt{Z}}\left(1-\frac{a}{|𝐫+𝐝|}\right)\mathrm{exp}[-\frac{1}{2}(\frac{r}{\alpha })^2],$$
$`(12)`$
where $`Z`$ is the normalization integral, and minimize the energy functional(11) with respect to the parameter $`\alpha `$. This is in the spirit of using Gaussian trial wave functions to study the Gross-Pitaevskii equation, as suggested by Baym and Pethick in the context of BEC.
The interaction is characterized by a strength parameter $`g=Na_{sc}/b`$. The left panel of Fig. 3 shows the effect for positive scattering length. For $`g<5`$, the results are only slightly higher than those in the last section. All atomic experiments are far below the $`g=5`$ limit. In the extreme case of $`g\approx 10`$, the picture changes completely and the impurity is stabilized at the center. However, when the effective interaction is this strong, the mean-field approximation is no longer credible and one must consider two-body correlations, as in the case of liquid Helium.
Figure 3. Left: The change in energy at one impurity size $`a=b/5`$ as a function of the interaction strength $`g=Na_{sc}/b`$. Right: The change in energy for various impurity sizes at one negative interaction strength $`g=-0.65`$. Variationally, the system would collapse at $`g=-0.67`$ without any impurity. The straight segments correspond to impurity locations that would destabilize the condensate.
The effect of negative scattering length is shown on the right panel of Fig. 3. Variationally, as shown by Fetter, the condensate by itself is unstable for $`g<-0.67`$. When $`g`$ is close to but above this critical value, the introduction of a sufficiently large impurity will cause the condensate to collapse. This is shown by the straight line segments in Fig. 3. For these locations of the impurity, there are no energy minima for the variational parameter $`\alpha `$. When the impurity is at the center, it increases the single particle energy and there is no collapse. However, as it moves off center, the energy is lowered and the condensate collapses. This is also understandable from the opposite direction; when $`g`$ is close to the critical value, as the impurity enters the condensate, it increases the condensate density without substantially increasing the single particle energy. It pushes $`g`$ over the critical value and the condensate collapses.
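The critical value quoted above can be recovered from the impurity-free limit of Eq. (11). The sketch below is our own reduction of Eq. (11) for a pure Gaussian of width alpha (the impurity factor dropped, which is an assumption made only for this illustration): per particle and in units of hbar*omega the energy is (3/4)(1/s^2 + s^2) + g/(sqrt(2*pi)*s^3) with s = alpha/b and g = N a_sc/b, and scanning g downward one finds the value at which the local minimum disappears.

```python
# Gaussian variational energy per particle (no impurity), in units of hbar*omega.
import numpy as np

def e_gauss(s, g):
    return 0.75 * (1.0 / s**2 + s**2) + g / (np.sqrt(2.0 * np.pi) * s**3)

s = np.linspace(0.05, 2.5, 5000)

def has_metastable_minimum(g):
    e = e_gauss(s, g)
    local_min = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])   # interior local minimum
    return np.any(local_min)

gs = np.linspace(-1.0, 0.0, 2001)
g_crit = gs[np.argmax([has_metastable_minimum(g) for g in gs])]
print("collapse threshold  g = N*a_sc/b ~", g_crit)      # ~ -0.67
```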
4. TWO OR MORE IMPURITIES
For the case of $`n`$ impurities, each having a different radius $`a_i`$ and located at $`𝐝_i`$, one can simply generalize the single particle trial function (12) to
$$\varphi (𝐫)=\frac{1}{\sqrt{Z}}\underset{i=1}{\overset{n}{\prod }}f_i(𝐫)\mathrm{exp}[-\frac{1}{2}(\frac{r}{\alpha })^2],$$
$`(13)`$
with
$$f_i(𝐫)=\left(1-\frac{a_i}{|𝐫+𝐝_i|}\right).$$
$`(14)`$
The resulting expression for the variational energy has a simple form
$$\frac{E_V}{N}=\frac{\hbar ^2}{m}\left[\frac{3}{2}\frac{1}{\alpha ^2}+\frac{1}{2}\left\langle \underset{i=1}{\overset{n}{\sum }}𝐠_i^2-\left(\underset{i=1}{\overset{n}{\sum }}𝐠_i-\frac{𝐫}{\alpha ^2}\right)^2+\frac{r^2}{b^4}\right\rangle \right],$$
$`(15)`$
where
$$𝐠_i=\frac{\nabla f_i}{f_i},$$
$`(16)`$
and the expectation value is with respect to the trial function (13). The first two terms in the expectation value involving $`𝐠_i`$ may be interpreted as the kinetic energy of each impurity and their collective interaction with the harmonic well. With more than one impurity, there is no way to do the angular integration exactly. We evaluated the expectation value in (15) by the Monte Carlo method.
Figure 4. Left: Two impurities maintaining the same distance from the harmonic well center while rotating through an angular separation $`\theta `$. Right: The energy for two impurities, each of size $`a=0.2b`$, as a function of angular separation at various distances from the well center.
For $`n=2`$, the resulting energy $`E_2=E_V/\hbar \omega `$ is shown in Fig. 4. In order to disentangle the two-impurity interaction energy from the effect of off-center energy dependence, we separate the two impurities by keeping each at equal distance from the trap center. The configuration used is as shown in the left panel of Fig. 4. The resulting energy on the right panel clearly shows that there is an effective attraction between the two impurities. This is induced by the Bose condensate. It is less costly to “drill” a slightly larger hole at one place than to “drill” two separated holes. One can therefore infer that multiple impurities tend to clump together and will be expelled from the trap center.
The effect of interaction can again be assessed by including a Gross-Pitaevskii type mean-field interaction. For the present qualitative discussion, we have not bothered to include this correction.
5. CONCLUSIONS
In this work, we have considered the possible use of BEC as a sensitive probe for detecting microscopic changes in the condensate ground state. Our variational studies suggest that
a) A hard-sphere-like impurity will be expelled from the center of a condensate. For a light but sizable impurity, such an expulsion may be macroscopically observable.
b) A sizable, hard-sphere-like impurity will accelerate the collapse of a condensate with negative scattering length.
c) A Bose condensate induces an effective attraction among hard-sphere-like impurities. This may have interesting implications for induced dimerization or clusterization of weakly interacting impurities, such as <sup>3</sup>He.
Work is currently in progress to seek exact solutions for problems that have only been solved variationally in this work.
ACKNOWLEDGMENTS
This research was funded, in part, by the U. S. National Science Foundation grants PHY95-12428 and DMR-9509743 (to SAC). The idea of impurity expulsion evolved from considerations of impurity delocalization in Helium droplets\[7-9\]. The latter was first suggested by my colleague and collaborator E. Krotscheck.
REFERENCES
1. R. M. Panoff and P. A. Whitlock, in Momentum Distribution, edited by R. N. Silver and P. E. Sokol, Plenum Publishing, 1989.
2. M. H. Anderson et al., Science 269, 198 (1995)
3. C. C. Bradley et al., Phys. Rev. Lett. 75, 1687 (1995); 78, 985 (1997)
4. K. B. Davis et al., Phys. Rev. Lett. 75, 3969 (1995)
5. G. Baym and C. Pethick, Phys. Rev. Lett. 76, 2477 (1996)
6. A. L. Fetter, “Ground State and Excited States of a confined Bose Gas,” cond-mat/9510037.
7. E. Krotscheck and S. A. Chin, Chem. Phys. Lett. 227, 143 (1994).
8. S. A. Chin and E. Krotscheck, Phys. Rev. B52, 10405 (1995)
9. S. A. Chin and E. Krotscheck, Recent Progress in Many-Body Theories, Vol. 4, P.85, edited by E. Schachinger et al., Plenum Press, 1995.
# The Degree of Generality of Inflation in FRW Models with Massive Scalar Field and Hydrodynamical Matter
## 1 Introduction
In the last two decades the dynamics of an isotropic Universe filled with a massive scalar field has attracted great attention. From the physical point of view, the most interesting regime is the inflationary one. During inflation the system “forgets” its initial conditions and other characteristics of the pre-inflationary era, such as the possible presence of other types of matter in addition to the scalar field, spatial curvature, etc., due to a rapid growth of the scale factor. This feature enables us to use such a simple model for describing the physics of the early Universe from the instant when the inflationary regime was established.
On the other hand, the problem of pre-inflationary era and initial condition for inflation is for the same reason very difficult because nowadays we have no physical “probe” which might give us information about that time. In such a situation we may hope to extract some information from mathematical studies of the corresponding dynamical system. One of the most important problems is to describe the set of initial conditions which led to the inflationary regime. It can also clarify whether this regime is natural for this dynamical system or it requires some kind of fine tuning of the initial data.
For this question to make sense, it is necessary to specify a measure on the space of initial conditions. It is common to use the hypersurface with the energy density equal to the Planckean one (called the Planck boundary) as the initial-condition space. A common angular measure on the Planck boundary will be described below. As was pointed out in Ref. (see also a detailed discussion of possible choices of the measure in the cited paper), this choice is based on the physical considerations of inapplicability of classical gravity beyond the Planck boundary and of the absence of any information from this region. When the measure is specified, the inflation generality problem can be studied quantitatively.
A solution can depend on the physical condition in the epoch followed by inflation. In Refs. it was found that the distribution of initial data leading to inflation strongly depends on the sign of the spatial curvature. If it is negative or zero, the scale factor of the Universe cannot pass through extremum points (see below). In this case all the trajectories in the configuration space ($`a,\phi `$), where $`a`$ is the scale factor and $`\phi `$ is the scalar field, starting from a sufficiently large value $`\phi _0`$, reach a slow-roll regime and experience inflation. If we start from the Planck energy, a measure of non-inflating trajectories is about $`m/m_P`$ where $`m`$ is the mass of the scalar field and $`m_P`$ is the Planck mass. From observational reasons, this ratio is about $`10^5`$ so almost all trajectories lead to the inflationary regime. But positive spatial curvature allows a trajectory to have a point of maximal expansion which results in increasing the measure of non-inflating trajectories to $`0.3`$ .
Another important characteristic of the pre-inflationary era which is also “forgotten” during inflation is the possible presence of a hydrodynamical matter in addition to the scalar field. Decreasing even more rapidly than the curvature with increasing scale factor $`a`$, the energy density of hydrodynamical matter could not affect the slow-rolling conditions and so has a tiny effect on the dynamics if the spatial curvature is nonpositive. But the conditions for extrema of $`a`$ can change significantly in a closed model, so the latter with a scalar field and hydrodynamical matter requires special analysis.
The structure of this paper is as follows: in Sec. 2 we consider the dynamical system corresponding to an isotropic Universe with a massive scalar field and without any other type of matter. We also show how to use the configuration space $`(a,\phi )`$ for illustrating the generality of inflation problem. In Sec. 3 this method is applied to a closed isotropic Universe with scalar field and hydrodynamical matter in the form of perfect fluid. In Sec. 4 we briefly discuss the generality of the results obtained in Sec. 3.
## 2 Basic equations and dimensionless variables
We consider a cosmological model with the action
$$S=\int d^4x\sqrt{-g}\left\{-\frac{m_P^2}{16\pi }R+\frac{1}{2}g^{\mu \nu }\partial _\mu \phi \partial _\nu \phi -\frac{1}{2}m^2\phi ^2\right\}.$$
(2.1)
For a closed Friedmann model with the metric
$$ds^2=dt^2-a^2(t)d^2\mathrm{\Omega }^{(3)},$$
(2.2)
where $`a(t)`$ is the scale factor, $`d^2\mathrm{\Omega }^{(3)}`$ is the metric on a unit 3-sphere and $`\phi `$ is a homogeneous scalar field, we can get the following equations of motion
$$\frac{m_P^2}{16\pi }\left(\ddot{a}+\frac{\dot{a}^2}{2a}+\frac{1}{2a}\right)+\frac{a\dot{\phi }^2}{8}-\frac{m^2\phi ^2a}{8}=0$$
(2.3)
$$\ddot{\phi }+\frac{3\dot{\phi }\dot{a}}{a}+m^2\phi =0.$$
(2.4)
Besides, we can write down the first integral of motion for our system
$$-\frac{3}{8\pi }m_P^2(\dot{a}^2+1)+\frac{a^2}{2}\left(\dot{\phi }^2+m^2\phi ^2\right)=0.$$
(2.5)
It is easily seen from (2.5) that the points of maximal expansion and contraction, i.e. the points where $`\dot{a}=0`$ can exist only in a region where
$$\phi ^2\le \frac{3}{4\pi }\frac{m_P^2}{m^2a^2},$$
(2.6)
which represents the field in the half-plane $`0a<+\mathrm{}`$, $`\mathrm{}<\phi <+\mathrm{}`$ bounded by the hyperbolic curves
$$\phi \le \sqrt{\frac{3}{4\pi }}\frac{m_P}{ma}\text{ and }\phi \ge -\sqrt{\frac{3}{4\pi }}\frac{m_P}{ma}$$
(see Fig. 1). Sometimes the region determined by the inequalities (2.6) is called Euclidean or “classically forbidden”. One can argue about the validity of such a definition (for details see ), but we shall use it for convenience. Now we would like to distinguish between the maximal contraction points where $`\dot{a}=0,\ddot{a}>0`$ and those of maximal expansion where $`\dot{a}=0,\ddot{a}<0`$. Let us put $`\dot{a}=0`$, in this case one can express $`\dot{\phi }^2`$ from (2.5) as
$$\dot{\phi }^2=\frac{3}{4\pi }\frac{m_P^2}{a^2}-m^2\phi ^2.$$
(2.7)
Substituting (2.7) and $`\dot{a}=0`$ into Eq. (2.3), we have
$$\ddot{a}=\frac{4\pi m^2\phi ^2a}{m_P^2}-\frac{2}{a}.$$
(2.8)
From (2.8) one can easily see that the possible points of maximal expansion are localized inside the region
$$\phi ^2\le \frac{1}{2\pi }\frac{m_P^2}{m^2a^2},$$
(2.9)
while those of maximal contraction (bounces) lie outside the region (2.9) being at the same time inside the Euclidean region (2.6) (see Fig. 1) .
It is convenient to employ the dimensionless quantities
$$x=\frac{\phi }{m_P},y=\frac{\dot{\phi }}{mm_P},z=\frac{\dot{a}}{ma}.$$
We will study the dynamics of the Universe starting from the Planck boundary
$$\frac{m^2\phi ^2}{2}+\frac{\dot{\phi }^2}{2}=m_P^4$$
(2.10)
We also introduce the convenient angular parametrization of the Planck boundary
$$\frac{m^2\phi ^2}{2}=m_P^4\mathrm{cos}^2\varphi ,\frac{\dot{\phi }^2}{2}=m_P^4\mathrm{sin}^2\varphi .$$
(2.11)
This variable $`\varphi `$ along with the variable $`z`$ determine the initial point on the Planck boundary completely. $`z`$ can vary in a compact region from $`0`$ to $`z_{\mathrm{max}}=\sqrt{8\pi /3}(m_P/m)`$; the corresponding initial values of the scale factor $`a`$ vary from $`a_{\mathrm{min}}=\sqrt{3/(8\pi m_P^2)}`$ to $`+\mathrm{}`$.
The measure we have used is the area $`S`$ of the initial conditions on the $`(z,\varphi )`$ plane over the total area $`S_{\mathrm{Pl}}`$ of the Planck boundary.
All definitions and properties of these variables remain unchanged in the presence of ordinary matter \[except for adding the matter energy in the left-hand side of (2.10)\], which will be studied in the next section. And now we briefly recall the situation without any matter in addition to the massive scalar field. Though all plots will be in the $`(z,\varphi )`$-plane, it is useful to keep in mind the $`(a,\phi )`$ configuration space.
As we know from , the fate of a trajectory starting from $`\dot{a}=0`$ can be deduced from the coordinates of the initial point: a trajectory with $`z=0`$ and an initial point lying between the two hyperbolae of Fig. 1 will expand, while an initial point lying below the separating curve leads to contraction instead of the inflationary regime. It is clear from (2.6), (2.9) that the scalar field value on the separating curve is $`\sqrt{2/3}`$ times the value of the scalar field on the Euclidean boundary for a fixed value of the scale factor. In the angular parametrization (2.11), the value of $`\phi _0`$ lying on the separating curve and separating these two regimes corresponds to $`\varphi =\mathrm{arccos}(\sqrt{2/3})\approx 0.61`$. An initial, sufficiently large positive Hubble constant gives the universe a chance to pass a “dangerous” region of possible maximal expansion points (to the left of the separating curve) and to reach the inflationary regime. The resulting distribution of initial points on the plane $`(z,\varphi )`$ which do not lead to inflation is as shown in Fig. 2(a). The total measure of such trajectories is about $`30\%`$ of the plane $`(z,\varphi )`$.
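This vacuum-case classification can be reproduced with the crude sketch below (our own illustration, not the integration scheme used for Fig. 2). It integrates Eqs. (2.3)-(2.5) in Planck units with m_P = 1, an illustrative mass m = 10^{-5} m_P consistent with the ratio quoted in the Introduction, and initial data on the Planck boundary with z = 0; a trajectory is called non-inflationary as soon as the expansion rate turns negative, which is a crude stand-in for the full criterion but enough to locate the separating angle near 0.61.

```python
# Fixed-step RK4 integration of the closed FRW + massive scalar field system,
# Planck units (m_P = 1), matter-free case, starting on the Planck boundary at z = 0.
import numpy as np

m = 1e-5                                   # scalar field mass in Planck units (assumed)

def rhs(y):
    a, adot, phi, phidot = y
    # acceleration equation, equivalent to Eq. (2.3) together with constraint (2.5)
    addot = -(4.0 * np.pi / 3.0) * a * (2.0 * phidot**2 - m**2 * phi**2)
    phiddot = -3.0 * (adot / a) * phidot - m**2 * phi
    return np.array([adot, addot, phi * 0 + phidot, phiddot])[[0, 1, 2, 3]] if False else np.array([adot, addot, phidot, phiddot])

def inflates(varphi, dt=1e-3, t_max=50.0):
    a0 = np.sqrt(3.0 / (8.0 * np.pi))      # a_min on the Planck boundary
    y = np.array([a0, 0.0, np.sqrt(2.0) * np.cos(varphi) / m,
                  np.sqrt(2.0) * np.sin(varphi)])
    for _ in range(int(t_max / dt)):
        k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if y[1] < 0.0:
            return False                   # point of maximal expansion: recollapse
        if y[0] > 1e6:
            return True                    # ~15 e-folds of growth: inflationary regime
    return True

angles = np.linspace(0.55, 0.68, 14)
print([(round(v, 3), inflates(v)) for v in angles])   # flips near varphi ~ 0.61-0.62
```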
## 3 A model with scalar field and perfect fluid
Now we add a perfect fluid with the equation of state $`P=\gamma E`$. The parameter $`\gamma `$ can in principle vary in the range $`-1\le \gamma \le 1`$. In this paper the case $`-1/3<\gamma \le 1`$ will be considered. This case contains all known kinds of matter in the form of a perfect fluid except the cosmological constant. Three cases of particular physical interest are $`\gamma =0`$ (dust), $`\gamma =1/3`$ (ultrarelativistic matter) and $`\gamma =1`$ (massless scalar field).
The equation of motion are now
$$\frac{m_P^2}{16\pi }\left(\ddot{a}+\frac{\dot{a}^2}{2a}+\frac{1}{2a}\right)+\frac{a\dot{\phi }^2}{8}-\frac{m^2\phi ^2a}{8}-\frac{Q}{12a^{p+1}}(1-p)=0$$
(3.1)
$$\ddot{\phi }+\frac{3\dot{\phi }\dot{a}}{a}+m^2\phi =0.$$
(3.2)
with the constraint
$$-\frac{3}{8\pi }m_P^2(\dot{a}^2+1)+\frac{a^2}{2}\left(\dot{\phi }^2+m^2\phi ^2\right)+\frac{Q}{a^p}=0.$$
(3.3)
Here $`p=1+3\gamma `$, $`Q`$ is a constant from the equation of motion for matter which can be integrated in the form
$$Ea^{p+2}=Q=const.$$
(3.4)
Before presenting the results of a numerical integration of (3.1)–(3.3) let us make some qualitative statements.
The Euclidean region is now bounded from large values of the field $`\phi `$ and small values of the scale factor $`a`$ (see Fig. 1). The upper point of the Euclidean boundary $`\phi =\phi _{\mathrm{max}}`$ corresponds to $`a^p=4\pi (p+2)Q/(3m_P^2)`$ and the fact that there is no bounce for bigger values of the scalar field is related to a transition from chaotic to regular types of dynamics, described by (3.1)–(3.3) as we have shown in .
But for our present purposes it is important that the Euclidean region disappears at
$$a^p=\frac{8\pi }{3m_P^2}Q.$$
(3.5)
We also need the equation for $`\ddot{a}`$ at the points of bounce:
$$\ddot{a}=-\frac{2}{a}+\frac{4\pi }{m_P^2}m^2a\phi ^2+\frac{4\pi }{3m_P^2}\frac{Q}{a^{p+1}}(4-p).$$
(3.6)
Thus for $`p=4`$ (a massless scalar field) the separating curve is the same as it was for the case without matter (Fig. 1(b)), while for $`p=1`$ and $`p=2`$ we have an additional term (Fig. 1(c)). We note that the part of the separating curve beyond the Euclidean region is related to the so-called Euclidean counterparts of the equations of motion studied in quantum cosmology (see ) and will not be considered here.
In all cases the separating curve crosses the Euclidean boundary at
$$a_{\mathrm{cr}}^p=\frac{4\pi (2+p)}{3m_P^2}Q,$$
(3.7)
Keeping in mind this feature and the geometry of the Euclidean region, it is sufficiently simple to explain the numerical results plotted in Fig. 2.
The influence of matter is significant only for small values of the scale factor which correspond to small values of $`z`$ in Fig. 2. For $`Q`$ small enough to keep $`a_{\mathrm{min}}`$ greater than $`a_{\mathrm{cr}}`$, the situation is like that in Fig. 2(b): only trajectories with small initial values of $`z`$ “feel” the presence of additional matter and can change their behaviour significantly. The measure of trajectories falling to singularity slowly increases with increasing $`Q`$. When $`a_{\mathrm{min}}=a_{\mathrm{cr}}`$, all the trajectories with zero velocity fall to the singularity because their initial points lie lower than the separating curve (see Fig. 2(c)). A further increase of $`Q`$ leads to the situation of Fig. 2(d) - there exists some minimal value of $`z`$ which is necessary for reaching the inflationary asymptotic. The measure of such trajectories keeps on diminishing with increasing $`Q`$.
But for
$$Q=\left(\frac{3}{8\pi }\right)^{1+p/2}m_P^{2-p}$$
(3.8)
$`a_{\mathrm{min}}`$ becomes equal to the value (3.5). The Euclidean boundary does not exist any more at $`a_{\mathrm{min}}`$. This situation corresponds to a density of matter so large, that a large spatial curvature (small $`z`$) is incompatible with the restriction of the initial density by the Planck value. The value $`z_{\mathrm{min}}`$ bounds the physically admissible initial conditions with densities smaller than the Planck density (Fig. 2(e)). In such a situation the fraction of initial conditions leading to inflation among all physically admissible initial conditions ($`z>z_{\mathrm{min}}`$) decreases with increasing $`Q`$ (see Figs. 2(e) – 2(f)).
So the measure of non-inflating trajectories as a function of the initial density of ordinary matter has a maximum. This maximum corresponds to such $`Q`$ that the density of ordinary matter at $`z=0`$ has just the Planck value. The numerical value of this maximum slowly depends on $`p`$. For the three cases mentioned above the numerical values are
$`0.55`$ for $`p=1`$ ($`\gamma =0`$),
$`0.56`$ for $`p=2`$ ($`\gamma =1/3`$),
$`0.58`$ for $`p=4`$ ($`\gamma =1`$).
## 4 Discussion
The results are not essentially changed if we start not exactly from the Planck energy but from smaller values. This is important because we cannot be sure that our model is valid up to exactly the Planck energy scale. Indeed, the essential criterion for inflation is that typical initial values of $`\phi `$ lie in the slow-rolling region. The value of $`\phi _{\mathrm{sep}}`$ separating the slow-roll (for $`\phi >\phi _{\mathrm{sep}}`$) and oscillatory (for $`\phi <\phi _{\mathrm{sep}}`$) regimes can be estimated as $`\phi _{\mathrm{sep}}=m_P/(2\sqrt{\pi })`$.
Now, if we start from the energy density $`E_{\mathrm{in}}=ϵm_P^4`$, $`ϵ<1`$, the maximum possible initial value of $`\phi `$ is $`\phi =\sqrt{2E_{\mathrm{in}}}/m`$ and the condition $`\phi >>\phi _{\mathrm{sep}}`$ leads to $`m^2/m_P^2<<ϵ`$. If this condition is satisfied, then only a tiny part of the trajectories (with the measure $`ϵ^{-1/2}m/m_P`$) falls into an oscillatory regime with an insufficient degree of inflation, while the main part of non-inflationary trajectories falls into a singularity due to the spatial curvature, and their measure is almost independent of the scalar field mass.
The presence of ordinary matter does not change the slow-roll regime, so all the aforesaid about the influence of the scalar field mass on the measure of non-inflationary trajectories is still valid.
The configuration of the Euclidean boundary and the separating curves in the presence of matter depends on the value of the scale factor. The value of the initial energy density determines the minimal possible value of the scale factor (see the constraint equation) and therefore can influence the results. But it is clear from (3.5), (3.7) that the configuration of the curves is invariant under transformations keeping $`Q/a^p`$ constant. Using the equation for $`a_{\mathrm{min}}`$, this condition can be rewritten as $`Qϵ^{p/2}=const`$. Thus the transformations
$$\begin{array}{c}E_{\mathrm{in}}\to ϵE_{\mathrm{in}},\hfill \\ Q\to ϵ^{-p/2}Q\hfill \end{array}$$
leave the situation unchanged. This symmetry of Euclidean and separating curves indicates that the maximum fraction of non-inflating trajectories does not depend on $`ϵ`$. The maximum fraction is achieved when the density of ordinary matter at $`a_{\mathrm{min}}`$ is equal to $`E_{\mathrm{in}}`$. These qualitative considerations were also confirmed by direct numerical integration of the equations of motion.
As a result, the presence of a perfect fluid with $`0\le \gamma \le 1`$ in the Universe filled by a massive scalar field can enlarge the fraction of non-inflationary trajectories, but this fraction cannot exceed about $`60\%`$, and the inflationary asymptotic regime remains rather natural.
## Acknowledgments
The author is grateful to A.Yu. Kamenshchik and I.M. Khalatnikov for discussions and constant support. This work was supported by Russian Basic Research Foundation via grants 96-02-16220 and 96-02-17591.
# 1 Crashes are outliers
## 1 Crashes are outliers
It is well-known that the distributions of stock market returns exhibit “fat tails”, deviating significantly from the time-honored Gaussian description: a $`5\%`$ daily loss occurs approximately once every two years, while the Gaussian framework would predict one such loss in about a thousand years.
Crashes are of an even more extreme nature. We have measured the number $`N(D)`$ of times a given level of draw down $`D`$ has been observed in this century in the Dow Jones daily Average. A draw down is defined as the cumulative loss from the last local maximum to the next minimum. $`N(D)`$ is well fitted by an exponential law
$$N(D)=N_0e^{-D/D_c},\quad \text{with }D_c\approx 1.8\%.$$
(1)
However, we find that three events stand out blatantly. In chronological order: World War 1 (the second largest), Wall Street Oct. 1929 (the third largest) and Wall Street Oct. 1987 (the largest). Each of these draw downs lasted about three days. Extrapolating the exponential fit (1), we estimate that the return time of a draw down equal to or larger than $`28.8\%`$ would be more than $`160`$ centuries. In contrast, the market has sustained two such events in less than a century. This suggests a natural unambiguous definition for a crash, as an outlier, i.e., an extraordinary event with an amplitude above $`15\%`$.
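The construction behind Eq. (1) and the return-time extrapolation can be sketched as follows. The snippet below uses a synthetic series of i.i.d. Gaussian daily returns as a stand-in for the Dow Jones data (an assumption made only to keep the example self-contained), so the fitted scale will not equal the quoted 1.8%; the point is the procedure, not the numbers.

```python
# Sketch of the draw down statistics and the exponential fit of Eq. (1).
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0003, 0.011, 25000)       # placeholder daily returns (assumption)

# A draw down = cumulative loss from a local maximum to the following local minimum,
# i.e. the size of each maximal run of consecutive negative daily moves.
drawdowns, run = [], 0.0
for r in returns:
    if r < 0:
        run += r
    elif run < 0:
        drawdowns.append(-run)
        run = 0.0
drawdowns = np.array(drawdowns)

# Exponential law N(D) ~ exp(-D/D_c): for an exponential distribution D_c is the mean.
D_c = drawdowns.mean()
print("fitted D_c:", D_c)
# Return time (in years of ~250 trading days) of a draw down >= 28.8% under the fit:
n_per_year = len(drawdowns) / (len(returns) / 250.0)
print("return time (years):", 1.0 / (n_per_year * np.exp(-0.288 / D_c)))
```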
Large price movements are often modeled as Poisson-driven jump processes. This accounts for the bulk of the statistics. However, the fact that large crashes are outliers implies that they are probably controlled by different amplifying factors, which can lead to observable precursory signatures. Here, we propose that large stock market crashes are analogous to “critical points”, a technical term in Physics which refers to regimes of large-scale cooperative behavior such as close to the Curie temperature when the millions of tiny magnets in a bar magnet start to influence each other and eventually end up all pointing in one direction. We present the theory and then test it against facts.
## 2 A rational imitation model of crashes
Our model contains the following ingredients :
1. A system of traders who are influenced by their “neighbors”;
2. Local imitation propagating spontaneously into global cooperation;
3. Global cooperation among traders causing a crash;
4. Prices related to the properties of this system.
The interplay between the progressive strengthening of imitation, controlled by the first three ingredients, and the ubiquity of noise requires a stochastic description. A crash is not certain but can be characterized by its hazard rate $`h(t)`$, i.e., the probability per unit time that the crash will happen in the next instant if it has not happened yet.
The crash hazard rate $`h(t)`$ embodies subtle uncertainties of the market : when will the traders realize with sufficient clarity that the market is overvalued? When will a significant fraction of them believe that the bullish trend is not sustainable? When will they feel that other traders think that a crash is coming? Nowhere is Keynes’s beauty contest analogy more relevant than in the characterization of the crash hazard rate, because the survival of the bubble rests on the overall confidence of investors in the market bullish trend.
A crash happens when a large group of agents place sell orders simultaneously. This group of agents must create enough of an imbalance in the order book for market makers to be unable to absorb the other side without lowering prices substantially. One curious fact is that the agents in this group typically do not know each other. They did not convene a meeting and decide to provoke a crash. Nor do they take orders from a leader. In fact, most of the time, these agents disagree with one another, and submit roughly as many buy orders as sell orders (these are all the times when a crash does not happen). The key question is to determine by what mechanism they suddenly manage to organise a coordinated sell-off.
We propose the following answer : all the traders in the world are organised into a network (of family, friends, colleagues, etc) and they influence each other locally through this network : for instance, an active trader is constantly on the phone exchanging information and opinions with a set of selected colleagues. In addition, there are indirect interactions mediated for instance by the media. Specifically, if I am directly connected with $`k`$ other traders, then there are only two forces that influence my opinion: (a) the opinions of these $`k`$ people and of the global information network; and (b) an idiosyncratic signal that I alone generate. Our working assumption here is that agents tend to imitate the opinions of their connections. The force (a) will tend to create order, while force (b) will tend to create disorder. The main story here is a fight between order and disorder. As far as asset prices are concerned, a crash happens when order wins (everybody has the same opinion: selling), and normal times are when disorder wins (buyers and sellers disagree with each other and roughly balance each other out). We must stress that this is exactly the opposite of the popular characterisation of crashes as times of chaos. Disorder, or a balanced and varied opinion spectrum, is what keeps the market liquid in normal times. This mechanism does not require an overarching coordination mechanism since macro-level coordination can arise from micro-level imitation and it relies on a realistic model of how agents form opinions by constant interactions.
In the spirit of “mean field” theory of collective systems , the simplest way to describe an imitation process is to assume that the hazard rate $`h(t)`$ evolves according to the following equation :
$$\frac{dh}{dt}=Ch^\delta ,\mathrm{with}\delta >1,$$
(2)
where $`C`$ is a positive constant. Mean field theory amounts to embodying the diversity of trader actions in a single effective representative behavior determined from an average interaction between the traders. In this sense, $`h(t)`$ is the collective result of the interactions between traders. The term $`h^\delta `$ on the r.h.s. of (2) accounts for the fact that the hazard rate will increase or decrease due to the presence of interactions between the traders. The exponent $`\delta >1`$ quantifies the effective number, equal to $`\delta -1`$, of interactions felt by a typical trader. The condition $`\delta >1`$ is crucial to model interactions and is, as we now show, essential to obtain a singularity (critical point) in finite time. Indeed, integrating (2), we get
$$h(t)=\frac{B}{(t_c-t)^\alpha },\mathrm{with}\alpha \equiv \frac{1}{\delta -1}.$$
(3)
The critical time $`t_c`$ is determined by the initial conditions at some origin of time. The exponent $`\alpha `$ must lie between zero and one for an economic reason : otherwise, as we shall see, the price would go to infinity when approaching $`t_c`$ (if the bubble has not crashed in the meantime). This condition translates into $`2<\delta <+\mathrm{\infty }`$ : a typical trader must be connected to more than one other trader. There is a large body of literature in Physics, Biology and Mathematics on the microscopic modeling of systems of stochastic dynamical interacting agents that lead to critical behaviors of the type (3). The macroscopic model (2) can thus be substantiated by specific microscopic models.
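For completeness, the integration leading from (2) to (3) is elementary: separating variables,

$$h^{-\delta }\,dh=C\,dt\quad \Rightarrow \quad h(t)^{1-\delta }=(\delta -1)\,C\,(t_c-t)\quad \Rightarrow \quad h(t)=\frac{B}{(t_c-t)^{1/(\delta -1)}},\qquad B=\left[(\delta -1)C\right]^{-1/(\delta -1)},$$

where $`t_c`$ absorbs the integration constant fixed by the value of $`h`$ at the origin of time.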
The critical time $`t_c`$ signals the death of the speculative bubble. We stress that $`t_c`$ is not the time of the crash because the crash could happen at any time before $`t_c`$, even though this is not very likely. $`t_c`$ is the most probable time of the crash. There exists a finite probability
$$1-\int _{t_0}^{t_c}h(t)𝑑t>0$$
(4)
of “landing” smoothly, i.e. of attaining the end of the bubble without crash. This residual probability is crucial for the coherence of the model, because otherwise agents would anticipate the crash and not remain in the market.
Assume for simplicity that, during a crash, the price drops by a fixed percentage $`\kappa \in (0,1)`$, say between $`20`$ and $`30\%`$ of the price increase above a reference value $`p_1`$. Then, the dynamics of the asset price before the crash are given by:
$$dp=\mu (t)p(t)dt-\kappa [p(t)-p_1]dj,$$
(5)
where $`j`$ denotes a jump process whose value is zero before the crash and one afterwards. In this simplified model, we neglect interest rate, risk aversion, information asymmetry, and the market-clearing condition.
As a first-order approximation of the market organization, we assume that traders do their best and price the asset so that a fair game condition holds. Mathematically, this stylized rational expectation model is equivalent to the familiar martingale hypothesis:
$$\forall t^{}>t:\quad E_t[p(t^{})]=p(t)$$
(6)
where $`p(t)`$ denotes the price of the asset at time $`t`$ and $`\mathrm{E}_t[]`$ denotes the expectation conditional on information revealed up to time $`t`$. If we do not allow the asset price to fluctuate under the impact of noise, the solution to Equation (6) is a constant: $`p(t)=p(t_0)`$, where $`t_0`$ denotes some initial time. $`p(t)`$ can be interpreted as the price in excess of the fundamental value of the asset.
Putting (5) in (6) leads to
$$\mu (t)p(t)=\kappa [p(t)-p_1]h(t).$$
(7)
In words, if the crash hazard rate $`h(t)`$ increases, the return $`\mu `$ increases to compensate the traders for the increasing risk. Plugging (7) into (5), we obtain an ordinary differential equation. For $`p(t)-p(t_0)<p(t_0)-p_1`$, its solution is
$$p(t)\simeq p(t_0)+\kappa [p(t_0)-p_1]\int _{t_0}^th(t^{})𝑑t^{}\text{before the crash}.$$
(8)
This regime applies to the relatively short time scales of two to three years prior to the crash shown below.
The higher the probability of a crash, the faster the price must increase (conditional on having no crash) in order to satisfy the martingale (no free lunch) condition. Intuitively, investors must be compensated by the chance of a higher return in order to be induced to hold an asset that might crash. This effect may go against the naive preconception that price is adversely affected by the probability of the crash, but our result is the only one consistent with rational expectations.
Using (3) into (8) gives the following price law:
$$p(t)\simeq p_c-\frac{\kappa B}{\beta }\times (t_c-t)^\beta \text{before the crash}.$$
(9)
where $`\beta =1-\alpha \in (0,1)`$ and $`p_c`$ is the price at the critical time (conditioned on no crash having been triggered). The price before the crash follows a power law with a finite upper bound $`p_c`$. The trend of the price becomes unbounded as we approach the critical date. This is to compensate for an unbounded crash rate in the next instant.
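The step from (8) to (9) is also immediate: inserting $`h(t^{})=B/(t_c-t^{})^\alpha `$ from (3) into (8) and integrating gives

$$p(t)\simeq p(t_0)+\kappa [p(t_0)-p_1]\frac{B}{1-\alpha }\left[(t_c-t_0)^{1-\alpha }-(t_c-t)^{1-\alpha }\right],$$

which is Eq. (9) once the time-independent pieces are collected into $`p_c`$ and $`\beta =1-\alpha `$ is introduced (the factor $`[p(t_0)-p_1]`$ is absorbed into the normalisation of the prefactor).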
## 3 Log-periodicity
The last ingredient of the model is to recognize that the stock market is made of actors which differ in size by many orders of magnitude, ranging from individuals to gigantic professional investors, such as pension funds. Furthermore, structures at even higher levels, such as currency influence spheres (US$, Euro, YEN …), exist, and with the current globalisation and de-regulation of the market one may argue that structures on the largest possible scale, i.e., the world economy, are beginning to form. This means that the structure of the financial markets has features which resemble those of hierarchical systems with “traders” on all levels of the market. Of course, this does not imply that any strict hierarchical structure of the stock market exists, but there are numerous examples of qualitatively hierarchical structures in society. Models of imitative interactions on hierarchical structures recover the power law behavior (9). But in addition, they predict that the critical exponent $`\alpha `$ can be a complex number! The first order expansion of the general solution for the hazard rate is then
$$h(t)\simeq B_0(t_c-t)^{-\alpha }+B_1(t_c-t)^{-\alpha }\mathrm{cos}[\omega \mathrm{log}(t_c-t)+\psi ].$$
(10)
Once again, the crash hazard rate explodes near the critical date. In addition, it now displays log-periodic oscillations. The evolution of the price before the crash and before the critical date is given by:
$$p(t)\simeq p_c-\frac{\kappa }{\beta }\left\{B_0(t_c-t)^\beta +B_1(t_c-t)^\beta \mathrm{cos}[\omega \mathrm{log}(t_c-t)+\varphi ]\right\}$$
(11)
where $`\varphi `$ is another phase constant. The key feature is that oscillations appear in the price of the asset before the critical date. The local maxima of the function are separated by time intervals that tend to zero at the critical date, and do so in geometric progression, i.e., the ratio of consecutive time intervals is a constant
$$\lambda \equiv e^{\frac{2\pi }{\omega }}.$$
(12)
This is very useful from an empirical point of view because such oscillations are much more strikingly visible in actual data than a simple power law : a fit can “lock in” on the oscillations which contain information about the critical date $`t_c`$. Note that complex exponents and log-periodic oscillations do not necessitate a pre-existing hierarchical structure as mentioned above, but may emerge spontaneously from the non-linear complex dynamics of markets .
In the Natural Sciences, critical points are widely considered to be one of the most interesting properties of complex systems. A system goes critical when local influences propagate over long distances and the average state of the system becomes exquisitely sensitive to a small perturbation, i.e., different parts of the system become highly correlated. Another characteristic is that critical systems are self-similar across scales: in our example, at the critical point, an ocean of traders who are mostly bullish may have within it several islands of traders who are mostly bearish, each of which in turn surrounds lakes of bullish traders with islets of bearish traders; the progression continues all the way down to the smallest possible scale: a single trader. Intuitively speaking, critical self-similarity is why local imitation cascades through the scales into global coordination.
## 4 Fitting the crashes
Details on our numerical procedure are given in . Figures 1–3 show the behavior of the market index prior to the four crashes of Oct. 1929 (Fig. 1), Aug. 1998, Oct. 1997 (Hong-Kong) (Fig. 2) and of Oct. 1987 (Fig. 3). In addition, Fig. 3 shows the US $ expressed in DEM and CHF currencies before the collapse of the bubble in 1985. A fit with Eq. (11) is shown as a continuous line for each event. The table summarises the key parameters. Note the small fluctuations in the value of the scaling ratio $`2.2\le \lambda \le 2.7`$ for the 4 stock market crashes. This agreement constitutes one of the key tests of our theory. Rather remarkably, the scaling ratio for the DEM and CHF currencies against the US$ is comparable.
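A quick internal consistency check can also be made between these values and the Lomb analysis presented below: the angular log-frequency $`\omega \approx 7`$ found there corresponds, through Eq. (12), to

$$\lambda =e^{2\pi /\omega }\approx e^{2\pi /7}\approx 2.45,$$

which falls squarely inside the range $`2.2\le \lambda \le 2.7`$ quoted above.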
In order to investigate the significance of these results, we picked at random fifty $`400`$-week intervals in the period 1910 to 1996 of the Dow Jones average and launched the fitting procedure described in on these surrogate data sets. The results were very encouraging. Of the eleven fits with a quality of fit comparable with that of the other crashes, only six data sets produced values for $`\beta `$ and $`\omega `$ which were in the same range. All six fits belonged to the periods prior to the crashes of 1929, 1962 and 1987. The existence of a “crash” in 1962 was unknown to us before these results, and its identification naturally strengthens the case. We refer the reader to for a presentation of the best fit obtained for this “crash”.
In the last few weeks before a crash, the market indices shown in Fig. 1-3 depart from the final acceleration predicted by Eq. 5 : this is the regime where the hazard rate becomes extremely high, the market becomes more and more sensitive to “shocks” and the market idiosyncrasies are bound to have an increasing impact. Within the theory of critical phenomena, it is well-known that the singular behavior of the observable, here the hazard rate or the rate of change of the stock market index, will be smoothed out by the finiteness of the market. Technically, this is referred to as a “finite-size effect”.
In order to qualify further the significance of the log-periodic oscillations in a non-parametric way, we have eliminated the leading trend from the price data by the following transformation
$$p\left(t\right)\to \frac{p\left(t\right)-\left[p_c-\frac{\kappa }{\beta }B_0(t_c-t)^\beta \right]}{\frac{\kappa }{\beta }B_1(t_c-t)^\beta },$$
(13)
which should leave us with a pure $`\mathrm{cos}[\omega \mathrm{log}(t_c-t)+\varphi ]`$ if no other effects were present. In figure 4, we see this residue prior to the 1987 crash with a very convincing periodic trend as a function of $`\mathrm{log}\left(\frac{t_c-t}{t_c}\right)`$. We estimated the significance of this trend by using a so-called Lomb periodogram for the four index crashes and the two bubble collapses on the Forex considered here. The Lomb periodogram is a local fit of a cosine (with a phase) over some user-chosen range of frequencies. In figure 5, we see a peak around $`f\approx 1.1`$ for all six cases, corresponding to $`\omega =2\pi f\approx 7`$, in perfect agreement with the previous results. We note that only the relative level of the peak within each separate periodogram should be regarded as a measure of the significance of the oscillations. Since the nature of the “noise” is unknown and very likely different for each crash, we cannot estimate the confidence interval of the peak and compare the results for the different crashes. We also note that the strength of the oscillations is about $`5\%`$ of the leading power law behaviour for all 6 cases, signifying that they cannot be neglected.
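For concreteness, the detrending (13) and the Lomb periodogram can be sketched in a few lines. The snippet below assumes the parameters $`t_c,\beta ,B_0,B_1,\kappa ,p_c`$ have already been obtained from the parametric fit; the function names are purely illustrative and this is not the code used for Figs. 4 and 5:

```python
import numpy as np
from scipy.signal import lombscargle

def logperiodic_residual(t, p, tc, beta, B0, B1, kappa, pc):
    """Apply the transformation of Eq. (13): remove the leading power-law trend
    and normalise by the oscillation amplitude, leaving ~cos(w*log(tc - t) + phi)."""
    trend = pc - (kappa / beta) * B0 * (tc - t) ** beta
    amplitude = (kappa / beta) * B1 * (tc - t) ** beta
    return (p - trend) / amplitude

def lomb_peak(t, p, tc, beta, B0, B1, kappa, pc, fmax=5.0, nf=500):
    """Lomb periodogram of the residual as a function of x = log((tc - t)/tc)."""
    x = np.log((tc - t) / tc)
    y = logperiodic_residual(t, p, tc, beta, B0, B1, kappa, pc)
    freqs = np.linspace(0.1, fmax, nf)                         # ordinary frequencies f
    power = lombscargle(x, y - y.mean(), 2.0 * np.pi * freqs)  # scipy expects angular freqs
    return freqs, power                                        # a peak near f ~ 1.1 gives w ~ 7
```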
## 5 Towards a prediction of the next crash?
How long before a crash can one identify the log-periodic signatures? Not only would one like to predict future crashes; it is also important to test further how robust our results are. Obviously, if the log-periodic structure of the data is purely accidental, then the parameter values obtained should depend heavily on the size of the time interval used in the fitting. We have thus carried out a systematic testing procedure using a second order expansion of the hazard rate and a time interval of 8 years prior to the two crashes of 1929 and 1987. The general picture we obtain is the following. For the Oct. 1987 crash, a year or more before the crash, the data are not sufficient to give any conclusive results. Approximately a year before the crash, the fit begins to lock in on the date of the crash with increasing precision and our procedure becomes robust. However, if one wants to actually predict the time of the crash, a major obstacle is the fact that several possible dates are obtained. In addition, the fit in general “over-shoots” the true day of the crash. For the Oct. 1929 crash, we have to wait until approximately $`4`$ months before the crash for the fit to lock in on the date of the crash, but from that point the picture is the same as for the crash of Oct. 1987. We caution the reader that jumping into the prediction game may be hazardous and misleading : one deals with a delicate optimization problem that requires extensive back- and forward-testing. Furthermore, the formulas given here are only “first-order” approximations and novel improved methods are needed . Finally, one must never forget that the crash has to remain in part a random event in order to exist!
A general trend in the analysis of the five crashes presented here is that the critical time $`t_c`$ obtained from fitting the data tends to over-shoot the time of the crash. This observation is fully consistent with our rational expectation model of a crash. Indeed, $`t_c`$ is not the time of the crash but the most probable value of the skewed distribution of the possible times of the crash. The occurrence of the crash is a random phenomenon which occurs with a probability that increases as time approaches $`t_c`$. Thus, we expect that fits will give values of $`t_c`$ which are in general close to, but systematically later than, the real time of the crash. The phenomenon of “overshoot” that we have clearly documented is thus fully consistent with the theory.
It is a striking observation that essentially similar crashes have punctuated this century, notwithstanding tremendous changes in all imaginable ways of life and work. The only thing that has probably changed little is the way humans think and behave. The concept that emerges here is that the organization of traders in financial markets leads intrinsically to “systemic instabilities”, which probably result in a very robust way from the fundamental nature of human beings, including our gregarious behavior, our greediness, our reptilian psychology during panics and crowd behavior, and our risk aversion. The global behavior of the market, with its log-periodic structures that emerge as a result of the cooperative behavior of traders, is reminiscent of the emergence of intelligent behavior at a macroscopic scale of which individuals at the microscopic scale have no idea. This process has been discussed in biology, for instance in animal populations such as ant colonies, or in connection with the emergence of consciousness.
anders@moho.ess.ucla.edu
sornette@cyclop.ess.ucla.edu
# IV. UNRUH EFFECT, SPIN POLARISATION AND THE DERBENEV–KONDRATENKO FORMALISM aafootnote aExtended version of a talk presented at the 15th ICFA Advanced Beam Dynamics Workshop: “Quantum Aspects of Beam Physics”, Monterey, California, U.S.A., January 1998. Also in DESY Report 98–096, September 1998.
## 1 Introduction
In 1986 in the course of investigating quantum fluctuations in accelerated reference frames and striving to assign spin temperatures, Bell and Leinaas (BL) found that in a perfectly aligned, azimuthally uniform, weak focussing electron storage ring, the electron polarisation antiparallel to the dipole field is given by the formula
$`P_{eq}`$ $`=`$ $`{\displaystyle \frac{8}{5\sqrt{3}}}{\displaystyle \frac{1-\frac{f}{6}}{1-\frac{f}{18}+\frac{13}{360}f^2}}.`$ (1)
where $`f=(g-2)Q_z^2/(Q_z^2-\nu ^2)`$ and $`\nu =a\gamma `$ <sup>b</sup><sup>b</sup>bThe notation is the same as in Article I..
Over most of the energy range $`P_{eq}`$ is $`8/5\sqrt{3}`$, i.e. $`92.4\%`$. But as one approaches the resonance point $`Q_z=\nu `$ from below, the polarisation dips to $`-17\%`$ and then rises through zero at the resonance energy to reach $`99.2\%`$ before levelling off again at $`92.4\%`$.
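These extreme values follow directly from formula (1); as a short check, setting $`dP_{eq}/df=0`$ leads to the quadratic condition

$$13f^2-156f-240=0,$$

whose roots are $`f\approx 13.4`$ and $`f\approx -1.38`$. Substituting back into (1) yields $`P_{eq}\approx -0.17`$ and $`P_{eq}\approx 0.992`$ respectively, while $`f\to 0`$ reproduces $`8/5\sqrt{3}\approx 0.924`$.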
Such behaviour is not exhibited by the DKM formula (Article I, Eq. (36)), which is based on a calculation of spin motion driven by synchrotron radiation emission in the laboratory frame. Indeed, in a perfectly aligned flat storage ring $`\partial \widehat{n}/\partial \delta `$ is zero and the polarisation is $`92.4\%`$ independently of energy. At the time, the BL result caused considerable surprise and bafflement in the accelerator community.
## 2 The solution
However, the BL effect can be accommodated within the DKM formalism and we were able to provide a detailed treatment . The full story can be found in so that here, owing to space limitations, I will be exceedingly brief.
BL were primarily concerned with the effect of vertical orbit fluctuations driven by the background Unruh radiation . In the laboratory frame these fluctuations stem from the fact that synchrotron radiation photons are emitted at a small angle of order $`1/\gamma `$ with respect to the horizontal plane and thus cause the particles to recoil vertically. This must also be taken into account when considering the change in the $`\widehat{n}`$ axis under photon emission (Article I, Eq. (35)) and the DKM formula for the polarisation along $`\widehat{n}`$ then becomes:
$$P_{dk}=\frac{8}{5\sqrt{3}}\frac{\oint 𝑑s\frac{1}{\rho ^3}\left[\widehat{b}\cdot \widehat{n}-\widehat{b}\cdot \stackrel{}{d}-\frac{1}{6}\widehat{s}\cdot \stackrel{}{f}\right]_s}{\oint 𝑑s\frac{1}{\rho ^3}\left[1-\frac{2}{9}(\widehat{n}\cdot \widehat{s})^2+\frac{11}{18}\stackrel{}{d}^2-\frac{1}{18}\frac{\dot{\widehat{s}}}{|\dot{\widehat{s}}|}\cdot (\widehat{n}\times \stackrel{}{f})+\frac{13}{360}\stackrel{}{f}^2\right]_s}$$
(2)
where the vector $`\stackrel{}{f}\equiv (2/\gamma )\partial \widehat{n}/\partial \beta _z`$ and $`\stackrel{}{d}=\partial \widehat{n}/\partial \delta `$. See for notation <sup>c</sup><sup>c</sup>cIn particular the vector $`\stackrel{}{f}`$ in Eq. (2) and the quantity $`f`$ in Eq. (1) are distinct..
If $`\partial \widehat{n}/\partial \delta `$ is zero, as in the BL ring, the terms containing the very small quantity $`\stackrel{}{f}`$ come into play. Then we obtain:
$`P_{dk}`$ $`=`$ $`{\displaystyle \frac{8}{5\sqrt{3}}}{\displaystyle \frac{1-\frac{F}{6}}{1-\frac{F}{18}+\frac{13}{360}F^2}}`$ (3)
where $`F=\frac{2}{\gamma }+f`$.
Thus we have recovered the BL result except for the extra piece $`2/\gamma `$. Near the resonance this is negligible compared to the resonance term, so there we may consider the two results to be in agreement. Thus the vertical kicks imparted to the orbit by the Unruh radiation of BL have been identified with vertical recoils caused by synchrotron radiation.
Further instructive interpretations of synchrotron radiation can be found in . There, synchrotron radiation emission is considered to result from ‘inverse Compton scattering’ of electrons off the virtual photons of the deflecting magnetic field, and the spin-dependent Compton cross–section is used to obtain the radiation distribution. It would be interesting to see if an extension of this calculation emphasising spin effects could simulate the Sokolov–Ternov effect.
## Acknowledgments
I thank S. R. Mane for fruitful collaboration on this and other aspects of spin polarisation in storage rings. I would like to thank J.D. Jackson, W. G. Unruh and H. Rosu for useful exchanges of ideas.
## References
# Damping of Growth Oscillations
## I 1 Introduction
Layer-by-layer or Frank-van der Merwe growth is a growth mode observed in molecular beam epitaxy and other deposition methods which allows precise control of chemical composition of layers down to atomic thickness. It is therefore particularly well suited for the fabrication of novel electronic devices.
The key microscopic processes in layer-by-layer growth are deposition of atoms onto a high-symmetry surface and diffusion of adatoms on the surface. The adatoms meet and form dimers which then grow into islands of monoatomic height whose edges capture most of the adatoms during the deposition of one monolayer. When the island edges become less available due to coalescence of islands, formation of dimers and islands in the next layer begins. The density of atomic steps – and all other quantities sensitive to the surface morphology – thus oscillates in time.
Generically, these oscillations are damped: Layer-by-layer growth is only a transient. Possible reasons include (i) cessation of periodic formation of islands on the surface and transition to step flow growth, or (ii) roughening of the surface. If the substrate temperature is increased, damping becomes stronger in the first and weaker in the second case, allowing one to discriminate between the two. Ignoring the possibility of inhomogeneous deposition (cf. Ref. ), surface roughening can have two different sources. If interlayer transport is inhibited by step-edge barriers, one obtains the growth instability predicted by Villain. If no such instability occurs, the surface may still roughen due to fluctuations in the intensity of the deposition rate. Only the latter case is considered in this paper.
The question of how long layer-by-layer growth persists is of immediate practical importance. If the answer is known, one can devise and optimize annealing schedules for growing thicker films while maintaining a smooth surface. For this purpose, it is important to know both the damping time $`\stackrel{~}{t}`$ and the length scale over which layer-by-layer growth is synchronized. This layer coherence length $`\stackrel{~}{\ell }`$ is a new characteristic length that determines, e.g., the annealing time needed to reestablish a flat surface before growth can be continued.
A theory has been proposed recently which predicts that the damping time and the layer coherence length depend on the typical distance $`\ell `$ between islands in the submonolayer region of growth as
$$F\stackrel{~}{t}\sim \ell ^{4d/(4-d)}\text{and}\stackrel{~}{\ell }\sim \ell ^{4/(4-d)}.$$
(1)
In this paper, we present the theory of oscillations damping and detailed numerical evidence of the validity of its predictions based on extensive computer simulations of a minimal, one-parameter model at surface dimension $`d=2`$. The methods of extracting the damping time and the layer coherence length from the surface morphology evolution are outlined and thoroughly discussed.
The characteristic length $`\ell `$ (and thus also $`\stackrel{~}{t}`$ and $`\stackrel{~}{\ell }`$) has a power-law dependence on the ratio $`D/F`$ of the surface diffusion constant to the deposition rate:
$$\ell \sim (D/F)^\gamma $$
(2)
(see and references given in ). The exponent $`\gamma `$ depends on the dimensionality $`d`$ of the surface and the (possibly non-integer) dimension $`d_f`$ of the islands. It also depends on whether or not desorption of adatoms or diffusion of dimers or larger clusters is negligible. Finally, $`\gamma `$ is a function of the critical cluster size $`i^{}`$ for the formation of a stable nucleus. For the case considered here in more detail ($`d=2`$, $`d_f=2`$, $`i^{}=1`$, no desorption, immobile clusters), the value is $`\gamma =1/6`$.
The layer coherence length $`\stackrel{~}{\ell }`$ as well as the damping time $`\stackrel{~}{t}`$ play the role of natural cutoffs in the continuum growth equation at small length and time scales. For $`t\gg \stackrel{~}{t}`$ one expects that the surface exhibits self–affine scaling:
$$w(t)\simeq a_{\perp }(\xi (t)/\stackrel{~}{\ell })^\zeta \text{and}\xi (t)\simeq \stackrel{~}{\ell }(t/\stackrel{~}{t})^{1/z}.$$
(3)
Here $`w`$ is the root mean square variation of the film thickness (the surface width), $`a_{}`$ the thickness of one atomic layer (which, for convenience, is set to one in the following), and $`\xi `$ the correlation length up to which the surface roughness has fully developed at time $`t`$. $`\zeta `$ is the roughness exponent and $`z`$ the dynamical exponent. The dependence of $`\stackrel{~}{\mathrm{}}`$ and $`\stackrel{~}{t}`$ on the microscopic growth parameters will be derived next.
## II 2 Theoretical results
Coarse-graining the surface configuration at a given time over a length scale of the order of $`\mathrm{}`$, one can write down an evolution equation for the variable $`h(x,t)`$. Since particle desorption can be neglected under conditions typical for molecular beam epitaxy, the equation can be written in the form of a conservation law,
$$\partial _th(x,t)=-\nabla \cdot j(x,t)+\eta (x,t).$$
(4)
$`j`$ is the surface diffusion current, and $`\eta `$ is white noise with second moment
$$\langle \eta (x,t)\eta (y,s)\rangle =F\delta ^d(x-y)\delta (t-s),$$
(5)
which describes the fluctuations in the deposition rate. It was proposed by Villain that in growth processes far from equilibrium where local chemical potentials along the surface are ill defined, diffusion currents should be driven by gradients in the growth-induced, nonequilibrium adatom density $`n`$ ,
$$𝐣=-D\nabla n.$$
(6)
On a singular surface, the balance between deposition and capture of adatoms at steps leads to a stationary adatom density $`n=n_0`$ of the order of $`n_0\sim (F/D)\ell ^2`$. On a vicinal surface, the adatom density is reduced due to the presence of additional steps. However, this effect is felt only if the miscut $`m=|\nabla h|`$ exceeds $`1/\ell `$, in which case $`n\sim (F/D)m^{-2}`$. In terms of a coarse-grained description of the surface this implies that the local adatom density depends on the local miscut or surface tilt. A useful interpolation formula which connects the regimes $`m\ll 1/\ell `$ and $`m\gg 1/\ell `$ is
$`n(\nabla h)`$ $`=`$ $`{\displaystyle \frac{n_0}{1+(\ell \nabla h)^2}}`$ (7)
$`\simeq `$ $`(F/D)\ell ^2-(F/D)\ell ^4(\nabla h)^2+\dots `$ (8)
Inserting the leading quadratic term of this gradient expansion into (6), which is appropriate for describing long-wavelength fluctuations around the singular orientation, one obtains
$$j=\lambda \nabla (\nabla h)^2$$
(9)
with
$$\lambda =F\ell ^4.$$
(10)
Considering Eq. (4) and Eq. (9) one sees that the physical dimension of $`\lambda `$ is (length)<sup>4</sup>/(time$`\times `$height). Within the continuum description, the only characteristic length and time scales are the layer coherence length and the damping time, whereas the lattice constant $`a_{\perp }`$ has been chosen as the unit of height. Therefore
$$\lambda \sim \stackrel{~}{\ell }^4/\stackrel{~}{t}$$
(11)
on dimensional grounds.
Finally, the number of particles deposited during the time $`\stackrel{~}{t}`$ onto an area $`\stackrel{~}{\ell }^d`$ is $`F\stackrel{~}{t}\stackrel{~}{\ell }^d\pm (F\stackrel{~}{t}\stackrel{~}{\ell }^d)^{1/2}`$. Thus the fluctuation of the film thickness over the distance $`\stackrel{~}{\ell }`$ is $`w(\stackrel{~}{t})\sim \sqrt{F\stackrel{~}{t}\stackrel{~}{\ell }^d}/\stackrel{~}{\ell }^d`$. At $`\stackrel{~}{t}`$ this should be of the order of the thickness of one atomic layer, $`w(\stackrel{~}{t})\approx 1`$, which results in
$$F\stackrel{~}{t}=\stackrel{~}{\ell }^d.$$
(12)
Combining (10), (11) and (12) one obtains Eq. (1), or
$$F\stackrel{~}{t}\sim (D/F)^\delta \text{and}\stackrel{~}{\ell }\sim (D/F)^{\delta /d}$$
(13)
with the exponent
$$\delta =\frac{4d}{4-d}\gamma .$$
(14)
Notice that the layer coherence length $`\stackrel{~}{\ell }`$ is substantially larger than the characteristic distance $`\ell `$ between islands ($`\stackrel{~}{\ell }\sim \ell ^2`$ at $`d=2`$).
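For the reader's convenience, the elimination behind Eqs. (13) and (14) can be spelled out. From (10) and (11), $`\stackrel{~}{t}\sim \stackrel{~}{\ell }^4/(F\ell ^4)`$; using (12) in the form $`\stackrel{~}{t}=\stackrel{~}{\ell }^d/F`$ gives $`\stackrel{~}{\ell }^{4-d}\sim \ell ^4`$, i.e.

$$\stackrel{~}{\ell }\sim \ell ^{4/(4-d)},\qquad F\stackrel{~}{t}\sim \ell ^{4d/(4-d)},$$

which is Eq. (1). Inserting $`\ell \sim (D/F)^\gamma `$ from (2) then yields (13) with $`\delta =4d\gamma /(4-d)`$; for $`d=2`$ and $`\gamma =1/6`$ this gives $`\delta =2/3`$, the value used below.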
## III 3 Model
In our model, atoms are deposited onto the (100) surface of a simple cubic lattice with the rate of $`F`$ atoms per unit time and area. The surface size is $`L\times L=128\times 128`$ . Atoms with no lateral neighbors are allowed to diffuse with diffusion constant $`D`$. Atoms with lateral neighbors are assumed to be immobile so that, e.g., dimers are immobile and stable. Growth commences on a flat substrate, $`h(x,0)=0`$ for all sites $`x`$. On deposition at $`x`$, $`h(x,t)`$ is increased by one. We neglect barriers to interlayer transport (Ehrlich–Schwoebel barriers ) so that the only parameter of the model is the ratio $`D/F`$.
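A minimal, illustrative implementation of this model is sketched below. This is not the code used for the results reported here: the lattice size and $`D/F`$ value are placeholders, the free-adatom list is rebuilt by brute force for clarity (far too slow for the parameters of the actual simulations), and the standard anti-Bragg sum is used as a simple proxy for the kinematic intensity defined in the next section.

```python
import numpy as np

rng = np.random.default_rng(0)

def free_adatoms(h):
    """Sites whose top atom has no lateral neighbours, i.e. all four
    neighbouring columns are strictly lower: the mobile adatoms of the model."""
    mobile = np.ones_like(h, dtype=bool)
    for axis in (0, 1):
        for shift in (1, -1):
            mobile &= np.roll(h, shift, axis=axis) < h
    return np.argwhere(mobile)

def simulate(L=32, DF=1.0e3, monolayers=10, samples_per_ml=25):
    """Deposition (rate F = 1 per site) plus hops of free adatoms (rate D = DF
    per adatom); time is measured in deposited monolayers."""
    h = np.zeros((L, L), dtype=int)
    F_tot = float(L * L)
    t, next_sample, record = 0.0, 0.0, []
    while t < monolayers:
        ad = free_adatoms(h)
        rate = F_tot + DF * len(ad)
        t += 1.0 / rate                                    # mean waiting time
        if rng.random() < F_tot / rate:                    # deposition event
            x, y = rng.integers(L, size=2)
            h[x, y] += 1
        else:                                              # hop of a free adatom
            x, y = ad[rng.integers(len(ad))]
            dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
            h[x, y] -= 1
            h[(x + dx) % L, (y + dy) % L] += 1
        if t >= next_sample:                               # record observables
            parity = np.where(h % 2 == 0, 1.0, -1.0)
            record.append((t, parity.mean() ** 2, h.std()))  # (time, intensity proxy, width)
            next_sample += 1.0 / samples_per_ml
    return np.array(record)
```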
## IV 4 Damping time
First we present the results for the damping time extracted from kinematic intensity data,
$$I\propto \left\langle \left(N_{\mathrm{even}}-N_{\mathrm{odd}}\right)^2\right\rangle /L^2$$
(15)
(see Fig. 1; $`N_{\mathrm{even}}`$ $`(N_{\mathrm{odd}})`$ denotes the number of atoms in even (odd) layers). The brackets $`\langle \mathrm{}\rangle `$ denote averaging over different runs. The same analysis was done for the surface width, with equivalent results for the damping time.
The kinematic intensity oscillates between zero and maxima which decrease until the oscillations vanish. We measure $`\stackrel{~}{t}`$ as the time where the maxima of the kinematic intensity drop below $`I=0.05`$ . The results are shown in Fig. 2.
Obviously there are strong corrections to scaling which can be attributed to an offset $`\stackrel{~}{t}_0>0`$:
$$\stackrel{~}{t}=A_{\stackrel{~}{t}}\left(\frac{D}{F}\right)^\delta -\stackrel{~}{t}_0.$$
(16)
$`\stackrel{~}{t}_0`$ plays the role of a cutoff for the validity of our scaling theory. $`(D/F)_0\equiv (\stackrel{~}{t}_0/A_{\stackrel{~}{t}})^{1/\delta }`$ can be interpreted as the value of $`D/F`$ below which the oscillations are no longer observable.
A three–parameter fit to the data shown in Fig. 2 gives an exponent
$$\delta =0.69\pm 0.05,$$
(17)
where the error bar reflects the variations obtained when the evaluation method is modified or when the data for the surface width are evaluated in the same way. This value is in good agreement with the theoretical prediction of $`\delta =2/3`$ (see Eq. (14)) for compact islands.
In our simulation, neither detachment of adatoms from islands nor edge diffusion is considered. Therefore, the islands are fractal for large diffusion lengths, with the fractal dimension $`d_f\approx 1.72`$ of two–dimensional diffusion limited aggregation. Then $`\gamma `$ changes from $`\gamma =1/6`$ to $`\gamma \approx 0.175`$. This leads to the theoretical prediction $`\delta \approx 0.70`$, which is also within the error bars of Eq. (17). However, in the present case the values of $`D/F`$ are sufficiently small that this complication may be ignored.
The scaling plot of the kinematic intensity with the time divided by the damping time according to Eq. (16) (see Fig. 3) confirms the validity of the approach used.
## V 5 Layer coherence length
The measurement of the height difference correlation function
$$G(x,t)\equiv \langle [h(x_0,t)-h(x_0+x,t)]^2\rangle ,$$
(18)
evaluated at $`t=\stackrel{~}{t}`$ shows that $`G`$ has a maximum. This can be explained as follows. At $`t=\stackrel{~}{t}`$ the probability of finding the surface at the same height as at a reference point $`x_0`$ is minimal at a distance $`x-x_0`$ corresponding to the layer coherence length. For larger distances, deviations from the average height are essentially uncorrelated. At very small distances, their correlation is positive, while around $`\stackrel{~}{\ell }`$ they are anticorrelated. This is how the data denoted by the squares in Fig. 4 were obtained. The result is in good agreement with the predicted exponent, cf. Eq. (13). (Note that this method could also be used for the experimental determination of $`\stackrel{~}{\ell }`$.)
An alternative method of measuring $`\stackrel{~}{\ell }`$ is to carry out a finite-size analysis in the following way. The surface does not roughen when the linear system size $`L`$ is smaller than $`\stackrel{~}{\ell }`$. In this case, the amplitude of the growth oscillations becomes stationary after a transient time, and the oscillations never die out. We monitored the variance
$$A^2(t)=\left\langle w^2(t)\right\rangle _{[t,t+\tau ]}-\left\langle w(t)\right\rangle _{[t,t+\tau ]}^2$$
(19)
of the surface width during the layer completion time $`\tau \approx 1/F`$, where $`\langle \mathrm{}\rangle _{[t,t+\tau ]}`$ denotes the time average over the interval $`[t,t+\tau ]`$. Its stationary value decreases with increasing system size and ultimately becomes equal to the statistical fluctuations of $`w`$ when the system size is big enough that the oscillations can die out completely. The values of $`\stackrel{~}{\ell }`$, denoted by the circles in Fig. 4, represent the linear system size $`L`$ at which the stationary value of $`A(t)`$ drops below $`0.37`$. Both methods of measuring $`\stackrel{~}{\ell }`$ are in excellent agreement with each other and with the theoretical prediction.
## VI 6 Conclusions and outlook
We have presented a theory for the damping of growth oscillations caused by kinetic roughening. We have shown that the results of numerical simulations of a minimal model compare very favorably with the theory, and directly determined two key quantities, the damping time and the layer coherence length. The instability associated with barriers to interlayer transport may compete with the kinetic roughening mechanism as a source of oscillation damping . This, as well as the transition to step–flow growth on vicinal surfaces will lead to different power laws for the damping time and the layer coherence length. This remains for future research.
The results of this paper can be directly verified by diffraction or real-space surface sensitive techniques. The determination of the damping time and, in particular, of the layer coherence length as a function of growth conditions should be possible using the methods outlined above.
## VII Acknowledgements
Useful conversations with Martin Rost are gratefully acknowleged. D. E. W. acknowledges support by DFG within SFB 166 Strukturelle und magnetische Phasenübergänge in Übergangsmetall-Legierungen und Verbindungen. J. K. acknowledges support by DFG within SFB 237 Unordnung und grosse Fluktuationen. P. Š. acknowledges the financial support of Alexander von Humboldt Foundation and Volkswagen Stiftung. H. K. acknowledges support by the German Academic Exchange Service within the Hochschulsonderprogramm III.
# 1 Anomalous 𝑈(1) in Four–Dimensional String Models
One of the underlying themes of this conference is possible generalizations to minimal supersymmetric standard model (MSSM) physics resulting from Lorentz and CPT violating effects. There have been very nice talks discussing how strings (or $`M`$–theory higher dimensional objects) could produce such symmetry breaking effects. At this time I would like to focus on another type of symmetry breaking that, as the six dimensions are compactified in string models, often accompanies the reduction of ten–dimensional Lorentz symmetry to four–dimensional Lorentz symmetry. I am referring to the appearance of an anomalous local $`U(1)_\mathrm{A}`$ , for which there is a non–zero charge trace over the massless states in the effective low energy field theory,
$$\mathrm{Tr}Q^{(A)}\ne 0.$$
(1.1)
In ten uncompactified dimensions, all local $`U(1)`$ symmetries in heterotic strings are embedded in either $`E_8\times E_8`$ or $`SO(32)`$ gauge groups and, therefore, are necessarily non–anomalous. However, producing the SM group $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ from either of these larger groups via compactification also frees several $`U(1)_i`$ (orthogonal to the SM group) from their $`E_8\times E_8`$ or $`SO(32)`$ embeddings and allows them to become anomalous. Additionally, any of the six $`U(1)_i`$ corresponding to the compactified dimensions may also become anomalous. If more than one $`U(1)_i`$ is initially anomalous (which is the general case), a unique rotation,
$$U(1)_\mathrm{A}\equiv c_\mathrm{A}\sum _i\{\mathrm{Tr}Q^{(i)}\}U(1)_i,$$
(1.2)
with $`c_A`$ a normalization coefficient, places the entire anomaly into a single Abelian group, referred to here as $`U(1)_\mathrm{A}`$. All of the Abelian combinations orthogonal to $`U(1)_\mathrm{A}`$ become traceless. From here on I will assume this rotation has been performed and will denote the traceless combinations as $`U(1)_j^{^{}}`$.
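As a concrete illustration of the rotation (1.2), the sketch below takes a table of $`U(1)_i`$ charges over the massless states and returns the $`U(1)_\mathrm{A}`$ charges together with an orthonormal set of traceless combinations. The charge assignments in the toy example are invented purely for illustration, and the physical normalization $`c_\mathrm{A}`$ is replaced here by a simple unit-length normalization:

```python
import numpy as np

def rotate_anomalous(Q):
    """Q[m, i] = charge of massless state m under the original U(1)_i.
    Returns the U(1)_A charge of every state and the charges under an
    orthonormal basis of combinations orthogonal to U(1)_A, which are
    traceless by construction."""
    traces = Q.sum(axis=0)                       # Tr Q^(i) for each U(1)_i
    vA = traces / np.linalg.norm(traces)         # direction defining U(1)_A
    qA = Q @ vA
    _, _, Vt = np.linalg.svd(vA.reshape(1, -1))  # remaining rows span the orthogonal complement
    basis = Vt[1:].T
    Qprime = Q @ basis
    assert np.allclose(Qprime.sum(axis=0), 0.0)  # every other combination is traceless
    return qA, Qprime

# toy example: four states, three U(1)'s (charges made up for illustration)
Q = np.array([[ 1.0, 0.0,  2.0],
              [ 1.0, 1.0, -1.0],
              [-1.0, 2.0,  1.0],
              [ 2.0, 1.0,  1.0]])
qA, Qprime = rotate_anomalous(Q)
print(qA.sum())            # nonzero: the anomaly sits entirely in U(1)_A
print(Qprime.sum(axis=0))  # numerically zero traces for the orthogonal combinations
```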
When an anomalous $`U(1)_\mathrm{A}`$ appears in a string model, there is a mechanism (aptly called the standard anomaly cancellation mechanism ) whereby the $`U(1)_\mathrm{A}`$ is broken near the string scale. In the process, a Fayet–Iliopoulos (FI) $`D`$–term,
$$ϵ\equiv \frac{g_s^2M_P^2}{192\pi ^2}\mathrm{Tr}Q^{(A)},$$
(1.3)
is generated in the effective Lagrangian, with $`g_s`$ the string coupling and $`M_P`$ the reduced Planck mass, $`M_P=M_{Planck}/\sqrt{8\pi }\approx 2.4\times 10^{18}`$ GeV. The FI–term will break spacetime supersymmetry near the string scale unless a set of scalar VEVs, $`\{\phi _m\}`$, of fields $`\phi _m`$ carrying anomalous charges $`Q_m^{(\mathrm{A})}`$, can contribute a compensating $`D`$–term, $`D_A(\phi _m)=\sum _mQ_m^{(A)}|\phi _m|^2`$, to cancel the FI–term, i.e.,
$$D_A=\sum _mQ_m^{(A)}|\phi _m|^2+ϵ=0.$$
(1.4)
A set of scalar VEVs satisfying eq. (1.4) is also constrained to maintain $`D`$–flatness for all non–anomalous Abelian $`U(1)_j^{^{}}`$ symmetries as well,<sup>*</sup><sup>*</sup>*I am only considering flat directions involving non–Abelian singlet fields solely. In cases where non–trivial non–Abelian representations are also allowed VEVs, generalized non–Abelian $`D`$–flat constraints must also be imposed.
$$D_j=\sum _mQ_m^{(j)}|\phi _m|^2=0.$$
(1.5)
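At this level, searching for a set of VEVs obeying (1.4) and (1.5) is a linear problem in the non-negative variables $`|\phi _m|^2`$. A small sketch using a linear program is given below; the charge matrices would come from the actual string model, the minimal-norm objective chosen here is arbitrary, and the $`F`$–flatness conditions discussed next are not imposed:

```python
import numpy as np
from scipy.optimize import linprog

def d_flat_vevs(QA, Qnon, eps):
    """Find x_m = |phi_m|^2 >= 0 with  sum_m QA[m] x_m = -eps  and
    sum_m Qnon[j, m] x_m = 0 for every non-anomalous U(1)_j'.
    Returns one feasible solution, or None if the direction is not D-flat."""
    n = len(QA)
    A_eq = np.vstack([np.asarray(QA, dtype=float)[None, :], np.atleast_2d(Qnon)])
    b_eq = np.concatenate([[-eps], np.zeros(A_eq.shape[0] - 1)])
    res = linprog(c=np.ones(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, None)] * n, method="highs")
    return res.x if res.success else None
```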
Each superfield $`\mathrm{\Phi }_m`$ (containing the scalar field $`\phi _m`$ and chiral superpartner) in the superpotential imposes further constraints on the set of scalar VEVs. $`F`$–flatness will be broken (thereby again destroying spacetime supersymmetry) at the scale of the VEVs unless,
$$F_m=\frac{\partial W}{\partial \mathrm{\Phi }_m}=0;\langle W\rangle =0.$$
(1.6)
Appearance of an anomalous $`U(1)_\mathrm{A}`$ in a string model can have profound effects. An FI–term cancelling (and therefore supersymmetry restoring) flat direction of VEVs can drastically alter the phenomenology of a model in two ways. First, the typical scalars taking on VEVs in a flat direction also carry charges of several non–anomalous $`U(1)_j^{}`$. Through a generalized Higgs effect, the scalars give near string–scale mass to the respective generators of these gauge fields. The standard anomaly cancellation mechanism thereby causes not just the anomalous $`U(1)_\mathrm{A}`$ to be broken, but several non–anomalous $`U(1)_j^{}`$ as well. Second, a non–renormalizable superpotential term, $`\frac{\lambda }{M_\mathrm{S}^{n-3}}\mathrm{\Phi }_1\mathrm{\Phi }_2\mathrm{\Phi }_3\mathrm{\cdots }\mathrm{\Phi }_n`$, formed from $`n>3`$ superfields produces a new effective mass term, $`\frac{\lambda }{M_\mathrm{S}^{n-3}}\langle \mathrm{\Phi }_1\rangle \langle \mathrm{\Phi }_2\rangle \mathrm{\cdots }\langle \mathrm{\Phi }_{n-2}\rangle \mathrm{\Phi }_{n-1}\mathrm{\Phi }_n`$, if $`n-2`$ of the fields take on VEVs, or produces a new effective Yukawa term, $`\frac{\lambda }{M_\mathrm{S}^{n-3}}\langle \mathrm{\Phi }_1\rangle \langle \mathrm{\Phi }_2\rangle \mathrm{\cdots }\langle \mathrm{\Phi }_{n-3}\rangle \mathrm{\Phi }_{n-2}\mathrm{\Phi }_{n-1}\mathrm{\Phi }_n`$, if $`n-3`$ take on VEVs. ($`\lambda `$ is a generic non–renormalizable coupling coefficient.)
## 2 Three Generation $`SU(3)_C\times SU(2)_L\times U(1)_Y\times _iU(1)_i`$ String Models
As mentioned, in four dimensions, a quasi–realistic three generation $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ heterotic string model has several additional Abelian group factors, $`U(1)_i`$, along with a hidden sector non–Abelian gauge group $`G_{\mathrm{hid}}`$. When this type of string model is compactified from ten to four dimensions via a Calabi–Yau (CY) manifold or $`N=2`$ minimal model, all of the extra $`U(1)_i`$ remain non–anomalous. On the other hand, when this compactification occurs through bosonic lattices, orbifolds, or free fermions, an anomalous $`U(1)_\mathrm{A}`$ generically appears. Thus, while CY and $`N=2`$ compactified models can be thought of as “What you see is what you get” models, lattice, orbifold, and free fermion models cannot be regarded as such. For the latter classes of models, the massless spectrum and gauge groups before anomaly cancellation may be very different from the corresponding ones after a flat direction set of VEVs is chosen (non–perturbatively).
These dynamical, FI–term induced string model transformations may, in fact, be a very valuable tool for producing phenomenologically viable string models. The reason for this relates to a problematic aspect of generic three generation $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ string models. Exotic MSSM states, many carrying fractional electric charge, seem an ubiquitous feature of such models . Most of these exotics, if they remain massless down to the electroweak scale, signify unphysical phenomenology, thereby disallowing a model containing them. Enhancing the probability of this occurring are the “string–selection rules.” These are additional constraints on superpotential terms beyond standard gauge invariance. String selection rules often forbid superpotential terms, otherwise allowed by gauge invariance, that could generate large mass for an exotic via couplings with flat direction VEVs . This generally makes decoupling of all dangerous exotic fields from the low energy effective field theory difficult.
## 3 A String Derived MSSM
Recently a free fermionic string model constructed several years ago was found to contain certain flat (that is, $`D`$– and $`F`$–flat to all finite orders in the superpotential) directions of near string scale magnitude that simultaneously cancel the FI $`D`$–term and give (near) string scale mass to all exotics (including those with fractional electric charge). Prior to VEVs being turned on, the gauge group of this model is $`SU(3)_C\times SU(2)_L\times U(1)_Y\times U(1)_A\times \prod _{j=1}^{10}U(1)_j^{^{}}\times SU(3)_H\times SU(2)_H\times SU(2)_H^{}`$. Under any of these special flat directions, exactly three $`U(1)_j^{^{}}`$ survive. The gauge group thus reduces to $`SU(3)_C\times SU(2)_L\times U(1)_Y\times \prod _{j^{}=1}^3U(1)_j^{}^{^{}}\times SU(3)_H\times SU(2)_H\times SU(2)_H^{}`$.
In the pre–VEV stage, the model contains several massless MSSM exotics: one $`SU(3)_C`$ vector–like pair of triplets with electric charges $`Q_{elec}=\pm \frac{1}{3}`$; ten $`SU(2)_L`$ doublets, four of which carry $`Q_{elec}=\pm \frac{1}{2}`$; and 16 $`SU(3)_C\times SU(2)_L`$ singlets, eight with $`Q_{elec}=+\frac{1}{2}`$ and eight with $`Q_{elec}=-\frac{1}{2}`$; one vector–like pair of hidden sector $`SU(2)_H`$ doublets with $`Q_{elec}=\frac{1}{2}`$, and a similar pair of $`SU(2)_H^{}`$ doublets. The specific flat directions generate mass through unsuppressed renormalizable superpotential terms for all of the exotics, except for the $`SU(3)_C`$ vector–like triplet pair. The $`SU(3)`$ triplet pair receives mass through a suppressed fifth order term and, thus, its mass scale should be suppressed a bit below that of the other states. The FI–term induced mass scale of the fields was found to be of the order of $`7\times 10^{16}`$ GeV.
One physical characteristic distinguishing between these various flat directions is the set of additional non–Abelian singlets and hidden sector non–Abelian non–singlets that take on (near) string–scale mass. Typically, slightly less than half of the remaining 47 singlets become massive. For the example flat direction discussed in ref. , the number is 19, with the corresponding mass terms being unsuppressed for 15 of these. Four singlets receive (suppressed) mass terms at fifth order. Slightly more than half of the hidden sector non–singlets also receive induced masses. For these non-singlets, more masses are generated through higher order terms than through the renormalizable terms. When the flat direction of ref. is applied, 18 of the remaining 30 hidden sector non–singlet states become near string scale massive. Eight of these 18 states gain mass through renormalizable terms, six from fourth order terms, and four via fifth order terms.
## 4 String and MSSM Scale Unification
Assuming the spectrum of the MSSM above the electroweak scale, unification of the $`SU(3)_C`$, $`SU(2)_L`$ and $`U(1)_Y`$ running couplings occurs at a scale $`M_\mathrm{U}=M_{\mathrm{MSSM}}\approx 2.5\times 10^{16}`$ GeV. However, for several years the general perturbative string prediction of the scale at which all gauge couplings merge has been on the order of $`M_\mathrm{S}\approx 5\times 10^{17}`$ GeV. Several perturbative solutions have been proposed to resolve the apparent factor of 20 inequity between the two scales, $`M_{\mathrm{MSSM}}`$ and $`M_\mathrm{S}`$. Typically these proposals attempt to (i) raise the $`SU(3)_C`$, $`SU(2)_L`$ and $`U(1)_Y`$ unification scale $`M_\mathrm{U}`$ above the MSSM scale $`M_{\mathrm{MSSM}}`$ through the effects of intermediate scale MSSM exotics on the running couplings, (ii) lower the string scale $`M_\mathrm{S}`$ to the MSSM scale $`M_{\mathrm{MSSM}}`$ via threshold effects from the infinite tower of massive string states, (iii) run a unified MSSM coupling to the string scale via a grand unification theory, or (iv) various combinations of (i) through (iii). However, Witten has recently suggested an $`M`$–theory mechanism that offers a non–perturbative resolution to the apparent scale misalignment . This conjecture maintains the successful MSSM prediction and equates the string scale to the MSSM scale, $`M_\mathrm{S}=M_{\mathrm{MSSM}}\approx 2.5\times 10^{16}`$ GeV. Thus, this conjecture suggests that the observable gauge group just below the string scale should be $`SU(3)_C\times SU(2)_L\times U(1)_Y`$ and the spectrum of the observable sector should consist solely of the MSSM spectrum. The model of appears to be the first realization of an actual string–derived MSSM.
Detailed analysis of this model is underway and will be presented in forthcoming papers . In particular, for each MSSM–generating flat direction, we will examine the variations in (i) the textures of the MSSM mass matrices, (ii) the non–Abelian singlet and hidden sector non–singlet low energy spectrums, and (iii) the hidden sector effective Yukawa and non–renormalizable terms.
## 5 Acknowledgements
G.C. thanks the coordinator of CPT ’98, Alan Kostelecký, and his staff for organizing a very enjoyable and educational conference. G.C. also thanks his collaborators Alon Faraggi and Dimitri Nanopoulos for valuable discussions. This work is supported in part by DOE Grant No. DE–FG–0395ER40917.
# The predictability problem in systems with an uncertainty in the evolution law
## I Introduction
The ability to predict the future state of a system, given its present state, stands at the foundations of scientific knowledge, with important practical implications in the geophysical and astronomical sciences. In predicting the evolution of a system, e.g. the atmosphere, we are severely limited by the fact that we do not know with arbitrary accuracy the evolution equations and the initial conditions of the system. Indeed, one integrates a mathematical model given by a finite number of equations. The initial condition, a point in the phase space of the model, is determined only with a finite resolution (i.e. by a finite number of observations) (Monin 1973).
Using concepts from dynamical systems theory, some progress has been made in understanding the growth of an uncertainty during the time evolution. An infinitesimal initial uncertainty ($`\delta _0\to 0`$), in the limit of long times ($`t\to \mathrm{\infty }`$), grows exponentially in time with a typical rate given by the leading Lyapunov exponent $`\lambda `$, $`|\delta x(t)|\sim \delta _0\mathrm{exp}(\lambda t)`$. Therefore, if our purpose is to forecast the system within a tolerance $`\mathrm{\Delta }`$, the future state of the system can be predicted only up to the predictability time, given by:
$$T_p\sim \frac{1}{\lambda }\mathrm{ln}\left(\frac{\mathrm{\Delta }}{\delta _0}\right).$$
(1)
In the literature, the problem of predictability with respect to uncertainty on the initial conditions is referred to as predictability of the first kind.
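A simple numerical illustration makes the practical content of (1) clear: improving the knowledge of the initial state by three orders of magnitude, say from $`\mathrm{\Delta }/\delta _0=10^3`$ to $`\mathrm{\Delta }/\delta _0=10^6`$, lengthens the predictability time by only

$$\frac{1}{\lambda }\mathrm{ln}10^3\approx \frac{6.9}{\lambda },$$

i.e. a fixed number of characteristic times $`1/\lambda `$, no matter how small the initial error already is.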
In addition, in real systems we must also cope with the lack of knowledge of the evolution equations. Let us consider a system described by a differential equation:
$$\frac{d}{dt}𝐱(t)=𝐟(𝐱,t),\qquad 𝐱,𝐟\in \mathrm{}^n.$$
(2)
As a matter of fact we do not know exactly the equations, and we have to devise a model which is different from the true dynamics:
$$\frac{d}{dt}𝐱(t)=𝐟_ϵ(𝐱,t)\text{where}𝐟_ϵ(𝐱,t)=𝐟(𝐱,t)+ϵ\delta 𝐟(𝐱,t).$$
(3)
Therefore, it is natural to wonder about the relation between the true evolution (reference or true trajectory $`𝐱_T(t)`$) given by (2) and that one effectively computed (perturbed or model trajectory $`𝐱_M(t)`$) given by (3). This problem is referred to as predictability of the second kind.
Let us make some general remarks. At the foundation of the second kind predictability problem there is the issue of structural stability (Guckenheimer et al. 1983): since the evolution laws are known only with finite precision it is highly desirable that at least certain properties are not too sensitive to the details of the equations of motion. For example, in a system with a strange attractor, small generic changes in the evolution laws should not change drastically the dynamics (see Appendix A for a simple example with non generic perturbation).
In chaotic systems the effects of a small generic uncertainty on the evolution law are similar to those due to the finite precision on the initial condition (Crisanti et al. 1989). The model trajectory of the perturbed dynamics diverges exponentially from the reference one with a mean rate given by the Lyapunov exponent of the original system. The statistical properties (such as correlation functions and temporal averages) are not strongly modified. This last feature has been frequently related to the shadowing lemma (Guckenheimer et al. 1983; Ott 1993): almost all trajectories of the true system can be approximated by a trajectory of the perturbed system starting from a slightly different initial condition. However, as far as we know, the shadowing lemma can be proven only in special cases and therefore it cannot be straightforwardly invoked to explain the statistical reproducibility in a generic case. In addition, in real systems the size of an uncertainty on the evolution equations is determinable only a posteriori, based on the ability of the model equations to reproduce some of the features of the phenomenon.
In dynamical systems theory, the problems of first and second kind predictability are essentially understood in the limit of infinitesimal perturbations. However, even in this limit we must also consider the fluctuations of the rate of expansion, which can lead to relevant modifications of the predictability time (1), in particular for strongly intermittent systems (Benzi et al. 1985; Paladin et al. 1987; Crisanti et al. 1993).
As far as finite perturbations are concerned, the leading Lyapunov exponent is not relevant for the predictability issue. In the presence of many characteristic times and spatial scales, the Lyapunov exponent is related to the growth of small scale perturbations, which saturates on short times and has very little relevance for the growth of large scale perturbations (Leith and Kraichnan 1972; Monin 1973; Lorenz 1996). To overcome this shortcoming, a suitable characterization of the growth of non-infinitesimal perturbations, in terms of the Finite Size Lyapunov Exponent (FSLE), has recently been introduced (Aurell et al. 1996 and 1997).
Also in the case of second kind predictability one often has to deal with errors which are far from being infinitesimal. Typical examples are systems described by partial differential equations (e.g. turbulence, atmospheric flows). The study of these systems is performed by using a numerical model with unavoidable severe approximations, the most relevant of which is the necessity to cut some degrees of freedom off; basically, the small scale variables.
The aim of this paper is to analyze the effects of limited resolution on the large scale features. This raises two problems: in the first place one has to deal with perturbations of the evolution equations which in general cannot be considered small; second, the parameterization of the unresolved modes can be a subtle point. We shall show that the Finite Size Lyapunov Exponent is able to characterize the effects of uncertainty on the evolution laws. Moreover we shall discuss the typical difficulties arising in the parameterization of the unresolved scales.
This paper is organized as follows. In section II we report some known results about the predictability problem of the second kind and recall the definition of the FSLE. In section III we present numerical results on simple models. In section IV we consider more complex systems with many characteristic times. Section V summarizes the results. In Appendix A we illustrate a simple example of a structurally unstable system. In Appendix B we describe the method for the computation of the FSLE and in Appendix C we discuss the problem of the parameterization of the unresolved variables.
## II EFFECTS OF A SMALL UNCERTAINTY ON THE EVOLUTION LAW
In the second kind predictability problem, we can distinguish three general cases depending on the original dynamics. In particular, equation (2) may display:
(i) trivial attractors: asymptotically stable fixed points or attracting periodic orbits;
(ii) marginally stable fixed points or periodic/quasi-periodic orbits as in integrable Hamiltonian systems;
(iii) chaotic behavior.
In case (i) small changes in the equations of motion do not modify the qualitative features of the dynamics. Case (ii) is not generic and the outcome strongly depends on the specific perturbation $`\delta 𝐟`$, i.e. it is not structurally stable. In the chaotic case (iii) one expects that the perturbed dynamics is still chaotic. In this paper we will consider only this latter case.
Let us also mention that, in numerical computations of evolution equations (e.g. differential equations), there are two unavoidable sources of error: the finite precision representation of the numbers, which makes the computer phase space necessarily discrete, and the round-off, which introduces a sort of noise. Because of the discrete nature of the phase space of a system studied on a computer, numerically computed orbits have to be periodic. Nevertheless the period is usually very large, except at very low computer precision (Crisanti et al. 1989). We do not consider this source of difficulties here. The round-off produces on eq. (2) a perturbation which can be written as $`\delta \mathbf{f}(\mathbf{x},t)=\epsilon \,\mathbf{w}(\mathbf{x})\,\mathbf{f}(\mathbf{x},t)`$ with $`\epsilon \sim 10^{-\alpha }`$ ($`\alpha =`$ number of digits in the floating point representation), where $`\mathbf{w}=O(1)`$ is an unknown function which may depend on $`\mathbf{f}`$ and on the software of the computer (Knuth 1969). In general, the round-off error is very small and may even, like noise, play a positive role, as underlined by Ruelle (1979), in selecting the physical probability measure, the so-called natural measure, from the set of ergodic invariant measures.
In chaotic systems the effects of a small uncertainty on the evolution law are, in many respects, similar to those due to an imperfect knowledge of the initial conditions. This can be understood by the following example. Consider the Lorenz equations (Lorenz 1963)
$$\begin{array}{ccc}\frac{dx}{dt}\hfill & =\hfill & \sigma (y-x)\hfill \\ \frac{dy}{dt}\hfill & =\hfill & Rx-y-xz\hfill \\ \frac{dz}{dt}\hfill & =\hfill & xy-bz.\hfill \end{array}$$
(4)
In order to mimic an experimental error in the determination of the evolution law we consider a small error $`\epsilon `$ on the parameter $`R`$: $`R\to R+\epsilon `$. Let us consider the difference $`\mathrm{\Delta }\mathbf{x}(t)=\mathbf{x}_M(t)-\mathbf{x}_T(t)`$ with, for simplicity, $`\mathrm{\Delta }\mathbf{x}(0)=0`$, i.e. we assume a perfect knowledge of the initial conditions. One has, with obvious notation:
$$\frac{d\mathrm{\Delta }\mathbf{x}}{dt}=\mathbf{f}_\epsilon (\mathbf{x}_M)-\mathbf{f}(\mathbf{x}_T)\simeq \frac{\partial \mathbf{f}}{\partial \mathbf{x}}\mathrm{\Delta }\mathbf{x}+\frac{\partial \mathbf{f}_\epsilon }{\partial R}\epsilon .$$
(5)
At time $`t=0`$ one has $`|\mathrm{\Delta }\mathbf{x}(0)|=0`$, therefore $`|\mathrm{\Delta }\mathbf{x}(t)|`$ grows initially only by the effect of the second term in (5). At later times, when $`|\mathrm{\Delta }\mathbf{x}(t)|\sim O(\epsilon )`$, the leading term of (5) becomes the first one, and we recover the first kind predictability problem for an initial uncertainty $`\delta _0\sim \epsilon `$. Therefore, apart from an initial (not particularly interesting) growth, which depends strongly on the specific perturbation, the evolution of $`\langle \mathrm{ln}|\mathrm{\Delta }\mathbf{x}(t)|\rangle `$ follows the usual linear growth with the slope given by the leading Lyapunov exponent. Typically the value of the Lyapunov exponent computed by using the model dynamics differs from the true one by a small amount of order $`\epsilon `$, i.e. $`\lambda _M=\lambda _T+O(\epsilon )`$ (Crisanti et al. 1989).
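As a concrete illustration of this behaviour, the following minimal sketch (illustrative only; the integration scheme, time step and initial condition are our own choices, while the parameter values are taken from the caption of Figure 2) integrates the "true" Lorenz system (4) and a "model" system with $`R\to R+\epsilon `$ from the same initial condition and monitors the growth of $`|\mathrm{\Delta }\mathbf{x}(t)|`$.

```python
import numpy as np

def lorenz(state, sigma=10.0, R=45.0, b=8.0/3.0):
    # Right-hand side of the Lorenz equations (4).
    x, y, z = state
    return np.array([sigma * (y - x), R * x - y - x * z, x * y - b * z])

def rk4_step(f, state, dt, **kw):
    # Fourth-order Runge-Kutta step.
    k1 = f(state, **kw)
    k2 = f(state + 0.5 * dt * k1, **kw)
    k3 = f(state + 0.5 * dt * k2, **kw)
    k4 = f(state + dt * k3, **kw)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, eps = 1e-3, 1e-3
true = np.array([1.0, 1.0, 20.0])       # reference ("true") trajectory
model = true.copy()                      # same initial condition, perturbed R
for n in range(40000):
    true = rk4_step(lorenz, true, dt)
    model = rk4_step(lorenz, model, dt, R=45.0 + eps)
    if n % 2000 == 0:
        delta = np.linalg.norm(model - true)
        print(f"t = {n*dt:6.2f}   |Delta x| = {delta:.3e}")
```

After the initial growth driven by the model error, the separation grows exponentially with a rate close to the Lyapunov exponent of the unperturbed system, as discussed above.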
This consideration applies only to infinitesimal perturbations. The generalization to finite perturbations requires the extension of the Lyapunov exponent to finite errors. Let us now introduce the Finite Size Lyapunov Exponent for the predictability of finite perturbations. The definition of FSLE $`\lambda (\delta )`$ is given in terms of the “doubling time” $`T_r(\delta )`$, that is the time a perturbation of initial size $`\delta `$ takes to grow by a factor $`r`$ ($`>1`$):
$$\lambda (\delta )=\left\langle \frac{1}{T_r(\delta )}\right\rangle _t\mathrm{ln}r$$
(6)
where $`\langle \cdots \rangle _t`$ denotes an average with respect to the natural measure, i.e. along the trajectory (see Appendix B). For chaotic systems, in the limit of infinitesimal perturbations ($`\delta \to 0`$) $`\lambda (\delta )`$ is nothing but the leading Lyapunov exponent $`\lambda `$ (Benettin et al. 1980). Let us note that the above definition of $`\lambda (\delta )`$ is not appropriate to discriminate cases with $`\lambda =0`$ and $`\lambda <0`$, since the predictability time is positive by definition. Nevertheless this is not a limitation as long as we deal with chaotic systems.
In many realistic situations the error growth for infinitesimal perturbations is dominated by the fastest scales, which are typically the smallest ones (e.g. small scale turbulence). When $`\delta `$ is no longer infinitesimal, $`\lambda (\delta )`$ is given by the fully nonlinear evolution of the perturbation. In general $`\lambda (\delta )\le \lambda `$, according to the intuitive picture that large scales are more predictable. Outside the range of scales in which the error $`\delta `$ can be considered infinitesimal, the function $`\lambda (\delta )`$ depends on the details of the dynamics and, in principle, on the norm used. In fully developed turbulence one has the universal law $`\lambda (\delta )\sim \delta ^{-2}`$ in the inertial range (Aurell et al. 1996 and 1997). It is remarkable that this prediction, which can be obtained within the multifractal model of turbulence, is not affected by intermittency, and it gives the law originally proposed by Lorenz (1969). The behavior of $`\lambda (\delta )`$ as a function of $`\delta `$ gives important information on the characteristic times and scales of the system, and it has also been applied to passive transport in closed basins (Artale et al. 1997).
Let us now return to the example (4). We compute $`\lambda _{TT}(\delta )`$, the FSLE for the true equations, and $`\lambda _{TM}(\delta )`$, the FSLE computed following the distance between one true trajectory and one model trajectory starting at the same point. These are shown in Figure 2. The true FSLE $`\lambda _{TT}(\delta )`$ displays a plateau indicating a chaotic dynamics with leading Lyapunov exponent $`\lambda \simeq 1`$. Concerning the second kind predictability, for $`\delta >\epsilon `$ the second term in (5) becomes negligible and we observe the transition to the Lyapunov exponent, $`\lambda _{TM}(\delta )\simeq \lambda _{TT}(\delta )\simeq \lambda `$. In this range of errors the model system recovers the intrinsic predictability of the true system. For very small errors, $`\delta <\epsilon `$, where the growth of the error is dominated by the second term in (5), we have $`\lambda _{TM}(\delta )>\lambda _{TT}(\delta )`$.
This example shows that it is possible to recover the intrinsic predictability of a chaotic system even in presence of some uncertainty in the model equations.
The relevance of the above example is however limited by the fact that (4) does not involve different scales. To investigate the effect of spatial resolution on predictability let us consider the advection of Lagrangian tracers in a given Eulerian field. We study a time-dependent, two dimensional, velocity field given by the superposition of large scale (resolved) eddies and small scale (possibly unresolved) eddies.
The streamfunction we consider is a slight modification of a model originally proposed for chaotic advection in Rayleigh-Bénard convection (Solomon et al. 1988):
$$\mathrm{\Psi }(x,y,t)=\psi (x,y,t;k_L,\omega _L,B_L)+ϵ\psi (x,y,t;k_S,\omega _S,B_S)$$
(7)
with
$$\psi (x,y,t;k,\omega ,B)=\frac{1}{k}\mathrm{sin}\left\{k\left[x+B\mathrm{sin}(\omega t)\right]\right\}\mathrm{sin}\left(ky\right)$$
(8)
The first term represents the large-scale flow, i.e. the resolved part of the flow, the second one mimics the unresolved small scale term, and $`\epsilon `$ measures their relative amplitude. We choose $`k_S\gg k_L`$ and $`\omega _S\gg \omega _L`$ in order to have a sharp separation of space and time scales.
The Lagrangian tracers evolve according to the equations:
$$\frac{dx}{dt}=-\frac{\partial \psi }{\partial y},\qquad \frac{dy}{dt}=\frac{\partial \psi }{\partial x}.$$
(9)
We use the complete stream function (7) for the true dynamics and only the large-scale term for the model dynamics.
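A minimal sketch of this true-versus-model tracer experiment is given below (illustrative only: the simple Euler time stepping and the sign convention of eq. (9) as reconstructed above are assumptions; the parameter values follow the caption of Figure 3).

```python
import numpy as np

# Parameters as in the figure caption (k_L=1, w_L=1, B_L=0.3, k_S=4, w_S=4, B_S=0.3, eps=0.125).
PARAMS_L = dict(k=1.0, w=1.0, B=0.3)
PARAMS_S = dict(k=4.0, w=4.0, B=0.3)
EPS = 0.125

def grad_psi(x, y, t, k, w, B):
    # Analytic gradient of the single-cell streamfunction (8).
    phase = k * (x + B * np.sin(w * t))
    dpsi_dx = np.cos(phase) * np.sin(k * y)
    dpsi_dy = np.sin(phase) * np.cos(k * y)
    return dpsi_dx, dpsi_dy

def velocity(x, y, t, resolved_only=False):
    # "True" flow: large scale + eps * small scale; "model" flow: large scale only.
    dpx, dpy = grad_psi(x, y, t, **PARAMS_L)
    if not resolved_only:
        spx, spy = grad_psi(x, y, t, **PARAMS_S)
        dpx, dpy = dpx + EPS * spx, dpy + EPS * spy
    return -dpy, dpx            # (dx/dt, dy/dt) from the streamfunction, eq. (9)

def advect(x, y, t, dt, resolved_only=False):
    # Simple Euler step for the tracer (adequate for a qualitative sketch).
    u, v = velocity(x, y, t, resolved_only)
    return x + dt * u, y + dt * v

xt, yt = 0.3, 0.4      # true tracer
xm, ym = 0.3, 0.4      # model tracer (unresolved small scales dropped)
dt = 1e-3
for n in range(200000):
    t = n * dt
    xt, yt = advect(xt, yt, t, dt)
    xm, ym = advect(xm, ym, t, dt, resolved_only=True)
print("separation after t = 200:", np.hypot(xt - xm, yt - ym))
```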
The time dependence induces chaotic motion and diffusion in the $`x`$ direction, without inserting any noise term (Solomon et al. 1988). As far as $`\lambda _{TT}`$ is concerned, one observes three regimes (Figure 3). For very small errors $`\delta <2\pi /k_S`$ the exponential separation is ruled by the fastest scale and $`\lambda _{TT}\simeq \lambda `$, i.e. we recover the Lyapunov exponent of the system. At intermediate errors we observe a second small plateau corresponding to the large-scale component of the flow (the one retained in the model dynamics). For larger errors, $`\delta >2\pi /k_L`$, one has $`\lambda _{TT}\sim \delta ^{-2}`$, i.e. diffusive behavior (see Artale et al. 1997).
The model FSLE $`\lambda _{TM}(\delta )`$ cannot recover the small scale features: for $`\delta \lesssim \epsilon `$ we observe the scaling $`\lambda _{TM}(\delta )\sim \delta ^{-1}`$, which can be understood by the following argument. In this region the distance between the model and the true trajectories grows as $`d\delta /dt\sim \epsilon `$ and thus, by a dimensional estimate, one has:
$$\lambda _{TM}(\delta )\sim \frac{1}{T_r(\delta )}\sim \frac{\epsilon }{\delta }.$$
(10)
Nevertheless, for larger $`\delta `$ the model captures fairly well the small plateau displayed by $`\lambda _{TT}(\delta )`$, which corresponds to the slow time scale; at large $`\delta `$ (i.e. for $`\delta >2\pi /k_L`$) we recover the diffusive behavior with the correct diffusion coefficient. This last feature can be understood from the fact that the diffusion coefficient, being an asymptotic quantity of the flow, is not influenced by the details of the small scale structures.
This example is rather simple: large scales do not interact with the small ones and the number of degrees of freedom is very small. Therefore in this case the crude elimination of the small scale component does not prevent the possibility of a fair description of large scale features.
In the following we will consider more complex situations, in which strongly interacting degrees of freedom with different characteristic times are involved. In these cases the correct parameterization of the unresolved modes is crucial for the prediction of large scale behavior.
## III Systems with two time scales
Before analyzing in detail the effects of non infinitesimal perturbations of the evolution laws in some specific models let us clarify our aims. We consider a dynamical system written in the following form:
$$\begin{array}{ccc}\frac{d𝐱}{dt}\hfill & =\hfill & 𝐟(𝐱,𝐲)\hfill \\ \frac{d𝐲}{dt}\hfill & =\hfill & 𝐠(𝐱,𝐲),\hfill \end{array}$$
(11)
where $`\mathbf{f},\mathbf{x}\in \mathbb{R}^n`$ and $`\mathbf{g},\mathbf{y}\in \mathbb{R}^m`$, in general with $`n\ne m`$. Now, let us suppose that the fast variables $`\mathbf{y}`$ cannot be resolved: a typical example is given by the subgrid modes in the discretization of partial differential equations. In this framework, a natural question is: how must we parameterize the unresolved modes ($`\mathbf{y}`$) in order to predict the resolved modes ($`\mathbf{x}`$)?
As discussed by Lorenz (1996), to reproduce – at a qualitative level – a given phenomenology, e.g. the ENSO phenomenon, one can drop the small scale features without negative consequences. However, one unavoidably fails in forecasting ENSO (i.e. the actual trajectory) unless the small scale contributions are taken into account in a suitable way.
An example in which it is relatively simple to develop a model for the fast modes is represented by skew systems:
$$\begin{array}{ccc}\frac{d𝐱}{dt}\hfill & =\hfill & 𝐟(𝐱,𝐲)\hfill \\ \frac{d𝐲}{dt}\hfill & =\hfill & 𝐠(𝐲)\hfill \end{array}$$
(12)
In this case, the fast modes $`(𝐲)`$ do not depend on the slow ones $`(𝐱)`$. One can expect that in this case, neglecting the fast variables or parameterizing them with a suitable stochastic process, should not drastically affect the prediction of the slow variables (Boffetta et al. 1996).
On the other hand, if $`𝐲`$ feels some feedback from $`𝐱`$, we cannot simply neglect the unresolved modes. In Appendix C we discuss this point in detail. In practice one has to construct an effective equation for the resolved variables:
$$\frac{d𝐱}{dt}=𝐟_M(𝐱,𝐲(𝐱)),$$
(13)
where the functional form of $`𝐲(𝐱)`$ and $`𝐟_M`$ are found by phenomenological arguments and/or by numerical studies of the full dynamics.
Let us now investigate an example with a recently introduced toy model of the atmosphere circulation (Lorenz 1996; Lorenz et al. 1998) including large scales $`x_k`$ (synoptic scales) and small scales $`y_{j,k}`$ (convective scales):
$$\begin{array}{ccc}\frac{dx_k}{dt}\hfill & =\hfill & -x_{k-1}\left(x_{k-2}-x_{k+1}\right)-\nu x_k+F-\sum _{j=1}^{J}y_{j,k}\hfill \\ \frac{dy_{j,k}}{dt}\hfill & =\hfill & -cby_{j+1,k}\left(y_{j+2,k}-y_{j-1,k}\right)-c\nu y_{j,k}+x_k\hfill \end{array}$$
(14)
where $`k=1,\mathrm{},K`$ and $`j=1,\mathrm{},J`$. As in (Lorenz 1996) we assume periodic boundary conditions on $`k`$ ($`x_{K+k}=x_k`$, $`y_{j,K+k}=y_{j,k}`$) while for $`j`$ we impose $`y_{J+j,k}=y_{j,k+1}`$. The variables $`x_k`$ represent some large scale atmospheric quantities in $`K`$ sectors extending on a latitude circle, while the $`y_{j,k}`$ represent quantities on smaller scales in $`JK`$ sectors. The parameter $`c`$ is the ratio between fast and slow characteristic times and $`b`$ measures the relative amplitude.
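A compact sketch of the right-hand side of eq. (14) is given below (illustrative only: the vectorized index handling and the random test state are our own choices; the signs follow the reconstruction of eq. (14) above, and the parameters follow the caption of Figure 4). It also checks the conservation property of the advection terms discussed in the next paragraph.

```python
import numpy as np

K, J = 36, 10                        # sectors as in the figure caption
F, NU, B, C = 10.0, 1.0, 10.0, 10.0

def lorenz96_two_scale(x, y):
    """Right-hand side of eq. (14): x has shape (K,), y has shape (J, K)."""
    dx = (-np.roll(x, 1) * (np.roll(x, 2) - np.roll(x, -1))
          - NU * x + F - y.sum(axis=0))
    # The boundary condition y_{J+j,k} = y_{j,k+1} makes the fast variables a single
    # periodic chain of length J*K when ordered with j running fastest.
    yf = y.reshape(J * K, order="F")
    dyf = (-C * B * np.roll(yf, -1) * (np.roll(yf, -2) - np.roll(yf, 1))
           - C * NU * yf + np.repeat(x, J))
    return dx, dyf.reshape(J, K, order="F")

# Consistency check: with F = 0 and nu = 0 the quadratic (advection) and coupling terms
# conserve sum_k (x_k^2 + sum_j y_{j,k}^2), so 2*(x.dx + y.dy) must vanish.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(K), rng.standard_normal((J, K))
F, NU = 0.0, 0.0
dx, dy = lorenz96_two_scale(x, y)
print("energy derivative (should be ~0):", 2 * (np.sum(x * dx) + np.sum(y * dy)))
```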
As pointed out by Lorenz, this model shares some basic properties with more realistic models of the atmosphere. In particular, the non-linear terms, which model the advection, are quadratic and conserve the total kinetic energy $`\sum _k(x_k^2+\sum _jy_{j,k}^2)`$ in the unforced ($`F=0`$), inviscid ($`\nu =0`$) limit; the linear terms containing $`\nu `$ mimic dissipation and the constant term $`F`$ acts as an external forcing preventing the total energy from decaying.
If one is interested in forecasting the large scale behavior of the atmosphere by using only the slow variables, a natural choice for the model equations is:
$$\frac{dx_k}{dt}=-x_{k-1}\left(x_{k-2}-x_{k+1}\right)-\nu x_k+F-G_k(\mathbf{x}),$$
(15)
where $`G_k(𝐱)`$ represents the parameterization of the fast components in (14) (see Appendix C).
The FSLE for the true system (Boffetta et al. 1998) is shown in Figure 4 and displays the two characteristic plateaus, corresponding to the fast component (for $`\delta \lesssim 0.1`$) and to the slow component at large $`\delta `$. Figure 4 also shows what happens when one simply neglects the fast components $`y_{j,k}`$ (i.e. $`\mathbf{G}(\mathbf{x})=0`$). At very small $`\delta `$ one has $`\lambda _{TM}(\delta )\sim \delta ^{-1}`$, as previously discussed. For large errors we observe that, with this rough approximation, we are not able to capture the characteristic predictability of the original system. More refined parameterizations in terms of stochastic processes with the correct probability distribution function and correlation times do not improve the forecasting ability.
The reason for this failure is the presence of a feedback term in equations (14), which induces strong correlations between the variable $`x_k`$ and the unresolved coupling $`\sum _{j=1}^{J}y_{j,k}`$. For a proper parameterization of the unresolved variables we follow the strategy discussed in Appendix C. Basically we adopt
$$G_k(\mathbf{x})=\nu _ex_k,$$
(16)
in which $`\nu _e`$ is a numerically determined parameter. Figure 4 shows that, although the small scales are not resolved, the large scale predictability is well reproduced and one has $`\lambda _{TM}(\delta )\simeq \lambda _{TT}(\delta )`$ for large $`\delta `$. We conclude this section by observing that the proposed parameterization (16) is a sort of eddy viscosity parameterization.
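For completeness, a matching sketch of the effective slow-variable model, eqs. (15) and (16), is given below (again illustrative; $`\nu _e=4`$ is the value quoted in the caption of Figure 4).

```python
import numpy as np

F, NU, NU_E = 10.0, 1.0, 4.0     # nu_e = 4 as quoted in the figure caption

def lorenz96_truncated(x):
    # Effective slow-variable model, eqs. (15)-(16): the unresolved coupling is
    # replaced by the eddy-viscosity-like term G_k(x) = nu_e * x_k.
    return (-np.roll(x, 1) * (np.roll(x, 2) - np.roll(x, -1))
            - NU * x + F - NU_E * x)
```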
## IV Large scale predictability in a turbulence model
We now consider a more complex system which mimics the energy cascade in fully developed turbulence. The model belongs to the class of the so-called shell models, introduced some years ago for a dynamical description of small-scale turbulence. For a recent review on shell models see Bohr et al. 1998. This model has relatively few degrees of freedom but involves many characteristic scales and times. The velocity field is assumed isotropic and is decomposed into a finite set of complex velocity components $`u_n`$ representing the typical turbulent velocity fluctuation on a "shell" of scale $`\ell _n=1/k_n`$. In order to reach very high Reynolds numbers with a moderate number of degrees of freedom, the scales are geometrically spaced as $`k_n=k_02^n`$ ($`n=1,\mathrm{},N`$).
The specific model here considered has the form (L’vov et al. 1998)
$$\frac{du_n}{dt}=i\left(k_{n+1}u_{n+1}^{*}u_{n+2}-\frac{1}{2}k_nu_{n-1}^{*}u_{n+1}+\frac{1}{2}k_{n-1}u_{n-2}u_{n-1}\right)-\nu k_n^2u_n+f_n$$
(17)
where $`\nu `$ represents the kinematic viscosity and $`f_n`$ is a forcing term, restricted to the first two shells only (in order to mimic large scale energy injection).
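A minimal sketch of the right-hand side of eq. (17) is given below (illustrative only: the treatment of the boundary shells via zero "ghost" shells, the random test state and the forcing amplitude are assumptions; $`N`$, $`k_0`$ and $`\nu `$ follow the figure captions).

```python
import numpy as np

N, K0, NU = 24, 0.05, 1e-7                 # values quoted in the figure captions
k = K0 * 2.0 ** np.arange(1, N + 1)        # k_n = k_0 * 2^n, n = 1..N

def sabra_rhs(u, f):
    # Shell model (17); u and f are complex arrays of length N, shells outside 1..N vanish.
    up = np.concatenate([u, [0.0, 0.0]])   # padding so that u[n+1], u[n+2] vanish beyond N
    um = np.concatenate([[0.0, 0.0], u])   # padding so that u[n-1], u[n-2] vanish below 1
    kp = np.concatenate([k, [0.0]])
    km = np.concatenate([[0.0], k])
    nl = np.empty(N, dtype=complex)
    for n in range(N):
        nl[n] = 1j * (kp[n + 1] * np.conj(up[n + 1]) * up[n + 2]
                      - 0.5 * k[n] * np.conj(um[n + 1]) * up[n + 1]
                      + 0.5 * km[n] * um[n] * um[n + 1])
    return nl - NU * k ** 2 * u + f

f = np.zeros(N, dtype=complex)
f[:2] = 0.005 * (1 + 1j)                   # forcing on the first two shells (amplitude assumed)
u0 = 1e-3 * (np.random.randn(N) + 1j * np.random.randn(N))
print(sabra_rhs(u0, f)[:3])
```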
Without entering into the details, we recall that the Shell Model (17) displays an energy cascade à la Kolmogorov from the large scales (small $`n`$) to the dissipative scales ($`n`$ close to $`N`$) with a statistically stationary energy flux. Scaling laws for the averaged velocity components are observed:
$$\langle |u_n|^p\rangle \sim k_n^{-\zeta _p}$$
(18)
with exponents close to the Kolmogorov 1941 values $`\zeta _p=p/3`$.
From a dynamical point of view, model (17) displays complex chaotic behavior which is responsible for the small deviations of the scaling exponents (intermittency) with respect to the Kolmogorov values. Neglecting these (small) intermittency effects, a dimensional estimate of the characteristic time (eddy turnover time) at scale $`n`$ gives
$$\tau _n\sim \frac{\ell _n}{|u_n|}\sim k_n^{-2/3}.$$
(19)
The scaling behavior holds up to the Kolmogorov scale $`\eta =1/k_d`$, defined as the scale at which the dissipative term in (17) becomes relevant. The Lyapunov exponent of the turbulence model can be estimated as the inverse of the fastest characteristic time $`\tau _d`$, and one has the prediction (Ruelle 1979)
$$\lambda \sim \frac{1}{\tau _d}\sim Re^{1/2}$$
(20)
where we have introduced the Reynolds number $`Re\sim 1/\nu `$. It is possible to predict the behavior of the FSLE by observing that the fastest scale $`k_n`$ at which an error of size $`\delta `$ is still active (i.e. below saturation) is such that $`u_n\sim \delta `$. Thus $`\lambda (\delta )\sim 1/\tau _n`$ and, using Kolmogorov scaling, one obtains (Aurell et al. 1996 and 1997)
$$\lambda _{TT}(\delta )\sim \{\begin{array}{ccc}\lambda \hfill & \text{for}& \delta \ll u_d\hfill \\ \delta ^{-2}\hfill & \text{for}& u_d\ll \delta \ll u_0\hfill \end{array}$$
(21)
To be more precise, there is an intermediate range between the two regimes shown in (21). For a discussion on this point see (Aurell et al. 1996 and 1997).
In order to simulate a finite resolution in the model, we consider a modelization of (17) in terms of an eddy viscosity (Benzi et al. 1998)
$$\frac{du_n}{dt}=i\left(k_{n+1}u_{n+1}^{*}u_{n+2}-\frac{1}{2}k_nu_{n-1}^{*}u_{n+1}+\frac{1}{2}k_{n-1}u_{n-2}u_{n-1}\right)-\nu _n^{(e)}k_n^2u_n+f_n$$
(22)
where now $`n=1,\mathrm{},N_M<N`$ and the eddy viscosity, restricted to the last two shells, has the form
$$\nu _n^{(e)}=\kappa \frac{|u_n|}{k_n}\left(\delta _{n,N_M-1}+\delta _{n,N_M}\right)$$
(23)
where $`\kappa `$ is a constant of order $`1`$ (see Appendix C). The model equations (22) are the analogue, for the Shell Model, of a large eddy simulation (LES), which is one of the most popular numerical methods for integrating large scale flows. Thus, although Shell Models are not realistic models for large scale geophysical flows (while being good models of small scale turbulent fluctuations), the study of the effect of the truncation in terms of an eddy viscosity is of general interest.
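In practice the truncation (22) and (23) amounts to replacing the molecular viscosity on the last two resolved shells by a velocity-dependent term, as in the sketch below ($`\kappa =0.4`$ is the value quoted in the caption of Figure 5; the truncation level is an arbitrary choice here).

```python
import numpy as np

KAPPA, N_M, K0 = 0.4, 15, 0.05
k = K0 * 2.0 ** np.arange(1, N_M + 1)

def eddy_viscosity(u):
    # Eq. (23): the enhanced viscosity acts only on the last two resolved shells.
    nu_e = np.zeros(N_M)
    nu_e[-2:] = KAPPA * np.abs(u[-2:]) / k[-2:]
    return nu_e   # enters the truncated model (22) as -nu_e * k**2 * u
```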
In Figure 5 we show $`\lambda _{MM}(\delta )`$, i.e. the FSLE computed for the model equations (22) with $`N=24`$ at different resolutions $`N_M=9,15,20`$. A plateau is detected for small amplitudes of the error $`\delta `$, corresponding to the leading Lyapunov exponent, which increases with increasing resolution – being proportional to the inverse of the fastest timescale – according to $`\lambda \sim k_{N_M}^{2/3}`$. At larger $`\delta `$ the curves collapse onto $`\lambda _{TT}(\delta )`$, showing that the large-scale statistics of the model is not affected by the small-scale resolution.
The capability of the model to satisfactorily predict the "true" dynamics is, however, determined not by $`\lambda _{MM}(\delta )`$ but by $`\lambda _{TM}(\delta )`$, which is shown in Figure 6.
Increasing the resolution $`N_M=9,15,20`$ towards the fully resolved case $`N=24`$, the model improves, in agreement with the expectation that $`\lambda _{TM}`$ approaches $`\lambda _{TT}`$ for a perfect model. At large $`\delta `$ the curves practically coincide, showing that the predictability time for large error sizes (associated with the large scales) is independent of the details of the small-scale modeling. Better resolved models achieve $`\lambda _{TM}\simeq \lambda _{TT}`$ down to smaller values of the error $`\delta `$.
## V Conclusions
In this Paper the effects of the uncertainty of the evolution laws on the predictability properties are investigated and quantitatively characterized by means of the Finite Size Lyapunov Exponent. In particular, we have considered systems involving several characteristic scales and times. In these cases, it is rather natural to investigate what is the effect of small scale parameterization on large scale dynamics.
It has been shown that in systems where the feedback of the large scales on the small ones is negligible, the dynamics of the latter can be discarded altogether without affecting the statistical features of the large scales or the ability to forecast them. On the other hand, when this feedback is present, the crude approximation of cutting off the small scale variables is no longer acceptable. In this case one has to model the action of the fast modes (small scales) on the slow modes (large scales) with some effective term, in order to recover a satisfactory forecasting of the large scales. The renowned eddy-viscosity modelization is an instance of the general modeling scheme discussed here.
## VI Acknowledgments
We thank L. Biferale for useful suggestions and discussions. This work was partially supported by INFM (Progetto Ricerca Avanzata TURBO) and by MURST (program 9702265437). A special acknowledgment goes to B. Marani for warm and continuous support.
## A An example of structural unstable system
In order to see that a non generic perturbation, although very “small”, can produce dramatic changes in the dynamics, let us discuss a simple example following (Berkooz 1994; Holmes et al. 1996). We consider the one-dimensional chaotic map $`x_{t+1}=f(x_t)`$ with $`f(x)=4x`$ mod $`1`$, and a perturbed version of it:
$$f_p(x)=\{\begin{array}{cc}8x-\frac{9}{2}\hfill & x\in [\frac{5}{8},\frac{247}{384}]\hfill \\ & \\ \frac{1}{2}x+\frac{1}{3}\hfill & x\in [\frac{247}{384},\frac{265}{384}]\hfill \\ & \\ 8x-\frac{29}{6}\hfill & x\in [\frac{265}{384},\frac{17}{24}]\hfill \\ & \\ 4x\text{ mod }1\hfill & \text{otherwise}.\hfill \end{array}$$
(A1)
The perturbed map is identical to the original outside the interval $`[5/8,17/24]`$, and the perturbation is very small in $`L_2`$ norm. Nevertheless, the fixed point $`x=\frac{2}{3}`$, which is unstable in the original dynamics, becomes stable in the perturbed one. Moreover it is a global attractor for $`f_p(x)`$, i.e. almost every point in $`[0,1]`$ asymptotically approaches $`x=\frac{2}{3}`$ (see Figure 1).
Now, if one compares the trajectories obtained by iterating $`f(x)`$ or $`f_p(x)`$, it is not difficult to see that orbits starting outside $`[5/8,17/24]`$ remain identical for a certain time but unavoidably end up differing utterly in their long time behavior. It is easy to realize that the transient chaotic behavior of the perturbed orbits can be made arbitrarily long by reducing the interval in which the two dynamics differ. This example shows how even an ostensibly small perturbation (in the usual norms) can dramatically modify the dynamics.
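A quick check of (A1) in exact rational arithmetic (a sketch; the chosen starting point is arbitrary) confirms that $`x=2/3`$ remains a fixed point of $`f_p`$ and that nearby points are contracted towards it with rate $`1/2`$:

```python
from fractions import Fraction as Fr

def f_p(x):
    # Perturbed map (A1), evaluated with exact rational arithmetic.
    if Fr(5, 8) <= x <= Fr(247, 384):
        return 8 * x - Fr(9, 2)
    if Fr(247, 384) < x <= Fr(265, 384):
        return Fr(1, 2) * x + Fr(1, 3)
    if Fr(265, 384) < x <= Fr(17, 24):
        return 8 * x - Fr(29, 6)
    return (4 * x) % 1                      # unchanged outside [5/8, 17/24]

x_star = Fr(2, 3)
print(f_p(x_star) == x_star)                # 2/3 is still a fixed point -> True
x = Fr(255, 384)                            # a point inside the contracting branch
for _ in range(8):
    x = f_p(x)
    print(float(abs(x - x_star)))           # the distance to 2/3 halves at every step
```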
## B Computation of the Finite size Lyapunov exponent
In this appendix we discuss in detail the computation of the Finite Size Lyapunov Exponent for both continuous dynamics (differential equations) and discrete dynamics (maps).
The practical method for computing the FSLE goes as follows. Having defined a norm for the distance $`\delta (t)`$ between the reference and perturbed trajectories, one defines a series of thresholds $`\delta _n=r^n\delta _0`$ ($`n=1,\mathrm{},N`$), and measures the "doubling times" $`T_r(\delta _n)`$ that a perturbation of size $`\delta _n`$ takes to grow up to $`\delta _{n+1}`$. The threshold ratio $`r`$ should not be taken too large, because otherwise the error has to grow through different scales before reaching the next threshold. On the other hand, $`r`$ cannot be too close to one, because otherwise the doubling time would be of the order of the time step of the integration. In our examples we typically use $`r=2`$ or $`r=\sqrt{2}`$. For simplicity $`T_r`$ is called the "doubling time" even when $`r\ne 2`$.
The doubling times $`T_r(\delta _n)`$ are obtained by following the evolution of the separation from its initial size $`\delta _{min}\ll \delta _0`$ up to the largest threshold $`\delta _N`$. This is done by integrating the two trajectories of the system starting at an initial distance $`\delta _{min}`$. In general, one must choose $`\delta _{min}\ll \delta _0`$, in order to allow the direction of the initial perturbation to align with the most unstable direction in phase space. Moreover, one must pay attention to keep $`\delta _N<\delta _{saturation}`$, so that all the thresholds can be attained ($`\delta _{saturation}`$ is the typical distance of two uncorrelated trajectories, i.e. the size of the attractor). For the second kind predictability problem, i.e. the computation of $`\lambda _{TM}(\delta )`$, one can safely take $`\delta _{min}=0`$ because this does not prevent the separation of the trajectories.
The evolution of the error from the initial value $`\delta _{min}`$ to the largest threshold $`\delta _N`$ carries out a single error-doubling experiment. At this point one rescales the model trajectory at the initial distance $`\delta _{min}`$ with respect to the true trajectory and starts another experiment. After $`𝒩`$ error-doubling experiments, we can estimate the expectation value of some quantity $`A`$ as:
$$\langle A\rangle _e=\frac{1}{\mathcal{N}}\sum _{i=1}^{\mathcal{N}}A_i.$$
(B1)
This is not the same as taking the time average as in (6), because different error-doubling experiments may take different times. Indeed we have
$$\langle A\rangle _t=\frac{1}{T}\int _0^TA(t)dt=\frac{\sum _iA_i\tau _i}{\sum _i\tau _i}=\frac{\langle A\tau \rangle _e}{\langle \tau \rangle _e}.$$
(B2)
In the particular case in which $`A`$ is the doubling time itself we have from (6) and (B2)
$$\lambda (\delta _n)=\frac{1}{\langle T_r(\delta _n)\rangle _e}\mathrm{ln}r.$$
(B3)
The method described above assumes that the distance between the two trajectories is continuous in time. This is not true for maps or for discrete sampling in time, and the method has to be slightly modified. In this case $`T_r(\delta _n)`$ is defined as the minimum time at which $`\delta (T_r)\ge r\delta _n`$. Because now $`\delta (T_r)`$ is a fluctuating quantity, from (B2) we have
$$\lambda (\delta _n)=\frac{1}{\langle T_r(\delta _n)\rangle _e}\left\langle \mathrm{ln}\left(\frac{\delta (T_r)}{\delta _n}\right)\right\rangle _e.$$
(B4)
We conclude by observing that the computation of the FSLE is not more expensive than the computation of the Lyapunov exponent by the standard algorithm. One simply has to integrate two copies of the system (or two different systems for second kind predictability), and this can also be done for very complex simulations.
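The procedure described in this appendix is summarized by the following sketch (illustrative only: it uses the Lorenz system (4) as a test case, a fixed-step Runge-Kutta integrator, and arbitrary values for $`\delta _0`$, $`\delta _{min}`$ and the number of error-doubling experiments).

```python
import numpy as np

def lorenz(s, sigma=10.0, R=45.0, b=8.0/3.0):
    # Lorenz equations (4), used here only as a convenient chaotic test system.
    x, y, z = s
    return np.array([sigma * (y - x), R * x - y - x * z, x * y - b * z])

def rk4(s, dt):
    k1 = lorenz(s); k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2); k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def fsle(delta0=1e-6, r=2.0, n_thresh=12, n_exp=50, dt=1e-3, d_min=1e-9):
    """FSLE from doubling times: lambda(delta_n) = ln(r) / <T_r(delta_n)>_e, eq. (B3)."""
    thresholds = delta0 * r ** np.arange(n_thresh)
    rng = np.random.default_rng(1)
    ref = np.array([1.0, 1.0, 20.0])
    for _ in range(20000):                  # transient: relax onto the attractor
        ref = rk4(ref, dt)
    T = np.zeros(n_thresh)
    for _ in range(n_exp):
        pert = ref + d_min * rng.standard_normal(3)
        while np.linalg.norm(pert - ref) < thresholds[0]:   # untimed growth d_min -> delta_0
            ref, pert = rk4(ref, dt), rk4(pert, dt)
        for n in range(n_thresh):           # timed growth delta_n -> r * delta_n
            t = 0.0
            while np.linalg.norm(pert - ref) < r * thresholds[n]:
                ref, pert = rk4(ref, dt), rk4(pert, dt)
                t += dt
            T[n] += t
    return thresholds, np.log(r) / (T / n_exp)

for d, lam in zip(*fsle()):
    print(f"delta = {d:9.3e}   lambda(delta) = {lam:6.3f}")
```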
## C Parameterization of small scales
Typically a realistic problem (e.g. turbulence) involves many interacting degrees of freedom with different characteristic times. Let us indicate with $`𝐳`$ the state of the system under consideration, with an evolution law:
$$\frac{\mathrm{d}\mathbf{z}}{\mathrm{d}t}=\mathbf{F}(\mathbf{z}),\qquad \mathbf{F},\mathbf{z}\in \mathbb{R}^N.$$
(C1)
The dynamical variables $`𝐳`$ can be split in two sets:
$$𝐳=(𝐱,𝐲),$$
(C2)
where $`\mathbf{x}\in \mathbb{R}^n`$ and $`\mathbf{y}\in \mathbb{R}^m`$ ($`N=n+m`$) are, respectively, the "slow" and the "fast" variables. The distinction between slow and fast variables is often largely arbitrary.
The evolution equation (C1) is divided into two blocks, the first one containing the dynamics of the slow variables, the second one associated with the dynamics of the fast variables
$$\{\begin{array}{c}\frac{d𝐱}{dt}=𝐅_\mathrm{𝟏}(𝐱)+𝐅_\mathrm{𝟐}(𝐱,𝐲)\\ \frac{d𝐲}{dt}=\stackrel{~}{𝐅}_\mathrm{𝟏}(𝐱,𝐲)+\stackrel{~}{𝐅}_\mathrm{𝟐}(𝐲)\end{array}$$
(C3)
If one is interested only in the slow variables it is necessary to write an "effective" equation for $`\mathbf{x}`$. As far as we know, there is only one case in which it is simple to find the effective equations for $`\mathbf{x}`$. If the characteristic times of the fast variables are much smaller than those of the slow variables (adiabatic limit), one can write:
$$𝐲=<𝐲>+𝜼(t)$$
(C4)
where $`𝜼`$ is a Wiener process, i.e. a zero mean Gaussian process with
$$<\eta _i(t)\eta _j(t^{\prime })>=<\delta y_i^2>\delta _{ij}\delta (t-t^{\prime }).$$
(C5)
Therefore one obtains for the slow variables:
$$\frac{\mathrm{d}𝐱}{\mathrm{d}t}=𝐅_\mathrm{𝟏}(𝐱)+\delta 𝐅_\mathrm{𝟏}(𝐱)+\delta 𝐖(𝐱,𝜼)$$
(C6)
where $`\delta \mathbf{F}_\mathbf{1}(\mathbf{x})=\mathbf{F}_\mathbf{2}(\mathbf{x},<\mathbf{y}>)+\delta \mathbf{F}_\mathbf{2}`$, with $`\delta F_{2,j}=\frac{1}{2}\sum _i\partial ^2F_{2,j}/\partial y_i^2<\delta y_i^2>`$ and $`\delta W_j=\sum _i\partial F_{2,j}/\partial y_i|_{<\mathbf{y}>}\eta _i(t)`$. Basically the slow variables $`\mathbf{x}`$ obey a non-linear Langevin equation.
Here the role of the fast degrees of freedom becomes relatively simple: they give small changes to the drift, $`\mathbf{F}_\mathbf{1}\to \mathbf{F}_\mathbf{1}+\delta \mathbf{F}_\mathbf{1}`$, and a noise term $`\delta \mathbf{W}(\mathbf{x},\eta )`$. We remark that the validity of the above argument is rather limited. Even if one has a large time scale separation, the statistics of the fast variables can be very far from Gaussian. In particular, in systems with feedback ($`\stackrel{~}{\mathbf{F}}_1\ne 0`$) one cannot model the fast variables $`\mathbf{y}`$ independently of the resolved variables $`\mathbf{x}`$.
In the generic situation the construction of the effective equation for $`\mathbf{x}`$ requires phenomenological arguments which depend on the physical mechanism of the particular problem. For example, for the Lorenz '96 model discussed in sect. III, where $`F_{2,k}(\mathbf{x},\mathbf{y})=-\sum _{j=1}^{J}y_{j,k}`$, we use the following procedure for the parameterization of the fast variables and the construction of the effective equation for $`\mathbf{x}`$. Instead of assuming (C4) we mimic the fast variables in terms of the slow ones:
$$𝐲(t)=𝐠(𝐱(t))=<𝐲|𝐱(t)>+\eta (t)$$
(C7)
where $`<|𝐱>`$ stands for the conditional average and $`\eta (t)`$ is a noise term. Inserting (C7) into the first of (C3) one obtains
$$\frac{\mathrm{d}𝐱}{\mathrm{d}t}=𝐅_1(𝐱)+𝐅_2(𝐱,𝐲)=𝐅_1(𝐱)+𝐅_2(𝐱,<𝐲|𝐱>)+\delta 𝐅_2(𝐱)$$
(C8)
where
$$\delta F_{2,i}=\sum _{j,k}\frac{\partial ^2F_{2,i}}{\partial y_j\partial y_k}\bigg|_{\mathbf{y}=<\mathbf{y}|\mathbf{x}>}\eta _j\eta _k$$
(C9)
In the Lorenz '96 model (14), because of the linear coupling between the different scales, the terms $`\delta \mathbf{F}_2`$ are absent and one has a closed model for the large scale variables
$$\frac{\mathrm{d}𝐱}{\mathrm{d}t}=𝐅_\mathrm{𝟏}(𝐱)+𝐅_\mathrm{𝟐}(𝐱,<𝐲|𝐱>)$$
(C10)
The ansatz (C7) is well verified in the numerical simulations. We have computed $`\lambda _{TM}(\delta )`$ by using a best fit for $`\mathbf{F}_\mathbf{2}`$ and we have obtained a good reproduction of $`\lambda _{TT}(\delta )`$ for large $`\delta `$. In the Lorenz '96 model (14), where the coupling between slow and fast variables is practically linear, one has $`F_{2,k}(\mathbf{x},<\mathbf{y}|\mathbf{x}>)=-\sum _{j=1}^{J}<y_{j,k}|x_k>\simeq -\nu _ex_k`$.
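The conditional-average closure can be estimated directly from a long integration of the full model, as in the following sketch (illustrative only: the integration scheme, run length and sampling are our own choices, and the signs follow the reconstruction of eq. (14) in Sec. III; in the Lorenz '96 case the slope of $`\sum _jy_{j,k}`$ versus $`x_k`$ plays the role of $`\nu _e`$).

```python
import numpy as np

K, J, F, NU, B, C = 36, 10, 10.0, 1.0, 10.0, 10.0

def rhs(z):
    # Full two-scale Lorenz '96 model (14); z = [x (K values), y (J*K values, j fastest)].
    x, yf = z[:K], z[K:]
    y_sum = yf.reshape(J, K, order="F").sum(axis=0)
    dx = -np.roll(x, 1) * (np.roll(x, 2) - np.roll(x, -1)) - NU * x + F - y_sum
    dyf = (-C * B * np.roll(yf, -1) * (np.roll(yf, -2) - np.roll(yf, 1))
           - C * NU * yf + np.repeat(x, J))
    return np.concatenate([dx, dyf])

def rk4(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
z = np.concatenate([F + 0.1 * rng.standard_normal(K), 0.1 * rng.standard_normal(J * K)])
dt, xs, gs = 1e-3, [], []
for n in range(300000):
    z = rk4(z, dt)
    if n > 100000 and n % 50 == 0:          # discard the transient, then sample
        x, yf = z[:K], z[K:]
        xs.append(x.copy())
        gs.append(yf.reshape(J, K, order="F").sum(axis=0))
X, G = np.concatenate(xs), np.concatenate(gs)
print("estimated nu_e =", X @ G / (X @ X))  # least-squares slope of sum_j y_{j,k} versus x_k
```

An estimate of this kind is what fixes the value $`\nu _e\simeq 4`$ used in Figure 4.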
Now we discuss the case of the Shell Model parameterization, which pertains to the general issue of subgrid-scale modeling. The literature on this field and on related problems (e.g. closure in fully developed turbulence) is enormous and we do not attempt to discuss it in detail here. Let us only recall the basic idea, introduced over a century ago by Boussinesq and later developed further by Taylor, Prandtl and Heisenberg – to cite some of the most famous names – for fully developed turbulence (Frisch 1995). In a nutshell, the idea is to mimic the energy flux from the large to the small scales (in our terms, from slow to fast variables) by an effective dissipation: the effect of the small scales on the large ones can be modeled as an enhanced molecular viscosity.
By simple dimensional arguments one can argue that the effects of small scales can be replaced by an effective viscosity at scales $`r`$, given by
$$\nu ^{(e)}\sim r\delta v(r)$$
(C11)
where $`\delta v(r)`$ is the velocity fluctuation on the scale $`r`$.
The above argument for the Shell Model (17) gives (Benzi et al. 1998):
$$\nu _n^{(e)}=\kappa \frac{|u_n|}{k_n}$$
(C12)
where $`\kappa \sim O(1)`$ is an empirical constant. From eq. (C11) one could naively think of using a dimensional argument à la Kolmogorov to set a constant eddy viscosity $`\nu _n^{(e)}\sim k_n^{-4/3}`$. In this way, however, one forgets the dynamics, and this can cause numerical blow up. More sophisticated arguments that do not include the dynamics lead to similar problems.
Let us remark that the parameterization (C12) is not exactly identical to those obtained by closure approaches, where the eddy viscosity is given in terms of averaged quantities. In our case this would mean writing $`<|u_n|^2>^{1/2}`$ instead of $`|u_n|`$ in (C12).
After this discussion it is easy to recognize that the parameterization in terms of conditional averages introduced for the Lorenz ’96 model is, a posteriori, an eddy viscosity model.
FIGURE CAPTIONS
* The map $`f_p`$ of equation (A1) (solid line) and the original chaotic map $`f`$ (dashed line).
* Finite Size Lyapunov Exponents $`\lambda _{TT}(\delta )`$ ($`+`$) and $`\lambda _{TM}(\delta )`$ ($`\times `$) versus $`\delta `$ for the Lorenz model (4) with $`\sigma =c=10`$, $`b=8/3`$, $`R=45`$ and $`ϵ=0.001`$. The dashed line represents the leading Lyapunov exponent for the unperturbed system ($`\lambda 1.2`$). The statistics is over $`10^4`$ realizations.
* $`\lambda _{TT}(\delta )`$ (crosses, $`\times `$) and $`\lambda _{TM}(\delta )`$ (open squares, $`\mathrm{}`$) versus $`\delta `$ for the Rayleigh-Bénard model (7) with $`C=0.5`$, $`k_L=1`$, $`\omega _L=1`$, $`B_L=0.3`$, $`k_S=4`$, $`\omega _S=4`$, $`B_S=0.3`$ and $`ϵ=0.125`$. The straight line indicates the $`\delta ^{-2}`$ slope. The statistics is over $`10^4`$ realizations.
* Finite Size Lyapunov Exponents for the Lorenz ’96 model $`\lambda _{TT}(\delta )`$ (solid line) and $`\lambda _{TM}(\delta )`$ versus $`\delta `$ obtained by dropping the fast modes ($`+`$) and with eddy viscosity parameterization ($`\times `$) as discussed in (15) and (16). The parameters are $`F=10`$, $`K=36,J=10`$, $`\nu =1`$ and $`c=b=10`$, implying that the typical $`y`$ variable is $`10`$ times faster and smaller than the $`x`$ variable. The value of the parameter $`\nu _e=4`$ is chosen after a numerical integration of the complete equations as discussed in Appendix C. The statistics is over $`10^4`$ realizations.
* The FSLE for the eddy-viscosity shell model (22) $`\lambda _{MM}(\delta )`$ at various resolutions $`N_M=9(+),15(\times ),20()`$. For comparison it is drawn the FSLE $`\lambda _{TT}(\delta )`$ (continuous line). Here $`\kappa =0.4`$, $`k_0=0.05`$.
* The FSLE between the eddy-viscosity shell model and the full shell model $`\lambda _{TM}(\delta )`$, at various resolutions $`N_M=9(+),15(\times ),20()`$. For comparison it is drawn the FSLE $`\lambda _{TT}(\delta )`$ (continuous line). The total number of shell for the complete model is $`N=24`$, with $`k_0=0.05`$, $`\nu =10^7`$.
# Measurement of the Top Quark Pair Production Cross Section in the All-Jets Decay Channel
## Abstract
We present a measurement of $`t\overline{t}`$ production in $`p\overline{p}`$ collisions at $`\sqrt{s}=1.8`$ TeV from 110 pb<sup>-1</sup> of data collected in the all-jets decay channel with the DØ detector at Fermilab. A neural network analysis yields a cross section of 7.1 $`\pm `$ 2.8 (stat.) $`\pm `$ 1.5 (syst.) pb, at a top quark mass ($`m_t`$) of 172.1 GeV/$`c^2`$. Using previous DØ measurements from dilepton and single lepton channels, the combined DØ result for the $`t\overline{t}`$ production cross section is 5.9 $`\pm `$ 1.2 (stat.) $`\pm `$ 1.1 (syst.) pb for $`m_t`$ = 172.1 GeV/$`c^2`$.
The standard model predicts that, at Tevatron energies, top quarks are produced primarily in $`t\overline{t}`$ pairs, and that each top quark decays into a $`b`$ quark and a $`W`$ boson. 44% of these events are expected to have both $`W`$ bosons decay into quarks. These pure hadronic, or “all-jets”, $`t\overline{t}`$ events are among the rare collider events with several quarks in the final state. With no final state energetic neutrinos, the all-jets mode is the most kinematically constrained of the top quark decay channels, but is also the most challenging to measure due to the large QCD multijet background. This compelled us to use unique tools such as quark/gluon jet differences, and to make extensive use of neural networks, to separate the $`t\overline{t}`$ final states from the QCD background. The comparison of $`t\overline{t}`$ cross sections from the all-jets and lepton + jets channels allows a search for new phenomena in top decays; for example, top decay via a charged Higgs boson could be observed as a deficit, relative to the all-jets final states, in the $`t\overline{t}`$ final states with energetic leptons.
The signal for these all-jets $`t\overline{t}`$ events is at least six reconstructed jets. The main background is from QCD multijet events that arise from a 2$``$2 parton process producing two energetic (“hard”) leading jets and less energetic (“soft”) radiated gluon jets.
The DØ detector is described in Ref. . We used the same reconstruction algorithms for jets, muons, and electrons as those used in previous top quark analyses. The muons in this analysis are used to identify $`b`$ jets, and are restricted to the pseudorapidity range $`|\eta |1.0`$, where $`\eta =\mathrm{tanh}^1(\mathrm{cos}\theta )`$, and $`\theta `$ is the polar angle relative to the beam axis.
The multijet data sample was selected using a hardware trigger and an online filter requiring five jets of cone size $`\mathcal{R}=0.5`$, pseudorapidity $`|\eta |<2.5`$ and transverse energy $`E_T>10.0`$ GeV. Here, $`\mathcal{R}=((\mathrm{\Delta }\varphi )^2+(\mathrm{\Delta }\eta )^2)^{\frac{1}{2}}`$, where $`\varphi `$ is the azimuthal angle around the beam axis. Additionally, we required the total transverse energy of the event ($`H_T`$) to be $`>`$ 115 or 120 GeV (depending on run conditions). The data sample after the initial cuts has approximately 600,000 events. With about 200 expected top events in this channel, the background overwhelms the signal by a factor of roughly 3000. As discrimination based on many variables, most of which are significantly correlated, was needed to separate signal from background, we used neural networks (NN) as an integral part of this analysis.
The offline analysis proceeded by excluding events with an isolated muon or electron to maintain a data sample independent of the other $`t\overline{t}`$ samples. We required events to have at least six $`\mathcal{R}=0.3`$ cone jets and less than nine $`\mathcal{R}=0.5`$ cone jets, with jet $`E_T>8.0`$ GeV. We generally used $`\mathcal{R}=0.3`$ cone jets because of their greater reconstruction efficiency, but used $`\mathcal{R}=0.5`$ cone jets to calculate mass-related variables. We required that at least one jet have an associated muon which satisfied muon quality criteria and which was kinematically consistent with a $`b\to \mu X`$ decay within the jet. As about 20% of $`t\overline{t}`$ all-jets events have such a "$`\mu `$-tagged" jet in the acceptance region for $`t\overline{t}`$ signal, compared to approximately 3% of the QCD multijet background in that region, the tagging requirement reduces the background-to-signal ratio by about an order of magnitude. Of the total 280,000 events surviving the offline cuts, 3853 have at least one $`\mu `$-tagged jet. These tagged events comprise the data sample used to extract the cross section.
Compared with the QCD multijet background, $`t\overline{t}`$ events typically have more energetic jets, have the total energy more uniformly distributed among the jets, are more isotropic, and have their jets distributed at smaller $`\eta `$. To discriminate $`t\overline{t}`$ signal from QCD background, we defined at least two variables describing each of these qualities (total energy, jet energy distribution, event shape, and rapidity distribution):
1. $`H_T`$: The sum of the transverse energies of jets.
2. $`\sqrt{\widehat{s}}`$: The invariant mass of the jets in the final state.
3. $`E_{T_1}`$/$`H_T`$: $`E_{T_1}`$ is the transverse energy of the leading jet.
4. $`H_T^{3j}`$: $`H_T`$ without the transverse energy of the two leading jets.
5. $`N_{\mathrm{jets}}^A`$: The number of jets averaged over a range of $`E_T`$ thresholds (15 to 55 GeV), and weighted by the $`E_T`$ threshold. This parameterizes the number of jets taking their hardness into account.
6. $`E_{T_{5,6}}`$: The square root of the product of the transverse energies of the fifth and sixth jets.
7. $`𝒜`$: The aplanarity, calculated from the normalized momentum tensor.
8. $`𝒮`$: The sphericity, calculated from the normalized momentum tensor.
9. $`𝒞`$: The centrality, $`𝒞=H_T/H_E`$, where $`H_E`$ is the sum of all the jet total energies. This characterizes the transverse energy flow.
10. $`<`$$`\eta ^2`$$`>`$: The $`E_T`$-weighted mean square of the $`\eta `$ distribution of jets in an event.
These ten variables are the inputs to the first neural network (NN1), whose output is used as an input variable for the second neural network (NN2). The three other inputs to NN2 are:
1. $`p_T^\mu `$: The transverse momentum of the tagging muon.
The $`p_T^\mu `$ distribution is harder for tagged jets in $`t\overline{t}`$ events than for tagged jets in QCD multijet events.
2. $`\mathcal{M}`$: The mass-likelihood variable. This variable is defined as $`\mathcal{M}=(M_{W_1}-M_W)^2/\sigma _{m_W}^2+(M_{W_2}-M_W)^2/\sigma _{m_W}^2+(m_{t_1}-m_{t_2})^2/\sigma _{m_t}^2`$, with the parameters $`M_W`$, $`\sigma _{m_W}`$, and $`\sigma _{m_t}`$ set to 80, 16 and 62 GeV/$`c^2`$, respectively. $`M_{W_i}`$ and $`m_{t_i}`$ refer to the jet combinations that best define the $`W`$ boson and top quark masses in an event (an illustrative evaluation of this combinatorial search is sketched after this list).
The mass likelihood variable $`\mathcal{M}`$ is a $`\chi ^2`$-like quantity, minimized when there are two invariant masses consistent with the $`W`$ mass, and two candidate top quark masses that are identical. $`\sigma _{m_W}^2`$ and $`\sigma _{m_t}^2`$ were determined from simple two and three jet combinations using DØ jet resolutions. We did not assume that the muon tagged jet came from a $`b`$ quark.
3. $`\mathcal{F}`$: The jet-width Fisher discriminant. This is defined as $`\mathcal{F}_{\mathrm{jet}}=(\sigma _{\mathrm{jet}}-\sigma _{\mathrm{quark}}(E_T))^2/\sigma _{\mathrm{quark}}^2(E_T)-(\sigma _{\mathrm{jet}}-\sigma _{\mathrm{gluon}}(E_T))^2/\sigma _{\mathrm{gluon}}^2(E_T)`$, where $`\sigma _{\mathrm{quark}}^2(E_T)`$ and $`\sigma _{\mathrm{gluon}}^2(E_T)`$ are mean square jet widths calculated from herwig Monte Carlo, for quarks and gluons respectively, as functions of jet $`E_T`$.
It has been demonstrated that quark jets are, on average, narrower than gluon jets . The Fisher discriminant, based on the $`\eta `$-$`\varphi `$ rms jet widths, is calculated for the four narrowest jets in the event, and indicates whether the jets were most probably “quark-like” ($`t\overline{t}`$) or “gluon-like” (QCD multijet).
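For illustration, the combinatorial evaluation of $`\mathcal{M}`$ can be organized as in the sketch below. This is not the DØ analysis code: the jet four-vectors are randomly generated toys, and the bookkeeping of the jet assignments is a simplified assumption.

```python
import itertools
import numpy as np

M_W, SIG_W, SIG_T = 80.0, 16.0, 62.0        # parameters from the text (GeV/c^2)

def inv_mass(jets):
    # Invariant mass of a set of jet four-vectors (E, px, py, pz).
    tot = np.sum(jets, axis=0)
    return np.sqrt(max(tot[0]**2 - np.sum(tot[1:]**2), 0.0))

def mass_likelihood(jets):
    """chi^2-like variable M minimized over assignments of six jets to (W1, W2, b1, b2)."""
    best = np.inf
    idx = range(6)
    for w1 in itertools.combinations(idx, 2):
        rest1 = [i for i in idx if i not in w1]
        for w2 in itertools.combinations(rest1, 2):
            if w1 > w2:                      # avoid double counting W1 <-> W2
                continue
            b1, b2 = [i for i in rest1 if i not in w2]
            for bb1, bb2 in ((b1, b2), (b2, b1)):
                mw1 = inv_mass(jets[list(w1)])
                mw2 = inv_mass(jets[list(w2)])
                mt1 = inv_mass(jets[list(w1) + [bb1]])
                mt2 = inv_mass(jets[list(w2) + [bb2]])
                chi2 = ((mw1 - M_W)**2 / SIG_W**2 + (mw2 - M_W)**2 / SIG_W**2
                        + (mt1 - mt2)**2 / SIG_T**2)
                best = min(best, chi2)
    return best

rng = np.random.default_rng(5)
p = rng.uniform(20, 120, size=(6, 3))                      # toy massless jets (px, py, pz)
jets = np.hstack([np.linalg.norm(p, axis=1)[:, None], p])  # E = |p| for massless jets
print("M =", mass_likelihood(jets))
```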
Figure 1 shows a comparison of distributions from the modeled background discussed below, the data, and herwig $`t\overline{t}`$ events for four of the above variables.
The top quark production cross section is calculated from the output of NN2. Both networks were trained to force their output near 1 for $`t\overline{t}`$ events, and near 0 for QCD multijet events, using the back-propagation learning algorithm in jetnet .
The very large background-to-signal ratio in the untagged data indicates an almost pure background sample. With a correction for the very small $`t\overline{t}`$ component expected, and with a method of assigning a muon tag to the untagged event, the background estimate can be determined directly from the data. Separate sets of untagged data with added muon tags were used for network background response training and background modeling. herwig $`t\overline{t}`$ events were used for the $`t\overline{t}`$ network signal response training.
The correct assignment of muon tags to the untagged data was critical to our background model. We derived a "tag rate function" from the entire multijet data set, defined as the probability for any individual jet to have a tagging muon. We chose a function that factorized into two pieces: $`ϵ`$, the detector efficiency dependent on $`\eta `$ of the jet and the run number of the event (to account for chamber aging), and $`f(E_T)`$, the probability that a jet of transverse energy $`E_T`$ has a tagging muon. We studied two parametrizations of $`f(E_T)`$, and used the difference to estimate the systematic error from this source. Finally, a small dependence of the tag rate on $`\sqrt{\widehat{s}}`$ of the event was found, which was incorporated into $`f(E_T)`$. A detailed discussion of the tag rate function is given in Ref. .
We established that the $`p_T`$ of the tagging muon and the $`E_T`$ of the tagged jet (uncorrected for the muon and neutrino energy) are uncorrelated. Therefore, the muon $`p_T`$ factors out of the tag rate function, and can be generated independently. By applying the tag rate function to each jet in the untagged data sample, and generating a muon $`p_T`$ for those jets determined as tagged, we produced the background model sample.
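Schematically, the construction of the background model proceeds as in the following sketch. It is purely illustrative: the functional forms of the tag-rate pieces and of the muon $`p_T`$ spectrum written here are invented stand-ins, whereas in the analysis both are measured from the data themselves.

```python
import numpy as np

rng = np.random.default_rng(7)

def tag_probability(jet_et, jet_eta):
    # Hypothetical stand-in for the measured tag-rate function epsilon(eta, run) * f(E_T).
    eps = 0.7 * (np.abs(jet_eta) < 1.0)            # toy detector acceptance/efficiency
    f_et = 0.04 * (1.0 - np.exp(-jet_et / 40.0))   # toy E_T dependence
    return eps * f_et

def make_background_event(jet_et, jet_eta):
    """Assign muon tags to an untagged event and draw a muon pT for each tagged jet."""
    tagged = rng.random(len(jet_et)) < tag_probability(jet_et, jet_eta)
    mu_pt = rng.exponential(scale=6.0, size=tagged.sum()) + 4.0   # toy muon pT spectrum
    return tagged, mu_pt

# Example untagged event with six jets:
et = np.array([95.0, 70.0, 45.0, 30.0, 20.0, 12.0])
eta = np.array([0.2, -0.8, 1.4, 0.5, -1.9, 0.9])
print(make_background_event(et, eta))
```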
The NN2 output distributions for data, modeled background and herwig $`t\overline{t}`$ signal are plotted in Fig. 2. We excluded events in the region of NN2 output $`<`$ 0.02. Jets in that region tend to have low $`E_T`$, where the tag rate is not well determined due to the low tagging probability (low statistics), and consequently, the background modeling may be less accurate. The cross section is obtained from a simultaneous fit of the data to the background and herwig $`t\overline{t}`$ shapes, with the background normalization ($`A_{\mathrm{bkg}}`$) and the $`t\overline{t}`$ cross section ($`\sigma _{t\overline{t}}`$) as free parameters. The result of this fit is also shown in Fig. 2.
The stability of our result can be checked by successively eliminating data points at the lowest values of the NN2 output. Figure 3 shows the values of the background normalization and $`t\overline{t}`$ cross section as the data points are removed and the remaining points are refitted. The refitted cross sections are independent of NN2 output region, confirming that the initial NN2 output cut at 0.02, and choosing the region NN2 output $`>`$ 0.1 for our final cross section calculation, does not bias the result. Because of the preponderance of background at the low end of NN2 and the stability of our fits, we use the region NN2 $`>`$ 0.1 for our quoted cross section results.
Values of the cross section and background normalization are obtained from similar fits with herwig $`t\overline{t}`$ events generated at different top quark masses. The results are shown in Table I. Interpolating to the top quark mass as measured by DØ ($`m_t`$ = 172.1 GeV/$`c^2`$), we obtain $`\sigma _{t\overline{t}}`$ = 7.1 $`\pm `$ 2.8 (stat.) $`\pm `$ 1.5 (syst.) pb, consistent with a previous measurement in this channel , and the most precise value for this channel to date. Table II summarizes the contributions to the systematic error on the cross section. These were determined by varying each source by its uncertainty, and calculating the difference in the cross section.
As a check, we calculated the cross section from the excess events over expected background, using the efficiency of the criteria for $`t\overline{t}`$ selection (calculated using herwig), along with the branching ratio and the measured luminosity. For NN2 $`>`$ 0.85 (chosen to minimize the error on the cross section) we observed 41 events with 24.8 $`\pm `$ 2.4 expected background events for an excess of 16.2 events. The excess corresponds to a $`t\overline{t}`$ cross section of 7.3 $`\pm `$ 3.3 $`\pm `$ 1.6 pb at $`m_t`$ = 172.1 GeV/$`c^2`$, consistent with our result above.
The significance of the excess is characterized by the probability $`P`$ of the observed number of events being due to fluctuation. For an NN2 output threshold of $``$ 0.94, where Monte Carlo studies predict maximal expected significance, we observe 18 events where 6.9 $`\pm `$ 0.9 background events are expected, for which $`P`$ = 0.0006, corresponding to a 3.2 standard deviation effect. This is sufficient to establish the existence of a $`t\overline{t}`$ signal in the all-jets final state.
To further check the validity of the tag rate function and hence the background model, we looked at events with more than one tagged jet. The modeled background here consists of those untagged events that had two jets tagged by application of the tag rate function. We assumed that the fraction of the double-tagged events from correlated sources, such as $`b\overline{b}`$ production, is constant over the NN2 output, but refitted the background normalization for a possible overall correlation. A total of 32 double-tagged events are observed for NN2 output $`>`$ 0.02 where 28.7 $`\pm `$ 8.2 events are expected from background. Two events are observed for NN2 output $`>`$ 0.85 with 0.7 $`\pm `$ 0.1 expected background events, and 1.2 top events expected from Monte Carlo. The small excess in the double-tagged sample is consistent with our conclusion that the more significant excess in the singly-tagged sample is from $`t\overline{t}`$ production.
Previous DØ measurements of $`t\overline{t}`$ production in the dilepton and single lepton channels give an average cross section of 5.6 $`\pm `$ 1.4 (stat.) $`\pm `$ 1.2 (syst.) pb at $`m_t`$=172.1 GeV/$`c^2`$, in very good agreement with that from the all-jets channel. We combine the all-jets cross section with these results, assuming the statistical errors are uncorrelated, and that the systematic errors have the appropriate correlation coefficients. The combined DØ result for the $`t\overline{t}`$ production cross section is 5.9 $`\pm `$ 1.2 (stat.) $`\pm `$ 1.1 (syst.) pb for $`m_t`$=172.1 GeV/$`c^2`$.
We thank the Fermilab and collaborating institution staffs for contributions to this work and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), and CONICET and UBACyT (Argentina).
# Resistive Anomalies at Ferromagnetic Transitions Revisited: the case of SrRuO3
It is generally believed that near a ferromagnetic phase transition the resistivity, $`\rho `$, exhibits an energy-like singularity, so that $`\mathrm{d}\rho /\mathrm{d}T\sim |t|^{-\alpha }`$ with $`\alpha \simeq -0.1`$ the specific heat exponent and $`t=(T-T_c)/T_c`$. In a recent Letter , Klein et al. claimed that in SrRuO$`_3`$, a strongly correlated "bad metal", the resistive anomaly at $`T_c`$ is anomalous: at $`T>T_c`$, $`\mathrm{d}\rho /\mathrm{d}T\sim t^{-x}`$ with $`x\simeq 0.9`$, while for $`T<T_c`$ $`\mathrm{d}\rho /\mathrm{d}T`$ cannot be fit by any power law. Klein et al. obtained exponents by plotting $`\mathrm{ln}|\mathrm{d}\rho /\mathrm{d}T-S_0|`$ vs $`\mathrm{ln}(T/T_c)`$ ($`S_0`$ is a $`T`$-independent background), a procedure shown to be unreliable in many cases .
We show here that the data are consistent with conventional theory, which predicts
$$\mathrm{d}\rho /\mathrm{d}T=(A_\pm /\alpha )|t|^{-\alpha }(1+D_\pm |t|^\theta )+S(t),$$
(1)
where $`A_\pm `$ and $`D_\pm `$ are the amplitudes for the leading singularity and the correction to scaling, $`\alpha `$ and $`\theta `$ are the specific heat and correction-to-scaling exponents, and $`+`$ ($`-`$) refers to $`t>0`$ ($`t<0`$). $`S(t)`$ is a smooth function of $`t`$ which we approximate by $`S(t)=S_0+S_1t`$. Eq. (1) applies within a critical region $`|t|<t_{\mathrm{crit}}`$. The fits to $`\chi `$ in imply $`t_{\mathrm{crit}}\simeq 0.13`$; fitting $`|t|>t_{\mathrm{crit}}`$ requires a crossover theory . Due to rounding of the data very near $`T_c`$, points with $`|t|<t_{\mathrm{round}}`$ have been excluded. Following we allowed for 1% variations in $`T_c`$ about the nominal value of 150 K. The fits depend crucially on $`t_{\mathrm{crit}}`$, $`t_{\mathrm{round}}`$ and $`T_c`$. Fig. 1 shows the data along with fits to Eq. (1) using Heisenberg exponents $`\alpha _H=-0.1`$ and $`\theta _H=0.55`$, and universal amplitude ratios $`(A_+/A_{-})_H=1.52`$ and $`(D_+/D_{-})_H=1.4`$, with and without corrections to scaling, for $`t_{\mathrm{round}}=0.01`$ and parameters given in Table I. The curve with corrections to scaling (i.e. $`D_+\ne 0`$) and $`t_{\mathrm{crit}}=0.1`$ is our best fit
with accepted universal amplitude ratios; varying these gives exact fits over the range $`t\in (0.2,0.5)`$. Using Ising parameters in Eq. (1) produces slightly worse fits (not shown). The results of imply that the $`t>0`$ behavior is also consistent with $`\alpha \simeq 0.9`$, but we believe the consistency with conventional theory renders alternative interpretations implausible.
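A fit of this kind can be organized as in the sketch below (illustrative only: the data here are synthetic, generated from Eq. (1) itself with arbitrary amplitudes, and $`T_c`$, $`t_{\mathrm{crit}}`$ and $`t_{\mathrm{round}}`$ are held fixed rather than varied as in the actual analysis).

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA, THETA = -0.10, 0.55           # Heisenberg exponents used in the text
RATIO_A, RATIO_D = 1.52, 1.4         # universal amplitude ratios (A+/A-, D+/D-)

def drho_dt(t, A_minus, D_minus, S0, S1):
    """Eq. (1) with the amplitude ratios fixed to their universal values."""
    A = np.where(t > 0, RATIO_A * A_minus, A_minus)
    D = np.where(t > 0, RATIO_D * D_minus, D_minus)
    at = np.abs(t)
    return (A / ALPHA) * at**(-ALPHA) * (1.0 + D * at**THETA) + S0 + S1 * t

# Synthetic data standing in for the measured d(rho)/dT:
t = np.concatenate([np.linspace(-0.12, -0.01, 40), np.linspace(0.01, 0.12, 40)])
fake = drho_dt(t, -2.0, 0.5, 1.0, 0.3) + 0.02 * np.random.default_rng(3).standard_normal(t.size)
popt, _ = curve_fit(drho_dt, t, fake, p0=[-1.0, 0.1, 0.0, 0.0])
print("fitted (A-, D-, S0, S1):", popt)
```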
One of us had previously suggested that in systems with very strong carrier-spin interactions, a term in $`\rho (T)`$ proportional to the square of the magnetization, $`M^2`$, could arise at $`T<T_c`$. This suggestion was based on an erroneous interpretation of a spherical model calculation and is here withdrawn. Briefly, any scattering process contributing to $`\rho `$ involves a combination of spin operators at nearby points, and in a ferromagnet these can only involve the specific heat exponent. In the spherical model the specific heat and $`\mathrm{d}M^2/\mathrm{d}T`$ have identical behavior.
We thank L. Klein for data, M. E. Fisher, J. Ye and J. S. Dodge for helpful conversations, J. S. Dodge for pointing out an error in our previous analysis, and NSF - DMR - 9707701 and the Johns Hopkins MRSEC.
# Temperature dependence of the resistivity in the double-exchange model
## Abstract
The resistivity around the ferromagnetic transition temperature in the double-exchange model is studied by the Schwinger boson approach. The spatial spin correlations responsible for the scattering of conduction electrons are taken into account by adopting the memory function formalism. Although the correlation shows a peak at a temperature lower than the transition temperature, the resistivity in the ferromagnetic state monotonically increases with increasing temperature due to a variation of the electronic state of the conduction electrons. In the paramagnetic state, the resistivity is dominated by the short range correlation of the scattering and is almost independent of temperature. This is attributed to a cancellation between the nearest-neighbor spin correlation, the fermion bandwidth, and the fermion kinetic energy. This result implies the importance of the temperature dependence of the electronic states of the conduction electrons, as well as of the localized spins, in both the ferromagnetic and paramagnetic phases.
The recent discovery of colossal magnetoresistance (CMR) has revived interest in perovskite manganites such as La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>. It is widely accepted that significant changes in the transport properties, as well as CMR, are observed around the transition between the ferromagnetic phase and the paramagnetic one. More than 40 years ago, Zener proposed a double-exchange (DE) interaction to explain the correlation between electrical conduction and the ferromagnetism, in which the spin of a conduction electron and a localized core spin ($`\stackrel{}{S}`$) on the same site are strongly coupled by Hund’s rule. Since the hopping amplitude of the electron to the neighboring sites is maximum when the two neighboring core spins are parallel, the ferromagnetic metallic state is achieved by gaining the kinetic energy of the conduction electron. These concepts were settled as the so-called DE model and the magnetic and transport properties in this model have been investigated intensively and extensively.
One of the main interests in this research field is the temperature dependence of the electrical resistivity. The resistivity in the DE model was studied in a mean-field theory by Kubo and Ohata, and similar results have been reproduced by a dynamical mean-field theory by Furukawa. In these calculations, however, the spatial correlation of the core spins is not included properly, although it has been pointed out that it plays a crucial role in the electric transport near the Curie temperature ($`T_c`$). Only the short range spin correlation was considered in Ref. , and the spatial correlation was neglected in Ref. , where the dynamical fluctuation was taken into account. Millis et al. discussed the possibility that the behavior of the resistivity is greatly modified when the spatial correlation of the core spins is properly taken into account. They showed that the resistivity still increases below $`T_c`$ with decreasing temperature. Based on these calculated results, which disagree with the experimental ones, they concluded that additional ingredients, such as the Jahn-Teller effect, are necessary to reproduce the observed behavior.
In this paper, we calculate the temperature dependence of the resistivity in the DE model by the Schwinger boson approach. In order to include the effects of the spatial correlation of the core spins properly, we adopt the memory function formalism which was also used in Ref. . In addition, the temperature dependence of the electronic structure is determined self-consistently together with that of the core spins. The calculated resistivity monotonically decreases with decreasing temperature in the ferromagnetic states and does not show a peak below $`T_c`$, although the spin correlation has its maximum at $`T<T_c`$.
The Hamiltonian of the DE model in the limit of strong Hund's coupling is written in the Schwinger boson representation as follows:
$$\mathcal{H}=-\frac{t}{2S_R}\sum _{\langle ij\rangle \sigma }\left[b_{i\sigma }^{\dagger }b_{j\sigma }f_i^{\dagger }f_j+\text{h.c.}\right]$$
(1)
with the local constraint $`\sum _\sigma b_{i\sigma }^{\dagger }b_{i\sigma }-f_i^{\dagger }f_i=2S`$ at every lattice site $`i`$. Here, $`b_{i\sigma }`$ ($`\sigma =\uparrow ,\downarrow `$) is a boson and $`f_i`$ is a spinless fermion operator, $`S_R=S+(1-x)/2`$, and $`x`$ is the doping concentration of holes ($`\langle f_i^{\dagger }f_i\rangle =1-x`$). We shall exclusively consider the case of $`S=\frac{3}{2}`$. A transition at $`T_c`$ from a ferromagnetic state to a paramagnetic state (described as Bose condensation of Schwinger bosons) was investigated by using a mean field Hamiltonian
$$\mathcal{H}_{MF}=-\frac{Bt}{S_R}\sum _{\langle ij\rangle }\left[f_i^{\dagger }f_j+\text{h.c.}\right]-\frac{Dt}{2S_R}\sum _{\langle ij\rangle \sigma }\left[b_{i\sigma }^{\dagger }b_{j\sigma }+\text{h.c.}\right]$$
(3)
with a global constraint
$$\sum _\sigma \langle b_{i\sigma }^{\dagger }b_{i\sigma }\rangle =2S_R,$$
(4)
where $`B`$ and $`D`$ are given by $`\frac{1}{2}\sum _\sigma \langle b_{i\sigma }^{\dagger }b_{j\sigma }\rangle `$ and $`\langle f_i^{\dagger }f_j\rangle `$, respectively, and both are determined self-consistently. This mean field treatment, however, leads to an additional transition, at a slightly higher temperature than $`T_c`$ (about $`1.4T_c`$), into an artifact state in which $`B=D=0`$. As a result, above $`T_c`$ the fermion bandwidth ($`W_f\equiv 12Bt/S_R`$) rapidly decreases. This decrease obviously causes a spurious, diverging increase of the resistivity.
In the present study, we assume that Eq. (3) itself has a suitable form as a mean field Hamiltonian, but in order to avoid the difficulty mentioned above, the fermion bandwidth $`B`$ is determined as
$`B`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}\sqrt{\langle |{\displaystyle \sum _\sigma }b_{i\sigma }^{\dagger }b_{j\sigma }|^2\rangle }`$ (5)
$`=`$ $`{\displaystyle \frac{S_R}{\sqrt{2}}}\sqrt{1+\left({\displaystyle \frac{1}{2S_R^2}}{\displaystyle \sum _{\sigma \sigma ^{\prime }}}\langle b_{i\sigma }^{\dagger }b_{i\sigma ^{\prime }}b_{j\sigma ^{\prime }}^{\dagger }b_{j\sigma }\rangle -1\right)+{\displaystyle \frac{1}{S_R}}}`$ (6)
$`\approx `$ $`{\displaystyle \frac{S_R}{\sqrt{2}}}\left[{\displaystyle \frac{1}{2}}+{\displaystyle \frac{1}{4S_R^2}}{\displaystyle \sum _{\sigma \sigma ^{\prime }}}\langle b_{i\sigma }^{\dagger }b_{i\sigma ^{\prime }}b_{j\sigma ^{\prime }}^{\dagger }b_{j\sigma }\rangle \right]`$ (7)
together with
$$D\equiv \langle f_i^{\dagger }f_j\rangle $$
(8)
in a self-consistent manner. Here, we ignore the Berry's phase in the electron hopping and use the fact that $`\sum _{\sigma \sigma ^{\prime }}\langle b_{i\sigma }^{\dagger }b_{i\sigma ^{\prime }}b_{j\sigma ^{\prime }}^{\dagger }b_{j\sigma }\rangle \to 2S_R^2`$ in the high-temperature limit. The approximation in Eq. (7) corresponds to an expansion with respect to $`\stackrel{}{S}_i\cdot \stackrel{}{S}_j`$. It should be noted that the fermion bandwidth obtained in Eq. (4) remains finite for $`T\to \infty `$ as expected, while $`\frac{1}{2}\sum _\sigma \langle b_{i\sigma }^{\dagger }b_{j\sigma }\rangle `$, characterizing the nearest-neighbor magnetic correlation (denoted by $`C`$), vanishes for $`T\to \infty `$ in this model. Therefore, the behavior obtained from the above formulas is physically reasonable.
The Curie temperature ($`T_c`$) in this model as a function of the doping concentration is shown in the inset of Fig. 1. Since $`T_c`$ is determined by Eq. (4), the results are essentially independent of the formula for $`B`$ and are the same as those in Ref. . The transition temperature scales with $`D`$, independently of $`x`$, as $`T_c/t\simeq 2.5D`$.
The temperature dependences of the fermion bandwidth normalized by the bare bandwidth ($`W=12t`$) are plotted in Fig. 1. At the transition temperature, the bandwidth does not directly depend on the doping concentration (it depends on $`x`$ only through $`S_R`$). This is because both the chemical potential and the condensate density of bosons are zero, and $`Dt/T_c`$, which is independent of $`x`$, is the only parameter in Eq. (7). Further, $`W_f/W`$ approaches $`1/\sqrt{2}`$ in the infinite-temperature limit. As a result, the behavior of the fermion bandwidth as a function of $`T/T_c`$ is almost universally independent of $`x`$. On the other hand, the behavior of the kinetic energy of the fermion ($`K_f\equiv Bt\langle f_i^{\dagger }f_j\rangle /S_R`$) as a function of $`T/T_c`$ (Fig. 1) changes depending on $`x`$. It is important that the bandwidth varies even in the disordered-spin regime $`T>T_c`$. This behavior agrees well with the result in Ref. . In fact, the bandwidth at $`T_c`$ is 1.16 times larger than that in the $`T\to \infty `$ limit.
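For orientation, the $`1/\sqrt{2}`$ limit quoted above follows directly from Eq. (7) once the high-temperature value of the boson correlator, $`\sum _{\sigma \sigma ^{\prime }}\langle b_{i\sigma }^{\dagger }b_{i\sigma ^{\prime }}b_{j\sigma ^{\prime }}^{\dagger }b_{j\sigma }\rangle \to 2S_R^2`$, is inserted:

$$B\to \frac{S_R}{\sqrt{2}}\left[\frac{1}{2}+\frac{2S_R^2}{4S_R^2}\right]=\frac{S_R}{\sqrt{2}},\frac{W_f}{W}=\frac{12Bt/S_R}{12t}=\frac{B}{S_R}\to \frac{1}{\sqrt{2}}.$$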
The resistivity as a function of the temperature is calculated by the memory function method, where the lowest order fluctuation around the mean field can be included automatically. In this lowest-order perturbational treatment, a static approximation for the Schwinger bosons is appropriate. This is because the bandwidth of the Schwinger bosons is much smaller than that of the fermion ($`D\ll B`$) and the effects of the quantum fluctuations of the bosons are negligible for $`T>0.5T_c`$, where $`Dt/(2S_RT)\ll 1`$. The memory function is evaluated to leading order in $`1/S_R`$, and thus the resistivity is written as
$$\rho =\frac{\hbar ^2a}{2e^2K_f\tau },$$
(9)
with
$`{\displaystyle \frac{1}{\tau }}`$ $`=`$ $`{\displaystyle \frac{\pi t^4}{8\hbar K_fS_R^4}}{\displaystyle \sum _i}\mathrm{\Gamma }(\stackrel{}{R}_i){\displaystyle \frac{1}{N^2}}{\displaystyle \sum _{\stackrel{}{p}_1\stackrel{}{p}_2}}\left(-{\displaystyle \frac{\partial f(\epsilon _{\stackrel{}{p}_1})}{\partial \epsilon _{\stackrel{}{p}_1}}}\right)`$ (10)
$`\times `$ $`\delta (\epsilon _{\stackrel{}{p}_1}-\epsilon _{\stackrel{}{p}_2})(e^{i\kappa _x}-1)(e^{-i\kappa _x}-1)e^{i\stackrel{}{\kappa }\cdot \stackrel{}{R}_i},`$ (11)
where $`\mathrm{\Gamma }(\stackrel{}{R}_i)`$ represents the spatial spin correlation defined as
$$\mathrm{\Gamma }(\stackrel{}{R}_i)\equiv \sum _{\sigma \sigma ^{\prime }\rho \rho ^{\prime }}\langle b_{0\sigma }^{\dagger }b_{x\sigma }b_{x\sigma ^{\prime }}^{\dagger }b_{0\sigma ^{\prime }}b_{i\rho }^{\dagger }b_{i+x\rho }b_{i+x\rho ^{\prime }}^{\dagger }b_{i\rho ^{\prime }}\rangle .$$
(12)
Here, $`f(\epsilon _\stackrel{}{p})`$ is the Fermi distribution function of the spinless fermion, $`\stackrel{}{\kappa }=\stackrel{}{p}_1-\stackrel{}{p}_2`$ is the momentum transfer of the fermion due to scattering, and the average $`\langle \cdots \rangle `$ is evaluated with the mean field Hamiltonian in Eq. (3). The same expression, written in terms of spin variables, has been derived in Ref. . It should be noted that in this model the actual value of the resistivity in units of $`ha/e^2`$ does not depend on that of $`t`$. This is because $`\epsilon _\stackrel{}{p}`$ and $`K_f`$ are scaled by $`t`$, and $`\tau `$ is scaled by $`1/t`$; thus, $`K_f\tau `$ becomes independent of $`t`$.
Results for the calculated resistivity ($`\rho `$) are shown in Fig. 2 as a function of the temperature ($`T`$) for several doping concentrations. The resistivity monotonically increases with increasing $`T`$ in the ferromagnetic state for all doping concentrations. In the paramagnetic state, the resistivity still increases weakly, i.e., is metallic, for $`x<0.2`$. For $`x=0.2`$ to 0.25, the $`T`$-dependence almost vanishes for $`T>1.2T_c`$. For still higher doping concentrations, $`x>0.25`$, the resistivity comes to increase weakly again. On the other hand, the corresponding inverse relaxation time ($`1/\tau `$) shown in the inset of the figure shows a different $`T`$-dependence: it decreases in the paramagnetic state for all cases of $`x`$. This difference clearly indicates the importance of the $`T`$-dependence of $`K_f`$, since $`\rho \propto 1/(K_f\tau )`$.
These behaviors in the paramagnetic state can be understood in terms of Fisher and Langer’s scheme, although it was originally proposed to analyze the electron transport in the transition metal ferromagnets. The inverse relaxation time in the DE model can be rewritten as
$$\frac{1}{\tau }\propto \frac{D(E_F)^2}{K_f}\sum _i\mathrm{\Gamma }(\stackrel{}{R}_i)f(\stackrel{}{R}_i),$$
(13)
where $`f(\stackrel{}{R}_i)`$ is a decaying oscillatory function, and $`D(E_F)`$ is the density of states at the Fermi level. In the case of the transition metal ferromagnets, $`\mathrm{\Gamma }(0)`$ does not depend on $`T`$ in the paramagnetic state; thus, the temperature dependent part of the resistivity is wholly determined by $`\mathrm{\Gamma }(\stackrel{}{R}_i)`$ for $`\stackrel{}{R}_i\ne 0`$. In the case of the DE model, however, the term with $`\stackrel{}{R}_i=0`$ depends on $`T`$ through $`C`$ as
$$\mathrm{\Gamma }(0)\simeq 8S_R^2(7C^2+S_R^2)\text{ for }S_R\gg 1,$$
(14)
where $`C`$ is the correlation function between nearest neighboring spins. It gives the dominant contribution to $`1/\tau `$. Further, $`D(E_F)`$ ($`\propto 1/W_f`$) and $`K_f`$ also depend on $`T`$; therefore, the temperature dependences of $`1/\tau `$ and $`\rho `$ are given by
$$\frac{1}{\tau }\propto \frac{7C^2+S_R^2}{W_f^2K_f},$$
(15)
$$\rho \propto \frac{7C^2+S_R^2}{W_f^2K_f^2}.$$
(16)
These quantities for $`x=0.3`$ are plotted in Fig. 3 and roughly agree with the calculated results shown in Fig. 2. The rounding of the calculated $`\rho `$ very close to $`T_c`$, however, must come from $`\mathrm{\Gamma }(\stackrel{}{R}_i)`$ at $`\stackrel{}{R}_i\ne 0`$.
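For a quick numerical look at the cancellation discussed below, the trends of Eqs. (15) and (16) only require the self-consistent quantities $`C`$, $`W_f`$ and $`K_f`$ on a temperature grid. The short sketch below assumes such arrays are available from the mean-field solution (they are not reproduced here) and returns the two trends up to constant prefactors.

```python
import numpy as np

def inv_tau_trend(C, W_f, K_f, S_R):
    """Temperature trend of 1/tau from Eq. (15), up to a constant prefactor."""
    return (7.0 * np.asarray(C) ** 2 + S_R ** 2) / (np.asarray(W_f) ** 2 * np.asarray(K_f))

def rho_trend(C, W_f, K_f, S_R):
    """Temperature trend of rho from Eq. (16), up to a constant prefactor."""
    return (7.0 * np.asarray(C) ** 2 + S_R ** 2) / (np.asarray(W_f) ** 2 * np.asarray(K_f) ** 2)
```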
It is worth noting that the calculated temperature dependence of the resistivity differs from that in Ref. , although the memory function formalism is adopted in both cases. The discrepancy is attributed to the fact that in the present calculation the temperature dependence of the electronic structure, that is, of $`W_f`$ and $`K_f`$ (shown in Fig. 1), is taken into account, as well as that of $`\sum _i\mathrm{\Gamma }(\stackrel{}{R}_i)f(\stackrel{}{R}_i)`$. In fact, $`\sum _i\mathrm{\Gamma }(\stackrel{}{R}_i)f(\stackrel{}{R}_i)`$ plotted in Fig. 4 shows a peak at $`T<T_c`$ which is smeared out in the resistivity due to the variation of the electronic structure. The peak structure originates from the process in which bosons in the condensate part are scattered to the non-condensate part, and vice versa. It should be noted that, since the number of condensate bosons is macroscopically large, the lowest order perturbational treatment of such scattering processes might overestimate the scattering amplitude, and the resistivity might become slightly smaller than the present results once higher order perturbations are included. In any case, the resistivity is unlikely to show a peak in the ferromagnetic state.
On the other hand, our results are similar to the previous results by Kubo and Ohata except for the discontinuity in $`d\rho /dT`$ at $`T_c`$. In Ref. , only the shortest-range correlation of scattering ($`\stackrel{}{R}_i=0`$) is included, and this is a suitable approximation as discussed above. It should be noted, however, that the physical mechanism leading to the weak temperature dependence of the resistivity in the paramagnetic state is completely different. In our calculation the mechanism results from a cancellation between the bandwidth ($`W_f`$), the kinetic energy ($`K_f`$), and the nearest-neighbor spin correlation ($`C`$) in the somewhat complicated manner of Eq. (16). On the other hand, these quantities used in Ref. are temperature independent in the paramagnetic state. Further, the discontinuity in $`d\rho /dT`$ at $`T_c`$ comes from the sudden change of $`W_f`$, which is absent in our results since the contribution from $`\mathrm{\Gamma }(\stackrel{}{R}_i)`$ with $`R_i\ne 0`$ is not neglected near $`T_c`$ and $`W_f`$ is a smooth function of $`T`$. Although our results are also similar to the results by Furukawa, this is likely to be just a coincidence, because there the spatial correlation of core spins between different sites is neglected and the scattering processes responsible for the resistivity are different. It should be noted further that our results qualitatively agree with recent Monte Carlo results. For $`a=4\AA `$, at $`T_c`$, we obtain $`\rho \simeq 2\times 10^{-3}`$ $`\mathrm{\Omega }`$cm, whose order of magnitude also agrees with these results.
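The conversion between the dimensionless resistivity and the value quoted above is a simple unit exercise; the snippet below assumes the lattice constant $`a=4\AA `$ used in the text and the quantum of resistance $`h/e^2\approx 25813\mathrm{\Omega }`$.

```python
# Convert a resistivity given in units of h*a/e^2 to Ohm*cm, for a = 4 Angstrom.
H_OVER_E2 = 25812.807   # Ohm (von Klitzing constant)
A_CM = 4.0e-8           # 4 Angstrom expressed in cm

def rho_ohm_cm(rho_in_ha_e2):
    return rho_in_ha_e2 * H_OVER_E2 * A_CM

# A dimensionless value of about 2 gives roughly 2e-3 Ohm*cm, the order quoted at T_c.
```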
To conclude, using the Schwinger boson approach, we have calculated the resistivity in the double-exchange model. In this approach, the fermion bandwidth has been determined by the absolute value of the hopping amplitude giving a physically reasonable temperature dependence in contrast to the conventional Schwinger boson approach. The resistivity monotonically increases with increasing temperature in the ferromagnetic state, which is different from the previous results in Ref. . In the paramagnetic state, the resistivity is dominated by the short range correlation of scattering. Although the behavior slightly changes depending on the doping concentration, the temperature dependence almost vanishes due to a cancellation between the nearest-neighbor magnetic correlation, the fermion bandwidth, and the fermion kinetic energy. These results imply the importance of the temperature dependence of the electronic structure of the conduction electron in the both ferromagnetic and paramagnetic phases.
The results agree with the experiments for “higher” doping concentrations $`x\gtrsim 0.3`$, where the experimentally observed resistivity is relatively low overall ($`\rho \simeq 5\times 10^{-3}`$ $`\mathrm{\Omega }`$cm at $`T_c`$) and shows a metallic behavior in the paramagnetic state. For lower concentrations $`x\lesssim 0.2`$, however, the experimentally observed singular behavior around the transition temperature and the insulating behavior in the paramagnetic state are not reproduced in our calculations. Other effects might play a crucial role, in cooperation with the double exchange mechanism, for systems with such lower concentrations.
One of the authors (Ishizaka) would like to thank T. Hiroshima for helpful discussions.
## 1 INTRODUCTION
Our derivation of the free space Maxwell equations using the discrete ordered calculus (DOC) mentioned that the postulated commutation relations between position and velocity could be interpreted as a consequence of a fixed discrepancy between first measuring position and then velocity or visa versa. However, these commutation relations were not given a careful physical justification in terms of our finite measurement accuracy philosophy . A second deficiency, which in fact caused us to warn the reader that we had only derived one part of the formalism of classical electrodynamics rather than the theory itself, was that no attempt was made to identify the sources and sinks of the “fields” and derive the inhomogeneous Maxwell equations from them. We took a step in that direction by our derivation of a finite and discrete version of the 1+1 free space Dirac equation from a fixed step-length Zitterbewegung postulate using finite difference equations. Although it was noted that an attempt had been made by me to attribute the Zitterbewegung to the conservation of spin or particle number in the presence of random electromagnetic fluctuations, no attempt was made to relate these interactions to the source terms needed to complete the argument in the Maxwell equations paper. Neither Kauffman nor I have attempted to relate the non-commutativity known to arise from the Dirac equation to the commutation relations needed to derive the Maxwell equations in our finite and discrete context. In this paper I take a few steps to remedy both defects, but more work is needed.
## 2 ELECTROMAGNETIC MEASUREMENT OF A CHARGED PARTICLE TRAJECTORY
In earlier work I have made use of what I called “the counter paradigm” to cut the Gordian knot of specifying what a physicist means when he says that a particle was or was not present in a finite spacial volume for a finite time duration. As a first approximation, I assume that this volume is the “sensitive volume” of a counter, and the time duration is the time during which the recording device attached to the counter could have recorded an event, often called a “firing”. This I call a NO-YES event, depending on whether the counter did not or did “fire”. A more careful treatment specifies the probability of “spurious events”, i.e. cases when the counter “should have fired” but did not (counter inefficiency), and the probability of cases when the counter “should not have fired”, but according to the record did in fact fire (background events). Ted Bastin has often objected that this abrupt transition from the laboratory to Boolean logic sweeps too much under the rug, and I have often replied that to justify this way of talking about laboratory practice would require a book. Fortunately, Peter Galison has taken ten years to write the book I needed. He separates the history of the material culture of particle physics into a “logic” tradition contrasted with an “image” tradition. My “counter paradigm” finds its appropriate niche as part of the logic tradition. Galison shows that by now the two alternatives have fused in the mammoth “detectors” which are integrated into the accelerators in all high energy particle physics laboratories . It took over a century for this language and practice to mature, and a decade to make a convincing argument as to why it should be accepted by philosophers. I now have a simple tactic open. I can ask any critic of my conceptual leap from counter firings to NO-YES events to first convince me that Galison’s defense of the mainstream tradition is inadequate. Only then will I feel any need to take his or her criticism seriously.
This ploy allows me to use conventional language in my descriptions of laboratory measurement. In particular I can now construct a simple paradigm for what I mean by the measurement of the electromagnetic trajectory of a particle. First recall that by a “particle” I mean “a conceptual carrier of conserved quantum numbers between events”. I can take the simplest interpretation of two sequential counter firings a fixed distance $`L`$ apart with a time interval $`T`$ between them to be that a particle conserving mass, momentum and energy passed between them with velocity $`L/T`$. I assume available a “source” of particles which allows a large number of repetitions of these paired sequential events to occur. This data set is assumed to provide both statistical and systematic accuracy adequate for calibrating the changes in the magnitude and/or direction of this velocity caused by inserting electromagnetic devices into the path defined by sequential counter firings
The electromagnetic device we consider first, inserted between two counters previously used to measure velocity, is simply two parallel conducting plates with a hole through them across which a constant voltage can be applied. This voltage is measured by standard techniques. When the voltage is negligible, our original source and sink counters still give a velocity $`v=L/T`$ for each particle “passing through the two holes”, showing that we can maintain the same particulate interpretation of the two sequential events with the plates in place, even though we do not “measure” the presence of the particles between the plates. We now apply a voltage $`V`$ across the plates, which are large enough compared to the holes so that, according to standard electrostatic theory, the electric field between the plates and along the direction of motion of the particle is $`=V/\mathrm{\Delta }d`$ where $`\mathrm{\Delta }d`$ is the separation between the plates. We now study the change in the velocity of a particle of the type being studied (i.e. produced in the same way or available from the same source) during a time when the voltage across the plates is held at $`V`$. Counter firings before the presumed arrival and after the presumed departure of the particle at the device allow us to say that the particle arrived at the position of the plates with velocity $`v_1`$ and left with velocity $`v_2`$. We then say that the particles have a charge $`e`$, a (rest) mass $`m`$, an energy $`E_1`$ before they enter the first hole, and an energy $`E_2`$ after leaving the second hole when, for various experiments, the velocity change produced by the device is equivalent to an energy change
$$\mathrm{\Delta }E=E_2-E_1=\pm e\mathcal{E}\mathrm{\Delta }d;\mathcal{E}=V/\mathrm{\Delta }d$$
(1)
with
$$E_1=\frac{m}{\sqrt{1-(v_1^2/c^2)}};E_2=\frac{m}{\sqrt{1-(v_2^2/c^2)}}$$
(2)
We then take this as our paradigm for the measurement of an electric field in a region of length $`\mathrm{\Delta }d`$ of strength $`\mathcal{E}`$.
We emphasize that this measurement requires a change in the velocity of the particle. The minimum change to which we can reliably assign a number quantizes our measurement accuracy at the level of technology we are using. Note that our paradigm assumes constant velocity between measurements in field-free regions. \[Recall that we derived a discrete version of the constant velocity law from bit-string physics in our foundational paper, Sec. 6.5, pp 94-95.\] Alternatively, if we know the field (or voltage) and the (constant velocity) trajectories before and after the device, we can use the same device as a paradigm for position measurement to an accuracy $`\mathrm{\Delta }d`$. By fleshing out this paradigm, we can recursively use electromagnetic language to justify the construction of laboratory counters which have a conceptual connection to those used in our counter paradigm.
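As a concrete illustration of this paradigm, the field strength can be inferred from the measured entry and exit velocities. The sketch below is only illustrative; it works in SI units with the relativistic energy $`mc^2/\sqrt{1-v^2/c^2}`$, and the particle parameters are left to the user.

```python
# Infer the field strength E = Delta_E / (q * dd) from the velocity change of a
# charge q of mass m crossing a gap of width dd (SI units).  A sketch only.
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def inferred_field(m, q, v1, v2, dd):
    delta_E = m * C ** 2 * (gamma(v2) - gamma(v1))   # relativistic energy change
    return delta_E / (q * dd)
```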
Our paradigm for magnetic field (or momentum) measurement assumes that we have two double plates across each of which independently adjustable voltages can be applied. We call the entrance hole of the first pair 1 and the exit hole 2, and for the second plate the entrance hole 3 and the exit hole 4; thus the gaps are $`d_{12}`$ and $`d_{34}`$, and the trajectory is 1,2,3,4. The plates are located geometrically in the laboratory in such a way that a path connecting the exit hole 2 from the first pair to the entrance hole 3 into the other can be an arc of a circle of radius R whose center lies in a plane with the two gaps; the gaps between the plates are two (short) arcs of that circle. The arc between the two devices is of length $`R\mathrm{\Delta }\theta `$. The magnetic field we wish to measure is perpendicular to the plane of the circle and is of constant strength $``$, along this arc. This is “guaranteed” by the geometry and the standard theory of magnetostatic fields. According to electromagnetic theory, this field does not change the energy of the particle, or the magnitude of its velocity, but does cause the direction of the velocity to change. This change is simply described in terms of the momentum $`𝐏`$ of vector magnitude
$$𝐏=\frac{m𝐯}{\sqrt{1-(v^2/c^2)}};|𝐯|=\frac{R\mathrm{\Delta }\theta }{t_3-t_2}$$
(3)
where the time $`t_2`$ when the particle exits hole 2 and the time $`t_3`$ when it enters hole 3 are usually inferred rather than directly measured; v is the vector velocity of constant magnitude with a (varying) direction assumed tangent to the arc. The radius of the circle is related to the magnitude of the momentum by
$$R=\frac{cP}{e\mathcal{B}}$$
(4)
and the change in momentum (due to change in direction since the magnitude is constant) by
$$\mathrm{\Delta }P=2P\mathrm{sin}^2(\mathrm{\Delta }\theta /2)=P(1-\mathrm{cos}\mathrm{\Delta }\theta )$$
(5)
As in the case of electric field measurement, we can consider this arrangement either as a measurement of the field $`\mathcal{B}`$ at (perpendicular to) the geometrically defined arc $`R\mathrm{\Delta }\theta `$ or as a measurement of the velocity of the particle along that arc. But as a velocity measurement, it is important to realize that there is an ambiguity as to whether this is a measurement of velocity after the particle has traversed the first double plate 12, which could be a counter measuring position, or a measurement of velocity before it traverses the second double plate 34.
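A corresponding sketch for the magnetic paradigm: given the bending radius and the field, the momentum follows from the relation above (written here in SI form, $`P=qBR`$, rather than the Gaussian form used in the text), and Eq. (5) gives the momentum change over the arc.

```python
# Momentum from magnetic bending (SI units: p = q*B*R) and the longitudinal
# momentum change over an arc of opening angle dtheta, Eq. (5).  A sketch only.
import math

def momentum_from_bending(q, B, R):
    return q * B * R

def momentum_change(P, dtheta):
    return P * (1.0 - math.cos(dtheta))   # equals 2*P*sin^2(dtheta/2)
```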
If all we have available are not individual particle detectors, but only devices that measure the charged current flowing along the trajectory, the arrangement discussed above can only be used to measure $`e/m`$ and not charge and mass separately. Such experiments were, historically, sufficient to convince the proponents of various models of the charge distribution “within the electron” (Abraham, Lorentz, Poincaré) that their models were wrong, and that the Einstein equation connecting mass to velocity used above was correct even though it violated their way of thinking about space and time (, Sec. 9.6, pp 810-816). Galison shows by this historically examined case that experimental tradition and the material culture of physics allow theoretical physicists on opposite sides of what Kuhn would call a “paradigm shift” to agree on the significance of experimental results..
The fact that electric and magnetic fields acting on a moving charge effect changes in velocity along or at right angles to the direction of motion, respectively, allows one to build a “velocity selector” by setting up a region of electrostatic and magnetostatic fields in which the fields are at right angles to each other and both are at right angles to the direction of motion of the charge. The force on the charge due to the electric field is $`e\mathcal{E}`$ while the force due to the magnetic field is $`ev\mathcal{B}/c`$, and the geometry we have specified requires these forces to lie along the same line. Consequently there is a unique direction for which they cancel, provided the velocity has magnitude $`v=c\mathcal{E}/\mathcal{B}`$. A particle of that charge with any other velocity will be deflected away from this direction.
At first glance, such a device would seem to allow us to measure position and velocity “simultaneously”. But this is not correct. So long as the charged particle has this velocity and the magnitude and direction of the fields does not change along this straight line trajectory, no force acts and the particle maintains constant velocity. However, we have no way of knowing where it is within this region, and hence when it enters and leaves it, without a measurement. But this measurement will change the velocity. So we must measure when the particle enters the region and when it leaves the region in order to know how long and when it is in the region with that velocity. As before, we can first measure position and then velocity or first measure velocity, and then position but not both simultaneously. An extended discussion of this case should allow us to see that three points on the trajectory are needed to establish the field at the intermediate point, and four if we are to measure both $`𝐄`$ and $`𝐁`$. On another occasion we hope to be able to go on to derive the free field commutation relations by such considerations (or directly from our DOC equations), and not just the uncertainty principle restrictions obtained by Bohr and Rosenfeld.
In closing we note that, even though we started out to devise a paradigm for electromagnetic field measurement, we have ended up deriving from this paradigm the DOC postulate that we can first measure position and then velocity or first measure velocity and then position, but not both simultaneously. We hope that this discussion makes it less of a mystery why the DOC postulate leads so directly to the formalism of free-field electromagnetism.
## 3 FROM FREE DIRAC PARTICLES TO FIELD SOURCES AND SINKS
The derivation of the finite difference version of the free particle Dirac equation for fixed step length $`\hbar /mc`$ with step velocity $`\pm c`$ tells us immediately that we can cut the trajectory of a free particle into segments of constant velocity between “points” at which the velocity changes discontinuously. On the other hand our DOC equations for the free space electromagnetic field support solutions corresponding to the propagation of crossed electromagnetic fields with velocity $`c`$ and constant frequency which, for finite segments, can be interpreted as “photons” if they have the right amplitude. All we need to produce a quantum electrodynamics which is finite and discrete, and hence “born renormalized”, would seem to be to assign a charge to the massive particle which satisfies the Dirac equation in such a way that its discrete changes in velocity correspond to the emission or absorption of such photons. I hope to do this on another occasion. The details will obviously take some time to work out, but will provide a lot of fun along the way.
Since this amounts to solving a finite and discrete “three particle problem”, an approach to the same theory which starts more directly from bit-string physics would be to treat the photon as a bound state of a particle-antiparticle pair in the relativistic three body theory now under active development .
# On the reaction field for interaction site models of polar systems
## 1 Introduction
The calculation of dielectric quantities by computer experiment requires an explicit consideration of effects associated with the truncation of long-range interactions. Concrete success in this direction has been achieved within the reaction field (RF) geometry \[1–5\]. As a result, computer adapted dielectric theories have been proposed \[6–10\]. In the framework of these theories, a bulk dielectric constant can be determined on the basis of a fluctuation formula via correlations obtained in simulations for finite samples. However, the main attention in previous investigations has been focused on polar systems with the point dipole interaction. As is now well established, the model of point dipoles cannot adequately reproduce the features of real polar liquids.
At the same time, attempts to apply the RF geometry to more realistic interaction site (IS) models have also been made \[11–13\]. However, these works acted within a semiphenomenological approach, and it was not understood how to perform the truncation of the intermolecular potentials. As a consequence, the molecular cut-off and the usual point dipole RF (PDRF) have been assumed. Obviously, such an approach includes effects connected with the finiteness of the molecule inconsistently. Indeed, the interdipolar potential is replaced by site-site Coulomb interactions, whereas the RF is retained in its usual form. An additional complication for IS models is the spatial distribution of charges, which is not taken into account by the standard PDRF geometry.
In the present paper we propose two alternative approaches to remedy this situation. The first one follows from the usual fluctuation formula which is constructed, however, on the microscopic operator of polarization density for IS models. This leads to an ISRF geometry, where the cut-off radius is applied with respect to individual charges rather than to the molecule as a whole. Nevertheless, the molecular cut-off scheme can also be acceptable, but the reaction field together with the fluctuation formula need to be corrected. In the second approach a molecular RF (MRF) geometry is proposed and a new quadrupole term is identified. On the basis of a MCY water model we show that uncertainties of the dielectric quantities can be significant if the standard PDRF geometry is used in computer simulations.
## 2 Interaction site reaction field
We consider an isotropic, classical system of $`N`$ identical molecules enclosed in volume $`V`$. The microscopic electrostatic field created by the molecules at point $`𝒓\in V`$ is equal to
$$\widehat{𝑬}(𝒓)=\sum _{i=1}^N\sum _aq_a\frac{𝒓-𝒓_i^a}{|𝒓-𝒓_i^a|^3}=\int _V𝑳(𝒓-𝒓^{\prime })\widehat{Q}(𝒓^{\prime })d𝒓^{\prime },$$
(1)
where $`𝒓_i^a`$ denotes the position of charge $`q_a`$ of the $`i`$th molecule, $`\widehat{Q}(𝒓)=\sum _{i,a}q_a\delta (𝒓-𝒓_i^a)`$ is the microscopic operator of charge density, $`𝑳(𝝆)=-\mathbf{\nabla }(1/\rho )`$, and the summation extends over all molecules and charged sites. For the investigation of dielectric properties, it is more convenient to rewrite the electric field (1) in the polarization representation
$$\widehat{𝑬}(𝒓)=\underset{V}{}𝐓(𝒓𝒓^{})\widehat{𝑷}(𝒓^{})d𝒓^{}=\frac{4\pi }{3}\widehat{𝑷}(𝒓)+\underset{\rho +0}{lim}\underset{\rho <|𝒓𝒓^{}|}{\underset{V}{}}𝐓(𝒓𝒓^{})\widehat{𝑷}(𝒓^{})\mathrm{d}𝒓^{}.$$
(2)
Here $`𝐓(𝝆)=\mathbf{\nabla }\mathbf{\nabla }(1/\rho )`$ is the dipole-dipole tensor, $`\widehat{𝑷}(𝒓)`$ denotes the microscopic operator of polarization density, defined as $`-\mathbf{\nabla }\cdot \widehat{𝑷}(𝒓)=\widehat{Q}(𝒓)`$, and the singularity $`lim_{\rho \to 0}𝐓(𝝆)=-\frac{4\pi }{3}\delta (𝝆)𝐈`$ has been avoided, where $`𝐈`$ is the unit tensor of the second rank. Both the charge (1) and polarization (2) representations are equivalent and applicable for infinite ($`N,V\to \infty `$) systems.
In simulations, which deal with finite samples, the sum (1) cannot be calculated exactly, since it involves an infinitely large number of terms. Therefore, we must restrict ourselves to a finite set of terms in (1) or to a finite range of the integration in (1) and (2) for which $`|𝒓-𝒓^{\prime }|\le R`$, where $`R`$ is a cut-off radius. Now the following problem appears: how to estimate the cut-off field caused by the integration over the inaccessible region $`|𝒓-𝒓^{\prime }|>R`$? The solution of this problem was first found for systems with point dipoles in the RF geometry. The result for conducting boundary conditions is
$$\widehat{𝑬}(𝒓)\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)=\frac{4\pi }{3}\widehat{𝑷}(𝒓)+\underset{\rho +0}{lim}\underset{\rho <|𝒓𝒓^{}|R}{\underset{V,\mathrm{tbc}}{}}\left(𝐓(𝒓𝒓^{})+\frac{𝐈}{R^3}\right)\widehat{𝑷}(𝒓^{})\mathrm{d}𝒓^{},$$
(3)
where a cubic finite sample and toroidal boundary conditions (TBC) have been used, so that $`R\sqrt[3]{V}/2`$. The additional term $`𝐈/R^3`$ in the right-hand site of (3) describes the RF which is used for an approximation of the real cut-off field. For a pure spherical cut-off (SC) without the RF correction, we have $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{SC}}(𝒓)={\displaystyle \gamma (|𝒓𝒓^{}|)𝑳(𝒓𝒓^{})\widehat{Q}(𝒓^{})d𝒓^{}}`$, where $`\gamma (\rho )=1`$ if $`\rho R`$ and $`\gamma (\rho )=0`$ otherwise. Obviously, that $`lim_R\mathrm{}\widehat{𝑬}_\stackrel{}{R}^{\mathrm{SC}}(𝒓)=lim_R\mathrm{}\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)=\widehat{𝑬}(𝒓)`$.
Let us perform the spatial Fourier transform $`(𝒌)=d𝒓\text{e}^{\mathrm{i}𝒌\mathbf{}𝒓}(𝒓)`$ for arbitrary functions $``$. Then one obtains
$$\widehat{𝑬}_\stackrel{}{R}^{\mathrm{SC}}(𝒌)=𝑳(𝒌)\widehat{Q}(𝒌),\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)=\frac{4\pi }{3}\widehat{𝑷}(𝒌)+\left(𝐓(𝒌)+4\pi \frac{j_\stackrel{}{1}(kR)}{kR}𝐈\right)\widehat{𝑷}(𝒌),$$
(4)
where
$$𝑳(𝒌)=4\pi \left(1j_\stackrel{}{0}(kR)\right)\frac{\mathrm{i}𝒌}{k^2},𝐓(𝒌)=\frac{4\pi }{3}\left(13\frac{j_\stackrel{}{1}(kR)}{kR}\right)\left(3\widehat{𝒌}\widehat{𝒌}𝐈\right),$$
(5)
$`\widehat{Q}(𝒌)=\sum _{i,a}q_a\text{e}^{\mathrm{i}𝒌\cdot 𝒓_i^a}=\mathrm{i}𝒌\cdot \widehat{𝑷}(𝒌)`$, $`𝒌=2\pi 𝒏/\sqrt[3]{V}`$ is one of the allowed wavevectors of the reciprocal lattice, $`𝒏`$ designates a vector with integer components, $`k=|𝒌|`$, $`\widehat{𝒌}=𝒌/k`$ and $`j_0(z)=\mathrm{sin}(z)/z`$, $`j_1(z)=-\mathrm{cos}(z)/z+\mathrm{sin}(z)/z^2`$ are the spherical Bessel functions of zeroth and first order, respectively. In view of (5), the relations (4) transform into
$$\widehat{𝑬}_\stackrel{}{R}^{\mathrm{SC}}(𝒌)=4\pi \left(1j_\stackrel{}{0}(kR)\right)\widehat{𝑷}_\mathrm{L}(𝒌),\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)=4\pi \left(13\frac{j_\stackrel{}{1}(kR)}{kR}\right)\widehat{𝑷}_\mathrm{L}(𝒌),$$
(6)
where $`\widehat{𝑷}_\mathrm{L}(𝒌)=\widehat{𝒌}\widehat{𝒌}\mathbf{}\widehat{𝑷}(𝒌)=\mathrm{i}𝒌\widehat{Q}(𝒌)/k^2`$ is the longitudinal component of the microscopic operator of polarization density.
It is easy to see from (6) that both functions $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{SC}}(𝒌)`$ and $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)`$ tend to the same value $`\widehat{𝑬}(𝒌)=-4\pi \widehat{𝑷}_\mathrm{L}(𝒌)`$ of the infinite system at $`R\to \infty `$ ($`k\ne 0`$). However, the results converge as $`R^{-1}`$ for the pure SC scheme, while as $`R^{-2}`$ in the RF geometry, i.e., more quickly, because a main part of the truncation effects is taken into account by the RF. This is very important in our case, where we hope to reproduce features of infinite systems on the basis of finite samples. That is why the pure truncation, which is standard for simple fluids with short-range potentials, is generally not recommended for polar systems with the long-range nature of the dipolar interaction. The influence of the TBC and the difference between micro- and canonical ensembles are of order $`N^{-1}\sim R^{-3}`$ and, therefore, they can be excluded from our consideration. It is worth mentioning that electrostatic fields are purely longitudinal. They can be defined via the longitudinal component of the microscopic operator of polarization density, which is confirmed by Eq. (6).
Let us enclose the system in an external electrostatic field $`𝑬_\stackrel{}{0}(𝒓)`$. The material relation between the macroscopic polarization $`𝑷_\mathrm{L}(𝒌)=\widehat{𝑷}_\mathrm{L}(𝒌)`$ in the weak external field and total macroscopic field is $`4\pi 𝑷_\mathrm{L}(𝒌)=\left(\epsilon _\stackrel{}{\mathrm{L}}(k)1\right)𝑬_\mathrm{L}(𝒌)`$, where $`\epsilon _\stackrel{}{\mathrm{L}}(k)`$ denotes the longitudinal wavevector-dependent dielectric constant. Applying the first-order perturbation theory with respect to $`𝑬_\stackrel{}{0}`$ yields for rigid molecules $`Vk_\mathrm{B}T𝑷_\mathrm{L}(𝒌)=\widehat{𝑷}_\mathrm{L}(𝒌)\mathbf{}\widehat{𝑷}_\mathrm{L}(𝒌)_\stackrel{}{0}𝑬_\stackrel{}{0}(𝒌)`$, where $`k_\mathrm{B}`$ and $`T`$ are Boltzmann’s constant and temperature, respectively, and $`\mathrm{}_\stackrel{}{0}`$ is the equilibrium average in the absence of the external field. Then, taking into account that $`𝑬_\mathrm{L}(𝒌)=𝑬_\stackrel{}{0}(𝒌)+\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)`$ and eliminating $`𝑬_\stackrel{}{0}(𝒌)`$, we obtain the fluctuation formula
$$\frac{\epsilon _\mathrm{L}(k)-1}{\epsilon _\mathrm{L}(k)}=\frac{9yG_\mathrm{L}(k)}{1+27yG_\mathrm{L}(k)j_1(kR)/(kR)}=9yg_\mathrm{L}(k).$$
(7)
Here $`G_\mathrm{L}(k)=\langle \widehat{𝑷}_\mathrm{L}(𝒌)\cdot \widehat{𝑷}_\mathrm{L}(-𝒌)\rangle _0/(N\mu ^2)`$ is the longitudinal component of the finite-system wavevector-dependent Kirkwood factor, $`y=4\pi N\mu ^2/(9Vk_\mathrm{B}T)`$ and $`\mu =|𝝁_i|=|\sum _aq_a𝒓_i^a|`$ denotes the permanent magnitude of the molecular dipole moment. It is necessary to note that we consider rigid IS molecules, so that effects associated with molecular and electronic polarizabilities are not included in our investigation. In the case $`R\to \infty `$, we have $`j_1(kR)/(kR)\to 0`$ and the computer adapted formula (7) reduces to the well-known fluctuation formula for macroscopic systems in terms of the infinite-system Kirkwood factor $`g_\mathrm{L}(k)=lim_{R\to \infty }G_\mathrm{L}(k)`$.
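In practice, Eq. (7) is evaluated directly from the simulation data for $`G_\mathrm{L}(k)`$. The sketch below assumes arrays of wavevectors and Kirkwood-factor values (with $`k>0`$) together with the scalars $`y`$ and $`R`$; it is an illustration of the formula, not the analysis code used for the results quoted later.

```python
import numpy as np

def j1(z):
    """Spherical Bessel function of first order, j1(z) = sin(z)/z^2 - cos(z)/z."""
    z = np.asarray(z, dtype=float)
    return np.sin(z) / z**2 - np.cos(z) / z

def eps_L(k, G_L, y, R):
    """Longitudinal dielectric constant from the finite-system Kirkwood factor, Eq. (7)."""
    G_L = np.asarray(G_L, dtype=float)
    g_L = G_L / (1.0 + 27.0 * y * G_L * j1(k * R) / (k * R))
    return 1.0 / (1.0 - 9.0 * y * g_L)   # from (eps_L - 1)/eps_L = 9*y*g_L
```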
As was mentioned earlier, the electric field $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}`$ in the form (3), (4) as well as the fluctuation formula (7) were first proposed for the investigation of polar systems of point dipoles . However, working within a semiphenomenological framework, it was not understood how to perform the truncation of the intermolecular potential $`\phi _{ij}`$ in attempts to extend this formula to IS models. As a result, the molecular cut-off $`r_{ij}=|𝒓_i-𝒓_j|\le R`$, where $`𝒓_i`$ is the center of mass of the $`i`$th molecule, and the usual PDRF have been suggested \[11–13\]:
$$\phi _{ij}=\sum _{a,b}\frac{q_aq_b}{|𝒓_i^a-𝒓_j^b|}-\frac{𝝁_i\cdot 𝝁_j}{R^3},r_{ij}\le R.$$
(8)
It is essentially to emphasize that the fluctuation formula (7) takes into account finiteness of the system explicitly by the factor $`j_\stackrel{}{1}(kR)/(kR)`$. As a result, if the system size is sufficiently large (terms of order $`R^2`$ can be neglected), the bulk ($`N,V\mathrm{}`$) dielectric constant can be reproduced via the finite-system Kirkwood factor $`G_\stackrel{}{\mathrm{L}}(k)`$ which depends on $`R`$ in a characteristic way. However, to achieve this self-consistency in the evaluation of the bulk dielectric constant, the equilibrium averaging in $`G_\stackrel{}{\mathrm{L}}(k)`$ must be calculated for systems with the intermolecular potential which leads exactly to the microscopic electric field $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)`$ (3). As we shall below, the intermolecular potential (8) does not obey this condition.
To derive the exact intermolecular potential in the charge representation, we perform the inverse Fourier transform $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)=\frac{1}{(2\pi )^3}{\displaystyle d𝒌\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)\text{e}^{\mathrm{i}𝒌\mathbf{}𝒓}}`$ and obtain using (6)
$$\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)=\underset{i,a}{}q_\stackrel{}{a}\frac{𝒓𝒓_i^a}{|𝒓𝒓_i^a|^3}\left(1\frac{6}{\pi }\frac{|𝒓𝒓_i^a|^2}{R}\underset{0}{\overset{\mathrm{}}{}}j_\stackrel{}{1}(kR)j_\stackrel{}{1}(k|𝒓𝒓_i^a|)dk\right).$$
(9)
Taking into account that $`\frac{6}{\pi }\int _0^{\infty }j_1(kR)j_1(k\rho )dk=\rho /R^2`$ if $`\rho \le R`$ and is equal to $`R/\rho ^2`$ if $`\rho >R`$, we have
$$\widehat{𝑬}_R^{\mathrm{RF}}(𝒓)=\sum _{i,a}q_a\frac{𝒓-𝒓_i^a}{|𝒓-𝒓_i^a|^3}\left(1-\frac{|𝒓-𝒓_i^a|^3}{R^3}\right)\text{ if }|𝒓-𝒓_i^a|\le R$$
(10)
and $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒓)=0`$ otherwise, where the first term in the right-hand side is the Coulomb field, while the second contribution corresponds to the RF in the IS description.
In order to understand nature of this field, we consider a spherical cavity of radius $`R`$ with the center at point $`𝒓`$, embedded in an infinite conducting medium. Let us place a point charge $`q_\stackrel{}{a}`$ at point $`𝒓_i^a`$ in the cavity, so that $`|𝒓𝒓_i^a|R`$. The total electric field $`𝒆_i^a(𝒓)`$ at point $`𝒓`$ consists of the field due to the charge $`q_\stackrel{}{a}`$ and the field created by induced charges located on the surface of the cavity. According to the method of electrostatic images , this last field can be presented as the field of an imaginary charge $`q_\stackrel{}{a}^{}=q_\stackrel{}{a}R/|𝒓𝒓_i^a|`$ which is located at point $`𝒓_{}^{}{}_{i}{}^{a}=𝒓R^2(𝒓𝒓_i^a)/|𝒓𝒓_i^a|^2`$ outside the sphere. Then $`𝒆_i^a(𝒓)=q_\stackrel{}{a}(𝒓𝒓_i^a)/|𝒓𝒓_i^a|^3+q_\stackrel{}{a}^{}(𝒓𝒓_{}^{}{}_{i}{}^{a})/|𝒓𝒓_{}^{}{}_{i}{}^{a}|^3=q_\stackrel{}{a}(𝒓𝒓_i^a)(1/|𝒓𝒓_i^a|^31/R^3)`$ that is completely in line with the term of sum (10).
In the potential representation ($`\widehat{𝑬}_R^{\mathrm{RF}}(𝒓)=-\mathbf{\nabla }\mathrm{\Phi }(𝒓)`$), we obtain $`\mathrm{\Phi }(𝒓)=\sum _{i,a}\varphi _i^a(𝒓)`$, where $`\varphi _i^a(𝒓)=q_a(1/\rho _i^a+\frac{1}{2}(\rho _i^a)^2/R^3+C)`$, $`\rho _i^a=|𝒓-𝒓_i^a|`$ and $`C`$ is, in general, an arbitrary constant which for infinite systems is chosen so that $`\varphi _i^a|_{\rho _i^a\to \infty }=0`$. In our case, according to the toroidal boundary convention, $`\varphi _i^a|_{\rho _i^a=R}=0`$, whence $`C=-\frac{3}{2}R^{-1}`$. Then the intermolecular potential of interaction is $`\phi _{ij}=\sum _{a,b}q_b\varphi _i^a(𝒓_j^b)=\sum _{a,b}q_a\varphi _j^b(𝒓_i^a)=\sum _{a,b}\phi _{ij}^{ab}`$, where
$$\phi _{ij}^{ab}=\{\begin{array}{ccc}q_aq_b\left(\frac{1}{|𝒓_i^a-𝒓_j^b|}+\frac{1}{2}\frac{|𝒓_i^a-𝒓_j^b|^2}{R^3}-\frac{3}{2R}\right)& ,\hfill & |𝒓_i^a-𝒓_j^b|\le R\\ 0& ,\hfill & |𝒓_i^a-𝒓_j^b|>R\end{array}$$
(11)
and the site-site cut-off is performed.
It is easily seen from (11) that the ISRF part $`\frac{1}{2}\sum _{a,b}q_aq_b|𝒓_i^a-𝒓_j^b|^2/R^3`$ transforms into the usual point-dipole form $`-𝝁_i\cdot 𝝁_j/R^3`$ only for $`r_{ij}\le R-d`$, where $`d=2\mathrm{max}|𝜹_i^a|`$ is the diameter of the molecule and $`𝜹_i^a=𝒓_i^a-𝒓_i`$. If the molecular rather than the site-site cut-off is applied to the potential (11), this transformation is valid for arbitrary $`r_{ij}\le R`$. Moreover, in the latter case the constant $`C=-\frac{3}{2}R^{-1}`$ is canceled owing to the electroneutrality ($`\sum _aq_a=0`$) of the molecule and we recover the result (8) of previous work . However, the potential of interaction (11) corresponds completely to the conditions under which the fluctuation formula (7) is derived. Therefore, this potential, instead of (8), must be used in simulations to obtain a correct value for the dielectric constant.
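For completeness, the site-site pair potential (11) is trivial to implement; the sketch below works in the electrostatic units of the text (pair energies of the form $`q_aq_b/r`$) and takes site positions as 3-vectors.

```python
import numpy as np

def phi_ab(q_a, q_b, r_a, r_b, R):
    """Site-site ISRF pair potential of Eq. (11); zero beyond the site-site cut-off R."""
    r = np.linalg.norm(np.asarray(r_a, dtype=float) - np.asarray(r_b, dtype=float))
    if r > R:
        return 0.0
    return q_a * q_b * (1.0 / r + 0.5 * r**2 / R**3 - 1.5 / R)
```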
## 3 Molecular reaction field
In the case of point dipoles, where $`d\to +0`$ and $`q_a\to \infty `$ with $`\mu `$ held constant, the representations (8) and (11) are identical and reduce to the well-known result
$$\phi _{ij}=-𝝁_i\cdot 𝐓(𝒓_{ij})\cdot 𝝁_j-\frac{𝝁_i\cdot 𝝁_j}{R^3},r_{ij}\le R$$
(12)
for the interdipolar interaction in the RF geometry. It is easy to see that in the case of IS models, the intermolecular potential (8) takes into account effects associated with the finiteness of the molecule inconsistently. For example, the interdipolar potential is replaced by the real site-site Coulomb interactions, whereas the reaction field is retained in its usual point-dipole form. From this point of view a natural question arises of how to improve the RF within the molecular cut-off scheme. The simplest way to solve this problem lies in the following.
Let us consider the above-mentioned spherical cavity, centered now at some fixed point $`𝒓_0`$, in the infinite conducting medium. We place the $`i`$th molecule in such a way that all sites of the molecule are located in the cavity. This condition is fulfilled provided $`|𝒓_i-𝒓_0|\le R_d\equiv R-d/2`$. The potential of the molecular reaction field at a point $`𝒓`$ belonging to the cavity can be presented, according to the method of electrostatic images, as
$$\phi _i^{\mathrm{RF}}(𝒓)=\underset{a}{}\frac{q_\stackrel{}{a}^{}}{|𝝆𝝆_{}^{}{}_{i}{}^{a}|}=\underset{a}{}\frac{q_\stackrel{}{a}R/\rho _i^a}{\left|𝝆\left({\displaystyle \frac{R}{\rho _i^a}}\right)^2𝝆_i^a\right|}=\underset{a}{}\frac{q_\stackrel{}{a}}{\left|{\displaystyle \frac{\rho _i^a}{R}}𝝆{\displaystyle \frac{R}{\rho _i^a}}𝝆_i^a\right|},$$
(13)
where $`𝝆=𝒓𝒓_\stackrel{}{0}`$ and $`𝝆_i^a=𝒓_i^a𝒓_\stackrel{}{0}`$. Differentiating (13) over $`𝒓`$ at point $`𝒓_\stackrel{}{0}`$ yields
$$\frac{\phi _i^{\mathrm{RF}}(𝒓)}{𝒓}|_{𝒓_0}=\frac{𝝁_i}{R^3},\frac{^2\phi _i^{\mathrm{RF}}(𝒓)}{𝒓𝒓}|_{𝒓_0}=\frac{𝐪_i^{𝒓_0}}{R^5},\frac{^3\phi _i^{\mathrm{RF}}(𝒓)}{𝒓𝒓𝒓}|_{𝒓_0}=\frac{𝐠_i^{𝒓_0}}{R^7},\mathrm{}$$
(14)
Here $`𝝁_i=_aq_\stackrel{}{a}𝝆_i^a=_aq_\stackrel{}{a}𝜹_i^a`$ is the dipole moment of $`i`$th molecule, which does not depend on $`𝒓_\stackrel{}{0}`$ owing electroneutrality of the molecule, while $`𝐪_i^{𝒓_0}=_aq_\stackrel{}{a}(3𝝆_i^a𝝆_i^a\rho _{i}^{a}{}_{}{}^{2}𝐈)`$ and $`𝐠_i^{𝒓_0}`$ are the tensors of quadrupole and octupole moments, correspondingly, of $`i`$th molecule with respect to $`𝒓_\stackrel{}{0}`$. The third rank tensor $`𝐠_i^{𝒓_0}`$ has the following components $`𝐠_{i}^{𝒓_0}{}_{\alpha \beta \gamma }{}^{}=3_aq_\stackrel{}{a}\left(5𝝆_{i}^{a}{}_{\alpha }{}^{}𝝆_{i}^{a}{}_{\beta }{}^{}𝝆_{i}^{a}{}_{\gamma }{}^{}\rho _{i}^{a}{}_{}{}^{2}(𝝆_{i}^{a}{}_{\alpha }{}^{}\delta _{\beta \gamma }+𝝆_{i}^{a}{}_{\beta }{}^{}\delta _{\alpha \gamma }+𝝆_{i}^{a}{}_{\gamma }{}^{}\delta _{\alpha \beta })\right)`$. It is more convenient to present multipoles of higher order with respect to the molecular center of mass. For the tensor of quadrupole moment we obtain $`𝐪_i^{𝒓_0}=𝐪_i+𝐰_i`$, where $`𝐪_i=_aq_\stackrel{}{a}(3𝜹_i^a𝜹_i^a\delta _{i}^{a}{}_{}{}^{2}𝐈)`$ is the tensor of quadrupole moment of $`i`$th molecule with respect to its center of mass, $`𝐰_i=3(𝝁_i𝝆_i+𝝆_i𝝁_i)2𝝁_i\mathbf{}𝝆_i𝐈`$ and $`𝝆_i=𝒓_i𝒓_\stackrel{}{0}`$. It is necessary to underline that tensor $`𝐪_i`$ is split into dynamical $`𝝎_i=_aq_\stackrel{}{a}𝜹_i^a𝜹_i^a`$ and conservative $`_aq_\stackrel{}{a}\delta _{i}^{a}{}_{}{}^{2}𝐈`$ parts for rigid molecules.
Putting $`𝒓_\stackrel{}{0}=𝒓_j`$ and assuming $`dR`$, we obtain the energy of $`j`$th molecule in the MRF of $`i`$th molecule
$$\varphi _{ji}^{\mathrm{RF}}=𝝁_j\mathbf{}\frac{\phi _i^{\mathrm{RF}}(𝒓)}{𝒓}|_{𝒓_j}+\frac{1}{6}𝐪_j\mathbf{:}\frac{^2\phi _i^{\mathrm{RF}}(𝒓)}{𝒓𝒓}|_{𝒓_j}+\mathrm{}=\frac{𝝁_j\mathbf{}𝝁_i}{R^3}\frac{1}{6}\frac{𝐪_j\mathbf{:}𝐪_i^{𝒓_j}}{R^5}+\mathrm{},$$
(15)
where multipoles of higher order have been neglected. Finally, using the RF potential $`\phi _{ij}^{\mathrm{RF}}=(\varphi _{ij}^{\mathrm{RF}}+\varphi _{ji}^{\mathrm{RF}})/2`$ yields the desired intermolecular potential
$$\phi _{\stackrel{}{ij}}=\{\begin{array}{ccc}\underset{a,b}{}\frac{q_\stackrel{}{a}q_\stackrel{}{b}}{|𝒓_i^a𝒓_j^b|}\frac{𝝁_i\mathbf{}𝝁_j}{R^3}\frac{𝐪_i\mathbf{:}𝐪_j3(𝐪_i\mathbf{:}𝝁_j𝒓_{ij}+𝐪_j\mathbf{:}𝝁_i𝒓_{ji})}{6R^5}& ,\hfill & r_{ij}R_d\\ 0& ,\hfill & r_{ij}>R_d\end{array}$$
(16)
where equality $`𝐪\mathbf{:}𝐈=0`$ has been used.
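The molecular moments entering Eq. (16) are computed from the site charges and positions in the usual way; the sketch below evaluates $`𝝁_i`$ and the traceless quadrupole tensor $`𝐪_i`$ with respect to the molecular center of mass, following the definitions given above.

```python
import numpy as np

def molecular_moments(charges, sites, center):
    """Dipole vector and traceless quadrupole tensor of a rigid molecule
    with respect to its center of mass, as used in Eq. (16)."""
    q = np.asarray(charges, dtype=float)
    d = np.asarray(sites, dtype=float) - np.asarray(center, dtype=float)  # delta_i^a
    mu = (q[:, None] * d).sum(axis=0)
    quad = sum(qa * (3.0 * np.outer(da, da) - np.dot(da, da) * np.eye(3))
               for qa, da in zip(q, d))
    return mu, quad
```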
The total reaction field, created by all molecules at point $`𝒓`$ near $`𝒓_\stackrel{}{0}`$ is
$$𝑬_{\stackrel{}{\mathrm{RF}}}(𝒓)=\underset{i}{\overset{\rho _iR_d}{}}\frac{\phi _i^{\mathrm{RF}}(𝒓)}{𝒓}=\frac{𝑴(R_d)}{R^3}+\frac{𝐐(R_d)+𝐖(R_d)}{R^5}𝝆+\mathrm{},$$
(17)
where $`𝑴(R_d)=_i^{\rho _iR_d}𝝁_i`$ and $`𝐐(R_d)=_i^{\rho _iR_d}𝐪_i`$ denote the total dipole and own quadrupole moment, respectively, within the sphere of radius $`R_d`$ and $`𝐖(R_d)=_i^{\rho _iR_d}𝐰_i`$. In the case of point dipoles, we have $`R_dR`$, $`𝐪_i,𝐠_i,\mathrm{}0`$ and the MRF (17) transforms into $`𝑴(R)/R^3+𝐖(R)𝝆/R^5`$. This last formula shows that the reaction field of finite systems is inhomogeneous even for point dipoles. Only for macroscopic ($`R\mathrm{}`$) systems, we reproduce the well-known homogeneous reaction field $`𝑴(R)/R^3`$ introduced by Barker and Watts . For finite IS systems, additional higher multipole terms appear. This brings, for example, into existence of the new quadrupole-dipole and quadrupole-quadrupole interactions in the intermolecular potential (16). We note that the idea of using the higher multipole moments in the RF has been proposed for the first time by Friedman .
However, the modified intermolecular potential (16) still needs to be complemented by a self-consistent fluctuation formula as this has already been done in the preceding section by the fluctuation formula (7) for the potential of interaction in the site-site cut-off scheme (11). Unfortunately, it is not a simple matter to construct fluctuation formulas in the molecular cut-off approach. This problem will be considered in further studying.
The difference in the RF geometry between IS and PD models lies in the distinction for their microscopic operators of polarization density. For IS models
$$\widehat{𝑷}_\mathrm{L}(𝒌)=\frac{\mathrm{i}𝒌}{k^2}\underset{i=1}{\overset{N}{}}\text{e}^{\mathrm{i}𝒌\mathbf{}𝒓_i}\underset{a}{}q_\stackrel{}{A}\text{e}^{\mathrm{i}𝒌\mathbf{}𝜹_i^a}=\widehat{𝑴}_\mathrm{L}(𝒌)\frac{\mathrm{i}𝒌}{2}\widehat{𝒌}\widehat{𝒌}\mathbf{:}\underset{i=1}{\overset{N}{}}𝝎_i\text{e}^{\mathrm{i}𝒌\mathbf{}𝒓_i}+\mathrm{},$$
(18)
where $`\widehat{𝑴}_\mathrm{L}(𝒌)=\widehat{𝒌}_{i=1}^N\widehat{𝒌}\mathbf{}𝝁_i\text{e}^{\mathrm{i}𝒌\mathbf{}𝒓_i}`$ is the microscopic operator of polarization density for point dipoles and an expansion over small parameter $`𝒌\mathbf{}𝜹_i^a`$ has been made . However, putting $`\widehat{𝑷}_\mathrm{L}(𝒌)\widehat{𝑴}_\mathrm{L}(𝒌)`$ in the microscopic electric field $`\widehat{𝑬}_\stackrel{}{R}^{\mathrm{RF}}(𝒌)`$ (6) at the very beginning and taking attempts to perform the inverse Fourier transform, we obtain that the corresponding integral is divergent in $`𝒌`$-space when $`k\mathrm{}`$. This divergence is involved by the specific nature of point dipoles for which the parameter $`𝒌\mathbf{}𝜹_i^a`$ becomes indeterminate in the limit $`k\mathrm{}`$ because of $`𝜹_i^a+0`$ and the expansion (18) fails. Therefore, we must manipulate with the full operator $`\widehat{𝑷}_\mathrm{L}(𝒌)`$ to obtain the interdipolar potential (12) consequently and let $`𝜹_i^a+0`$ at the end of the calculation only.
Since $`𝝁\sim d`$ and $`𝐪\sim d^2`$, the quadrupole contribution relative to the dipole term in (16) varies from of order $`(d/R)^2`$ at $`r_{ij}=0`$ to $`d/R`$ at $`r_{ij}=R_d`$. Therefore, as long as the usual intermolecular potential (8) is applied in simulations, the dielectric constant cannot be reproduced with a precision better than $`d/R`$. It is evident that using the modified intermolecular potential (16) will lead to uncertainties of order $`(d/R)^2`$. They decrease with increasing sample size as $`R^{-2}`$, i.e., at the same rate as those connected with the truncation of the potential. Effects of the octupole and higher order multipole contributions to the MRF are of order $`(d/R)^3`$ and can be ignored.
## 4 Applying the ISRF to a MCY water model
In the previous investigations \[11–13\], the standard PDRF geometry (8) has been applied in actual simulations of the MCY and TIP4P models. As a result, the static, frequency-dependent and wavevector-dependent dielectric constants have been determined. For these models $`d=1.837\mathrm{\AA }`$, and the cut-off radius $`R=9.856\mathrm{\AA }`$ has been used in the simulations. From what has been said in the preceding section, it is expected that the precision of these calculations cannot be better than $`d/R\approx 20\%`$. We now show by actual calculations that this is indeed the case.
As an example we apply the ISRF geometry (11) to the MCY potential . The calculations have been performed with the help of Monte Carlo (MC) simulations, details of which are similar to those reported earlier , at the density of $`\rho `$= 1.0 g/cm<sup>3</sup> and at the temperature of $`T=292`$ K, i.e., in the same thermodynamic point and yet with the same number $`N=256`$ of molecules and cut-off radius $`R=9.856\mathrm{\AA }`$ as considered in .
Our result of the calculation (7) for the longitudinal components of the wavevector-dependent infinite-system Kirkwood factor $`g_\stackrel{}{\mathrm{L}}(k)`$ and dielectric constant $`\epsilon _\stackrel{}{\mathrm{L}}(k)`$ obtained within the ISRF geometry is presented in Figs. 1 and 2, respectively, as the full circles connected by the solid curves. For the purpose of comparison, analogous calculations performed previously within the PDRF are also included in these figures (the open circles connected by the dashed curves). It is obvious that oscillations observing in the shape of $`g_\stackrel{}{\mathrm{L}}(k)`$ and $`\epsilon _\stackrel{}{\mathrm{L}}(k)`$ obtained within the PDRF method are nonphysical and caused by the finite molecular size which is assumed to be zero in this approach. At the same time, the ISRF geometry gives the true, more smooth dependencies for the Kirkwood factor and dielectric constant because the influence of the finite molecular size is included here explicitly. As we can see from the figures, deviations of values for the wavevector-dependent dielectric quantities obtained using the PDRF from those evaluated within the ISRF geometry are significant. These deviations achieve maximal values about $`25\%`$ near $`k=3\mathrm{\AA }^1`$, where the Kirkwood factor has the first maximum. For great wavevector values $`(k>6\mathrm{\AA }^1)`$ the both geometries lead to identical results because the influence of boundary conditions is negligible in this range of $`k`$.
We remark that the wavevector-dependent quantities were calculated directly for the discrete set $`k=nk_{\mathrm{min}}`$ of grid points accessible in the simulations, where $`k_{\mathrm{min}}=0.319\mathrm{\AA }^{-1}`$ and $`n`$ is an integer. These quantities are marked in the figures by the symbols. To obtain intermediate values between the grid points we have used cubic spline interpolation for the smoothest dependence, namely $`g_{\mathrm{L}}(k)`$. Values of $`\epsilon _{\mathrm{L}}(k)`$ can then be evaluated anywhere in the considered domain of $`k`$-space from the interpolated values of $`g_{\mathrm{L}}(k)`$ via Eq. (7). In particular, the first singularity of $`\epsilon _{\mathrm{L}}(k)`$ (see Fig. 2a) has been investigated in this way.
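To make the interpolation step concrete, the following short Python sketch is included (it is not the original analysis code; the grid spacing $`k_{\mathrm{min}}=0.319\mathrm{\AA }^{-1}`$ is taken from the text, while the tabulated $`g_{\mathrm{L}}`$ values and the helper `epsilon_from_g` standing in for Eq. (7) are placeholders that must be replaced by the actual data and relation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

k_min = 0.319            # smallest accessible wavevector, in 1/Angstrom
n = np.arange(1, 21)     # grid points k = n * k_min accessible in the run
k_grid = n * k_min
g_grid = np.exp(-0.5 * (k_grid - 3.0)**2)   # placeholder g_L(k) data, NOT simulation output

# Interpolate the smooth quantity g_L(k); epsilon_L(k) is then evaluated
# from the interpolated values rather than interpolated directly.
g_spline = CubicSpline(k_grid, g_grid)

def epsilon_from_g(g, prefactor=3.0):
    """Placeholder for Eq. (7): maps g_L(k) to epsilon_L(k).
    The actual relation used in the paper should be substituted here."""
    return 1.0 / (1.0 - prefactor * g)

k_dense = np.linspace(k_grid[0], k_grid[-1], 500)
eps_dense = epsilon_from_g(g_spline(k_dense))
```

Interpolating $`g_{\mathrm{L}}(k)`$ rather than $`\epsilon _{\mathrm{L}}(k)`$ itself avoids splining through the singular points of the dielectric function.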
## 5 Conclusion
Two alternative methods (ISRF and MRF) to overcome the difficulties associated with the finite size of the molecule relative to the system size have been proposed for IS models of polar systems. It has been shown rigorously that the fluctuation formula commonly used for the calculation of the dielectric constant in computer experiments corresponds to the ISRF geometry with a site-site cut-off for the Coulomb interaction potentials. The molecular cut-off scheme leads to the MRF geometry, with an additional quadrupole term beyond the well-known PDRF.
It has been corroborated by actual calculations that the ISRF geometry is much more efficient than the usual PDRF method for investigating the dielectric properties of IS models. The modified MRF approach is expected to be comparable in efficiency with the ISRF geometry. We hope to apply the MRF to practical simulations in a future study.
Figure captions
Fig. 1. Longitudinal component of the wavevector-dependent Kirkwood factor for the MCY water. The results in the ISRF and PDRF geometries are plotted by the solid and dashed curves, respectively.
Fig. 2. Longitudinal component of the wavevector-dependent dielectric constant for the MCY water. Notation as in Fig. 1. The vertical lines indicate the positions of a singularity.
# Classical diffusion of 𝑁 interacting particles in one dimension: General results and asymptotic laws
## I Introduction
Low dimensionality, geometrical constraints and interactions between classically diffusing particles are expected to modify transport coefficients and/or the nature of the asymptotic regimes. As a trivial example, a Brownian particle subjected to a purely reflecting wall has an anomalous drift, its average position increasing as $`t^{1/2}`$ (which corresponds to a vanishing velocity), whereas the centered second moment has a normal diffusive spreading with a lowered diffusion constant compared to diffusion without the barrier. In the case of two particles with a contact repulsion, each of them plays for the other the role of a fluctuating boundary condition, which affects both the transport coefficient linked to the average position and the mean square deviation. The aim of this letter is to discuss such questions in the general case of $`N`$ mutually interacting particles with a hard-core interaction.
Classical diffusion with interactions does not seem to have drawn much attention. A notable exception is the so-called “tracer problem”, defined as the diffusion of a tagged particle in an infinite sea of other diffusing particles. The one-dimensional model was first solved by Harris and discussed in subsequent papers . The main result is that the mean square dispersion of the position, $`\mathrm{\Delta }x^2`$, displays a subdiffusive behaviour, which originates from the fact that the motion of the tagged particle is, anywhere and at any time, hindered by all the surrounding particles. In one dimension Harris found that $`\mathrm{\Delta }x^2`$ grows as $`t^{1/2}`$ at large times; this implies that the typical distance travelled by the tracer at time $`t`$ goes like $`t^{1/4}`$ instead of $`t^{1/2}`$ in the free case. More recently, Derrida et al. found several exact results for the asymmetric simple exclusion process ; recent bibliography and other results on this subject can be found in Mallick’s thesis .
The problem considered here is quite different, although it belongs to the class of single-file diffusion problems encountered in many fields (one-dimensional hopping conductivity , ion transport in biological membranes , channelling in zeolites ). At some initial time, a compact cluster of $`N`$ pointlike particles is launched at the origin of a one-dimensional space; each of them undergoes ordinary Brownian motion but has a contact repulsive interaction with its neighbours. As a consequence, the particles located at the edge of the cluster can move freely on one side and are subjected to a fluctuating boundary condition on the other, whereas the particles inside the cluster are subjected to such boundary conditions on either side.
The questions to be solved are to find the (anomalous) drift of one particle and its diffusion constant as a function of its position within the cluster and of the number of particles $`N`$. In addition, two-particle correlations are worth analyzing, as well as the asymptotic one-body probability distributions.
## II One-particle transport coefficients
At the initial time, the $`N`$ particles are assumed to form a compact cluster located at the origin $`x=0`$, each of them having the same diffusion constant $`D`$ as all the others. Due to the contact repulsion, two particles can never cross each other, so that order in space is preserved at any time; this means that the $`N`$ coordinates $`x_i`$ can be labelled so that:
$$x_1<x_2<\cdots <x_N\qquad \forall t.$$
(2.1)
The solution of the diffusion equation for such an initial condition is the following:
$$p(x_1,x_2,\ldots ,x_N;t)=N!\prod _{i=1}^{N}\frac{\mathrm{e}^{-x_i^2/(4Dt)}}{\sqrt{4\pi Dt}}\prod _{i=1}^{N-1}Y(x_{i+1}-x_i),$$
(2.2)
where $`Y`$ is the Heaviside unit step function ($`Y(x)=1`$ if $`x>0`$, $`0`$ otherwise). In a recent and important paper , the general formal solution of the same problem with an arbitrary initial condition was given, using the reflection principle. From eq. (2.2), one readily gets the reduced one-particle density for the $`n^{\mathrm{th}}`$ particle of the cluster:
$$p_n^{(1)}(x;t)=\frac{2^{1-N}N!}{(n-1)!(N-n)!}\left[1+\mathrm{\Phi }\left(\frac{x}{\sqrt{4Dt}}\right)\right]^{n-1}\left[1-\mathrm{\Phi }\left(\frac{x}{\sqrt{4Dt}}\right)\right]^{N-n}\frac{\mathrm{e}^{-x^2/(4Dt)}}{\sqrt{4\pi Dt}}.$$
(2.3)
In the last equation, $`\mathrm{\Phi }`$ denotes the probability integral , satisfying $`\mathrm{\Phi }(\pm \infty )=\pm 1`$. The two factors $`(1\pm \mathrm{\Phi })`$ represent the steric effects on the $`n^{\mathrm{th}}`$ particle due to the other ones. Knowing $`p_n^{(1)}`$, it is in principle possible to compute the first few moments giving the average position and the mean square dispersion for any one of the particles. As a first result, one immediately observes that any moment of the coordinate has a quite simple variation in time, at any time, not only in the final stage of the motion. Indeed, using eq. (2.3), it is readily seen that the $`k^{\mathrm{th}}`$ moment $`<x_n^k>`$ increases at any time as $`t^{k/2}`$, since $`(Dt)^{1/2}`$ is the only lengthscale of the problem. As a consequence, at all times $`t`$, the average coordinate of the $`n^{\mathrm{th}}`$ particle $`<x_n>(t)`$ increases as $`t^{1/2}`$ – except, for instance, for the central particle of the cluster when $`N`$ is odd –, whereas the mean square displacement $`\mathrm{\Delta }x_n^2\equiv <x_n^2>-<x_n>^2`$ increases as $`t`$. The drift, due to left-right symmetry breaking, is thus always anomalous, and the diffusion is always normal. As a consequence, one can define, for any $`n`$ and $`N`$, the following transport coefficients $`V_{1/2,n}`$ and $`D_n`$:
$$<x_n>=V_{1/2,n}(N)t^{\frac{1}{2}},$$
(2.4)
$$\mathrm{\Delta }x_n^2=\mathrm{\hspace{0.17em}2}D_n(N)t,$$
(2.5)
It remains to find the functions $`V_{1/2,n}(N)`$ and $`D_n(N)`$, which incorporate the dependence of the transport coefficients upon the number of particles. For $`N=2`$, one readily finds:
$$<x_2>=-<x_1>=\sqrt{\frac{2}{\pi }Dt},$$
(2.6)
and
$$\mathrm{\Delta }x_n^2=2\left(1-\frac{1}{\pi }\right)Dt.$$
(2.7)
In the case of a non-fluctuating (fixed) perfectly reflecting barrier, one has:
$$<x>=\sqrt{\frac{4}{\pi }Dt},\qquad \mathrm{\Delta }x^2=2\left(1-\frac{2}{\pi }\right)Dt.$$
(2.8)
Thus, for a cluster of two particles, each of them acting for the other as a fluctuating barrier, the drift is slowed and the diffusion is enhanced compared to the case of a fixed barrier. These facts are easily understood on physical grounds.
Unfortunately, it does not seem possible to write the exact expressions for $`V_{1/2,n}`$ and $`D_n`$ in a closed, tractable form starting from eq. (2.3). On the other hand, since it is worth analyzing the case of a large number of particles $`N\gg 1`$, and since the factors involving the $`\mathrm{\Phi }`$ functions have rather sharp derivatives, especially when $`N`$ is large, a gaussian approximation is expected to produce the correct large-$`N`$ variation of $`V_{1/2,n}`$ and $`D_n`$.
Let us first consider one of the two particles located at one extremity of the cluster, the right one for instance. From eq. (2.3), the one-body density can be written as ($`u=x/\sqrt{4Dt}`$):
$$p_N^{(1)}(x;t)=\frac{1}{\sqrt{4Dt}}\frac{d}{du}\left[\frac{1+\mathrm{\Phi }(u)}{2}\right]^N.$$
(2.9)
This expression naturally has the form encountered in the statistics of extreme values . When $`N1`$, this is a very sharp function with a maximum $`u_0`$ defined by:
$$\frac{\mathrm{e}^{-u_0^2}}{u_0}\simeq \frac{2\sqrt{\pi }}{N}.$$
(2.10)
Making now a gaussian approximation for $`p_N^{(1)}(x;t)`$, one readily finds, up to logarithmic corrections:
$$<x_N>=-<x_1>\simeq \sqrt{\mathrm{ln}\frac{N}{2\sqrt{\pi }}}\sqrt{4Dt}.$$
(2.11)
Thus, for large $`N`$, the coefficient for the anomalous drift has a logarithmic increase with respect to the number $`N`$ of particles of the cluster:
$$V_{1/2,N}(N)\propto (\mathrm{ln}N)^{1/2}.$$
(2.12)
The fact that $`V_{1/2,N}`$ increases with $`N`$ is evident on physical grounds (all the “inside” particles push on those at the edges), but this increase is extremely slow. In addition, the same approximation yields:
$$\mathrm{\Delta }x_N^2=\mathrm{\Delta }x_1^2\simeq \frac{\mathrm{e}^{2/3}}{(2\pi )^{1/3}\mathrm{ln}\frac{N}{2\sqrt{\pi }}}Dt,$$
(2.13)
so that:
$$D_1(N)=D_N(N)\propto (\mathrm{ln}N)^{-1}.$$
(2.14)
Although the diffusion is normal, the diffusion constant decreases and tends toward $`0`$ for infinite $`N`$. In a pictorial way, the more particles there are pushing on its back, the less quickly an edge particle spreads on either side of its average position, which drifts proportionally to $`t^{1/2}`$. Note that, from eqs. (2.11) and (2.13), the relative fluctuations for the edge particles behave as $`(\mathrm{ln}N)^{-1}`$; this extremely slow decrease of the fluctuations, as compared to $`N^{-1/2}`$ in ordinary cases, implies that convergence toward a large-number law, if any, is quite poor.
The correctness of the gaussian approximation for the first two moments was checked by numerically computing the exact average position and exact mean square displacement, and by looking at $`<x_N>/[4Dt\mathrm{ln}(N/2\sqrt{\pi })]^{1/2}`$ and $`[\mathrm{\Delta }x_N^2/(4Dt)]\mathrm{ln}\frac{N}{2\sqrt{\pi }}`$. Fig. 1 displays the rather rapid convergence of these quantities toward constants at large $`N`$, confirming the validity of the gaussian approximation, at least for the first two moments.
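This check is easy to reproduce. The short sketch below (an illustration, not the original code) exploits the fact that eq. (2.2) is just the symmetrised free propagator restricted to the ordered sector, so the ordered positions have the same law as the order statistics of $`N`$ independent Gaussian variables of variance $`2Dt`$; the root $`u_0`$ of eq. (2.10) is also computed for comparison with the leading estimate $`\sqrt{\mathrm{ln}(N/2\sqrt{\pi })}`$.

```python
import numpy as np
from scipy.optimize import brentq

def u0(N):
    # root of exp(-u^2)/u = 2*sqrt(pi)/N, cf. Eq. (2.10); the lhs is monotonic in u
    return brentq(lambda u: np.exp(-u*u)/u - 2.0*np.sqrt(np.pi)/N, 1e-3, 20.0)

def edge_moments(N, n_samples=20000, D=1.0, t=1.0, seed=0):
    """Mean and variance of the rightmost particle x_N at time t.

    Hard-core point particles launched together at x=0 have the same
    ordered-position statistics as N independent Brownian particles
    (Eq. (2.2)), so x_N is simply the maximum of N Gaussians of variance 2Dt."""
    rng = np.random.default_rng(seed)
    x_N = rng.normal(0.0, np.sqrt(2.0*D*t), size=(n_samples, N)).max(axis=1)
    return x_N.mean(), x_N.var()

for N in (16, 64, 256, 1024, 4096):
    mean, var = edge_moments(N)
    L = np.log(N/(2.0*np.sqrt(np.pi)))
    # rescaled combinations of the text: both tend to constants at large N,
    # while u0(N) approaches sqrt(L) up to logarithmic corrections
    print(N, mean/np.sqrt(4.0*L), var*L/4.0, u0(N), np.sqrt(L))
```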
Obviously, things go quite differently for the particle located at the center of the cluster (assuming $`N`$ to be an odd number for simplicity). First, it does not move on average. Second, it must have a rather small diffusion constant compared to the edge particles, since it is strongly inhibited by its numerous erratic partners on either side. Indeed, the gaussian approximation yields:
$$\mathrm{\Delta }x_{(N+1)/2}^2\simeq \frac{\pi }{N}Dt.$$
(2.15)
This provides the large-$`N`$ dependence of the diffusion constant for the central particle:
$$D_{(N+1)/2}(N)\propto \frac{1}{N},$$
(2.16)
entailing that the fluctuations are now of the order of $`1/\sqrt{N}`$.
Thus, in any case, the diffusion is normal, in contrast to Harris’ case for which $`\mathrm{\Delta }x^2\propto t^{1/2}`$. Yet, note that in the $`N\to \infty `$ limit, both $`D_N`$ and $`D_{(N+1)/2}`$ vanish, which indicates a lowering of the dynamical exponent. The vanishing of the diffusion constants in all cases in the infinite-$`N`$ limit signals the onset of a subdiffusive regime in the finite-concentration situation. For the middle particle, which is surrounded by infinitely many others, this is in conformity with Harris’ result. For the two edge particles, the marginal logarithmic decrease of $`D_N`$ comes from the fact that they still face a free semi-infinite space to wander in.
Note that the scalings with $`N`$ described by eqs. (2.12) and (2.14) are the same as those obtained in ref. ; nevertheless, the asymptotic distribution law is not of the Gumbel type (see below).
## III Correlations
Statistical correlations inside the cluster are also worth analyzing. As an example, let us consider the correlations between the two edge particles. For the latter, the two-body probability density is easily found from eq. (2.2) to be:
$$p_{1N}^{(2)}(x_1,x_N;t)=\frac{N(N-1)}{\pi \mathrm{\hspace{0.17em}2}^NDt}\left[\mathrm{\Phi }(u_N)-\mathrm{\Phi }(u_1)\right]^{N-2}\mathrm{e}^{-(u_1^2+u_N^2)}Y(u_N-u_1).$$
(3.17)
Two-body correlations are most simply measured by $`C_{1N}=<x_1x_N>-<x_1><x_N>`$; making again a gaussian approximation for the two-body density, this correlator has the following approximate expression:
$$C_{1N}(t)\simeq 4\mathrm{ln}\frac{N}{2\sqrt{\pi }}\mathrm{\hspace{0.17em}}Dt.$$
(3.18)
Due to scaling in space, the normalized ratio $`C_{1N}(t)/\mathrm{\Delta }x_1^2`$ is a constant in time. This constant turns out to be an increasing function of the number $`N`$ of particles; from eqs. (2.11) and (2.13), one finds:
$$\frac{C_{1N}(t)}{\mathrm{\Delta }x_1^2}\simeq 4\left(\frac{2\pi }{\mathrm{e}^2}\right)^{1/3}\left[\mathrm{ln}\frac{N}{2\sqrt{\pi }}\right]^2.$$
(3.19)
Thus, increasing the number of inner particles enhances, although quite slowly, the correlations between the two edge particles. Far from inducing some kind of screening effect, the numerous repeated collisions with the inner particles enhance the statistical correlations between the edge particles. In a pictorial way, it can be said that the inner particles act as “virtual bosons” travelling from one edge particle to the other; the more numerous they are, the stronger the effective (statistical) coupling.
## IV Asymptotic distribution laws
Interestingly enough, it is also possible to obtain the asymptotic form of the one-body distribution given by eq. (2.3). For the right particle ($`n=N`$), starting from eq. (2.9), one easily finds, with still $`u=x/\sqrt{4Dt}>0`$ :
$$p_N^{(1)}(x,t)\simeq \frac{N}{\sqrt{4\pi Dt}}\left(1+\frac{1}{2u^2}\right)\mathrm{exp}\left[-u^2-\frac{N}{2u\sqrt{\pi }}\mathrm{e}^{-u^2}\right].$$
(4.20)
The maximum occurs for $`u\simeq u_0`$, and the front is clearly asymmetric around $`u_0`$ – although the gaussian approximation, as shown above, accounts well for the large-$`N`$ dependence of the first two moments (expectation value and fluctuations). This asymmetry represents the pressure exerted by the inner particles on the edge ones. For the left particle, one simply has $`p_1^{(1)}(x,t)=p_N^{(1)}(-x,t)`$. Fig. 2 shows that the large-$`N`$ expression, eq. (4.20), reproduces the exact $`p_N^{(1)}`$ quite well even for a moderately large value of $`N`$. From eq. (4.20), it is seen that $`p_N^{(1)}(x,t)`$ is not exactly a Gumbel distribution; on the other hand, the rescaled variable $`u^2-\mathrm{ln}(N/2\sqrt{\pi })`$ has, up to logarithmic corrections, the same dependence upon $`N`$ as a true Gumbel variable as far as the first two moments are concerned.
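The comparison shown in Fig. 2 can be redone in a few lines; the sketch below (an illustration only) evaluates the exact density from eq. (2.9) and the asymptotic form (4.20), both written as densities in the reduced variable $`u`$:

```python
import numpy as np
from scipy.special import erf

def p_exact(u, N):
    # Eq. (2.9): d/du [ (1 + Phi(u))/2 ]^N, with Phi the probability integral (erf)
    return N * ((1.0 + erf(u))/2.0)**(N - 1) * np.exp(-u*u)/np.sqrt(np.pi)

def p_asympt(u, N):
    # Eq. (4.20), written as a density in the reduced variable u > 0
    return (N/np.sqrt(np.pi)) * (1.0 + 1.0/(2.0*u*u)) \
           * np.exp(-u*u - N*np.exp(-u*u)/(2.0*u*np.sqrt(np.pi)))

u = np.linspace(2.0, 3.2, 200)
ratio = p_asympt(u, 1000)/p_exact(u, 1000)   # close to unity across the peak region
```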
By contrast, starting again from eq. (2.3) for the central particle ($`n=(N+1)/2`$), the asymptotic form of the one-body density turns out to be simply the following normal law:
$$p_{(N+1)/2}^{(1)}(x,t)\simeq \frac{1}{\pi }\sqrt{\frac{N}{2Dt}}\mathrm{e}^{-Nx^2/(2\pi Dt)},$$
(4.21)
in agreement with eq.(2.15).
## V Acknowledgements
I am indebted to Jean-Philippe Bouchaud for a helpful discussion on the statistics of extremes.
Figure Captions
1. Illustration of the asymptotic dependence upon $`N`$ of the transport coefficients for the edge particles (average position, eq. (2.11) and mean square displacement, eq. (2.13)).
2. Comparison of the exact (solid line) and asymptotic (dashed line) distribution functions, respectively given by eqs. (2.3) and (4.20), for a cluster of $`1000`$ particles; the abscissa is the reduced variable $`u=x/\sqrt{4Dt}`$.
# COSMIC STRINGS IN REALISTIC PARTICLE PHYSICS THEORIES AND BARYOGENESIS
## I Introduction
Many particle physics theories admit cosmic strings. For most cosmological studies the simple abelian Higgs model is used as a prototypical cosmic string theory. However, in realistic particle physics theories the situation is more complicated. The resulting cosmic strings can have a rich microstructure. Additional features can be acquired at the string core at each subsequent symmetry breaking. This additional microstructure can, in some cases, be used to constrain the underlying particle physics theory to ensure consistency with standard cosmology. For example, if the theory admits cosmic strings which acquire fermion zero modes, or bose condensates, either at formation or due to a subsequent symmetry breaking, then the zero modes can be excited and will move up or down the string, depending on whether they are left or right movers. This will result in the string carrying a current . An initially weak current on a string loop will be amplified as the loop contracts. The current could become sufficiently strong to halt the contraction of the loop, preventing it from decaying. A stable state, or vorton , is formed. The density of vortons is tightly constrained by cosmological requirements. For example, if vortons are sufficiently stable that they survive until the present time, then we require that the universe is not vorton dominated. However, if vortons only survive a few minutes they can still have cosmological implications: we then require that the universe be radiation dominated at nucleosynthesis. These requirements have been used in to constrain such models.
Vortons are classically stable , but the quantum stability is an open question. It has been assumed that, if vortons decay, they do so by quantum mechanical tunnelling. This would result in them being very long lived. However, in the case of fermion superconductivity, the existence of fermion zero modes at high energy does not guarantee that such modes survive subsequent phase transitions. The disappearance of such zero modes could give another channel for the resulting vortons to decay. Fermion zero modes could also be created at subsequent phase transitions. It is thus necessary to trace the microphysics of the cosmic string from formation through all subsequent phase transitions in the history of the universe.
For example, many popular particle physics theories above the electroweak scale are based on supersymmetry. Such theories can also admit cosmic string solutions . Since supersymmetry is a natural symmetry between bosons and fermions, the fermion partner of the Higgs field forming the cosmic string is a zero mode. Thus, the particle content and interactions dictated by supersymmetry naturally give rise to current-carrying strings. Gauge symmetry breaking can arise either by introduction of a super-potential or by means of a Fayet-Iliopoulos term. In both cases fermion zero modes arise.
However, supersymmetry is not observed in nature and must therefore be broken. We consider general soft supersymmetry breaking terms that could arise and consider their effect on the fermion zero modes. For most soft breaking terms, the zero modes are destroyed. Hence, any vortons formed would dissipate. However, in the case of gauge symmetry breaking via a Fayet-Iliopoulos term, the zero modes, and hence vortons, survive supersymmetry breaking. Hence, supersymmetric theories which break a $`U(1)`$ symmetry this way would result in cosmologically stable vortons, and would therefore be ruled out. However, in the more general case, the problem of cosmic vortons seems to solve itself. That is to say, vortons will be formed at high energy, but will dissipate after the supersymmetry breaking scale.
If the underlying supersymmetric theory is a grand unified one, then, in the string core the grand unified symmetry is restored and typical grand unified processes will be unsuppressed in the string core. Once the vortons decay, the grand unified particles will be released. Their out-of-equilibrium decay results in a baryon asymmetry being produced. Depending on the scale of supersymmetry breaking, the baryon asymmetry produced could account for that required by nucleosynthesis.
In this talk we address this problem. We first review cosmic strings in supersymmetric theories, displaying the string zero modes . We then consider the effect of supersymmetry breaking on these zero modes, showing that the zero modes are destroyed in the general case . The vorton density is estimated in these supersymmetric theories. We show that the underlying theory can be constrained in the case where the vortons are stable. If the vortons are unstable, we estimate the resulting baryon asymmetry from dissipating cosmic vortons. We also take into account the change in entropy density from the vorton decay and show that supersymmetry breaking occurring just before the vorton density dominates that of radiation results in a baryon asymmetry in agreement with observation .
## II Cosmic Strings in Supersymmetric Theories
We consider supersymmetric versions of the spontaneously broken gauged $`U(1)`$ abelian Higgs model. These models are related to or are simple extensions of those found in reference. In superfield notation, such a theory consists of a vector superfield $`V`$ and $`m`$ chiral superfields $`\mathrm{\Phi }_i`$, ($`i=1\mathrm{}m`$), with $`U(1)`$ charges $`q_i`$. In the Wess-Zumino gauge these may be expressed in component notation as
$$V(x,\theta ,\overline{\theta })=(\theta \sigma ^\mu \overline{\theta })A_\mu (x)+i\theta ^2\overline{\theta }\overline{\lambda }(x)-i\overline{\theta }^2\theta \lambda (x)+\frac{1}{2}\theta ^2\overline{\theta }^2D(x),$$
(1)
$$\mathrm{\Phi }_i(x,\theta ,\overline{\theta })=\varphi _i(y)+\sqrt{2}\theta \psi _i(y)+\theta ^2F_i(y),$$
(2)
where $`y^\mu =x^\mu +i\theta \sigma ^\mu \overline{\theta }`$. Here, $`\varphi _i`$ are complex scalar fields and $`A_\mu `$ is a vector field. These correspond to the familiar bosonic fields of the abelian Higgs model. The fermions $`\psi _{i\alpha }`$, $`\overline{\lambda }_\alpha `$ and $`\lambda _\alpha `$ are Weyl spinors and the complex bosonic fields, $`F_i`$, and real bosonic field, $`D`$, are auxiliary fields. Finally, $`\theta `$ and $`\overline{\theta }`$ are anticommuting superspace coordinates. In the component formulation of the theory one eliminates $`F_i`$ and $`D`$ via their equations of motion and performs a Grassmann integration over $`\theta `$ and $`\overline{\theta }`$. Now define
$`D_\alpha `$ $`=`$ $`{\displaystyle \frac{\partial }{\partial \theta ^\alpha }}+i\sigma _{\alpha \dot{\alpha }}^\mu \overline{\theta }^{\dot{\alpha }}\partial _\mu ,`$ (3)
$`\overline{D}_{\dot{\alpha }}`$ $`=`$ $`{\displaystyle \frac{\partial }{\partial \overline{\theta }^{\dot{\alpha }}}}-i\theta ^\alpha \sigma _{\alpha \dot{\alpha }}^\mu \partial _\mu ,`$ (4)
$`W_\alpha `$ $`=`$ $`-{\displaystyle \frac{1}{4}}\overline{D}^2D_\alpha V,`$ (5)
where $`D_\alpha `$ and $`\overline{D}_{\dot{\alpha }}`$ are the supersymmetric covariant derivatives and $`W_\alpha `$ is the field strength chiral superfield. The superspace Lagrangian density for the theory is then given by
$$\stackrel{~}{\mathcal{L}}=\frac{1}{4}\left(W^\alpha W_\alpha |_{\theta ^2}+\overline{W}_{\dot{\alpha }}\overline{W}^{\dot{\alpha }}|_{\overline{\theta }^2}\right)+\left(\overline{\mathrm{\Phi }}_ie^{gq_iV}\mathrm{\Phi }_i\right)|_{\theta ^2\overline{\theta }^2}+W(\mathrm{\Phi }_i)|_{\theta ^2}+\overline{W}(\overline{\mathrm{\Phi }}_i)|_{\overline{\theta }^2}+\kappa D.$$
(6)
In this expression $`W`$ is the superpotential, a holomorphic function of the chiral superfields (i.e. a function of $`\mathrm{\Phi }_i`$ only and not $`\overline{\mathrm{\Phi }}_i`$) and $`W|_{\theta ^2}`$ indicates the $`\theta ^2`$ component of $`W`$. The term linear in $`D`$ is known as the Fayet-Iliopoulos term . Such a term can only be present in a $`U(1)`$ theory, since it is not invariant under more general gauge transformations.
For a renormalizable theory, the most general superpotential is
$$W(\mathrm{\Phi }_i)=a_i\mathrm{\Phi }_i+\frac{1}{2}b_{ij}\mathrm{\Phi }_i\mathrm{\Phi }_j+\frac{1}{3}c_{ijk}\mathrm{\Phi }_i\mathrm{\Phi }_j\mathrm{\Phi }_k,$$
(7)
with the constants $`b_{ij}`$, $`c_{ijk}`$ symmetric in their indices. This can be written in component form as
$$W(\varphi _i,\psi _j,F_k)|_{\theta ^2}=a_iF_i+b_{ij}\left(F_i\varphi _j-\frac{1}{2}\psi _i\psi _j\right)+c_{ijk}\left(F_i\varphi _j\varphi _k-\psi _i\psi _j\varphi _k\right)$$
(8)
and the Lagrangian (6) can then be expanded in Wess-Zumino gauge in terms of its component fields using (2,1). The equations of motion for the auxiliary fields are
$$F_i^{}+a_i+b_{ij}\varphi _j+c_{ijk}\varphi _j\varphi _k=0,$$
(9)
$$D+\kappa +\frac{g}{2}q_i\overline{\varphi }_i\varphi _i=0.$$
(10)
Using these to eliminate $`F_i`$ and $`D`$ we obtain the Lagrangian density in component form as
$$\mathcal{L}=\mathcal{L}_B+\mathcal{L}_F+\mathcal{L}_Y-U,$$
(11)
with
$`\mathcal{L}_B`$ $`=`$ $`(D_\mu ^i\overline{\varphi }_i)(D^{i\mu }\varphi _i)-{\displaystyle \frac{1}{4}}F^{\mu \nu }F_{\mu \nu },`$ (12)
$`\mathcal{L}_F`$ $`=`$ $`-i\psi _i\sigma ^\mu D_\mu ^i\overline{\psi }_i-i\lambda \sigma ^\mu \partial _\mu \overline{\lambda },`$ (13)
$`\mathcal{L}_Y`$ $`=`$ $`{\displaystyle \frac{ig}{\sqrt{2}}}q_i\overline{\varphi }_i\psi _i\lambda -\left({\displaystyle \frac{1}{2}}b_{ij}+c_{ijk}\varphi _k\right)\psi _i\psi _j+(\text{c.c.}),`$ (14)
$`U`$ $`=`$ $`|F_i|^2+{\displaystyle \frac{1}{2}}D^2`$ (15)
$`=`$ $`|a_i+b_{ij}\varphi _j+c_{ijk}\varphi _j\varphi _k|^2+{\displaystyle \frac{1}{2}}\left(\kappa +{\displaystyle \frac{g}{2}}q_i\overline{\varphi }_i\varphi _i\right)^2,`$ (16)
where $`D_\mu ^i=\partial _\mu +\frac{1}{2}igq_iA_\mu `$ and $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu `$.
Now consider spontaneous symmetry breaking in these theories. Each term in the superpotential must be gauge invariant. This implies that $`a_i0`$ only if $`q_i=0`$, $`b_{ij}0`$ only if $`q_i+q_j=0`$, and $`c_{ijk}0`$ only if $`q_i+q_j+q_k=0`$. The situation is a little more complicated than in non-SUSY theories, since anomaly cancellation in SUSY theories implies the existence of more than one chiral superfield (and hence Higgs field). In order to break the gauge symmetry, one may either induce SSB through an appropriate choice of superpotential, or, in the case of the $`U(1)`$ gauge group, one may rely on a non-zero Fayet-Iliopoulos term.
We shall refer to the theory with superpotential SSB (and, for simplicity, zero Fayet-Iliopoulos term) as theory F and the theory with SSB due to a non-zero Fayet-Iliopoulos term as theory D. Since the implementation of SSB in theory F can be repeated for more general gauge groups, we expect that this theory will be more representative of general defect-forming theories than theory D for which the mechanism of SSB is specific to the $`U(1)`$ gauge group.
### A Theory F: Vanishing Fayet-Iliopoulos Term
The simplest model with vanishing Fayet-Iliopoulos term ($`\kappa =0`$) and spontaneously broken gauge symmetry contains three chiral superfields. It is not possible to construct such a model with fewer superfields which does not either leave the gauge symmetry unbroken or possess a gauge anomaly. The fields are two charged fields $`\mathrm{\Phi }_\pm `$, with respective $`U(1)`$ charges $`q_\pm =\pm 1`$, and a neutral field, $`\mathrm{\Phi }_0`$. A suitable superpotential is then
$$W(\mathrm{\Phi }_i)=\mu \mathrm{\Phi }_0(\mathrm{\Phi }_+\mathrm{\Phi }_{}-\eta ^2),$$
(17)
with $`\eta `$ and $`\mu `$ real. The potential $`U`$ is minimised when $`F_i=0`$ and $`D=0`$. This occurs when $`\varphi _0=0`$, $`\varphi _+\varphi _{}=\eta ^2`$, and $`|\varphi _+|^2=|\varphi _{}|^2`$. Thus we may write $`\varphi _\pm =\eta e^{\pm i\alpha }`$, where $`\alpha `$ is some function. We shall now seek the Nielsen-Olesen solution corresponding to an infinite straight cosmic string. We proceed in the same manner as for non-supersymmetric theories. Consider only the bosonic fields (i.e. set the fermions to zero) and in cylindrical polar coordinates $`(r,\phi ,z)`$ write
$`\varphi _0`$ $`=`$ $`0,`$ (18)
$`\varphi _+`$ $`=`$ $`\varphi _{}^{}=\eta e^{in\phi }f(r),`$ (19)
$`A_\mu `$ $`=`$ $`{\displaystyle \frac{2}{g}}n{\displaystyle \frac{a(r)}{r}}\delta _\mu ^\phi ,`$ (20)
$`F_\pm `$ $`=`$ $`D=0,`$ (21)
$`F_0`$ $`=`$ $`\mu \eta ^2(1-f(r)^2),`$ (22)
so that the $`z`$-axis is the axis of symmetry of the defect. The profile functions, $`f(r)`$ and $`a(r)`$, obey
$$f^{\prime \prime }+\frac{f^{}}{r}-n^2\frac{(1-a)^2}{r^2}f=\mu ^2\eta ^2(f^2-1)f,$$
(23)
$$a^{\prime \prime }-\frac{a^{}}{r}=-g^2\eta ^2(1-a)f^2,$$
(24)
with boundary conditions
$`f(0)=a(0)=0,`$ (25)
$`\underset{r\mathrm{}}{lim}f(r)=\underset{r\mathrm{}}{lim}a(r)=1.`$ (26)
Note here, in passing, an interesting aspect of topological defects in SUSY theories. The ground state of the theory is supersymmetric, but spontaneously breaks the gauge symmetry while in the core of the defect the gauge symmetry is restored but, since $`|F_i|^20`$ in the core, SUSY is spontaneously broken there.
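The profile functions can also be obtained numerically. The sketch below is a minimal illustration (not code from this work): it solves Eqs. (23)-(24) as a boundary value problem in units where $`\mu \eta =1`$, with the single remaining parameter $`\beta =g^2/\mu ^2`$ chosen arbitrarily; any different factor or sign conventions in the gauge-field source term can be accommodated by adjusting the same lines.

```python
import numpy as np
from scipy.integrate import solve_bvp

n_w, beta = 1, 1.0     # winding number and (g/mu)^2, illustrative values

def rhs(r, y):
    # y = (f, f', a, a'); Eqs. (23)-(24) rewritten as a first-order system
    f, fp, a, ap = y
    return np.vstack([
        fp,
        -fp/r + n_w**2*(1.0 - a)**2*f/r**2 + (f**2 - 1.0)*f,
        ap,
        ap/r - beta*(1.0 - a)*f**2,
    ])

def bc(ya, yb):
    # f(0)=a(0)=0 imposed at a small inner cutoff, f=a=1 at the outer edge
    return np.array([ya[0], ya[2], yb[0] - 1.0, yb[2] - 1.0])

r = np.linspace(1e-3, 15.0, 400)
guess = np.vstack([np.tanh(r), 1.0/np.cosh(r)**2,
                   np.tanh(r)**2, 2.0*np.tanh(r)/np.cosh(r)**2])
sol = solve_bvp(rhs, bc, r, guess, max_nodes=20000)
f_profile, a_profile = sol.sol(r)[0], sol.sol(r)[2]
```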
We have constructed a cosmic string solution in the bosonic sector of the theory. Now consider the fermionic sector. With the choice of superpotential (17) the component form of the Yukawa couplings becomes
$$\mathcal{L}_Y=i\frac{g}{\sqrt{2}}\left(\overline{\varphi }_+\psi _+-\overline{\varphi }_{}\psi _{}\right)\lambda -\mu \left(\varphi _0\psi _+\psi _{}+\varphi _+\psi _0\psi _{}+\varphi _{}\psi _0\psi _+\right)+(\text{c.c.})$$
(27)
As with a non-supersymmetric theory, non-trivial zero energy fermion solutions can exist around the string. Consider the fermionic ansatz
$$\psi _i=\left(\begin{array}{c}1\\ 0\end{array}\right)\psi _i(r,\phi ),$$
(28)
$$\lambda =\left(\begin{array}{c}1\\ 0\end{array}\right)\lambda (r,\phi ).$$
(29)
If we can find solutions for the $`\psi _i(r,\phi )`$ and $`\lambda (r,\phi )`$ then, following Witten, we know that solutions of the form
$$\mathrm{\Psi }_i=\psi _i(r,\phi )e^{\chi (z+t)},\mathrm{\Lambda }=\lambda (r,\phi )e^{\chi (z+t)},$$
(30)
with $`\chi `$ some function, represent left moving superconducting currents flowing along the string at the speed of light. Thus, the problem of finding the zero modes is reduced to solving for the $`\psi _i(r,\phi )`$ and $`\lambda (r,\phi )`$.
The fermion equations of motion derived from (11) are four coupled equations given by
$$e^{i\phi }\left(\partial _r-\frac{i}{r}\partial _\phi \right)\overline{\lambda }-\frac{g}{\sqrt{2}}\eta f\left(e^{in\phi }\psi _{}-e^{-in\phi }\psi _+\right)=0,$$
(31)
$$e^{i\phi }\left(\partial _r-\frac{i}{r}\partial _\phi \right)\overline{\psi }_0+i\mu \eta f\left(e^{in\phi }\psi _{}+e^{-in\phi }\psi _+\right)=0,$$
(32)
$$e^{i\phi }\left(\partial _r-\frac{i}{r}\partial _\phi \pm n\frac{a}{r}\right)\overline{\psi }_\pm +\eta fe^{\mp in\phi }\left(i\mu \psi _0\pm \frac{g}{\sqrt{2}}\lambda \right)=0.$$
(33)
The corresponding equations for the lower fermion components can be obtained from those for the upper components by complex conjugation and by putting $`n\to -n`$. The superconducting current corresponding to this solution (like (30), but with $`\chi (t-z)`$) is right moving.
We may enumerate the zero modes using an index theorem , as discussed further in . This gives $`2n`$ independent zero modes, where $`n`$ is the winding number of the string. However, in supersymmetric theories we can calculate them explicitly using SUSY transformations. This relates the fermionic components of the superfields to the bosonic ones and we may use this to obtain the fermion solutions in terms of the background string fields. A SUSY transformation is implemented by the operator $`G=e^{\xi Q+\overline{\xi }\overline{Q}}`$, where $`\xi _\alpha `$ are Grassmann parameters and $`Q_\alpha `$ are the generators of the SUSY algebra which we may represent by
$`Q_\alpha `$ $`=`$ $`{\displaystyle \frac{\partial }{\partial \theta ^\alpha }}-i\sigma _{\alpha \dot{\alpha }}^\mu \overline{\theta }^{\dot{\alpha }}\partial _\mu ,`$ (34)
$`\overline{Q}^{\dot{\alpha }}`$ $`=`$ $`{\displaystyle \frac{\partial }{\partial \overline{\theta }_{\dot{\alpha }}}}-i\overline{\sigma }^{\mu \dot{\alpha }\alpha }\theta _\alpha \partial _\mu .`$ (35)
In general such a transformation will induce a change of gauge. It is then necessary to perform an additional gauge transformation to return to the Wess-Zumino gauge in order to easily interpret the solutions. For an abelian theory, supersymmetric gauge transformations are of the form
$`\mathrm{\Phi }_i`$ $`\to `$ $`e^{-i\mathrm{\Lambda }q_i}\mathrm{\Phi }_i,`$ (36)
$`\overline{\mathrm{\Phi }}_i`$ $`\to `$ $`e^{i\overline{\mathrm{\Lambda }}q_i}\overline{\mathrm{\Phi }}_i,`$ (37)
$`V`$ $`\to `$ $`V+{\displaystyle \frac{i}{g}}\left(\mathrm{\Lambda }-\overline{\mathrm{\Lambda }}\right),`$ (38)
where $`\mathrm{\Lambda }`$ is some chiral superfield.
Consider performing an infinitesimal SUSY transformation on (22), using $`_\mu A^\mu =0`$. The appropriate $`\mathrm{\Lambda }`$ to return to Wess-Zumino gauge is
$$\mathrm{\Lambda }=ig\overline{\xi }\overline{\sigma }^\mu \theta A_\mu (y)$$
(39)
The component fields then transform in the following way
$`\varphi _\pm (y)`$ $`\to `$ $`\varphi _\pm (y)+2i\theta \sigma ^\mu \overline{\xi }D_\mu \varphi _\pm (y),`$ (40)
$`\theta ^2F_0(y)`$ $`\to `$ $`\theta ^2F_0(y)+2\theta \xi F_0(y),`$ (41)
$`\theta \sigma ^\mu \overline{\theta }A_\mu (x)`$ $`\to `$ $`\theta \sigma ^\mu \overline{\theta }A_\mu (x)`$ (43)
$`+i\theta ^2\overline{\theta }{\displaystyle \frac{1}{2}}\overline{\sigma }^\mu \sigma ^\nu \overline{\xi }F_{\mu \nu }(x)-i\overline{\theta }^2\theta {\displaystyle \frac{1}{2}}\sigma ^\mu \overline{\sigma }^\nu \xi F_{\mu \nu }(x).`$
Writing everything in terms of the background string fields, only the fermion fields are affected to first order by the transformation. These are given by
$`\lambda _\alpha `$ $`\to `$ $`{\displaystyle \frac{2na^{}}{gr}}i(\sigma ^z)_\alpha ^\beta \xi _\beta ,`$ (44)
$`(\psi _\pm )_\alpha `$ $`\to `$ $`\sqrt{2}\left(if^{}\sigma ^r\mp {\displaystyle \frac{n}{r}}(1-a)f\sigma ^\phi \right)_{\alpha \dot{\alpha }}\overline{\xi }^{\dot{\alpha }}\eta e^{\pm in\phi },`$ (45)
$`(\psi _0)_\alpha `$ $`\to `$ $`\sqrt{2}\mu \eta ^2(1-f^2)\xi _\alpha ,`$ (46)
where we have defined
$`\sigma ^\phi `$ $`=`$ $`\left(\begin{array}{cc}0& -ie^{-i\phi }\\ ie^{i\phi }& 0\end{array}\right),`$ (49)
$`\sigma ^r`$ $`=`$ $`\left(\begin{array}{cc}0& e^{-i\phi }\\ e^{i\phi }& 0\end{array}\right).`$ (52)
Let us choose $`\xi _\alpha `$ so that only one component is nonzero. Taking $`\xi _2=0`$ and $`\xi _1=i\delta /(\sqrt{2}\eta )`$, where $`\delta `$ is a complex constant, the fermions become
$`\lambda _1`$ $`=`$ $`\delta {\displaystyle \frac{n\sqrt{2}}{g\eta }}{\displaystyle \frac{a^{}}{r}},`$ (53)
$`(\psi _+)_1`$ $`=`$ $`\delta ^{}\left[f^{}+{\displaystyle \frac{n}{r}}(1-a)f\right]e^{i(n-1)\phi },`$ (54)
$`(\psi _0)_1`$ $`=`$ $`i\delta \mu \eta (1-f^2),`$ (55)
$`(\psi _{})_1`$ $`=`$ $`\delta ^{}\left[f^{}-{\displaystyle \frac{n}{r}}(1-a)f\right]e^{-i(n+1)\phi }.`$ (56)
It is these fermion solutions which are responsible for the string superconductivity. Similar expressions can be found when $`\xi _1=0`$. It is clear from these results that the string is not invariant under supersymmetry, and therefore breaks it. However, since $`f^{}(r),a^{}(r),1-a(r)`$ and $`1-f^2(r)`$ are all approximately zero outside of the string core, the SUSY breaking and the zero modes are confined to the string. We note that this method gives us two zero mode solutions. Thus, for a winding number one string, we obtain the full spectrum, whereas for strings of higher winding number, only a partial spectrum is obtained.
The results presented here can be extended to non-abelian gauge theories. This is done in . The results are very similar to those presented here, so we leave the interested reader to consult the original paper.
### B Theory D: Nonvanishing Fayet-Iliopoulos Term
Now consider theory D in which there is just one primary charged chiral superfield involved in the symmetry breaking and a non-zero Fayet-Iliopoulos term. In order to avoid gauge anomalies, the model must contain other charged superfields. These are coupled to the primary superfield through terms in the superpotential such that the expectation values of the secondary chiral superfields are dynamically zero. The secondary superfields have no effect on SSB and are invariant under SUSY transformations. Therefore, for the rest of this section we shall concentrate on the primary chiral superfield which mediates the gauge symmetry breaking.
Choosing $`\kappa =-\frac{1}{2}g\eta ^2`$, the theory is spontaneously broken and there exists a string solution obtained from the ansatz
$`\varphi `$ $`=`$ $`\eta e^{in\phi }f(r),`$ (57)
$`A_\mu `$ $`=`$ $`{\displaystyle \frac{2}{g}}n{\displaystyle \frac{a(r)}{r}}\delta _\mu ^\phi ,`$ (58)
$`D`$ $`=`$ $`{\displaystyle \frac{1}{2}}g\eta ^2(1-f^2),`$ (59)
$`F`$ $`=`$ $`0.`$ (60)
The profile functions $`f(r)`$ and $`a(r)`$ then obey the first order equations
$$f^{}=n\frac{(1-a)}{r}f$$
(61)
$$n\frac{a^{}}{r}=\frac{1}{4}g^2\eta ^2(1-f^2)$$
(62)
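Because Eqs. (61)-(62) are first order, the profiles of theory D can be found by a simple shooting method. The sketch below is illustrative only (units in which $`g\eta /2=1`$ and an arbitrarily chosen bisection bracket): it adjusts the small-$`r`$ slope of $`f`$ until the profile relaxes to the vacuum values.

```python
import numpy as np
from scipy.integrate import solve_ivp

n_w = 1   # winding number

def rhs(r, y):
    f, a = y
    # Eqs. (61)-(62) with g*eta/2 = 1: f' = n(1-a)f/r, a' = r(1-f^2)/n
    return [n_w*(1.0 - a)*f/r, r*(1.0 - f**2)/n_w]

def f_far(c, R=12.0, r0=1e-4):
    # integrate outward from f ~ c r^n, a ~ 0; stop early if f overshoots
    blow = lambda r, y: y[0] - 1.5
    blow.terminal = True
    sol = solve_ivp(rhs, (r0, R), [c*r0**n_w, 0.0], events=blow,
                    rtol=1e-8, atol=1e-10)
    return sol.y[0, -1]

lo, hi = 0.1, 5.0     # bracket for the shooting parameter (illustrative)
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f_far(mid) < 1.0 else (lo, mid)
```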
Now consider the fermionic sector of the theory and perform a SUSY transformation, again using $`\mathrm{\Lambda }`$ as the gauge function to return to Wess-Zumino gauge. To first order this gives
$`\lambda _\alpha `$ $`\to `$ $`{\displaystyle \frac{1}{2}}g\eta ^2(1-f^2)i(I+\sigma ^z)_\alpha ^\beta \xi _\beta `$ (63)
$`\psi _\alpha `$ $`\to `$ $`\sqrt{2}{\displaystyle \frac{n}{r}}(1-a)f(i\sigma ^r-\sigma ^\phi )_{\alpha \dot{\alpha }}\overline{\xi }^{\dot{\alpha }}\eta e^{in\phi }`$ (64)
If $`\xi _1=0`$ both these expressions are zero. The same is true of all higher order terms, and so the string is invariant under the corresponding transformation. For other $`\xi `$, taking $`\xi _1=i\delta /\eta `$ gives
$`\lambda _1`$ $`=`$ $`\delta g\eta (1-f^2)`$ (65)
$`\psi _1`$ $`=`$ $`2\sqrt{2}\delta ^{}{\displaystyle \frac{n}{r}}(1-a)fe^{i(n-1)\phi }`$ (66)
Thus supersymmetry is only half broken inside the string. This is in contrast to theory F which fully breaks supersymmetry in the string core. The theories also differ in that theory D’s zero modes will only travel in one direction, while the zero modes of theory F (which has twice as many) travel in both directions. In both theories the zero modes and SUSY breaking are confined to the string core.
Thus, a necessary feature of cosmic strings in SUSY theories is that supersymmetry is broken in the string core, and the resulting strings have fermion zero modes. As a consequence, cosmic strings arising in SUSY theories are automatically current-carrying. In general, cosmic strings arise as infinite strings or as closed loops. The usual non-current-carrying string loops decay via gravitational radiation. However, current-carrying string loops do not necessarily suffer the same fate. The loops could be stabilised by the angular momentum of the current carriers, forming a stable vorton configuration. Vortons are classically stable objects , though their quantum mechanical stability is an open question. The presence of vortons puts severe constraints on the underlying theory, since the density of vortons could overclose the universe if vortons are stable enough to survive to the present time. If they only live for a few minutes, their density could still affect nucleosynthesis. This is discussed in detail in . However, in some theories the vorton problem solves itself.
## III Soft Susy Breaking
Supersymmetry is not observed in nature. Hence, it must be broken. Supersymmetry breaking is achieved by adding soft SUSY breaking terms which do not induce quadratic divergences.
In a general model, one may obtain soft SUSY breaking terms by the following prescription.
1. Add arbitrary mass terms for all scalar particles to the scalar potential.
2. Add all trilinear scalar terms in the superpotential, plus their hermitian conjugates, to the scalar potential with arbitrary coupling.
3. Add mass terms for the gauginos to the Lagrangian density.
Since the techniques we have used are strictly valid only when SUSY is exact, it is necessary to investigate the effect of these soft terms on the fermionic zero modes we have identified.
As we have already commented, the existence of the zero modes can be seen as a consequence of an index theorem . The index is insensitive to the size and exact form of the Yukawa couplings, as long as they are regular for small $`r`$, and tend to a constant at large $`r`$. In fact, the existence of zero modes relies only on the existence of the appropriate Yukawa couplings and that they have the correct $`\phi `$-dependence. Thus there can only be a change in the number of zero modes if the soft breaking terms induce specific new Yukawa couplings in the theory and it is this that we must check for. Further, it was conjectured in that the destruction of a zero mode occurs only when the relevant fermion mixes with another massless fermion.
We have examined each of our theories with respect to this criterion and list the results below.
### A Theory-F
As discussed previously, the superpotential for this theory is,
$$W=\mu \mathrm{\Phi }_0(\mathrm{\Phi }_+\mathrm{\Phi }_{}\eta ^2).$$
(67)
The trilinear and mass terms that arise from soft SUSY breaking are
$$m_0^2|\varphi _0|^2+m_{}^2|\varphi _{}|^2+m_+^2|\varphi _+|^2+\mu M\varphi _0\varphi _+\varphi _{}$$
(68)
The derivative of the scalar potential with respect to $`\varphi _0^{}`$ becomes
$$\varphi _0(\mu ^2|\varphi _+|^2+\mu ^2|\varphi _{}|^2+m_0^2)+\mu M(\varphi _+\varphi _{})^{}$$
(69)
This will be zero at a minimum, and so $`\varphi _0\ne 0`$ only if $`M\ne 0`$.
New Higgs mass terms will alter the values of $`\varphi _+`$ and $`\varphi _{}`$ slightly, but will not produce any new Yukawa terms. Thus these soft SUSY-breaking terms have no effect on the existence of the zero modes.
However, the presence of the trilinear term gives $`\varphi _0`$ a non-zero expectation value, which gives a Yukawa term coupling the $`\psi _+`$ and $`\psi _{}`$ fields. This destroys all the zero modes in the theory since the left and right moving zero modes mix.
For completeness note that a gaugino mass term also mixes the left and right zero modes, aiding in their destruction.
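The minimisation argument above is easy to verify numerically. The following rough sketch (real field values and arbitrary illustrative parameters, not tied to any realistic spectrum) minimises the scalar potential of theory F with the soft terms (68) added, and shows that $`\varphi _0`$ is driven away from zero only when the trilinear coupling $`M`$ is switched on:

```python
import numpy as np
from scipy.optimize import minimize

mu, eta, g = 1.0, 1.0, 0.5
m0 = mp = mm = 0.1            # soft scalar masses, illustrative values

def potential(x, M):
    p0, pp, pm = x            # real field values suffice to see the effect
    F_terms = mu**2*((pp*pm - eta**2)**2 + (p0*pm)**2 + (p0*pp)**2)
    D_term = 0.5*(0.5*g*(pp**2 - pm**2))**2
    soft = (m0**2*p0**2 + mp**2*pp**2 + mm**2*pm**2
            + 2.0*mu*M*p0*pp*pm)      # factor 2 from the +h.c. piece for real fields
    return F_terms + D_term + soft

for M in (0.0, 0.2):
    res = minimize(potential, x0=[0.1, 1.0, 1.0], args=(M,))
    print(M, res.x[0])        # phi_0 at the minimum: ~0 for M=0, nonzero otherwise
```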
### B Theory-D
The $`U(1)`$ theory with gauge symmetry broken via a Fayet-Iliopoulos term and no superpotential is simpler to analyse. New Higgs mass terms have no effect, as in the above case, and there are no trilinear terms. Further, although the gaugino mass terms affect the form of the zero mode solutions, they do not affect their existence, and so, in theory-$`D`$, the zero modes remain even after SUSY breaking. For this class of theories, the strings remain current-carrying and, hence, have a potential vorton problem. This could lead to such theories being in conflict with cosmology.
## IV Current-Carrying Strings and Vortons
For the theories considered in the previous sections, the strings become current-carrying due to fermion zero modes as a consequence of supersymmetry. These zero modes are present in the string core at formation. If we call the temperature of the phase transition forming the strings $`T_\mathrm{x}`$, we can estimate the vorton density. The more general case to consider would be when the string becomes current-carrying at a subsequent phase transition, but this is beyond the scope of this paper and we refer the reader to .
The string loop is characterised by two currents, the topologically conserved phase current and the dynamically conserved particle number current. Thus the string carries two conserved quantum numbers: $`N`$ is the topologically conserved integral of the phase current and $`Z`$ is the particle number. A non-conducting Kibble-type string loop must ultimately decay by radiative and frictional drag processes until it disappears completely. However, a conducting string loop may be saved from disappearance by reaching a state in which the energy attains a minimum for given non-zero values of $`N`$ and $`Z`$.
It should be emphasised that the existence of such vorton states does not require that the carrier field be gauge coupled. If there is indeed a non-zero charge coupling then the loop will have a corresponding total electric charge, $`Q`$, such that the particle number is $`Z=Q/e`$. However, the important point is that, even in the uncoupled case where $`Q`$ vanishes the particle number $`Z`$ is perfectly well defined.
The physical properties of a vorton state are determined by the quantum numbers $`N`$ and $`Z`$. However, these are not arbitrary. For example, to avoid decaying completely like a non-conducting loop, a conducting loop must have a non-zero value for at least one of the numbers $`N`$ and $`Z`$. In fact, one would expect that both these numbers should be reasonably large compared with unity to diminish the likelihood of quantum decay by barrier tunneling. There is a further restriction on the values of their ratio $`Z/N`$ in order to avoid spontaneous particle emission as a result of current saturation. In this contribution we are going to consider the special case where $`|Z|\simeq N`$. These are the so-called chiral vortons.
For chiral vortons we have,
$$E_\mathrm{v}\simeq \ell _\mathrm{v}m_\mathrm{x}^2.$$
(70)
In order to evaluate this quantity all that remains is to work out $`\ell _\mathrm{v}`$. We assume that vortons are approximately circular, with radius given by $`R_\mathrm{v}=\ell _\mathrm{v}/2\pi `$ and angular momentum quantum number $`J`$ given by $`J=NZ`$. Thus, eliminating $`J`$, one obtains
$$\ell _\mathrm{v}\simeq (2\pi )^{1/2}|NZ|^{1/2}m_\mathrm{x}^{-1}.$$
(71)
Thus we obtain an estimate of the vorton mass energy as
$$E_\mathrm{v}\simeq (2\pi )^{1/2}|NZ|^{1/2}m_\mathrm{x}\simeq Nm_\mathrm{x},$$
(72)
where we are assuming the classical description of the string dynamics. This is valid only if the length $`\ell _\mathrm{v}`$ is large compared with the relevant quantum wavelengths. This will only be satisfied if the product of the quantum numbers $`N`$ and $`Z`$ is sufficiently large. A loop that does not satisfy this requirement will never stabilise as a vorton.
We can now calculate the vorton abundance. If the string becomes current-carrying at a scale $`T_\mathrm{x}`$ through fermion zero modes, then one expects that thermal fluctuations will give rise to a non-zero value for the topological current, $`|j|^2`$. Hence, a random walk process will result in a spectrum of finite values for the corresponding string loop quantum numbers $`N`$ and $`Z`$. Therefore, loops for which these numbers satisfy the minimum length condition will become vortons. Such loops will ultimately be able to survive as vortons if the induced current, and consequently $`N`$ and $`Z`$, are sufficiently large, such that
$$|NZ|^{1/2}\gg 1.$$
(73)
Any loop that fails to satisfy this condition is doomed to lose all its energy and disappear.
The total number density of small loops with length and radial extension of the order of $`L_{\mathrm{min}}`$, the minimum length for vortons, will be not much less than the number density of all closed loops and hence
$$n\simeq \nu L_{\mathrm{min}}^{-3}$$
(74)
where $`\nu `$ is a time-dependent parameter. The typical length scale of string loops at the transition temperature, $`L_{\mathrm{min}}(T_\mathrm{x})`$, is considerably greater than the relevant thermal correlation length, $`T_{\mathrm{x}}^{-1}`$, that characterises the local current fluctuations. It is because of this that string loop evolution is modified after current carrier condensation. Indeed, since $`L_{\mathrm{min}}(T_\mathrm{x})\gg T_{\mathrm{x}}^{-1}`$ and loops present at the time of the condensation satisfy $`L\gtrsim L_{\mathrm{min}}(T_\mathrm{x})`$, the random walk effect can build up reasonably large, and typically comparable, initial values of the quantum numbers $`|Z|`$ and $`N`$. The expected root mean square values produced in this way from carrier field fluctuations of wavelength $`\lambda `$ can be estimated as
$$|Z|\simeq N\simeq \sqrt{\frac{L}{\lambda }},$$
(75)
where $`\lambda \simeq T_\mathrm{x}^{-1}`$. Thus, one obtains
$$|Z|\simeq N\simeq \sqrt{L_{\mathrm{min}}(T_\mathrm{x})T_\mathrm{x}}\gg 1,$$
(76)
For current condensation during the friction dominated regime this requirement is always satisfied.
Therefore, the vorton mass density is
$$\rho _\mathrm{v}\simeq Nm_\mathrm{x}n_\mathrm{v}.$$
(77)
In the friction dominated regime the string is interacting with the surrounding plasma. We can estimate $`L_{\mathrm{min}}`$ in this regime as the typical length scale below which the microstructure is smoothed . This then gives the quantum number, $`N`$
$$N\simeq \left(\frac{m_\mathrm{P}}{\beta T_\mathrm{x}}\right)^{1/4},$$
(78)
where $`\beta `$ is a drag coefficient for the friction dominated era that is of order unity. We then obtain the number density of mature vortons
$$n_\mathrm{v}\simeq \nu _{*}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{3/2}T^3,$$
(79)
This gives the resulting mass density of the relic vorton population to be
$$\rho _\mathrm{v}\simeq \nu _{*}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{5/4}T_\mathrm{x}T^3.$$
(80)
### A The Nucleosynthesis Constraint.
One of the most robust predictions of the standard cosmological model is the abundances of the light elements that were fabricated during primordial nucleosynthesis at a temperature $`T_\mathrm{N}\simeq 10^{-4}\mathrm{GeV}`$.
In order to preserve this well established picture, it is necessary that the energy density in vortons at that time, $`\rho _\mathrm{v}(T_\mathrm{N})`$, should have been small compared with the background energy density in radiation, $`\rho _\mathrm{N}\simeq g^{*}T_\mathrm{N}^4`$, where $`g^{*}`$ is the effective number of degrees of freedom. Assuming that carrier condensation occurs during the friction damping regime and that $`g^{*}`$ has dropped to a value of order unity by the time of nucleosynthesis, this gives
$$\nu _{*}g_s^{*-1}\beta ^{5/4}m_\mathrm{P}^{-5/4}T_\mathrm{x}^{9/4}\lesssim T_\mathrm{N}.$$
(81)
The case for which strings become current-carrying at formation has been studied previously and yields rather strong restrictions for very long lived vortons . If it is only assumed that the vortons survive for a few minutes, which is all that is needed to reach the nucleosynthesis epoch we obtain a much weaker restriction.
$$\left(\frac{\nu _{*}}{g_s^{*}}\right)^{4/9}T_\mathrm{x}\lesssim \left(\frac{m_\mathrm{P}}{\beta }\right)^{5/9}T_\mathrm{N}^{4/9}.$$
(82)
Taking $`g_s^{*}\simeq 10^2`$ yields the inequality
$$T_\mathrm{x}\lesssim (\nu _{*})^{-4/9}\beta ^{-5/9}\times 10^9\mathrm{GeV}.$$
(83)
This is the condition that must be satisfied by the formation temperature of cosmic strings that become superconducting immediately, subject to the rather conservative assumption that the resulting vortons last for at least a few minutes. If we assume that the net efficiency factor $`(\nu _{*})^{-4/9}`$ and drag factor $`\beta ^{-5/9}`$ are of order unity, this condition rules out the formation of such strings during any conceivable GUT transition, but is consistent with their formation at temperatures close to that of the electroweak symmetry breaking transition.
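Putting numbers into the bound is straightforward; the following few lines (an illustrative evaluation, with $`\nu _{*}`$ and $`\beta `$ set to unity and $`m_\mathrm{P}=1.2\times 10^{19}`$ GeV) reproduce the order of magnitude quoted above:

```python
m_P  = 1.2e19      # Planck mass in GeV
T_N  = 1e-4        # nucleosynthesis temperature in GeV
g_s  = 1e2         # effective entropy degrees of freedom
nu_star, beta = 1.0, 1.0   # efficiency and drag factors, taken of order unity

# Eq. (82): T_x < (g_s/nu)^(4/9) * beta^(-5/9) * m_P^(5/9) * T_N^(4/9)
T_x_max = (g_s/nu_star)**(4/9) * beta**(-5/9) * m_P**(5/9) * T_N**(4/9)
print(f"T_x must lie below roughly {T_x_max:.1e} GeV")
```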
### B The Dark Matter Constraint.
Let us now consider the rather stronger constraints that can be obtained if at least a substantial fraction of the vortons are sufficiently stable to last until the present epoch. It is generally accepted that the virial equilibrium of galaxies, and particularly of clusters of galaxies, requires the existence of a cosmological distribution of “dark” matter. This matter must have a density considerably in excess of the baryonic matter density, $`\rho _\mathrm{b}\simeq 10^{-31}`$ gm/cm<sup>3</sup>. On the other hand, on the same basis, it is also generally accepted that to be consistent with the formation of structures such as galaxies it is necessary that the total amount of this “dark” matter should not greatly exceed the critical closure density, namely
$$\rho _\mathrm{c}\simeq 10^{-29}\mathrm{gm}\mathrm{cm}^{-3}.$$
(84)
As a function of temperature, the critical density scales like the entropy density so that it is given by
$$\rho _\mathrm{c}(T)\simeq g^{*}m_\mathrm{c}T^3,$$
(85)
where $`m_\mathrm{c}`$ is a constant mass factor. For comparison with the density of vortons that were formed at a scale $`T_\mathrm{x}`$ we can estimate this to be
$$g_s^{*}m_\mathrm{c}\simeq 10^{-26}m_\mathrm{P}\simeq 10^2\text{eV}.$$
(86)
The general dark matter constraint is
$$\mathrm{\Omega }_\mathrm{v}\equiv \frac{\rho _\mathrm{v}}{\rho _\mathrm{c}}\lesssim 1.$$
(87)
In the case of vortons formed as a result of condensation during the friction damping regime the relevant estimate for the vortonic dark matter fraction is obtainable from (80) as
$$\mathrm{\Omega }_\mathrm{v}\simeq \beta ^{5/4}\left(\frac{\nu _{*}m_\mathrm{P}}{g_s^{*}m_\mathrm{c}}\right)\left(\frac{T_\mathrm{x}}{m_\mathrm{P}}\right)^{9/4}.$$
(88)
The formula (88) is applicable to the case considered in earlier work , in which it was supposed that vortons sufficiently stable to last until the present epoch, with the strings becoming current-carrying at formation, as in the case of supersymmetric theories. In this case one obtains,
$$\beta ^{5/9}\frac{T_\mathrm{x}}{m_\mathrm{P}}\left(\frac{\nu _{*}m_\mathrm{P}}{g_s^{*}m_\mathrm{c}}\right)^{4/9}\lesssim 1.$$
(89)
Substituting the estimates above we obtain
$$T_\mathrm{x}\lesssim (\nu _{*})^{-4/9}\beta ^{-5/9}\times 10^7\mathrm{GeV}.$$
(90)
This result is based on the assumption that the vortons in question are stable enough to survive until the present day. Thus, this constraint is naturally more severe than its analogue in the previous section. It is to be remarked that vortons produced in a phase transition occurring at or near the limit that has just been derived would give a significant contribution to the elusive dark matter in the universe. However, if they were produced at the electroweak scale, i.e. with $`T_\mathrm{x}\simeq T_\mathrm{s}\simeq T_{_{\mathrm{EW}}}`$, where $`T_{_{\mathrm{EW}}}\simeq 10^2\mathrm{GeV}`$, then they would constitute such a small dark matter fraction, $`\mathrm{\Omega }_\mathrm{v}\simeq 10^{-9}`$, that they would be very difficult to detect.
These constraints are very general for long lived vortons. However, if the microphysics of the underlying theory is such that the fermion zero modes are destroyed by subsequent phase transitions, then an entirely different situation pertains. For example, in our F-type SUSY theory, the zero modes did not survive supersymmetry breaking. In this case, the current, and hence the resulting vortons, would dissipate. We turn to this case in the next section. If the zero modes do survive SUSY breaking, as in the case of our D-type theory, then the theory faces a vorton problem. It seems possible that such theories are in conflict with observation.
## V Dissipating Cosmic Vortons
In general, SUSY breaking occurs at a fairly low energy, in which case a sizeable random current will have built up in the string loops, resulting from string self-intersections and intercommuting. When the string self-intersects or intercommutes, there is a finite probability that the fermi levels will be excited. This produces a distortion in the fermi levels, resulting in a current flow. As a consequence, vortons will form prior to SUSY breaking.
For strings that are formed at a temperature $`T_\mathrm{x}`$ and become superconducting at formation, the vorton number density is
$$n_\mathrm{v}=\nu _{*}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{3}{2}}T^3,$$
(91)
while the vorton mass density is
$$\rho _\mathrm{v}=\nu _{*}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{5}{4}}T_\mathrm{x}T^3,$$
(92)
where $`\nu _{*}`$ and $`\beta `$ are factors of order unity.
In the F-type theory, the zero modes do not survive SUSY breaking. As a consequence, the current decays, angular momentum is lost, and the vorton shrinks and eventually decays. As the vortons decay, grand unified particles are released from the string core. Since these GUT particles are also unstable, they also decay, but in a baryon number violating manner. As they decay, they create a net baryon asymmetry.
Given the number density of vortons at the SUSY breaking transition we can estimate the baryon asymmetry produced by vorton decay using,
$$\frac{n_b}{s}=\frac{n_\mathrm{v}}{s}ϵK,$$
(93)
where $`s`$ is the entropy density, $`ϵ`$ is the baryon asymmetry produced by a GUT particle and $`K`$ is the number of GUT particles per vorton. We need to consider two cases. First, the vortons may decay before they dominate the energy density of the universe; in this case we do not need to know the time scale for vorton decay, since $`n_\mathrm{v}/s`$ is an invariant quantity. Alternatively, if the vorton energy density does come to dominate the energy density of the universe, we must modify the temperature evolution of the universe to allow for entropy generation.
Assuming that the universe is radiation dominated until after the electroweak phase transition, the temperature of the universe is simply that of the standard hot big bang. We can estimate the entropy density following vorton decay using the standard result,
$$s=\frac{2\pi ^2}{45}g^{}T^3,$$
(94)
where $`g^{}`$ is the effective number of degrees of freedom at the electroweak scale ($`\sim 100`$). The vorton to entropy ratio is then
$$\frac{n_\mathrm{v}}{s}\approx \left(\frac{T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{3}{2}}\frac{45}{2\pi ^2g^{}}\approx 5\times 10^{-6},$$
(95)
for $`T_\mathrm{x}\sim 10^{16}`$ GeV.
The number of GUT particles per vorton is obtained from (72)
$$K=\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{-\frac{1}{4}}\approx 10,$$
(96)
and we have
$$\frac{n_b}{s}\approx 10^{-5}ϵ.$$
(97)
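As a quick arithmetic check of the chain (95)-(97), one may evaluate the factors directly; the short script below is only a sketch, in which the order-unity prefactors are dropped and the value adopted for $`m_\mathrm{P}`$ (taken here to be the reduced Planck mass) is our own assumption.

```python
import math

# Sketch of Eqs. (95)-(97); nu_* and beta are set to one, and the choice of
# Planck mass convention (reduced Planck mass) is an assumption.
T_x = 1.0e16        # string formation scale in GeV (GUT strings)
m_P = 2.4e18        # assumed Planck mass, GeV
g_star = 100.0      # relativistic degrees of freedom at the electroweak scale
epsilon = 1.0e-2    # baryon asymmetry per GUT particle decay, a typical GUT value

n_v_over_s = (T_x / m_P)**1.5 * 45.0 / (2.0 * math.pi**2 * g_star)   # Eq. (95)
K = (T_x / m_P)**(-0.25)                                             # Eq. (96)
n_b_over_s = K * n_v_over_s * epsilon                                # Eqs. (93), (97)

print(f"n_v/s ~ {n_v_over_s:.1e}")   # a few times 10^-6
print(f"K     ~ {K:.1f}")            # of order one to ten GUT particles per vorton
print(f"n_b/s ~ {n_b_over_s:.1e}")   # of order 10^-5 * epsilon, as in Eq. (97)
```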
Alternatively, the vorton energy density may come to dominate and we must allow for a non-standard temperature evolution. The temperature of vorton-radiation equality, $`T_{\mathrm{veq}}`$, is given by
$$T_{\mathrm{veq}}=\frac{\nu _{}}{g^{}}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{5}{4}}T_\mathrm{x}.$$
(98)
If we assume that the vortons decay at some temperature $`T_d`$ and reheat the universe to a temperature $`T_{\mathrm{rh}}`$, we have
$$\widehat{g}^{}T_{\mathrm{rh}}^4=\rho _\mathrm{v}(T=T_\mathrm{d})=\nu _{}\left(\frac{\beta T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{5}{4}}T_\mathrm{x}T_\mathrm{d}^3,$$
(99)
where $`\widehat{g}^{}`$ is the number of degrees of freedom for this lower temperature. This reheating and entropy generation leads to an extra baryon dilution factor. In this case the baryon asymmetry produced by the decaying vortons is given by
$$\frac{n_b}{s}=\frac{n_\mathrm{v}}{s}Kϵ\left[\frac{g^{}}{\widehat{g}^{}}\frac{T_{\mathrm{veq}}}{T_\mathrm{d}}\right]^{-\frac{3}{4}},$$
(100)
where the entropy, $`s`$, is that of the standard big bang model. The universe now evolves as in the standard big bang model and $`n_b/s`$ remains invariant. Using the above results the asymmetry becomes,
$$\frac{n_b}{s}=ϵ\left(\nu _{}\frac{\widehat{g}^3}{g^4}\right)^{\frac{1}{4}}\beta ^{\frac{5}{16}}\left(\frac{T_d^{12}}{m_\mathrm{P}^5T_\mathrm{x}^7}\right)^{\frac{1}{16}}.$$
(101)
This form is valid if the vortons dominate the energy density of the universe before they decay; if this is not the case, the dilution factor is absent and we have
$$\frac{n_b}{s}\approx \frac{ϵ}{g^{}}\left(\frac{T_\mathrm{x}}{m_\mathrm{P}}\right)^{\frac{5}{4}},$$
(102)
as above.
The maximal asymmetry is produced if the vortons decay just before they come to dominate the energy density. This requires $`T_\mathrm{d}\sim 10^6\mathrm{GeV}`$ for grand unified strings. In this case, since $`ϵ`$ is of order $`0.01`$ in many GUT theories, the mechanism can easily produce
$$\frac{n_b}{s}\sim 10^{-10},$$
(103)
as required by nucleosynthesis.
## VI Discussion
In this contribution we have considered the microphysics of cosmic strings arising in physical particle physics theories. In the first part we concentrated on cosmic strings in supersymmetric theories, uncovering many novel features, including the possibility of them carrying persistent currents. We then considered the fate of current-carrying string loops, showing that they form stable vortons. We were able to use this to constrain the underlying theory. We then considered the possibility, suggested in the first part, that the vortons could dissipate and, in doing so, create the observed baryon asymmetry.
In particular we investigated the structure of cosmic string solutions to supersymmetric abelian Higgs models. For completeness we have analysed two models, differing by their method of spontaneous symmetry breaking. However, we expect theory F to be more representative of general defect forming theories, since the SSB employed there is not specific to abelian gauge groups.
We have shown that although SUSY remains unbroken outside the string, it is broken in the string core (in contrast to the gauge symmetry which is restored there). In theory F supersymmetry is broken completely in the string core by a nonzero $`F`$-term, while in theory D supersymmetry is partially broken by a nonzero $`D`$-term. We have demonstrated that, due to the particle content and couplings dictated by SUSY, the cosmic string solutions to both theories are superconducting in the Witten sense. We believe this to be quite a powerful result, that all supersymmetric abelian cosmic strings are superconducting due to fermion zero modes. An immediate and important application of the results of the present paper is that SUSY GUTs which break to the standard model and yield abelian cosmic strings (such as some breaking schemes of $`SO(10)`$) must face strong constraints from cosmology.
While we have performed this analysis for an abelian string, the techniques appear to be quite general and the results for non-abelian theories are very similar.
We have also analysed the effect of soft SUSY breaking on the existence of fermionic zero modes. The Higgs mass terms did not affect the existence of the zero modes. In the theories with $`F`$-term symmetry breaking, gaugino mass terms destroyed all zero modes which involved gauginos, and trilinear terms created extra Yukawa couplings which destroyed all the zero modes present. In the theory with $`D`$-term symmetry breaking, the zero modes were unaffected by the SUSY breaking terms. If the remaining zero modes survive subsequent phase transitions, then stable vortons could result. Such vortons would dominate the energy density of the universe, rendering the underlying GUT cosmologically problematic.
Therefore, although SUSY breaking may alleviate the cosmological disasters faced by superconducting cosmic strings , there are classes of string solution for which zero modes remain even after SUSY breaking. It remains to analyse all the phase transitions undergone by specific SUSY GUT models to see whether or not fermion zero modes survive down to the present time. If the zero modes do not survive SUSY breaking, the universe could experience a period of vorton domination beforehand, and then reheat and evolve as normal afterwards.
We then went on to calculate the remnant vorton density, assuming that the strings become current-carrying at formation, as is the case for the supersymmetric theories under consideration. We used this density to constrain the underlying theory for the case of a persistent current. Two separate cases were considered. If the vortons survive for only a few minutes, we demanded that the universe be radiation dominated throughout nucleosynthesis to constrain the scale of symmetry breaking to be less than $`10^9`$ GeV. However, if the vortons survive to the present time, then we can demand that they do not overclose the universe. In this case we obtained a much stronger constraint that the scale of symmetry breaking must be less than $`10^7`$ GeV. This suggests that GUT theories based on D-type supersymmetric theories, which would automatically predict the existence of cosmic strings with the properties we have uncovered, are in conflict with observation.
On the constructive side, we have shown that it is possible for various conceivable symmetry breaking schemes to give rise to a remnant vorton density sufficient to make up a significant portion of the dark matter in the universe.
We have also shown that vortons can decay after a subsequent phase transition and that these dissipating vortons can create a baryon asymmetry. For example, the zero modes in the F-type theories do not automatically survive SUSY breaking. In this case, the decaying vortons could account for the observed baryon asymmetry, depending on the scale of supersymmetry breaking. If the SUSY breaking scale were just above the electroweak scale, then the resulting asymmetry may well not be large enough. This is due to the fact that vortons dominate the energy density of the universe long before they decay. Their decay results in a reheating of the universe and an increase in the entropy density. This reheating is unlikely to have any effect on the standard cosmology following the phase transition. If, however, the scale of SUSY breaking were such that the vortons did not dominate the energy density of the universe, then their decay could explain the observed baryon asymmetry of the universe.
## Acknowledgments
I wish to thank my collaborators Robert Brandenberger, Brandon Carter, Stephen Davis, Warren Perkins and Mark Trodden for fruitful and enjoyable collaborations. This work was supported in part by PPARC. Finally, I wish to thank Edgard Gunzig for inviting me to such a stimulating meeting and Mady Smet for allowing us to use her wonderful venue of Peyresq.
# Some remarks on a nongeometrical interpretation of gravity and the flatness problem
## 1 Introduction
It seems that a gravitational theory based on a scalar or a vector field in a flat Minkowski space cannot describe known experimental data , . On the other hand, the phenomenological success of Einstein’s theory of gravity suggests that gravity should be described completely, or at least partially, by a symmetric second-rank tensor field. In general, a symmetric second-rank tensor field contains components of spin-0, spin-1 and spin-2 . There are many theories of gravity based on a symmetric second-rank tensor field , . However, if we require that a symmetric second-rank tensor $`\mathrm{\Phi }_{\mu \nu }`$ describes a massless spin-2 field in a flat Minkowski space with metric $`\eta _{\mu \nu }`$ and satisfies a second-order differential equation in which $`\mathrm{\Phi }_{\mu \nu }`$ is consistently coupled to itself and to other fields, then the most general such equation can be written in the form of the Einstein equation (with a cosmological term) -, where the “effective metric” is given by
$$g_{\mu \nu }(x)=\eta _{\mu \nu }+\mathrm{\Phi }_{\mu \nu }(x).$$
(1)
The Einstein equation, when written in terms of $`\mathrm{\Phi }_{\mu \nu }`$ and $`\eta _{\mu \nu }`$, possesses an infinite number of terms. On the other hand, this equation looks much simpler when it is written in terms of $`g_{\mu \nu }`$. This suggests, but in no way proves, that $`g_{\mu \nu }`$, and not $`\mathrm{\Phi }_{\mu \nu }`$, is a fundamental field. Such an interpretation leads to the standard geometrical interpretation of gravity. However, such an interpretation makes gravity very different from other fields, because other fields describe some dynamics for which spacetime serves as a background, while gravity describes the dynamics of spacetime itself. This may be one of the obstacles to formulating a consistent theory of quantum gravity.
The aim of this paper is to investigate a nongeometrical interpretation (NGI) of gravity, in which $`\mathrm{\Phi }_{\mu \nu }(x)`$ is a fundamental gravitational field propagated in a flat Minkowski spacetime with the metric $`\eta _{\mu \nu }`$, while $`g_{\mu \nu }(x)`$ has the role of the effective metric only. Some aspects of such an interpretation have already been discussed . In this paper we reconsider some conclusions drawn in and stress some novel conclusions. We find that such an interpretation is not only consistent, but also leads to several advantages with respect to the standard interpretation. In particular, it leads to a natural resolution of the flatness problem. We also comment on some disadvantages of such an interpretation.
## 2 Global topology and cosmology in the NGI
It has recently been suggested that gravity, as a dynamical theory of the metric tensor $`g_{\mu \nu }(x)`$, should not be interpreted as a dynamical theory of the space-time topology. The topology should rather be fixed by an independent axiom, while the Einstein (or some other) equation determines only the metric tensor on a fixed manifold. For the Cauchy problem to be well posed, it is necessary that the topology is of the form $`\mathrm{\Sigma }\times 𝐑`$. The most natural choice is $`𝐑^D`$ as a global topology, which admits a flat metric $`\eta _{\mu \nu }`$. Thus the NGI of gravity, which we consider in this paper, supports this nontopological interpretation, because in the NGI it is manifest that the topology is fixed by the background spacetime with a flat metric $`\eta _{\mu \nu }`$.
The nongeometrical (or nontopological) interpretation may seem to be inconsistent on global level, because it starts with a global $`𝐑^D`$ topology of spacetime, while the Einstein equation, which determines $`\mathrm{\Phi }_{\mu \nu }`$ and $`g_{\mu \nu }`$, possesses solutions for the metric $`g_{\mu \nu }`$ which correspond to a different topology.
However, this problem is resolved in the Cauchy-problem approach. For example, if the space has $`𝐑^3`$ topology on the “initial” Cauchy surface, then it has the same topology at all other instants. Quite generally, if the Cauchy problem is well posed, then the space topology cannot change during the time evolution . The fact that the topology of time in the Friedmann universes is not $`𝐑`$, but a connected submanifold of $`𝐑`$ which is singular on its end(s), can be interpreted merely as a sign of nonapplicability of the Einstein equation for high-energy densities.
However, the interesting question is whether the NGI is consistent if the Einstein equation is not treated as a Cauchy problem and singularities are not treated as pathologies of the model. In it was concluded that the NGI of gravity was not appropriate for cosmological problems. Contrary to this conclusion, we argue that the application of the NGI of gravity to cosmological problems is actually the main advantage of this interpretation with respect to the conventional interpretation, because the NGI predicts that the effective metric $`g_{\mu \nu }`$ of a homogeneous and isotropic universe is flat, in agreement with observation. In the conventional approach, the assumption that the Universe is homogeneous and isotropic leads to the Robertson-Walker metric
$$ds^2=dt^2-R^2(t)\frac{dx^2+dy^2+dz^2}{\left[1+(k/4)(x^2+y^2+z^2)\right]^2}.$$
(2)
If $`k=0`$, this corresponds to a flat universe. The observed flatness cannot be explained in the conventional approach. However, in the NGI, (2) is interpreted as an effective metric, whereas the fundamental quantity is the gravitational field $`\mathrm{\Phi }_{\mu \nu }`$. The nonvanishing components of $`\mathrm{\Phi }_{\mu \nu }`$ in (2) are
$$\mathrm{\Phi }_{ij}(x)=\left\{1-\frac{R^2(t)}{\left[1+(k/4)(x^2+y^2+z^2)\right]^2}\right\}\delta _{ij},i,j=1,2,3.$$
(3)
Now the assumption that the Universe is homogeneous and isotropic means that $`\mathrm{\Phi }_{\mu \nu }`$ does not depend on $`x,y,z`$, which leads to the conclusion that the relation $`k=0`$ must be satisfied.
## 3 The question of local consistency of the NGI
The fact that the NGI leads to a natural resolution of the flatness problem suggests that the NGI could be the right interpretation. Thus, it is worthwhile to further explore the consistency of such an interpretation.
Let us start with the motion of a particle in a gravitational field. If we neglect the contribution of the particle to the gravitational field $`\mathrm{\Phi }_{\mu \nu }(x)`$, then the action of the particle with a mass $`m`$ can be chosen to be ,
$$S=m\left[\frac{1}{2}\int d\tau \dot{x}^\mu \dot{x}^\nu \eta _{\mu \nu }+\kappa \int d\tau h_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu \right],$$
(4)
where $`\tau `$ is the proper time of the particle, $`\dot{x}^\mu =dx^\mu /d\tau `$, $`h_{\mu \nu }(x)`$ is a redefined gravitational field, $`2\kappa h_{\mu \nu }(x)\equiv \mathrm{\Phi }_{\mu \nu }(x)`$, and $`\kappa `$ is a coupling constant. The value of $`\kappa `$ is determined by the definition of $`h_{\mu \nu }(x)`$. For example, $`h_{\mu \nu }(x)`$ can be defined such that, in the weak-field limit, $`h_{00}(x)`$ is equal to Newton’s gravitational potential. The action (4) can also be written as
$$S=m\left[\frac{1}{2}\int d\tau g_{\mu \nu }(x)\dot{x}^\mu \dot{x}^\nu \right],$$
(5)
which is the conventional form of the action of the particle in the gravitational field. Both forms of the action lead to the same equations of motion which determine the trajectory $`x^\mu (\tau )`$. In the conventional geometrical interpretation, this trajectory is interpreted as a motion along a geodesic, which is not the case for the NGI.
In (4) and (5) it was stated that $`\tau `$ is the proper time, but the proper time was not defined. For (4) one could naively take the definition $`d\tau ^2=\eta _{\mu \nu }dx^\mu dx^\nu `$. On the other hand, in (5) the proper time is defined as $`d\tau ^2=g_{\mu \nu }dx^\mu dx^\nu `$, which leads to results which are in agreement with observations. We require that (4) is equivalent to (5), so in (4) we must take
$$d\tau ^2=[\eta _{\mu \nu }+2\kappa h_{\mu \nu }(x)]dx^\mu dx^\nu .$$
(6)
It is interesting to note that the existence of a geometrical interpretation is by no means a property of the symmetric second-rank tensor field alone. For example, as noted in , the interaction of a particle with a scalar field $`\varphi (x)`$ can be described by the interaction part of the action $`S_I=m\kappa \int d\tau \varphi (x)\dot{x}^\mu \dot{x}^\nu \eta _{\mu \nu }`$, which leads to the action of the form (5), where the effective metric is $`g_{\mu \nu }(x)=\eta _{\mu \nu }(1+2\kappa \varphi (x))`$.
Now a few comments on the interpretation of various components of $`g_{\mu \nu }`$. For example, if $`g_{00}`$ depends on $`x`$, in the conventional interpretation this is interpreted as a phenomenon that the lapse of time depends on $`x`$. In the NGI, it is interpreted that the effect of gravity is such that all kinds of matter (massive and massless) move slower or faster, depending on $`x`$. Because of the equivalence principle (the coupling constant $`\kappa `$ in (4) is the same for all kinds of particles), the motion of all kinds of matter is changed in the same way, namely, in such a way as if the metric of the time itself depended on $`x`$. Similarly, if $`g_{ij}`$ depends on $`x`$, in the NGI it is interpreted that the effect of gravity is such that all kinds of matter are contracted or elongated in the same way, depending on $`x`$. More details on this aspect of the NGI can be found in .
In the NGI, the actual distances are given by $`\eta _{\mu \nu }`$ instead of by $`g_{\mu \nu }`$. For example, the actual time distance is given by $`dt`$ instead of by $`\sqrt{g_{00}}dt`$. Similarly, the actual space distance in the $`x^1`$-direction is given by $`dx^1`$ instead of by $`\sqrt{g_{11}}dx^1`$. Consequently, the actual velocity of light $`d𝐱/dt`$ (with $`ds^2=0`$) is no longer a constant. However, as stressed in , these actual distances are unobservable. Only the effective metric $`g_{\mu \nu }`$ can be measured. This is one of the unpleasant features of the NGI, but this does not make it inconsistent.
However, there is even a more serious problem of the NGI. These actual distances are not only unobservable, but they are not uniquely defined, because of the invariance with respect to general coordinate transformations of the Einstein equation. The NGI makes sense only if some coordinate condition is fixed. If we can somehow find the right coordinate condition, then we can also define the actual distances. However, it is difficult to find this, because all coordinate conditions lead to the same observable effects, at least in classical physics.
However, it is possible that, in quantum gravity, different coordinate conditions are not equivalent. Moreover, some alternative classical theories of gravity do not possess the invariance with respect to general coordinate transformations (see, for example, ). All this suggests that, perhaps, there is a possibility, at least in principle, of identifying the right coordinate condition experimentally. At present, we can only guess what that might be, using some simplicity and symmetry arguments. If we require that this condition should be expressed in terms of $`\eta _{\mu \nu }`$ and $`\mathrm{\Phi }_{\mu \nu }`$, and that this should not violate Lorentz covariance, then the simplest choice is the harmonic condition
$$D^\mu \mathrm{\Phi }_{\mu \nu }=0,$$
(7)
where $`D^\mu `$ is the covariant derivative with respect to a flat metric (i.e., a metric which can be transformed to $`\eta _{\mu \nu }`$ by a coordinate transformation). This condition is preferred by many authors , . The metric (2) does not satisfy this condition, but one can easily transform (2) into coordinates for which this condition is satisfied, and conclude in the same way that $`k=0`$. One can also see that (3) for $`k=0`$ already satisfies (7).
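The last statement is easy to verify symbolically; the fragment below is only a sketch, assuming Cartesian coordinates, signature $`(+,-,-,-)`$, and that the covariant derivative in (7) reduces to the ordinary partial derivative in these coordinates.

```python
# Check that Phi_ij = (1 - R(t)^2) delta_ij (Eq. (3) with k = 0) obeys
# the harmonic condition (7) when D^mu reduces to the partial derivative.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
R = sp.Function('R')(t)
coords = [t, x, y, z]
eta_inv = sp.diag(1, -1, -1, -1)          # inverse flat metric, signature (+,-,-,-)

Phi = sp.zeros(4, 4)
for i in range(1, 4):                      # only the spatial components are nonzero
    Phi[i, i] = 1 - R**2

for nu in range(4):
    div = sum(eta_inv[mu, mu] * sp.diff(Phi[mu, nu], coords[mu]) for mu in range(4))
    print(nu, sp.simplify(div))            # prints 0 for every nu
```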
## 4 Conclusion
The NGI of gravity is consistent and leads to a natural resolution of the flatness problem. The flatness problem can also be resolved by the inflationary model, which predicts that today the Universe should be very close to flat, even if it was not so flat in early stages of its evolution. On the other hand, the NGI predicts that in a homogeneous and isotropic universe, the exact flatness must be observed in all stages of its evolution. Both predictions are in agreement with present observational data.
The gravitational field $`\mathrm{\Phi }_{\mu \nu }(x)`$ does not differ much from other fields, because it is a field propagated in a nondynamical flat spacetime. The consistency of the NGI requires that some coordinate condition should be fixed, so the resulting theory is no longer covariant with respect to general coordinate transformations. However, the Einstein equation written in terms of $`\eta _{\mu \nu }`$ and $`\mathrm{\Phi }_{\mu \nu }`$, and supplemented by (7), is Lorentz covariant.
The disadvantages of the NGI are the following: The actual metric $`\eta _{\mu \nu }`$ is unobservable; only the effective metric $`g_{\mu \nu }`$ can be measured, at least if the equivalence principle is exact. The Einstein equation seems very complicated when written in terms of $`\mathrm{\Phi }_{\mu \nu }(x)`$ and $`\eta _{\mu \nu }`$. The action for a particle in a gravitational field, given by (4) and (6), in the NGI also seems more complicated than in the conventional, geometrical interpretation.
However, if one of the alternative theories of gravity is more appropriate than the theory based on the Einstein equation, it is possible that the equivalence principle is not exact and that the correct equation of motion is not so complicated when written in terms of $`\eta _{\mu \nu }`$, $`\mathrm{\Phi }_{\mu \nu }(x)`$, and possibly some additional dynamical fields.
## Acknowledgment
The author is grateful to N. Bilić and H. Štefančić for some useful suggestions. This work was supported by the Ministry of Science and Technology of the Republic of Croatia under Contract No. 00980102.
# A VLT colour image of the optical Einstein ring 0047–2808
Based on observations obtained at the European Southern Observatory, Paranal, Chile, and the United Kingdom Infrared Telescope, Hawaii
## 1 Introduction
Einstein–ring gravitational lens images should be much more common at optical wavelengths than at radio wavelengths (Miralda–Escudé and Lehár miralda (1992)), but so far all but one of the known Einstein rings have been discovered by radio techniques. The exception is 0047–2808 (Warren et al. 1996a ) where a high–redshift $`z=3.595`$ star–forming galaxy, with strong Ly$`\alpha `$ emission at $`5589`$Å, is lensed by a massive early–type galaxy at $`z=0.485`$. We are engaged in a survey to detect similar systems (Warren et al. 1996b ). The search strategy is to identify anomalous emission lines (Ly$`\alpha `$ from star–forming galaxies at $`2<z<4`$) in the spectra of a large sample of distant early–type galaxies at $`z0.4`$. This has the advantage that by the very nature of the identification procedure the redshifts of both the source and deflector are obtained and so the full lensing geometry is known. In addition, because the sources are extended the resulting images, rings or arcs, offer the prospect of providing powerful constraints on the mass distribution in the deflecting galaxies (Kochanek kochanek (1995)). Finally, because of the magnification, it is possible to study these very faint sources both spectroscopically (Warren et al. warren (1998)) and morphologically, resolving angular scales much smaller than is possible for unlensed objects. The latter prospects are of particular interest because the sources are similar to but fainter than the population of high-redshift star–forming objects identified by Steidel and coworkers (Steidel et al. steidel (1996)), and cannot presently be studied in any other way. In this paper we present VLT UT1 broad– and narrow–band imaging, together with UKIRT $`K`$–band imaging of 0047–2808.
## 2 Observations and data reduction
### 2.1 VLT imaging
Broad-band $`B`$ and Ly$`\alpha `$ narrow-band images of 0047–2808 were obtained on the night of 1998 August 30, with the VLT UT1 test camera as part of the Science Verification programme (Leibundgut et al. leibundgut (1998)). The CCD pixel size is $`0\stackrel{}{.}045`$. However, the CCD was binned $`2\times 2`$, so the pixel size in all the frames was $`0\stackrel{}{.}09`$. The narrow-band filter has a central wavelength $`5589`$Å and width $`20`$Å FWHM. Integration times were $`3\times 300`$sec ($`B`$) and $`3\times 1200`$sec (Ly$`\alpha `$). The seeing was 0.6–0.7 arcsec. Procedures followed for bias subtraction and flatfielding were mostly standard. However, the flatfielded narrow-band images required a correction for large-scale gradients. This was achieved by firstly combining the deregistered frames, clipping out objects. The resulting frame was smoothed, and normalised, and the flatfielded frames were divided by this correction frame.
### 2.2 UKIRT imaging
Broad-band $`K`$ images of 0047–2808 were obtained with the UKIRT IRCAM3 instrument on the nights of 1997 September 12 and 13. The pixel size was $`0\stackrel{}{.}286`$. The final image is a mosaic from two positions, one centred on 0047–2808, total integration time $`\mathrm{15\hspace{0.17em}120}`$sec, and another at a position 58 arcsec to the SSW centred on a second distant elliptical, total integration time $`4725`$sec. At each position several sequences of 9–point dithers were summed. The seeing averaged $`0\stackrel{}{.}7`$. The data were flat–fielded using a sequence of twilight sky exposures, and then an appropriate sky frame, formed from a running median filter through the stack of images, was subtracted from each data frame, and the resulting frames registered and summed.
### 2.3 Results
Fig. 1 shows the rgb colour image resulting from combining the $`B`$ (=b), Ly$`\alpha `$ (=g), and $`K`$ (=r) images. The ring stands out strongly in green because of the strong Ly$`\alpha `$ line in the narrow-band filter, while the lensing galaxy is very red and is visible inside the ring. A minimum $`\chi ^2`$ fit of a de Vaucouleurs model for the light profile of the lensing galaxy in the $`K`$–band image was computed by convolving two-dimensional $`r^{1/4}`$ profiles with the psf, measured from a star in the frame. (The $`K`$–band image is best for fitting the galaxy profile because the ring is not detected at this wavelength, and the contrast between the galaxy and the ring is maximised.) The model was then convolved with the Ly$`\alpha `$–band psf, scaled to the central counts in the Ly$`\alpha `$ image, and subtracted. The resulting image of the ring, rebinned to a pixel size of $`0\stackrel{}{.}18`$, is shown in the top left-hand panel of Fig. 2.
## 3 Gravitational lens model
Compared with the original NTT image (Warren et al. 1996a ) the new VLT image has much higher signal–to–noise ratio. The counter image predicted by our original model, but not convincingly detected in the NTT image, is now clearly seen. The same modelling procedure as described in Warren et al. (1996a ), where the projected surface mass density was assumed to follow the (intrinsic) de Vaucouleurs profile, now measured from the $`K`$–band image (as described above), has been applied. The single free parameter is the global mass–to–light ratio (M/L).
Utilising the computational technique for arbitrary lenses with elliptical symmetry (Schramm schramm (1994)) the M/L ratio in the model was adjusted to produce the most compact configuration for the unlensed image in the source plane. Here we briefly review the key steps in the procedure; a position in the image plane, represented by the complex coordinate $`\mathrm{z}`$, is mapped onto the source plane position $`\omega `$, according to
$$\omega (\mathrm{z},\overline{\mathrm{z}})=\mathrm{z}-\nabla \mathrm{\Phi }(\mathrm{z},\overline{\mathrm{z}}),$$
(1)
where $`\nabla =2\partial /\partial \overline{\mathrm{z}}`$. The deflection potential, $`\mathrm{\Phi }`$, is related to the projected surface mass density in the lens, $`\mathrm{\Sigma }`$, by
$$\nabla ^2\mathrm{\Phi }(\mathrm{z},\overline{\mathrm{z}})=2\mathrm{\Sigma }(\mathrm{z},\overline{\mathrm{z}})/\mathrm{\Sigma }_{\mathrm{crit}}$$
(2)
where $`\mathrm{\Sigma }_{\mathrm{crit}}`$ is the critical surface mass density to gravitational lensing and is given by
$$\mathrm{\Sigma }_{\mathrm{crit}}=\frac{\mathrm{c}^2}{4\pi \mathrm{G}}\frac{\mathrm{D}_{\mathrm{os}}}{\mathrm{D}_{\mathrm{ol}}\mathrm{D}_{\mathrm{ls}}}.$$
(3)
Here, $`\mathrm{D}_{\mathrm{ij}}`$ are the angular diameter distances between the source (s), lens (l) and observer (o) (Schneider et al. schneider (1992)).
A 61$`\times `$61 pixel ($`5.5\times 5.5\mathrm{}`$) region of the VLT narrow–band image, centred on the galaxy, was used for the computation. Assuming a fiducial value of M/L, the mapping given in Equation 1 gives the coordinates in the source plane of any image-plane coordinate. The M/L was adjusted, focusing the emission over the source plane into a small region. This determines the centroid of the source. The source was then modelled as a Gaussian profile. The structure in the ring (i.e. the angular extent of the gaps, size of the counterimage) is dictated by the angular extent of the source. A source of FWHM of $`0\stackrel{}{.}2`$ when reimaged by the lensing potential was found to reproduce the structure in the ring well. To reimage the source each pixel was sub–pixelated into a 10$`\times `$10 grid; these grid points were mapped to the source plane to measure the surface brightness at each grid point. Mapping of the surface brightness in this way is accurate provided the grid spacing mapped to the source plane is substantially smaller than the scale over which the surface brightness of the source varies.
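The reimaging step lends itself to a very compact implementation. The following fragment is only an illustrative sketch: the deflection field $`\nabla \mathrm{\Phi }`$ and the source surface-brightness profile are assumed to be supplied by the lens model as callable functions, and the function names are ours.

```python
import numpy as np

def reimage(deflection, source_sb, npix=61, fov=5.5, sub=10):
    """Ray-trace an image-plane grid through the lens equation (1).

    deflection(z) must return the complex deflection grad(Phi) at the complex
    image-plane position z, and source_sb(w) the source surface brightness at
    the complex source-plane position w; both are assumed vectorised.
    """
    half = fov / 2.0
    centres = np.linspace(-half, half, npix)
    d = fov / npix
    offs = (np.arange(sub) + 0.5) / sub * d - d / 2.0   # 10 x 10 sub-pixel offsets
    image = np.zeros((npix, npix))
    for iy, yc in enumerate(centres):
        for ix, xc in enumerate(centres):
            zs = (xc + offs[None, :]) + 1j * (yc + offs[:, None])
            ws = zs - deflection(zs)                    # omega = z - grad(Phi)
            image[iy, ix] = source_sb(ws).mean()        # average over sub-pixels
    return image
```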
Having fixed the source position and profile the lens M/L was then finely readjusted to provide the best fit, in terms of $`\chi ^2`$, of the model of the ring to the data. The results of this procedure are presented in Fig. 2. The upper left–hand panel presents the VLT image of the ring after subtraction of the model for the surface brightness distribution of the foreground galaxy. Below, the model source is shown on the same scale together with the caustic lines defined by the gravitational lens model; the caustics delineate the regions of multiple imaging over the source plane. The resultant image for this source configuration is presented in the lower right–hand panel. In the upper right–hand panel this image has been convolved with a Gaussian seeing profile to approximate the observing conditions. There is good correspondence between the structure in the model and the observed structure of the ring.
The measured angular radius of the ring in the VLT image is $`1\stackrel{}{.}08\pm 0.03`$. This more accurate value is smaller than the value measured from the NTT image ($`1\stackrel{}{.}35\pm 0.1`$) by $`20\%`$ and this significantly lowers the mass estimate. Part of the discrepancy between the two measurements is due to the fact that the ring is elliptical in shape and that the counterimage (invisible in the old data) lies on the minor axis, i.e. the old value for the radius was measured along the major axis of the ellipse. The two measurements are therefore consistent. The computed mass within the Einstein radius is $`1.73(1.95)\times 10^{11}h^{-1}\mathrm{M}_{\odot }`$ for $`q_o=0.5(0.1)`$ ($`h`$ is the Hubble constant in units of $`100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). The uncertainty in the mass estimate within the Einstein radius is dominated by the uncertainty in the radius rather than the form of the mass profile. Changing the radius by $`0\stackrel{}{.}06`$ (i.e. $`2\sigma `$) changes the computed mass by $`10\%`$. The M/L ratio for the model, corrected for luminosity evolution (Paper I), is $`\mathrm{M}/\mathrm{L}_{B(0)}=14.2h(12.1h)`$.
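For reference, the conversion from ring radius to enclosed mass only involves the critical surface density (3); a routine of the kind sketched below (for a circularly symmetric lens, with the cosmology-dependent angular diameter distances to be supplied by the user) performs it.

```python
import numpy as np

G = 6.674e-11              # m^3 kg^-1 s^-2
c = 2.998e8                # m s^-1
M_sun = 1.989e30           # kg
arcsec = np.pi / (180.0 * 3600.0)

def einstein_mass(theta_E_arcsec, D_ol, D_os, D_ls):
    """Mass inside the Einstein radius theta_E (arcsec); distances in metres."""
    sigma_crit = c**2 * D_os / (4.0 * np.pi * G * D_ol * D_ls)   # Eq. (3)
    R_E = theta_E_arcsec * arcsec * D_ol                         # physical radius
    return np.pi * R_E**2 * sigma_crit / M_sun                   # solar masses
```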
###### Acknowledgements.
We are grateful to the ESO UT1 Science Verification team for providing the optical data. GFL acknowledges support from the Pacific Institute for Mathematical Sciences (1998-1999). The authors acknowledge the data and analysis facilities provided by the Starlink Project which is run by CCLRC on behalf of PPARC.
# Dynamical evolution of bulge shapes
## 1 Introduction
It is now widely believed that the effects of central black holes and cusps on the dynamics of triaxial galaxies are well understood: the box orbits which form the backbone of triaxial elliptical galaxies become chaotic due to scattering by the divergent central force (e.g. \[Gerhard & Binney 1985\]). The scattering of these orbits then results in the evolution of the triaxial galaxy to an axisymmetric one whose dynamics is dominated by well behaved families of regular orbits. Thus most studies of elliptical galaxies still focus on the nature of the regular orbits. Recent investigations of the structure of phase space in triaxial ellipticals have shown that phase space is rich in regular and chaotic regions even in the absence of black holes and steep cusps.
Studying the effects of central black holes on galaxies has taken on renewed importance because of the discovery that many if not most bulge dominated galaxies have central black holes. The existence of central black holes as the end products of the QSO and AGN phenomena is justified by energetic arguments. But less is known about the interplay between the growth of a black hole and the shape of its host galaxy. Most models for the fueling of QSO and AGN require a high degree of triaxiality to transport fuel to the center and to simultaneously transport angular momentum outwards (\[Rees 1990\]). Understanding the interplay between black hole growth and galaxy shape is one motivation for studying the behavior of orbits in triaxial potentials.
There have been several studies of the effect of figure rotation on the orbits of stars in triaxial galaxies. Most studies have focused on the behavior of the periodic orbits in the plane perpendicular to the rotation axis. Some authors (\[Martinet & Udry 1990\]) found that increasing figure rotation resulted in a decrease in the phase space occupied by the unstable $`x_3`$ family and consequently a reduction in the overall chaos. Others (\[Udry & Pfenniger 1988\] and \[Udry 1991\]) found that increasing figure rotation had negligible effect on the stochasticity of orbits in 3-dimensional models. More recently it has been shown (\[Tsuchiya et al. 1993\]) that orbits of all 4 major families in a perfect ellipsoidal model (completely integrable when stationary) became stochastic when figure rotation is added. Rapidly rotating triaxial bars can be almost completely regular (Pfenniger & Friedli 1991) although more slowly rotating bars and bars with high central concentrations generally contain a large fraction of stochastic orbits that eventually destroy the bars (\[Norman et al. 1996\], \[Sellwood & Moore 1999\]).
We use the frequency analysis technique (\[Laskar 1990\]) to study the behavior of orbits in a family of triaxial density models with figure rotation. The models have a density law that fits the observed luminosity profiles of ellipticals and the bulges of spirals and is given by Dehnen’s law
$$\rho (m)=\frac{(3-\gamma )M}{4\pi abc}m^{-\gamma }(1+m)^{-(4-\gamma )},0\le \gamma <3$$
(1.1)
where
$$m^2=\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2},a\ge b\ge c\ge 0$$
(1.2)
and $`M=1`$ is the total mass. The parameter $`\gamma `$ determines the slope of the central density cusp and $`a,b,c`$ are the semi-axes of the model. In some cases we also introduced a central point mass $`M_h`$ representing a nuclear black hole. The figure rotates about its short axis and the degree of figure rotation can be small (as in the case of giant ellipticals) or reasonably large as in the case of bulges. The co-rotation radius $`R_\mathrm{\Omega }`$ is parameterized in units of the half-mass radius of the model and ranges from $`R_\mathrm{\Omega }=25`$ (slowly rotating) to $`R_\mathrm{\Omega }=3`$ (rapidly rotating). Frequency analysis was restricted to $`10^4`$ orbits in each model. Orbits were launched from the equi-effective-potential surface corresponding to the half-mass radius. (Thus all orbits have the same Jacobi Integral, $`E_J=E-\frac{1}{2}|𝛀\times 𝐫|^2`$). The initial conditions for the orbits were selected in two different ways to study orbits from all four major families.
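For concreteness, the density law (1.1)-(1.2) is trivial to evaluate; a minimal implementation, with the total mass set to $`M=1`$ as above, might read:

```python
import numpy as np

def dehnen_density(x, y, z, gamma, a, b, c):
    """Triaxial Dehnen density, Eqs. (1.1)-(1.2), with total mass M = 1."""
    m = np.sqrt((x / a)**2 + (y / b)**2 + (z / c)**2)
    return (3.0 - gamma) / (4.0 * np.pi * a * b * c) * m**(-gamma) * (1.0 + m)**(gamma - 4.0)
```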
## 2 Frequency mapping and resonant tori
Laskar’s (1990) frequency analysis technique is based on the idea that regular orbits have 3 isolating integrals of motion which are related to 3 fundamental frequencies. A filtered Fourier transform technique can be used to accurately determine these 3 frequencies ($`\omega _x`$, $`\omega _y`$, $`\omega _z`$). While stochastic orbits do not really have fixed frequencies, quantities resembling frequencies which measure their local behavior can be used to determine how they diffuse in frequency space. Regular orbits come in three types: (1) Orbits in regions that maintain their regular character in spite of departures of the potential from integrable form; (2) orbits associated with stable resonant tori; (3) orbits associated with stable periodic orbits, or “boxlets”.
The use of frequency mapping has shown that even in weakly chaotic systems, it is the resonant tori that provide the skeletal structure to regular phase space (\[Valluri & Merritt 1998\]). Frequency mapping provides the simplest method for finding resonant tori. They are families of orbits which satisfy a condition: $`l\omega _x+m\omega _y+n\omega _z=0`$ with $`(l,m,n)`$ integers. Such orbits are restricted to 2-dimensional surfaces in phase space and we refer to them as thin orbits. Thin boxes are the most generic box orbits in non-integrable triaxial potentials. They avoid the center because they are two-dimensional surfaces. They generate families of 3-D boxes whose maximum thickness is determined by the strength of the central cusp or black hole (\[Merritt 1999\]). The closed periodic boxlet orbits lie at the intersection of two or more resonance zones. High order resonances also exist for tube orbit families. Unlike the well known thin tube families around the long and short axes, thin resonant tubes are often surrounded by unstable regions, making it difficult to find them without a technique like frequency mapping.
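In practice the fundamental frequencies are extracted from time series of the integrated orbits. The fragment below is a deliberately stripped-down sketch: a windowed FFT peak stands in for Laskar's refined filtered Fourier transform, and resonances are located by a brute-force search over small integers.

```python
import numpy as np

def leading_frequency(series, dt):
    """Dominant frequency of a complex orbital time series, e.g. x(t) + i v_x(t)."""
    w = np.hanning(len(series))
    spec = np.abs(np.fft.fft(series * w))
    freqs = np.fft.fftfreq(len(series), d=dt) * 2.0 * np.pi
    return abs(freqs[np.argmax(spec)])

def find_resonance(wx, wy, wz, max_order=8, tol=1e-3):
    """Search for small integers (l, m, n) with l*wx + m*wy + n*wz ~ 0."""
    for l in range(-max_order, max_order + 1):
        for m in range(-max_order, max_order + 1):
            for n in range(-max_order, max_order + 1):
                if (l, m, n) != (0, 0, 0) and abs(l * wx + m * wy + n * wz) < tol * wz:
                    return (l, m, n)
    return None
```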
## 3 Results: Destruction of the Resonant Tori
A box or boxlet orbit reverses its sense of progression around the rotation axis every time it reaches a turning point. In a rotating frame this means that the path described during the prograde segment of the orbit is not retraced during the retrograde segment. This “envelope doubling” is a consequence of the Coriolis forces on the two segments being different (\[de Zeeuw & Merritt 1983\]). Envelope doubling effectively thickens the thin box orbits driving them closer to the center. This results in a narrowing of the stable portion of the resonance layer and renders a large fraction of the orbits stochastic. The degree of “thickening” increases with increasing figure rotation and results in a corresponding rise in the fraction of stochastic box-like orbits.
Figure 1 (a) shows a plot of a quantity measuring the diffusion rates of $`10^4`$ orbits started at rest at the half-mass equi-potential surface in a non-rotating triaxial model with central cusp slope $`\gamma =0.5`$. Only one octant of the surface is plotted. The grey scale is proportional to the logarithm of the diffusion rate: the dark regions indicate initial conditions corresponding to stochastic orbits, the white regions correspond to regular orbits. Figure 1 (b) shows the same set of orbits started from the equi-effective-potential surface of a model with $`R_\mathrm{\Omega }=8`$. Rotation results in the broadening of the unstable regions with a resultant narrowing of the stable (white) regions. It also gives rise to new unstable and stable resonances which are seen in Figure 1 (b) as dark striations within the white regions. The increase in the number of resonances and their broadening results in greater overlap of nearby stochastic layers eventually leading to the onset of global stochasticity (e.g. \[Chirikov 1979\]).
Contrary to the finding of Tsuchiya et al. (1993) we find that figure rotation has a strong destabilizing effect on inner-long axis tubes. The low angular momentum $`z`$-tubes and the outer $`x`$-tubes also become more stochastic. The high angular momentum $`z`$-tubes are much less affected. The increased stochasticity of tube orbits can be attributed largely to the increase in the width of the stochastic layers associated with the resonant tube orbit families. We emphasize that for the tube orbits it is the destabilization of resonant tubes and not scattering by divergent central forces that determines their stability.
## 4 Conclusions
It is a popular misconception that in the presence of figure rotation box orbits in a triaxial elliptical will loop around the center due to Coriolis forces thereby reducing stochasticity. We find that on the contrary stochasticity increases with increasing figure rotation primarily because the thin box orbits and resonant tubes, which play a crucial role in structuring phase space, are broadened and destabilized by the “envelope doubling” effect.
Models for the fueling of AGN and QSOs require triaxial central potentials which aid accretion onto a black hole, but the same black holes would tend to destroy triaxiality. Low luminosity ($`M_B>-19`$) ellipticals and the bulges of spirals are expected to evolve into axisymmetric shapes on time scales much shorter than the age of the Universe (\[Valluri & Merritt 1998\]). If the peanut-shaped bulges in nearby galaxies are in fact triaxial, they are probably dynamically young or are composed of only tube-like orbits.
###### Acknowledgements.
I thank David Merritt for useful discussions. This work was supported by NSF grants AST 93-18617 and AST 96-17088 and NASA grant NAG 5-2803 to Rutgers University.
# Absence of Chaos in Bohmian Dynamics
## Abstract
The Bohm motion for a particle moving on the line in a quantum state that is a superposition of $`n+1`$ energy eigenstates is quasiperiodic with $`n`$ frequencies.
In a recent paper , O. F. de Alcantara Bonfim, J. Florencio, and F. C. Sá Barreto claim to have found numerical evidence of chaos in the motion of a Bohmian quantum particle in a double square-well potential, for a wave function that is a superposition of five energy eigenstates. But according to the result proven here, chaos for this motion is impossible. We prove in fact that for a particle on the line in a superposition of $`n+1`$ energy eigenstates, the Bohm motion $`x(t)`$ is always quasiperiodic, with (at most) $`n`$ frequencies. This means that there is a function $`F(y_1,\mathrm{},y_n)`$ of period 2$`\pi `$ in each of its variables and $`n`$ frequencies $`\omega _1,\mathrm{},\omega _n`$ such that $`x(t)=F(\omega _1t,\mathrm{},\omega _nt).`$
The Bohm motion for a quantum particle of mass $`m`$ with wave function $`\psi =\psi (x,t)`$, a solution to Schrödinger’s equation, is defined by
$$dx/dt=(\hbar /m)\,\text{Im}\,\frac{\partial \psi /\partial x}{\psi }.$$
(1)
The right hand side of (1) depends upon $`\psi `$ only through its associated ray. In particular, if the wave function
$$\psi (x,t)=\mathrm{\Sigma }_{i=0}^na_ie^{-iE_it/\hbar }\varphi _i(x)$$
(2)
is a superposition of $`n+1`$ energy eigenstates $`\varphi _i`$, then the right hand side of (1) is, in its dependence upon $`t`$, quasiperiodic with $`n`$ frequencies, as is $`|\psi |`$.
The quasiperiodicity in time of the vector field defining a dynamical system in general does not imply any corresponding property of the motion, since an autonomous system (one defined by a time independent vector field) can be chaotic. (In fact, it is autonomous systems that are normally studied in chaos theory.) However, for the Bohm motion on the line, the position of the particle is anchored in the (normalized) wave function, in such a way that its motion $`x(t)`$ inherits the quasiperiodicity of $`|\psi |`$:
A crucial feature of the motion (1) is the equivariance of $`|\psi |^2`$, i.e., the fact that probabilities for configurations given by $`|\psi (x,t)|^2`$ are consistent with the dynamics (1). This is a completely general feature of the Bohmian dynamics, valid in any dimension for any wave function satisfying Schrödinger’s equation. For a single particle moving on the line, it has the following important consequence:
$$\int _{-\infty }^{x(t)}|\psi (x^{\prime },t)|^2dx^{\prime }=\int _{-\infty }^{x(0)}|\psi (x^{\prime },0)|^2dx^{\prime },$$
(3)
which follows from equivariance since in one dimension the dynamics is order-preserving, and in particular the evolution from time 0 to time $`t`$ carries the interval $`(-\infty ,x(0))`$ to $`(-\infty ,x(t))`$.
Given $`\psi (x,0)`$ and $`x(0)`$, equation (3) determines $`x(t)`$ as a functional of $`|\psi (x,t)|^2`$, and thus $`x(t)`$, like $`|\psi (x,t)|^2`$, is quasiperiodic with $`n`$ frequencies. In fact $`x(t)=F(\omega _1t,\dots ,\omega _nt)`$ with $`\omega _i=(E_i-E_0)/\hbar `$ for $`i=1,\dots ,n`$ and $`F(y_1,\dots ,y_n)=G(\int _{-\infty }^{x(0)}|\psi (x^{\prime },0)|^2dx^{\prime })`$ where $`G`$ is the inverse of the function $`H(x)=\int _{-\infty }^x|\psi (x^{\prime })|^2dx^{\prime }`$ with
$$\psi (x)\equiv \psi _{y_1,\dots ,y_n}(x)=a_0\varphi _0(x)+\mathrm{\Sigma }_{i=1}^na_ie^{-iy_i}\varphi _i(x).$$
(4)
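Relation (3) also provides a direct numerical recipe for the trajectory: evolve $`\psi `$ by any standard method and, at each time, invert the cumulative distribution. A minimal sketch (the wave functions are assumed to be precomputed on a spatial grid) reads:

```python
import numpy as np

def bohm_trajectory(x_grid, psi_of_t, x0):
    """Trajectory x(t) from Eq. (3): the cumulative probability to the left of
    the particle is conserved.  psi_of_t is a sequence of normalised wave
    functions psi(x, t_k) sampled on x_grid; x0 is the initial position."""
    dx = x_grid[1] - x_grid[0]
    def cum(psi):
        return np.cumsum(np.abs(psi)**2) * dx
    P0 = np.interp(x0, x_grid, cum(psi_of_t[0]))
    traj = []
    for psi in psi_of_t:
        traj.append(np.interp(P0, cum(psi), x_grid))   # invert the monotone CDF
    return np.array(traj)
```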
For the one-dimensional motion $`x(t)`$ the Lyapunov exponent $`\lambda `$ is given by
$$\lambda =\underset{t\rightarrow \infty }{lim}t^{-1}\mathrm{ln}\frac{dx(t)}{dx(0)}.$$
(5)
It presumably follows from the quasiperiodicity of $`x(t)`$ alone that $`dx(t)/dx(0)`$ is similarly quasiperiodic. In any case, we have by equivariance that $`|\psi (x(t),t)|^2dx(t)=|\psi (x(0),0)|^2dx(0)`$, so that
$$dx(t)/dx(0)=\frac{|\psi (x(0),0)|^2}{|\psi _{y_1,\mathrm{},y_n}(F(y_1,\mathrm{},y_n))|^2}$$
(6)
with $`y_i=\omega _it`$. Hence $`dx(t)/dx(0)`$ is quasiperiodic with $`n`$ frequencies and thus $`\lambda =0`$.
Remarks: (i) In one-dimension we always have that $`dx(t)/dx(0)=|\psi (x(0),0)|^2/|\psi (x(t),t)|^2`$. Thus the vanishing of the Lyapunov exponent $`\lambda `$ is more general than described here, and should be valid for any wave function, on the circle as well as the line. After all, for bound states the ratio on the right is not likely to grow or decrease in any systematic way at all, while for states with continuous spectrum the behavior will be at most power law; in no case will there be exponential growth or decay. (ii) Another aspect of chaos, the weak convergence of densities to the “equilibrium” distribution (for Bohmian mechanics given by $`|\psi (x(t),t)|^2`$) will, as a simple consequence of the order preserving character of such motions, almost always fail for any one-dimensional flow, Bohmian or otherwise. The sole exception can occur only when the asymptotic “equilibrium” distribution is concentrated on a single (perhaps moving) point, something that is impossible for Bohmian mechanics.
I am grateful to Michael Kiessling for helpful suggestions. This work was supported in part by NSF Grant No. DMS-9504556.
# Component separation in harmonically trapped boson-fermion mixtures
## I Introduction
Since the recent experimental realization of Bose-Einstein condensation in dilute gases of rubidium , sodium , lithium , and hydrogen a great deal of interest in Bose condensed systems has concentrated on the topic of multi-component condensates. This field was stimulated by the successful demonstration of overlapping condensates in different spin states of rubidium in a magnetic trap and of sodium in an optical trap , the (binary) mixtures being produced either by sympathetic cooling, which involves one species being cooled to below the transition temperature only through thermal contact with an already condensed Bose gas, or by radiative transitions out of a single component condensate. Since then a host of experiments has been conducted on systems with two condensates, exploring both the dynamics of component separation , and measuring the relative quantum phase of the two Bose-Einstein condensates . Most of the theoretical work concerning multi-component condensates has been devoted to systems of two Bose condensates. However, other systems are of fundamental interest, one of these being a Bose condensate with fermionic impurities, a system reminiscent of superfluid $`{}^{3}\mathrm{He}`$-$`{}^{4}\mathrm{He}`$ mixtures. In particular the possibility of sympathetic cooling of fermionic isotopes has been predicted in both $`{}_{}{}^{6}\mathrm{Li}`$-$`{}_{}{}^{7}\mathrm{Li}`$ , $`{}_{}{}^{39}\mathrm{K}`$-$`{}_{}{}^{40}\mathrm{K}`$, and $`{}_{}{}^{41}\mathrm{K}`$-$`{}_{}{}^{40}\mathrm{K}`$ . Magneto-optical trapping of the fermionic potassium isotope $`{}_{}{}^{40}\mathrm{K}`$ has been reported .
The boson-fermion mixture was discussed in a previous paper within the Thomas-Fermi approximation, which amounts to neglecting the kinetic energy of the bosons and to applying a semi-classical filling of the phase space of the fermions. For the bosons, this is a valid approximation in the limit of strong interactions or large particle numbers, see . In this paper we present a numerical analysis of the system, incorporating the correct operator form of the kinetic energy of the particles.
The paper is structured as follows. In Sec. II we study in detail the case of an isotropic external potential and we develop both the Thomas-Fermi approximation and the full quantum mechanical description of the fermions. The numerical procedure is briefly introduced. In Sec. III the case of the anisotropic harmonic oscillator trap is outlined within the Thomas-Fermi approximation for the fermions. In Sec. IV we present our quantitative results for the isotropic and anisotropic trapping potentials, demonstrating the accuracy of the predictions made in , and addressing the issue of symmetry breaking in elongated traps. Sec. V summarizes the main results.
Throughout, we assume that the bosons and fermions have the same mass, $`M`$, and that the atoms are all trapped in the same external harmonic oscillator potential. This choice is of course only a convenience; all our calculations are readily generalized to differing experimental parameters.
## II Isotropic traps
### A Gross-Pitaevskii equation for the bosons
In the mean field description the behavior of the single particle wavefunction $`\psi (\stackrel{}{r})`$, assumed to describe all $`N_B`$ bosons in the gas, is governed by the Gross-Pitaevskii equation . In the presence of fermions, this equation is modified by the addition of an interaction term proportional to the fermion density, $`n_F(\stackrel{}{r})`$
$$\left[-\frac{\hbar ^2}{2M}\nabla ^2+V_{ext}(\stackrel{}{r})+gN_B|\psi (\stackrel{}{r})|^2+hn_F(\stackrel{}{r})\right]\psi (\stackrel{}{r})=\mu \psi (\stackrel{}{r}),$$
(1)
where $`V_{ext}(\stackrel{}{r})=\frac{1}{2}M(\omega _x^2x^2+\omega _y^2y^2+\omega _z^2z^2)`$ is the external confining potential, and $`\mu `$ is the boson chemical potential (energy per particle). The value of $`\mu `$ is fixed by the normalisation condition, $`\int d^3r\,n_B(\stackrel{}{r})=N_B`$, on the boson density $`n_B(\stackrel{}{r})=N_B|\psi (\stackrel{}{r})|^2`$.
The low kinetic energies of the atoms permit the replacement of their short range interaction potential by a delta function potential of strength g or h. This is known as the pseudopotential method . There is no fermion-fermion interaction in this description, see below. In (1) g and h thus represent the boson-boson and the boson-fermion interaction strengths proportional to the respective $`s`$-wave scattering lengths .
In isotropic traps we have $`V_{ext}(\stackrel{}{r})=\frac{1}{2}M\omega ^2r^2`$, r being the distance from the trap center. By the substitution $`\chi =r\psi `$ in (1) we obtain the radial equation
$$-\frac{\hbar ^2}{2M}\frac{d^2\chi }{dr^2}+\left[V_{ext}(r)+gN_B\left|\frac{\chi (r)}{r}\right|^2+hn_F(r)\right]\chi (r)=\mu \chi (r).$$
(2)
In order to simplify the formalism, we rescale (2) in terms of harmonic oscillator units, that is
$`\stackrel{}{r}`$ $`=`$ $`a_0\stackrel{~}{\stackrel{}{r}},`$ (3)
$`\mu `$ $`=`$ $`\hbar \omega \stackrel{~}{\mu },`$
$`\psi (\stackrel{}{r})`$ $`=`$ $`a_0^{-3/2}\stackrel{~}{\psi }(\stackrel{~}{\stackrel{}{r}}),`$
$`\chi (r)`$ $`=`$ $`a_0^{-1/2}\stackrel{~}{\chi }(\stackrel{~}{r}),`$
where $`a_0=\sqrt{\hbar /M\omega }`$ is the width of the oscillator ground state. Defining
$$\stackrel{~}{g}=\frac{gM}{a_0\hbar ^2},\stackrel{~}{h}=\frac{hM}{a_0\hbar ^2},$$
(7)
we arrive at the simplified equation for the radial function
$$-\frac{1}{2}\frac{d^2\stackrel{~}{\chi }}{d\stackrel{~}{r}^2}+\left[\frac{1}{2}\stackrel{~}{r}^2+\stackrel{~}{g}N_B\left|\frac{\stackrel{~}{\chi }(\stackrel{~}{r})}{\stackrel{~}{r}}\right|^2+\stackrel{~}{h}\stackrel{~}{n_F}(\stackrel{~}{r})\right]\stackrel{~}{\chi }(\stackrel{~}{r})=\stackrel{~}{\mu }\stackrel{~}{\chi }(\stackrel{~}{r}).$$
(8)
In the remaining parts of the paper we shall omit the tilde from this equation.
To solve the Gross-Pitaevskii equation for the bosons, we must find $`n_F(\stackrel{}{r})`$. To this end we invoke two methods: A semi-classical (Thomas-Fermi) approximation and a quantum mechanical treatment.
### B Thomas-Fermi approximation for the fermions
In the semi-classical (Thomas-Fermi) approximation the particles are assigned classical positions and momenta, but the effects of quantum statistics are taken into account. That is: the density in the occupied part of phase space is simply $`(2\pi )^{-3}`$, and sums over states can be replaced by the corresponding integrals over $`\stackrel{}{r}`$ or $`\stackrel{}{k}`$. The fermions experience a potential $`V(\stackrel{}{r})=V_{ext}(\stackrel{}{r})+hn_B(\stackrel{}{r})`$ and for particle motion in such a potential it is possible to define a local Fermi vector $`\stackrel{}{k}_F(\stackrel{}{r})`$ by
$$E_F=\frac{\hbar ^2k_F(\stackrel{}{r})^2}{2M}+V(\stackrel{}{r}),$$
(9)
so that the volume of the local Fermi sea in $`k`$ space is simply
$$\frac{4}{3}\pi k_F(\stackrel{}{r})^3=(2\pi )^3n_F(\stackrel{}{r}).$$
(10)
In the low temperature limit, where $`p`$-wave (and higher multipole) scattering can be neglected, the suppression of the $`s`$-wave scattering amplitude due to the antisymmetry of the many-body wavefunction implies that the spin polarized fermions constitute a noninteracting gas (for the case of an interacting Fermi gas, see ). Hence the density of the fermionic component is given by
$$n_F(\stackrel{}{r})=\left\{\frac{2M}{\hbar ^2}\left[E_F-V_{ext}(\stackrel{}{r})-hn_B(\stackrel{}{r})\right]\right\}^{3/2}/(6\pi ^2).$$
(11)
As in the case of the bosons, where the chemical potential must be adjusted for the integral of the density over space to give the correct number of particles, the Fermi energy determines the proper normalisation; $`\int d^3r\,n_F(\stackrel{}{r})=N_F`$. For a thorough discussion of trapped fermions (also at $`T>0`$), and comments on the range of validity of the Thomas-Fermi approximation see .
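In practice Eq. (11) and the normalisation together fix $`E_F`$; a simple bisection on a radial grid (harmonic oscillator units, isotropic trap, boson density given) is sufficient. The sketch below is illustrative only.

```python
import numpy as np

def fermi_density_TF(r, n_B, E_F, h):
    """Thomas-Fermi fermion density, Eq. (11), in oscillator units (hbar = M = omega = 1)."""
    arg = 2.0 * (E_F - 0.5 * r**2 - h * n_B)
    return np.where(arg > 0.0, arg, 0.0)**1.5 / (6.0 * np.pi**2)

def find_fermi_energy(r, n_B, h, N_F, E_lo=0.0, E_hi=200.0, tol=1e-8):
    """Bisect on E_F until the density integrates to N_F particles."""
    def number(E_F):
        return np.trapz(4.0 * np.pi * r**2 * fermi_density_TF(r, n_B, E_F, h), r)
    while E_hi - E_lo > tol:
        E_mid = 0.5 * (E_lo + E_hi)
        if number(E_mid) < N_F:
            E_lo = E_mid
        else:
            E_hi = E_mid
    return 0.5 * (E_lo + E_hi)
```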
### C Slater determinant description
The many-body wavefunction, $`\mathrm{\Psi }(\stackrel{}{r}_1\dots \stackrel{}{r}_{N_F})`$, may be represented by a Slater determinant
$$\mathrm{\Psi }(\stackrel{}{r}_1\mathrm{}\stackrel{}{r}_{N_F})=\frac{1}{\sqrt{N_F!}}𝒜\underset{i=1}{\overset{N_F}{\prod }}\phi _i(\stackrel{}{r}_i),$$
(12)
where $`𝒜`$ is the antisymmetrization operator. This Slater determinant solves a stationary Schrödinger equation
$$\widehat{H}\mathrm{\Psi }(\stackrel{}{r})=E\mathrm{\Psi }(\stackrel{}{r}),$$
(13)
with a Hamiltonian that is the sum of $`N_F`$ independent single-particle operators
$`\widehat{H}`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{N_F}{\sum }}}\widehat{H}_i,`$ (14)
$`\widehat{H}_i`$ $`=`$ $`-{\displaystyle \frac{\hbar ^2}{2M}}\nabla _{r_i}^2+{\displaystyle \frac{1}{2}}M\omega ^2r_i^2+hn_B(\stackrel{}{r}_i).`$ (15)
The orbitals $`\phi _i(\stackrel{}{r}_i)`$ solve the eigenvalue equation
$$\widehat{H}_i\phi _i(\stackrel{}{r}_i)=E_i\phi _i(\stackrel{}{r}_i).$$
(16)
We make the substitution $`\phi (\stackrel{}{r})=\frac{u_{n\ell }(r)}{r}Y_{\ell m}(\theta ,\varphi )`$, where $`Y_{\ell m}(\theta ,\varphi )`$ are the usual spherical harmonics, and we thus obtain a radial equation for the functions $`u_{n\ell }`$ in harmonic oscillator units
$$-\frac{1}{2}\frac{d^2u_{n\ell }}{dr^2}+\left[\frac{1}{2}r^2+\frac{\ell (\ell +1)}{2r^2}+hn_B(r)\right]u_{n\ell }(r)=E_{n\ell }u_{n\ell }(r).$$
(17)
It is important to keep in mind that the radial functions must satisfy the boundary condition $`u_{n\ell }(0)=0`$, to ensure a finite particle density at the center of the trap.
Equation (17) can be solved once for every $`\ell `$-value, thus producing the energy spectrum. The centrifugal term in the radial equation implies that the fermions can be considered to move in an isotropic effective potential, $`V_{eff}(r)=V_{ext}(r)+hn_B(r)+\frac{\ell (\ell +1)}{2r^2}`$. The energy levels $`E_{n\ell }`$ are $`(2\ell +1)`$-fold degenerate, and the fermion density is given by
$$n_F(\stackrel{}{r})=\underset{\mathrm{occupied}\;\mathrm{states}}{\sum }\left|\frac{u_{n\ell }(r)}{r}Y_{\ell m}(\theta ,\varphi )\right|^2=\underset{n\ell }{\sum }(2\ell +1)\frac{|u_{n\ell }(r)|^2}{4\pi r^2},$$
(18)
since $`\sum _{m=-\ell }^{m=\ell }|Y_{\ell m}(\theta ,\varphi )|^2=(2\ell +1)/4\pi `$. Once found, the eigenstates are sorted by energy and the energy levels are filled from below with $`N_F`$ particles. The Fermi energy is the energy of the highest occupied orbital. The maximum value of the angular momentum may be estimated from the Thomas-Fermi expressions (10,11) for $`h=0`$ by maximizing $`rk_F(\stackrel{}{r})`$, the maximal value of $`\ell `$ available at the point $`\stackrel{}{r}`$. This yields the simple result $`\ell _{max}\approx E_F`$, where the Fermi energy in the noninteracting limit is $`E_F=(6N_F)^{1/3}`$ in harmonic oscillator units. To test our numerical calculations for fermions not interacting with the bosons ($`h=0`$), we have compared our spatial density distributions with those of Schneider and Wallis and found excellent agreement.
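A minimal version of this quantum mechanical procedure can be written down directly: discretize the radial equation (17) for each $`\ell `$, diagonalize, sort the levels, and accumulate the density of Eq. (18). The snippet below is only a rough sketch in oscillator units (not the code used for the figures); the grid, $`\ell _{max}`$ and the number of states kept per $`\ell `$ are assumptions to be adjusted.

```python
import numpy as np

def fermion_density_quantum(r, n_B, h, N_F, l_max, states_per_l=40):
    """Finite-difference diagonalization of Eq. (17) per l (Dirichlet u(0)=u(R)=0),
    filling levels from below with degeneracy 2l+1 and summing Eq. (18)."""
    dr = r[1] - r[0]                      # r must start slightly above zero
    levels = []
    for l in range(l_max + 1):
        V = 0.5 * r**2 + l * (l + 1) / (2.0 * r**2) + h * n_B
        diag = 1.0 / dr**2 + V            # discretized -(1/2) d^2/dr^2 + V
        off = -0.5 / dr**2 * np.ones(len(r) - 1)
        E, U = np.linalg.eigh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
        for n in range(min(states_per_l, len(E))):
            u = U[:, n] / np.sqrt(np.sum(U[:, n]**2) * dr)    # normalize int u^2 dr = 1
            levels.append((E[n], l, u))
    levels.sort(key=lambda s: s[0])
    n_F, filled = np.zeros_like(r), 0
    for E_nl, l, u in levels:             # fill from below; the last shell may overfill slightly
        if filled >= N_F:
            break
        n_F += (2 * l + 1) * u**2 / (4.0 * np.pi * r**2)
        filled += 2 * l + 1
    return n_F

# l_max of order E_F ~ (6 N_F)^(1/3) is sufficient, as argued above.
```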
### D Numerical Procedure
We note that the solution of both Eqs. (11) and (17) require prior knowledge of the boson density $`n_B(\stackrel{}{r})`$. To obtain the density profiles of the two components, we insert iteratively the density of one component into the equation for the other until a desired convergence is reached.
To solve (8) for the boson density, we use the method of steepest descent, that is, we propagate a trial function (which can initially be chosen almost arbitrarily) in imaginary time $`\tau `$, replacing $`\mu \chi (r)`$ by $`-(\partial /\partial \tau )\chi (r,\tau )`$. In the long time limit the propagation “filters” the trial function to the condensate ground state. Alternative methods for solving the Gross-Pitaevskii equation numerically are presented in .
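A bare-bones version of this relaxation is sketched below with explicit finite differences in place of the split-step propagator; it is meant only as an illustration of the “filtering”, and the trial function, grid and step size are our own placeholder choices.

```python
import numpy as np

def gpe_ground_state(r, n_F, g, h, N_B, steps=50000):
    """Imaginary-time (steepest-descent) relaxation of the radial GP equation (8),
    oscillator units, with the normalization 4*pi*int |chi|^2 dr = 1."""
    dr = r[1] - r[0]
    dtau = 0.2 * dr**2                           # explicit Euler step; needs dtau < dr^2
    chi = r * np.exp(-0.5 * r**2)                # almost arbitrary trial function
    chi /= np.sqrt(4.0 * np.pi * np.sum(chi**2) * dr)
    for _ in range(steps):
        d2 = np.zeros_like(chi)                  # second derivative with chi(0) = chi(R) = 0
        d2[1:-1] = (chi[2:] - 2.0 * chi[1:-1] + chi[:-2]) / dr**2
        V = 0.5 * r**2 + g * N_B * (chi / r)**2 + h * n_F
        chi += dtau * (0.5 * d2 - V * chi)       # mu*chi replaced by -d(chi)/d(tau)
        chi /= np.sqrt(4.0 * np.pi * np.sum(chi**2) * dr)   # restore the norm every step
    return chi                                   # boson density: n_B = N_B * (chi / r)**2
```

In a full calculation this routine and the fermion-density evaluation are simply called in turn until the two profiles stop changing.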
The evaluation of the fermion density profile is done by two methods, as described above. In the case of the Thomas-Fermi approximation, $`n_F(\stackrel{}{r})`$ is found by direct insertion of $`n_B(\stackrel{}{r})`$ into (11), searching numerically for the energy $`E_F`$ giving the right number of particles. Within the Slater determinant method one obtains the density profile directly from (18), once the diagonalization of (17) has been done.
## III Anisotropic traps
In this section we treat the case of an anisotropic trapping potential with a cylindrical geometry ($`\omega _x=\omega _y=\omega _{\perp }\ne \omega _z`$), as this corresponds to current experimental setups. We thus have
$$V_{ext}=\frac{1}{2}M\omega _{\perp }^2r^2+\frac{1}{2}M\omega _z^2z^2,$$
(19)
where $`r=\sqrt{x^2+y^2}`$ and $`z`$ are the radial and axial coordinates, respectively. We define the asymmetry parameter $`\lambda =\omega _z/\omega _{\perp }`$.
As in the case of the isotropic potential we have for the bosons a non-linear Schrödinger equation corresponding to (1). By the substitution $`\chi =\sqrt{r}\psi `$ we obtain the equation
$`\mu \chi (r,z)`$ $`=`$ $`-{\displaystyle \frac{1}{2}}\left[{\displaystyle \frac{\partial ^2\chi }{\partial r^2}}+{\displaystyle \frac{\partial ^2\chi }{\partial z^2}}\right]-{\displaystyle \frac{\chi (r,z)}{8r^2}}+{\displaystyle \frac{1}{2}}(r^2+\lambda ^2z^2)\chi (r,z)`$ (20)
$`+`$ $`gN_B{\displaystyle \frac{\left|\chi (r,z)\right|^2}{r}}\chi (r,z)+hn_F(\stackrel{}{r})\chi (r,z),`$ (21)
in harmonic oscillator units. Again the radial function has to vanish on the symmetry axis to remove potential problems of divergences near the origin. This boundary condition is implemented in our numerical procedure by imposing on the radial function a $`\sqrt{r}`$ dependence for small values of $`r`$, fitting to the value of $`\chi `$ at larger distances from the axis. As in the case of the isotropic trapping potential a Split-Step-Fourier technique is used to propagate the boson wavefunction to the ground state. An alternative method for solving the Gross-Pitaevskii equation in a cylindrical configuration, applying an alternating-direction implicit method to compute the derivatives is discussed in .
We shall limit ourselves to the Thomas-Fermi approximation for the fermions, both out of necessity and convenience. Already in the spherically symmetric, effectively 1-dimensional case, the full quantum mechanical analysis is very time consuming and as we shall demonstrate in the next section, the Thomas-Fermi approximation offers the same qualitative features as the exact description. The fermion density is thus evaluated using equation (11) with the external potential (19) and with the boson density obtained from (21).
## IV Results
The main conclusion of is the prediction of a component separation under variation of the strength of the boson-boson and boson-fermion interaction. In the Thomas-Fermi approximation for both components, the density distributions solve the coupled equations
$`V_{ext}(\stackrel{}{r})+gn_B(\stackrel{}{r})+hn_F(\stackrel{}{r})`$ $`=`$ $`\mu `$ (22)
$`{\displaystyle \frac{\mathrm{}^2}{2M}}(6\pi ^2n_F(\stackrel{}{r}))^{2/3}+V_{ext}(\stackrel{}{r})+hn_B(\stackrel{}{r})`$ $`=`$ $`E_F.`$ (23)
In the case of $`N_F/N_B\ll 1`$ the fermions may be neglected in the equation for the bosons. For the fermions we then obtain the simple equation
$`{\displaystyle \frac{\hbar ^2}{2M}}(6\pi ^2n_F(\stackrel{}{r}))^{2/3}+(1-{\displaystyle \frac{h}{g}})V_{ext}(\stackrel{}{r})+{\displaystyle \frac{h}{g}}\mu =E_F,`$ (24)
where the terms proportional to $`h`$ are absent in regions with vanishing $`n_B(\stackrel{}{r})`$. We may distinguish among three different types of solutions: if $`h<g`$ the potential minimum of the fermions is located at the center of the trap, and if their number is small enough, they will constitute a ’core’ entirely enclosed within the Bose condensate. The two quantum gases are truly interpenetrating. If $`h=g`$ the fermions have a constant density throughout the Bose condensate, falling towards zero outside. If $`h>g`$ the effective potential for the fermions is that of an inverted harmonic oscillator having a minimum at the edge of the Bose condensate, where the fermions localize as a ’shell’ wrapped around the condensate.
### A Isotropic trap, quantum treatment
When we replace the Thomas-Fermi approximation by an exact description including the kinetic energy operator for the bosons and treating the fermions quantum mechanically, we expect to observe the same overall behaviour, but with minor corrections. The boson kinetic energy is expected to cause penetration into the fermionic component and a rounding off of the atomic distributions at the boundaries. Fig. 1 shows the spatial distribution of 1000 fermions in a condensate of $`10^6`$ bosons for different values of the boson-fermion interaction strength, $`h`$. The strength of the boson-boson interaction, $`g`$, is chosen to give maximal overlap between the two atomic clouds. In order to have clouds of comparable size, we equate the Thomas-Fermi expressions for the radius of the Bose condensate $`(15N_Bg/4\pi M\omega ^2)^{1/5}`$, and the radius of the zero temperature Fermi gas $`(48N_F)^{1/6}\sqrt{\hbar /M\omega }`$. This gives the condition:
$$g/(\hbar \omega a_0^3)\simeq 21.1N_F^{5/6}/N_B,$$
(25)
which for the parameters of Fig. 1 requires $`g=0.015`$. The coupling $`g`$ differs for different atomic species and this value is in approximate agreement with the coupling strength in the MIT Na setup , and we recall that couplings of arbitrary strength can be achieved via the recently demonstrated modification of the atomic scattering length by external fields . This allows a ’tuning’ of the scattering length through both positive and negative values. Finally we recall that we have insisted on equal masses and trapping potentials for the two components. If these constraints are relaxed, we may more easily vary the values of the scaled interaction strengths.
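The matching condition can be verified in a few lines; the snippet below only evaluates the two Thomas-Fermi radii for a coupling taken from Eq. (25) and is meant as a consistency check, not as the parameter choice used for the figures.

```python
import numpy as np

N_B, N_F = 1.0e6, 1.0e3
g = 21.1 * N_F**(5.0 / 6.0) / N_B                     # coupling from the matching condition (25)
R_B = (15.0 * N_B * g / (4.0 * np.pi))**(1.0 / 5.0)   # Thomas-Fermi condensate radius, in a_0
R_F = (48.0 * N_F)**(1.0 / 6.0)                       # zero-temperature Fermi-gas radius, in a_0
print(R_B, R_F)                                       # equal (about 6 a_0) by construction
```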
The oscillations in the fermion density distribution near the trap center reflect the matter wave modulation of the particles in the outermost shell. Their de Broglie wavelength can be estimated in the Thomas-Fermi approximation from (9): in the center of the trap the particles in the $`\ell =0`$ states experience a vanishing potential for $`h=0`$. As the Thomas-Fermi expression for the Fermi energy of $`N_F`$ fermions in a harmonic potential is $`E_F=(6N_F)^{1/3}\hbar \omega `$ we find for the de Broglie wavelength
$$\lambda _{DB}=\frac{2\pi }{k_F(0)}=\frac{\sqrt{2}\pi a_0}{(6N_F)^{1/6}}\approx a_0,$$
(26)
an estimate that is reproduced by the data, see inset.
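The quoted estimate is simple arithmetic; for example:

```python
import numpy as np

N_F = 1000
lam_dB = np.sqrt(2.0) * np.pi / (6.0 * N_F)**(1.0 / 6.0)   # Eq. (26), in units of a_0
print(lam_dB)                                              # about 1.04 oscillator lengths
```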
We now turn to the case of equal numbers of bosons and fermions. The influence of the inter-species interaction grows as the number of fermions is increased, with dramatic effects on the atomic distributions, as we shall demonstrate. We study the case of $`10^6`$ fermions, and the same number of bosons with an interaction strength of $`g=2.11\hbar \omega a_0^3`$. We again expect that for certain critical parameters, the components find it energetically favorable to separate into two distinct phases, but this time bosons are expelled from the trap center, minimizing their internal interaction energy by spreading in a ’shell’ around a fermionic bubble. Figs. 2 and 3 present our results. The essential features are again the spatial separation of the two components, this time manifesting itself in the exclusion of the bosonic component from the trap center and the existence of a constant fermion density through the boson distribution for $`h=g`$. For a different choice of parameters, for example by letting the fermions be trapped by a weaker potential, we are also capable of producing a multi-layered structure with fermions residing on both sides of the bosons.
We notice that as the bosons are expelled from the center of the trap, forming a ’mantle’ around the fermions, the fermionic component is compressed, having a higher peak density and covering a smaller portion of the trapping volume. A similar behavior has been noted for bi-condensate systems . One of the essential features predicted in the Thomas-Fermi approximation is the existence of a ’plateau’ of constant fermion density through the boson distribution for $`h=g`$. As illustrated by Fig. 3, which is just a magnification of the central parts of Fig. 2e, this phenomenon also appears in our quantum mechanical treatment, although with the parameters chosen it does not involve quite as many particles as obtained from the semi-classical calculations in .
It is interesting to compare the above-mentioned results with those obtained by treating the fermions in the Thomas-Fermi approximation. This is done in Fig. 4 for $`N_B=N_F=10^6`$, $`h=g=2.11\hbar \omega a_0^3`$, and we note that the semi-classical description gives a qualitatively correct picture, in that it reliably predicts the phase separation. Thus it is reasonable to use this approximate treatment of the fermions in the anisotropic case, where the exact description is too cumbersome.
### B Anisotropic trap
We now turn our attention to the anisotropic potentials, where we will use only the semi-classical Thomas-Fermi approximation for the fermion density. We aim to reveal similar variations in the ground state density profiles as for the isotropic trap, but going to higher dimensions we now have the opportunity to investigate the phenomenon of spatial symmetry breaking. Intuitively, we assume that for critical parameters it may be preferable for the two components to break mirror symmetry ($`z\rightarrow -z`$), thereby minimizing their mutual interaction, especially in elongated traps. Such a behavior has been predicted by Öhberg and Stenholm for bi-condensates in two dimensions .
It remains to be demonstrated, though, that the features described in the case of the isotropic trap are still present when we consider the anisotropic scenario relevant to current experimental setups. We present in Figs. 5 and 6 the analog of Fig. 1 with the same choice of parameters and $`\lambda =1/\sqrt{8}`$, *i.e.* the inverse of the value for the current traps which have the strongest confinement along the $`z`$-axis. We notice the appearance of the same qualitative features as in the isotropic trap, that is component separation for $`h>g`$ and a plateau of constant fermion density for $`h=g`$.
The $`10^6`$ bosons are in the condensate which is unaffected in form and location by the presence of the relatively few fermions. Not shown in Figs. 5 and 6 is the distribution of fermions for $`h`$ smaller than $`g`$. In this case the fermionic component overlaps the boson cloud at the center of the trap.
To address the issue of symmetry breaking we adopt the same procedure as Öhberg and Stenholm . This offers only suggestive evidence that symmetry breaking may occur. To investigate this behavior correctly one must use an altogether different approach, minimizing the energy functional to find the ground-state density profile . The point is that the solutions of the Gross-Pitaevskii equation are stationary points of the energy functional, not necessarily corresponding to minima. They may therefore be unstable in certain parameter regions. It is possible, though, to single out the more stable of two configurations by comparing their total energies, since the energy is minimized in equilibrium.
The total energy functional of the two-component system is a sum of four terms
$$E=T_B+T_F+V_{ext}+V_{int}.$$
(27)
The first term is the boson kinetic energy
$$T_B=\int d^3r\frac{\hbar ^2}{2M}|\nabla \psi (\stackrel{}{r})|^2.$$
(28)
As a fermion with wave number $`\stackrel{}{k}(\stackrel{}{r})`$ has a kinetic energy of $`\mathrm{}^2k^2/2M`$, the total fermionic contribution to the kinetic energy is found by integrating this local term over all of phase-space, weighted by the phase-space density, $`1/(2\pi )^3`$,
$`T_F`$ $`=`$ $`{\displaystyle \int \frac{d^3r}{(2\pi )^3}\int _0^{k_F(\stackrel{}{r})}d^3k\frac{\hbar ^2k^2}{2M}}`$ (29)
$`=`$ $`{\displaystyle \int \frac{d^3r}{2\pi ^2}\frac{\hbar ^2}{10M}\left[6\pi ^2n_F(\stackrel{}{r})\right]^{5/3}}.`$ (30)
Calculating the potential energy terms is easy, as they involve only integrals over the atomic densities
$`V_{ext}`$ $`=`$ $`{\displaystyle \int d^3r\frac{1}{2}M\omega ^2r^2\left[n_B(\stackrel{}{r})+n_F(\stackrel{}{r})\right]}`$ (31)
$`V_{int}`$ $`=`$ $`{\displaystyle \int d^3rn_B(\stackrel{}{r})\left[gn_B(\stackrel{}{r})+hn_F(\stackrel{}{r})\right]}.`$ (32)
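For reference, the four contributions can be accumulated on a cylindrical $`(r,z)`$ grid as sketched below. This is our own illustration only: it assumes oscillator units, a condensate wavefunction normalized so that $`n_B=|\psi |^2`$, and it uses the anisotropic trap of Eq. (19) in place of the isotropic potential written in Eq. (31).

```python
import numpy as np

def total_energy(r, z, psi, n_F, g, h, lam):
    """Terms of the energy functional (27)-(32) on a uniform (r, z) grid."""
    dr, dz = r[1] - r[0], z[1] - z[0]
    R, Z = np.meshgrid(r, z, indexing="ij")
    weight = 2.0 * np.pi * R                              # d^3r = 2*pi*r dr dz (axial symmetry)
    integrate = lambda f: np.sum(f * weight) * dr * dz
    n_B = np.abs(psi)**2
    dpsi_r = np.gradient(psi, dr, axis=0)
    dpsi_z = np.gradient(psi, dz, axis=1)
    T_B = integrate(0.5 * (np.abs(dpsi_r)**2 + np.abs(dpsi_z)**2))               # Eq. (28)
    T_F = integrate((6.0 * np.pi**2 * n_F)**(5.0 / 3.0) / (20.0 * np.pi**2))     # Eq. (30)
    V_ext = integrate(0.5 * (R**2 + lam**2 * Z**2) * (n_B + n_F))                # Eq. (31) with (19)
    V_int = integrate(n_B * (g * n_B + h * n_F))                                 # Eq. (32)
    return T_B, T_F, V_ext, V_int
```

Comparing the summed output for two converged configurations is then the energy test described above.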
We have chosen the number of atoms to be $`N_B=N_F=10^3`$, while the asymmetry parameter is still set to $`\lambda =1/\sqrt{8}`$. The interaction parameters are $`g=6.67\hbar \omega a_0^3`$ and $`h=5g`$.
Starting the iteration with two well separated clouds displaced along the cylinder axis, *i.e.* along the direction of the weaker trapping potential, the calculation converges to a situation where the fermions localize on both sides of a central concentration of the Bose condensate: a ’boson-burger’, see Fig. 7. Initiating the calculation with two overlapping clouds in the center of the trap results in just the reversed situation: a ’fermion-burger’, consisting of a central fermionic part surrounded on two sides by bosons, but this configuration has a larger energy. The ’boson-burger’ seems to be the stable solution.
In Fig. 8 we show the spatial distribution of 5000 fermions and 1000 bosons. The particles feel the same trapping potential as in Fig. 7, and the interaction strengths are kept unchanged. This configuration is the result when the starting point of the calculation is two separated clouds. When we start by placing both species at the center of the trap we achieve again the ’fermion-burger’, but at a higher energy. Thus we conclude that in this region of parameter space the system is unstable against breaking of the reflection symmetry.
We note that our approach provides two degenerate symmetry-broken states, the one in Fig. 8 and its mirror image in the $`xy`$-plane. Going beyond our theoretical treatment (Hartree), we may construct superpositions of these two macroscopic states which do not break the spatial symmetry. One of these states will have a lower energy, but such a ’Schrödinger-cat’ state is exceedingly complicated to prepare, *cf.* the discussion in . Thus the symmetry-broken solution is most likely to be observed in an experiment.
## V Conclusion
In this paper we have investigated the zero temperature ground state of a mixture of boson and fermion gases in both isotropic and anisotropic trapping potentials. We have addressed the issue of component separation using numerical techniques to solve coupled equations for the spatial density of the two species. Our calculations have confirmed and expanded upon the results of a previous paper , which treated the problem only within the Thomas-Fermi approximation for both components and which analyzed only the case of an isotropic trap. We have confirmed the existence of three distinct states of the system under variation of the ratio of the interaction strengths $`h/g`$: For small values of this parameter the gases are interpenetrating, overlapping throughout the occupied volume of the trap, as their mutual repulsion is not strong enough to cause separation. When the coupling strength $`h`$ exceeds the strength of the boson-boson interaction one of the species is expelled from the center of the trap. The spatial configuration in this case depends on the symmetry of the trapping potential. In an isotropic trap the separated phase is rotationally symmetric: the excluded component constitutes a spherical shell wrapped around a centrally compressed bulk. The anisotropic trap however has a parameter region where a breaking of symmetry ($`z\rightarrow -z`$) may occur, and we have demonstrated such configurations. In the limiting case $`h=g`$ there exists the possibility for the fermions to have a constant spatial density where the bosons are localized.
An aspect of this work is the availability of an almost isolated degenerate Fermi gas through the complete separation of the two species. The trapped, degenerate Fermi gas is interesting in view of the possibility of a BCS transition when two spin states are trapped simultaneously and because of the analogies between this system and both atomic nuclei and the interior of neutron stars.
The details of sympathetically cooling the Fermi gas to the degeneracy level through thermal contact with the Bose condensate are of course of great importance in further research . In general the investigation of the cooling ability of the condensate should not be restricted to fermionic impurities. In view of the recent trapping of simple molecules in both optical and magnetic potentials, also more complex solutes with several internal degrees of freedom pose an interesting challenge for future research.
Another direction worth noticing is the prospect of trapping a boson-fermion mixture in the periodic potential of an optical lattice , both in its own right and as a study of solid state phenomena. With quantum gases well beyond the degeneracy level a complete filling of the potential wells may well be expected .
Finally it should be mentioned that in this work we have concentrated solely on systems with a positive coupling strength $`h`$. Allowing the interaction between the species to become attractive is known to induce a dramatic change in the macroscopic behavior of the system as it becomes unstable against collapse for large negative values of $`h`$ . We are currently setting up calculations to investigate this phenomenon in detail using the numerical procedure developed in this work.
# Ab Initio Calculation of Spin Gap Behavior in CaV4O9
## Abstract
Second neighbor dominated exchange coupling in CaV<sub>4</sub>O<sub>9</sub> has been obtained from ab initio density functional (DF) calculations. A DF-based self-consistent atomic deformation model reveals that the nearest neighbor coupling is small due to strong cancellation among the various superexchange processes. Exact diagonalization of the predicted Heisenberg model yields spin-gap behavior in good agreement with experiment. The model is refined by fitting to the experimental susceptibility. The resulting model agrees very well with the experimental susceptibility and triplet dispersion.
CaV<sub>4</sub>O<sub>9</sub> was the first two-dimensional system observed to enter a low-temperature quantum-disordered phase with a spin gap $`\mathrm{\Delta }\approx 110`$ K. The gap was first apparent in its susceptibility, which vanishes at low temperatures as $`\chi (T\rightarrow 0)\propto \mathrm{exp}(-\mathrm{\Delta }/kT)`$ , and was observed directly in the dispersion of triplet spin excitations ($`\mathrm{\Omega }_Q`$) measured by neutron scattering. This unexpected behavior has stimulated considerable theoretical study of the exchange couplings between S=$`\frac{1}{2}`$ spins on the V lattice using Heisenberg models .
CaV<sub>4</sub>O<sub>9</sub> is a layered compound—the interlayer distance is sufficiently large to make interlayer V-V coupling negligible. Within a layer, the V atoms form a $`\frac{1}{5}`$-depleted square lattice shown as the circles in Fig. 1 . The lattice was originally viewed as an array of square “plaquettes” of V ions (e.g., 1-2-3-4 in Fig. 1) tending toward singlet formation since isolated plaquettes have a singlet ground state. Examination of the structure however suggests intra- and inter-plaquette nearest neighbor V-V coupling should be similar, so the limit of isolated plaquettes is not realistic.
Self-consistent electronic structure work identified the V<sup>4+</sup> spin orbital as $`d_{xy}`$, which implied that it was a larger square of V ions, the “metaplaquette,” where singlet formation arises. Fitting Heisenberg Hamiltonians to the measured dispersion of the triplet excitations confirmed that the dominant second neighbor exchange coupling is crucial to account for the shape of $`\mathrm{\Omega }_Q`$.
The complete Heisenberg Hamiltonian for CaV<sub>4</sub>O<sub>9</sub> has four different coupling constants: nearest-neighbor ($`nn`$) and next-nearest-neighbor ($`nnn`$) couplings and, for each of these, intra- and inter-plaquette couplings. In the notation of Gelfand et al., the Hamiltonian is given by
$`H`$ $`=`$ $`J_1{\displaystyle \underset{nn}{\sum }}𝐒_i\cdot 𝐒_j+J_1^{}{\displaystyle \underset{nn^{}}{\sum }}𝐒_i\cdot 𝐒_j`$ (1)
$`+`$ $`J_2{\displaystyle \underset{nnn}{\sum }}𝐒_i\cdot 𝐒_j+J_2^{}{\displaystyle \underset{nnn^{}}{\sum }}𝐒_i\cdot 𝐒_j,`$ (2)
where $`𝐒_i`$ denotes the spin-$`\frac{1}{2}`$ operator on site $`i`$. The $`nn`$ sums run over nearest-neighbor bonds and the $`nnn`$ sums run over next-nearest-neighbor bonds. Unprimed sums connect V’s in the same plaquette, while primed sums connect V’s in different plaquettes. The four couplings are drawn in different line styles in Fig. 1.
In this Letter we show that the spin gap behavior of CaV<sub>4</sub>O<sub>9</sub>, even considering its complex structure with eight very low symmetry V<sup>4+</sup> ions in the primitive cell, can be calculated in ab initio fashion. Our work has three separate aspects. 1) Local spin density approximation (LSDA) calculations are used to obtain energies for various magnetic configurations. The resultant exchange interactions are obtained by fitting these energies to the mean-field Heisenberg model as described below. 2) An approximate but physically motivated local orbital method called the self-consistent atomic deformation (SCAD) method is used to provide explicit local orbitals, eigenvalues, and hopping integrals for calculating the exchange interactions from perturbation theory. This method reveals that the $`nn`$ interactions are not intrinsically small, but the net value of the superexchange coupling is small due to cancellations among various fourth-order processes. It also indicates that direct V-V exchange coupling is important. 3) The Heisenberg Hamiltonian is solved using exact diagonalization techniques on finite periodic clusters. Spin gap behavior is obtained, and $`\chi `$(T) is similar to the data. The Heisenberg couplings are refined by fitting to $`\chi (T)`$. The resulting Hamiltonian agrees well with $`\chi (T)`$ and with the triplet dispersion determined from neutron scattering.
The LSDA calculations of the energy for various magnetic configurations were more precise extensions of previous work on CaV<sub>4</sub>O<sub>9</sub> . The magnetism of the V ion is found to be robust, allowing us to break the spin symmetry in any manner we choose and obtain the energy from a self-consistent calculation. The symmetry of the non-magnetic state is initially broken as desired by applying the necessary local magnetic fields to the V ions. The seven configurations we have chosen include the ferromagnetic (FM) state, one ferrimagnetic (FiM) state, and five antiferromagnetic (AF) states with zero net spin. These AF states include the Néel state, a state in which FM plaquettes are antialigned (FMPL), and a state in which the metaplaquettes are aligned antiferromagnetically (AFMP). The configurations, given explicitly in Table I, were chosen either because of their physical relevance (AFMP was anticipated to be lowest in energy, as found) or computational considerations such as retaining inversion symmetry.
The resulting energies were fit to the mean-field Heisenberg model, which contains simply the $`\mathrm{S}_i^z`$ or Ising terms of the full Hamiltonian (2), to determine the four coupling constants. The six energy differences lead to six conditions on the four $`J`$s, and a least-squares fit gives the values listed as LSDA in Table II, each with a fitting uncertainty of about 1 meV. Since both nearest and next nearest couplings are AF in sign, there is a great deal of frustration in the magnetic system. The large value of $`J_2^{}`$ indicates that singlet formation on the metaplaquette is the driving force for the spin gap.
To understand how these values of the exchange parameters arise, we evaluate the fourth-order expressions for the exchange constants, using an approximate but parameter-free method based on the SCAD method. For each coupling constant in CaV<sub>4</sub>O<sub>9</sub>, we focus on the relevant clusters for each coupling. The $`nnn`$ interactions require a V<sub>2</sub>O cluster with two V ions (each with one relevant orbital) and one O in between. The $`nn`$ exchange interactions require a V<sub>2</sub>O<sub>2</sub> cluster. All three $`2p`$ orbitals in each O are relevant, since the low symmetry makes them non-degenerate and oriented in directions determined not by symmetry but by electronic interactions.
We neglect the Hubbard $`U`$ and Hund’s rule coupling on the O ions. In what follows, $`U`$ is the V on-site repulsion, $`ϵ_V`$ and $`ϵ_\alpha `$ are site energies of the V and $`\alpha `$-th O orbitals, and $`t_{i\alpha }`$ is the hopping amplitude between the $`i`$-th V and the $`\alpha `$-th O orbital. Defining the energy denominators $`\mathrm{\Delta }_\alpha =U+ϵ_Vϵ_\alpha `$ simplifies the expressions.
The initial state has each O orbital doubly filled and each V with one electron. The perturbation theory is given by three fourth-order terms and the direct second-order V-V term:
$`J`$ $`=`$ $`j_1+j_2+j_3+j_d`$ (3)
$`=`$ $`{\displaystyle \frac{4}{U}}\left({\displaystyle \underset{\alpha }{\sum }}{\displaystyle \frac{t_{1\alpha }t_{2\alpha }}{\mathrm{\Delta }_\alpha }}\right)^2+4{\displaystyle \underset{\alpha }{\sum }}{\displaystyle \frac{(t_{1\alpha }t_{2\alpha })^2}{\mathrm{\Delta }_\alpha ^3}}`$ (4)
$`+`$ $`4{\displaystyle \underset{\alpha <\beta }{\sum }}{\displaystyle \frac{t_{1\alpha }t_{2\alpha }t_{1\beta }t_{2\beta }}{\mathrm{\Delta }_\alpha +\mathrm{\Delta }_\beta }}\left({\displaystyle \frac{1}{\mathrm{\Delta }_\alpha }}+{\displaystyle \frac{1}{\mathrm{\Delta }_\beta }}\right)^2+{\displaystyle \frac{4t_{12}^2}{U}}`$ (5)
In the $`nnn`$ case, $`\alpha `$ and $`\beta `$ sum over the three orbitals in the single oxygen atom. In the $`nn`$ case, $`\alpha `$ and $`\beta `$ sum over the six orbitals in both oxygen atoms. The first three terms in (5) can be categorized by their configurations after the second hop of the four-hop process: 1) One vanadium empty; 2) One oxygen orbital empty; 3) Two oxygen orbitals half filled. The last term has an extra factor of two because it arises twice: the total spin singlet case is reduced in energy and the total spin triplet is increased by the same amount. The latter picks up a minus sign due to electron exchange.
This expression is evaluated with the SCAD model, which expresses the total density $`n(r)`$ as a sum over localized densities $`|\varphi _\alpha ^{(i)}(𝐫𝐑_i)|^2`$ centered at the atomic sites $`𝐑_i`$ . The orbitals $`\varphi _\alpha ^{(i)}`$ are solutions to atom-centered one-electron Hamiltonians $`H_i`$ for each site. The potentials in $`H_i`$ are determined self-consistently from the expression for the functional derivative of the total energy. It includes a local approximation for exchange and correlation energy and the Thomas-Fermi function for kinetic energy of overlapping densities.
Each V ion has the lowest of its five 3d levels occupied by a single electron, giving the V<sup>4+</sup>, O<sup>2-</sup> ionic description. $`U\approx 3.5`$ eV was computed by minimizing the SCAD energy subject to the constraint that one V ion has its charge increased by unity. The electron comes mainly from the other V ions with only a minor portion coming from the nearby O ions.
The matrix elements, $`t_{ij}=\langle \psi _i|H|\psi _j\rangle `$, require the full Hamiltonian $`H`$ and orthogonalized orbitals $`\psi `$. The $`\psi `$’s are obtained from the SCAD orbitals using Löwdin’s method , and $`H`$ is determined from the site centered SCAD Hamiltonians by removing the kinetic energy overlap contributions from the latter. This gives expressions for $`H`$ that differ in the site selected for spherical harmonic expansion of the potential. We find the two possibilities, $`t_{ij}`$ and $`t_{ji}`$, may differ by $`\sim `$20%, which leads to a much larger uncertainty in the fourth-order $`J`$’s. Since the vanadium sites of a given pair of V ions are equivalent by symmetry, the direct interaction, $`j_d`$, has no such uncertainty. To be consistent with the direct interaction calculation, we use the vanadium-site-expanded potentials for evaluating matrix elements between oxygen-vanadium pairs. The net values obtained (labelled SCAD in Table II) agree rather well with those derived from LSDA energies for $`J_1`$, $`J_2`$, and $`J_2^{}`$. The close agreement may be fortuitous in view of the uncertainties mentioned above and the approximations inherent in the SCAD method. Nevertheless, we believe certain qualitative features of the SCAD results are real: 1) The values for $`J_1`$ and $`J_1^{}`$ result primarily from $`j_d`$, with relatively small contributions from fourth-order terms due to cancellation within $`j_1`$ and between $`j_2`$ and $`j_3`$. 2) The value for $`J_2^{}`$, the largest coupling, is dominated by a single term in $`j_1`$, resulting from V overlap with the middle O 2p level.
For each set of four coupling constants, we calculated the uniform susceptibility of the Hamiltonian (2) on periodic 20-spin clusters. The susceptibility is given by:
$$\chi (T)=\frac{n(g\mu _B)^2}{Nk_BT}\underset{ij}{\sum }\langle S_i^zS_j^z\rangle ,$$
(6)
where $`n`$ is the number of V atoms per gram and $`N`$ is the number of sites in the cluster. We take $`g=1.67`$ for all plots. This was determined from the fit to the experimental magnetic susceptibility data described below.
To evaluate (6), we calculate all eigenvalues of the Hamiltonian—eigenvectors are not required. We block-diagonalize the Hamiltonian with all possible symmetries: translations, rotations, S, and $`S_z`$ . The blocks are left with no degeneracies, so the eigenvalues are calculated very efficiently using the Lanczos algorithm with no reorthogonalization developed by Cullum and Willoughby. This allows $`\chi `$ to be calculated exactly at all temperatures using one Lanczos run for each symmetry sector. The Hamiltonian for the 20-spin cluster has blocks as large as 36950. Within each block at least the 400 lowest and highest eigenvalues are calculated, and an analytic density of states is assumed for the middle eigenvalues. This technique will be described elsewhere.
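In practice Eq. (6) reduces to a thermal average of $`(S_{tot}^z)^2`$ over the computed spectrum. The sketch below assumes the eigenvalues are supplied one per spin multiplet together with the total spin $`S`$, and sets $`k_B=\mu _B=1`$; it only illustrates the bookkeeping, not the Lanczos machinery itself.

```python
import numpy as np

def susceptibility(energies, spins, temperatures, n_per_gram, N_sites, g=1.67):
    """Uniform susceptibility of Eq. (6): chi = n (g mu_B)^2 <(S^z_tot)^2> / (N k_B T)."""
    E = np.asarray(energies) - np.min(energies)     # shift to avoid overflow in the exponentials
    S = np.asarray(spins)
    deg = 2.0 * S + 1.0                             # multiplet degeneracy
    sz2 = S * (S + 1.0) * deg / 3.0                 # sum of m^2 over each multiplet
    chi = []
    for T in temperatures:
        w = np.exp(-E / T)
        chi.append(n_per_gram * g**2 * np.sum(sz2 * w) / (N_sites * T * np.sum(deg * w)))
    return np.array(chi)
```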
The susceptibility of the full Hamiltonian (2) calculated with each set of coupling constants in Table II is shown in Fig. 2. The experimental susceptibility of Taniguchi et al. is shown for comparison. All curves exhibit a spin gap, as evidenced by their low temperature behavior, $`\chi (T\rightarrow 0)\propto \mathrm{exp}(-\mathrm{\Delta }/kT)`$, where $`\mathrm{\Delta }`$ is the gap. Both the LSDA and SCAD approaches overestimate the gap, indicating that the calculated coupling constants are too large. The coupling constants deduced from neutron scattering are also shown .
Also shown in Fig. 2 is a curve generated using the coupling constants obtained from a least-squares fit of the susceptibility to the experimental results. In the fitting procedure, we allow the $`g`$-value in eq. (6) and all four $`J`$’s to vary. At the best fit, we obtain the coupling constants listed as “Fit” in Table II and shown as the line thicknesses in Fig. 1. We find $`g=1.67`$, which is smaller than the $`g`$-value indicated by ESR measurements . Near the minimum, the fitting function is quadratic. The eigenvalues of the Hessian (scaled by an arbitrary constant) are 1, 0.046, 0.013, and 0.00039. The smallness of the last eigenvalue indicates that in the $`\delta \{J_1,J_1^{},J_2,J_2^{}\}=\{0.09,0.57,0.81,0.09\}`$ direction from the minimum, the least-squares fit is very soft.
The 20-spin cluster is sufficiently large compared with the correlation length to describe the infinite system accurately. The minimum triplet gap hardly varies between 20 and 32-spin clusters: $`\mathrm{\Delta }_{20}=9.92`$ meV while $`\mathrm{\Delta }_{32}=10.02`$ meV for the Fit Hamiltonian.
Fig. 3 shows the triplet dispersion $`\mathrm{\Omega }_Q`$ of the LSDA, SCAD, and susceptibility-fit coupling constants calculated with the expansion in Ref. . Since the LSDA and SCAD coupling constants overestimate the gap, we rescaled their $`J`$’s by 0.58 and 0.65, respectively. Both the Fit and rescaled LSDA $`\mathrm{\Omega }_Q`$ agree with the neutron scattering data reasonably well; in particular, they correctly have minima at $`Q=(0,0)`$.
To conclude, we have shown that the quantum-disordered phase in CaV<sub>4</sub>O<sub>9</sub> can be predicted in ab initio fashion. We calculated the coupling constants of the Heisenberg Hamiltonian for CaV<sub>4</sub>O<sub>9</sub> in two very different first-principles approaches. In both methods, the strongest coupling is found between next-nearest-neighbor V atoms on metaplaquettes—the weak coupling between nearest-neighbor V’s results from the cancellation among superexchange processes. The uniform magnetic susceptibility for each set of coupling constants is calculated using a novel finite-temperature exact diagonalization technique, which shows the Hamiltonians determined from both ab initio approaches have quantum-disordered phases. The Hamiltonian that best fits the experimental susceptibility is calculated, and the agreement is remarkable. Finally the triplet dispersion of the ab initio and best susceptibility-fit Hamiltonians are shown to agree well with the neutron scattering data.
We thank Z. Weihong for the code to calculate the curves in Fig. 3 and N.E. Bonesteel, J.L. Feldman, R.E. Rudd, M. Sato, R.R.P. Singh, and C.C. Wan for stimulating conversations. This work was supported by the Office of Naval Research. C.S.H was supported by the National Research Council, and W.E.P. by NSF Grant DMR-9802076. Computations were done at the Arctic Region Supercomputing Center and at the DoD Major Shared Resource Centers at NAVOCEANO and CEWES.
# GeV emission from the nearby radio galaxy Centaurus A
## 1 Introduction
Observations in the 30–10000 MeV energy range by the high-energy $`\gamma `$-ray telescope EGRET on board the Compton Observatory (CGRO) have shown the presence of a class of $`\gamma `$-ray bright blazars. Blazars are in general characterized by flat radio spectra, rapid time variability at most wavelengths and typically emit the bulk of their bolometric luminosity at $`\gamma `$-ray energies. The recently released 3rd EGRET catalog hart contains 271 $`\gamma `$-ray sources of which 68 are known to be extragalactic (66 blazars, 1 radio galaxy (Cen A) and 1 normal galaxy (LMC)). Thus almost two-thirds of the known $`\gamma `$-ray sources remain unidentified.
We report here the results from the analysis of all available EGRET data ($``$ 10 weeks of on-axis exposure) on the nearby radio galaxy Centaurus A. Centaurus A, at a distance of $``$3.5 Mpc hui , is the closest active galactic nucleus. The proximity of Cen A has made it the subject of numerous studies at many wavelengths from radio to TeV energies. Radio studies have shown the presence of a double-lobed radio morphology sch1 . In the past, the presence of an obscuring dust lane has prevented detailed optical studies of the central nucleus and the inner regions of the jet. However recent NICMOS observations sch2 show extended emission and a bright unresolved central nucleus which may have associated with it a small ($``$40 pc diameter) inclined disk. A one-sided X-ray jet is visible sch3 and is collimated in the direction of the giant radio lobes. At TeV energies Grindlay et al. gr used an early non-imaging Cherenkov system to report the discovery of emission from Cen A during a period of overall high activity at lower frequencies. More recent observations by more sensitive instruments (CANGAROO) provide only upper limits rowell .
Earlier attempts to detect Cen A from individual EGRET observations (typically 2 weeks long) were hampered by weak detection significance which resulted in large positional uncertainty. The possibility of association of the $`\gamma `$-ray excess with another likely candidate, BL Lac object 1312-423 (1.95° away from Cen A), could not be ruled out . The strong Galactic diffuse emission and the larger uncertainties associated with the diffuse model hunter ; sree1 also contributed to the difficulties in confirming the detection early on in the mission fichtel ; nol .
## 2 Results
A likelihood analysis mattox shows a 6.5$`\sigma `$ detection of a point source type excess, positionally coincident with Centaurus A. The average $`>`$100 MeV flux is (13.6$`\pm `$2.5)$`\times `$10<sup>-8</sup> photons cm<sup>-2</sup> s<sup>-1</sup> . The nearby BL Lac object 1312-423 is well outside the 95$`\%`$ confidence contour. Unlike the variability seen at lower energies miyazaki , the emission above 100 MeV appears steady. The lack of variability could arise from the near threshold detection associated with the individual observations. The 30-10000 MeV photon spectrum is well characterized by a single power law of index 2.40$`\pm `$0.28. This is steeper than the average power law spectrum from $`\gamma `$-ray blazars (2.15$`\pm `$0.04) muk and also steeper than the observed extragalactic $`\gamma `$-ray background (2.10$`\pm `$0.03) sree1 . A comparison of the EGRET measurements with OSSE kinzer and COMPTEL steinle data at keV and MeV energies yields a smooth, continuous spectrum that appears to evolve from a power law above 200 keV (index=1.97) and steepen gradually above 1 MeV. The inclusion of significant systematic errors in comparing results from different instruments is small, as evidenced by the good cross-comparison of the single power-law Crab pulsar spectrum across these energy bands ulmer . A systematic search for $`\gamma `$-ray emission from other nearby radio galaxies/Seyferts yielded no significant detection.
## 3 Conclusions
EGRET localization and spectral measurements provide unique confirmation that the source detected by OSSE and COMPTEL is Cen A. Unlike the error regions defined by OSSE and COMPTEL, the nearby XBL 1312-423 (about 2° away from Cen A) lies well outside the EGRET 99$`\%`$ confidence contour. The consistency of the spectrum going from 50 keV to 1 GeV argues favorably for emission from a single source coincident with Cen A.
A detailed analysis of EGRET archival data shows $`>`$100 MeV emission from only the nearest radio galaxy (Cen A). The low $`\gamma `$-ray luminosity of Cen A ($``$10<sup>41</sup> ergs/s, about 10<sup>5</sup> times smaller than the typical $`\gamma `$-ray blazar) if typical of this source class, provides the most likely explanation for the non-detection of more distant members of this source class.
If Cen A is indeed a misaligned blazar bailey this provides new evidence for $`>`$100 MeV emission from radio-loud AGN with jets at large inclination angles ($`>`$60°). This is contrary to the model of Skibo, Dermer & Kinzer sk which suggests a significant cutoff in the spectrum around a few MeV. Assuming a unification model for AGN, and increasing high-energy emission with decreasing inclination angles dermer , the detection of more distant radio-loud AGN with intermediate jet inclination angles can be expected. This provides a new extragalactic source class for future high-energy experiments such as GLAST and VERITAS. Though the intrinsic luminosity is lower than that of other on-axis sources, the significantly larger space density of radio-loud FR-I sources points to a new unresolved source class that could contribute to the extragalactic $`\gamma `$-ray background around 1 MeV steinle . If the mean $`\gamma `$-ray spectrum of this new source class is harder than the power-law spectral index of 2.40$`\pm `$0.28 observed for Cen A, then significant contributions from this new source class are also expected above 10 MeV.
|
no-problem/9901/astro-ph9901389.html
|
ar5iv
|
text
|
# Improved Searches for HI in Three Dwarf Spheroidal Galaxies
## 1 Introduction
Dwarf spheroidal galaxies, the smallest companions of our own Milky Way galaxy, were long thought to be old and dead galaxies devoid of any interstellar medium. They show no sign of current star formation. Furthermore, searches for HI emission from the dwarf spheroidal galaxies (Knapp et al. 1978; Mould et al. 1990; Koribalski, Johnston, & Otrupcek 1994) found no evidence of gas in the galaxies, with one possible exception (Carignan et al. 1998). Optical and UV absorption experiments for Leo I (Bowen et al. 1995, 1997) also detected no gas. Thus, it is commonly assumed that the dwarf spheroidal galaxies have no neutral ISM at all.
However, recent work on color-magnitude diagrams of the Local Group dwarf spheroidals contradicts this picture of old, dead galaxies. Most of the dwarf spheroidals experienced periods of star formation activity at various times from 10 Gyr to 1 Gyr ago (e.g. Smecker-Hane 1997 and references therein; Hurley-Keller, Mateo, & Nemec 1998). The Fornax dwarf spheroidal even seems to contain some very young stars, $``$10<sup>8</sup> yrs old (Stetson, Hesser, & Smecker-Hane 1998). Since star formation requires neutral gas, these facts clearly demonstrate that the spheroidals have had an interstellar medium (ISM) in the past several Gyr or less. This ISM could have existed in the spheroidals for most of their lifetimes, or it could have been captured from some external source such as the Magellanic Stream, high velocity clouds, or a cooling intracluster medium (e.g. Silk, Wyse, & Shields 1987). Regardless of its origin, star formation histories show that the spheroidals had significant amounts of neutral gas in the recent past. Therefore, they might be expected to have an interstellar medium today.
The question of whether there is neutral gas in the dwarf spheroidals has important implications for our understanding of star formation and the evolution of galaxies. If the dwarf spheroidals had neutral gas in the past but they have none now, what happened to the gas? Two popular ideas are that (1) the neutral gas in the spheroidals may have been stripped by interactions with the outer halo of our Galaxy, or that (2) a burst of star formation activity, with attendant supernovae and stellar winds, may have evacuated the neutral gas from the galaxies. Skillman & Bender (1995) discuss some advantages and disadvantages of these ideas. Another possibility might be that if the gas density were lowered, neutral gas could be ionized by the interstellar UV field (e.g. Bland-Hawthorn, Freeman, & Quinn 1997; Bland-Hawthorn 1998). In any case it is clear that the presence or absence of neutral gas in the dwarf spheroidals is an important clue to their history.
The existing data on the HI content of dwarf spheroidals is not complete enough to answer the question of whether they have any neutral gas (Section 2). Therefore, we have conducted new searches for HI in and around the Leo II, Fornax, and Draco dwarf spheroidal galaxies using the VLA’s D configuration. These observations cover larger regions than before and limit possible confusion with foreground Galactic HI. Subsequent sections describe what was previously known about the HI contents of dwarf spheroidals, the current observations, and the implications of these new data.
## 2 Existing Data on the HI Content of Dwarf Spheroidals
It is commonly repeated that dwarf spheroidals have no HI, but an examination of the published results shows that the data are simply not adequate to make this conclusion. The basic problem is incompleteness. HI surveys (Hartmann 1996; Huchtmeier & Richter 1986) have covered a large fraction of the Northern sky or have searched many galaxies, but at poor sensitivity. Searches for optical and UV absorption lines in front of quasars near Leo I (Bowen et al. 1995, 1996) provide extremely low column density limits, but they only probe three points at radii of 3, 5, and 10 times the tidal radius of the galaxy. (In this paper, core and tidal radii for the dwarf spheroidals are taken from the work of Irwin & Hatzidimitriou (1995).)
Existing searches for HI in dwarf spheroidal galaxies suffer from two problems. The major problem with existing single-dish observations is that they searched only a small fraction of the galaxies’ areas. In the case of the Draco and Ursa Minor spheroidals (Knapp et al. 1978) a beam with a half-power radius of 5′ was centered on galaxies whose core radii (semi-major axes) are 9′ and 16′. Therefore, less than one third of the area inside the core radii of these galaxies has been observed. Similar arguments apply to the Sagittarius dwarf spheroidal, Fornax, and Carina (Koribalski et al. 1994; Knapp et al. 1978; Mould et al. 1990). The small HI mass limits given for these galaxies are commonly misunderstood and misused because they apply only to the small area which has been observed, not to the entire galaxy. Neutral gas could be present in the unobserved parts of the optical galaxies.
Gas could also be present in the outer parts beyond the optical galaxies. Blow-out models (e.g. Dekel & Silk 1986; De Young & Heckman 1994; Mac Low & Ferrara 1997) provide some reasons why gas might be found in the outer parts of quiescent dwarf galaxies instead of in the center. Furthermore, several dwarf galaxies are indeed observed to have HI minima centered on the galaxy and HI rings (or partial rings) outside the optical galaxy. These include M81 dwarf A (Sargent, Sancisi, & Lo 1983; Puche & Westpfahl 1994), Sag DIG (Young & Lo 1997), and even the dwarf spheroidal Sculptor (Carignan et al. 1998). It is not clear whether the blow-out models mentioned above explain the observed HI rings. In any case it cannot be assumed that all of the gas should be in the centers of the dwarfs, where previous searches have been made.
Another problem with single-dish observations of the dwarf spheroidals is that HI gas at velocities close to 0 km s<sup>-1</sup> could have been overlooked because of confusion with Galactic HI. The best example of how this can happen is the case of the Phoenix dwarf (sometimes referred to as a “transition” galaxy between irregulars and spheroidals). Phoenix was observed with a single-dish telescope (Carignan et al. 1991) and subsequently with the VLA (Young & Lo 1997). The VLA observations detected a cloud of HI at $`23`$ km s<sup>-1</sup>, but the single-dish observations did not detect the cloud because this velocity lies under partially-subtracted Galactic HI which the VLA resolves away. More recent interferometric observations of Phoenix show even more HI than Young & Lo (1997) found (Carignan 1998, private communication). Knapp’s (1978) observations of Fornax and Leo II might also suffer from confusion with Galactic HI. The problem is exacerbated by the fact that the velocity of Leo II was not known at the time the observations were made. Leo II’s optical velocity is +76 $`\pm `$ 1 km s<sup>-1</sup> (Vogt et al. 1995), but at that position the Galactic HI extends out to velocities of almost +120 km s<sup>-1</sup> (Young & Gallagher 1998). Thus, the detection efforts made to date are limited and cannot give a conclusive answer about whether the spheroidals really have no HI.
## 3 Observations
We address some of the problems of previous observations by using the NRAO Very Large Array (VLA; the National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.) to search for HI emission in and around the Fornax, Leo II, and Draco dwarf spheroidal galaxies. For Draco, these VLA observations cover a much larger fraction of the galaxy than has been previously searched. For Leo II and Fornax, these VLA observations suffer much less from confusing local HI emission. One reason that there is less confusion in the VLA images is that most of the local HI emission is simply not detected. Unlike a single-dish telescope, the VLA acts as a high-pass spatial filter. Most of the radiated power from foreground Galactic HI is on relatively low spatial frequencies (large angular scales of degrees or greater). Thus, most of the Galactic HI does not appear in the VLA images. In addition, spatial mapping allows us to distinguish gas that is probably not associated with the galaxy.
The observational setups are described in Table 1. The observations were made in the D and DnC configurations in 1997–1998. They cover a bandwidth of 1.56 MHz, which gives a usable velocity range of about 290 km s<sup>-1</sup> centered close to the optical velocity of the galaxy as determined from stellar absorption lines. The velocity resolutions were 2.6 km s<sup>-1</sup>, based on previous experience detecting HI clouds in the vicinity of the Phoenix and Tucana dwarf spheroidals (Young & Lo 1997; Oosterloo et al. 1996). The primary beam of the VLA at 21cm has a full width at half maximum of 31′, i.e. a response of 50% at a radius of 15.5′, and a response of 10% at a radius of 26.4′ (Napier & Rots 1982). The data were mapped using natural weight and again with a tapering weight function which emphasizes large spatial structures, both before and after continuum subtraction. Continuum emission was subtracted directly from the combined dataset using the task UVLIN in the AIPS package. Table 1 gives the positions, velocity ranges covered, beam sizes, noise levels, and column density limits for these observations. The beam linear sizes are computed assuming the distances given in Irwin & Hatzidimitriou (1995).
The phase/pointing centers of the VLA observations, given in Table 1, are quite close to the actual centers of the galaxies. The phase/pointing centers for Leo II and Draco are less than 1′ away from the galaxy centers given in Irwin & Hatzidimitriou (1995). The Fornax dwarf spheroidal is observed to have significant asymmetrical structure, with the peak stellar density about 6′ northeast of the centroid of the lowest isophotes (Stetson et al. 1998). The VLA phase/pointing center is between the peak stellar density and the galaxy centroid, about 2′ southwest of the position of peak stellar density. The center velocities in Table 1 are also within 6 km s<sup>-1</sup> of the most recently determined heliocentric stellar velocities, which are 53 $`\pm `$ 2 km s<sup>-1</sup> for Fornax (Mateo et al. 1991), 76 $`\pm `$ 1 km s<sup>-1</sup> for Leo II (Vogt et al. 1995), and $`-`$294 $`\pm `$ 3 km s<sup>-1</sup> for Draco (Hargreaves et al. 1996).
## 4 Results
We find no evidence for any HI emission or absorption that can be associated with the dwarf spheroidal galaxies. Sensitivity limits for HI emission are given in Table 1 and are typically 5$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the galaxy center, twice that at the VLA half-power point (radius 15.5′), and 5$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ at the VLA 10% power point (radius 26.4′). The column density limits are given as the column density of a 3$`\sigma `$ signal in three consecutive channels. For purposes of comparison, the low column density HI clouds observed near the Sculptor, Phoenix, and Tucana dwarfs peak at 2$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ (Carignan et al. 1998), 4$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ (Young & Lo 1997), and 8$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ (Oosterloo et al. 1996), respectively. We argue in Section 5 that there is not likely to be a significant amount of HI which we have not detected, especially in the centers of the galaxies.
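For orientation, the conversion from channel noise to these column density limits uses the standard 21 cm relations; the sketch below illustrates it with an assumed beam and per-channel noise (Table 1 itself is not reproduced here), chosen so that the result lands near the quoted 5×10<sup>18</sup> cm<sup>-2</sup>.

```python
def nhi_emission_limit(sigma_mjy, bmaj_arcsec, bmin_arcsec, dv_kms, n_chan=3, n_sigma=3):
    """N(HI) for an n_sigma signal spanning n_chan channels, using the standard relations
    T_B = 605.7 * S(mJy/beam) / (bmaj*bmin arcsec^2) at 21 cm and N = 1.823e18 * sum(T_B dv)."""
    t_b = 605.7 * n_sigma * sigma_mjy / (bmaj_arcsec * bmin_arcsec)   # brightness temperature, K
    return 1.823e18 * t_b * n_chan * dv_kms                           # column density, cm^-2

# An assumed 0.7 mJy/beam channel noise and a 60" x 60" beam give roughly 5e18 cm^-2:
print(f"{nhi_emission_limit(0.7, 60.0, 60.0, 2.6):.2e}")
```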
Figures 1, 2, and 3 show isopleth maps of the galaxies (Irwin & Hatzidimitriou 1995) along with the half-power and 10% power circles of the VLA primary beam. Table 1 gives the major axis core radii and tidal radii of these galaxies, taken from Irwin & Hatzidimitriou (1995). For Leo II, the column density limit is 4.3$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the center of the galaxy and 5.2$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the tidal radius of the galaxy. Thus, effectively all of Leo II has been searched at good sensitivity (see also section 5). For Draco, the detection limit is 7.1$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the center of the galaxy, increasing to 8.7$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the (major axis) core radius, and about 7$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$ at the tidal radius. Since the tidal radius of the galaxy is approximately equal to the VLA’s 10% power radius, any HI associated with the galaxy would most likely be within the VLA field of view. However, the sensitivity at Draco’s tidal radius is probably not good enough to exclude the presence of HI there (section 5). In the case of Fornax, the detection limit is 4.6$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the galaxy center and 7.9$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ at the core radius. The galaxy’s tidal radius (71′) is much larger than even the VLA’s 10%-power radius. Thus, there might still be undetected HI somewhere between the core radius and the tidal radius of Fornax or Draco.
We find some HI emission in the data cubes, but it is undoubtedly from foreground Galactic HI. Galactic emission in these VLA images takes the form of large-scale ($`\sim `$20′) positive and negative features at velocities near 0 km s<sup>-1</sup>; the negative features arise because the VLA has resolved out most of the total flux. Figures 4, 5, and 6 show spectra constructed from each data cube and illustrate that the emission features are not associated with the dwarf galaxies in velocity. In the case of the Fornax dwarf the Galactic HI is seen at velocities between 19 and -22 km s<sup>-1</sup> (heliocentric), with greatest intensities at two peaks at about 9 and -4 km s<sup>-1</sup>; the galaxy’s stellar velocity is +53$`\pm `$2 km s<sup>-1</sup> (Mateo et al. 1991). Near Leo II, Galactic HI is observed between velocities of 3 and -59 km s<sup>-1</sup>, with greatest intensities at -2 and -41 km s<sup>-1</sup>, in agreement with a single-dish spectrum of Leo II (Young & Gallagher 1998). The stellar velocity of Leo II is +76$`\pm `$1 km s<sup>-1</sup> (Vogt et al. 1995). No Galactic HI is observed in the Draco field. The VLA has effectively removed contamination from Galactic HI at the velocities of the dwarf galaxies, so we conclude that we have not missed any dwarf galaxy emission which is hiding behind Galactic HI.
There are a number of background continuum sources in each of the VLA fields, but no absorption is detected towards any of them. Table 2 presents the positions, peak flux densities, and optical depth limits (3$`\sigma `$) for the brightest continuum sources in each field. The positions quoted are simply those of the brightest pixels and should not be assumed to be more accurate than 0.5 pixel (5″–10″). The last column also gives the distance between the continuum source and the galaxy center. Since none of the continuum sources are particularly bright, the column density limits in absorption are not as meaningful as the limits in emission. For example, the smallest optical depth limit is 0.11 for a source 22′ away from the center of Draco; for this source, the column density upper limit is $`\mathrm{N}_{\mathrm{HI}}<5.2\times 10^{17}\mathrm{T}_\mathrm{S}\mathrm{cm}^{-2}`$, and a typical spin temperature $`\mathrm{T}_\mathrm{S}`$ of 100 K would give a column density limit of 5$`\times 10^{19}`$ $`\mathrm{cm}^{-2}`$.
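The quoted absorption limit follows from the standard HI absorption relation, $`\mathrm{N}_{\mathrm{HI}}=1.823\times 10^{18}\mathrm{T}_\mathrm{S}\tau \mathrm{\Delta }v`$, if the 3$`\sigma `$ optical-depth limit is taken over one 2.6 km s<sup>-1</sup> channel (the integration interval is our assumption here):

```python
# Sketch: column density limit implied by an optical-depth limit over one channel.
tau_lim = 0.11       # 3-sigma optical depth limit toward the source near Draco
dv = 2.6             # km/s, one channel width (assumed integration interval)

coeff = 1.823e18 * tau_lim * dv      # ~5.2e17 cm^-2 per kelvin of spin temperature
print(coeff, coeff * 100.0)          # ~5.2e17 * T_S, i.e. ~5.2e19 cm^-2 for T_S = 100 K
```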
## 5 Discussion
Interferometers, by their nature, cannot detect very smooth spatial structures. However, it is unlikely that HI in the dwarf spheroidal galaxies has escaped detection by reason of being too smooth for the VLA to detect. The present datasets include baselines as short as 170 $`\lambda `$ (20′) for all three galaxies, and the VLA can be expected to image structures as large as 15′ (Perley 1997). At the adopted distances of these spheroidals, 15′ corresponds to linear sizes of 520 pc (Fornax), 900 pc (Leo II), and 310 pc (Draco). Thus, HI in the spheroidals could be “resolved out” by the VLA only if it was very smooth on scales smaller than at least 300–900 pc. Such a situation would be highly unusual, as every other galaxy which has been observed in HI emission at high resolution shows small scale structures. Kalberla et al. (1985) found structures as small as 0.5 pc – 1 pc in Galactic HI; a mosaic of HI in the Small Magellanic Cloud shows intricate structure down to scales of 30 pc (Staveley-Smith et al. 1997). If HI structures are due in large part to star formation activity, the dwarf spheroidals might indeed have relatively smooth interstellar media. However, the scales involved, at least 300 pc, are so large that the absence of any structure seems a remote possibility. Furthermore, while smoothly distributed gas could not be detected in emission, it could be detected in absorption against the point sources, and no absorption was found.
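The linear scales quoted above are simple small-angle conversions. The distances used in the sketch below are rough values inferred back from the quoted sizes, since Table 1 and the Irwin & Hatzidimitriou (1995) distances are not reproduced here.

```python
import numpy as np

def linear_size_pc(theta_arcmin, distance_kpc):
    """Linear size subtended by an angle at a given distance (small-angle approximation)."""
    theta_rad = theta_arcmin * np.pi / (180.0 * 60.0)
    return distance_kpc * 1.0e3 * theta_rad

# Approximate distances (kpc) inferred back from the quoted 15-arcmin linear sizes.
for name, d_kpc in [("Fornax", 120.0), ("Leo II", 205.0), ("Draco", 72.0)]:
    print(name, round(linear_size_pc(15.0, d_kpc)))   # ~520, ~890, ~310 pc
```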
We consider it possible, but unlikely, that HI could exist in the dwarf spheroidals at column density levels below the sensitivities achieved in these VLA observations, especially in the galaxy centers. HI simply does not seem to exist at low column density levels in the outer parts of galactic systems. Sensitive observations of a spiral galaxy (van Gorkom 1993) show that the HI disk of the spiral cuts off sharply when the HI column density reaches about 10<sup>19</sup> $`\mathrm{cm}^{-2}`$. A similar effect is seen in high velocity clouds, where Colgan et al. (1990) observe a tendency for the HI in the clouds to cut off sharply at column densities below 5$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$. Corbelli & Salpeter (1993a, 1993b) and others have argued that these HI cutoffs are probably caused by ionization by the galactic and/or extragalactic UV radiation field. In this picture, hydrogen could not exist in neutral form in the dwarf spheroidals at column densities below about 10<sup>19</sup> $`\mathrm{cm}^{-2}`$. (And because of the high spatial resolution of these VLA images, 30–75 pc, the column density of any HI should not be diluted much by a small beam filling factor.) The observed column density limit for Leo II is well below 10<sup>19</sup> $`\mathrm{cm}^{-2}`$ even beyond the tidal radius of the galaxy; therefore, it is unlikely that Leo II contains significant amounts of HI. For Draco and Fornax, the column density limits rise above 10<sup>19</sup> $`\mathrm{cm}^{-2}`$ between the core radius and the tidal radius. For these galaxies, it is highly unlikely that there is significant HI within the core radii, but we cannot rule out the presence of HI at column density levels of a few times 10<sup>19</sup> $`\mathrm{cm}^{-2}`$ between the core radius and the tidal radius.
## 6 Implications
The stellar population of Leo II is predominantly made up of stars with ages between 7 and 14 Gyr, and the stars in Draco are at least 10 Gyr old (Smecker-Hane 1997 and references therein; Grillmair et al. 1998). However, recent observations of Fornax (Stetson, Hesser, & Smecker-Hane 1998) indicate that there are a number of young stars in that galaxy, so that the absence of neutral gas in the center of Fornax becomes an interesting puzzle. The photometry of Stetson et al. (1998) reveals a large number of bright blue ($`BR<0`$) stars which are interpreted as a young main sequence with an age of only 100 to 200 million years. These young stars are concentrated in the center of the galaxy, with a distribution much like that of the bulk of the stars in Fornax. Figure 7, which is based on the data of Stetson et al. (1998), shows the distribution of the bright blue stars in Fornax and in the VLA field of view.
The young stars in Fornax are concentrated in the center of the VLA field of view; and as little as 10<sup>8</sup> years ago, these stars must have been associated with neutral gas. We infer two possibilities: either (1) the gas that formed the young stars in Fornax is now ionized or molecular, and has not been detected; and/or (2) the neutral gas and the young stars parted company in the last 10<sup>8</sup> years. Perhaps the neutral gas was ejected from the galaxy, as in the popular “blow-out” models (e.g. Mac Low & Ferrara 1998, and references therein). Since the one-dimensional velocity dispersion of the stars in Fornax is 11$`\pm `$2 km s<sup>-1</sup> (Mateo et al. 1993), gas moving at the escape speed of 38 km s<sup>-1</sup> would reach the outer edge of the VLA field of view (26′ = 920 pc) after only 2.4$`\times 10^7`$ yr. Apparently, enough time has elapsed to get rid of the gas which formed the young stars. If neutral hydrogen existed at some point in the past, and then expanded because of blow-out caused by star formation, its column density could drop significantly and it might now exist in an ionized state at very low emission measure. The presence of these young stars and the apparent absence of neutral gas is a puzzle which we cannot resolve at this time.
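The 2.4$`\times 10^7`$ yr figure quoted above is a straightforward unit conversion (travel time at the escape speed); a minimal check:

```python
# Travel time for gas moving at the escape speed to reach the edge of the VLA field of view.
pc_in_km = 3.086e13
yr_in_s = 3.156e7
d_pc, v_kms = 920.0, 38.0       # 26 arcmin at the distance of Fornax, and the escape speed

t_yr = d_pc * pc_in_km / v_kms / yr_in_s
print(t_yr)                      # ~2.4e7 yr, as quoted in the text
```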
## 7 Summary
We present VLA searches for HI in the Fornax, Leo II, and Draco dwarf spheroidal galaxies. No HI was detected in these galaxies, either in emission or absorption. In all three cases the VLA observations cover larger areas than have been previously searched, and for Fornax and Leo II the new data have the important advantage of removing possible confusion with Galactic HI. For Leo II, the column density limit in emission is 5$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ out to the tidal radius. For Fornax and Draco the column density limits are 4$`\times 10^{18}`$ and 7$`\times 10^{18}`$ $`\mathrm{cm}^{-2}`$ in the galaxy centers, increasing to 10<sup>19</sup> $`\mathrm{cm}^{-2}`$ at points between the core radii and the tidal radii. In the Draco dwarf galaxy we also find HI optical depth limits $`\tau <0.1`$ towards two continuum sources at 1.9 and 2.4 core radii from the center. From these observations we conclude that there is no significant HI within the tidal radius of Leo II or in the centers of Fornax and Draco. It will be necessary to observe still larger areas to determine whether there is HI in the outer parts of Fornax and Draco. However, these observations are much more complete than previous ones, and they close important loopholes in assessing whether or not the spheroidals contain HI.
Thanks to J. Gallagher for helpful discussions, to M. Irwin for providing the isopleth maps of the dwarf spheroidals, and to P. Stetson for providing data on the bright blue stars in Fornax.
# Chiral NN interactions in nuclear matter
## Acknowledgments
The author is very grateful for the support from the SRCSSM, where the main part of this work was done.
# Crossover to Potential Energy Landscape Dominated Dynamics in a Model Glass-forming Liquid
## I Introduction
The dynamical behavior of many physical and biological systems can be considered in terms of the transient localization of the system in basins of potential energy, and transitions between basins. In particular, this approach has received much attention in studies of slow dynamics and the glass transition in supercooled liquids. Here, the strong temperature dependence of transport properties such as the diffusion coefficient and viscosity, and the possible existence of a thermodynamic transition underlying the laboratory glass transition, have been interpreted in terms of the properties of the liquid’s potential energy (or free energy) surface, commonly called the “landscape”.
For a system composed of $`N`$ atoms, the potential energy surface is simply the system’s potential energy plotted as a function of the $`3N`$ particle coordinates in a $`(3N+1)`$-dimensional space. The potential energy surface contains a large number of local minima, termed “inherent structures” by Stillinger and Weber. Each inherent structure is surrounded by a “basin”, which is defined such that a local minimization of the potential energy maps any point in the basin to the inherent structure contained within it. The time evolution of a liquid may be viewed as the motion of a point on the potential energy surface, and thus as a succession of transitions from one basin to another. These transitions are expected to occur differently as the temperature $`T`$ is varied. In particular, Goldstein argued that below a crossover temperature, $`T_x`$, where the shear relaxation time is $`10^{-9}`$ seconds, relaxation is governed by thermally activated crossings of potential energy barriers. The presence of significant energy barriers below $`T_x`$ suggests a clear separation of short-time (vibrational) relaxation within potential energy basins from long-time relaxation due to transitions between basins.
A complementary approach to the dynamics of supercooled liquids is provided by the mode coupling theory (MCT) . The simplest (so-called “ideal”) version of this theory predicts a power-law divergence of relaxation times and the inverse diffusion coefficient, at a critical temperature $`T_c`$. Although a power law provides a reasonable description of the temperature dependence of these quantities above $`T_c`$ in both real and simulated systems, power law behavior breaks down for $`TT_c`$, i.e. the predicted singularity at $`T_c`$ is not observed. This deviation is attributed to the presence of “hopping” motion as a mechanism of relaxation, which is not included in ideal MCT . Consequently, $`T_c`$ is usually estimated by fitting a power law to a relaxation time, taking into account that this fit is expected to break down close to (and below) $`T_c`$.
It was noted by Angell that experimentally it is often found that the shear relaxation time is on the order of $`10^{-9}`$ seconds at the estimated $`T_c`$, leading to the argument that $`T_x\approx T_c`$. The presence of a low temperature regime where barrier crossings dominate the dynamics, and the correspondence of the crossover to that regime with the mode coupling critical temperature $`T_c`$, has also been discussed in the context of mean field theories of certain spin glass models.
The existence of a crossover temperature and corresponding separation of the dynamics can be directly tested with computer simulations, using the concept of inherent structures. In this paper, we map the dynamical evolution of an equilibrated model liquid to a time series of inherent structures for a range of temperatures. In this way, we test the extent to which short-time “intra-basin” relaxation is separable from long-time “inter-basin” relaxation. Our results demonstrate that this separation becomes valid as the system is cooled, and we estimate the crossover temperature $`T_x`$ to be close to the estimated value of $`T_c`$.
## II Inherent Dynamics
In this section we describe the details of our approach, which is sketched in Fig. 1. After equilibration at a given thermodynamic state point, a discrete time series of configurations, $`𝐑(t)`$, is produced by standard molecular dynamics (MD) simulation. Each of the configurations $`𝐑(t)`$ is then mapped to its corresponding inherent structure, $`𝐑^I(t)`$, by locally minimizing the potential energy in configuration space. We refer to this procedure as a “quench”. After quenching the configurations in $`𝐑(t)`$, we have two “parallel” time series of configurations, $`𝐑(t)`$ and $`𝐑^I(t)`$. The time series $`𝐑(t)`$ defines the “true dynamics”, which is simply the usual (Newtonian) MD dynamics. In an analogous way, the time series $`𝐑^I(t)`$ defines the “inherent dynamics”. If a function quantifying some aspect of the true dynamics is denoted by $`f(𝐑(t))`$, then the corresponding function, $`f(𝐑^I(t))`$, of the inherent dynamics is calculated in exactly the same way, except using the time series of inherent structures. For example, the self intermediate scattering function, $`F_s(q,t)`$, and the *inherent* self intermediate scattering function, $`F_s^I(q,t)`$, are defined by
$`F_s(q,t)`$ $`\equiv `$ $`\left\langle \mathrm{cos}\left[𝐪\cdot \left(𝐫_j(t)-𝐫_j(0)\right)\right]\right\rangle \text{ , and}`$ (1)
$`F_s^I(q,t)`$ $`\equiv `$ $`\left\langle \mathrm{cos}\left[𝐪\cdot \left(𝐫_j^I(t)-𝐫_j^I(0)\right)\right]\right\rangle `$ (2)
where $`𝐫_j^I(t)`$ is the position of the $`j`$th particle in the inherent structure $`𝐑^I(t)`$ and $`\langle \mathrm{}\rangle `$ denotes an average over particles $`j`$ and the time origin.
In this paper, we quantitatively compare $`F_s(q,t)`$ and $`F_s^I(q,t)`$ to test whether the dynamics of a binary Lennard-Jones mixture can be separated into vibrations around, and transitions between inherent structures. If so, then $`F_s^I(q,t)`$ describes the relaxation of the liquid as described by $`F_s(q,t)`$, but with the effect of the vibrations removed. We show that this scenario becomes true below a crossover temperature, $`T_x`$, which is close to the lowest temperature simulated in the present work.
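A minimal sketch of how Eqs. (1) and (2) can be evaluated from a trajectory is given below. The array layout, the averaging over three orthogonal wavevectors, and the rounding of $`q`$ to a value commensurate with the periodic box are our own illustrative choices, not a description of the original analysis code.

```python
import numpy as np

def self_isf(pos, q_mag, box_length):
    """Self intermediate scattering function, Eqs. (1)/(2).
    pos: (n_frames, N, 3) array of *unwrapped* coordinates
    (MD positions for F_s, quenched positions for the inherent F_s^I).
    The requested |q| is rounded to the nearest wavevector allowed by the periodic box."""
    n = max(1, int(round(q_mag * box_length / (2.0 * np.pi))))
    q = 2.0 * np.pi * n / box_length
    qvecs = q * np.eye(3)                       # three orthogonal q directions for averaging
    n_frames = pos.shape[0]
    fs = np.zeros(n_frames)
    norm = np.zeros(n_frames)
    for t0 in range(n_frames):                  # average over all time origins
        dr = pos[t0:] - pos[t0]                 # displacements relative to the origin frame
        phase = np.tensordot(dr, qvecs, axes=([2], [1]))   # q . dr for each q direction
        fs[: n_frames - t0] += np.cos(phase).mean(axis=(1, 2))
        norm[: n_frames - t0] += 1.0
    return fs / norm
```

Applying the same function to the quenched time series $`𝐑^I(t)`$ gives $`F_s^I(q,t)`$.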
## III Results
In the following we present results from molecular dynamics simulations of a binary Lennard-Jones mixture in three dimensions, equilibrated at eight different temperatures. The model used for the present simulations has been described previously. The system contains $`251`$ particles of type A and $`249`$ particles of type B interacting via a binary Lennard-Jones potential with parameters $`\sigma _{BB}/\sigma _{AA}=5/6`$, $`\sigma _{AB}=(\sigma _{AA}+\sigma _{BB})/2`$, and $`ϵ_{AA}=ϵ_{AB}=ϵ_{BB}`$. The masses are given by $`m_B/m_A=1/2`$. The length of the sample is $`L=7.28\sigma _{AA}`$ and the potential was cut and shifted at $`2.5\sigma _{\alpha \beta }`$. All quantities are reported in reduced units: $`T`$ in units of $`ϵ_{AA}`$, lengths in units of $`\sigma _{AA}`$ and time in units of $`\tau \equiv (m_B\sigma _{AA}^2/48ϵ)^{1/2}`$ (this definition was misprinted in an earlier reference). Adopting “Argon units” leads to $`\sigma _{AA}=3.4\mathrm{\AA }`$, $`ϵ/k_B=120K`$, and $`\tau =3\times 10^{-13}`$ s. The simulations were performed in the NVE ensemble using the leap-frog algorithm with a timestep of $`0.01\tau `$, at constant reduced density, $`\rho =1.296`$. The quenching was performed using the conjugate gradient method.
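The quench itself amounts to a local minimization of the total potential energy starting from an MD configuration. The sketch below implements the binary Lennard-Jones energy with the parameters quoted above and hands it to a conjugate-gradient minimizer; it is schematic only (no analytic gradient is supplied, so it is far slower than a production quench code) and is not the original implementation.

```python
import numpy as np
from scipy.optimize import minimize

N_A, N_B, L = 251, 249, 7.28
N = N_A + N_B
sigma = np.where(np.arange(N) < N_A, 1.0, 5.0 / 6.0)   # sigma_AA = 1, sigma_BB = 5/6
u_cut = 4.0 * (2.5 ** -12 - 2.5 ** -6)                  # shift so the pair energy vanishes at 2.5 sigma_ab

def potential_energy(x):
    """Cut-and-shifted binary LJ energy (epsilon = 1 for all pairs) with periodic boundaries."""
    r = x.reshape(N, 3)
    U = 0.0
    for i in range(N - 1):
        d = r[i + 1:] - r[i]
        d -= L * np.round(d / L)                        # minimum-image convention
        dist = np.sqrt((d ** 2).sum(axis=1))
        s = 0.5 * (sigma[i] + sigma[i + 1:])            # sigma_AB = (sigma_AA + sigma_BB)/2
        mask = dist < 2.5 * s
        sr6 = (s[mask] / dist[mask]) ** 6
        U += np.sum(4.0 * (sr6 ** 2 - sr6) - u_cut)
    return U

def quench(config):
    """Map an MD configuration (N x 3 array) to its inherent structure by CG minimization."""
    res = minimize(potential_energy, config.ravel(), method="CG", options={"maxiter": 2000})
    return res.x.reshape(N, 3), res.fun                 # inherent structure and its energy E^I
```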
We first briefly describe aspects of the true dynamics that demonstrate a qualitative change occurring in the temperature range investigated.
In Fig. 2 we show the quantity $`4\pi r^2G_{sA}(r,t_1)`$, which is the distribution of displacements of particles of type A during the time interval $`t_1`$. We define $`t_1`$ as the time where the mean square displacement is unity, $`\left\langle r^2(t_1)\right\rangle _A=1`$. At all temperatures the dynamics become diffusive ($`\left\langle r^2(t)\right\rangle _A\propto t`$) for $`t\gtrsim t_1`$ (see inset), i.e., $`t_1`$ marks the onset of diffusivity. At the highest temperatures, $`4\pi r^2G_{sA}(r,t_1)`$ agrees well with the Gaussian approximation \[thick curve, $`G_{sA}(r,t_1)\propto \mathrm{exp}(-3r^2/2)`$\]. As $`T`$ is lowered, the distribution of particle displacements deviates from the Gaussian approximation, and a shoulder develops at the average interparticle distance ($`r\approx 1.0`$ in the adopted units), which at $`T=0.59`$ becomes a well-defined second peak. The second peak, observed also in other model liquids at low temperatures, indicates single particle “hopping” (see Fig. 3a): particles stay relatively localized for a period of time (first peak), and then move approximately one interparticle distance, where they again become localized (second peak). Thus we see from Fig. 2 that as we approach our lowest simulated temperature $`T=0.59`$, there is a qualitative change from dynamics well described by a Gaussian distribution to dynamics dominated by hopping processes.
In Fig. 3b the inherent dynamics approach is applied to the true trajectory seen in Fig. 3a. The resulting “inherent trajectory” consists of the positions of the particle in 1600 successive quenched configurations. The quenching procedure is seen to remove the vibrational motion from the true trajectory. The inherent trajectory will be discussed in more detail in section IV.
We now compare the true self intermediate scattering function, $`F_s(q,t)`$, with its inherent counterpart $`F_s^I(q,t)`$. Fig. 4a shows the self intermediate scattering function for the A particles, $`F_{sA}(q,t)`$, at $`q=7.5`$ corresponding to the position of the primary peak in the static structure factor for the A-A correlation. For each temperature $`F_s(q,t)`$ was calculated from approximately 2000 configurations (depending on temperature). As $`T`$ decreases, $`F_{sA}(q,t)`$ is found to display the typical two-step relaxation, where the short time decay is attributed to vibrational relaxation (or “dephasing”, see Ref. ) of particles within cages formed by neighboring particles . The long time, or $`\alpha `$-relaxation is separated from the short time regime by a plateau indicating transient localization, or “caging” of particles, and is generally observed to follow a stretched exponential form.
The self part of the inherent intermediate scattering function for the A particles, $`F_{sA}^I(q,t)`$ at $`q=7.5`$, is shown in Fig. 4b. This was calculated by quenching each configuration used in Fig. 4a, and then applying the same data analysis program on the resulting time series of inherent structures. As expected, the plateau disappears in the inherent dynamics, as previously shown also for the inherent mean-square displacement. At all $`T`$ we find that the long-time behavior of both $`F_{sA}(q,t)`$ and $`F_{sA}^I(q,t)`$ is well described by stretched exponentials (dashed lines). As a result, we can quantitatively compare the long time relaxation of $`F_{sA}(q,t)`$ and $`F_{sA}^I(q,t)`$, by comparing the fitting parameters $`\{\tau _\alpha ,\beta ,f_c\}`$ of the stretched exponentials $`f(t)=f_c\mathrm{exp}(-(t/\tau _\alpha )^\beta )`$.
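The stretched-exponential fits can be done with any nonlinear least-squares routine. The sketch below is only a stand-in for the actual fitting procedure: the cut on the fitting range and the initial guesses are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, f_c, tau_alpha, beta):
    return f_c * np.exp(-(t / tau_alpha) ** beta)

def fit_alpha_relaxation(t, fs, t_min):
    """Fit the long-time (alpha) part of F_s(q,t); t_min excludes the short-time decay."""
    sel = t > t_min
    p0 = (0.8, t[sel][np.argmin(np.abs(fs[sel] - 0.3))], 0.8)   # rough initial guesses
    popt, pcov = curve_fit(stretched_exp, t[sel], fs[sel], p0=p0, maxfev=10000)
    return popt, np.sqrt(np.diag(pcov))     # (f_c, tau_alpha, beta) and their 1-sigma errors
```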
If the true dynamics can be separated into vibrations around and transitions between inherent structures, how do we expect the fitting parameters for the inherent self intermediate scattering function, $`\{\tau _\alpha ^I,\beta ^I,f_c^I\}`$, to be related to the fitting parameters for the true self intermediate scattering function, $`\{\tau _\alpha ,\beta ,f_c\}`$? To answer this question, we assume that the initial relaxation in $`F_s(q,t)`$ is due to vibrations (as is widely accepted). If this is the case, then we expect the quenching procedure to remove the initial relaxation (since it removes the vibrations), which means that $`F_s^I(q,t)`$ can be thought of as $`F_s(q,t)`$ with the initial relaxation removed. (Footnote: if vibrations can be separated from transitions between inherent structures, we may write for the x-displacement $`\mathrm{\Delta }x=\mathrm{\Delta }x_{vib}+\mathrm{\Delta }x_{inh}`$, where the two terms are statistically uncorrelated. Thus, using an exponential instead of a cosine in Eqs. (1) and (2), we find that the self intermediate scattering function is a product of a term relating to vibrations and one relating to transitions between inherent structures; at long times the former becomes time-independent, converging to the non-ergodicity parameter.) This in turn means that $`F_s^I(q,t)`$ should be identical to the long time relaxation of $`F_s(q,t)`$, but rescaled to start at unity: $`\{\tau _\alpha ^I,\beta ^I,f_c^I\}=\{\tau _\alpha ,\beta ,1\}`$.
The fitting parameters used for fitting stretched exponentials to $`F_{sA}(q,t)`$ (Fig. 4a) and $`F_{sA}^I(q,t)`$ (Fig. 4b) are shown in Fig. 5: (a) relaxation times, $`\tau _\alpha `$ and $`\tau _\alpha ^I`$, (b) stretching parameters, $`\beta `$ and $`\beta ^I`$, and (c) non-ergodicity parameters, $`f_c`$ and $`f_c^I`$. We also show in Fig. 5a the fit of the asymptotic mode coupling prediction $`\tau _\alpha \propto (T-T_c)^{-\gamma }`$, from which we find $`T_c=0.592\pm 0.006`$ and $`\gamma =1.41\pm 0.07`$. The fitting was done without the lowest temperature, where hopping is clearly present in the system (see Fig. 2), since this type of particle motion is not included in the ideal mode coupling theory. Excluding the *two* lowest $`T`$ gives a fit which is consistent with the one presented here; including all temperatures gives a considerably worse fit. Applying the same procedure to the inverse diffusion coefficient, $`D^{-1}(T)`$, gives $`T_c=0.574\pm 0.005`$ and $`\gamma =1.40\pm 0.09`$ (data not shown).
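The power-law fit is a three-parameter nonlinear fit. The sketch below mirrors the procedure described above (dropping the lowest temperature); the amplitude prefactor and initial guesses are our own parametrization, not those used in the original analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def mct_power_law(T, A, T_c, gamma):
    return A * (T - T_c) ** (-gamma)

def fit_T_c(T, tau_alpha, n_exclude=1):
    """Fit tau_alpha(T) = A (T - T_c)^(-gamma), dropping the n_exclude lowest temperatures."""
    keep = np.argsort(T)[n_exclude:]
    p0 = (1.0, 0.9 * T[keep].min(), 1.5)        # T_c guess must stay below the fitted range
    popt, _ = curve_fit(mct_power_law, T[keep], tau_alpha[keep], p0=p0, maxfev=20000)
    return popt                                  # (A, T_c, gamma)
```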
Also shown in Fig. 5 as insets are $`\tau _\alpha ^I`$ vs. $`\tau _\alpha `$ and $`\beta ^I`$ vs. $`\beta `$. Within the error bars we find that $`\tau _\alpha `$ and $`\tau _\alpha ^I`$ are identical at all temperatures. At the highest temperatures $`\beta `$ is poorly defined since there is no well-defined plateau in $`F_{sA}(q,t)`$. Consequently it is difficult to compare $`\beta `$ and $`\beta ^I`$ at high $`T`$, but we find that they become identical (within the error bars) at low $`T`$. Thus at low temperatures our results confirm the expectation that the inherent dynamics is simply a coarse-graining of the true dynamics, i.e., that $`\{\tau _\alpha ^I,\beta ^I\}=\{\tau _\alpha ,\beta \}`$. On the other hand, the non-ergodicity parameters $`f_c`$ and $`f_c^I`$ (Fig. 5c) are strikingly different. While $`f_c`$ is roughly independent of $`T`$, $`f_c^I`$ increases towards unity as $`T`$ approaches our lowest temperature. The fact that we observe a temperature dependence of $`f_c^I`$ approaching unity as $`T`$ approaches our lowest temperature $`T=0.59`$, leads us to conclude that this is close to the crossover temperature, $`T_x`$. We note that Goldstein’s estimate of shear relaxation times at $`T_x`$ ($`10^{-9}`$ seconds) in our LJ units corresponds to $`3\times 10^3`$, which is the same order of magnitude as $`\tau _\alpha `$ in the temperature range where $`f_c^I`$ approaches unity.
Below $`T_x`$ the inherent dynamics can be thought of as the true dynamics with the effect of the vibration removed, as shown above. How should the inherent dynamics be interpreted above $`T_x`$? In Fig. 4b the short time relaxation of the inherent self intermediate scattering function at high temperatures is seen to be approximately logarithmic in time. This is an artificial relaxation introduced by applying the quenching procedure at a temperature where the dynamics is *not* separated into vibrations around, and transitions between inherent structures, i.e. the quenching procedure is doing more than simply removing the vibrations around inherent structures. Presumably the inherent dynamics above $`T_x`$ contains information about the underlying potential energy landscape. At the present, however, we do not know how to interpret this, and we do not have an explanation as to why the (artificial) initial relaxation appears to be logarithmic at high temperatures.
We now proceed to discuss Angell’s proposal, that $`T_x\approx T_c`$. We find that both estimated values for $`T_c`$ \[$`0.592\pm 0.005`$ from $`\tau _\alpha (T)`$ and $`0.574\pm 0.005`$ from $`D^{-1}(T)`$\] are in the temperature range where $`f_c^I`$ is approaching unity. We note that in the system investigated here two of the asymptotic predictions of the ideal mode coupling theory do not hold; $`\tau _\alpha `$ and $`D^{-1}`$ have different temperature dependence and we do not find time-temperature superposition of the $`\alpha `$-part of the self intermediate scattering function. However, the argument given by Angell (and Sokolov) only relates to $`T_c`$ as the temperature where power-law fits to experimental data tend to break down, i.e. the “usage” of MCT in this argument is similar to the way we have estimated $`T_c`$ in Fig. 5a, and does not require, e.g., time-temperature superposition.
## IV Transitions between inherent structures
As shown in the previous section, separation of the dynamics into vibrations around and transitions between (the basin of attraction of) inherent structures becomes possible as $`T`$ approaches $`T_x`$, which is close to our lowest simulated temperature T=0.59. At this temperature, it therefore becomes meaningful to examine the details of the transitions between successive inherent structures. We identify such transitions by quenching the MD configurations every $`0.1\tau `$ (i.e. every 10 MD-steps) and looking for signatures of the system undergoing a transition from one inherent structure to another. We have considered 2 such signatures: i) We monitor the inherent structure energy $`E^I(t)`$ as a function of time, as shown in Fig. 6a. ii) We monitor the distance in configuration space $`\mathrm{\Delta }R^I(t)`$ between two successive quenched configurations (Fig. 6b), where
$`\mathrm{\Delta }R^I(t)`$ $`\equiv `$ $`\left|𝐑^I(t+0.1)-𝐑^I(t)\right|`$ (3)
$`=`$ $`\sqrt{\sum _{j=1}^{N}\left(𝐫_j^I(t+0.1)-𝐫_j^I(t)\right)^2}.`$ (4)
Each jump in $`E^I(t)`$ corresponds to a peak in $`\mathrm{\Delta }R^I(t)`$, indicating a transition to a new inherent structure. In the (rare) event where a transition occurs between two inherent structures with the same energy, $`\mathrm{\Delta }R^I(t)`$ will still exhibit a peak even in the absence of a jump in $`E^I(t)`$, and for this reason we use $`\mathrm{\Delta }R^I(t)`$ to identify transitions. The condition $`\mathrm{\Delta }R^I(t)>0.1`$ was found to be a sufficient threshold for this purpose. When evidence of a transition was found in a time interval $`\mathrm{\Delta }t=0.1\tau `$, this time interval was divided into 10 subintervals of $`\mathrm{\Delta }t=0.01\tau `$ and the procedure described above was repeated.
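In code, the transition search reduces to thresholding $`\mathrm{\Delta }R^I`$ between consecutive quenched configurations. The sketch below is a simplified version of this procedure: the iterative refinement on the ten-times-finer time grid is only indicated in the comments, and the per-particle displacement helper anticipates the $`p(r)`$ analysis discussed next.

```python
import numpy as np

def find_transitions(inherent_pos, dt=0.1, threshold=0.1):
    """Flag transitions between inherent structures from a time series of quenched
    configurations (n_frames x N x 3), using the Delta R^I criterion of Eqs. (3)-(4).
    The refinement step described in the text would re-quench on a 10x finer grid
    inside each flagged interval; that step is not shown here."""
    diff = inherent_pos[1:] - inherent_pos[:-1]
    dR = np.sqrt((diff ** 2).sum(axis=(1, 2)))          # configuration-space distance
    flagged = np.where(dR > threshold)[0]
    return flagged * dt, dR                             # transition times (units of tau) and dR(t)

def particle_displacements(inherent_pos, i):
    """Per-particle displacement between inherent structures i and i+1 (input to p(r))."""
    d = inherent_pos[i + 1] - inherent_pos[i]
    return np.sqrt((d ** 2).sum(axis=1))
```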
For each transition, we monitor the difference between the particle positions in the two successive inherent structures. The distribution $`p(r)`$ of all such particle “displacements” averaged over the 12000 transitions we have identified is shown in Fig. 7. While many particles move only a small distance ($`r<0.2`$) during a transition from one inherent structure to the next, a number of particles move farther, and in particular, we find that the distribution for $`r>0.2`$ is to a good approximation exponential. The dotted curve is a fit to a power-law with exponent $`-5/2`$, which is a prediction from linear elasticity theory, describing the displacements of particles in the surroundings of a local rearrangement “event”. This power-law fit does not look very convincing by itself, but we note that the exponent was not treated as a fitting parameter (i.e. only the prefactor was fitted), and the power-law must break down for small displacements, since these correspond to distances far away from the local event, and are thus not present in our relatively small sample. From the change in behavior of $`p(r)`$ at $`r\approx 0.2`$, it is reasonable to think of particles with displacements larger than $`0.2`$ as those taking part in the local event, and the rest of the particles as merely “adjusting” to the local event (note, however, that our data do not imply what is cause and what is effect, or even whether such a distinction is meaningful). Using this definition it is found that on average approximately 10 particles participate in an event.
Fig. 7 has two important consequences with regards to points discussed earlier in this paper. The first point relates to the single particle hopping indicated by the secondary peak in $`4\pi r^2G_s(r,t)`$ (Fig. 2) at low temperatures. A common interpretation of the single particle hopping is that the jump of a particle from one “localized state” (first peak) to the second localized state (secondary peak), corresponds to the transition of the system over an energy barrier from one inherent structure to the next. If such a transition typically occurs over a single energy barrier, i.e. without any new inherent structures between the two states, we would expect to find a preference for displacements of one average interparticle distance ($`r1`$) in Fig. 7. That this is not the case demonstrates that the hopping indicated by the secondary peak in $`4\pi r^2G_s(r,t)`$ at low temperatures is not due to transitions over single energy barriers. Instead, as seen in the inherent trajectory in Fig. 3, the jump occurs via a number of “intermediate” inherent structures.
The second important consequence of Fig. 7 is that particles in the surroundings of a local event are displaced by small distances. This kind of motion is difficult to detect in the true dynamics, since it is dominated by the thermal vibrations. Presumably this kind of motion is the reason why the inherent trajectory in Fig. 3 shows small displacements ($`\lesssim 0.2\sigma _{AA}`$), even when the corresponding true trajectory seems to oscillate around the same position: When a transition between inherent structures involving significant particle rearrangements in the surroundings occurs, the particle starts vibrating around a position that is slightly displaced, and a corresponding small displacement of the inherent trajectory is seen. This view of the dynamics is also consistent with the fact that the first peak in the inherent counterpart of $`4\pi r^2G_s(r,t)`$ (not shown, see Ref. ) is not a delta function at $`r=0`$.
By observing, for a number of transitions, the positions of all particles that moved a distance greater than $`0.2`$ during a transition, we find these particles to be clustered together in “strings”, as shown in Fig. 8. Typically, one transition appears to involve just one string-like cluster. Detailed investigations of the transition events will be presented in a separate publication. Here we simply note that string-like particle motion has been observed also in the true dynamics above $`T_c`$ in a similar binary Lennard-Jones mixture. These strings are found on long time scales and involve particles moving approximately one inter-particle distance, and are thus different from, but presumably related to, the strings found in the present work.
## V Conclusions
We have investigated the dynamics of a model glass-forming liquid in terms of its potential energy landscape by “quenching” a time series of MD configurations to a corresponding time series of inherent structures. In this way we have provided numerical evidence for the conjecture, originally made by Goldstein 30 years ago in this journal, that below a crossover temperature $`T_x`$ the dynamics of the liquid can be separated into vibrations around and transitions between inherent structures. Specifically, by comparing the self intermediate scattering function $`F_s(q,t)`$ with its inherent counterpart $`F_s^I(q,t)`$ we presented evidence for the existence of $`T_x`$. It is perhaps not surprising that the dynamics of a liquid becomes dominated by the structure of the potential energy landscape at sufficiently low temperatures. What we have done here, using the concept of inherent dynamics, is to provide direct numerical evidence for this, *and* we have shown that this regime can be reached by equilibrium molecular dynamics (for the particular system investigated here). To our knowledge this is the first time such evidence has been presented.
In agreement with previous proposals we find $`T_x\approx T_c`$, where $`T_c`$ is estimated from a power-law fit to $`\tau _\alpha `$. This is also the temperature range where single particle hopping starts to dominate the dynamics, and $`\tau _\alpha `$ becomes on the order of $`10^{-9}`$ seconds (Goldstein’s estimate of the shear relaxation time at $`T_x`$).
The fact that we have been able to cool the system, under equilibrium conditions, to temperatures where the separation between vibrations around inherent structures and transitions between these is (almost) complete, means that it becomes meaningful to study the individual transitions over energy barriers, since the transitions in this regime dominate the dynamics. Our two key findings with regards to the individual transitions between inherent structures are i) single particle displacements during transitions show no preference for displacements on the order of the inter-particle distance, showing that the single particle hopping indicated in $`4\pi r^2G_s(r,t)`$ at low $`T`$ (Fig. 2) does not correspond to transitions of the system over single energy barriers; and ii) particle displacements during transitions are spatially correlated (in “strings”).
## VI Acknowledgments
We thank F. Sciortino and F.H. Stillinger for helpful feedback. This work was supported in part by the Danish Natural Science Research Council.
# HCG 16 Revisited: Clues About Galaxy Evolution in Groups
## 1 Introduction
To study the dynamical structure of compact groups of galaxies, de Carvalho et al. (1997) obtained new spectroscopic data on 17 of Hickson’s compact groups (HCGs), extending the observations to galaxies which are in the immediate vicinity of the original group members (within 0.35 Mpc of the nominal center on average, for H<sub>0</sub> = 75 km/s/Mpc; Ribeiro et al. 1998). The analysis based on this survey (Ribeiro et al. 1998; Zepf et al. 1997) helped to resolve some of the ambiguities presented by the HCGs. In particular, it revealed that compact groups may represent different dynamical stages in the evolution of larger structures, where replenishment by galaxies from the halo is always operating. Several other papers have addressed this particular scenario from either the observational or theoretical point of view (e.g. Barton et al. 1996; Ebeling, Voger, & Boringer 1994; Rood & Struble 1994; Diaferio, Geller, & Ramella 1994, 1995; Governato, Tozzi, & Cavaliere 1996).
Consistent with the dynamical analysis, the classification of the activity types and the study of the stellar populations of the galaxies in these groups suggest that their evolution followed similar paths and that they were largely influenced by their environment (Ribeiro et al. 1998; Mendes de Oliveira et al. 1998). Most of the groups have a core (basically corresponding to the Hickson definition of the group) and halo structure (see Ribeiro et al. 1998 for a definition of the halo population). The core is dominated by AGNs, dwarf AGNs and galaxies whose spectra do not show any emission, whereas starbursts populate the halo. The AGNs are located in the most early–type, luminous galaxies and are preferentially concentrated towards the central parts of the groups. The starbursts in the halo, on the other hand, appear to be located preferentially in late–type spiral galaxies (Coziol et al. 1998a, 1998b). This last result for the core of the groups was recently confirmed by Coziol et al. (1998c) from a study of a new sample of 58 compact groups in the southern hemisphere (Iovino & Tassi 1998). In this study, we also show that no Seyfert 1s have been found in our sample of compact groups.
In terms of star formation and populations, the galaxies in the core of the groups (the “non–starburst” galaxies) seem more evolved than those in the outer regions: the galaxies are more massive and more metal rich than the starbursts and they show little or no star formation. Most of these galaxies have, however, stellar metallicities which are unusually high compared to those of normal galaxies with similar morphologies (Coziol et al. 1998b). They also show unusually narrow equivalent widths of metal absorption lines and relatively strong Balmer absorption lines, which are consistent with the presence of a small (less than 30%) population of intermediate age stars (Rose 1985). These observations suggest that most of the non–starburst galaxies in the groups are in a relatively evolved “post-starburst” phase (Coziol et al. 1998b).
HCG 16 is a group composed of 7 galaxies with a mean velocity V$`=3959\pm 66`$ km s<sup>-1</sup> and a dispersion $`\sigma =86\pm 55`$ km s<sup>-1</sup> (Ribeiro et al. 1998). Although we are keeping Hickson’s nomenclature for this group, it is important to note that we are not following specifically Hickson’s definition of a group, since this is not a crucial point for our analysis. Besides, there is evidence that HCG 16 is part of a larger and sparser structure (Garcia 1993). Specific studies have been done on HCG 16 covering a broad domain of the electromagnetic spectrum, allowing a thorough examination of its physical properties: radio and infrared observations (Menon 1995; Allam et al. 1996); CO observations estimating the mass of molecular gas in some of HCG 16’s members (Boselli et al. 1996); and rotation curves exhibiting abnormal shapes (Rubin, Hunter, & Ford 1991). Hunsberger et al. (1996) detected some dwarf galaxy candidates for HCG16-a, which is interpreted as a sign of strong interaction. From the spectral characteristics, Ribeiro et al. (1996) identified one Seyfert 2 galaxy, two LINERs and three starburst galaxies. Considering the significant amount of information gathered for HCG 16, this group represents a unique opportunity to obtain new clues on the process of formation of the compact groups. In this paper we focus on the activity of five galaxies belonging to the group: the four galaxies originally defining Hickson group number 16, plus a fifth added from Ribeiro et al. (1998). These authors re-defined this structure with seven galaxies (including the original four from Hickson), but we gathered high quality data for only five of them.
## 2 Observations and data reduction
Spectroscopic observations were performed at the Palomar 200-inch telescope using the Double Spectrograph on UT 1996 October 16. Typical exposure times were 600 to 900 seconds depending on the magnitude of the galaxy. Two gratings were used: one for the red side (316 l/mm, resolution of 4.6 Å), and one for the blue side (300 l/mm, resolution of 4.9 Å). The wavelength coverage was 3800Å to 5500Å in the blue and 6000Å to 8500Å in the red. For calibration, He–Ne arc lines were observed before and after each exposure throughout the night. During the night, the seeing varied around 1.5 arcsecs. It is important to stress that in this paper we present only a qualitative discussion of the relative rates of star formation, since the data were taken under non-photometric conditions, which hampered a proper flux calibration.
The reduction of the spectra was done in IRAF using standard methods. An overscan was subtracted along the dispersion axis, which took care of the bias correction. All the spectra were trimmed and divided by a normalized flat field. Wavelength calibration, done through a polynomial fit to the He–Ne arc lines, gave residuals of about 0.1 Å.
The relatively high signal to noise ratios of the spectra (S/N $`70`$ on average) allow us to study the variation of the emission line characteristics and stellar populations as a function of their position in the galaxies. To do so, the reduction to one dimension was done in the case of the red spectra using up to 7 apertures of $`3`$ arc seconds in width. Due to the lower S/N level obtained, only 3 apertures were used in the blue part of the spectrum. To compare the line ratios and absorption features in the red with those measured in the blue, the reduction was also redone in the red using only 3 apertures.
In the case of the spectra reduced with 3 apertures, the spectrum of the galaxy NGC 6702 was used as a template to correct for contamination by the stellar populations (Ho 1996, Coziol et al. 1998a). Before subtraction, the spectrum of the template was normalized to fit the level of the continuum in the galaxies and in one case, HCG 16–5, the Balmer absorption lines were artificially enlarged to fit the broad absorption lines observed in this galaxy.
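The template correction amounts to scaling the NGC 6702 spectrum to the galaxy continuum and subtracting it. A minimal sketch is given below; the single median scaling over user-chosen line-free windows is a simplification of the interactive normalization (and of the Balmer-line broadening applied to HCG 16–5) described above.

```python
import numpy as np

def subtract_template(wavelength, galaxy, template, linefree_windows):
    """Scale a stellar-population template to the galaxy continuum and subtract it.
    linefree_windows: list of (lam_min, lam_max) intervals free of emission lines."""
    mask = np.zeros(wavelength.shape, dtype=bool)
    for lo, hi in linefree_windows:
        mask |= (wavelength > lo) & (wavelength < hi)
    scale = np.median(galaxy[mask]) / np.median(template[mask])
    return galaxy - scale * template         # emission-line spectrum, continuum component removed
```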
## 3 Results
### 3.1 Distribution of the light and ionized gas in the spectra
Table 1 gives the basic characteristics of the 5 galaxies studied in this paper. The numbers in column 1 follow the nomenclature used in Ribeiro et al. (1996). The radial velocities in column 2 and the absolute magnitudes in column 3 were taken from Coziol et al. (1998b). The morphological types listed in column 4 were taken from Mendes de Oliveira & Hickson (1994). The different types of activity in column 5 correspond to our new classification as presented in Section 4 and Figure 3. The complexity of the AGNs is obvious from the multiple characteristics of their spectra. The next 3 columns correspond to the extension of the projected light on the spectra, as deduced from the red part of the spectrum. The total galaxy is measured from the extension until the signal reaches the sky level. The ionized region corresponds to the projected length where emission can be seen. The nucleus corresponds to the extension of light at half maximum intensity (FWHM). With the exception of HCG16–1, all the galaxies have a nucleus which is well resolved. The last column gives for each galaxy the equivalent of 1 arc second in parsecs.
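The parsec-per-arcsecond scale in the last column of Table 1 follows from the group’s mean velocity and the Hubble constant adopted in the Introduction; the sketch below neglects peculiar velocities.

```python
import numpy as np

H0 = 75.0                    # km/s/Mpc, as adopted in the Introduction
v_mean = 3959.0              # km/s, mean velocity of HCG 16

distance_pc = v_mean / H0 * 1.0e6                  # pure Hubble-flow distance, ~5.3e7 pc
pc_per_arcsec = distance_pc * np.pi / (180.0 * 3600.0)
print(pc_per_arcsec)                               # ~256 pc per arcsecond
```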
Figure 1 shows, on the left, the extension of the ionized gas, as traced by H$`\alpha `$ and the two \[N II\] lines, and, on the right, the light profile along the slit. In the galaxies HCG16–1, HCG16–2 and HCG16–3, 90% of the light is concentrated in a window $`9`$ arcsecs wide, which corresponds to $`2`$ kpc at the distance of the galaxies. The remaining 10% of the light extends over a region not exceeding 8 kpc. These galaxies look compact compared to normal spiral galaxies.
In galaxies HCG16–4 and HCG16–5 the light is slightly more extended ($`3`$ and 6 kpc, respectively), but this is because these two galaxies probably have a double nucleus. The second nucleus in HCG16–4 corresponds to the second peak 5 arcsecs west of the primary nucleus, while in HCG–5 the second nucleus corresponds to the small peak 7 arcsecs east of the primary nucleus. It is very unlikely that these structures could be produced by dust, because we are using the red part of the spectra where extinction effects are minimized. In the next section, we will show also that the second nucleus in HCG16–5 presents a slightly different spectral characteristic compared to the primary nucleus, which is inconsistent with the idea that this is the same galaxy. HCG16–4 and HCG16–5 are probably the product of recent mergers of galaxies. Other studies present strong evidence of central double nuclei (Amram et al. 1992; Hibbard 1995).
In all the galaxies, the ionized gas is more intense and mostly concentrated in the nucleus. H II regions outside the nucleus are clearly visible only in HCG16–1 and HCG16–3. It looks like the activity (star formation or AGN) is always concentrated in the center of the galaxies. In HCG16–5, the second nucleus seems less active (we see less ionized gas) than the primary nucleus, while in HCG16–4, the two nuclei appear equally active.
### 3.2 Variation of the activity type with the radius
In Ribeiro et al. (1996) we already determined the activity types of these galaxies. Having in hand spectra with high S/N we now repeat our analysis of the activity for the five most luminous galaxies, but this time separating each spectrum in various apertures covering different regions in order to see how activity varies with the radius.
In Figure 2, we present the results of our classification of the activity type using the standard diagnostic diagram (Baldwin, Phillips & Terlevich 1981; Veilleux & Osterbrock 1987). The line ratios correspond to the values obtained after subtraction of the template galaxy NGC 6702. Because of the relatively lower S/N of the blue as compared to the red part of the spectra, we limit our study to only three apertures. In Figure 2, the first apertures, identified by filled symbols, cover the nucleus. The two other apertures cover regions to the east and to the west of the nucleus. The width of these apertures can be found in column 3 of Table 3. Note that these apertures are covering mostly the central part of the galaxies.
Our new classification is similar to the one given in Ribeiro et al. (1996). In particular, the galaxies keep their original classification as an AGN or a starburst. We note, however, some interesting variations. The most obvious of these variations concerns HCG16-1, which was classified as a luminous Seyfert 2 and now appears as a LINER nucleus with outer regions in a starburst phase. Another difference with our previous classification is related to the discovery of the second nucleus in HCG16-5, although we do not find any evidence of difference in excitation state of both nuclei, considering the large error bars (See Figure 2). We see very little variation in the other three galaxies. The level of excitation for HCG16-3 is higher suggesting that the gas in this galaxy is slightly less metal rich than in HCG16-4 (McCall, Rybsky, & Shields 1985; Evans & Dopita 1985).
To study the variation of the activity in greater detail, we have divided the spectra in the red into 7 equal apertures of $`3`$ arc seconds in width. In Table 2, the different apertures are identified by a number which increases from east to west. The apertures centered on the nuclei are identified with a small n and the circumnuclear regions with a small ci. In column 3, the corresponding radius in parsecs is also given. The parameters that were measured are: the FWHM of the H$`\alpha `$ emission line (column 4) and the ratio \[N II\]$`\lambda 6548`$/H$`\alpha `$ (column 5), which allows us to distinguish between starbursts and AGNs (Baldwin, Phillips, & Terlevich 1981; Veilleux & Osterbrock 1987; Ho, Fillipenko, & Sargent 1993; Véron, Gonçalvez, & Véron-Cetty 1997); the equivalent width of H$`\alpha `$ (column 6), which in a starburst is a good indicator of the strength of star formation (Kennicutt 1983; Kennicutt & Kent 1983; Copetti, Pastoriza, & Dottori 1986; Salzer, MacAlpine, & Boroson 1989; Kennicutt, Keel, & Blaha 1989; Coziol 1996); and the ratio \[S II\]$`\lambda 6716+\lambda 6731`$/H$`\alpha `$ (column 7), which we use as a tracer of the level of excitation (Ho, Fillipenko, & Sargent 1993; Kennicutt, Keel, & Blaha 1989; Lehnert & Heckman 1996; Coziol et al. 1999). All the lines were measured using the standard routines in SPLOT, fitting the continuum by eye. A Gaussian profile was usually assumed, though in some cases, a Lorentzian was used. The uncertainties were determined by comparing values obtained by measuring the same lines in two different spectra of the same object.
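A minimal, non-interactive stand-in for these SPLOT measurements is sketched below: a single Gaussian plus a linear pseudo-continuum fitted inside a user-chosen window. The window limits and initial guesses are illustrative, and blends such as H$`\alpha `$ + \[N II\] would require one Gaussian per line.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_plus_continuum(lam, amp, center, width, c0, c1):
    return amp * np.exp(-0.5 * ((lam - center) / width) ** 2) + c0 + c1 * lam

def measure_line(lam, flux, window):
    """Fit one emission line inside `window` = (lam_min, lam_max); returns flux, FWHM and EW."""
    sel = (lam > window[0]) & (lam < window[1])
    l, f = lam[sel], flux[sel]
    p0 = (f.max() - np.median(f), l[np.argmax(f)], 2.0, np.median(f), 0.0)   # rough guesses
    (amp, center, width, c0, c1), _ = curve_fit(gauss_plus_continuum, l, f, p0=p0, maxfev=10000)
    line_flux = amp * width * np.sqrt(2.0 * np.pi)
    fwhm = 2.3548 * width
    ew = line_flux / (c0 + c1 * center)      # equivalent width relative to the local continuum
    return line_flux, fwhm, ew
```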
In Figure 3, we present the diagrams of the ratio \[N II\]$`\lambda 6548`$/H$`\alpha `$ as a function of the EW of H$`\alpha `$. The corresponding regions are identified by their number in Table 2. In these diagrams, AGNs usually have a higher \[N II\]/H$`\alpha `$ ratio than starbursts, but smaller EW (Coziol et al 1998b). We now examine each galaxy separately.
In HCG16-1, the star formation in the outer regions, as noted in Figure 2, appears quite clearly. As compared to HCG16-4, which is the strongest starburst we have in the group, the relatively lower EW of these H II regions suggests milder star formation. The EW of H$`\alpha `$ is a measure of current to past star formation, the relatively lower EW suggests, therefore, an older phase of star formation (Kennicutt, Keel, & Blaha 1989; Salzer, MacAlpine, & Boroson 1989; Coziol 1996). The star formation is constant on the east side of the galaxy (apertures 1 and 2) but decreases to the west (from apertures 6 to 7). The nucleus and circumnuclear regions do not show any variation, the condition of excitation of the gas staying constant out to a radius of $`1.2`$ kpc.
In HCG16-2, no star formation is observed. We see a slight variation in the circumnuclear regions, within a 1 kpc radius of the nucleus, and a more significant variation in the outer regions. If we assume that the source of the gas excitation is limited to the nucleus, the variation of the \[N II\]/H$`\alpha `$ and EW in the outer regions can be explained by a simultaneous decrease of the gas excitation (H$`\alpha `$ flux goes down) and a change towards older stellar populations (EW H$`\alpha `$ decreases). This suggests that HCG16-2 is an AGN located in a galaxy dominated by intermediate and older age stellar populations. In starburst galaxies, the ratio \[N II\]/H$`\alpha `$ is also sensitive to the abundance of nitrogen (Evans & Dopita 1985; Coziol et al. 1999). The increase of \[N II\]/H$`\alpha `$ in the outer regions, therefore, could also suggests an increase of the abundance of nitrogen (Stauffer 1982; Storchi-Bergmann 1991; Storchi-Bergmann & Wilson 1996; Ohyama, Taniguchi & Terlevich 1997; Coziol et al. 1999). It may suggest a previous burst of star formation in the recent past of this AGN (Glass & Moordwood 1985; Smith et al. 1998).
HCG16-3 is a starburst galaxy at the periphery of the four other luminous members of HCG 16 and the only one in our sample which is not original member of the Hickson group. Comparison with HCG16-4 indicates that the star formation is at a lower level. Again, no variation is observed within $`1.2`$ kpc of the nucleus while the \[N II\]/H$`\alpha `$ ratio increases and EW decreases in the outer regions. However, the variation of these two parameters is less severe than in the case of HCG16-2. Because HCG16-3 is classified as a starburst, we assume that the source of gas ionization is not limited only to the nucleus but follows the star formation. The variation observed would then mean that the star formation in the outer regions (aperture 2 and 6) is at a more advanced stage of evolution than in the nucleus.
The same behavior as in HCG16-3 is observed in HCG16-4. The star formation in this galaxy, however, is at a more intense level. This is probably because HCG16-4 is in a merger phase since this galaxy has a double nucleus. Contrary to HCG16-3, we see also some spectral variations in the nucleus, consistent with a double nucleus: apertures 3 and 2 correspond to the second nucleus while apertures 4 and 5 correspond to the primary nucleus. Again the outer regions seem to be in a more advanced stage of evolution than in the nucleus.
The variations observed in HCG16-5 are much more complex than in the other galaxies. The presence of a second nucleus makes the interpretation even more difficult. In Figure 3, the second nucleus corresponds to apertures 6 and 7. It can be seen that the two nuclei have the same behavior. The variation of the parameters out of the nuclei is similar to what we observed in the two starbursts HCG16-3 and HCG16-4, but the range of variation is more similar to that observed in HCG16-2. Although HCG16-5 was classified as a LINER, its nature seems ambiguous, showing a mixture of starburst and AGN characteristics. It is important to note the difference with respect to HCG16-1, which is a central AGN encircled by star forming regions. In HCG16-5, on the other hand, the AGN in the nucleus seems to be mixed with intense star formation (Maoz et al. 1998; Larking et al. 1998). Out of the nucleus, there is no star formation and the AGNs may be responsible for ionizing the gas (Haniff, Ward, & Wilson 1991; Falcke, Wilson, & Simpson 1998; Contini 1997).
### 3.3 Variation of the excitation with the radius
Comparing the ratio \[N II\]$`\lambda 6548`$/H$`\alpha `$ with the ratio \[S II\]$`\lambda 6716+\lambda 6731`$/H$`\alpha `$ it is possible to distinguish between the different sources of excitation of the gas (Kennicutt, Keel, & Blaha 1989, Ho, Fillipenko, & Sargent 1993, Lehnert & Heckman 1996). Shocks from supernova remnants in a starburst, for example, produce a \[S II\]/H$`\alpha `$ ratio higher than 0.6, much higher than the mean value of $`0.25`$ usually observed in normal H II regions or in starbursts (Greenawalt & Walterbos 1997; Coziol et al. 1997). In AGNs, however, the effects of shocks are more difficult to distinguish because both of these lines are highly excited (Baldwin, Phillips & Terlevich 1981; Veilleux & Osterbrock 1987; Ho, Fillipenko, & Sargent 1993; Villar-Martín, Tadhunter, & Clark 1997; Coziol et al. 1999). We will assume here that a typical AGN has \[N II\]/H$`\alpha >1`$ and \[S II\]/H$`\alpha >0.6`$.
In Figure 4, we now examine the behavior of these ratios as a function of the radius for each of the galaxies. In HCG16-1, although we now classify the nucleus as a LINER, the values of the two ratios are still consistent with those of a typical AGN. The \[N II\]/H$`\alpha `$ ratio for the outer starbursts is at the lower limit of the value for AGNs, but the \[S II\]/H$`\alpha `$ ratio is normal for gas ionized by hot stars. On the other hand, the outer region corresponding to aperture 7 has an unusually high ratio, which suggests that this region could be the location of shocks (Ho, Fillipenko, & Sargent 1993; Lehnert & Heckman 1996; Contini 1997).
In HCG16-2, both ratios are high, consistent with its AGN nature. We note also that in the outer regions the \[S II\]/H$`\alpha `$ ratio decreases or stays almost constant while the \[N II\]/H$`\alpha `$ ratio increases. This suggests a variation of \[N II\]/H$`\alpha `$ due to an abundance effect. This behavior is consistent with our interpretation of Figure 3, and suggests that this AGN probably had a starburst in its outer region (like in HCG16-1, for example) in the recent past.
The values observed in the starburst HCG16-3 are consistent with excitation produced by massive stars. The outer regions, however, show values that could be interpreted as the products of shocks. The same behavior is observed in HCG16-4, although at a much lower level. This is consistent with the idea that HCG16-4 is much more active than HCG16-3. In this galaxy the burst populations in the outer regions, though more evolved than in the nucleus, are nevertheless younger than in the outer regions of HCG16-3.
Again, the analysis of HCG16-5 is the most complex. The values for the primary nucleus are at the lower limit for AGN and starburst and are consistent with shocks. The secondary nucleus has values consistent with shocks and AGN. All the outer regions show values unusually high, suggesting the presence of shocks or domination by an AGN. This observation supports our previous interpretation that HCG16-5 is a mixture of two AGNs with starbursts in their nucleus.
### 3.4 Variation of the stellar populations with the radius
In this section we complete our analysis for our 5 galaxies by studying the characteristics of their stellar populations, as deduced from the absorption features. For this study, we measured the absorption features in three apertures. The results are presented in Table 3. The three apertures are the same as those used for the activity classification. The corresponding widths in kpc are given in column 3. The absorption features were measured by drawing a pseudo continuum by eye using a region $`100`$ Å wide on each side of the line. Columns 4 to 10 give the EW of the most prominent absorption features in the spectra. Column 11 gives the ratios of the center of the line intensity of the Ca II H + H$`ϵ`$ lines to the center of the line intensity of the Ca II K and column 12 gives the Mg<sub>2</sub> index. The uncertainties were determined the same way as for the emission line features.
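For the absorption features, the pseudo-continuum drawn by eye can be approximated by interpolating between the median fluxes in side bands on either side of the line. The sketch below uses the roughly 100 Å regions mentioned above and is, again, only a non-interactive stand-in for the procedure actually used.

```python
import numpy as np

def absorption_ew(lam, flux, line_window, sideband=100.0):
    """Equivalent width of an absorption feature inside line_window = (lam_min, lam_max),
    with a linear pseudo-continuum interpolated from ~100 A side bands."""
    lo, hi = line_window
    blue = (lam > lo - sideband) & (lam < lo)
    red = (lam > hi) & (lam < hi + sideband)
    # Linear continuum through the median points of the two side bands.
    x0, x1 = lam[blue].mean(), lam[red].mean()
    y0, y1 = np.median(flux[blue]), np.median(flux[red])
    cont = y0 + (y1 - y0) / (x1 - x0) * (lam - x0)
    inside = (lam >= lo) & (lam <= hi)
    dlam = np.gradient(lam)
    return np.sum((1.0 - flux[inside] / cont[inside]) * dlam[inside])   # EW in Angstroms
```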
In Figure 5, we show the diagram of the EW of H$`\delta `$ as a function of the (Ca II H + H$`ϵ`$)/Ca II K index (Rose 1985). This diagram is useful for identifying post-starburst galaxies (Rose 1985; Leonardi & Rose 1996; Poggianti & Barbaro 1996; Zabludoff et al. 1996; Caldwell et al. 1996; Barbaro & Poggianti 1997; Caldwell & Rose 1997). Galaxies with intermediate age populations have a high EW of H$`\delta `$ and high values of the (Ca II H + H$`ϵ`$)/Ca II K ratio. From this diagram, it can be seen that the five galaxies in HCG 16 show the presence of intermediate age stellar populations.
In Figure 5, we compare the five galaxies in HCG 16 with the sample of HCG galaxies previously studied by Coziol et al. (1998b). It can be seen that the five galaxies in HCG 16 have characteristics which indicate younger post-starburst phases than in most of the galaxies in Coziol et al. (1998b). This observation is consistent with our scenario for the formation of the groups, which suggests that HCG 16 is an example of a young group.
In Figure 5, it is interesting to compare the positions of the two starburst galaxies HCG16-3 and HCG16-4. The position of HCG16-3 suggests that it contains more intermediate age stars than HCG16-4. But at the same time we deduce from Figure 3 that HCG16-4 has a younger burst than HCG16-3. How can we understand this apparent contradiction? One possibility is to assume that the EW(H$`\delta `$) in HCG16-4 is contaminated by emission, explaining the low EW observed for this galaxy. For the (Ca II H + H$`ϵ`$)/Ca II K indices, we also note that these values are comparable with those produced by very massive stars (Rose 1985). Another alternative, however, would be to suppose that the stellar populations are from another generation, suggesting multiple bursts of star formation in HCG16-4 (Coziol 1996; Moore, Lake, & Katz 1998; Smith et al. 1998; Taniguchi & Shioya 1998).
In Figure 5, the position of HCG16-2 is consistent with no star formation in its nucleus. It could have been higher in the outer regions in the recent past, which is consistent with our interpretation of Figures 3 and 5 for this galaxy. We also note the very interesting position of HCG16-5, which shows a strong post-starburst phase in the two nuclei and in the outer regions. This observation supports our previous interpretation of these two LINERs as a mixture of AGNs with starbursts in their nuclei.
Finally, we examine the stellar metallicities of our galaxies, as deduced from the Mg<sub>2</sub> index (Burstein et al. 1984; Brodie & Huchra 1990; Worthey, Faber, & González 1992; Bender, Burstein, & Faber 1993). In Figure 6, the stellar metallicity is shown as a function of the ratio EW(Ca II H + H$`ϵ`$)/EW(Ca II K), which increases as the stellar population gets younger (Rose 1985; Dressler & Schectman 1987). For our study, we assume that a high value of the Mg<sub>2</sub> index indicates a high stellar metallicity. In Figure 6, the range of Mg<sub>2</sub> generally observed in late type spirals is indicated by two dotted lines. The upper limit for the early–type galaxies is marked by a dashed line.
Figure 6 suggests that the stellar populations are generally more metal rich in the nuclei than in the circumnuclear regions. The two AGNs, HCG16-1 and HCG16-2, are more metal rich, and, therefore, more evolved. HCG16-3 and HCG16-4 have, on the other hand, typical values for starburst galaxies (Coziol et al. 1998). In terms of stellar population and metallicity HCG16-5 is more similar to HCG16-3 and HCG16-4, which suggests a similar level of evolution.
## 4 Discussion
Our observations are consistent with the existence of a close relation between AGN and starbursts. In our sample the most obvious case is HCG16-1, which has a LINER nucleus and star formation in its outer regions. A similar situation was probably present in HCG16-2 in the recent past. HCG16-5, on the other hand, shows a very complicated case where we cannot clearly distinguish between star formation and an AGN. The question then is: what is the exact relation between these two phenomena?
One possibility would be to assume that AGN and starburst are, in fact, the same phenomenon (Terlevich et al. 1991): the AGN characteristics are produced by the evolution of a massive starburst in the center of the galaxy. HCG16-5 could be a good example of this. However, nothing in our observations of this galaxy allows us to attribute the mechanism producing the LINER to star formation alone. In fact, the similarity of HCG16-5 to HCG16-2 suggests that what we see is more a mixture of the two phenomena, where an AGN coexists in the nucleus with a starburst (Maoz et al. 1998; Larkin et al. 1998; Gonzalez-Delgado et al. 1997; Serlemitsos, Ptak, & Yaqoob 1997).
Perhaps the two phenomena are different, but still related via evolution. In one of their recent papers, Gonzalez-Delgado et al. (1997) proposed a continuous sequence in which a starburst is related to a Seyfert 2, which eventually transforms into a Seyfert 1. In view of our observations, it is interesting to see that in terms of stellar populations, HCG16-1 and HCG16-2 are the most evolved galaxies of the group. In Coziol et al. (1998b) we also noted that this is usually the case for the luminous AGN and low–luminosity AGN galaxies in the groups. The AGNs in the samples of Gonzalez-Delgado et al. (1998) and in Hunt et al. (1997) all look like evolved galaxies. However, as we noted in the introduction, we have not found any Seyfert 1 in the 60 compact groups we have investigated (Coziol et al. 1998). Following the scenario of Gonzalez-Delgado et al. (1998), this would simply mean that the groups are not evolved enough. This is difficult to believe, as it would suggest that we observe all these galaxies at a very special moment of their existence. In Coziol et al. (1998b) the observations suggest that the end product of the evolution of the starburst–Seyfert 2 connection in the groups is a low–luminosity AGN or a galaxy without emission lines.
Maybe there are no Seyfert 1s in the groups because the conditions for the formation of these luminous AGNs are not satisfied there. On this matter, it is interesting to find two mergers in HCG 16: HCG16-4 and HCG16-5. But galaxy HCG16-4 is a strong starburst while HCG16-5 is, at most, a LINER or a Seyfert 2. Could it be, then, that the masses of these two mergers were not sufficient to produce a Seyfert 1? Maybe the mass of the merging galaxies and/or the details of how the merging took place are the important parameters (Moles, Sulentic, & Márquez 1998; Moore, Lake, & Katz 1998; Lake, Katz, & Moore 1998; Taniguchi 1998; Taniguchi & Shioya 1998).
An evolutionary scenario for the starburst–AGN connection is probably not the only possible alternative. It could also be that the presence of a massive black hole (MBH) in the nucleus of an AGN influences the evolution of the star formation (Perry 1992; Lake, Katz, & Moore 1998; Taniguchi 1998). One can imagine, for instance, that a MBH is competing with the starburst for the available gas. Once the interstellar gas has become significantly concentrated within the central region of the galaxy, it could accumulate in an extended accretion disk to fuel the MBH. Assuming 10% efficiency, accretion of only 7 M<sub>⊙</sub> yr<sup>-1</sup> will easily yield 10<sup>13</sup> L<sub>⊙</sub>, while astration rates of 10–100 M<sub>⊙</sub> yr<sup>-1</sup> are necessary to produce 10<sup>11</sup>–10<sup>12</sup> L<sub>⊙</sub> (Norman & Scoville 1988). Obviously the gas that goes into the nucleus to feed the MBH will not be available to form stars, hence the star formation phase will have a shorter lifetime. Other phenomena also related to AGNs, like jets, ejection of gas, or even just a very extended ionized region, could stimulate or inhibit star formation in the circumnuclear regions (Day et al. 1997; Falcke 1998; Quillen & Bower 1998). Obviously, the more active the AGN the greater its influence should be. Therefore, the fact that most of the AGNs in the compact groups are of the weaker types (Seyfert 2, LINER and low–luminosity AGN) suggests that these phenomena probably were not so important in the groups.
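The accretion estimate quoted above can be checked with the standard radiative-efficiency relation L = η Ṁ c². The short sketch below is only a numerical verification of that order-of-magnitude statement; the physical constants are standard values assumed here, not numbers taken from the paper.

```python
# Check: L = eta * Mdot * c^2 for eta = 0.1 and Mdot = 7 Msun/yr
M_SUN = 1.989e30      # kg
L_SUN = 3.828e26      # W
YEAR = 3.156e7        # s
C = 2.998e8           # m/s

eta = 0.1
mdot = 7 * M_SUN / YEAR          # accretion rate in kg/s
L = eta * mdot * C**2            # radiated power in W
print(f"L = {L / L_SUN:.2e} Lsun")   # ~1e13 Lsun, as quoted in the text
```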
Another interesting aspect of our observations concerns the origin of the compact groups. In Coziol et al. (1998b) and Ribeiro et al. (1997), we suggested that the cores of the groups are old collapsing structures embedded in more extended systems from which they are replenished with galaxies (Governato, Tozzi, & Cavaliere 1996). We have also proposed an evolutionary scenario for the formation of the galaxies in the group. Following this scenario, HCG 16 would be an example of a group at an early stage of its evolution. Our present observations support this scenario and give us further insights on how the groups could have formed.
The original core of HCG 16 is formed of the galaxies HCG16-1, HCG16-2, HCG16-4 and HCG16-5 (Ribeiro et al. 1997). Our observations now suggest that HCG16-1 and HCG16-2 form the evolved core of HCG 16, while HCG16-4 and HCG16-5 are more recent additions. The fact that we see traces of mergers in these last two galaxies suggests that HCG16-4 and HCG16-5 originally were not two massive galaxies but four smaller, metal poor galaxies. The remnant star formation activity in HCG16-1 and HCG16-2 could also indicate that they too were formed by mergers, but a much longer time ago. This scenario may resolve the paradox of why galaxies in the cores of the HCGs have not already merged to form one big galaxy (Zepf & Whitmore 1991; Zepf 1993). If HCG 16 is typical of what happened in the other groups, then originally the number of galaxies was higher and their masses lower, and hence the dynamics of the groups were much different. HCG16-3 looks, on this matter, like a more recent addition, and suggests that the process of formation of the group is still going on today.
We would like to thank Roy Gal and Steve Zepf for very useful suggestions.
# Microscopic Calculations of Weak Interaction Rates of Nuclei in Stellar Environment for A = 18 to 100
## Abstract
We report here the microscopic calculation of weak interaction rates in stellar matter for 709 nuclei with A = 18 to 100 using a generalized form of the proton-neutron quasiparticle RPA model with separable Gamow-Teller forces. This is the first extensive microscopic calculation of weak rates over a wide temperature-density grid, covering 10<sup>7</sup> $`\le `$ T(K) $`\le `$ 30 $`\times `$ 10<sup>9</sup> and 10 $`\le `$ $`\rho Y_e`$ (g cm<sup>-3</sup>) $`\le `$ 10<sup>11</sup>, and over a larger mass range. Particle emission processes from excited states, previously ignored, are taken into account, and are found to significantly affect some $`\beta `$ decay rates. The calculated capture and decay rates take into consideration the latest experimental energy levels and $`ft`$ value compilations. Our calculation of electron capture and $`\beta `$-decay rates in the $`fp`$-shell shows considerable differences with a recently reported shell model diagonalization approach calculation.
preprint: nucl-th/9901abc
Weak interactions have several crucial effects in the course of the development of a star. They initiate the gravitational collapse of the core of a massive star, triggering a supernova explosion, play a key role in the neutronisation of the core material via electron capture by free protons and by nuclei, and affect the formation of heavy elements above iron via the r-process at the final stage of the supernova explosion (including the so-called cosmochronometers, which provide information about the age of the Galaxy and of the universe). The weak interaction also largely determines the mass of the core, and thus the strength and fate of the shock wave formed by the supernova explosion.
Precise knowledge of the terrestrial $`\beta `$ decay of neutron-rich nuclei is crucial to an understanding of the r-process. Most of these nuclei cannot be produced in terrestrial laboratories, and one has to rely on theoretical extrapolations for their beta decay properties. The microscopic calculations of weak interaction rates performed at that time led to a better understanding of the r-process.
The weak interaction rates in domains of high temperature and density are of decisive importance in studies of stellar evolution. A particularly important input, which determines both the final electron (or lepton) fraction of the “iron” core prior to collapse (i.e., at the presupernova stage) and its initial entropy, is the set of nuclear beta decay and electron capture rates. These reactions not only lead to a change in the neutron-to-proton ratio in the stellar core material but also, because of the removal of energy by the neutrinos produced in the reactions, cool the core to a lower entropy state. It is therefore important to follow the evolution of the stellar core during its late stages of hydrostatic nuclear burning with a sufficiently detailed nuclear reaction network that includes these weak-interaction mediated reactions.
The first extensive effort to tabulate the nuclear weak interaction rates at high temperatures and densities, where decays from excited states of the parent nuclei become relevant, was made by Fuller, Fowler, and Newman (FFN) (such rates are referred to as stellar rates throughout this paper). FFN calculated the stellar weak interaction rates over a wide range of densities and temperatures ($`10\le \rho Y_e`$ (g cm<sup>-3</sup>) $`\le 10^{11}`$ and $`10^7\le `$ T(K) $`\le 10^{11}`$) for 226 nuclei with masses between A = 21 and 60. The Gamow-Teller (GT) strength and excitation energies were calculated using a zero-order shell model. They also incorporated the experimental data available at that time. For unmeasured transitions, FFN assumed an average log $`ft`$ value of 5.0.
The FFN rates were then updated, taking into account some quenching of the GT strength by an overall factor of two. These studies were based on the same strategy and formalism as already employed by FFN. Furthermore, these authors simulated the low-lying transitions by the same $`ft`$-value, while FFN adopted specific values for individual nuclei. Later results implied the need for a more reliable calculation of stellar rates.
Oda et al. (OHMTS) performed an extensive calculation of stellar weak interaction rates of $`sd`$-shell nuclei in the full ($`sd`$)<sup>n</sup> shell model space. They also compared their calculated rates with those of FFN and in certain cases reported differences in the rates of up to two orders of magnitude and more. OHMTS calculated weak process rates for 79 nuclei covering isotopes from the $`sd`$-shell (A = 17 to 39) for both $`\beta ^{}`$ and $`\beta ^+`$ decay directions.
The proton-neutron quasiparticle random-phase approximation (pn-QRPA) has been shown to be a good microscopic theory for the calculation of beta decay half-lives far from stability. The model has been extended to deformed nuclei, and general formulae for the calculation of odd-odd parent nuclei are available. Microscopic calculations of beta decay rates at low temperatures for all nuclei far from stability were first performed within this framework and were later complemented and refined; recent studies indicate that these still provide the best extrapolations to date for cold neutron-rich nuclei far from stability. The pn-QRPA theory was also successfully employed in the calculation of $`\beta ^+`$/EC half-lives of cold nuclei, again with good agreement with experimental half-lives, and was subsequently extended to treat transitions from nuclear excited states. In view of this success in calculating terrestrial decay rates, the extended model is used in the present work to calculate, for the first time, the weak interaction rates in stellar matter within the pn-QRPA framework. One of the main advantages of this formalism is that one can handle configuration spaces far larger than is possible in any shell model calculation, and hence include parent excitation energies over ranges of tens of MeV.
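Transitions from thermally populated parent excited states enter the stellar rate through the usual Boltzmann weighting. The sketch below illustrates this standard construction with hypothetical level data; it is not the pn-QRPA implementation used in the paper.

```python
import numpy as np

K_B = 8.617e-11  # MeV per Kelvin

def stellar_rate(levels, T):
    """Total stellar rate as a Boltzmann-weighted sum over parent states.

    `levels` is a list of (E_i [MeV], J_i, lambda_i [1/s]) tuples, where
    lambda_i is the rate (capture or decay) from parent state i, summed
    over all accessible daughter states.  Illustrative values only.
    """
    kT = K_B * T
    weights = np.array([(2 * J + 1) * np.exp(-E / kT) for E, J, _ in levels])
    rates = np.array([lam for _, _, lam in levels])
    return np.sum(weights * rates) / np.sum(weights)

# Hypothetical three-level parent nucleus at T = 5e9 K (T9 = 5)
levels = [(0.0, 0.0, 1.0e-4), (1.2, 2.0, 5.0e-3), (3.5, 1.0, 2.0e-2)]
print(stellar_rate(levels, 5e9))
```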
In the present work, we considered a model space of up to 7 major shells. Particle emission processes from excited states, which were not considered in previous compilations, are taken into account in this work. We specifically calculate 12 different stellar rates for each parent nucleus. These include $`e^\pm `$-capture rates, $`\beta ^\pm `$-rates, (anti)neutrino energy loss rates, probabilities of beta-delayed proton (neutron) emission, and energy rates of beta-delayed protons (neutrons). Our calculation of stellar rates for $`sd`$-shell nuclei shows significant differences, especially for decay rates, compared to the earlier works of FFN and OHMTS.
During the course of this work, a handful of electron capture and $`\beta `$ decay rates were calculated using the shell model diagonalization approach (SMDA). However, due to the very large m-scheme dimensions involved, the GT strength distributions were calculated in truncated model spaces (only a model space of 1 major shell was considered). These authors restricted themselves to parent excited states of a few MeV for the calculation of electron capture rates. For the calculation of $`\beta `$-decay rates they considered parent excited states usually up to 1 MeV and, in addition, back resonances built on daughter states below 1 MeV (the GT back resonances are states reached by the strong GT transitions in the electron capture process built on ground and excited states).
We generally considered a few hundred initial and final states in our rate calculation. We consider parent excitation energies up to the particle decay threshold, i.e., the minimum of $`S_p`$ and $`S_n`$ (after accounting for the effective Coulomb barrier, which prevents a proton from being promptly emitted, and for the uncertainty in the calculation of energy levels). This has the effect that our calculated electron capture rates are, in general, suppressed in comparison to the corresponding rates of FFN at high temperatures and densities. The effect is more pronounced for the decay rates; a detailed comparison is presented elsewhere. Our capture rates are enhanced for odd-A and odd-odd nuclei in comparison to the corresponding SMDA calculation. For even-even nuclei they are suppressed at high temperatures (T9 $`>`$ 3, where T9 is the temperature in units of 10<sup>9</sup> K). For the decay rates our calculation is, in general, enhanced for odd-A nuclei. In all other cases our calculated rates are suppressed. The degree of suppression and/or enhancement varies with temperature and density, and we refer to the full tabulations for more quantitative conclusions. Our results do not support the claim that for capture on odd-odd nuclei FFN placed the GT centroid at too low an excitation energy. For odd-odd nuclei no experimental information is available for the GT strength distribution, and the SMDA work did not present a comparison of the corresponding terrestrial rates of odd-odd nuclei with measured half-lives to demonstrate the reliability of their calculation. Our calculation is, in general, in good agreement with the FFN calculation for odd-odd nuclei. However, for certain nuclei, FFN rates exceed ours at high densities. Tables I and II compare some of our calculated electron capture and decay rates with earlier calculations. To establish the reliability of our calculation, all terrestrial rates calculated with the pn-QRPA theory used in the present work have been compared with measured half-lives wherever possible.
The calculated weak interaction rates for 709 nuclei (A = 18 to 100), including the neutron-rich nuclei which play a key role in the evolution of the stellar core, can be obtained as files on a magnetic tape from the authors on request. For details of the formalism and the calculations we refer the reader to the full publication.
Some examples of astrophysical applications of the new theoretical data set presented here have been discussed elsewhere.
# Study of the TeV Emission from Mkn 501 with the Stereoscopic Cherenkov Telescope System of HEGRA
## 1. Introduction
In the first two years after its discovery as a TeV $`\gamma `$-ray source, the BL Lac object Mkn 501 showed fluxes well below the persistent flux of the Crab Nebula (Quinn et al. 1996, Bradbury et al. 1997). In 1997 the source went into a state of surprisingly high activity and dramatic variability, outshining during several nights the brightest known source in the TeV sky, the Crab Nebula, by factors as large as $`10`$. In this paper we report on detailed studies of this spectacular bright phase performed with the HEGRA IACT system. The IACT system (Daum et al. 1997) is located on the Roque de los Muchachos on the Canary Island of La Palma (lat. 28.8° N, long. 17.9° W, 2200 m a.s.l.). It is formed by 5 (during 1997, 4) identical IACTs - one at the center and 4 (during 1997, 3) at the corners of a 100 m by 100 m square area. Each telescope is equipped with a segmented 8.5 m<sup>2</sup> mirror and a 4.3° field of view high resolution camera consisting of 271 pixels of 0.25° diameter. Exploiting the stereoscopic observation technique (simultaneous observation of air showers under widely differing viewing angles with two or more Cherenkov telescopes, see Aharonian et al. 1997) the system achieves a low energy threshold of 500 GeV, an excellent angular resolution of 0.1°, an energy resolution of better than 20% (all for individual photons), and a flux sensitivity $`\nu F_\nu `$ at 1 TeV of $`10^{-11}\mathrm{ergs}/\mathrm{cm}^2\mathrm{sec}`$ $`\approx `$ 1/4 Crab for 1 hour of observation time (S/$`\sqrt{\mathrm{B}}`$=5$`\sigma `$ with a system of 4 IACTs). The 4 IACT system started operation in fall 1996.
## 2. Data sample and analysis method
The analysis of this paper is based on 110 hours of Mkn 501 data acquired between March 16th, 1997 and October 1st, 1997 under optimal weather conditions, with optimal detector performance, and with Mkn 501 more than 45° above the horizon. Altogether about $`38,000`$ Mkn 501 photons were recorded, making it possible to verify the source location with an accuracy of 35 arcsec (Pühlhofer et al. 1997). Since the IACT system provides an unprecedented signal to noise ratio, loose $`\gamma `$/hadron-separation cuts, which accept a large fraction ($`\sim `$80%) of the $`\gamma `$-rays at all energies above 1 TeV, can be used to extract the Mkn 501 signal and to suppress the background of charged cosmic rays. By this means, the systematic uncertainties associated with uncertainties in energy dependent cut efficiencies are minimized. The analysis, i.e. the cut optimization and the calculation of effective detection areas and cut efficiencies, is based on detailed Monte Carlo simulations which have been checked experimentally using cosmic ray data (hadron induced showers) and Mkn 501 and Crab data (photon induced showers).
## 3. Time-averaged 1997 Mkn 501 energy spectrum
The time-averaged Mkn 501 energy spectrum is shown in Fig. 1 (left side) over the energy region from 500 GeV to 25 TeV. For determining the spectrum down to energies below 800 GeV, the analysis is restricted to the 80 h of low energy threshold data taken with Mkn 501 at altitudes $`>`$ 60°. The systematic error on the absolute energy scale is 15%. The shaded region shows our current conservative estimate of the additional systematic error on the shape of the spectrum. It is mainly caused by uncertainties in the effective areas near the detection threshold. The error bars in the vertical direction show the statistical errors and the error bars in the horizontal direction indicate the energy resolution of the IACT system. The spectrum is smooth over the whole energy range and it is clearly curved. Although the exact shape of the spectrum above 10 TeV is still preliminary, the Mkn 501 emission clearly extends into the energy range well above 10 TeV. A $`\chi ^2`$-analysis yields a 2 $`\sigma `$ lower limit of 18 TeV on the highest photon energies present in the signal. A fit of the data over the energy region of small systematic errors, i.e. from 1.25 TeV to 50 TeV, with a power law model with an exponential cutoff gives:
$$dN/dE=\mathrm{\hspace{0.17em}9.7}\pm 0.3(\mathrm{stat})\pm 2.0(\mathrm{syst})10^{11}E^{1.9\pm 0.06(\mathrm{stat})\pm 0.07(\mathrm{syst})}$$
$$\mathrm{exp}\left[E/(5.7\pm 1.1(\mathrm{stat})\pm 0.6(\mathrm{syst})\mathrm{TeV})\right]\mathrm{cm}^2\mathrm{s}^1\mathrm{TeV}^1.$$
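The fitted spectrum above is straightforward to evaluate numerically. The sketch below computes the differential flux and the integral flux above 1.25 TeV from the central fit values; it is only an illustration of the published parametrization and ignores the quoted statistical and systematic uncertainties.

```python
import numpy as np
from scipy.integrate import quad

def dn_de(E_TeV):
    """Differential flux dN/dE in cm^-2 s^-1 TeV^-1 (central fit values)."""
    return 9.7e-11 * E_TeV**-1.9 * np.exp(-E_TeV / 5.7)

# Differential flux at 2 TeV and integral flux above 1.25 TeV
print(dn_de(2.0))                      # ~ 1.8e-11 cm^-2 s^-1 TeV^-1
integral_flux, _ = quad(dn_de, 1.25, 50.0)
print(integral_flux)                   # photons cm^-2 s^-1 above 1.25 TeV
```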
In Fig. 1 (right side) the spectral energy distribution (SED) $`\nu F_\nu `$ is shown for the mean spectrum and for all days with a differential flux at 2 TeV below 1.6 and above 3 times 10<sup>-11</sup> cm<sup>-2</sup> s<sup>-1</sup> TeV<sup>-1</sup>. Seemingly, the SEDs peak in the energy range between 500 GeV and 2 TeV, although, due to the systematic uncertainties, a peak in the energy range below 500 GeV cannot be excluded. The three SEDs have, within the statistics, the same shape. Thus we do not find any evidence for a correlation of the emission strength at 2 TeV and the spectral shape in the energy range from 500 GeV to 20 TeV. Most interestingly, as described further below, the time-averaged spectrum also fits the diurnal energy spectra statistically satisfactorily. The observed spectral shape is invariant during the whole 1997 observation period.
## 4. The temporal characteristics of the 1997 Mkn 501 emission
The stereoscopic IACT system makes it possible to determine differential TeV spectra even on a diurnal basis.
Figure 2 (upper panel) shows the differential spectra obtained for 8 exemplary individual nights in the energy range from 1 to 10 TeV. We do not find any diurnal spectrum with a shape which deviates significantly from the time-averaged 1997 spectrum. The temporal evolution of the emission intensity and spectral steepness has been studied by fitting power law models to the diurnal spectra in the energy range from 1 to 5 TeV. In Fig. 2 (2 lower panels) the results are shown. As before, the error bars show the statistical errors only. The systematic error on the flux amplitude deriving from the 15% uncertainty in the energy scale is approximately 20%, and the systematic uncertainty on the spectral index is 0.1. The emission intensity, i.e. the differential flux at 2 TeV, varies dramatically from a fraction of a Crab unit to $`\sim 10`$ Crab units, the peak emission being recorded on MJD 50625/50626. In contrast, the differential spectral indices from 1 to 5 TeV are rather stable. Only two $`3\sigma `$-deviations from the mean value -2.25 have been found, namely for the night MJD 50550/50551 the spectral index is −1.87 (+0.13, −0.14) and for the night MJD 50694/50695 it is −1.05 (+0.30, −0.38).
A dedicated search for the shortest time scales of flux variability has been carried out. The time gradient of the flux computed with adjacent diurnal flux amplitudes $`\mathrm{\Delta }t`$ hours apart corresponds to shortest increase/decay times $`\tau =\mathrm{\Delta }t/\mathrm{\Delta }\mathrm{ln}(\mathrm{flux})`$ of the order of 15 h. Variability within individual nights could only marginally be detected for the two nights MJD 50576/50577 and MJD 50606/50607. The trial corrected chance probabilities for more significant variability are 0.4% and 1% for the two nights, respectively. The flux variabilities of these two days correspond to increase/decay times of the order of 5 h.
## 5. Correlation X-ray / TeV
The RXTE ASM (Remillard & Levine 1997) data have been used to study the correlation between the 2 to 12 keV and the TeV emission intensities. Figure 3 (left side) shows the Discrete Correlation Function DCF (Edelson & Krolik 1988) as a function of the time lag $`\mathrm{\Delta }t`$ between X-ray and TeV variability, as computed from the HEGRA diurnal flux amplitudes at 2 TeV and the ASM 2-12 keV count rate, the latter for each day averaged over all measurements within a 24 h interval centered close to 0:00 UTC. The DCF shows evidence for a weak correlation between X-ray and TeV activity with a time lag between X-ray and TeV emission smaller than or equal to one day. The DCF computed for $`\mathrm{\Delta }t=\mathrm{\hspace{0.17em}0}`$ with 50 pairs of data is 0.37$`\pm `$0.03. Due to the limited number of $`\sim `$50 pairs of data entering the determination of the DCF, the significance of the correlation is modest. Depending on the assumptions about the autocorrelation properties of the X-ray and TeV emission, the chance probability for larger DCF values is computed to lie between 0.43% and 8%. In Fig. 3 (right side), the correlation between the diurnal X-ray and TeV fluxes is shown for $`\mathrm{\Delta }t=\mathrm{\hspace{0.17em}0}`$. The straight line fit to the data indicates a much larger relative flux variability in the TeV energy range than in the 2 to 12 keV energy band.
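For reference, the discrete correlation function of Edelson & Krolik (1988) can be sketched as below for two unevenly sampled light curves. This simplified version omits the measurement-error correction of the original prescription and uses hypothetical array names; it is not the code used for Fig. 3.

```python
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    """Simplified discrete correlation function (Edelson & Krolik 1988).

    t1, f1 and t2, f2 are times and fluxes of the two light curves;
    lag_bins is an array of bin edges for the time lag t2 - t1.
    Measurement errors are neglected in this sketch.
    """
    a = (f1 - f1.mean()) / f1.std()
    b = (f2 - f2.mean()) / f2.std()
    # unbinned correlation for every pair of points, with its lag
    lags = t2[None, :] - t1[:, None]
    udcf = a[:, None] * b[None, :]
    centers, values = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        sel = (lags >= lo) & (lags < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            values.append(udcf[sel].mean())
    return np.array(centers), np.array(values)
```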
## 6. Outlook
The 1997 high emission phase of Mkn 501 made it possible to study this BL Lac object in the TeV energy range with unprecedented signal to noise ratio during a long time period of more than 6 months. The IACT system of HEGRA has been used to obtain a wealth of detailed spectral and temporal information. Most interestingly, within the statistical accuracy, the shape of the energy spectrum is constant during the whole observation period and extends well into the energy range above 10 TeV. A deep understanding of the spectral properties is rendered difficult since several effects combine to give the observed spectrum, e.g. the spectrum of the emitting electrons, the spectrum of possible Inverse Compton seed photons, internal $`\gamma _{\mathrm{TeV}},\gamma _{\mathrm{O},\mathrm{UV}}`$ absorption of the TeV photons in the source, and intergalactic absorption of the TeV photons in $`\gamma _{\mathrm{TeV}}\gamma _{\mathrm{IR}}\to e^+e^-`$ processes by the Diffuse Extragalactic Background Radiation (DEBRA). Note that the mere registration of TeV photons with energies exceeding 10 TeV already yields a sensitive upper limit on the largely unconstrained DEBRA in the wavelength region from 1 to 20 microns. Due to very general arguments concerning the emitted $`\gamma `$-ray luminosity, the optical depth $`\tau `$ of the DEBRA for TeV photons cannot exceed 1 by much more than one order of magnitude. The condition $`\tau <\tau _0\approx 10`$ yields for the DEBRA density $`n(\epsilon )`$ at energy $`\epsilon `$ the upper limit $`\epsilon ^2n(\epsilon )/(10^{-3}\mathrm{eV}/\mathrm{cm}^3)<(\tau _0/5)(\mathrm{H}_0/(60\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}))/(\epsilon /\mathrm{eV})`$ where H<sub>0</sub> is the Hubble constant, with only small corrections depending on the shape of the spectrum.
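The DEBRA limit quoted above is easy to evaluate numerically. The sketch below tabulates the implied upper bound on ε²n(ε) over the 1 to 20 μm range for τ₀ = 10 and H₀ = 60 km s⁻¹ Mpc⁻¹; the choice of sample wavelengths is ours, for illustration only.

```python
def debra_upper_limit(eps_eV, tau0=10.0, h0=60.0):
    """Upper limit on eps^2 n(eps) in eV/cm^3, from the condition tau < tau0.

    Implements eps^2 n(eps) < 1e-3 * (tau0/5) * (H0 / 60 km/s/Mpc) / (eps/eV).
    """
    return 1e-3 * (tau0 / 5.0) * (h0 / 60.0) / eps_eV

# Photon energy for a wavelength in microns: eps[eV] ~ 1.24 / lambda[um]
for lam_um in (1.0, 5.0, 10.0, 20.0):
    eps = 1.24 / lam_um
    print(f"{lam_um:5.1f} um: eps^2 n(eps) < {debra_upper_limit(eps):.1e} eV/cm^3")
```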
Models will be constrained further by intensive multiwavelength campaigns and by studying more sources. The analysis of the Mkn 501 and Mkn 421 multiwavelength campaigns performed during 1997 and 1998 with participation of the HEGRA IACT array is underway. Furthermore, the IACT system has been used extensively to search for new TeV emitting BL Lac sources, although, up to now, without positive evidence. Members of the HEGRA collaboration are pursuing two next-generation IACT installations aiming at a sensitivity increase of one order of magnitude. HESS, a stereoscopic system of at first 4, in a second stage 16, IACTs of the 10 m diameter class for $`\gamma `$-ray astronomy at energies above 40 GeV (Hofmann 1997), will probably start operation in the year 2001. MAGIC will be a dedicated “low energy threshold” stand-alone IACT for $`\gamma `$-ray observations above an energy threshold of 10 GeV (Lorenz 1997).
### Acknowledgments.
We thank the Instituto de Astrofísica de Canarias (IAC) for supplying excellent working conditions at La Palma. HEGRA is supported by the BMBF (Germany) and CYCIT (Spain). The RXTE ASM data has been obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center.
## References
Aharonian F.A., Hofmann W., Konopelko A.K., Völk H.J. 1997, Astropart. Phys. 6, 343
Aharonian F.A., Akhperjanian A.G., Barrio J.A., et al., 1998, A&A accepted for publication, astro-ph/9808296
Bradbury S.M., Deckers T., Petry D., et al., 1997, A&A 320, L5
Daum A., Hermann G., Heß M., et al., 1997, Astropart. Phys. 8, 1
Edelson R.A., Krolik J.H., 1988, ApJ 333, 646
Hofmann, W., 1997. In: Proc. Towards a Major Atmospheric Cherenkov Detector-V, ed. O.C. de Jager, 405
Lorenz, E.C., 1997. In: Proc. 25th ICRC, Durban, 5, 177
Pühlhofer G., Daum A., Hermann G., et al., 1997, Astropart. Phys. 8, 101
Quinn J., Akerlof C.W., Biller S., et al., 1996, ApJ 456, L83
Remillard R.A., Levine M.L., 1997, astro-ph/9707338
# Inside-Out Bulge Formation and the Origin of the Hubble Sequence
## 1 Introduction
Despite considerable progress in our understanding of the formation of galaxies, the origin of the Hubble sequence remains a major unsolved problem. The main morphological parameter that sets the classification of galaxies in the Hubble diagram is the disk-to-bulge ratio ($`D/B`$). Understanding the origin of the Hubble sequence is thus intimately related to understanding the parameters and processes that determine the ratio between the masses of disk and bulge. Especially, we need to understand whether this ratio is imprinted in the initial conditions (‘nature’) or whether it results from environmental processes such as mergers and impulsive collisions (‘nurture’).
Here I suggest a simple inside-out formation scenario for the bulge (a ‘nature’-variant) and investigate the differences in properties of the proto-galaxies that result in different disk-to-bulge ratios. A more detailed discussion on the background and ingredients of the models can be found in van den Bosch (1998; hereafter vdB98).
## 2 The formation scenario
In the standard picture of galaxy formation, galaxies form through the hierarchical clustering of dark matter and subsequent cooling of the baryonic matter in the dark halo cores. Coupled with the notion of angular momentum gain by tidal torques induced by nearby proto-galaxies, this theory provides the background for a model for the formation of galactic disks. In this model, the angular momentum of the baryons is assumed to be conserved causing the baryons to settle in a rapidly rotating disk (e.g., Fall & Efstathiou 1980). The turn-around, virialization, and subsequent cooling of the baryonic matter of a proto-galaxy is an inside-out process. First the innermost shells virialize and heat its baryonic material to the virial temperature. The cooling time of this dense, inner material is very short, whereas its specific angular momentum is relatively low. If the cooling time of the gas is shorter than the dynamical time, the gas will condense in clumps that form stars, and this clumpiness is likely to result in a bulge. Even if the low-angular momentum material accumulates in a disk, the self-gravity of such a small, compact disk makes it violently unstable, and transforms it into a bar. Bars are efficient in transporting gas inwards, and can cause vertical heating by means of a collective bending instability. Both these processes lead ultimately to the dissolution of the bar; first the bar takes a hotter, triaxial shape, but is later transformed in a spheroidal bulge component. There is thus a natural tendency for the inner, low angular momentum baryonic material to form a bulge component rather than a disk. Because of the ongoing virialization, subsequent shells of material cool and try to settle into a disk structure at a radius determined by their angular momentum. If the resulting disk is unstable, part of the material is transformed into bulge material. This process of disk-bulge formation is self-regulating in that the bulge grows until it is massive enough to sustain the remaining gas in the form of a stable disk. I explore this inside-out bulge formation scenario, by incorporating it into the standard Fall & Efstathiou theory for disk formation.
The ansatz for the models are the properties of dark halos, which are assumed to follow the universal density profiles proposed by Navarro, Frenk & White (1997), and whose halo spin parameters, $`\lambda `$, follow a log-normal distribution in concordance with both numerical and analytical studies. I assume that only a certain fraction, $`ϵ_{\mathrm{gf}}`$, of the available baryons in a given halo ultimately settles in the disk-bulge system. Two extreme scenarios for this galaxy formation (in)efficiency are considered. In the first scenario, which I call the ‘cooling’-scenario, only the inner fraction $`ϵ_{\mathrm{gf}}`$ of the baryonic mass is able to cool and form the disk-bulge system: the outer parts of the halo, where the density is lowest, but which contain the largest fraction of the total angular momentum, never gets to cool. In the second scenario, referred to hereafter as the ‘feedback’-scenario, the processes related to feedback and star formation are assumed to yield equal probabilities, $`ϵ_{\mathrm{gf}}`$, for each baryon in the dark halo, independent of its initial radius or specific angular momentum, to ultimately end up in the disk-bulge system. The values of $`ϵ_{\mathrm{gf}}`$ are normalized by fitting the model disks to the zero-point of the observed Tully-Fisher relation. Recent observations of high redshift spirals suggest that the zero-point of the Tully-Fisher relation does not evolve with redshift. This implies that the galaxy formation efficiency, $`ϵ_{\mathrm{gf}}`$, was higher at higher redshifts (see vdb98 for details). Disks are modeled as exponentials with a scalelength proportional to $`\lambda `$ times the virial radius of the halo (as in the disk-formation scenario of Fall & Efstathiou). The bulge mass is determined by requiring that the disk is stable. Since the amount of self-gravity of the disk is directly related to the amount of angular momentum of the gas, the disk-to-bulge ratio in this scenario is mainly determined by the spin parameter of the dark halo out of which the galaxy forms.
## 3 Clues to the formation of bulge-disk systems
Constraints on the formation scenario envisioned above can be obtained from a comparison of these disk-bulge-halo models with real galaxies. From the literature I compiled a list of $`200`$ disk-bulge systems, including a wide variety of galaxies: both high and low surface brightness spirals (HSB and LSB respectively), S0, and disky ellipticals (see vdB98 for details). After choosing a cosmology and a formation redshift, $`z`$, I calculate, for each galaxy in this sample, the spin parameter $`\lambda `$ of the dark halo which, for the assumptions underlying the formation scenario proposed here, yields the observed disk properties (scale-length and central surface brightness). We thus use the formation scenario to link the disk properties to those of the dark halo, and use the known statistical properties of dark halos to discriminate between different cosmogonies.
The main results are shown in Figure 1, where I plot the inferred values of $`\lambda `$ versus the observed disk-to-bulge ratio for the galaxies in the sample. The dotted lines outline the distribution function of halo spin parameters of dark halos; it can thus be inferred what the predicted distribution of disk-to-bulge ratios is for galaxies that form at a given formation redshift. Results are presented for an open cold dark matter (OCDM) model with $`\mathrm{\Omega }_0=0.3`$ and no cosmological constant ($`\mathrm{\Omega }_\mathrm{\Lambda }=0`$). These results are virtually independent of the value of $`\mathrm{\Omega }_\mathrm{\Lambda }`$, but depend strongly on $`\mathrm{\Omega }_0`$, which sets the baryon mass fraction of the Universe. Throughout, a universal baryon density of $`\mathrm{\Omega }_b=0.0125h^2`$ is assumed, in agreement with nucleosynthesis constraints. The inferred spin parameters are larger for higher values of the assumed formation redshifts. This owes to the fact that halos that virialize at higher redshifts are denser. Since the scalelength of the disk is proportional to $`\lambda `$ times the virial radius of the halo, higher formation redshifts imply larger spin parameters in order to yield the observed disk scalelength. In the cooling scenario, the probability that a certain halo yields a system with a large disk-to-bulge ratio (e.g., a spiral) is rather small. This is due to the fact that in this scenario most of the high angular momentum material never gets to cool to become part of the disk. The large observed fraction of spirals in the field, renders this scenario improbable. For the feedback cosmogony, however, a more promising scenario unfolds: At high redshifts ($`z>1`$) the majority of halos yields systems with relatively small disks (e.g., S0s), whereas systems that form more recently are more disk-dominated (e.g., spirals). This difference owes to two effects. First of all, halos at higher redshifts are denser, and secondly, the redshift independence of the Tully-Fisher relation implies that $`ϵ_{\mathrm{gf}}`$ was higher at higher redshifts. Coupled to the notion that proto-galaxies that collapse at high redshifts are preferentially found in overdense regions such as clusters, this scenario thus automatically yields a morphology-density relation, in which S0s are predominantly formed in clusters of galaxies, whereas spirals are more confined to the field.
## 4 Conclusions
* Inside-out bulge formation is a natural by-product of the Fall & Efstathiou theory for disk formation.
* Disk-bulge systems do not have bulges that are significantly more massive than required by stability of the disk component. This suggests a coupling between the formation of disk and bulge, and is consistent with the self-regulating, inside-out bulge formation scenario proposed here.
* A comparison of the angular momenta of dark halos and spirals suggests that the baryonic material that builds the disk can not loose a significant fraction of its angular momentum. This rules against the ‘cooling scenario’ envisioned here, in which most of the angular momentum remains in the baryonic material in the outer parts of the halo that never gets to cool.
* If we live in a low-density Universe ($`\mathrm{\Omega }_0<0.3`$), the only efficient way to make spiral galaxies is by assuring that only a relatively small fraction of the available baryons make it into the galaxy, and furthermore that the probability that a certain baryon becomes a constituent of the final galaxy has to be independent of its specific angular momentum, as described by the ‘feedback scenario’.
* If more extended observations confirm that the zero-point of the Tully-Fisher relation is independent of redshift, it implies that the galaxy formation efficiency, $`ϵ_{\mathrm{gf}}`$, was higher at earlier times. Coupled with the notion that density perturbations that collapse early are preferentially found in high density environments such as clusters, the scenario presented here then automatically predicts a morphology-density relation in which S0s are most likely to be found in clusters.
* A reasonable variation in formation redshift and halo angular momentum can yield approximately one order of magnitude variation in disk-to-bulge ratio, and the simple formation scenario proposed here can account for both spirals and S0s. However, disky ellipticals have too large bulges and too small disks to be incorporated in this scenario. Apparently, their formation and/or evolution has seen some processes that caused the baryons to loose a significant amount of their angular momentum. Merging and impulsive collisions (e.g., galaxy harassment) are likely to play a major role for these systems.
It thus seems that both ‘nature’ and ‘nurture’ are accountable for the formation of spheroids, and that the Hubble sequence has a hybrid origin.
###### Acknowledgements.
Support for this work was provided by NASA through Hubble Fellowship grant # HF-01102.11-97.A awarded by the Space Telescope Science Institute, which is operated by AURA for NASA under contract NAS 5-26555.
# QCD factorization in $`\gamma ^{}\gamma \to \pi \pi `$ and $`\gamma ^{}N\to \pi \pi N`$

To be published in the proceedings of the conference INPC’98, Paris, France, August 1998.
M. Diehl<sup>a</sup>, T. Gousset<sup>b</sup>, B. Pire<sup>c</sup> and O.V. Terayev<sup>d</sup>
<sup>a</sup> DESY, 22603 Hamburg, Germany
<sup>b</sup> SUBATECH, B.P. 20722, 44307 Nantes, France
<sup>c</sup> CPhT, Ecole Polytechnique, 91128 Palaiseau, France
<sup>d</sup> Bogoliubov Laboratory, JINR, 141980 Dubna, Russia
Exclusive two-pion production near threshold in the collision of a highly virtual photon with a real one offers a new possibility to unravel the partonic content of hadrons. We explain the dynamics of this regime, i.e. the separation of the amplitude in terms of partonic diagrams computable in QCD perturbation theory and generalized distribution amplitudes, which are nonperturbative functions describing the exclusive transition of a parton pair into two hadrons. The same quantities also appear in $`\gamma ^{}N\to \pi \pi N`$ in the kinematical regime where the momentum transfer to the nucleon and the invariant mass of the $`\pi \pi `$ pair are both small; this makes it possible to extend the QCD analysis of exclusive $`\rho `$ electroproduction outside the resonance region of the two produced pions.
1. FACTORIZATION
Factorization is the property that allows the description of a hadronic process in terms of quarks and gluons. It means the separation of a perturbatively computable subprocess, dominated by a hard momentum scale, from soft nonperturbative hadron matrix elements. A well-known example of this is inclusive deep inelastic scattering $`\gamma ^{}p\to X`$, where the cross section is given by the imaginary part of the forward amplitude $`\gamma ^{}p\to \gamma ^{}p`$ via the optical theorem. In the Bjorken limit this amplitude can be calculated as a perturbative parton-photon scattering times a parton distribution in the proton. Parton distributions, which can be extracted from the structure function $`F_2(x_B,Q^2)`$, are universal objects, appearing in other hard processes like lepton pair production $`p+p\to \mu ^+\mu ^-+X`$ at large invariant mass, and the production of heavy gauge bosons $`W`$ or $`Z`$.
Another case of factorization occurs in hard exclusive processes, e.g. meson or proton electromagnetic form factors, or exclusive electroproduction of vector or pseudoscalar mesons. For instance, the pion form factor may be written as a convolution of the form
$`F_\pi (Q^2)=\int _0^1dz\int _0^1dz^{\prime }\phi (z;\mu ^2)H(z,z^{\prime },Q^2;\mu ^2)\phi (z^{\prime };\mu ^2),`$
where $`H(z,z^{},Q^2;\mu ^2)`$ is the perturbatively calculable hard scattering amplitude of a virtual photon on a $`q\overline{q}`$ pair at factorization scale $`\mu ^2`$, and $`\phi (z;\mu ^2)`$ is the pion distribution amplitude.
In the following we present a further instance of factorization, which opens a new domain of investigation for the quark structure of hadrons. Further details can be found in the references listed below.
2. THE PROCESS $`\gamma ^{}\gamma \to \pi \pi `$
To leading order in $`\alpha _S`$ the scattering amplitude of the reaction $`\gamma ^{}(q)+\gamma (q^{\prime })\to \pi (p)+\pi (p^{\prime })`$ at large $`Q^2=-q^2`$ and small $`W^2=(p+p^{\prime })^2`$ can be represented as (see Figure 1)
$`T^{\mu \nu }=\frac{1}{2}(v^\mu v^{\prime \nu }+v^{\prime \mu }v^\nu -g^{\mu \nu })\underset{q}{\sum }e_q^2\int _0^1dz\,\frac{2z-1}{z(1-z)}\,\mathrm{\Phi }_q(z,\zeta ,W^2;Q^2).`$
Here we use lightlike vectors, $`v`$ and $`v^{}`$, defining a “plus” and a “minus” direction, which are related to the photon momenta by
$`q=(v-v^{\prime })Q/\sqrt{2}`$ and $`q^{\prime }=v^{\prime }(Q^2+W^2)/(\sqrt{2}Q)`$.
$`z`$ and $`\zeta `$ are the “plus” momentum fraction carried by the quark and by the $`\pi ^+`$, respectively:
$`z={\displaystyle \frac{kv^{}}{(p+p^{})v^{}}},\zeta ={\displaystyle \frac{pv^{}}{(p+p^{})v^{}}}.`$
The generalized distribution amplitude (GDA) $`\mathrm{\Phi }_q`$ is defined through the soft matrix element that describes the $`q\overline{q}\to \pi \pi `$ transition:
$`\mathrm{\Phi }_q(z,\zeta ,W^2;\mu ^2)=\frac{1}{2\pi }\int dx^{-}e^{iz(P^+x^{-})}\langle \pi ^+(p)\pi ^-(p^{\prime })|\overline{\psi }(x^{-}v^{\prime })\gamma ^+\psi (0)|0\rangle .`$
Figure 1. $`\gamma ^{}(q)+\gamma (q^{\prime })\to \pi (p)+\pi (p^{\prime })`$ at leading order in $`\alpha _S`$.
We find that the $`\gamma ^{}\gamma `$ amplitude is independent of $`Q^2`$ at fixed $`\zeta `$ and $`W^2`$, up to small logarithmic scaling violations. The leading logarithmic corrections to scaling come from the evolution of the GDA with the factorization scale $`\mu ^2`$. The framework to study this evolution is the same as the one for standard distribution amplitudes. As in the case of quark densities one can separate $`\mathrm{\Phi }_q`$ into singlet and non-singlet components. The singlet component mixes under evolution with the gluon GDA $`\mathrm{\Phi }_g`$, associated with the transition $`gg\to \pi \pi `$. This is discussed further in the references below.
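The $`z`$-convolution entering the amplitude above can be evaluated numerically once a model for the GDA is specified. The sketch below uses a purely illustrative toy ansatz, Φ(z) ∝ z(1−z)(2z−1), chosen only because it respects the antisymmetry of the two-pion quark GDA in z ↔ 1−z; it is not the model of the original paper, and the normalization is arbitrary.

```python
from scipy.integrate import quad

def toy_gda(z, norm=1.0):
    """Illustrative toy GDA, Phi(z) ~ z(1-z)(2z-1); normalization arbitrary."""
    return norm * z * (1.0 - z) * (2.0 * z - 1.0)

def amplitude_convolution(gda):
    """Integral of (2z-1)/(z(1-z)) * Phi(z) over z in [0, 1]."""
    integrand = lambda z: (2.0 * z - 1.0) / (z * (1.0 - z)) * gda(z)
    value, _ = quad(integrand, 0.0, 1.0)
    return value

# With this toy ansatz the integrand reduces to (2z-1)^2, so the result is 1/3
print(amplitude_convolution(toy_gda))
```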
The process $`\gamma ^{}\gamma \to \pi \pi `$ can be considered as the extension of the pion transition form factor, $`\gamma ^{}\gamma \to \pi ^0`$, to a two-pion final state; it may also be seen as the crossed channel of deeply virtual Compton scattering on a pion target, $`\gamma ^{}+\pi \to \gamma +\pi `$.
In electroproduction, $`e\gamma \to e\pi \pi `$, our process $`\gamma ^{}\gamma \to \pi \pi `$ interferes with bremsstrahlung, $`e\gamma \to e\gamma ^{*}\to e\pi \pi `$. The latter produces $`\pi \pi `$ in the $`C`$-odd channel and therefore does not contribute to $`\pi ^0\pi ^0`$ production. For a charged pion pair it can be computed from the value of the timelike elastic form factor measured in $`e^+e^-\to \pi ^+\pi ^-`$.
3. THE PROCESS $`\gamma ^{}N\to \pi \pi N`$
At large photon virtuality $`Q^2`$ and small momentum transfer to the proton the amplitude for electroproduction of a single meson ($`\pi ,\rho `$, …) factorizes into a hard scattering kernel $`H_{ij}`$ and two nonperturbative objects obeying their own QCD evolution equations, namely a nondiagonal parton distribution $`f_{i/p}(x_1,x_1-x,t;\mu ^2)`$ and the distribution amplitude of the produced meson $`\phi _j(z;\mu ^2)`$:
$`\mathcal{M}=\underset{ij}{\sum }\int _0^1dz\int _0^1dx_1\,f_{i/p}(x_1,x_1-x_B,t;\mu ^2)\,H_{ij}(Q^2,x_1/x_B,z;\mu ^2)\,\phi _j(z;\mu ^2).`$
The introduction of generalized distribution amplitudes allows one to enlarge the scope of this dynamical description to electroproduction of a nonresonant, low invariant mass $`\pi \pi `$ pair. The factorization diagram is shown in Figure 2. This means for instance that nonresonant $`\pi \pi `$ production (and in particular the $`\pi ^0\pi ^0`$ channel absent in $`\rho `$ decay) should have the same scaling behavior in $`Q^2`$ as $`\rho `$-production.
Figure 2. Factorization of $`\gamma ^{}+p\to \pi \pi +p`$ at large $`Q^2`$, small momentum transfer to the proton and small $`\pi \pi `$ invariant mass.
4. SUMMARY
The process $`\gamma ^{}\gamma \to \pi ^+\pi ^-`$, $`\pi ^0\pi ^0`$, $`\mathrm{}`$ at large photon virtuality and small c.m. energy offers a new opportunity to study hadron structure, where factorization of long and short distance dynamics enables us to extract nonperturbative quantities. It should be experimentally accessible at existing or planned $`e^+e^-`$ or $`e\gamma `$ facilities.
The same long distance amplitudes also appear in other reactions, namely in exclusive electroproduction of a meson pair, where they allow an extension of vector meson production studies. The investigation of these generalized distribution amplitudes complements our existing tools for the description of hadrons in QCD.
Acknowledgments. SUBATECH is joint research unit 6457 of the Université de Nantes, the Ecole des Mines de Nantes and IN2P3/CNRS. CPhT is joint research unit 7644 of the CNRS.
REFERENCES
1. M. Diehl, T. Gousset, B. Pire and O.V. Teryaev, Phys. Rev. Lett. 81 (1998) 1782.
2. G.P. Lepage and S.J. Brodsky, Phys. Lett. B 87 (1979) 359;
A.V. Efremov and A.V. Radyushkin, Phys. Lett. B 94 (1980) 245.
3. M.K. Chase, Nucl. Phys. B 174 (1980) 109.
4. J.C. Collins, L. Frankfurt and M. Strikman, Phys. Rev. D 56 (1997) 2982.
5. For a review and references, see X. Ji, J. Phys. G 24 (1998) 1181.
6. M.V. Polyakov, hep-ph/9809483.
|
no-problem/9901/hep-ph9901435.html
|
ar5iv
|
text
|
# Indirect Detection of Neutralino Annihilation from Three-body Channels
## I Introduction
The neutralino is one of the most promising dark matter candidatesreport . It could be detected indirectly by observation of energetic neutrinos from neutralino annihilation in the Sun and/or Earth. Energetic neutrinos are produced by decays of neutralino annihilation products. These neutrinos can be detected by neutrino detectors such as AMANDA and super-Kamiokande energeticneutrinos , which observe the upward muons produced by the charged-current interactions in the rock below the detector. For the models which can be tested by the current or next generation of detectors, an equilibrium of accumulation and annihilation is reached, so the annihilation rate is determined by the capture rate in the Sun or Earth. The event rate is proportional to the second moment of the neutrino energy spectrum, so it is this neutrino energy moment weighted by the corresponding branching ratio that determines the detection rate.
By now, the cross sections for annihilation have been calculated for all two-body final states that arise at tree level. Roughly speaking, among the two-body channels the $`b\overline{b}`$ and $`\tau ^+\tau ^{}`$ final states usually dominate for $`m_\chi <m_W`$. Neutralinos that are mostly higgsino annihilate primarily to gauge bosons if $`m_\chi >m_W`$, because there is no $`s`$-wave suppression mechanism for this channel. Neutralinos that are mostly gaugino continue to annihilate primarily to $`b\overline{b}`$ pairs until the neutralino mass exceeds the top-quark mass, after which the $`t\overline{t}`$ final state dominates, as the cross section for annihilation to fermions is proportional to the square of the fermion mass.
Three-body final states arise only at higher order in perturbation theory and are therefore usually negligible. However, some two-body channels easily dominate the cross section when they are open because of their large couplings; for example the $`W^+W^{}`$ for the higgsinos and $`t\overline{t}`$ for gauginos. This suggests that their corresponding three-body final states can be important just below these thresholds. Moreover, the neutrinos produced in these three-body final states are generally much more energetic than those produced in $`b`$ and $`\tau `$ decays, Recently, we calculated the $`s`$-wave cross section for the processes $`\chi \chi W^+W^{}Wf\overline{f^{}}`$ and $`\chi \chi t\overline{t}^{}tW^{}\overline{b}`$, and their charge conjugates in the $`v_{\mathrm{rel}}0`$ limit jheppaper .
## II Calculation
Below the top pair threshold, the neutralino may annihilate via a virtual top quark: $`\chi \chi tt^{}tWb`$. The Feynman diagrams for this process are shown in Fig. 2. Like annihilation to $`t\overline{t}`$ pairs, annihilation to this three-body final state takes place via $`s`$-channel exchange of $`Z^0`$ and $`A^0`$ (pseudo-scalar Higgs bosons) and $`t`$\- and $`u`$-channel exchange of squarks. Although there are additional diagrams for this process, such as those shown in Fig. 2, these are negligible for the gaugino because of the small coupling. On the other hand, for higgsinos the gauge boson channel dominates and the $`tt^{}`$ would be unimportant anyway. In the $`v_{\mathrm{rel}}0`$ limit
$$\sigma v_{\mathrm{rel}}=\frac{N_c}{128\pi ^3}_{x_{6min}}^{x_{6max}}𝑑x_6_{x_{4min}}^{x_{4max}}𝑑x_4\frac{1}{4}||^2,$$
(1)
the amplitude is given by $`M=M_tM_u+M_s`$.
The three-body cross section for a series of typical models are shown in Fig. 4. In these models the neutralino is primarily gaugino. As expected, the three-body cross section approaches the two-body value above the top threshold (we take $`m_t=180`$ GeV). Below the top mass it is non-zero but drops quickly. The flux of upward muons is proportional to the second moment of the neutrino energy spectrum weighted by branching ratios, which is given by
$$B_F\langle Nz^2\rangle =\frac{3}{128\pi ^3}\int dx_4\int dx_6|M|^2\left(\langle Nz^2\rangle _tx_4^2+\langle Nz^2\rangle _Wx_5^2+\langle Nz^2\rangle _bx_6^2\right),$$
(2)
in the three-body case. Although the three-body cross section is small except just below the $`t\overline{t}`$ threshold, its contribution to the second moment, $`B_F\langle Nz^2\rangle `$, may be important, as illustrated in Fig. 4. This is because $`\langle Nz^2\rangle `$ for top quarks and $`W`$ bosons is significantly larger than that for the light fermions.
Below the $`W^+W^{-}`$ threshold, the neutralino can annihilate to a real $`W`$ and a virtual $`W`$. The $`W`$-bosons then decay independently into a fermion pair $`f\overline{f^{\prime }}`$, which can be $`\tau \nu `$, $`\mu \nu `$, $`e\nu `$, $`cs`$, or $`ud`$. About 10% of these decay into a muon (or anti-muon) and a muon anti-neutrino (or neutrino).
The $`WW^{\ast }`$ calculation is similar to the $`tW\overline{b}`$ calculation. In the $`v\rightarrow 0`$ limit, only chargino exchange in the $`t`$ and $`u`$ channels, shown in Fig. 5, is important. Neutrinos can be produced either by the virtual $`W^{\ast }`$ or by decay of the real $`W`$.
The neutrinos produced by decay of the muon or other fermions can be neglected, and the real $`W`$ boson has a probability $`\mathrm{\Gamma }_{W\rightarrow \mu \nu }`$ to decay to a muon neutrino. On the other hand, this $`W`$ boson is produced in all $`\chi \chi \rightarrow Wf\overline{f^{\prime }}`$ channels, and each of these channels has approximately the same cross section, so the contribution has a weight of $`n_{chan}\mathrm{\Gamma }_{W\rightarrow \mu \nu }\approx 1`$. The cross section and second moment for annihilation in the Earth are shown in Fig. 7 and Fig. 7. Below $`m_W/2`$, the four-body channel might become significant, and may smooth the jump in much the same fashion as the three-body channel does near the two-body channel threshold, but we will not consider it here.
## III Conclusions
The contributions of these three-body channels are important only in a limited region of parameter space. However, they may produce a large effect. In fact, our calculation shows that although the cross sections of these annihilations are significant only just below the two-body channel threshold, due to the high energy of the neutrinos they produce, they can enhance the neutrino signal by many times and actually dominate the neutrino signal far below the two-body threshold. Furthermore, the regions in question may be of particular interest. For example, motivated by collider data, Kane and Wells proposed a light higgsino [light higgsino], and recent DAMA results [DAMA] suggest a WIMP candidate with $`m_x\approx 60`$ GeV (but see [EllisCosmo98]). There are also arguments that the neutralino should be primarily gaugino with a mass somewhere below but near the top-quark mass [leszek].
There are many parameters in the minimal supersymmetric model. The results shown in Figs. 4 and 7 are of course model dependent, and these effects might be more or less important in models with different parameters.
###### Acknowledgements.
We thank G. Jungman for help with the neutdriver code. This work was supported by D.O.E. Contract No. DEFG02-92-ER 40699, NASA NAG5-3091, and the Alfred P. Sloan Foundation.
|
no-problem/9901/cond-mat9901311.html
|
ar5iv
|
text
|
# Numerical study of relaxation in electron glasses
## I Introduction
Strongly localized systems are characterized by very slow relaxation rates due to the exponential dependence of the transition rates on hopping length. For a wide range of parameters, the typical times involved are much larger than the experimental times and a glassy behavior is observed. Ben–Chorin et al. reported on non-ergodic transport in Anderson localized films of indium–oxide and ascribed the phenomena to the hopping transport in non-equilibrium states. Ovadyahu and Pollak performed further experiments on this system that clearly demonstrate the glassy nature of Anderson insulators. Glassy behavior may be obtained independently of the strength of interactions and regardless of their long or short range. In systems with localized states, long hopping lengths result in very long relaxation times. However, it is thought that there are specific features of the glassy relaxation behavior that indeed depend on the type and strength of the interactions involved. If so, relaxation experiments could be an adequate tool for studying the strength of interactions. There has been no systematic study of the effects of interactions on the relaxation properties of strongly localized systems, and in this paper we try to fill this gap as much as possible.
Most properties of systems with localized electronic states strongly depend on interactions. This is especially true for Coulomb glasses where interactions are of a long range character. The non–equilibrium properties of these systems are affected by dynamic correlations in the motion of electrons . One–particle densities of states or excitations are not enough to encompass the whole problem. To deal with such problems, methods were developed to obtain the low lying states and energies of electron glasses. The states of the system, their energies and the transition rates between them constitute the information needed to compute non-equilibrium properties. We use this information to study energy relaxation for systems with no interactions, with long-range Coulomb interactions and with short-range interactions.
In the next section, we describe the model and the numerical procedure used. In section III, we study the temporal dependence of energy relaxation and, in section IV, we calculate the largest relaxation time $`\tau _2`$ and its dependence on size and temperature. Finally, in section V, we present results about the number of electrons participating in low-energy relaxation processes.
## II Model and numerical procedure
We consider three–dimensional systems in the strongly localized regime, in which quantum overlap energies, $`h`$, arising from tunnelling are much smaller than the other important energies in the problem and are taken into account only to the lowest contributing order, i.e., to zero order for energies and to first order for transition rates. Spin is neglected since exchange energies are proportional to $`t^2`$. We use the standard tight–binding Coulomb gap Hamiltonian :
$$H=\sum _iϵ_in_i+\sum _{i<j}n_in_jV_{ij},$$
(1)
where $`ϵ_i`$ is the random site energy chosen from a box distribution with interval $`[-W/2,W/2]`$. For non-interacting systems $`V_{ij}=0`$, while $`V_{ij}=1/r`$ for systems with Coulomb interactions and $`V_{ij}=(0.7/r)^4`$ is the potential chosen for short range interactions. The large value of the Hubbard energy is accounted for by disallowing double occupation of sites.
We study systems with sizes from 248 to 900 sites placed at random (for short range interactions we only consider system sizes up to 465 sites), but with a minimum separation between them, which we choose to be $`0.5l_0`$ where $`l_0=(4\pi N/3)^{-1/3}`$ and $`N`$ is the concentration of sites. We take $`e^2/l_0`$ as our unit of energy and $`l_0`$ as our unit of distance. We choose the number of electrons to be equal to half the number of sites. We use cyclic boundary conditions.
We use two different numerical algorithms to obtain the ground state and the lowest energy many–particle configurations of the systems up to a certain energy. For short range interactions, we employ an algorithm that relaxes the system through certain simultaneous $`n`$–electron transitions . The procedure is repeated for different initial random configurations of the charges until the configuration of lowest energy is found ten times. The configurations thus generated were memorized in terms of site occupation numbers and of energy, whenever this was less than the highest energy configuration in memory storage. We complete the set of low–energy configurations by generating all the states that differ by one– or two–electron transitions from any configuration stored.
For long range interactions, we use an algorithm that consists of finding the low-energy many-particle configurations by means of a three-step algorithm . This comprises local search , thermal cycling and construction of “neighbouring” states by local re-arrangements of the charges . The efficiency of this algorithm is discussed in Ref. . In the first step, an initial set, $`𝒮`$, of metastable low-energy many-particle states is created. We start from states chosen at random. These states are relaxed by a local search algorithm which ensures stability with respect to excitations from one to four sites. In the second step, this set $`𝒮`$ is improved by means of the thermal cycling method, which combines the Metropolis and local search algorithms. Lastly, the third step completes the set $`𝒮`$ by systematical investigations of the surroundings of the states previously found.
The transition rate $`\omega _{IJ}`$ between configurations $`I`$ and $`J`$ is taken to be
$$\omega _{IJ}=\frac{1}{\tau _0}\mathrm{exp}\left(-2r_{ij}/a\right)\mathrm{exp}\left(-\frac{E_J-E_I}{kT}\right)$$
(2)
for $`E_J>E_I`$, and without the second exponential for $`E_J<E_I`$. In this equation, $`\tau _0`$ is the inverse phonon frequency, of the order of $`10^{-13}`$ s, $`a`$ is the localization radius, which we take equal to $`0.3l_0`$, and $`r_{ij}`$ is the minimized sum of the hopping lengths of the electrons participating in the transition.
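As a concrete illustration, Eq. (2) translates directly into a few lines of code. The sketch below (Python, with $`k_\mathrm{B}=1`$ and all quantities in the units defined in the text) is only a minimal version; the default parameter values are illustrative.

```python
import math

def transition_rate(r_ij, E_I, E_J, T, a=0.3, tau0=1.0):
    """Transition rate of Eq. (2) between configurations I and J.

    r_ij     -- minimized total hopping length (units of l_0)
    E_I, E_J -- configuration energies (units of e^2/l_0)
    T        -- temperature in the same energy units (k_B = 1)
    """
    rate = math.exp(-2.0 * r_ij / a) / tau0
    if E_J > E_I:                      # Boltzmann factor only for uphill moves
        rate *= math.exp(-(E_J - E_I) / T)
    return rate
```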
The relaxation process is governed by the master equation, which in first order can be written in matrix form as $`𝐩(t+\delta t)=\mathcal{M}𝐩(t)`$, where $`𝐩`$ is the vector of occupation probabilities in the configuration space, and $`\mathcal{M}`$ the matrix of transition probabilities between states during a time, $`\delta t`$, given by:
$$(\mathcal{M})_{JI}=\{\begin{array}{cc}\omega _{IJ}\delta t\hfill & \mathrm{for}\ I\neq J,\hfill \\ 1-\sum _{K\neq I}\omega _{IK}\delta t\hfill & \mathrm{for}\ I=J.\hfill \end{array}$$
(3)
We assume that the system initially occupies a set, $`𝒦`$, of $`m`$ configurations with equal probabilities, that is, $`p_K^{(0)}=1/m`$ for $`K\in 𝒦`$, and $`p_L^{(0)}=0`$ for all other $`L`$. The time evolution of $`𝐩`$ is governed by the eigenvalues $`\lambda _i`$ and right eigenvectors $`\vec{\varphi }_i`$ of $`\mathcal{M}`$. We will assume that the $`\lambda _i`$ are arranged in decreasing order. Rewriting $`𝐩^{(0)}`$ as a linear combination of the $`\vec{\varphi }_i`$, the probability vector after $`n`$ time steps $`𝐩^{(n)}`$ is given by
$$𝐩^{(n)}=a_1\vec{\varphi }_1+a_2\vec{\varphi }_2\lambda _2^n+a_3\vec{\varphi }_3\lambda _3^n+\mathrm{\cdots }$$
(4)
where $`a_i`$ is the $`i`$-th component of $`𝐩^{(0)}`$ in the basis $`\left\{\stackrel{}{\varphi }_i\right\}`$. At long times (large $`n`$), Eq. (4) approaches equilibrium with time dependences given by $`\lambda _i^n`$. Thus, the relaxation times are given by
$$\tau _i=\frac{1}{|\mathrm{ln}\lambda _i|}$$
(5)
in units of $`\delta t`$. The final state is $`p_M^{(\mathrm{\infty })}=\mathrm{exp}(-E_M/kT)/Z`$ for all $`M`$, where $`E_M`$ is the energy of state $`M`$, and $`Z`$ is the partition function. Clearly $`𝐩^{(\mathrm{\infty })}`$ is a right eigenvector of $`\mathcal{M}`$ with eigenvalue 1, since $`\mathcal{M}𝐩^{(\mathrm{\infty })}=𝐩^{(\mathrm{\infty })}`$. All the other eigenvalues of $`\mathcal{M}`$ are smaller than 1, since otherwise the system would not tend to the stationary probability distribution. The second largest eigenvalue corresponds to the largest relaxation time of the system. The addition of the other eigenvectors to $`\vec{\varphi }_1=𝐩^{(\mathrm{\infty })}`$ transfers $`𝐩`$ from high energy states to low energy states at various rates.
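For a small toy set of configurations, this spectral analysis can be reproduced with a few lines of NumPy; the sketch below builds the single-step matrix of Eq. (3) from a table of rates and extracts the relaxation times of Eq. (5). The rate table is an arbitrary stand-in for the real configuration space, and $`\delta t`$ must be chosen small enough that all eigenvalues stay positive.

```python
import numpy as np

def relaxation_times(rates, dt):
    """rates[j, i] -- toy rate for the transition i -> j (off-diagonal only)."""
    M = rates * dt
    np.fill_diagonal(M, 0.0)
    np.fill_diagonal(M, 1.0 - M.sum(axis=0))      # Eq. (3): columns sum to one
    lam = np.linalg.eigvals(M)
    lam = lam[np.argsort(-lam.real)].real          # lambda_1 = 1 is equilibrium
    taus = 1.0 / np.abs(np.log(lam[1:]))           # Eq. (5), in units of dt
    return lam, taus
```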
We have developed a renormalization method to be able to properly handle the huge range of transition rates involved. Large values of $`\tau _i`$ correspond to $`\lambda _i`$ with values which are very close to unity, Eq. (5), and a direct calculation of $`\tau _i`$, in units of $`\delta t`$, is strongly limited by the numerical precision of the computer. In order to minimize errors, we must choose a $`\delta t`$ which is as large as possible, although this soon yields negative diagonal elements of $`\mathcal{M}`$. We overcome this problem using a renormalization procedure that allows us to increase $`\delta t`$ and to simultaneously keep all terms of $`\mathcal{M}`$ positive. This procedure forms groups of configurations. Each group is made up of configurations connected between themselves by transition rates which are larger than a critical one. The groups are clusters in local equilibrium for times greater than the inverse of the critical transition rate. Firstly, we take a critical transition rate $`\omega _\mathrm{c}`$. Then for each $`\omega _{IJ}`$ larger than $`\omega _\mathrm{c}`$ we define a new equilibrium state, $`M`$, and substitute the original configurations, $`I`$ and $`J`$, by this new state, $`M`$. The transition rates between $`M`$ and any other configuration $`K`$ ($`K\neq I,J`$) are defined as:
$`\omega _{KM}=\omega _{KI}+\omega _{KJ}`$ (6)
$`\omega _{MK}={\displaystyle \frac{\omega _{IK}}{1+R_M}}+{\displaystyle \frac{\omega _{JK}}{1+R_M^{-1}}}`$ (7)
where $`R_M`$ is given by:
$$R_M=\frac{\omega _{IJ}}{\omega _{JI}}=\mathrm{exp}\left\{(E_I-E_J)/k_\mathrm{B}T\right\}.$$
(8)
The diagonal matrix elements $`\omega _{MM}`$ are again equal to 1 minus the sum of the non-diagonal elements of the column $`M`$ multiplied by $`\mathrm{\Delta }t`$.
After the matrix $`\mathcal{M}`$ has been renormalized by the previous procedure, we can increase the time scale to a larger interval $`\delta ^{\prime }t=1/\omega _\mathrm{c}`$. With this $`\delta ^{\prime }t`$ we calculate the new elements of $`\mathcal{M}`$. The eigenvalues of the transition matrix will be given now in units of $`\delta ^{\prime }t(>\delta t)`$. We have checked the validity of our renormalization procedure with several samples of small systems where errors are not critical. The method minimizes computer errors in the solution of the eigenproblem as the matrix becomes less ill-conditioned, and allows us to consider large systems, with matrix elements that differ by many orders of magnitude.
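A toy version of this merging step is sketched below in Python; it applies Eqs. (6)–(8) literally to a dictionary of rates (with $`k_\mathrm{B}=1`$) and ignores the bookkeeping of removing the old entries for $`I`$ and $`J`$.

```python
import math

def merge_pair(w, E, i, j, T):
    """Create the locally equilibrated state replacing configurations i and j.

    w -- dict of rates, w[(a, b)] = rate of the transition a -> b
    E -- dict of configuration energies
    """
    R = math.exp((E[i] - E[j]) / T)                               # Eq. (8)
    m = ("merged", i, j)                                          # new state label
    new_w = dict(w)
    others = {a for pair in w for a in pair if a not in (i, j)}
    for k in others:
        new_w[(k, m)] = w.get((k, i), 0.0) + w.get((k, j), 0.0)   # Eq. (6)
        new_w[(m, k)] = (w.get((i, k), 0.0) / (1.0 + R)
                         + w.get((j, k), 0.0) / (1.0 + 1.0 / R))  # Eq. (7)
    return new_w, m
```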
## III Temporal dependence
We calculate the temporal dependence of the energy of the system when it relaxes from an initial set of high energy configurations. At very long times, the longest relaxation process involved predominates and we see an exponential relaxation. For shorter times, there is an almost continuous sequence of relaxation times, which gives rise to a power law relaxation $`(E-E_{\mathrm{eq}})\propto t^{-\alpha }`$. To obtain the exponent of this law it is convenient to represent the absolute derivative of the energy with respect to time. In Fig. 1 we show $`|\mathrm{d}E/\mathrm{d}t|`$ versus time (in units of $`\tau _0`$) in a double $`\mathrm{log}_{10}`$ plot for a sample with Coulomb interactions and 248 sites. The continuous curve corresponds to a temperature $`T=0.004`$, and the dashed curve to $`T=0.005`$. The straight line is a fit to the data in the non–exponential part of both curves, and its slope is equal to $`-1.15`$. So the power–law exponent for relaxation is $`\alpha =0.15`$. This exponent is basically independent of temperature for all the systems considered.
We have also studied energy relaxation for systems with short–range interactions and for non–interacting systems. The results for short–range interactions are very similar to those for Coulomb interactions. The power–law exponent is roughly 0.15 and the largest relaxation time is of the same order of magnitude as for Coulomb systems. In Fig. 2 we show $`|\mathrm{d}E/\mathrm{d}t|`$ as a function of time in a double $`\mathrm{log}_{10}`$ plot for a non–interacting system with $`N=248`$ sites. The continuous curve is for $`T=0.004`$, and the dashed curve for $`T=0.005`$. The slope of the straight line is again equal to $`-1.15`$. There are two differences between the results for interacting and for non–interacting systems. The longest relaxation times are shorter for the latter, and the power–law regime is not very well defined in the absence of interactions. Both figures give the rate of relaxation $`|\mathrm{d}E/\mathrm{d}t|`$ at any time. At very small $`t`$, the interacting systems relax faster than the non–interacting systems. A possible explanation of this is that in the excited state of the interacting systems some electrons get very close to each other. In the initial stages of relaxation these electrons hop away from electrons in the nearest-neighbor sites, the whole process being very fast.
Several samples have been checked and in all of them we obtain similar results to Figs. 1 and 2. Two features characterize our relaxation process, the exponent $`\alpha `$ of the power–law regime and the longest relaxation time. The exponents $`\alpha `$ do not appreciably vary from sample to sample, nor with temperature or with the type of interaction. On the other hand, the longest relaxation time drastically changes from sample to sample and with changes in temperature, size and the range of interaction. On average, this time increases with the size of the system and with the strength of the interactions. In the next section we study the longest relaxation time in detail. Now we shall analyze exponent $`\alpha `$.
Temporal relaxation can be described as a sum of parallel exponential relaxation processes, each with its own different relaxation time, $`\tau _i`$. The energy, $`E`$, of the system can be written as a function of time, $`t`$, as follows
$$E(t)=\sum _{i>1}c_i\mathrm{exp}\left(-\frac{t}{\tau _i}\right)+E_{\mathrm{Eq}}$$
(9)
where $`c_i`$ is the product of the $`i`$-th component of the initial occupation vector, $`a_i`$, and the energy associated to the eigenvector $`\varphi _i`$. This energy is the sum of the components of $`\varphi _i`$ multiplied by the corresponding energies. $`E_{\mathrm{Eq}}`$ is the equilibrium energy, i. e. $`E_{\mathrm{Eq}}=E(t\rightarrow \mathrm{\infty })`$. In Fig. 3 we plot $`(c_i/\tau _i)\mathrm{exp}(-t/\tau _i)`$ for the 30 largest eigenvalues of $`\mathcal{M}`$, excluding $`\lambda _1=1`$, as a function of time for a sample with Coulomb interactions and of size $`N=465`$. The solid line represents the temporal derivative of the actual energy as a function of time. This curve is below, but very close to, the envelope of the curves corresponding to the individual relaxation processes. Note how the combination of several simple exponential relaxation processes gives rise to a power law relaxation.
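That a quasi-continuous spectrum of exponentials produces an apparent power law is easy to verify numerically. In the Python fragment below the relaxation times are spread logarithmically and the weights are set equal; both choices are purely illustrative, and for this particular toy spectrum the log–log slope of $`|\mathrm{d}E/\mathrm{d}t|`$ comes out close to $`-1`$ rather than the $`-1.15`$ found in the simulations.

```python
import numpy as np

taus = np.logspace(1, 8, 30)       # toy spectrum of relaxation times
c = np.ones_like(taus)             # illustrative equal weights
t = np.logspace(0, 9, 400)

# |dE/dt| from Eq. (9): sum_i (c_i / tau_i) exp(-t / tau_i)
dEdt = np.sum((c / taus)[:, None] * np.exp(-t[None, :] / taus[:, None]), axis=0)

slope = np.polyfit(np.log10(t[50:250]), np.log10(dEdt[50:250]), 1)[0]
print("approximate log-log slope:", slope)
```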
Surprisingly, $`\alpha `$ is fairly independent of temperature, size, type of interaction considered, and localization radius, facts for which we do not have any interpretation. Anyway, the robustness of the exponent could be a signature of self–organized criticality. Similar trends have been found in experimental measurements of the excess conductance of 2D samples excited far from equilibrium . In the absence of magnetic field, the power law exponent of these measurements ranges between 0.27 and 0.29, diminishing with the strength of the magnetic field.
Our results point to the difficulty in extracting information about the effects of interactions from the power law exponent. Nevertheless, the type of interaction significantly affects the longest relaxation times.
## IV Longest relaxation time
We also study the longest relaxation time, $`\tau _2`$, as a function of temperature and the size of the sample for systems with Coulomb interactions, with short-range interactions and for non-interacting systems. In Fig. 4 we plot $`\mathrm{log}_{10}\tau _2`$ versus the inverse of the temperature for the three types of interactions mentioned, Coulomb (solid lines), short-range (dotted-dashed lines) and no-interactions (dashed lines). The number of sites considered are $`N=248`$, 341, 465, 744 and 899, for long range interactions and for non-interacting systems; for short range interactions we did not use the two largest sizes. $`\tau _2`$ increases with sample size, and thus the smallest sample corresponds to the lowest curve, and so on. $`\langle \rangle `$ denotes averages over site configurations. Fluctuations in $`\tau _2`$ from sample to sample are very large and, as is the case with most properties of disordered systems, one has to average the common logarithm of $`\tau _2`$, rather than $`\tau _2`$ itself. The curves extend over the range of validity of the results. The ’high’ temperature limit $`T_{\mathrm{max}}`$ depends on the energy range $`\mathrm{\Delta }E`$ spanned by the configurations stored. We choose $`T_{\mathrm{max}}=0.1\mathrm{\Delta }E`$. The low temperature limit arises from the discrete nature of the spectrum of configurations and we take it as being equal to the mean energy spacing of the ten lowest energy configurations $`\mathrm{\Delta }ϵ`$.
From Fig. 4 we can conclude that the longest relaxation time depends strongly on the type of interaction. $`\tau _2`$ is one order of magnitude larger for interacting than for non-interacting systems. As we will see, this effect is much larger when extrapolated to macroscopic sizes.
In order to extrapolate the previous results to macroscopic sizes we plotted $`\mathrm{log}_{10}\tau _2`$ as a function of $`L^{-\beta }`$ at a fixed temperature for different values of the exponent $`\beta `$. $`L=N^{1/3}`$ is the length of the side of the system, and $`N`$ is the number of sites. We found that the results for the three types of interactions fit straight lines fairly well when $`\beta =1`$. In Fig. 5 we show $`\mathrm{log}_{10}\tau _2`$ versus $`L^{-1}`$ for systems with Coulomb interactions (dots), short-range interactions (diamonds) and without interactions (squares). The horizontal dashed line represents a macroscopic time, say, one day ($`10^{18}\tau _0`$). The temperature chosen in this plot is $`T=0.0025`$, which is valid for the four sizes employed in both types of interactions. The size of the symbols used roughly corresponds to the standard deviation of $`\mathrm{log}_{10}\tau _2`$. The crossing point of each straight line with the vertical axis is the extrapolation of $`\tau _2`$ to macroscopic sizes. The results are $`\tau _2^{(\mathrm{\infty })}\approx 10^{31\pm 1}\tau _0=10^{18\pm 1}`$ s (Coulomb interactions), $`\tau _2^{(\mathrm{\infty })}\approx 10^{11\pm 1}`$ s (short-range interactions) and $`\tau _2^{(\mathrm{\infty })}\approx 10^{5\pm 1}`$ s (no interactions). It is clear from this figure that the longest relaxation time drastically increases with the strength of interactions, although these results have to be taken with care as they are extracted from a very long extrapolation.
The results presented in Figs. 4 and 5 correspond to a localization radius $`a=0.3l_0`$. For larger values of $`a`$, the relaxation times will decrease, as can be deduced from Eq. (2). We have found empirically that a change in $`a`$ causes a change in $`\tau _2`$ of approximately $`\mathrm{\Delta }\mathrm{log}_{10}\tau _2\approx 3\mathrm{\Delta }(a^{-1})`$. The values of $`\tau _2^{(\mathrm{\infty })}`$ are so large for interacting systems that we would expect non–ergodic behaviour for these systems even for much larger localization radii than the one considered here.
## V Variable number relaxation
At zero temperature, the relaxation process is downward in energy and we can assume that the fastest process always dominates, corresponding to a well defined sequence of configurations with decreasing energies. For each transition at $`T=0`$, the shorter the hopping length, the faster the corresponding transition rate. From each configuration, the system chooses the nearest one (in terms of $`r`$) from those with less energy. With this in mind, we have computed for all low-energy configurations the closest one of smaller energy, and have stored the number of electrons $`n`$ participating in the transition.
In Fig. 6 we show the number of electrons, $`n`$, of the fastest transition from an initial configuration as a function of the number of this configuration for a Coulomb interacting sample with 900 sites. At very low energies, the relative importance of many-electron transitions increases. The proportion of transitions with a fixed number of electrons greater than one ($`n>1`$) increases with decreasing energy. Obviously, in the non-interacting case all processes are one-electron transitions.
## VI Conclusions
Our numerical results of relaxation in localized electronic systems show a power law behavior with an exponent close to 0.15 and independent of all the parameters and types of interactions considered. At very long times, we obtain exponential relaxation with a characteristic time that strongly varies with size, localization radius and type of interaction. The extrapolation of this characteristic time to macroscopic sizes predicts values much larger than the typical experimental times, especially for the interacting cases. The strength of interactions in experiments performed on these systems can be deduced from their longest relaxation times.
## VII Acknowledgments
We would like to acknowledge Prof. M. Pollak for useful conversations and a critical reading of the manuscript. We also acknowledge the Dirección General de Investigación Científica y Técnica for financial support, project number PB 96/1118, and for APG’s grant.
|
no-problem/9901/quant-ph9901021.html
|
ar5iv
|
text
|
SEARCHING IN GROVER’S ALGORITHM
Richard Jozsa
School of Mathematics and Statistics
University of Plymouth
Plymouth, Devon PL4 8AA, England.
Email: rjozsa@plymouth.ac.uk
ABSTRACT
Grover’s algorithm is usually described in terms of the iteration of a compound operator of the form $`Q=-HI_0HI_{x_0}`$. Although it is quite straightforward to verify the algebra of the iteration, this gives little insight into why the algorithm works. What is the significance of the compound structure of $`Q`$? Why is there a minus sign? Later it was discovered that $`H`$ could be replaced by essentially any unitary $`U`$. What is the freedom involved here? We give a description of Grover’s algorithm which provides some clarification of these questions.
INTRODUCTION
Grover’s quantum searching algorithm is usually described in terms of the iteration of a compound operator $`Q`$ of the form
$$Q=-HI_0HI_{x_0}$$
(1)
on a starting state $`|\psi _0\rangle =H|0\rangle `$. Here $`H`$ is the Walsh-Hadamard transform and $`I_0,I_{x_0}`$ are suitable inversion operators (c.f. below).<sup>1</sup><sup>1</sup>1 This $`Q`$ is the one used in . Grover uses instead $`Q^{GR}=-I_0HI_{x_0}H`$ iterated on $`|\psi _0\rangle =|0\rangle `$ which is clearly an equivalent process. Later it was discovered that $`H`$ may be replaced by essentially any unitary operation $`U`$ and using
$$Q=-UI_0U^{-1}I_{x_0}\hspace{2em}|\psi _0\rangle =U|0\rangle $$
(2)
the searching algorithm still works just as well as before (with at most a constant slowdown in the number of iterations). At first sight this appeared remarkable since $`H`$ is known to be singularly significant for other quantum algorithms . The efficacy of these other algorithms could be understood in terms of the fast Fourier transform construction but Grover’s algorithm appears to rest on different principles. Although it is quite straightforward to work through the algebra of the algorithm , this provides little insight into why it works! The operator $`HI_0H`$ was originally called a “diffusion” operator and later interpreted as “inversion in the average” but neither of these appears to provide much heuristic insight (especially in the context of the more general eq. (2)). What is the significance of the particular compound structure of $`Q`$ for a searching problem? What is the significance of the minus sign in $`Q`$? Why can $`H`$ be replaced by an arbitrary $`U`$ – what is the freedom involved here? The purpose of this note is to give a different description of Grover’s algorithm which provides some clarification of these issues. We will show that the algorithm may be seen to be a consequence of the following elementary theorem of 2-dimensional real Euclidean geometry:
Theorem 1: Let $`M1`$ and $`M2`$ be two mirror lines in the Euclidean plane $`\text{IR}^2`$ intersecting at a point $`O`$ and let $`\alpha `$ be the angle in the plane from $`M1`$ to $`M2`$. Then the operation of reflection in $`M1`$ followed by reflection in $`M2`$ is just rotation by angle $`2\alpha `$ about the point $`O`$.
Figure 1. Reflection in $`M1`$ followed by reflection in $`M2`$ is equivalent to rotation about $`O`$ through angle $`2\alpha `$.
THE SEARCH PROBLEM
The search problem is often phrased in terms of an exponentially large unstructured database with $`N=2^n`$ records, of which one is specially marked. The problem is to locate the special record. Elementary probability theory shows that classically if we examine $`k`$ records then we have probability $`k/N`$ of finding the special one so we need $`O(N)`$ such trials to find it with any constant (independent of $`N`$) level of probability. Grover’s quantum algorithm achieves this result with only $`O(\sqrt{N})`$ steps (or more precisely $`O(\sqrt{N})`$ iterations of $`Q`$ but $`O(\sqrt{N}\mathrm{log}N)`$ steps, the $`\mathrm{log}N`$ term coming from the implementation of $`H`$.) It may be shown that the square root speedup of Grover’s algorithm is optimal within the context of quantum computation.
The search problem may be more accurately phrased in terms of an oracle problem, which we adopt here. In the description above, there is a potential difficulty concerning the physical realisation of an exponentially large unstructured database. One might expect that it will require exponentially many degrees of freedom of some physical resource, such as space, and consequently it may need exponential (i.e. $`O(N)`$) effort or time just to access a typical (remotely lying) record. We will replace the database by an oracle which computes an $`n`$ bit function $`f:B^n\rightarrow B`$ (where $`B=\{0,1\}`$). It is promised that $`f(x)=0`$ for all $`n`$ bit strings except exactly one string, denoted $`x_0`$ (the “marked” position) for which $`f(x_0)=1`$. Our problem is to determine $`x_0`$. We assume as usual that $`f`$ is given as a unitary transformation $`U_f`$ on $`n+1`$ qubits defined by
$$U_f|x\rangle |y\rangle =|x\rangle |y\oplus f(x)\rangle $$
(3)
Here the input register $`|x\rangle `$ consists of $`n`$ qubits as $`x`$ ranges over all $`n`$ bit strings and the output register $`|y\rangle `$ consists of a single qubit with $`y=0`$ or 1. The symbol $`\oplus `$ denotes addition modulo 2.
Figure 2. The action of $`U_f`$ on a general basis state $`|x\rangle |y\rangle `$ of the input and output registers.
The assumption that the database was unstructured is formalised here as the standard oracle idealisation that we have no access to the internal workings of $`U_f`$ – it operates as a “black box” on the input and output registers. In this formulation there is no problem with the access to $`f(x)`$ for any of the exponentially many $`x`$ values and indeed we may also readily query the oracle with a superposition of input values.
Instead of using $`U_f`$ we will generally use an equivalent operation denoted $`I_{x_0}`$ on $`n`$ qubits. It is defined by
$$I_{x_0}|x\rangle =\{\begin{array}{cc}\hfill |x\rangle & \text{if }x\neq x_0\text{ }\hfill \\ \hfill -|x_0\rangle & \text{if }x=x_0\hfill \end{array}$$
(4)
i.e. $`I_{x_0}`$ simply inverts the amplitude of the $`|x_0\rangle `$ component. If $`x_0`$ is the $`n`$ bit string $`00\mathrm{\dots }0`$ then $`I_{x_0}`$ will be written simply as $`I_0`$.
A black box which performs $`I_{x_0}`$ may be simply constructed from $`U_f`$ by just setting the output register to $`\frac{1}{\sqrt{2}}(|0\rangle -|1\rangle )`$. Then the action of $`U_f`$ leaves the output register in this state and effects $`I_{x_0}`$ on the input register<sup>2</sup><sup>2</sup>2Note that, conversely, $`U_f`$ may be constructed from a black box for $`I_{x_0}`$ as follows. Let $`J_{x_0}`$ denote the operation “$`I_{x_0}`$ controlled by the output qubit”. Apply $`H`$ to the output register, then $`J_{x_0}`$ to the input and output registers, then $`H`$ again to the output register. The total effect is just $`U_f`$ on the $`n+1`$ qubits of both registers, as the reader may verify. To construct $`J_{x_0}`$ from $`I_{x_0}`$ we also need an eigenstate of $`I_{x_0}`$. The construction is described in or .:
Figure 3. Construction of $`I_{x_0}`$ from $`U_f`$. Here $`|\psi \rangle `$ is any $`n`$-qubit state.
Our searching problem becomes the following: we are given a black box which computes $`I_{x_0}`$ for some $`n`$ bit string $`x_0`$ and we want to determine the value of $`x_0`$.
REFLECTIONS ON REFLECTIONS
We first digress briefly to record some elementary properties of reflections which will provide the basis for our interpretation of Grover’s algorithm.
In two real dimensions:
In real 2 dimensional Euclidean space, let $`M`$ be any straight line through the origin specified by a unit vector $`v`$ perpendicular to $`M`$. Let $`I_v`$ denote the operation of reflection in $`M`$. Note that if $`u`$ is any vector we may write it uniquely as a sum of components parallel and perpendicular to $`v`$. If $`v^{\perp }`$ is a unit vector lying along $`M`$ then we have
$$u=av+bv^{\perp }$$
and $`I_v`$ simply replaces $`a`$ by $`-a`$.
In $`N`$ complex dimensions:
Note that $`I_v`$ is exactly like $`I_{x_0}`$ in eq. (4) above except that there, we were in a complex space of higher dimension. We may interpret $`I_{x_0}`$ as a reflection in the hyperplane orthogonal to $`|x_0\rangle `$. In terms of formulas, to reflect the $`x_0`$ amplitude we have
$$I_{x_0}=I-2|x_0\rangle \langle x_0|$$
(5)
where $`I`$ is the identity operator. More generally if $`|\psi \rangle `$ is any state we define
$$I_{|\psi \rangle }=I-2|\psi \rangle \langle \psi |$$
(6)
Then $`I_{|\psi \rangle }`$ is the operation of reflection in the hyperplane<sup>3</sup><sup>3</sup>3More generally, for any subspace $`D`$ we may define $`I_D`$ by $`I_D=I-2\sum _d|d\rangle \langle d|`$ where $`\{|d\rangle \}`$ is any orthonormal basis of $`D`$. Then $`I_D`$ is reflection in the orthogonal complement $`D^{\perp }`$ of the subspace $`D`$. orthogonal to $`|\psi \rangle `$. For any state $`|\chi \rangle `$ we may uniquely express it as a sum of components parallel and orthogonal to $`|\psi \rangle `$ and $`I_{|\psi \rangle }`$ simply inverts the parallel component.
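The operator of eq. (6) is also easy to play with numerically. The short NumPy sketch below builds $`I_{|\psi \rangle }`$ for an arbitrary normalised vector and checks the two defining properties of a reflection (it is unitary and squares to the identity) as well as the fact that it sends $`|\psi \rangle `$ to $`-|\psi \rangle `$; the test vector is an arbitrary choice.

```python
import numpy as np

def reflection(psi):
    """I_psi = I - 2|psi><psi| for a normalised complex vector psi."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.eye(len(psi)) - 2.0 * np.outer(psi, psi.conj())

psi = np.array([1.0, 1.0, 0.0, 1.0j]) / np.sqrt(3.0)
I_psi = reflection(psi)
assert np.allclose(I_psi @ I_psi.conj().T, np.eye(4))   # unitary
assert np.allclose(I_psi @ I_psi, np.eye(4))            # squares to identity
assert np.allclose(I_psi @ psi, -psi)                   # flips |psi>
```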
We have the following simple properties of $`I_{|\psi }`$:
Lemma 1: If $`|\chi \rangle `$ is any state then $`I_{|\psi \rangle }`$ preserves the 2-dimensional subspace $`𝒮`$ spanned by $`|\chi \rangle `$ and $`|\psi \rangle `$.
Proof: Geometrically, $`𝒮`$ and the mirror hyperplane are orthogonal to each other (in the sense that the orthogonal complement of either subspace is contained in the other subspace) so the reflection preserves $`𝒮`$. Alternatively in terms of algebra, eq. (6) shows that $`I_{|\psi \rangle }`$ takes $`|\psi \rangle `$ to $`-|\psi \rangle `$ and for any $`|\chi \rangle `$, it adds a multiple of $`|\psi \rangle `$ to $`|\chi \rangle `$. Hence any linear combination is mapped to a linear combination of the same two states $`\square `$.
Lemma 2: For any unitary operator $`U`$
$$UI_{|\psi \rangle }U^{-1}=I_{U|\psi \rangle }$$
Proof: Geometrically we are just changing description (reference basis) by $`U^{-1}`$ but the result is also immediate from eq. (6):
$$UI_{|\psi \rangle }U^{-1}=I-2U|\psi \rangle \langle \psi |U^{-1}=I-2|U\psi \rangle \langle U\psi |=I_{U|\psi \rangle }\square .$$
Looking back at eq. (2) we see that
$$Q=-I_{U|0\rangle }I_{|x_0\rangle }$$
(7)
Back to two real dimensions:
By lemma 1, both $`I_{x_0}`$ and $`I_{U|0}`$ preserve the two dimensional subspace $`𝒱`$ spanned by $`|x_0`$ and $`U|0`$ . Hence by eq. (7), $`Q`$ preserves $`𝒱`$ too. Now we may introduce a basis $`\{|e_1,|e_2\}`$ into $`𝒱`$ such that $`U|0`$ and $`|x_0`$ up to an overall phase, have real coordinates. Indeed choose $`|e_1=U|0`$ so $`U|0`$ has coordinates $`(1,0)`$. Then $`e^{i\xi }|x_0=a|e_1+b|e_2`$ where $`|e_2`$, orthonormal to $`|e_1`$, still has an overall phase freedom. Thus choose $`\xi `$ to make $`a`$ real and the phase of $`|e_2`$ to make $`b`$ real. Then in this basis, since $`U|0`$ and $`|x_0`$ have real coordinates, the operators $`I_{x_0}`$ and $`I_{U|0}`$ when acting on $`𝒱`$, are also described by real 2 by 2 matrices – in fact they are just the real 2 dimensional reflections in the lines perpendicular to $`|x_0`$ and $`U|0`$ in $`𝒱`$. Finally we have:
Lemma 3: For any 2 dimensional real $`v`$ we have
$$-I_v=I_{v^{\perp }}$$
where $`v^{\perp }`$ is a unit vector perpendicular to $`v`$.
Proof: For any vector $`u`$ we write $`u=av+bv^{\perp }`$. Then $`I_v`$ just reverses the sign of $`a`$ and $`I_{v^{\perp }}`$ reverses the sign of $`b`$. Thus the action of $`-I_v`$ is the same as that of $`I_{v^{\perp }}`$ $`\square `$.
Later, this lemma will explain the significance of the minus sign in eq. (2). For the present, note that from eq. (7) we can write
$$Q=I_{|w\rangle }I_{|x_0\rangle }$$
where $`|w\rangle `$ is orthogonal to $`U|0\rangle `$ and lies in the plane of $`U|0\rangle `$ and $`|x_0\rangle `$. Since we are working with real coordinates, theorem 1 shows that $`Q`$, acting in $`𝒱`$, is just the operation of rotation through angle $`2\alpha `$ where $`\alpha `$ is the angle between $`|w\rangle `$ and $`|x_0\rangle `$ i.e. $`\mathrm{cos}\alpha =\langle x_0|w\rangle `$. Since $`U|0\rangle `$ is perpendicular to $`|w\rangle `$ we can write $`\mathrm{sin}\alpha =\langle x_0|U|0\rangle `$.
GROVER’S ALGORITHM
We now give an interpretation of the workings of the quantum searching algorithm in view of the preceeding simple facts about reflections. Given the black box $`I_{x_0}`$ how can we identify $`x_0`$? Surely we must apply $`I_{x_0}`$ to some state (we can do nothing else with a black box!) but there seems no reason a priori to choose any one state rather than any other. So let us just choose a state $`|w`$ at random. $`|w`$ may be written as $`U|0`$ where $`U`$ is chosen at random.
Now by lemma 1, $`I_{x_0}`$ preserves the subspace spanned by $`|w\rangle `$ and $`|x_0\rangle `$ and by theorem 1, $`I_{|w\rangle }I_{x_0}`$ provides a way of moving around in this subspace – it is just rotation by twice the angle between $`|x_0\rangle `$ and $`|w\rangle `$. (Note that $`I_{|w\rangle }`$ may be constructed via lemma 2 as $`UI_0U^{-1}`$.) The idea now is to try to use this motion to move from the known starting state $`|w\rangle `$ towards the unknown $`|x_0\rangle `$. This process has been called “amplitude amplification” as we are effectively trying to enhance the amplitude of the $`|x_0\rangle `$ component of the state. Once we are near to $`|x_0\rangle `$ then a measurement of the state in the standard basis $`\{|x\rangle \}`$ will reveal the value of $`x_0`$ with high probability.
However there is an apparent problem: we do not know $`x_0`$ so we know neither the angle $`\beta `$, between $`|x_0`$ and $`|w`$, nor the angle $`2\beta `$ of rotation of $`I_{|w}I_{x_0}`$. Hence we do not know how many times to apply the rotation to move $`|w`$ near to $`|x_0`$. Remarkably we can solve this problem by using the extra information that $`|x_0`$ is known to be a member of a particular basis $`\{|x\}`$ of $`N`$ orthonormal states! If we choose
$$|w_0\rangle =\frac{1}{\sqrt{N}}\sum _x|x\rangle $$
(8)
to be a uniform superposition of all the $`|x`$’s then whatever the value of $`x_0`$ is, we have that $`x_0|w_0=\frac{1}{\sqrt{N}}`$ and hence we will know that the angle $`\beta `$ is given by $`\mathrm{cos}\beta =\frac{1}{\sqrt{N}}`$ in every possible case! Note that for large $`N`$ (the usual case of interest) $`|x_0`$ and $`|w_0`$ are nearly orthogonal so $`2\beta `$ is near to $`\pi `$. This will typically be the case for any $`|w`$ chosen at random in a large Hilbert space – it will tend with high probability to be nearly orthogonal to any previously fixed state such as $`|x_0`$.
Now, $`I_{|w_0\rangle }I_{x_0}`$, rotating through nearly $`\pi `$, acts rather wildly on the space, moving vectors through a great distance and we would prefer to have a gentler incremental motion of $`|w_0\rangle `$ towards $`|x_0\rangle `$. One way of doing this is to use instead the operation $`(I_{|w_0\rangle }I_{x_0})^2`$ rotating through $`4\beta `$ which is near to $`2\pi `$ i.e. 0 mod $`2\pi `$. Since $`\mathrm{cos}\beta =\frac{1}{\sqrt{N}}`$ we have that $`4\beta `$ mod $`2\pi `$ is $`O(\frac{1}{\sqrt{N}})`$. (To see this, write $`\beta =\frac{\pi }{2}-\alpha `$ so $`\mathrm{sin}\alpha =\mathrm{cos}\beta =\frac{1}{\sqrt{N}}`$. Then $`4\beta =2\pi -4\alpha `$ and $`\alpha \approx \frac{1}{\sqrt{N}}`$ for large $`N`$). A second way of dealing with the large $`\beta `$ problem – the way actually used in Grover’s algorithm – is to simply put a minus sign in front of $`I_{|w_0\rangle }I_{x_0}`$! This explains the role of the minus sign in eq. (2). Indeed by lemma 3, $`-I_{|w_0\rangle }I_{x_0}=I_{|w_0^{\perp }\rangle }I_{x_0}`$ where $`|w_0^{\perp }\rangle `$ is orthogonal to $`|w_0\rangle `$ in the subspace spanned by $`|w_0\rangle `$ and $`|x_0\rangle `$ and now the angle $`\alpha `$ between $`|w_0^{\perp }\rangle `$ and $`|x_0\rangle `$ is given by $`\mathrm{cos}\alpha =\langle x_0|w_0^{\perp }\rangle `$ i.e. $`\mathrm{sin}\alpha =\langle x_0|w_0\rangle `$ so $`\alpha \approx \frac{1}{\sqrt{N}}`$. Again we will need $`O(\sqrt{N})`$ iterations of the rotation $`-I_{|w_0\rangle }I_{x_0}`$ through $`2\alpha `$ to span the angle between $`|w_0\rangle `$ and $`|x_0\rangle `$.
In conclusion, we choose the starting state $`|w_0\rangle `$ of eq. (8) and apply $`O(\sqrt{N})`$ times, the operator
$$Q=I_{|w_0^{\perp }\rangle }I_{x_0}=-I_{|w_0\rangle }I_{x_0}=-UI_0U^{-1}I_{x_0}$$
where $`U`$ is any unitary operation with $`U|0\rangle =|w_0\rangle `$ (for example $`U=H`$). The significance of the minus sign (in the context of reflection operations) is to convert a nearly orthogonal pair of directions to a nearly parallel pair (c.f. lemma 3). The composite structure of $`Q`$ is just to build a rotation as a product of two reflections (c.f. theorem 1) and the random choice of $`U`$ just picks a random starting state in the two dimensional subspace, which is then moved towards $`|x_0\rangle `$. The exact number of iterations of $`Q`$ required depends on knowledge of the angle between $`|x_0\rangle `$ and $`U|0\rangle `$. If $`U|0\rangle =|w_0\rangle =\frac{1}{\sqrt{N}}\sum _x|x\rangle `$ then this is explicitly known, but if $`U`$ is chosen more generally at random then we will not know this angle. However $`Q`$ will still generally be a small rotation through some angle of order $`O(\frac{1}{\sqrt{N}})`$ but we will not know when to stop the iterations. Nevertheless the process will still move $`U|0\rangle `$ near to $`|x_0\rangle `$ in $`O(\sqrt{N})`$ steps. It will fail only in the unlikely situation that the randomly chosen $`U|0\rangle `$ happens to be exactly orthogonal to the unknown $`|x_0\rangle `$ (i.e. $`U`$ has zero matrix element $`U_{0x_0}`$ as noted in ). $`Q`$ is then a rotation through an angle of zero.
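The whole construction is easy to check with a small state-vector simulation. The NumPy sketch below builds $`I_{x_0}`$ and $`I_{|w_0\rangle }`$ as explicit matrices, iterates $`Q=-I_{|w_0\rangle }I_{x_0}`$ the number of times suggested by the rotation picture, and prints the probability of measuring the marked item; the size $`N`$ and the position $`x_0`$ are arbitrary toy choices.

```python
import numpy as np

N, x0 = 256, 37                                 # toy search space and marked item
w0 = np.ones(N) / np.sqrt(N)                    # uniform starting state H|0...0>

I_x0 = np.eye(N); I_x0[x0, x0] = -1.0           # inverts the |x0> amplitude
I_w0 = np.eye(N) - 2.0 * np.outer(w0, w0)       # reflection orthogonal to |w0>
Q = -I_w0 @ I_x0

alpha = np.arcsin(1.0 / np.sqrt(N))
n_iter = int(round((np.pi / 2 - alpha) / (2 * alpha)))   # make (2n+1)*alpha ~ pi/2

psi = w0.copy()
for _ in range(n_iter):
    psi = Q @ psi
print(n_iter, abs(psi[x0]) ** 2)                # probability of x0, close to 1
```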
Having identified $`W`$ as a rotation through $`2\alpha `$ in the plane $`𝒱`$ of $`|x_0`$ and $`|w_0`$, where $`\alpha `$ is defined by
$$\mathrm{sin}\alpha =\frac{1}{\sqrt{N}}$$
(9)
(for the case that $`U|0\rangle =|w_0\rangle `$) we may readily calculate the motion of $`|\psi _0\rangle =|w_0\rangle =\frac{1}{\sqrt{N}}\sum _x|x\rangle `$ towards $`|x_0\rangle `$ by iterated application of $`W`$. In $`𝒱`$ introduce the basis $`\{|x_0\rangle ,|x_0^{\perp }\rangle \}`$ where $`|x_0^{\perp }\rangle =\frac{1}{\sqrt{N-1}}\sum _{x\neq x_0}|x\rangle `$. Then from eq. (9) we get
$$|\psi _0\rangle =\mathrm{sin}\alpha |x_0\rangle +\mathrm{cos}\alpha |x_0^{\perp }\rangle $$
If we write
$$|\psi _{n+1}\rangle =Q|\psi _n\rangle \hspace{1em}\text{and}\hspace{1em}|\psi _n\rangle =\mathrm{sin}\alpha _n|x_0\rangle +\mathrm{cos}\alpha _n|x_0^{\perp }\rangle $$
then we immediately get from the interpretation of $`Q`$ as a rotation through $`2\alpha `$
$$\alpha _{n+1}=\alpha _n+2\alpha \hspace{1em}\text{i.e. }\alpha _n=(2n+1)\alpha $$
giving the solution of the iteration derived in . For a more general choice $`|\psi _0\rangle =|w\rangle `$ of starting state, the value of $`\alpha `$, given by $`\mathrm{sin}^{-1}|\langle x_0|w\rangle |`$, will generally be unknown but the application of $`W`$ still increments the angle successively by $`2\alpha `$ as above. The number of iterations is chosen to make $`\alpha _n`$ as close as possible to $`\pi /2`$.
Acknowledgement
After this note was completed it came to my attention that various other workers were aware of the significance of double reflections for Grover’s algorithm. However it appears not to be widely known and I present this note for its pedagogical value.
|
no-problem/9901/cs9901007.html
|
ar5iv
|
text
|
# Universal Object Oriented Languages and Computer Algebra
## 1 Introduction
Despite the existence of many interactive algebraic–numerical systems (MatLAB or MathCAD<sup>1</sup><sup>1</sup>1Either can use the MAPLE library) and systems of computer algebra (MACSYMA, REDUCE, MATHEMATICA, MAPLE, etc.), some kinds of algebraic problems involving numerous difficult formal manipulations cannot be simply rewritten for the computer. Systems of computer algebra are very useful if the class of problems under consideration already has a standard implementation. Examples are symbolic integration and differentiation, rational and integer number arithmetic with arbitrary precision, etc.
For the implementation of new classes of problems, most systems of computer algebra (CA) have their own programming languages. Sometimes these can be very universal and powerful languages like LISP. For example, many new packages were implemented in REDUCE/RLISP by users.
On the other hand, programming new methods in a system of CA can be even more difficult than using universal modern languages like Pascal, C++, etc., with powerful techniques like data types or object oriented programming (OOP).
In this paper, possible extensions of OOP are considered to make simple implementations in systems of CA possible. The paper continues a theme presented earlier in .
## 2 Object Oriented Programming and Computer Algebra
The development of big and difficult pieces of software is often associated with a proliferation of errors and a loss of clarity. The problem of the effectiveness of programming was one of the central points for the creation of structured languages like Pascal and of the new generation of object oriented languages like Java. LISP, traditional for CA, is very universal, but the structure of programs is too complicated.
OOP offers the possibility of an understandable representation of structures in CA. For example, it is possible to define a Module<sup>2</sup><sup>2</sup>2Here it is an additive Abelian group; the unit is called “zero”:
Module = Object;
operation $`+`$ (A,B : Module) : Module;
operation $`-`$ (A,B : Module) : Module;
operation $`-`$ (A : Module) : Module;
const Zero : Module;
end; { Module }
Then definition of Ring can exploit inheritance in OOP:
Ring = Object(Module)
operation $`*`$ (A,B : Ring) : Ring;
operation $`/`$ (A,B : Ring) : Ring;
function Inversion(A : Ring) : Ring;
const Unit : Ring;
end; { Ring }
It is the conception of an abstract basic type. Such types are algebra, module, group, field, ring, etc. The abstract types do not contain data. Other objects contain data. These are integer, real or complex numbers, quaternions, $`n\times n`$ matrices, etc.:
Quaternion = Object(Algebra)
Data : array \[0..3\] of Number;
function Norm : Number;
operation $`+`$ (A,B : Quaternion) : Quaternion;
operation $`*`$ (A,B : Quaternion) : Quaternion;
$`\mathrm{\dots }`$
end; { Quaternion }
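The same hierarchy can be written down in any object oriented language with operator overloading; as an illustration, a minimal Python analogue of the Quaternion declaration above might look as follows (only the norm, addition and quaternion multiplication are shown, and the class layout is a sketch rather than the implementation discussed in the text).

```python
import math

class Quaternion:
    """Element of the quaternion algebra, data = [w, x, y, z]."""
    def __init__(self, w=0.0, x=0.0, y=0.0, z=0.0):
        self.data = [w, x, y, z]

    def norm(self):
        return math.sqrt(sum(c * c for c in self.data))

    def __add__(self, other):
        return Quaternion(*(a + b for a, b in zip(self.data, other.data)))

    def __mul__(self, other):
        w1, x1, y1, z1 = self.data
        w2, x2, y2, z2 = other.data
        return Quaternion(w1*w2 - x1*x2 - y1*y2 - z1*z2,
                          w1*x2 + x1*w2 + y1*z2 - z1*y2,
                          w1*y2 - x1*z2 + y1*w2 + z1*x2,
                          w1*z2 + x1*y2 - y1*x2 + z1*w2)
```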
There is a difficulty because in CA it is necessary to work with analytical expressions associated with each object. We should have the possibility to write either $`z:=2i`$ or $`z:=x+iy`$. In the work , some extensions to the model of an object were already discussed. It is a synthesis of functional and object oriented programming.
Comments to attached copy of slide presentation
## Slide 1
A scientific problem can be solved in a variety of different ways. Represented here is its solution using: Standard systems of computer algebra, Universal programming languages, and traditional work with pen and a piece of paper.

The application of universal programming languages like C++, Pascal, etc. is still widespread not only for numerical calculation but for any scientific data manipulation, including computer algebra. Represented here is an approach using Object oriented programming for computer algebra.

An object oriented program for computer algebra consists of Specific structures and algorithms of computer algebra together with Standard structures of the object oriented language.

It is convenient to merge both types of structures in some Extension of the object oriented language for computer algebra.

Practical realization of such an idea could be either a standalone translator for the language or a Translator CA $`\rightarrow `$ OOP, i.e. a converter of a program for computer algebra into a program in one of the widespread object oriented languages (C++, JAVA, Delphi, etc.). The code generator produces a program in the OO language; standard algorithms of computer algebra can also be included in Libraries in a standard format accessible to the language. Next, the produced C++ program can be translated to an executable module; if it is a JAVA program, it can be converted to bytecode for use in WWW applets; etc. This is a possibility to make computer algebra more effective, universal, fast, “light”, and to include it as a part in other software projects.
## Slide 2
Here the good agreement of the principles of object oriented programming with the structures of standard algebraic models in mathematics is represented, i.e. the inheritance of methods and properties between such abstract objects as Semigroup, Group, Module, Ring, Division ring, Field, Algebra and such descendant “actual” objects as Real and Complex numbers or Polynomials.
## Slide 3
Here one specific property of mathematical language is emphasized that requires some extension of the model of an object in object oriented programming. As a simple example, the meaning of the same variable $`x`$ in different contexts is considered:

What does ‘x’ mean?

The first meaning is traditional for a standard programming language, where ‘x’ means the value of the variable $`x`$ and nothing else. But other meanings are also essential and used in mathematics and computer algebra. The second meaning is ‘x’ as some variable $`x`$ of a given type with a value that is unknown or not essential. The last, most difficult case is ‘x’ as the value of some function with (maybe partially) unknown arguments, or an expression like $`x=ay+2`$.

All three meanings are also used in pure functional programming languages, where functions with partially defined arguments are used and can be described formally as the tree represented on the slide.

The trees are simply described by using the usual structures in object oriented languages, produced by formal inversion of all arrows in the diagram above.

But the principles of type checking of an object oriented language should be extended for such a case to make the new structure (an object with pointers to the arguments and to the method of evaluation of the function) compatible with the initial type of the variable $`x`$. It requires an extension of object programming: compatibility of the 3 subtypes mentioned above.
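A minimal Python sketch of such a structure is given below: a variable object may hold a concrete value, stay symbolic, or be the application of a function to (possibly still symbolic) arguments. All class and method names here are illustrative only, not part of any existing system.

```python
class Var:
    """A variable that is either bound to a value or left symbolic."""
    def __init__(self, name, value=None):
        self.name, self.value = name, value

    def eval(self, env=None):
        env = env or {}
        if self.name in env:
            return env[self.name]
        if self.value is not None:
            return self.value
        return self                        # stays symbolic

class Apply:
    """'x' as the value of a function with (maybe partially) known arguments."""
    def __init__(self, func, *args):
        self.func, self.args = func, args

    def eval(self, env=None):
        vals = [a.eval(env) if hasattr(a, "eval") else a for a in self.args]
        if any(isinstance(v, (Var, Apply)) for v in vals):
            return Apply(self.func, *vals)  # not fully evaluable yet
        return self.func(*vals)

# x = a*y + 2 with 'a' known and 'y' still symbolic
x = Apply(lambda a, y: a * y + 2, Var("a", 3), Var("y"))
print(x.eval())            # remains an Apply node
print(x.eval({"y": 5}))    # 17
```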
|
no-problem/9901/astro-ph9901199.html
|
ar5iv
|
text
|
# Performance of the Stereoscopic System of the HEGRA Imaging Air Čerenkov Telescopes: Monte Carlo Simulations & Observations
## 1 Introduction
The HEGRA collaboration is close to completing an array of five imaging air Čerenkov telescopes (IACTs) located on the Roque de los Muchachos, Canary Island La Palma ($`28.8^{\circ }\mathrm{N},17.9^{\circ }`$). The telescope array, primarily designed for stereoscopic observations of the $`\gamma `$-radiation at energies above several hundred GeV, is formed by five identical IACTs - one at the center, and four others at the corners of a 100 m by 100 m square area. The multi-mirror reflector of each system telescope has an area of $`8.5\mathrm{m}^2`$. Thirty 60 cm diameter front aluminized and quartz coated spherical mirrors with focal length of about 5 m are independently installed on an almost spherical frame of an alt-azimuth mount. The FWHM of the point spread function of the reflector is better than 10 arcminutes. Each telescope is equipped with a 271-channel camera with a pixel size of about $`0.25^{\circ }`$ which results in the telescope’s effective field of view of $`4.3^{\circ }`$. The digitization of the PMT pulses is performed with a 120 MHz FADC system. The system trigger demands at least two neighboring pixels above the threshold in each of at least two telescopes. The present system of four telescopes has been taking data in the stereoscopic mode since 1996 .
The basic concept of the HEGRA IACT array is the stereoscopic approach based on the simultaneous detection of air showers by $`\geq 2`$ telescopes, which allows precise reconstruction of the shower parameters on an event-by-event basis, superior rejection of hadronic showers, and effective suppression of the background light of different origins (night sky background, local muons, etc.). The recent observations of the Crab Nebula and Mkn 501 by the IACT system strongly support the theoretical expectations concerning the features of the stereoscopic imaging technique .
Due to the lack of a calibrated very high energy $`\gamma `$-ray beam, detailed Monte Carlo simulations are usually used for the design studies as well as for performance studies of the imaging atmospheric Čerenkov experiments. For example, new data analysis methods are developed and are tested with Monte Carlo simulations before being applied to real data. The measurement of absolute $`\gamma `$-ray flux and energy spectra of the established $`\gamma `$-ray sources as well as the determination of upper limits for the quiet objects heavily rely on Monte Carlo predictions of the detector performance. In the past, the comparison of the characteristics of recorded cosmic ray (CR) events with the characteristics of the Monte Carlo simulated hadron-induced air showers used to be the most reliable way to test the predictive power of the Monte Carlo simulations (e.g., ). Once the telescope response to CR-induced air showers is well understood, the Monte Carlo predictions for the $`\gamma `$-ray-induced shower can be performed with a high degree of confidence. The situation changed dramatically with the observation of the high flaring activity of Mkn 501 in 1997. The observations with the HEGRA system of 4 IACTs ($`\sim 110`$ h) provided a large data base of $`\gamma `$-rays ($`\sim 30000`$) with unprecedented signal-to-noise ratio. Several observational key characteristics of $`\gamma `$-ray-induced air showers can be measured with small statistical uncertainties. The agreement of these key characteristics of $`\gamma `$-ray induced air showers in data and Monte Carlo simulations substantially strengthened the reliability of the simulations.
In this paper we discuss the standard data analysis procedure and the results of the detailed Monte Carlo simulations for the currently operating system of 4 IACTs as well as for the complete HEGRA array of 5 IACTs. Special attention has been paid on the proper simulation of the camera and electronics of the Čerenkov telescopes (see Section 2). Detailed comparisons of the detected cosmic ray and $`\gamma `$-ray-induced air showers with simulations have been made to understand the performance of the detector (Section 3). The basic characteristics of the HEGRA IACT system were calculated (Section 4). Finally, we discuss the resulting sensitivity of the HEGRA IACT array with respect to TeV $`\gamma `$-rays (Section 5) and summarize the basic recommendations for the use of the instrument.
## 2 Simulations
In the present calculations the complete Monte Carlo simulation procedure was divided into two steps. In the first step an extended library of simulated air showers, induced by different primaries, is produced. In the second step simulation of the response of the telescope camera pixels is applied to all generated events. The most time consuming step is evidently the first one, whereas the second step is relatively fast. This division allows us to apply the detector simulation procedure several times to the generated showers in order to tune the Monte Carlo simulations to the hardware status of the telescopes (e.g. trigger threshold, mirror adjustment etc.). Here we discuss the major features of this two-step simulation procedure.
### 2.1 Shower Simulation
The generation of the air showers has mainly been performed using the ALTAI Monte Carlo code . For the electromagnetic cascade this code has implemented an algorithm based on the analytical probability distributions of the electron (positron) transport in the multiple-scattering segments. This algorithm substantially reduces the computational time needed for simulation of a single shower. The proton-nuclei cascade in the atmosphere is simulated according to the radial scaling model (RSM) based on accelerator data . The air showers induced by the primary nuclei are simulated in a model of independent nucleon interactions for the fragmentation at the projectile-nucleus. The fragmentation of the colliding nuclei is processed according to the probabilities of different fragmentation channels. We studied the influence of the proton-nuclei cascade model on the observable shower parameters using the additional simulations with the CORSIKA code which has implemented the HDPM model for the simulation of the nucleus-nucleus interactions.
The shower simulation is carried out at the level of single Čerenkov photons. A certain fraction ($`k\simeq 0.2`$) of the Čerenkov photons of a shower which hit the telescope reflector are stored with full information, i.e. the arrival time, the arrival direction, and the impact coordinates in the reflector frame. By this means it is possible to apply the complete detector simulation procedure to all showers which have been processed in this way. The Monte Carlo library contains air showers induced by the primary $`\gamma `$-rays, protons, helium and other nuclei belonging to CNO, heavy and very heavy nuclei groups. The primary energy of the showers is randomly distributed inside 14 fixed energy bins covering the energy range from 100 GeV to 100 TeV. The events are used with weights according to some chosen primary spectra. The simulations have been performed for zenith angles $`0^{\circ },30^{\circ },45^{\circ }`$. For each type of primary particle and for each zenith angle approximately $`10^5`$ showers were simulated. The actual setup of HEGRA IACT telescopes was used in the simulations. The position of the shower axis in the observation plane was uniformly randomized over the area limited by the radius $`R_0`$ with respect to the central telescope. The radius $`R_0`$ was chosen between 250 and 450 m, increasing with shower energy and inclination angle. For the CR air showers the additional randomization over the solid angle around the telescope axis with the half opening angle of $`3.5^{\circ }`$ has been introduced in order to reproduce the isotropic distribution of the CR events over the camera field of view.
### 2.2 Detector Simulation
The detector simulation procedure accounts for all efficiencies involved in the process of the Čerenkov light propagation, which starts with the emission of a Čerenkov photon and ends with the digitization of the PMT signal (see for details ). The list of effects which are important in this respect contains: (i) the mirror reflectivity, modelled with the raytracing technique or in a phenomenological way using the measured functions of the light spot distortion in the camera focal plane; (ii) the light absorption in the plexiglass panel covering the camera; (iii) the acceptance of the funnels placed in front of the photomultipliers (PMTs); (iv) the photon-to-photoelectron conversion inside the PMTs (EMI 9083R), taking into account a measured single photoelectron spectrum. The overall efficiency of the photon-to-photoelectron conversion is about 0.1. By analogy with the experiment, the structure of the readout based on the 120 MHz FADC data acquisition and the multiple-telescope trigger scheme were implemented in the Monte Carlo simulations. Note that this procedure takes into account the arrival times of the Čerenkov light photons hitting the telescope reflector. The basic characteristics of the performance of the telescope hardware, for instance the number of camera pixels read out for the triggered and non-triggered telescopes, the single pixel rate and the single pixel trigger rate, the ratio between the first and second maximum pixel amplitudes in the image etc., have been directly compared between Monte Carlo and data .
## 3 Parameters of the Cosmic Ray Air Showers
The detection rate of the IACT system is mainly determined by the isotropic flux of primary cosmic ray protons and nuclei. The system is triggered if the shower produces a sufficiently high number of Čerenkov photons to trigger at least two telescopes. Since the number of Čerenkov photons produced by a shower is, to first order, proportional to the energy of the primary particle, the trigger condition determines the energy threshold of the IACT system. At larger shower axis distances the Čerenkov photon density decreases rapidly. With increasing energy of the primary particles more and more showers are able to fulfill the trigger criteria, even though their impact distances are far away from the telescope system. However, due to the steep primary energy spectrum of cosmic rays, $`dJ_{cr}/dE\propto E^{-2.75}`$, the contribution of the high energy showers ($`\mathrm{E}>20\mathrm{TeV}`$) to the total cosmic ray detection rate is rather small even though they are collected over a larger area.
### 3.1 Cosmic Ray Detection Rate
The basic characteristics of a single Čerenkov telescope at the hardware level can be calculated using the procedure described in . For a system of IACTs the only difference stems from the performance of the multi-telescope trigger. For this purpose, first the local trigger condition at each telescope, $`2\mathrm{n}\mathrm{n}/271>\mathrm{q}_0`$ (the signals in at least two neighboring pixels out of 271 exceed $`q_0`$), has to be fulfilled. The trigger threshold $`q_0`$ is measured in photoelectrons. The system trigger demands for each individual event coincident trigger signals from at least $`N`$ telescopes (N/5, N = 2, 3, 4, 5). Assuming the energy spectrum and the chemical composition of the primary cosmic rays , the cosmic ray detection rate can be computed with the following formula:
$$R_{cr}=\int _0^{\mathrm{\Omega }_0}d\mathrm{\Omega }\,2\pi \int _0^{R_0}R\,dR\int _0^{E_0}dE\,\frac{dJ_{cr}}{dE\,d\mathrm{\Omega }\,dS}\,P_{cr}(R,E,\theta )$$
(1)
where $`E`$ is the primary energy of a cosmic ray, $`R`$ is the impact distance of the shower induced by the cosmic ray, $`\theta `$ is the angle of the shower axis with respect to the telescope axis, and $`P_{cr}(R,E,\theta )`$ is the probability of the shower to trigger the telescope system.
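A minimal numerical sketch of how Eq. (1) can be evaluated is given below, assuming the trigger probability and the differential flux are available as callables (for instance interpolated from the Monte Carlo tables). The simple Riemann sum, the grid choices and the unit conversion are illustrative assumptions, not the procedure used for the numbers quoted in the tables.

```python
import numpy as np

def cr_rate(dj_dE, P_trig, E_grid, R_grid, theta_grid):
    """Riemann-sum evaluation of Eq. (1):
    R_cr = integral over dOmega, 2*pi*R dR and dE of (dJ_cr/dE dOmega dS) * P_cr(R, E, theta),
    with dOmega = 2*pi*sin(theta) dtheta for an azimuthally symmetric acceptance.
    dj_dE(E): differential CR flux [cm^-2 s^-1 sr^-1 TeV^-1];
    P_trig(R, E, theta): trigger probability; R in m, E in TeV, theta in rad."""
    rate = 0.0
    dE, dR, dth = np.diff(E_grid), np.diff(R_grid), np.diff(theta_grid)
    for i, E in enumerate(E_grid[:-1]):
        for j, R in enumerate(R_grid[:-1]):
            for k, th in enumerate(theta_grid[:-1]):
                dOmega = 2.0 * np.pi * np.sin(th) * dth[k]
                dS = 2.0 * np.pi * R * dR[j] * 1.0e4      # m^2 -> cm^2
                rate += dj_dE(E) * P_trig(R, E, th) * dS * dOmega * dE[i]
    return rate      # [Hz]
```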
The performance of the multi-telescope trigger of the HEGRA IACT system was discussed in detail in . In Table 1 we show the hardware detection rates for the different trigger thresholds as measured with the HEGRA telescope system together with the results of the Monte Carlo simulations. The measured and computed rates are in good agreement, which confirms that the trigger procedure has been modelled correctly. Note that the absolute accuracy of the measured rate is about 0.2 Hz. The Monte Carlo predicted rates rely strongly on the adopted chemical composition and fluxes of the cosmic rays; the accuracy of the predicted rates is at the level of 20 %. The calculations of the CR detection rates have also been done for the complete HEGRA system of 5 IACTs (see Table 2). One can see that the hardware detection rate for the complete array is expected to be approximately $`1.4`$ times larger than for the currently operating 4 IACT system.
### 3.2 Cosmic Ray Shower Images
The standard method to parametrize the Čerenkov light images was originally introduced in . It is based on the second moment analysis of the two-dimensional angular distribution of the Čerenkov light flashes, sampled with pixels (PMTs) of finite solid angular extension . An effective technique (supercuts) to extract the shower image from the measured matrix of pixel amplitudes was suggested in . This method provides an effective $`\gamma `$/hadron separation and has been extensively used by several single Čerenkov telescopes around the world. For a system of IACTs the performance of the method can be substantially improved, as will be shown below.
The distributions of the second-moment image parameters depend crucially on the amount of background light per pixel. A slight overestimate or underestimate of the background light content dramatically changes the distributions. The detailed detector simulation procedure is used to account for the exact background content in the camera pixels. In Figure 1 we show the distributions of the second-moment parameters of the cosmic ray images measured in the OFF region of the Mrk 501 observations (dark sky region) by the HEGRA system of 4 IACT telescopes together with the Monte Carlo simulations. It can be seen in Figure 1 that the Monte Carlo simulated images fit the measured images quite well.
In general, the distributions of the shape parameters of the Čerenkov light images depend on the model of the development of the proton-nuclei cascade in the atmosphere and also on the model of the nucleus-nucleus interaction. To study this effect a series of simulations has been performed using the ALTAI code and the CORSIKA code (with the HDPM model), both in the fragmentation model and under the assumption of a simple superposition model of nucleus-nucleus interactions. The results of the two simulation codes agree well, despite the very different models used for the simulation of the proton-nuclei component of the air showers. The distributions shown in Figure 1 have been produced assuming a certain chemical composition of the primary cosmic rays .
## 4 Imaging of the Gamma-Ray Air Showers
The hardware detection rate of the cosmic ray air showers dominates by at least two orders of magnitude over the detection rate of the $`\gamma `$-rays. To extract the $`\gamma `$-ray signal at sufficient confidence level, a special analysis is used to suppress significantly the rate of the cosmic ray events. This analysis is based on the application of several software cuts related to the orientation and the shape of the Čerenkov light images. Assuming an integral flux and an energy spectrum index of a $`\gamma `$-ray source, the detection rates of the $`\gamma `$-ray-induced air showers before and after application of the software cuts can be calculated.
### 4.1 Collection Areas and Detection Rates
One of the major advantages of the ground based Čerenkov technique in comparison with satellite observations is that the VHE $`\gamma `$-ray-induced air showers can be detected even when the shower core is at a large distance (about 100 m) from the telescope. That yields a high detection rate of the $`\gamma `$-ray-induced air showers, which are distributed over the large area of $`S_\gamma \simeq 10^9\mathrm{cm}^2`$ around the telescope site. The collection area for the $`\gamma `$-ray-induced air showers is calculated as
$$S_\gamma (E)=2\pi \int _0^{\infty }P_\gamma (E,r)\,r\,dr$$
(2)
where $`P_\gamma (E,r)`$ is the trigger efficiency for $`\gamma `$-ray-induced air showers of primary energy $`E`$ and impact distance $`r`$. The collection area $`S_\gamma (E)`$ for showers of primary energy E is mainly determined by the effective area of the telescope reflector $`S_{ph.e.}=S_m\chi _{ph.e.}`$, where $`S_m`$ is the total mirror area and $`\chi _{ph.e.}`$ is the efficiency of the photon-to-photoelectron conversion of the camera channels. Given a fixed mirror area, the maximum collection area is achieved by reducing the trigger threshold of the telescopes. The latter is limited at the lower end by the fluctuations of the background light in each camera pixel.
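For completeness, a short sketch of the collection area integral of Eq. (2) is given below; the trigger efficiency `P_gamma(E, r)` is assumed to be available (e.g. tabulated from the simulations), and the radial grid and the upper limit of 450 m are illustrative choices.

```python
import numpy as np

def collection_area(P_gamma, E, r_max=450.0, n_r=400):
    """Eq. (2): S_gamma(E) = 2*pi * integral of P_gamma(E, r) * r dr, on a finite radial grid.
    P_gamma(E, r): trigger probability of a gamma-ray shower of energy E [TeV]
    at core distance r [m]; the returned area is in m^2."""
    r = np.linspace(0.0, r_max, n_r)
    p = np.array([P_gamma(E, ri) for ri in r])
    return 2.0 * np.pi * np.trapz(p * r, r)
```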
In Figure 2 the collection areas for the complete HEGRA system of IACTs are shown for the conventional trigger criteria. The strong increase of the collection area in the energy range below $`1\mathrm{TeV}`$ changes to a logarithmic growth at higher energy. The behaviour of the collection areas is determined by the shape of the lateral distribution of the Čerenkov light pool at the observation level. The density of Čerenkov light photons in the plateau region, for impact distances up to 125 m (for air showers observed at angles close to zenith), is roughly proportional to the primary shower energy, whereas beyond 125 m the Čerenkov light density decreases rapidly. In observations at large zenith angles (30 degrees and larger) the collection area decreases at low energies (E below about 3 TeV) but increases at higher energies (E$`>`$5 TeV). The reason is that air showers at larger zenith angles develop higher in the atmosphere. The same amount of Čerenkov light (neglecting the increase in absorption) is produced by a shower, but it is spread over a larger area at the observation level, which decreases the density of Čerenkov photons. Thus low energy air showers at large zenith angles cannot trigger the telescopes, but at the same time high energy air showers have a much larger collection area due to the large size of the Čerenkov light pool at the observation level.
The HEGRA system of 5 imaging air Čerenkov telescopes has been designed for the effective observation of $`\gamma `$-rays with primary energies of several hundred GeV in the stereoscopic mode, with telescopes of relatively small mirror area, 8.5 $`\mathrm{m}^2`$. The system trigger, based on the simultaneous detection of the shower images in several telescopes (at least 2), lowers the trigger threshold and consequently the energy threshold of the IACT system. Usually, the energy threshold is defined as the energy at which the detection rate of the observed $`\gamma `$-ray showers reaches its maximum. For convenience we use in the following rate calculations a $`\gamma `$-ray spectrum according to:
$$dJ_\gamma /dE=AE^{-\alpha _\gamma },\qquad J_\gamma (>1\,\mathrm{TeV})=10^{-11}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}.$$
(3)
For a certain spectrum index $`\alpha _\gamma `$ the $`\gamma `$-ray detection rate is calculated as
$$R_\gamma =\int _0^{\infty }\left(\frac{dR_\gamma }{dE}\right)dE=\int _0^{\infty }\left(\frac{dJ_\gamma }{dE}\right)S_\gamma (E)\,dE$$
(4)
where $`(dR_\gamma /dE)`$ \[$`\mathrm{Hz}\,\mathrm{TeV}^{-1}`$\] is the differential $`\gamma `$-ray detection rate.
Under the assumption of a differential index, $`\alpha _\gamma `$, of the $`\gamma `$-ray energy spectrum one can calculate the differential $`\gamma `$-ray detection rate $`(dR_\gamma /dE)`$ (see Figure 3). The peak of the differential detection rate shifts slightly to higher energies with increasing trigger multiplicity, because low energy events cannot effectively trigger several telescopes. The integral detection rates of the $`\gamma `$-ray-induced air showers for different trigger multiplicities are presented in Table 3. Note that in the case of a steep $`\gamma `$-ray spectrum (e.g. $`\alpha _\gamma `$ = 3.0) the $`\gamma `$-ray detection rate for the 2/5 system trigger is much higher than for operation in the 3/5 trigger mode. It is seen from Figure 3 that for the trigger condition 2/5, 2nn/271$`>q_0`$ ph.e., the differential detection rate peaks at an energy of approximately 500 GeV, which is identified as the energy threshold of the instrument. The energy threshold for observations at large zenith angles increases significantly (see Figure 4). At the same time, the rate of high energy events detected at large zenith angles can even exceed the corresponding rate in observations at the nominal zenith angles, due to the significant increase of the collection area with increasing zenith angle.
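The relation between the collection area, the assumed source spectrum and the energy threshold can be made explicit in a few lines of code. The sketch below combines Eqs. (3) and (4) and identifies the threshold with the peak of the differential rate; the collection-area callable, the energy grid and the spectral normalization are assumptions for illustration only.

```python
import numpy as np

def gamma_rate(S_of_E, alpha=2.6, J_above_1TeV=1.0e-11):
    """Differential and integral gamma-ray detection rates, Eqs. (3)-(4).
    S_of_E(E): collection area [cm^2] at energy E [TeV]; the power-law spectrum
    is normalized to the integral flux J(>1 TeV) = J_above_1TeV [cm^-2 s^-1]."""
    E = np.logspace(-1, 2, 200)               # 0.1 - 100 TeV
    A = J_above_1TeV * (alpha - 1.0)          # so that the integral above 1 TeV equals J(>1 TeV)
    dJdE = A * E ** (-alpha)
    dRdE = dJdE * np.array([S_of_E(e) for e in E])
    R = np.trapz(dRdE, E)                     # integral rate [Hz]
    E_thr = E[np.argmax(dRdE)]                # threshold = peak of the differential rate
    return dRdE, R, E_thr
```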
Our Monte Carlo studies show that $`\gamma `$-ray air showers detected at large impact distances from the telescopes cause some difficulties for a reliable selection of $`\gamma `$-ray events. For a plane-parallel $`\gamma `$-ray flux the Čerenkov light image shifts towards the camera edge at large impact distances. These images are partially cut by the camera edge and cannot be used for an accurate reconstruction of the shower parameters (orientation of the shower axis in space, shower core location etc.). Thus, for a better evaluation of the energy spectrum, it is useful to set a restriction on the reconstructed impact radius with respect to the center of the system. This restriction mainly influences the collection areas at high energies. Above a certain energy the effective collection area is then determined simply by the geometrical area around the IACT system. In the case of a steep spectrum, the restriction on the impact distance, $`r<200`$ m, does not significantly change the detection rate. For the case of a flat energy spectrum, and especially for observations at large zenith angles, it is an advantage to collect $`\gamma `$-ray-induced air showers at large distances from the center of the IACT system.
### 4.2 Reconstruction of Shower Arrival Direction
The simultaneous observation of air showers with two or more imaging air Čerenkov telescopes offers the possibility to reconstruct the orientation of the shower axis with respect to the telescope axis . The general approach is based on the superposition of the several images in one common focal plane in order to derive the intersection point of the major axes of the ellipsoid-like images. This intersection point determines the shower direction . If the Čerenkov telescopes are pointed towards the object, the reconstructed source position in the camera field of view for the $`\gamma `$-ray-induced air showers has to be in the center of the camera focal plane. The currently operating HEGRA system of IACTs performs so-called wobble mode observations. The position of the source in the camera focal plane is offset by $`0.5^{}`$ from the camera center (in declination) and consequently rotates depending on the azimuth angle. This approach makes it possible to perform continuous ON source observations, whereas the OFF region can be chosen at a $`1^{}`$ offset from the source position . In the present simulations the Čerenkov light images were shifted by $`0.5^{}`$ from the center of the focal plane with the correlated randomization over the azimuth.
The difference between the true and the reconstructed position of the $`\gamma `$-ray source in the camera field of view, $`\mathrm{\Theta }`$, is a measure of the angular resolution of the system of IACTs. The distributions of $`\mathrm{\Theta }^2`$, both for the simulated and the observed $`\gamma `$-ray showers from Mrk 501, are shown in Figure 5. One can see from Figure 5 that the two distributions match and that both show a prominent peak around the source position. Our Monte Carlo studies show that the tail of the distribution at large $`\mathrm{\Theta }^2`$ is due to air showers with core positions close to the line connecting two or three telescopes. In such a case the images in the telescopes are almost parallel to each other and the reconstruction procedure leads to a significant error in the evaluated shower direction because of the small intersection angle. Note that in the reconstruction procedure we require that at least 3 telescopes are triggered (in addition we require a minimum image size of 40 photoelectrons).
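The stereoscopic direction reconstruction reduces, in its simplest form, to intersecting the major axes of the images superimposed in a common focal plane. The following sketch finds the least-squares intersection point of several axes; it ignores the size- and angle-dependent weighting used in the real analysis, so it should be read as a geometric illustration only.

```python
import numpy as np

def intersect_axes(centroids, directions):
    """Least-squares intersection of image major axes in the common camera plane.
    centroids[i]: image centroid (2,) in camera coordinates [deg];
    directions[i]: vector along the major axis of image i.
    Returns the point minimizing the summed squared distances to all axes."""
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, d in zip(centroids, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)        # projector orthogonal to the axis
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# toy usage: three images whose axes all point at a source at (0.1, -0.2) deg
src = np.array([0.1, -0.2])
cents = [np.array([0.8, 0.3]), np.array([-0.5, 0.9]), np.array([0.4, -1.0])]
print(intersect_axes(cents, [src - c for c in cents]))   # ~ [0.1, -0.2]
```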
The angular resolution of the system of IACTs can be characterized quantitatively by the acceptance of the $`\gamma `$-ray-induced air showers, $`\kappa _\gamma ^{dir}`$, after the application of a fixed angular cut on $`\mathrm{\Theta }^2`$. In Table 4 the acceptance, $`\kappa _\gamma ^{dir}`$, of the $`\gamma `$-rays for three angular cuts, $`\mathrm{\Theta }^2\le 0.03`$, 0.05 and 0.1 $`[\mathrm{deg}^2]`$, is shown for simulations at zenith angles of 0, 30 and 45 degrees. In general, for observations at large zenith angles the shower is far from the observer; the Čerenkov light images then have a smaller angular size, lie closer to the camera center and have an almost circular shape. These changes in the image topology lead to larger errors in the reconstruction of the shower direction. The comparison of the angular resolution for the two trigger multiplicities, 2/5 and 3/5, shows some increase in the $`\gamma `$-ray acceptance for the same angular cuts at the higher trigger multiplicity, 3/5. However, because of the energy dependence of the angular resolution, the two system triggers have to be considered as complementary in observations of $`\gamma `$-ray sources with very different spectral features.
The angular resolution depends noticeably on the impact distance of the shower core from the center of the IACT system. For impact distances $`r125`$ m the angular resolution slightly improves with increasing energy of the $`\gamma `$-ray showers, because the images on average contain more light and the image orientation is better determined. The angular resolution, $`\delta \mathrm{\Theta }`$ (one standard deviation of the Gaussian distribution in $`\theta `$), for $``$1 TeV $`\gamma `$-ray showers is about $`0.11^{}`$ and $`0.09^{}`$ at $`20`$ and 200 m, respectively. Beyond $`120`$ m the angular resolution degrades at higher energies. This can be explained by the distortion of the images by the edge of the limited camera field of view. The impact distances for high energy $`\gamma `$-rays correspond to large shifts of the images from the center of the camera focal plane (the high energy air showers penetrate deeper into the atmosphere). For $`10`$ TeV $`\gamma `$-ray showers detected at impact distances of 200 m the angular resolution is $`0.14^{}`$.
The good angular resolution of the HEGRA system of IACTs is a very effective tool for the suppression of the isotropic cosmic ray background. The acceptances of cosmic ray air showers after application of an angular cut, $`\mathrm{\Theta }^2<\mathrm{\Theta }_0^2`$, evaluated from the data taken with the currently operating HEGRA system of IACTs (OFF sample of Mrk 501) as well as from the Monte Carlo simulations, are shown in Table 5. It is seen from Table 5 that after applying the angular cut of $`\mathrm{\Theta }_0^2`$ = 0.03 $`[\mathrm{deg}^2]`$ the cosmic ray background rejection factor is as high as approximately 200. One can also see from Table 5 that the Monte Carlo simulations reproduce quite well the measured contamination of cosmic ray air showers after application of an angular cut.
### 4.3 Localization of Shower Core
The position of the shower core at the observation level can be measured by the system of IACTs for each individual event. The impact distances from the shower core to the system telescopes can then be evaluated. The reconstruction algorithm is based on the orientation of the Čerenkov light images in the several telescopes which have been triggered. We use a purely geometric reconstruction which does not rely on the image shape. The accuracy of the shower core reconstruction is limited by the errors in the determination of the image orientation. As discussed before, the change of the Čerenkov light image topology leads to an increase of the error in the core position with increasing zenith angle. The accuracies of the shower core reconstruction for different primary energies and impact distances are summarized in Table 6. Observing a $`\gamma `$-ray source at zenith angles below 45 degrees and restricting the impact distances from the telescope to the shower core to within 200 m, the average accuracy is about 20 m.
The reconstructed impact distance is used for calculating the image shape parameters scaled with impact distance and image amplitude. These parameters are used for the cosmic ray rejection. For that purpose an accuracy of 20 m is quite sufficient, because the shape of the Čerenkov light images does not change significantly within 20 m. Furthermore, the value of the reconstructed impact distance is needed for the evaluation of the shower energy. The calculations show that even with an accuracy of the shower core localization of around 20 m the energy resolution for $`\gamma `$-ray-induced air showers is better than 20%.
### 4.4 Measurement of Shower Energy
The procedure of energy reconstruction for $`\gamma `$-ray-induced air showers observed with a single imaging air Čerenkov telescope is rather complicated due to the lack of a direct measurement of the distance from the telescope to the shower core. Several indirect methods have been invented in order to estimate the impact distance using the centroid shift in the camera field of view as well as the image shape .
For the system of IACTs the measurement of the impact distance is straightforward and is not related to the image shape. That improves the accuracy in the energy reconstruction compared with a single telescope. The algorithms of the energy reconstruction for each individual event as well as for the spectrum evaluation with the system of IACTs have been discussed in . This method of energy reconstruction was investigated by means of Monte Carlo simulations for the HEGRA system of IACTs.
If one can measure the distances from the shower core to the telescopes ($`r_i,i=1,\dots ,N`$, where $`N`$ is the number of triggered telescopes), the primary energy of the air shower can be evaluated using the inverse of the dependence of the image size (amplitude) on shower energy and impact distance
$$E_i=F(S_i,r_i,\theta )$$
(5)
where $`S_i`$ is the image size (total number of photoelectrons in the image) and $`\theta `$ is the zenith angle. The final energy estimate can be constructed by combining the estimates from the several images, $`E_i,i=1,\dots ,N`$, as $`E_0=\sum _iw_iE_i`$, where $`w_i`$ is an energy and core distance dependent weight ($`\sum _iw_i=1`$). Note that the energy resolution depends on the accuracy of determining the core distance and is strongly influenced by the fluctuations of the image amplitude.
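A compact sketch of this per-event energy estimate is given below; the lookup function standing in for $`F(S_i,r_i,\theta )`$ and the default equal weights are placeholders for the Monte Carlo tables and the energy and core distance dependent weights of the actual analysis.

```python
import numpy as np

def estimate_energy(sizes, impacts, lookup, weights=None):
    """Eq. (5): E_i = F(S_i, r_i, theta) per telescope, combined as E_0 = sum_i w_i E_i.
    sizes[i]: image size [ph.e.] in telescope i; impacts[i]: core distance [m];
    lookup(S, r): inverse of the mean image-size vs (E, r) relation (from Monte Carlo).
    Weights default to equal values and are normalized to sum to one."""
    E_i = np.array([lookup(S, r) for S, r in zip(sizes, impacts)])
    w = np.ones_like(E_i) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return float(np.sum(w * E_i))
```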
In Table 7 we show the estimates of the energy resolution in different primary energy ranges as well as for simulations at different zenith angles. Note that in the case of wobble mode observations the images of high energy $`\gamma `$-ray-induced air showers (E around 10 TeV and higher) are very often cut by the camera edge. This leads to a distortion of the impact distance reconstruction and consequently of the reconstruction of the shower energy. To remove this effect, large impact distances, $`r\ge 200`$ m, are usually excluded from consideration. For inclined showers (at large zenith angles) the Čerenkov light images are closer to the center of the field of view, because these showers develop in the atmosphere very far from the telescope and the corresponding images shrink towards the camera center. Thus the problem of the camera edge is less significant for inclined showers, at least in the energy range up to 20 TeV. The data shown in Table 7 demonstrate that over the whole energy range, for $`\gamma `$-ray-induced air showers observed at zenith angles up to 45 degrees, the estimated energy resolution of the HEGRA system of IACTs is around 20%.
### 4.5 Rejection of CR background using the image shape
Due to the difference in the nature of $`\gamma `$-ray- and proton/nuclei-induced air showers, the corresponding Čerenkov light images are also very different in shape . In the standard second-moment approach these differences can be quantified using image shape parameters such as Width, Length etc. The $`\gamma `$-ray-induced air showers have, on average, images of smaller angular size. The selection of the $`\gamma `$-rays can be done by applying the standard parameter cuts Width $`\le w_0`$, Length $`\le l_0`$ etc., where $`w_0,l_0`$ are the boundaries which limit the domain of most of the $`\gamma `$-ray-induced showers (about 50%).
When the minimum amplitude of the pixels used for the image parametrization is fixed, the image shape parameters increase with the total number of photoelectrons in the image. For high energy air showers more pixels are involved in the image parametrization procedure, which is why the angular size of the image increases with the shower energy. The application of fixed image shape cuts keeps most of the low energy $`\gamma `$-rays (which are close to the energy threshold) but significantly reduces the content of high energy $`\gamma `$-rays. For the $`\gamma `$-ray spectrum evaluation this is undesirable because it decreases the statistics of the observed high energy $`\gamma `$-rays. To avoid this problem, energy dependent cuts have to be used. In addition, for a fixed primary energy of the $`\gamma `$-ray-induced air showers, the angular size of the image also depends on the distance from the telescope to the shower core, due to the decrease of the image amplitude with impact distance. Thus for a single telescope, by restricting the position of the image to a certain range from the camera center, one can keep only events at core distances $`r\le 120`$ m. The radial dependence of the angular size of the image is very small in this particular range of impact distances. However, such a restriction significantly reduces the number of detected $`\gamma `$-rays. For the system of IACTs both the energy and the radial dependence of the image size can be accounted for in detail, because the IACT system measures the shower core position and consequently the energy of each individual event.
The images of the $`\gamma `$-ray-induced air showers can be sorted into several bins in the measured distance from the shower core, $`\mathrm{\Delta }r_i,i=1,\dots ,n`$, and in the image size, $`\mathrm{\Delta }\mathrm{log}(S_j),j=1,\dots ,m`$. The averaged image shape parameters, $`<w>_{ij},<l>_{ij}`$, are then calculated for these bins. For an individual event the shower core position and the impact distances from the shower core to the triggered telescopes can be reconstructed. Instead of the usual Width ($`w`$) of the image, the scaled Width ($`\stackrel{~}{w}`$) is calculated for each telescope, and after that the mean scaled Width parameter is defined for this event:
$$<\stackrel{~}{w}>=\frac{1}{N}\sum _{k=1}^{N}w^k/<w>_{ij}^k$$
(6)
where $`N`$ is the number of triggered telescopes. The rejection of cosmic ray background events is performed by applying a cut on the mean scaled Width, $`<\stackrel{~}{w}><\stackrel{~}{w}_0>`$ . Measured distributions of the mean scaled Width for $`\gamma `$-ray-induced air showers observed at zenith angles of about 10, 30 and 45 degrees are shown in Figure 6. The distributions peak at 1.0 and are very narrow, because after the scaling procedure the RMS of the distributions is determined only by the pure image fluctuations and no longer depends on the image amplitude and the shower impact distance. The multifold cut on the image Width is thus replaced by a cut on the single parameter mean scaled Width, which works effectively against the cosmic ray background events. For a single telescope (i.e. one view of the shower) the probability that a cosmic ray shower gives an image very similar to that of a $`\gamma `$-ray-induced shower can be as high as 0.1. In observations with several telescopes (in different projections) this probability is reduced down to $`10^{-2}`$ (see Table 8). Note that in addition to the mean scaled Width cut, other shape parameters, such as Length, could be used. However, the simulations show that after the mean scaled Width cut the cosmic ray background rejection is already very strong, and the application of another shape cut does not improve the signal-to-noise ratio but simply reduces the number of $`\gamma `$-rays. The acceptances of the cosmic ray showers after the application of a mean scaled Width cut can be compared for Monte Carlo simulations and observed cosmic ray events. The data shown in Table 8 demonstrate a good agreement between the simulations and the data.
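The computation of the mean scaled Width from Eq. (6) and the corresponding cut can be written in a few lines; the table of expected Widths, here a callable `mean_width_table(r, log10_size)`, and the cut value are assumptions standing in for the binned Monte Carlo expectation values.

```python
import numpy as np

def mean_scaled_width(widths, sizes, impacts, mean_width_table):
    """Eq. (6): <w~> = (1/N) sum_k w^k / <w>_ij, where <w>_ij is the expected Width
    for the (core-distance, log-size) bin of telescope k, taken from a gamma-ray
    Monte Carlo table mean_width_table(r, log10_size)."""
    scaled = [w / mean_width_table(r, np.log10(S))
              for w, S, r in zip(widths, sizes, impacts)]
    return float(np.mean(scaled))

def is_gamma_like(msw, cut=1.2):
    """Accept the event if the mean scaled Width lies below the chosen cut
    (tight ~1.0, loose ~1.4 in the text)."""
    return msw < cut
```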
There are two possible strategies for the choice of cuts. The first one is to apply the so-called tight cuts ($`<\stackrel{~}{w}_0>=1.0`$), which dramatically suppress the CR background but at the same time also noticeably reduce the content of $`\gamma `$-rays (see Table 8). This approach works best when searching for a $`\gamma `$-ray signal from source candidates on a short time scale. For spectrum studies it is more effective to apply the loose cuts ($`<\stackrel{~}{w}_0>=1.4`$), which keep most of the observed $`\gamma `$-rays at high energies and do not show strongly energy dependent efficiencies (in contrast to the tight cuts). It is important to note that the efficiency of the cosmic ray rejection improves by approximately a factor of 2 for 3-telescope events compared to 2-fold coincidence events, while the decrease of the $`\gamma `$-ray acceptance is only about 20%. Thus the use of 3-fold events seems to be preferable from the point of view of the cosmic ray rejection using the image shape and orientation, as well as for an accurate impact distance and energy reconstruction. However, in the specific case of $`\gamma `$-ray fluxes characterized by very steep energy spectra, the analysis of 2-fold coincidences could be applied in order to increase the statistics of $`\gamma `$-rays.
## 5 Sensitivity to $`\gamma `$-ray fluxes
Using the calculated characteristics of the IACT system performance, one can estimate the sensitivity of the instrument in $`\gamma `$-ray observations. For the two complementary approaches to the cosmic ray background rejection, based on the application of the tight and the loose cuts, the acceptances of the $`\gamma `$-rays and cosmic rays are shown in Table 9. For an integral flux of $`\gamma `$-rays at the level of $`J_\gamma (>1\mathrm{TeV})=10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ the expected number of detected $`\gamma `$-rays per hour is about 28 and 56 for the tight and the loose cuts, respectively. For the tight cuts the cosmic rays are highly suppressed and the expected cosmic ray rate is around 3 particles per hour. Thus, stereoscopic observations of $`\gamma `$-ray sources with an integral flux $`\mathrm{J}_\gamma (>1\mathrm{TeV})`$ at the level of $`10^{-11}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ with the HEGRA array of IACTs are essentially background free (the rate of the detected $`\gamma `$-rays exceeds the rate of the background events by more than a factor of 10). The expected sensitivity of the 5 IACT system for the $`\gamma `$-ray flux stated above after application of the loose cuts is about 5 $`\sigma `$ per hour of observation. A 5$`\sigma `$ detection of $`\gamma `$-ray fluxes from point sources at the level of $`J_\gamma (>1\mathrm{TeV})=10^{-12}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ is possible in 20 hours of observations. Note that after application of the tight cuts the number of detected cosmic ray electrons (about 0.4 particles per hour) becomes comparable to the cosmic ray contamination, and a further suppression of the background is limited by the electron content.
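The scaling of the quoted sensitivity with observation time can be reproduced with the simplest significance estimate, the number of gammas divided by the square root of the number of background events. The sketch below uses the tight-cut numbers quoted in the text (about 28 gammas per hour on roughly 3 cosmic rays plus 0.4 electrons per hour); the combination of these numbers and the naive significance formula, which ignores the OFF-measurement uncertainty, are illustrative assumptions.

```python
import numpy as np

def significance(rate_gamma_per_h, rate_bg_per_h, hours):
    """Rough significance N_gamma / sqrt(N_bg); the background is the residual
    CR (+ electron) rate after all cuts. This ignores the uncertainty of the
    OFF measurement and small-number statistics, so it is only indicative."""
    return rate_gamma_per_h * hours / np.sqrt(rate_bg_per_h * hours)

# tight cuts, J(>1 TeV) = 1e-11 cm^-2 s^-1: ~28 gammas/h on ~3.4 background events/h
print(significance(28.0, 3.4, 1.0))      # well above 5 sigma in one hour
# a ten times weaker source, same cuts: of order 10-20 h are needed for 5 sigma
print(significance(2.8, 3.4, 20.0))      # ~ 7 sigma after 20 h with this naive formula
```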
## 6 Summary
The HEGRA system of imaging air Čerenkov telescopes is the first instrument operating in the stereoscopic observation mode. The HEGRA system of 5 telescopes with the relatively small mirror area of 8.5 $`\mathrm{m}^2`$ reaches a rather low energy threshold of about 500 GeV. The large camera field of view ($`4.3`$ degree) allows observations of point $`\gamma `$-ray sources in the wobble mode, taking the ON and OFF events at the same time and thus increasing the available observation time by a factor of two. The use of several images gives an angular resolution better than 0.1 degree and yields a cosmic ray rejection, using the shower orientation, of up to a factor of 200. The geometrical reconstruction of the shower impact position with good accuracy (about 20 m) improves the energy resolution and makes it possible to apply image shape cuts which are independent of the shower energy. The energy reconstruction procedure for the telescope system is straightforward and does not rely on the image shape. The energy resolution of the HEGRA system in the dynamic energy range of 0.5 TeV - 30 TeV is better than 20 %. In order to avoid uncertainties in the evaluation of the energy spectrum of the detected $`\gamma `$-rays, the effective collection area can be taken simply as the geometrical area around the center of the telescope system, using a restriction on the reconstructed impact parameter (e.g., 200 m). The simultaneous registration of several Čerenkov light images of an individual air shower makes it possible to apply a correlated analysis of the image shape (mean scaled Width) and to improve the cosmic ray rejection substantially, up to a factor of about 100. Note that the data analysis based on three coincident views appears to be preferable, providing better angular and energy resolution as well as a better cosmic ray rejection using the image shape. The three-fold coincidence events are optimal for the stereo imaging of the TeV $`\gamma `$-ray air showers. In the search mode the tight software cuts on shape and orientation show the better performance, approaching the case of a background free detection of the $`\gamma `$-ray sources. However, for energy spectrum studies one can use the loose cuts, providing high $`\gamma `$-ray statistics.
The HEGRA system of IACTs can be considered a successful prototype for the future low energy (about 100 GeV) arrays such as HESS and VERITAS. In general, the current experience of the HEGRA operation can be used for the design of the forthcoming arrays of IACTs.
## 7 Acknowledgements
The support of the German Ministry for Research and Technology BMBF and of the Spanish Research Council CYCIT is gratefully acknowledged. We thank the Instituto de Astrofisica de Canarias for the use of the site and for providing excellent working conditions. We gratefully acknowledge the technical support staff of Heidelberg, Kiel, Munich, and Yerevan.
We thank the anonymous referee for suggesting the improvements in the manuscript.
# Spin flips and quantum information for anti-parallel spins
## I Introduction
Quantum information differs from classical information because it obeys the superposition principle and because it can be entangled. The huge potential of quantum information processing has renewed the interest in the foundations of two of the major scientific theories of the twentieth century: information theory and quantum mechanics .
Despite the very intensive recent work on quantum information, surprising effects are continuously being discovered. Here we describe yet another surprise. In a nutshell, we ask whether quantum information is better stored in two parallel spins or in two anti-parallel ones.
In more detail, our paper is centered around the following problem of quantum communication. Suppose Alice wants to communicate to Bob a space direction $`\stackrel{}{n}`$. She may do that by one of the following two strategies. In the first case, Alice sends Bob two spin 1/2 particles polarized along $`\stackrel{}{n}`$, i.e. $`|\stackrel{}{n},\stackrel{}{n}>`$. When Bob receives the spins, he performs some measurement on them and then guesses a direction $`\stackrel{}{n}_g`$ which has to be as close as possible to the true direction $`\stackrel{}{n}`$. The second strategy is almost identical to the first, with the difference that Alice sends $`|\stackrel{}{n},-\stackrel{}{n}>`$, i.e. the first spin is polarized along $`\stackrel{}{n}`$ but the second one is polarized in the opposite direction. The question is whether these two strategies are equally good or, if not, which is better.
To put things in a better perspective, consider first a simpler problem. Suppose Alice wants to communicate to Bob a space direction $`\stackrel{}{n}`$ and she may do that by one of the following two strategies. In the first case, Alice sends Bob a single spin 1/2 particle polarized along $`\stackrel{}{n}`$, i.e. $`|\stackrel{}{n}>`$. The second strategy is identical to the first, with the difference that when Alice wants to communicate to Bob the direction $`\stackrel{}{n}`$ she sends him a single spin 1/2 particle polarized in the opposite direction, i.e. $`|-\stackrel{}{n}>`$. Which of these two strategies is better?
If the particles were classical spins then, obviously, both strategies would be equally good, as an arrow defines equally well both the direction in which it points and the opposite direction. Is the quantum situation the same?
First, we should note that in general, by sending a single spin 1/2 particle, Alice cannot communicate to Bob the direction $`\stackrel{}{n}`$ with absolute precision. Nevertheless, it is still obviously true that the two strategies are equally good. Indeed, all Bob has to do in the second case is to perform exactly the same measurements as he would do in the first case, only that when his results are such that in the first case he would guess $`\stackrel{}{n}_g`$, in the second case he guesses $`-\stackrel{}{n}_g`$.
We are thus tempted to conjecture that:
Conjecture: Similar to the classical case, for the purpose of defining a direction $`\stackrel{}{n}`$, a quantum mechanical spin polarized along $`\stackrel{}{n}`$ is as good as a spin polarized in the opposite direction. In particular, the two two-spin states $`|\stackrel{}{n},\stackrel{}{n}>`$ and $`|\stackrel{}{n},-\stackrel{}{n}>`$ are equally good.
Surprisingly however, as we’ll show here, this conjecture is not true.
As we will show, the main reason behind this effect is, once more, entanglement. Here entanglement does not refer to the two spins - whether parallel or anti-parallel they are always in a direct product state - but to the eigenvectors of the optimal measurement. (Indeed, as Massar and Popescu demonstrated , the optimal measurement on parallel spins requires entanglement.)
This result has also led us to many new questions. For example we were led to consider a universal quantum spin flip (UQSF) machine a machine which flips an unknown spin as well as possible, and an anti-cloning machine, i.e. a machine which takes as input N parallel spins, polarized in an unknown direction, and generates some supplementary spins polarized in the opposite direction. We also point out the relation between spin flip and partial transpose.
## II Optimal fidelity for parallel and anti-parallel spins
In the previous section we have presented our main problem as a quantum communication problem. We can present it also in a different way, which brings it closer to a well-known problem. Indeed, we can completely dispense with Alice, and consider that there is a source which emits pairs of parallel (or anti-parallel) spins, and Bob’s task is to identify the state as well as possible.
For concreteness, let us define Bob’s measure of success as the fidelity,
$$F=\int d\stackrel{}{n}\sum _gP(g|\stackrel{}{n})\frac{1+\stackrel{}{n}\cdot \stackrel{}{n}_g}{2}$$
(1)
where $`\stackrel{}{n}\cdot \stackrel{}{n}_g`$ is the scalar product between the true and the guessed directions, the integral is over the different directions $`\stackrel{}{n}`$ and $`d\stackrel{}{n}`$ represents the a priori probability that a state associated to the direction $`\stackrel{}{n}`$, i.e. $`|\stackrel{}{n},\stackrel{}{n}>`$ or $`|\stackrel{}{n},-\stackrel{}{n}>`$ respectively, is emitted by the source; $`P(g|\stackrel{}{n})`$ is the probability of guessing $`\stackrel{}{n}_g`$ when the true direction is $`\stackrel{}{n}`$. In other words, for each trial Bob gets a score which is a (linear) function of the scalar product between the true and the guessed direction, and the final score is the average of the individual scores.
When the different directions $`\stackrel{}{n}`$ are randomly and uniformly distributed over the unit sphere, an optimal measurement for pairs of parallel spins $`\psi =|\stackrel{}{n},\stackrel{}{n}>`$ has been found by Massar and Popescu . Bob has to measure an operator A whose eigenvectors $`\varphi _j`$, $`j=1\mathrm{}4`$ are
$$|\varphi _j>=\frac{\sqrt{3}}{2}|\stackrel{}{n}_j,\stackrel{}{n}_j>+\frac{1}{2}|\psi ^{-}>$$
(2)
where $`|\psi ^{-}>`$ denotes the singlet state and the Bloch vectors $`\stackrel{}{n}_j`$ point to the 4 vertices of the tetrahedron:
$`\stackrel{}{n}_1`$ $`=`$ $`(0,0,1),\qquad \stackrel{}{n}_2=(\frac{\sqrt{8}}{3},0,-\frac{1}{3})`$ (3)
$`\stackrel{}{n}_3`$ $`=`$ $`(-\frac{\sqrt{2}}{3},\sqrt{\frac{2}{3}},-\frac{1}{3}),\qquad \stackrel{}{n}_4=(-\frac{\sqrt{2}}{3},-\sqrt{\frac{2}{3}},-\frac{1}{3})`$ (4)
\[The phases used in the definition of $`|\stackrel{}{n}_j>`$ are such that the 4 states $`\varphi _j`$ are mutually orthogonal\]. The exact values of the eigenvalues corresponding to the above eigenvectors are irrelevant; all that is important is that they are different from each other, so that each eigenvector can be unambiguously associated with a different outcome of the measurement. If the measurement result corresponds to $`\varphi _j`$, then the guessed direction is $`\stackrel{}{n}_j`$. The corresponding optimal fidelity is 3/4 .
A related case is when the directions $`\stackrel{}{n}`$ are a priori on the vertices of the tetrahedron, with equal probability 1/4. Then the above measurement provides a fidelity of $`5/6\approx 0.833`$, conjectured to be optimal.
Let us now consider pairs of anti-parallel spins, $`|\psi >=|\stackrel{}{n},-\stackrel{}{n}>`$, and the measurement whose eigenstates are
$`\theta _j=\alpha |\stackrel{}{n}_j,-\stackrel{}{n}_j>-\beta \sum _{k\ne j}|\stackrel{}{n}_k,-\stackrel{}{n}_k>`$ (5)
with $`\alpha =\frac{13}{6\sqrt{6}-2\sqrt{2}}\approx 1.095`$ and $`\beta =\frac{5-2\sqrt{3}}{6\sqrt{6}-2\sqrt{2}}\approx 0.129`$. The corresponding fidelity for uniformly distributed $`\stackrel{}{n}`$ is $`F=\frac{5\sqrt{3}+33}{3(3\sqrt{3}-1)^2}\approx 0.789`$; and for $`\stackrel{}{n}`$ lying on the tetrahedron $`F=\frac{2\sqrt{3}+47}{3(3\sqrt{3}-1)^2}\approx 0.955`$. In both cases the fidelity obtained for pairs of anti-parallel spins is larger than for pairs of parallel spins!
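As a quick sanity check of the constants quoted above (and of the comparison with the parallel-spin values 3/4 and 5/6), the expressions can be evaluated numerically; this is plain arithmetic, assuming the expressions exactly as written.

```python
import numpy as np

denom = 6 * np.sqrt(6) - 2 * np.sqrt(2)
alpha = 13 / denom
beta = (5 - 2 * np.sqrt(3)) / denom
F_uniform = (5 * np.sqrt(3) + 33) / (3 * (3 * np.sqrt(3) - 1) ** 2)
F_tetra = (2 * np.sqrt(3) + 47) / (3 * (3 * np.sqrt(3) - 1) ** 2)
print(alpha, beta)         # ~1.095, ~0.129
print(F_uniform, F_tetra)  # ~0.789 > 3/4 and ~0.955 > 5/6
```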
## III Spin flips
As we have seen in the previous section, parallel and anti-parallel spins are not equivalent. Let us try to understand why.
That there could be any difference between communicating a direction by two parallel spins or by two anti-parallel spins seems, at first sight, extremely surprising. After all, by simply flipping one of the spins we could change one case into the other. For example, if Bob knows that Alice indicates the direction by two anti-parallel spins he only has to flip the second spin and then apply all the measurements as in the case in which Alice sends parallel spins from the beginning. Thus, apparently, the two methods are bound to be equally good.
The problem is that one cannot flip a spin of unknown polarization. Indeed, it is easy to see that the flip operator V defined as
$$V|\stackrel{}{n}>=|-\stackrel{}{n}>$$
(6)
is not unitary but anti-unitary. Thus there is no physical operation which could implement such a transformation.
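A minimal numerical illustration, assuming the standard Bloch-sphere parametrization of spin states, is that the flipped state coincides (up to a global phase) with $`\sigma _y`$ applied to the complex conjugate of the original state; since complex conjugation is anti-unitary, no fixed unitary alone can realize the flip for all directions. The parametrization and the random sampling below are illustrative choices.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def ket(theta, phi):
    """Spin-1/2 state |n> pointing along (theta, phi) on the Bloch sphere."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

rng = np.random.default_rng(1)
for _ in range(5):
    th, ph = np.arccos(rng.uniform(-1, 1)), rng.uniform(0, 2 * np.pi)
    n, n_flip = ket(th, ph), ket(np.pi - th, ph + np.pi)   # |n> and |-n>
    candidate = sy @ np.conj(n)                            # sigma_y after complex conjugation
    print(round(abs(np.vdot(n_flip, candidate)), 12))      # 1.0: equal up to a global phase
```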
But then a couple of question arise. First, why is it still the case that a single spin polarized along $`\stackrel{}{n}`$ defines the direction as well as a single spin polarized in the opposite direction?
What happens is that although Bob cannot implement an active transformation, i.e. cannot flip the spin, he can implement a passive transformation, that is, he can flip his measuring devices. Indeed, there is no problem for Bob in flipping all his Stern-Gerlach apparatuses, or, even simpler than that, in merely renaming the outputs of each Stern-Gerlach: “up”$`\to `$“down” and “down”$`\to `$“up”.
But given the above, why can’t Bob solve the problem of two spins in the same way, just by performing a passive transformation on the apparatuses used to measure the second spin?!? The problem is the entanglement. Indeed, if the optimal strategy for finding the polarization direction would involve separate measurements on the two spins, then two parallel spins would be equivalent to two anti-parallel spins. (This would be true even if the measurement to be performed on the second spin depends on the result of the measurement on the first spin.) But, as shown in the previous section, the optimal measurement is not a measurement performed separately on the two spins but a measurement which acts on both spins simultaneously, that is, the measurement of an operator whose eigenstates are entangled states of the two spins. For such a measurement there is no way of associating different parts of the measuring device with the different spins, and thus there is no way to make a passive flip associated with the second spin. Consequently there is no way, neither active nor passive, to implement an equivalence between the parallel and anti-parallel spin cases.
This result illustrates once more that entanglement can produce results “classically impossible”, similar to Bell inequality and to non-locality without entanglement .
## IV Spin flips and the partial transpose of bipartite density matrices
We have claimed in the previous section that when we perform a measurement of an operator whose eigenstates are entangled states of the two spins, there is no way of making a passive flip associated with the second spin. We would like to comment in more detail about this point.
Physically it is clear that in the case of a measuring device corresponding to an operator whose eigenstates are entangled states of the two spins, we cannot identify one part of the apparatus as acting solely on one spin and another part of the apparatus as acting on the second spin. Thus we cannot simply isolate a part of the measuring device and rename its outcomes. But perhaps one could make such a passive transformation at mathematical level, that is, in the mathematical description of the operator associated to the measurement and then physically construct an apparatus which corresponds to the new operator.
The optimal measurement on two parallel spins is described by a nondegenerate operator whose eigenstates $`|\varphi _j>`$ are given by (2) and (3). It is convenient to consider the projectors $`P^j=|\varphi _j><\varphi _j|`$ associated with the eigenstates. As is well known, any unit-trace hermitian operator, and in particular any projector, can be written as
$$P^j=\frac{1}{4}(I+\stackrel{}{\alpha }^j\stackrel{}{\sigma }^{(1)}+\stackrel{}{\beta }^j\stackrel{}{\sigma }^{(2)}+R_{k,l}^j\sigma _k^{(1)}\sigma _l^{(2)}).$$
(7)
with some appropriate coefficients $`\stackrel{}{\alpha }^j`$, $`\stackrel{}{\beta }^j`$ and $`R_{k,l}^j`$. (The upper indexes on the spin operators mean “particle 1” or “2”). Why then couldn’t we simply make the passive spin flip by considering a measurement described by the projectors
$$\stackrel{~}{P}^j=\frac{1}{4}(I+\stackrel{}{\alpha }^j\stackrel{}{\sigma }^{(1)}-\stackrel{}{\beta }^j\stackrel{}{\sigma }^{(2)}-R_{k,l}^j\sigma _k^{(1)}\sigma _l^{(2)}).$$
(8)
obtained by flipping the operators associated with the second spin, $`\stackrel{}{\sigma }^{(2)}\to -\stackrel{}{\sigma }^{(2)}`$? The reason is that the transformed operators $`\stackrel{~}{P}^j`$ are no longer projectors! Indeed, each projection operator $`P^j`$ can also be viewed as a density matrix $`\rho ^j=P^j=|\varphi _j><\varphi _j|`$. The passive spin flip (7)$`\to `$(8) is nothing more than the partial transpose of the density matrices $`\rho ^j`$ with respect to the second spin. But each density matrix $`\rho ^j`$ is non-separable (because it describes the entangled state $`|\varphi _j>`$). According to the well-known result of the Horodeckis , the partial transpose of a non-separable density matrix of two spin 1/2 particles has a negative eigenvalue and thus cannot represent a projector anymore. Obviously, however, if the optimal measurement had consisted of independent measurements on the two spins, each projector would have been a direct product density matrix and the spin flip would have transformed it into a new projector, thus leading to a valid new measurement.
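This can be checked explicitly for one of the projectors. The sketch below builds the state of Eq. (2) for $`\stackrel{}{n}_1`$ along the z axis (the phase conventions of the other $`|\stackrel{}{n}_j>`$ are not needed for this check), takes the partial transpose with respect to the second spin, and finds a negative eigenvalue.

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
phi1 = np.sqrt(3) / 2 * np.kron(up, up) + 0.5 * singlet    # eigenstate (2) for n_1 = z

rho = np.outer(phi1, phi1).reshape(2, 2, 2, 2)             # rho[i, k, j, l]
rho_pt = rho.transpose(0, 3, 2, 1).reshape(4, 4)           # transpose the second spin
print(np.round(np.linalg.eigvalsh(rho_pt), 4))             # one eigenvalue is -0.125
```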
## V Spin flips, entropy and the global structure of the set of states
There is yet another surprise in the fact that anti-parallel spins can be better distinguished than parallel ones. Consider the two sets of states, that of parallel spins and that of anti-parallel spins. The distance between any two states in the first set is equal to the distance between the corresponding pair of states in the second set. That is,
$$|<\stackrel{}{n},\stackrel{}{n}|\stackrel{}{m},\stackrel{}{m}>|^2=|<\stackrel{}{n},-\stackrel{}{n}|\stackrel{}{m},-\stackrel{}{m}>|^2$$
(9)
Nevertheless, as a whole, the anti-parallel spin states are farther apart than the parallel ones! Indeed, the anti-parallel spin states span the entire 4-dimensional Hilbert space of the two spin $`\frac{1}{2}`$ particles, while the parallel spin states span only the 3-dimensional subspace of symmetric states. This is similar to a 3 spin example discovered by R. Jozsa and J. Schlienz.
## VI The universal quantum spin-flip and anti-cloning machines
As we have already noted in section III, a perfect universal quantum spin-flip machine, i.e. a machine which would reverse any spin $`\frac{1}{2}`$ state, $`|\stackrel{}{n}>\to |-\stackrel{}{n}>`$, is impossible - it would require an anti-unitary transformation. However, following the lesson of the cloning machine , let us ask how well one could approximate such a machine.
In analogy to the optimal universal cloning machine, let us define an optimal universal quantum spin-flip machine (UQSF). By definition, a UQSF is a machine which, acting on a spin 1/2 particle, implements the transformation
$$|\stackrel{}{n}>\to \rho (\stackrel{}{n})$$
(10)
such that $`\rho (\stackrel{}{n})`$ is as close as possible to $`|-\stackrel{}{n}>`$. For concreteness, we define “as close as possible” to mean “according to the usual fidelity” $`F=\int d\stackrel{}{n}<-\stackrel{}{n}|\rho (\stackrel{}{n})|-\stackrel{}{n}>`$. Furthermore, to be “universal” we require that the fidelity is independent of the initial polarization of the spin, that is, that all states are flipped equally well. (Obviously, in order to be able to implement the transformation (10), which is non-unitary, the UQSF machine is allowed to entangle the spin with an ancilla.)
Following the technique of , developed for optimal eavesdropping in the six state protocol of quantum cryptography, one finds that the fidelity of the optimal quantum spin-flip machine is 2/3 (which appears as the maximal disturbance Eve can introduce in the quantum channel).
One simple way to implement this optimal spin flip consists in first measuring the spin in an arbitrary direction, and then producing a spin pointing in the direction opposite to the measurement result.
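The 2/3 fidelity of this measure-and-prepare strategy is easy to reproduce numerically. The following Monte Carlo sketch assumes the measurement axis is chosen uniformly at random for each incoming spin and uses only the standard overlap formula, namely that the squared overlap of two spin states equals (1 plus the cosine of the angle between their Bloch vectors)/2.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

N = 200_000
n_true = random_unit_vectors(N)            # unknown input directions
m = random_unit_vectors(N)                 # random measurement axes
c = np.einsum('ij,ij->i', n_true, m)       # cos of the angle between n and m

# outcome +m with prob (1+c)/2 -> prepare |-m>, overlap with the target |-n> is (1+c)/2;
# outcome -m with prob (1-c)/2 -> prepare |+m>, overlap with the target |-n> is (1-c)/2.
F = np.mean(((1 + c) / 2) ** 2 + ((1 - c) / 2) ** 2)
print(F)                                    # -> 2/3
```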
Surprisingly enough, although the original goal was to flip a single spin (the input spin) the optimal UQSF machine can produce additional flipped spins at no extra cost! This follows from the fact that the optimal UQSF provides classical information which then can be used to prepare as many flipped spins as we want. This result is surprising because one is tempted to imagine that if we only want to flip a single spin we could do it with much better fidelity if we don’t attempt to extract classical information from it. At least this is the lesson of many other quantum information processing procedures, such as cloning, teleportation, data compression, quantum computation etc. In all these cases quantum information can be processed with much better results if we keep it all the time in quantum form rather than extracting some classical information from it and processing this classical information. The deep reason why spin flipping is essentially a classical operation is an interesting but yet open question.
One can also consider other interesting machines, for example machines that take as input 2 parallel spins $`|\stackrel{}{n},\stackrel{}{n}>`$ and whose output is as close as possible either to $`|\stackrel{}{n},-\stackrel{}{n}>`$, or to $`|-\stackrel{}{n}>`$, or to $`|\stackrel{}{n},\stackrel{}{n},-\stackrel{}{n}>`$. Another interesting open question is which operation can be produced with higher fidelity: from two parallel spins to anti-parallel ones, or vice versa.
## VII Spin-flips and quantum optics
Entanglement is closely connected to the mathematics of partial transpose $`\rho _{ij,kl}^T=\rho _{il,kj}`$: a 2-spin $`\frac{1}{2}`$ (mixed) state is separable if and only if its partial transpose has non-negative eigenvalues . Interestingly, partial transposes can be seen as a representation of spin flips. Indeed, the partial transpose of a product operator reads
$$(a_0+\stackrel{}{a}\stackrel{}{\sigma })(b_0+\stackrel{}{b}\stackrel{}{\sigma })\to (a_0+\stackrel{}{a}\stackrel{}{\sigma })(b_0+\stackrel{}{b}\stackrel{}{\sigma }-2b_z\sigma _z)$$
(11)
(where the $`\sigma _k`$ are the usual Pauli matrices); hence, a partial transpose is a reflection of the second spin through the x-z plane (the plane depends on the basis, as the partial transpose is basis-dependent). Note that this is a practical way of representing the polarization of a photon reflected by a mirror: the upper and lower hemispheres of the Poincaré sphere are exchanged, corresponding to the exchange of right handed and left handed elliptic polarization states. To complete the connection with spin flips, add after the reflection a $`\pi `$ rotation around the axis orthogonal to the reflection plane (like the Faraday rotator in Faraday mirrors ); this flips the second spin. Now, the proof that perfect UQSF machines do not exist can be reformulated: a perfect UQSF machine would turn entangled states into states with negative eigenvalues! Physically, the use of a mirror acting only on the second photon is of course still possible, but one must note that mirrors change right handed reference frames into left handed ones. This is acceptable as long as one can describe the two photons separately, but leads to erroneous predictions if applied to entangled photons.
## VIII Conclusion
We have proved that there is more information about a space direction $`\stackrel{}{n}`$ in a pair of anti-parallel spins $`|\stackrel{}{n},-\stackrel{}{n}>`$ than in a pair of parallel spins $`|\stackrel{}{n},\stackrel{}{n}>`$. This demonstrates again the role played by entanglement in quantum information processing: not a source of paradoxes, but a means of performing tasks which are impossible classically. It also draws attention to the global structure of the state space of combined systems. Related questions concern the optimal quantum spin-flip machine and the optimal quantum machine that turns a parallel pair of spins into an anti-parallel one.
## IX Acknowledgments
This work was partially supported by the Swiss National Science Foundation and by the European TMR Network “The Physics of Quantum Information” through the Swiss OFES. After completion of this work, the very interesting article by V. Buzek, M. Hillery and R. Werner appeared on quant-ph, introducing the universal quantum NOT gate, which is our quantum spin-flip machine. Finally, we would like to thank Lajos Diosi for helpful discussions on spin flips.
# Realistic model of correlated disorder and Anderson localization
## Abstract
A conducting 1D line or 2D plane inside (or on the surface of) an insulator is considered. Impurities displace the charges inside the insulator. This results in a long-range fluctuating electric field acting on the conducting line (plane). This field can be modeled by that of randomly distributed electric dipoles. This model provides a random correlated potential with $`<U(r)U(r+k)>\sim 1/k`$. In the 1D case such correlations give essential corrections to the localization length but do not destroy Anderson localization.
It was recently stated in that some special correlations in a random potential can produce a mobility edge (between localized and delocalized states) inside the allowed band in the 1D tight-binding model. In principle, extrapolation of this result to 2D systems may give a possible explanation of the insulator-conductor transition in dilute 2D electron systems observed in ref. . In such a situation it is very important to build a reasonable model of “correlated disorder” in real systems and calculate the effects of this “real” disorder.
Usually, a 1D or 2D conductor is made inside or on the surface of an insulating material. Impurities inside the insulator displace the electric charges. However, a naive “random charge” model violates electro-neutrality and gives wrong results. Indeed, the impurities do not produce new charges, they only displace charges thus forming electric dipoles. Therefore, we consider a model of randomly distributed electric dipoles (alternatively, one can consider a spin glass model which gives the correlated random magnetic field). The dipoles have long-range electric field. Therefore, the potentials at different sites turn out to be correlated.
The potential energy produced by the system of the dipoles $`d_j`$ is equal to
$$U(\mathbf{r})=e\sum _j\mathbf{d}_j\cdot \nabla \frac{1}{|\mathbf{r}-\mathbf{R}_j|}.$$
(1)
The average value of this potential is zero if $`\langle \mathbf{d}_j\rangle =0`$. The fluctuations of the potential at a given site are as follows
$$\langle U(r)U(r)\rangle =\frac{e^2d^2\rho }{3}\int \frac{d^3R}{R^4}=\frac{4\pi e^2d^2\rho }{3r_0}.$$
(2)
Here we assumed that $`\langle d_i^\alpha d_j^\beta \rangle =(d^2/3)\delta _{ij}\delta _{\alpha \beta }`$, where $`\alpha `$ and $`\beta `$ are space indices, and the dipoles are distributed in space with a constant density $`\rho `$. We had to introduce a cut-off parameter $`r_0`$ since the integral diverges at small $`R`$. This parameter is, in fact, the geometrical size of the dipole. Indeed, inside the radius $`r_0`$ the electric field cannot be described by the dipole formula and the real potential $`U(r)`$ does not contain the singularity $`1/r^2`$ which leads to the divergence of the integral. Our cut-off corresponds to a zero field inside the sphere of radius $`r_0`$.
The correlator of the potentials at the points $`𝐫_\mathrm{𝟏}`$ and $`𝐫_\mathrm{𝟐}`$ is equal to
$$\langle U(\mathbf{r}_1)U(\mathbf{r}_2)\rangle =e^2\sum _{i,j}\left<\mathbf{d}_i\cdot \nabla \frac{1}{|\mathbf{r}_1-\mathbf{R}_i|}\;\mathbf{d}_j\cdot \nabla \frac{1}{|\mathbf{r}_2-\mathbf{R}_j|}\right>=\frac{e^2d^2}{3}\sum _j\left(\nabla \frac{1}{|\mathbf{r}_1-\mathbf{R}_j|}\right)\cdot \left(\nabla \frac{1}{|\mathbf{r}_2-\mathbf{R}_j|}\right).$$
(3)
Assume that the dipoles are distributed in space with a constant density $`\rho `$. Then we have
$$\langle U(\mathbf{r}_1)U(\mathbf{r}_2)\rangle =\frac{e^2d^2\rho }{3}\int d^3R\left(\nabla \frac{1}{|\mathbf{r}_1-\mathbf{R}|}\right)\cdot \left(\nabla \frac{1}{|\mathbf{r}_2-\mathbf{R}|}\right)=\frac{4\pi e^2d^2\rho }{3|\mathbf{r}_1-\mathbf{r}_2|}.$$
(4)
The integral here is convergent and can be easily calculated for $`r_0=0`$ using integration by parts. It is interesting that the result is the same (does not depend on $`r_0`$) for any $`r_0<|\mathbf{r}_1-\mathbf{r}_2|/2`$. Indeed, this problem is mathematically equivalent to the calculation of the electrostatic energy of two charged spheres of radius $`r_0`$ (the interaction energy is proportional to the cross term $`2\mathbf{E}_1\cdot \mathbf{E}_2/8\pi `$ in the energy density of the electric field $`\mathbf{E}^2/8\pi `$; $`\mathbf{E}_{1,2}=e\nabla \left(1/|\mathbf{r}_{1,2}-\mathbf{R}|\right)`$). The answer is known: the interaction energy is equal to $`e^2/|\mathbf{r}_1-\mathbf{r}_2|`$ if $`2r_0<|\mathbf{r}_1-\mathbf{r}_2|`$.
Thus, we obtain the following result for the normalized correlator
$$\xi (k)\equiv \frac{\langle U(\mathbf{r})U(\mathbf{r}+\mathbf{k})\rangle }{\langle U^2\rangle }=\frac{r_0}{k}.$$
(5)
We see that the correlations in the dipole random potential decay inversely proportional to the distance.
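As an illustrative cross-check (a sketch, not part of the original paper; plain numpy, arbitrary units with $`e=1`$), the integral appearing in Eq. (4) can be evaluated numerically on a cylindrical grid, confirming that it stays close to $`4\pi /|\mathbf{r}_1-\mathbf{r}_2|`$ and is insensitive to the cutoff $`r_0`$:

```python
import numpy as np

# Sketch: numerical check of the integral in Eq. (4).  For r1 = (-a,0,0) and
# r2 = (+a,0,0) the integrand is axially symmetric, so the 3D integral reduces
# to a 2D one over (x, p), with p the cylindrical radius.
a, r0 = 2.0, 1.0                  # |r1 - r2| = 4 and a cutoff r0 < |r1 - r2|/2
h, L = 0.1, 120.0                 # grid step and half-size of the integration region

x = np.arange(-L + h/2, L, h)
p = np.arange(h/2, L, h)
X, P = np.meshgrid(x, p, indexing='ij')

s1 = np.sqrt((X + a)**2 + P**2)   # |R - r1|
s2 = np.sqrt((X - a)**2 + P**2)   # |R - r2|
# grad(1/|r1-R|) . grad(1/|r2-R|) = (x^2 - a^2 + p^2) / (s1^3 * s2^3)
f = 2.0 * np.pi * P * (X**2 - a**2 + P**2) / (s1**3 * s2**3)
f[(s1 < r0) | (s2 < r0)] = 0.0    # excise the cutoff spheres around r1 and r2

# ~3.0-3.1 (slightly low because of the finite box); analytic value 4*pi/4 = 3.14...
print(np.sum(f) * h**2)
```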
In the Refs. the following expression for the inverse localization length for the 1D discrete Schrödinger equation
$$\psi _{n+1}+\psi _{n1}=(E+ϵ_n)\psi _n$$
(6)
has been obtained:
$$l^{-1}=\frac{ϵ_0^2\varphi (\mu )}{8\mathrm{sin}^2\mu };$$
(7)
$$\varphi (\mu )=1+2\sum _{k=1}^{\infty }\xi (k)\mathrm{cos}(2\mu k)$$
(8)
Here the eigenenergy is $`E=2\mathrm{cos}\mu `$, $`ϵ_n=U(r_n)`$, $`ϵ_0^2=\langle U^2\rangle `$. This equation has been derived in the approximation $`ϵ_n\ll 1`$. Now we can substitute the correlator $`\xi (k)=r_0/k`$ into this equation. The result is
$$\varphi (\mu )=1-2r_0\mathrm{ln}|2\mathrm{sin}\mu |.$$
(9)
In this equation $`r_0`$ is measured in units of the lattice constant. The minimum of $`\varphi `$ is given by $`\varphi _{min}\approx 1-1.4r_0`$. The delocalization corresponds to $`\varphi =0`$ ($`r_0=0.72`$). This condition seems to be impossible to satisfy. Indeed, the equation $`\xi (k)=r_0/k`$ is valid for $`r_0<0.5`$. The realistic value of $`r_0`$ is smaller than this limit. A typical dipole size in molecules is about one Bohr radius while the lattice constant is about five times larger. This gives an estimate $`r_0\approx 0.2`$. Also, any short-range fluctuations increase $`\langle U^2\rangle `$ and reduce the normalized long-range correlator $`\xi (k)`$, see eq. (5).
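A few lines of code (an illustrative sketch under the same approximation $`ϵ_n\ll 1`$; numpy assumed) evaluate Eqs. (7) and (9) and reproduce the numbers quoted above:

```python
import numpy as np

def phi(mu, r0):
    """Eq. (9): phi(mu) = 1 - 2*r0*ln|2 sin(mu)|, r0 in units of the lattice constant."""
    return 1.0 - 2.0 * r0 * np.log(np.abs(2.0 * np.sin(mu)))

def inverse_loc_length(mu, r0, eps0_sq):
    """Eq. (7): l^{-1} = eps0^2 * phi(mu) / (8 sin^2(mu)), valid for eps_n << 1."""
    return eps0_sq * phi(mu, r0) / (8.0 * np.sin(mu)**2)

mu = np.linspace(0.01, np.pi - 0.01, 10000)
for r0 in (0.2, 0.5, 0.72):
    print(r0, phi(mu, r0).min())   # minimum of phi: about 1 - 1.4*r0, reached at mu = pi/2

# For the realistic r0 ~ 0.2 the correction is sizeable but phi > 0 everywhere,
# so the states remain localized; e.g. at the band centre (E = 0, mu = pi/2):
print(inverse_loc_length(np.pi/2, 0.2, eps0_sq=0.01))
```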
However, the correlations change the localization length significantly. It is well known that in the 2D case the localization length is very sensitive to the parameters of the problem. It would be a natural guess that in the 2D case the correlations (5), due to the long-range character of the dipole field, are more important than in the 1D case, and that they may lead to delocalization.
Acknowledgments. The author acknowledges the support from the Australian Research Council. He is grateful to F. Izrailev and A. Krokhin for discussions and to V. Zelevinsky for valuable comments and hospitality during his stay in the MSU Cyclotron Laboratory when this work was done.
# Imprints of log-periodic self-similarity in the stock market
## Abstract
Detailed analysis of the log-periodic structures as precursors of the financial crashes is presented. The study is mainly based on the German Stock Index (DAX) variation over the 1998 period which includes both, a spectacular boom and a large decline, in magnitude only comparable to the so-called Black Monday of October 1987. The present example provides further arguments in favour of a discrete scale-invariance governing the dynamics of the stock market. A related clear log-periodic structure prior to the crash and consistent with its onset extends over the period of a few months. Furthermore, on smaller time-scales the data seems to indicate the appearance of analogous log-periodic oscillations as precursors of the smaller, intermediate decreases. Even the frequencies of such oscillations are similar on various levels of resolution. The related value $`\lambda 2`$ of preferred scaling ratios is amazingly consistent with those found for a wide variety of other complex systems. Similar analysis of the major American indices between September 1998 and February 1999 also provides some evidence supporting this concept but, at the same time, illustrates a possible splitting of the dynamics that a large market may experience.
PACS numbers: 01.75.+m Science and society - 05.40.+j Fluctuation phenomena, random processes, and Brownian motion - 89.90.+n Other areas of general interest to physicists
The fact that a healthy and normally functioning financial market may reveal certain properties common to complex systems is fascinating and, in fact, seems natural. Especially interesting in this context is the recently suggested analogy of the financial crashes to critical points in statistical mechanics . Criticality implies a scale invariance which in mathematical terms, for a properly defined function $`F(x)`$ characterizing the system, means that for small $`x`$
$$F(\lambda x)=\gamma F(x).$$
(1)
A positive constant $`\gamma `$ in this equation describes how the properties of the system change when it is rescaled by the factor $`\lambda `$. The simplest solution to this equation reads:
$$F_0(x)=x^\alpha ,$$
(2)
where $`\alpha =\mathrm{log}(\gamma )/\mathrm{log}(\lambda )`$. This is a standard power-law that is characteristic of continuous scale-invariance and $`\alpha `$ is the corresponding critical exponent.
More interesting is the general solution to Eq. (1):
$$F(x)=F_0(x)P(\mathrm{log}F_0(x)/\mathrm{log}(\gamma )),$$
(3)
where $`P`$ denotes a periodic function of period one. In this way the dominating scaling (2) acquires a correction which is periodic in $`\mathrm{log}(x)`$. This solution accounts for a possible discrete scale-invariance and can be interpreted in terms of a complex critical exponent $`\alpha =\alpha _R+i\alpha _I`$, since $`\mathrm{Re}\{x^\alpha \}=x^{\alpha _R}\mathrm{cos}(\alpha _I\mathrm{log}(x))`$, which corresponds to the first term in a Fourier expansion of (3). Thus, if $`x`$ represents a distance to the critical point, the resulting spacings between consecutive minima $`x_n`$ (maxima) of the log-periodic oscillations seen in the linear scale follow a geometric contraction according to the relation:
$$\frac{x_{n+1}-x_n}{x_{n+2}-x_{n+1}}=\lambda .$$
(4)
Then, the critical point coincides with the accumulation of such oscillations.
Existence of the log-periodic modulations correcting the structureless pure power-law behaviour has been identified in many different systems . Examples include diffusion-limited-aggregation clusters , crack growth , earthquakes and, as already mentioned, the financial market where $`x`$ is to be interpreted as the time to crash. Especially in the last two cases this is an extremely interesting feature because it potentially offers a tool for predictions. Of course, the real financial market is exposed to many external factors which may distort its internal hierarchical structure on the organizational as well as on the dynamical level. Therefore, the searches for the long term, of the order of few years, precursors of crashes have to be taken with some reserve, as already pointed out in Ref. . A somewhat related example is shown in Fig. 1 which displays the S$`\&`$P 500 versus DAX charts between 1991 and February 1999. While the global characteristics of the two charts are largely compatible there exist several significant differences on shorter time-scales. It is the purpose of the present paper to explore more in detail the emerging short-time behaviour of the stock market indices.
On more general grounds, the current attitude in developing the related theory is logically not fully consistent. First of all, no methodology is provided as to how to incorporate a pattern of log-periodic oscillations preceding a particular crash into an even more serious crash which may potentially occur a year or two later. Secondly, even though there is some indication that the largest crashes are outliers belonging to a different population , there exists no precise definition of what is to be qualified as a crash, especially in the context of its analogy to critical points of statistical mechanics. Just a bare statement that the crash corresponds to a discontinuity in the derivative of an appropriate market index is not sufficiently accurate to decide what amount of decline is needed to signal a real crash and what is to be considered only a ’correction’. In fact, a closer inspection of various market indices on different time-scales suggests that it is justifiable to consider them as nowhere differentiable. An emerging scenario of the market evolution, in a natural way resolving this kind of difficulties, would then correspond to a permanent competition between booms and crashes of various sizes; a picture somewhat analogous to the self-organized critical state and consistent with a causal information cascade from large scales to small scales as demonstrated through the analysis of correlation functions . In this connection the required existence of many critical points within the renormalization group theory may result from a more general nonlinear renormalization flow map, i.e. by replacing $`\lambda x`$ by $`\varphi (x)`$ . In fact, such a mechanism may even remain compatible with the log-periodic scaling properties of Eq. (3) on various scales. For this to apply, the accumulation points of the log-periodic oscillations on smaller scales need themselves to be distributed as a log-periodic sequence.
Identification of a clean hierarchy of the above suggested structures on the real market is not expected to be an easy task because of a possible contamination by various external factors or by some internal market nonuniformities. However, on the longer time-scales many such factors may cancel out to a large extent . Within the shorter time-intervals, on the other hand, the influence of such factors can significantly be reduced by an appropriate selection of the location of such intervals. In this latter sense one finds the most preferential conditions in the recent DAX behaviour as no obvious external events that may have influenced its evolution can be indicated. Here, as illustrated in Fig. 2, within the period of only 9 months preceding July 1998 the index went up from about 3700 to almost 6200 and then quickly declined to below 4000. This drawdown is, however, somewhat slower than some of the previous crashes analysed in a similar context, but for this reason it even better resembles a real physical second order phase transition. During the spectacular boom period the three most pronounced deep minima, indicated by the upward long solid-line arrows, can immediately be located. Denoting the resulting times as $`t_n`$, $`t_{n+1}`$, $`t_{n+2}`$ and making the correspondence with Eq. (3) by setting $`x_i=t_c-t_i`$, where $`t_c`$ is the crash time, these three points alone can be used to determine $`t_c`$:
$$t_c=\frac{t_{n+1}^2-t_{n+2}t_n}{2t_{n+1}-t_n-t_{n+2}}.$$
(5)
The result is indicated by a similar downward arrow and reasonably well agrees with the actual time of crash. The corresponding preferred scaling ratio between $`t_{n+1}-t_n`$ and $`t_{n+2}-t_{n+1}`$ (Eq. (4)), governing the log-periodic oscillations, gives $`\lambda =2.17`$, which is consistent with the previous cases analysed in the literature not only in connection with the market evolution but for a wide variety of other systems as well and, thus, may indicate a universal character of the mechanism responsible for discrete scale invariance in complex systems.
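For concreteness, the "three minima" estimate of Eqs. (4)-(5) amounts to the following short routine (an illustrative sketch; the input dates below are invented decimal-year values, not the actual DAX minima):

```python
def log_periodic_fit(t_n, t_n1, t_n2):
    """Given three consecutive minima t_n < t_n1 < t_n2, return the scaling
    ratio lambda of Eq. (4) and the predicted critical time t_c of Eq. (5)."""
    lam = (t_n1 - t_n) / (t_n2 - t_n1)
    t_c = (t_n1**2 - t_n2 * t_n) / (2.0 * t_n1 - t_n - t_n2)
    return lam, t_c

# Purely illustrative numbers (decimal years):
lam, t_c = log_periodic_fit(97.90, 98.25, 98.41)
print(lam, t_c)   # lambda ~ 2.19, accumulation point t_c ~ 98.54
```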
As a further element of the present analysis it can be quite clearly seen from Fig. 2 that there is essentially no qualitative difference between the nature of the major crash and those index declines that mark its preceding log-periodically distributed minima. Indeed, they also seem to be preceded by their own log-periodic oscillations within appropriately shorter time-intervals. The two such sub-sequences are indicated by the long-dashed and short-dashed arrows, respectively, and the corresponding $`t_c`$ calculated using Eq. (5) by downward arrows of the same type. In both cases the so-estimated $`t_c`$’s also reasonably well coincide with times of the decline. Interestingly, the scaling ratios $`\lambda `$ for these log-periodic structures equal 2.06 and 2.07, respectively, and thus turn out consistent with the above value of 2.17. Moreover, even on the deeper level of resolution the two sequences of identifiable oscillations indicated by the dotted-line arrows in Fig. 2 develop analogous structures resulting in $`\lambda =2.26`$ (earlier case) and $`\lambda =2.1`$ (later case), which is again consistent with all its previous values. This means that such a whole fractal hierarchy of log-periodic structures may still remain grasped by one function of the type (3). In this scenario the largest crash of Fig. 2 may appear as just one component of log-periodic oscillations extending into the future and announcing an even larger future crash. Large crashes can thus be assigned no particular role in this sense. They are preceded by the log-periodic oscillations of about the same frequency as the small ones. What still can make them outliers are parallel secondary effects like an overall increase of the market volume which may lead to an additional amplification of their amplitude.
A violent reversal in a market tendency may reveal log-periodic-like structures even during intra-day trading. One such example is illustrated in Fig. 3 which shows the minutely DAX variation between 11:50 and 15:30 on January 8, 1999. It is precisely during this period (still seen in Fig. 1) that DAX reached its few-month maximum after recovery from the previously discussed crash. Taking the average of the ratios between the five consecutive neighbouring time-intervals determined by the six deepest minima indicated by the upward arrows results here in $`\lambda \approx 1.7`$. The corresponding $`t_c`$ (downward arrow) again quite precisely indicates the onset of the decline.
Directly before the major crash on July 20, 1998 the trading dynamics somewhat slows down (as can be seen from Fig. 2) and no such structures on the level of minutely variation can be identified in this case, however. In fact, a fast increase of the market index just before its subsequent decline seems to offer the most favourable conditions for the log-periodic oscillations to show up on the time-scales of a few months or shorter. This may simply reflect the fact that a faster internal market dynamics generates such oscillations of larger amplitude, which thus gives them a better chance to dominate a possible external corruption. Consistently, another market (Hong-Kong Stock Exchange) whose Hang Seng index went up recently by almost $`40\%`$ during about a four-month period in 1997 also provides quite a convincing example of the short term log-periodic oscillations. The four most pronounced minima at 97.24, 97.43, 97.52 and 97.56 trace a geometric progression with a common $`\lambda `$ of about 2.15 and this progression converges to $`t_c\approx 97.6`$, thus exactly indicating the beginning of a dramatic crash. The relevant Hang Seng chart can be seen in Fig. 2 of ref. .
Instead of listing further examples where the log-periodic oscillations accompany a local fast increase of the market index we find it more instructive to study in more detail the recent development on the American market. It provides further evidence in favour of this concept but at the same time illustrates certain possible related subtleties. Fig. 4 shows the S$`\&`$P500 behaviour starting mid September 1998 versus the two closely related and most frequently quoted indices: the Dow Jones Industrial Average (DJIA), which is entirely comprised by S$`\&`$P500, and the Nasdaq-100, about $`60\%`$ of whose volume overlaps with S$`\&`$P500. The latter two indices (DJIA and Nasdaq-100) are totally disconnected in terms of the company content. In order to make the relative speed of the changes directly visible all these indices are normalized such that they are equal to unity at the same date (here on February 1, 1999). Clearly, it is Nasdaq which within the short period between October 8, 1998 (its lowest value in the period considered) and February 1, 1999 develops a very spectacular rise by almost doubling its magnitude. In this case the three most pronounced consecutive minima ($`\lambda =2.25`$) also quite precisely point to the onset of the following 11$`\%`$ correction. Parallel increase of the DJIA is much slower, the pattern of oscillations much more difficult to uniquely interpret and, consistently, no correction occurs. At the same time the S$`\&`$P500 largely behaves like an average of the two. Even though it displays similar three minima as Nasdaq, the early February correction is only rudimentary. In a sense we are thus facing an example of a very interesting temporary spontaneous decoupling of a large market, as here represented by S$`\&`$P500, into submarkets some of which may evolve for a certain period of time according to their own log-periodic pattern of oscillations which are masked in the global index. The smaller, more uniform markets are less likely to experience such effects of decoupling and are thus expected to constitute better candidates to manifest the short-time universal structures. The opposite may apply to the global index since from the longer time-scales perspective such effects should be less significant. The examples analysed here as well as those studied in the literature are in fact consistent with this interpretation. If it applies, such an interpretation provides another physically appealing picture: short-time log-periodicity is more localized in the ’market space’ while the longer time-scales probe its more global aspects.
Tabulating the critical exponents $`\alpha _R`$ for the stock market in the context of our present study of criticality on various time-scales doesn’t seem equally useful as their values significantly depend on the time-window inspected. For instance, in the case of the DAX index we find values ranging between 0.2 for the few years-long time-intervals up to almost 1 for the shorter time-intervals corresponding to the identified log-periodic substructures. One may argue that the logarithm of the stock market index constitutes a more appropriate quantity for determining the critical exponents. However, also on the level of the logarithm the exponents vary between about 0.3 up to 1 in analogous time-intervals as above.
In conclusion, the present analysis provides further arguments for the existence of the log-periodic oscillations constituting a significant component in the time-evolution of the fluctuating part of the stock market indices. Even more, imprints are found for the whole hierarchy of such log-periodically oscillating structures on various time-scales and this hierarchy carries signatures of self-similarity. An emerging scenario of the market evolution characterized by nowhere differentiable permanent competition between booms and crashes of various sizes is then much more logically acceptable and consistent. Of course, in general, it would be naive to expect that on the real market any index fluctuation can uniquely be classified as a member of a certain log-periodically distributed sequence. Some of such fluctuations may be caused by external factors which are likely to be completely random relative to the market intrinsic evolutionary synchrony. It is this complex intrinsic interaction of the market constituents which may lead to such universal features as the ones discussed above and it is extremely interesting to see that such features (a consistent sequence of the log-periodically distributed oscillations) can quite easily be identified with the help of some physics guidance. The above result also makes clear that the stock market log-periodicity reveals a much richer structure than just the lowest order Fourier expansion of Eq. (3), and therefore, at the present stage the ’arrow dropping’ procedure used here offers much more flexibility in catching the essential structures and seems thus more appropriate, especially on shorter time-scales. Finally, in this context we wish to draw attention to the Weierstrass-Mandelbrot fractal function <sup>*</sup><sup>*</sup>*The Weierstrass-Mandelbrot fractal function is defined as $`W(t)=\sum _{n=-\infty }^{\infty }(1-e^{i\gamma ^nt})e^{i\varphi _n}/\gamma ^{\eta n}`$, where $`0<\eta <1`$, $`\gamma <1`$ and $`\varphi _n`$ is an arbitrary phase. It is easy to show by relabeling the series index that for an appropriate choice of the set of phases $`\{\varphi _n\}`$, $`W(\gamma t)=\gamma ^\eta W(t)`$. which is continuous everywhere, but is nowhere differentiable and can be made to obey the renormalization group equation. The relevance of this function for log-periodicity has already been pointed out in connection with earthquakes . It is likely that a variant of this function also provides an appropriate representation for the stock market criticality.
We thank R. Felber and Dr. M. Feldhoff for very useful discussions on the related matters. We also wish to thank Dr. A. Johansen and Prof. D. Sornette for their useful comments on the earlier version (cond-mat/9901025) of this paper and for bringing to our attention some of the references that we were unaware of.
FIGURE CAPTIONS
Fig. 1. The Deutsche Aktienindex–DAX (upper chart) versus S&P 500 (lower chart) in the period 1991-1999. Logarithms of both indices are shown for a better comparison within the same scale.
Fig. 2. The daily evolution of the Deutsche Aktienindex from October 1997 to October 1998. Upward arrows indicate minima of the log-periodic oscillations used to determine the corresponding critical times denoted by the downward arrows. Different types of arrows (three upward and one downward) correspond to different sequences of log-periodic oscillations identified on various time-scales.
Fig. 3. The minutely DAX variation between 11:50 and 15:30 on January 8, 1999. Upward arrows indicate minima used to determine the corresponding critical time.
Fig. 4 The daily variation of S$`\&`$P500 (upper panel), Nasdaq-100 (middle panel) and Dow Jones Industrial Average (lower panel) from September 1998 till March 1999. All these indices are normalized such that they equal unity on February 1, 1999.
# Predictions for The Very Early Afterglow and The Optical Flash
## 1 Introduction
The original fireball model was invoked to explain the Gamma-Ray Burst (GRB) phenomenon. Extreme relativistic motion, with Lorentz factor $`\gamma >100`$, is necessary to avoid the attenuation of hard $`\gamma `$-rays due to pair production. Such extreme relativistic bulk motion is not seen anywhere else in astrophysics. This makes GRBs a unique and extreme phenomenon. Within the fireball model the observed GRB and the subsequent afterglow all emerge from shocked regions in which the relativistic flow is slowed down. We do not see directly the “inner engine” which is the source of the whole phenomenon. It is, therefore, of the utmost importance to obtain as much information as possible on the nature of this flow, as this would provide us with some of the best clues to what is producing GRBs.
The afterglow, which was discovered more than a year ago, has revolutionized GRB astronomy. It proved the cosmological origin of the bursts. The observations, which fit the fireball theory fairly well, are considered a confirmation of the fireball model. According to this model the afterglow is produced by synchrotron radiation emitted when the fireball decelerates as it collides with the surrounding medium.
However, the current afterglow observations, which detect radiation from several hours after the burst onwards, do not provide a verification of the initial extreme relativistic motion. Several hours after the burst the Lorentz factor is less than $`10`$. Furthermore, at this stage it is independent of the initial Lorentz factor. These observations do not provide any information on the initial extreme conditions which are believed to produce the burst itself.
It was recently shown (Sari & Piran, 1997, Fenimore, Madras & Nayakshin 1996) that the burst itself cannot be efficiently produced by external shocks, and internal shocks must occur. This has led to the internal-external scenario. The GRB is produced by internal shocks while the afterglow is produced by external shocks. Additionally, there is some observational evidence in favor of the internal-external picture. First, the fact that afterglows are not scaled directly to the GRB suggests that the two are not produced by the same phenomenon. Second, while most GRBs show a very irregular time structure and are highly variable, all afterglows observed so far show a smooth power law decay with minimal or no variability. Still, this evidence is so far somewhat inconclusive. In view of the importance of its implications we should search for an additional proof. We suggest here that observation of the early afterglow could provide us with a verification of this picture.
In the internal shocks scenario the time scale of the burst and its overall temporal structure follow, to a large extent, the temporal behavior of the source which generates the relativistic flow and powers the GRB (Kobayashi, Piran & Sari, 1997). A fast shell, with a Lorentz factor $`>2\gamma `$, will catch up with a slower shell of Lorentz factor $`\gamma `$ that was emitted $`\delta t`$ earlier at a radius of $`R\approx 2\gamma ^2c\delta t`$. The observed time for this collision will therefore be $`R/2\gamma ^2c\approx \delta t`$. The fact that the Lorentz factor cancels out shows that the observed temporal structure of the burst cannot provide any information on the initial Lorentz factor with which the shell was injected.
The initial Lorentz factor is a crucial ingredient for constraining models of the source itself. The initial Lorentz factor specifies how “clean” the fireball is, as the baryonic load is $`M=E/\gamma _0c^2`$. A very high Lorentz factor would indicate a very low baryonic load, which would indicate some sort of electromagnetic acceleration or even a Poynting flux flow. More moderate Lorentz factors could more easily allow for the usual hydrodynamic models. The previous discussion shows that we cannot infer the initial Lorentz factor from the observed temporal structure in GRBs. Unfortunately, the spectrum of the GRBs can provide only a lower limit to this Lorentz factor. This lower limit of $`100`$ is given by the appearance of high energy photons, which would have produced pairs if the Lorentz factor were low (Fenimore, Epstein, & Ho, 1993; Woods & Loeb, 1995; Piran, 1997). In the internal shock scenario the observed spectrum depends on the Lorentz factor only via the blue shift. However, the frequency in the local frame is highly uncertain since it depends on many poorly known parameters such as the fraction of energy given to the electrons, the magnetic field and the relative Lorentz factor between shells. Therefore the spectrum cannot teach us much about the initial Lorentz factor.
Furthermore, the basic mechanism by which the burst is produced, internal or external shocks, must be understood before a reliable source model can be given. External shocks could be produced by a single short explosion. Internal shocks require, however, a long and highly variable wind. The inner engine should operate for a long time - as long as the duration of the burst. We must know whether the source operates for a millisecond or for tens or even hundreds or thousands of seconds.
Mochkovitch, Maiti & Marques (1995) and Kobayashi, Piran & Sari, (1997) have shown that only a fraction of the total energy of the relativistic flow could be radiated away by the internal shocks. This means that an ample amount of energy is left in the flow and a significant fraction of it can be emitted by the early afterglow.
GRBs are among the most luminous objects in the universe. They produce a huge fluence, mostly in $`\gamma `$-rays. If this fluence was released optically, a flash of 5th magnitude would have been produced. A magnitude of 5 is by far stronger than current observational upper limits on early optical emission. In fact, even a small fraction of this will be easily observed. It is therefore of importance to calculate any residual emission in the optical band (Sari & Piran 1999).
We explore in this paper the expected prompt (early afterglow) multi-wavelength signal. We show that this initial afterglow signal could, when measured, provide us with information on the initial Lorentz factor and at least hint indirectly at the nature of the relativistic flow. These could provide some important clues on the nature of the “inner engine” that powers GRBs. The physical model of the afterglow is synchrotron emission from relativistic electrons that are being continuously accelerated by an ongoing shock with the surrounding medium. We consider both the emission due to the hot shocked surrounding medium and that from the reverse shock that is propagating into the shell. The spectral characteristics of the synchrotron emission process are unique while the light curve depends on the hydrodynamic evolution, which is more model dependent. We begin, therefore (in section 2), by exploring the broad band spectrum due to the forward shock. Then we turn (in section 3) to discuss the possible light curves in several frequency regimes. In section 4 we show how future observations of the early afterglow can be used to estimate the initial Lorentz factor. We show that a detection of a delay between the GRB and its afterglow as well as observation of the characteristic frequency in the early afterglow can finally provide strong evidence for the internal shocks mechanism. In section 5 we calculate the optical emission including that expected from the reverse shock.
## 2 Synchrotron Spectrum of Relativistic Electrons.
The synchrotron spectrum from relativistic electrons that are continuously accelerated into a power law energy distribution is always given by four power law segments, separated by three critical frequencies: $`\nu _{sa}`$ the self absorption frequency, $`\nu _c`$ the cooling frequency and $`\nu _m`$ the characteristic synchrotron frequency. The electrons could be cooling rapidly or slowly and this would change the nature of the spectrum (Sari, Piran & Narayan 1998). As we show later, only fast cooling is relevant during the early stages of the forward shock (except perhaps the first second). We consider, in the following, only this fast cooling regime.
The spectrum of fast cooling electrons is described by four power laws: (i) For $`\nu <\nu _{sa}`$ self absorption is important and $`F_\nu \propto \nu ^2`$. (ii) For $`\nu _{sa}<\nu <\nu _c`$ we have the synchrotron low energy tail $`F_\nu \propto \nu ^{1/3}`$. (iii) For $`\nu _c<\nu <\nu _m`$ we have the electron cooling slope $`F_\nu \propto \nu ^{-1/2}`$. (iv) For $`\nu >\nu _m`$ the spectrum depends on the electron distribution, $`F_\nu \propto \nu ^{-p/2}`$, where $`p`$ is the index of the electron power law distribution. This spectrum is plotted in figure 1. This figure is a generalization of figure 1a of Sari, Piran and Narayan (1998) for an arbitrary hydrodynamic evolution $`\gamma (R)`$.
Using the shock jump condition and assuming the electrons and magnetic field acquire fractions $`ϵ_e`$ and $`ϵ_B`$ of equipartition, we can describe all hydrodynamic and magnetic conditions behind the shock as a function of the observed time $`t=t_s`$ sec, the Lorentz factor $`\gamma `$ and the surrounding density $`n=n_1`$ cm<sup>-3</sup>.
$$B=4\gamma \sqrt{2\pi ϵ_Bnm_pc^2},$$
(1)
$$\gamma _e=610ϵ_e\gamma .$$
(2)
The typical synchrotron frequency of such an electron is
$$\nu _m=1.1\times 10^{19}\mathrm{Hz}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}(\frac{\gamma }{300})^4n_1^{1/2}.$$
(3)
Within the dynamical time of the system, the electrons are cooling down to a Lorentz factor $`\gamma _c`$ where the total energy emitted at a time $`t`$ is comparable to the electron’s energy: $`\sigma _Tc\gamma ^2\gamma _c^2B^2t/6\pi =\gamma _cm_ec^2\gamma `$. The cooling frequency is the synchrotron frequency $`\nu _c`$ of such an electron:
$$\nu _c=1.1\times 10^{17}\mathrm{Hz}\left(\frac{ϵ_B}{0.1}\right)^{-3/2}\left(\frac{\gamma }{300}\right)^{-4}n_1^{-3/2}t_s^{-2},$$
(4)
where throughout this paper we use $`R\approx 2\gamma ^2ct`$, leaving aside corrections of order unity<sup>1</sup><sup>1</sup>1The numerical coefficient chosen is a compromise between that suitable for the burst itself and that suitable for the deceleration phase.. One can see that for typical parameters, the cooling frequency is lower than the typical synchrotron frequency, except for a very short initial time (0.1 second for $`ϵ_e=ϵ_B=0.1`$, $`\gamma _0=300`$).
The flux at $`\nu _c`$ is given by the number of radiating electrons, $`4\pi nR^3/3`$ times the power of a single electron:
$$F_{\nu ,\mathrm{max}}=220\mu \mathrm{Jy}\,D_{28}^{-2}\left(\frac{ϵ_B}{0.1}\right)^{1/2}\left(\frac{\gamma }{300}\right)^8n_1^{3/2}t_s^3.$$
(5)
Finally the self absorption frequency is given by the condition that the optical depth is of order unity i.e.
$$\nu _{sa}=220\mathrm{G}\mathrm{H}\mathrm{z}\left(\frac{ϵ_B}{0.1}\right)^{6/5}\left(\frac{\gamma }{300}\right)^{28/5}n_1^{9/5}t_s^{8/5}.$$
(6)
These scalings are all indicated in figure 1. Note that some of these numbers involve high powers of $`\gamma `$ and $`t`$. Therefore, the numerical coefficient given can be considerably different from the actual value with only a slight change of these parameters. Note that when there are high powers of $`t`$, the numerical factor in the approximation $`t=R/2\gamma ^2c`$ will also affect the numerical result.
The above equations show that the frequency range for the forward shock, though it depends strongly on the system parameters, is most likely to be around the hard X-ray to $`\gamma `$-ray regime. This is more or less like the observed GRB. The fraction of the electrons' energy that is emitted in optical bands is very small. We shall show later that the reverse shock emission is at a considerably lower frequency, typically around the optical band. However, we ignore the reverse shock emission at this stage, and turn to calculate the light curves produced by the forward shock.
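A small helper (an illustrative sketch in Python, using the fiducial normalizations of Eqs. (3)-(6)) makes the sensitivity to $`\gamma `$ and $`t`$ explicit:

```python
def forward_shock_spectrum(gamma, t_s, n1=1.0, eps_e=0.1, eps_B=0.1, D28=1.0):
    """Break frequencies (Hz) and peak flux (microJy) of Eqs. (3)-(6)."""
    g = gamma / 300.0
    nu_m  = 1.1e19 * (eps_e/0.1)**2 * (eps_B/0.1)**0.5 * g**4 * n1**0.5
    nu_c  = 1.1e17 * (eps_B/0.1)**-1.5 * g**-4 * n1**-1.5 * t_s**-2
    f_max = 220.0  * D28**-2 * (eps_B/0.1)**0.5 * g**8 * n1**1.5 * t_s**3
    nu_sa = 220e9  * (eps_B/0.1)**1.2 * g**5.6 * n1**1.8 * t_s**1.6
    return nu_m, nu_c, f_max, nu_sa

# Fiducial early-afterglow conditions: gamma = 300 ten seconds after trigger
print(forward_shock_spectrum(300.0, 10.0))
# A 10% change in gamma moves nu_m by ~50% and f_max by a factor of ~2:
print(forward_shock_spectrum(330.0, 10.0))
```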
## 3 Light Curves
While the spectrum is always described by the four broken power laws of figure 1, the light curves depend on how the hydrodynamic conditions vary with time. The temporal scalings within each of the spectral segments appearing in figure 1 are given by the up, circle based arrows, when substituting the scaling of $`\gamma `$ as a function of $`t`$. These scalings depend on the exact form of the hydrodynamic evolution. Two shocks are formed as the shell propagates into the surrounding material: a forward shock accelerating and heating the surrounding material and a reverse shock decelerating the shell (Rees & Mészáros 1992, Katz 1994, Sari & Piran 1995).
Consider a relativistic shell with an initial width in the observer frame, $`\mathrm{\Delta }`$, and an initial Lorentz factor, $`\gamma _0`$. Sari and Piran (1995) have shown that there are four critical hydrodynamic radii: $`R_s\approx \mathrm{\Delta }\gamma _0^2`$ where the shell begins to spread; $`R_\mathrm{\Delta }\approx (\mathrm{\Delta }E/nm_pc^2)^{1/4}`$ where the reverse shock crosses the entire shell; $`R_\gamma \approx (E/nm_pc^2)^{1/3}\gamma _0^{-2/3}`$ where a surrounding shocked mass smaller by a factor of $`\gamma _0`$ than the shell’s rest mass ($`E/\gamma _0c^2`$) has been collected; $`R_N\approx (E/nm_pc^2\mathrm{\Delta })^{1/2}\gamma _0^{-2}`$ where the reverse shock becomes relativistic.
We divide the different configurations according to the relative “thickness” of the relativistic shell. The question whether a shell should be considered as thin or thick depends not only on its thickness, $`\mathrm{\Delta }`$, but also on its Lorentz factor $`\gamma _0`$. We will consider a shell thin if $`\mathrm{\Delta }<(E/nm_pc^2)^{1/3}\gamma _0^{-8/3}`$. Shells satisfying $`\mathrm{\Delta }>(E/nm_pc^2)^{1/3}\gamma _0^{-8/3}`$ are considered thick.
For thin shells the corresponding transition radii are ordered as $`R_s<R_\mathrm{\Delta }<R_\gamma <R_N`$. As the shell expands it begins to spread at $`R_s`$. For $`R>R_s`$ the width increases and this causes $`R_\mathrm{\Delta }`$ to increase and $`R_N`$ to decrease in such a way that $`R_\mathrm{\Delta }=R_\gamma =R_N`$. So if spreading occurs, by the time the reverse shock crosses the shell it is mildly relativistic. The corresponding observed time scale of the early afterglow is therefore $`t_\gamma =R_\gamma /2\gamma _0^2c`$. This is longer than the burst’s duration, $`\mathrm{\Delta }/c`$, so a separation between the burst and the afterglow is expected (Sari 1997).
For thick shells, the order is the opposite: $`R_N<R_\gamma <R_\mathrm{\Delta }<R_s`$. The reverse shock becomes relativistic early on, reducing the Lorentz factor of the shell as $`\gamma \propto t^{-1/4}`$ (Sari 1997). The radius $`R_\gamma `$ becomes unimportant and most of the energy is extracted only at $`R_\mathrm{\Delta }`$, with an observed duration of order $`\mathrm{\Delta }/c`$. The signals from the internal shocks (the GRB) and from the early external shocks (the afterglow) from a thick shell overlap. For thick shells, it might be difficult, therefore, to detect the smooth external shock component.
The final self-similar deceleration phase does not depend on the thickness of the shell. After most of the energy of the shell has been given to the surroundings (at $`R_\gamma `$ for thin shells and at $`R_\mathrm{\Delta }`$ for thick shells) the deceleration goes on as $`\gamma \propto t^{-3/8}`$, in a self-similar manner.
To summarize, the hydrodynamic evolution can have two or three stages. In the first stage, the ambient mass is too small to affect the system (the reverse shock is weak) and the Lorentz factor is constant. In the last stage the deceleration is self similar with $`\gamma \propto t^{-3/8}`$; this stage lasts for months. An intermediate stage with $`\gamma \propto t^{-1/4}`$ may occur for thick shells only. Most of the energy is transferred to the surrounding material at $`t_\gamma `$ for thin shells or $`\mathrm{\Delta }/c`$ for thick shells.
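Schematically, this two- or three-stage evolution can be summarized as follows (an illustrative sketch only; the sharp transitions replace what is in reality a smooth evolution, and the example times are assumed values):

```python
def lorentz_factor(t, gamma0, t_dec, t_N=None):
    """Schematic gamma(t): constant coasting, an optional t^(-1/4) segment
    between t_N and t_dec (thick shells only), then the self-similar t^(-3/8)
    decline.  t_dec is t_gamma for thin shells or Delta/c for thick shells;
    all times are in the observer frame."""
    g = gamma0
    if t_N is not None and t > t_N:          # thick shell: relativistic reverse shock
        g = gamma0 * (min(t, t_dec) / t_N) ** -0.25
    if t > t_dec:                            # self-similar deceleration
        g *= (t / t_dec) ** -0.375
    return g

# Thin shell example: gamma stays at gamma0 up to t_gamma (taken as 5 s here), then decays
print([round(lorentz_factor(t, 300.0, t_dec=5.0)) for t in (1, 5, 50, 500)])
```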
If internal shocks give rise to the GRB then the observed duration of the burst equals the initial width of the shell divided by $`c`$. Short bursts correspond to thin shells and long bursts to thick ones. The thickness of the shell in the internal shock scenario is directly observed. Bursts of $`20`$ sec or longer are likely to belong to the thick shell category, while bursts of duration smaller than $`0.1`$ sec are likely to belong to the thin shell category unless $`\gamma _0`$ is very large ($`1500`$ or larger). If internal shocks are to produce the bursts, they must occur before the reverse shock has crossed the shell. Since the typical collision radius for internal shocks is $`2\delta \gamma _0^2`$, where $`\delta `$ is the separation between the shells, one needs $`2\delta \gamma _0^2<R_\mathrm{\Delta }`$. This is satisfied automatically for thin shells, for which $`2\delta \gamma _0^2<2\mathrm{\Delta }\gamma _0^2=R_s<R_\mathrm{\Delta }`$. However, an additional constraint, $`2\delta \gamma _0^2<(\mathrm{\Delta }E/nm_pc^2)^{1/4}`$, arises for thick shells, and sets an upper limit on the initial Lorentz factor $`\gamma _0`$.
Equations 3-6 show that the self absorbed flux always rises as $`t^1`$. This behavior is independent of the hydrodynamic evolution and it is a general characteristic of fast cooling emission from the forward shock. Therefore, in principle, it can be used as a test of whether the electrons are cooling rapidly or not. However, the self absorbed emission, which is relevant at radio frequencies, is very weak in the early afterglow. Detection of radio emission within a few seconds of the burst is unlikely in the near future. Therefore, we will not discuss further the self absorbed frequencies in this paper.
There are many possible light curves. This follows from the appearance of numerous transitions between different hydrodynamic evolutions and between the four different spectral segments. Similar to Sari, Narayan and Piran (1998), we define the (frequency dependent) times $`t_c`$ and $`t_m`$ as the times where the cooling and typical frequencies, respectively, cross the observed frequency. Different time orderings of these transitions would lead to different light curves. For thick shells, there are two spectral related times $`t_c<t_m`$, as well as two hydrodynamic transitions occurring at $`R_N<R_\mathrm{\Delta }`$ which correspond to times $`t_N<t_\mathrm{\Delta }`$. There are therefore six possible orderings and six corresponding light curves. However, as we have mentioned earlier the initial afterglow signal from a thick shell overlaps the GRB. It will be hard to detect the initial smooth signal of the afterglow and to separate it from the complex internal shocks signal. In addition, as we show later, thin shells can provide more information on the initial Lorentz factor. We will therefore consider in the rest of the paper only the light curves produced by thin shells.
Thin shells are easier to analyze. Here we have two spectral related times $`t_c<t_m`$, but only one additional hydrodynamic time $`t_\gamma `$. At $`t_\gamma `$ the flow changes from a constant Lorentz factor into the self-similar decelerating phase. The thin shell deceleration time, $`t_\gamma `$, is given by
$$t_\gamma =R_\gamma /2\gamma _0^2c=\left(\frac{3E}{32\pi \gamma _0^8nm_pc^5}\right)^{1/3}$$
(7)
There are only 3 possible light curves. Moreover, it will be easier to distinguish between the GRB and the early afterglow emission from thin shells, as there is a delay between the two.
The light curve is determined according to the time ordering of the different time scales which varies from one observed frequency to the other. We consider, first, high frequencies that are above the initial typical synchrotron frequency (the typical synchrotron frequency with the initial Lorentz factor $`\gamma _0`$):
$$\nu >1.1\times 10^{19}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}(\frac{\gamma _0}{300})^4n_1^{1/2}.$$
(8)
The typical synchrotron frequency, $`\nu _m`$, depends only on $`\gamma `$. Since $`\gamma `$ always decreases with time, $`\nu _m`$ will decrease with time as well. Therefore, if the observed frequency is initially above the initial typical synchrotron frequency it will remain so during the whole evolution. Consequently, the time $`t_m`$ is not defined for these high frequencies. The light curve in this high frequency regime will be given by $`t^2\gamma ^{4+2p}`$ throughout the hydrodynamic evolution (see figure 2a). The light curve rises initially, when $`\gamma `$ is a constant, and then it decreases sharply when $`\gamma `$ begins to decline.
The light curve for very low frequencies, which are typically in the optical and possibly the UV, is shown in figure 2c. Low frequencies are defined by the condition $`t_\gamma <t_c<t_m`$ or:
$$\nu <2.7\times 10^{15}\mathrm{Hz}\left(\frac{ϵ_B}{0.1}\right)^{-3/2}\left(\frac{\gamma _0}{300}\right)^{4/3}n_1^{-5/6}.$$
(9)
At these low frequencies the transition to the decelerating phase occurs before the cooling frequency crosses the observed band and before the synchrotron typical frequency crosses the observed band. Although it might be hard to discriminate between the temporal behavior of the almost constant ($`t^{1/6}`$) and $`t^{-1/4}`$ parts of the low frequency light curves, the spectral shape is very different in these two segments. The spectrum behaves like $`\nu ^{1/3}`$ during the $`t^{1/6}`$ phase ($`t_\gamma <t<t_c`$) while it goes like $`\nu ^{-1/2}`$ during the $`t^{-1/4}`$ phase ($`t_c<t<t_m`$). This spectral change at the time $`t_c`$ should be sufficient to distinguish between the two segments.
For intermediate frequencies, the deceleration begins while the observed band is above the cooling frequency but below the typical synchrotron frequency, i.e. $`t_c<t_\gamma <t_m`$. The light curve is shown in figure 2b. The relevant range of frequencies is probably in the UV and/or soft X-rays, intermediate between those given by expressions 8 and 9.
## 4 Determination of the initial Lorentz factor
For a short GRB, the time of the afterglow’s peak is given by equation 7. One can invert that to obtain the initial Lorentz factor from an observed time delay:
$$\gamma _0=\left(\frac{3E}{32\pi nm_pc^5T^3}\right)^{1/8}=240E_{52}^{1/8}n_1^{-1/8}\left(\frac{T}{10\mathrm{s}}\right)^{-3/8}.$$
(10)
This determination of $`\gamma _0`$ depends only on the hydrodynamic transition. Therefore, it is independent of the highly uncertain equipartition parameters $`ϵ_B`$ and $`ϵ_e`$ which appear when estimating the spectrum. It is also rather insensitive to $`E_{52}`$ and $`n_1`$. Moreover, these last two parameters can be determined from late stage observations of the afterglow to within an order of magnitude (Waxman 1997, Wijers and Galama 1998, Granot, Piran and Sari 1998). This equation for $`\gamma _0`$ was used earlier, when it was suggested that GRBs result from an external shock, to estimate the duration of the burst (Rees & Mészáros 1992). However, here we assume that the external shocks produce the afterglow while internal shocks produce the burst.
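In practice, Eq. (10) is a one-line estimator (an illustrative sketch; it reproduces the fiducial value of 240 as well as the value of 150 quoted below for GRB970228):

```python
def gamma_0(T_sec, E52=1.0, n1=1.0):
    """Eq. (10): initial Lorentz factor from the observed burst-afterglow delay T."""
    return 240.0 * E52**0.125 * n1**-0.125 * (T_sec / 10.0)**-0.375

print(gamma_0(10.0))   # ~240 for the fiducial E = 1e52 erg, n = 1 cm^-3
print(gamma_0(35.0))   # ~150, the value quoted below for GRB970228
```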
This method for estimating the initial Lorentz factor depends on the identification of the delayed emission as resulting from the afterglow rather than just another peak which is part of the burst. It is, therefore, necessary to compare the detailed structure of the delayed emission with the one described in the previous section. A clear characteristic of the early afterglow emission, in all frequencies, is an initial very steep rise of the emission ($`t^2`$ or even $`t^{11/3}`$). This happens as the shell collects more and more material and the interaction between the shell and the ISM becomes more and more effective. For the thin shell light curves, discussed in this paper, this initial rise ends at the time $`t_\gamma `$ when the deceleration phase begins. After this rapid rise the light curve becomes almost flat in low and intermediate frequencies, and it decreases rapidly at high frequencies.
It is not clear if such an initial afterglow rise has been observed so far. A good candidate might be GRB970228 (Frontera et al. 1997, Vietri 1997). This burst showed a second peak, mostly in the X-ray range. The statistics for this event may not be good enough to enable a detailed comparison of the light curves around its peak with the theory. However, circumstantial evidence in favor of this explanation exists, as the late time X-ray afterglow, extrapolated back in time to the epoch of this second peak, gives the correct flux. Note that a similar situation also exists in GRB970508 and GRB980329, but there the flux does not seem to rise before the second “peak”, so that the rise of the afterglow was not observed. These bursts are probably examples of thick shells.
The identification of the second peak of GRB970228, which occurred $`35`$ s after the burst, as the afterglow rise yields $`\gamma _0=150`$. The estimated uncertainty is about 50%. This arises from the unknown values of the energy and the external density, and from the approximations used in the derivation of equations 7 and 10.
## 5 Optical Emission and The Reverse Shock
The fluence of a moderately strong burst is $`10^{-5}`$ ergs/cm<sup>2</sup>. About one out of 5 of the BATSE bursts is stronger than that, so such a burst occurs about once a week. Were this huge fluence emitted in the optical band rather than in $`\gamma `$-rays, over a duration of $`10`$ sec, it would correspond to a very bright optical source of flux
$$\frac{1}{4}\times \frac{10^{-5}\mathrm{ergs}/\mathrm{cm}^2}{10\mathrm{s}\times 5\times 10^{14}\mathrm{Hz}}=50\,\mathrm{Jy}\approx \mathrm{5th\ magnitude}.$$
The additional factor of 4 in the denominator was chosen to account for the large amount of emission above the peak frequency, which in an average GRB goes as $`F_\nu \propto \nu ^{-1.25}`$. The reason for taking a duration of $`10`$ sec is twofold: first, it is the typical duration of a GRB; second, it is the integration time of fast optical experiments (LOTIS, TAROT), so that even if the emission takes place on a shorter time scale, the effective time will be the observation’s integration time of $`10`$ sec. However, if the emission is spread over a longer time scale, $`t_A`$, then the apparent magnitude will increase accordingly by $`2.5\mathrm{log}_{10}(t_A/10\mathrm{sec})`$.
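The arithmetic behind this estimate is summarized in the following sketch (the V-band zero point of roughly 3600 Jy is an assumption used only to translate the flux into a magnitude):

```python
import math

fluence = 1e-5                 # erg/cm^2, a moderately strong burst
duration = 10.0                # s (burst duration, or the optical integration time)
nu_opt = 5e14                  # Hz, a fiducial optical frequency
spectral_share = 0.25          # the factor 1/4 accounting for emission above the peak

f_nu = spectral_share * fluence / (duration * nu_opt)   # erg/cm^2/s/Hz
f_jy = f_nu / 1e-23                                      # 1 Jy = 1e-23 erg/cm^2/s/Hz
mag = -2.5 * math.log10(f_jy / 3600.0)                   # assuming V = 0 at ~3600 Jy
print(f_jy, mag)               # ~50 Jy, i.e. roughly 5th magnitude
```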
This is by far stronger than current observational upper limits. In fact, even a small fraction of this will be easily observed. It is therefore worthwhile to explore the expected optical emission at the early GRB evolution.
There are three possible emission regimes which have a comparable amount of energy and could, in principle, emit a powerful optical burst: the GRB itself (whether it is internal or external shocks), the early afterglow produced by the forward shock, and the early emission of the reverse shock. Although, at their peak, each of these sites contains an energy comparable to the total system energy, the optical signal it produces might be dimmer than 5th magnitude for several reasons. The first, as we already mentioned, is if the emission is spread in time over a duration longer than $`10`$ s. This is simple to account for and it will increase the magnitude by $`2.5\mathrm{log}_{10}(t_A/10sec)`$. Second, the cooling time might be longer than the system dynamical time, so that the radiation is not very effective. Third, the typical emission frequency might peak in a different energy band rather than in the optical. The residual optical emission might then be significantly smaller. We discuss in the following these two latter effects, and leave further effects such as inverse Compton scattering and self absorption to the next section.
In contrast to the previous sections, we consider here both the fast cooling and slow cooling synchrotron spectra given by Sari, Piran and Narayan (1998). Ignoring self absorption, there are four (actually five) different cases, which depend on the order of $`\nu _m`$, $`\nu _c`$ and $`\nu _{op}`$ where the third frequency is a fiducial frequency in the optical band. The fraction of the system’s energy that is emitted in the optical band in those four cases is shown in Table 1.
The corresponding increase in the magnitude is shown in Fig. 3, where we have used the “canonical” value of $`p\approx 2.5`$. With this value of $`p`$ there is a lot of energy in the high energy tail of the electron distribution. Consequently the optical emission is rather strong if $`\nu _c<\nu _{op}`$ and $`\nu _m<\nu _{op}`$. Significant suppression occurs only if $`\nu _c\gg \nu _{op}`$ and/or $`\nu _m\gg \nu _{op}`$, with the strongest suppression taking place if both $`\nu _m\gg \nu _{op}`$ and $`\nu _c\gg \nu _{op}`$.
### 5.1 The Prompt Optical Burst from the GRB and the Forward Shock
Before turning to the reverse shock, which is our main concern here we examine briefly the prompt optical emission from the GRB and from the forward shock. In both cases the typical synchrotron emission is sufficiently above the optical band and hence we don’t expect significant optical emission from there.
For the GRB we use the observed values of $`\nu _c`$ and $`\nu _m`$. The typical emission frequencies during a GRB are mostly between 100 keV and 400 keV. We will adopt $`\nu _m=5\times 10^{19}`$Hz as a typical value. If the cooling break were below the BATSE band, then the spectral slope within the BATSE band, independent of the electron distribution, would have been $`-1/2`$. The observed low energy tail is usually in the range of -1/2 to 1/3 (Cohen et al. 1997). This indicates that the cooling frequency is close to the lower energy of the BATSE band. If the cooling break is indeed in the BATSE range, say at $`30`$keV, we can substitute $`\nu _c=7\times 10^{18}`$Hz together with $`\nu _m=5\times 10^{19}`$Hz in table 1, to get that the residual optical emission is of $`21`$st magnitude. The point that corresponds to these parameters is marked in Fig. 3 as GRB. For bursts which show a low energy tail of spectral index $`-1/2`$, the cooling frequency is only known to be below the BATSE band. If, on the unlikely extreme, the cooling frequency is much lower, say below the optical band, then the optical emission is of $`11`$th magnitude.
The initial emission of the forward shock is also characterized by a very high typical synchrotron frequency and a high cooling frequency (see equations 3 and 4). With reasonable parameters (e.g. $`\gamma _A\approx 300`$, $`ϵ_e\approx 0.5`$, $`ϵ_B\approx 0.1`$), this emission is in the MeV range. Consequently the optical emission is fairly weak. Since the same forward shock is also producing the late afterglow, one can scale late time observations to the early epoch to obtain a direct estimate of the early values of $`\nu _c`$ and $`\nu _m`$. Observations of GRB 970508 carried out after 12 days show that $`\nu _{m,12d}\approx 10^{11}`$ Hz and $`\nu _{c,12d}\approx 10^{14}`$ Hz (Galama et al. 1998). With adiabatic evolution $`\nu _m\propto t^{-3/2}`$ and $`\nu _c\propto t^{-1/2}`$ so that within $`10`$ s we expect to have $`\nu _m=3\times 10^{18}`$ Hz and $`\nu _c=5\times 10^{16}`$ Hz. With these values we expect the optical emission from the initial forward shock to be of about $`15`$th magnitude. The point corresponding to these values is marked in Fig. 3 as FS. There is some uncertainty in this extrapolation as the initial evolution might be radiative rather than adiabatic. This is considerable only if the value of $`ϵ_e\approx 1`$ (Sari 1997, Cohen, Piran and Sari 1998). If the evolution is initially radiative, the extrapolation according to the adiabatic scalings overpredicts $`\nu _c`$ while underpredicting $`\nu _m`$ (Sari, Piran and Narayan 1998).
### 5.2 The Reverse Shock Optical Emission
The best candidate to produce a strong optical flash is the reverse shock (Sari & Piran 1999). This shock, which heats up the shell’s matter, operates only once. It crosses the shell and accelerates its electrons. Then these electrons cool radiatively and adiabatically and settle down into a part of the Blandford-McKee solution that determines the late profile of the shell and the ISM. Thus, unlike the forward shock emission that continues later at lower energies, this reverse shock emits a single burst with a duration comparable to $`t_A`$ (the duration of the GRB or a few tens of seconds if the burst is short). After the peak of the reverse shock, i.e. after the reverse shock has crossed the shell, no new electrons are injected. Consequently there will no longer be emission above $`\nu _c`$, and $`\nu _c`$ drops fast with time due to adiabatic cooling of the shocked shell material. Therefore, in contrast to the forward shock where we have calculated the whole light curve, we will focus here on the emission at the peak time $`t_A`$.
This peak time is given by
$$t_A=\mathrm{max}[t_\gamma ,\mathrm{\Delta }/c]$$
(11)
and the Lorentz factor at this time is
$$\gamma _A=\mathrm{min}[\gamma _0,(17E/128\pi \mathrm{\Delta }^3nm_pc^2)^{1/8}]$$
(12)
The typical afterglow time is similar to the duration of the burst (if the burst is long or the initial Lorentz factor is large), or longer than that if the burst was short and the initial Lorentz factor was low. In the latter case the shell’s Lorentz factor at the time $`t_A`$ equals its initial value, $`\gamma =\gamma _0`$, while in the former case some deceleration has already occurred and $`\gamma <\gamma _0`$. After this time, $`t_A`$, a self-similar evolution begins, and the initial width of the shell is no longer important. Therefore, the Lorentz factor at the time $`t_A`$ can be estimated in both cases as
$$\gamma _A=\left(\frac{17E}{128\pi nm_pc^5t_A^3}\right)^{1/8}.$$
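A rough numerical sense of this scale can be obtained by evaluating the expression above for fiducial parameters. The sketch below assumes the form written here (i.e. the coefficient of equation 12 with $`\mathrm{\Delta }=ct_A`$); the parameter values are illustrative, not fits.

```python
import math

# Sketch: Lorentz factor at the afterglow-peak time t_A, using
# gamma_A = [17 E / (128 pi n m_p c^5 t_A^3)]^(1/8).
E = 1e52          # erg (isotropic energy, illustrative)
n = 1.0           # cm^-3 (ambient density, illustrative)
m_p = 1.67e-24    # g
c = 3e10          # cm/s
for t_A in (1.0, 10.0, 100.0):   # seconds
    gamma_A = (17 * E / (128 * math.pi * n * m_p * c**5 * t_A**3)) ** 0.125
    print(f"t_A = {t_A:5.0f} s  ->  gamma_A ~ {gamma_A:.0f}")
# roughly ~560 at 1 s, ~240 at 10 s, ~100 at 100 s
```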
Before discussing the details of this emission we outline a simple energetic argument showing that the initial energy dissipated in the reverse shock is comparable to the initial energy dissipated in the forward shock (Sari and Piran 1995) and to the GRB energy. The forward shock and the reverse shock are separated by a contact discontinuity, across which the pressure is equal. This means that the energy density in both shocked regions is the same. As the forward shock compresses the fluid ahead of it by a factor of $`\gamma ^2`$, its width is of order $`R/\gamma ^2`$. Though the initial width of the shell can be smaller than that, it will naturally spread to this size due to mildly relativistic expansion in its own frame. Since the energy density is the same and the volumes are comparable, the total energy in both shocks is comparable. A more detailed calculation (Sari and Piran 1995) shows that at the time the reverse shock crosses the shell about half of the energy is in the shocked shell material.
The two frequencies that determine the spectrum, $`\nu _c`$ and $`\nu _m`$, are most easily calculated for the reverse shock by comparing them to those of the forward shock. The equality of the energy density across the contact discontinuity suggests that the magnetic fields in both shocked regions are the same (provided, of course, that we assume the same magnetic equipartition parameter in both regions). Both shocked regions move with the same Lorentz factor. Therefore, the cooling frequency, $`\nu _c`$, of the reverse shock is equal to that of the forward shock. However, instead of using the general description of this frequency as a function of both $`\gamma `$ and $`t`$, we can substitute the expression for $`\gamma _A`$ to get:
$$\nu _c=8.8\times 10^{15}\mathrm{Hz}\left(\frac{ϵ_B}{0.1}\right)^{-3/2}E_{52}^{-1/2}n_1^{-1}t_A^{-1/2}$$
(13)
The typical synchrotron frequency is proportional to the square of the electrons’ random Lorentz factor (the temperature squared), to the magnetic field, and to the Lorentz boost. This leads to the $`\gamma ^4`$ dependence in equation 3. The Lorentz boost and the magnetic field are the same for the reverse and forward shocks, while the random Lorentz factor is $`\gamma _0/\gamma _A`$, compared to $`\gamma _A`$ for the forward shock. The “effective” temperature of the reverse shock is therefore much lower than that of the forward shock (by a factor of $`\gamma _A^2/\gamma _0\gg 1`$). The reverse shock frequency at the time $`t_A`$ is therefore given by:
$$\nu _m=1.2\times 10^{14}\left(\frac{ϵ_e}{0.1}\right)^2\left(\frac{ϵ_B}{0.1}\right)^{1/2}(\frac{\gamma _0}{300})^2n_1^{1/2}.$$
(14)
So while the forward shock initially radiates at energies of $`\sim `$MeV, the reverse shock radiates at a few eV, with significant emission within the optical band.
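For concreteness, the two reverse-shock break frequencies of equations (13) and (14) can be evaluated for fiducial parameters; the sketch below uses illustrative values only (and the convention in which $`\nu _c`$ decreases with $`ϵ_B`$, $`E`$ and $`t_A`$, as in the standard afterglow expressions).

```python
# Sketch: reverse-shock break frequencies at t_A from eqs. (13)-(14),
# for fiducial (illustrative) parameters.
eps_B, eps_e = 0.1, 0.1
E_52, n_1 = 1.0, 1.0
gamma_0 = 300.0
t_A = 10.0   # s

nu_c = 8.8e15 * (eps_B / 0.1)**-1.5 * E_52**-0.5 * n_1**-1 * t_A**-0.5
nu_m = 1.2e14 * (eps_e / 0.1)**2 * (eps_B / 0.1)**0.5 * (gamma_0 / 300.0)**2 * n_1**0.5

print(f"nu_c ~ {nu_c:.1e} Hz, nu_m ~ {nu_m:.1e} Hz")
# nu_m lands near the optical/near-infrared band (~1e14 Hz), while the
# forward shock at the same time peaks at much higher energies.
```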
The case most favorable for a strong optical flash is when the typical frequency of the reverse shock falls just in the optical regime and the cooling frequency is at or below the optical frequency. This can be achieved with reasonable parameters, say $`n_1=E_{52}=1`$, $`ϵ_B=0.2`$, $`ϵ_e=0.5`$ and $`\gamma _0=100`$. The other extreme case, which has a considerably lower optical fluence, is when the typical radiation frequency as well as the cooling frequency are above the optical regime. As is apparent from the last two equations, this requires a high initial Lorentz factor, a short GRB, a high electron equipartition parameter and a low magnetic equipartition parameter.
The resulting optical emission, as a function of the most uncertain variable $`\gamma _0`$, and for the “best guess” values of the other parameters as obtained from late afterglow observations, is given in figure 3. As the Lorentz factor increases, the optical emission initially rises. This is mainly because the emission is spread over a shorter time scale ($`t_A`$ is decreasing). However, for quite a moderate initial Lorentz factor ($`\gamma _0\sim 300`$) the emission duration no longer depends on the initial Lorentz factor but is set by the observation’s integration time (which we assumed to be $`10`$sec) or by the duration of the burst (for bursts longer than $`10`$sec). As the Lorentz factor continues to increase, the emission drops due to the increase in $`\nu _m`$. For high enough values of $`\gamma _0`$ the flux decreases considerably.
Two other effects can reduce the flux below these estimates: self absorption might reduce the flux if the system is optically thick at optical frequencies, and inverse Compton scattering may compete with synchrotron emission in cooling the electrons and so reduce the synchrotron flux. We consider these effects now.
### 5.3 Synchrotron Self Absorption
Self absorption would reduce the optical flux from the reverse shock if it is optically thick. A simple way to account for this effect is to estimate the maximal flux emitted as a black body with the reverse shock temperature. This is given by
$$F_{sa}=\pi (R_{\perp }/D)^2S_\nu =\pi \left(\frac{R_{\perp }}{D}\right)^2\frac{2\nu ^2}{c^2}\frac{ϵ_e}{3}m_pc^2\gamma _0,$$
(15)
where the quantity $`R_{\perp }\simeq \gamma _Act_A`$ is the observed size of the fireball. More detailed calculations (Waxman 1997, Panaitescu & Mészáros 1997, Sari 1998, Granot, Piran and Sari 1998a,b) obtain a size bigger by a factor of $`\sim 2`$. However, these are applicable only deep inside the self-similar deceleration phase, while we are interested in its beginning. To be conservative, we use the lower estimate of the size, which results in a weaker emission. We get
$$F_{sa}=4.8\mathrm{Jy}D_{28}^{-2}E_{52}^{1/4}n_1^{-1/4}\frac{ϵ_e}{0.1}\frac{\gamma _0}{300}\left(\frac{t_A}{1s}\right)^{5/4}$$
(16)
Note that so far we have eliminated the dependence on the distance to the burst, $`D`$, by using the observed fluence of the burst. However, self absorption depends on the flux per unit area at the source. The distance therefore appears explicitly and cannot be eliminated: for a given observed flux, the farther the burst, the more important self absorption becomes. It can be seen from equation 16 that self absorption can hardly play any role for long bursts, with say $`t_A>1`$sec. Self absorption can be important only if $`t_A`$ is very short, which is possible only for short bursts and high values of $`\gamma _0`$.
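A quick evaluation of equation (16) illustrates this point; the sketch below assumes a burst at $`D=10^{28}`$ cm and otherwise fiducial parameters (illustrative values, not fits).

```python
# Sketch: blackbody-limited (self-absorbed) flux of eq. (16) versus t_A,
# showing that the limit is restrictive only for very short t_A.
D_28, E_52, n_1 = 1.0, 1.0, 1.0
eps_e, gamma_0 = 0.1, 300.0
for t_A in (0.1, 1.0, 10.0):   # seconds
    F_sa = 4.8 * D_28**-2 * E_52**0.25 * n_1**-0.25 * (eps_e / 0.1) \
           * (gamma_0 / 300.0) * t_A**1.25          # Jy
    print(f"t_A = {t_A:4.1f} s : blackbody limit ~ {F_sa:.2f} Jy")
# For t_A >~ 1 s the limit is several Jy, so (as stated above) self
# absorption can hardly play any role for long bursts.
```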
### 5.4 Inverse Compton Cooling
Synchrotron self-Compton, that is, inverse Compton scattering of the synchrotron radiation by the hot electrons, provides an alternative way to cool the electrons. The typical frequency of a scattered photon is $`\nu _{IC}\sim \nu _m\gamma _e^2`$. This emission will be in the MeV regime and not in the optical band. However, if $`ϵ_e>ϵ_B`$ then the efficiency of inverse Compton as a cooling mechanism, relative to synchrotron emission, is $`\sqrt{ϵ_e/ϵ_B}`$ (Sari, Narayan and Piran 1996). This will reduce the synchrotron flux of any cooling electron by that factor, but will not alter the emitted flux of a non-cooling electron. It will therefore influence the optical emission only if $`\nu _c<\nu _{op}`$, and may reduce the flux in this case by a factor of a few, resulting in an increase of one or two magnitudes. However, if $`\nu _c<\nu _{op}`$ the reverse shock synchrotron flux is very high to begin with.
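The corresponding change in optical magnitude is indeed small; the tiny evaluation below (illustrative only, using nothing beyond the quoted $`\sqrt{ϵ_e/ϵ_B}`$ factor) makes this explicit.

```python
import math

# Sketch: magnitude increase if inverse Compton dominates the cooling
# (synchrotron flux suppressed by ~sqrt(eps_e/eps_B) when nu_c < nu_opt).
eps_e, eps_B = 0.5, 0.1    # illustrative values
suppression = math.sqrt(eps_e / eps_B)
delta_mag = 2.5 * math.log10(suppression)
print(f"flux suppressed by ~{suppression:.1f} -> ~{delta_mag:.1f} mag fainter")
# ~0.9 mag for these values, i.e. of order one magnitude as stated above.
```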
### 5.5 Extinction
In all the discussion above, we have normalized the optical flux according to the observed GRB fluence. However, $`\gamma `$-rays do not suffer any kind of extinction, while the optical emission may. Some afterglows show only a small amount of extinction, some show strong extinction, while others do not show any optical activity at all and are speculated to lie in highly extincting surroundings. Extinction is probably important if the burst is located in a star forming region. GRB970508, for example, shows only weak extinction after its peak at 2 days. However, before this peak the optical light curve does not fit any of the predictions of the simple models. If this is due to extinction that disappears after two days, it might be crucial in the first few seconds, in which we are interested.
## 6 Discussion
We have calculated the observed synchrotron spectra expected from a relativistic shock that accelerates electrons to a power law distribution, for an arbitrary hydrodynamic evolution $`\gamma (t)`$. Light curves can be obtained from these spectra by substituting initially $`\gamma (t)=const`$, then $`\gamma (t)\propto t^{-1/4}`$ and finally $`\gamma (t)\propto t^{-3/8}`$, where the intermediate expression is relevant only for thick shells. For thin shells, we have explicitly constructed the possible light curves of the forward shock, for several frequency regimes. We find that the flux must initially rise steeply, as $`t^2`$ or as $`t^{11/3}`$. This rapid rise ends at the time $`t_\gamma `$ when the system approaches self-similar deceleration. After this time the light curve is either decreasing (high frequencies) or almost flat (low and intermediate frequencies). The break at $`t_\gamma `$ is therefore quite sharp, and an observational determination of this transition time should be simple.
In the internal-external scenario, thin shells correspond to short bursts. We expect, in this case, a gap between the burst and its afterglow. This gap allows a clean observation of the early afterglow light curve; in particular, we should observe a clear rise which is not contaminated by the complex, variable internal-shock burst. The light curves of thick shells (which correspond to longer bursts) are more complex, due to the overlap of the burst and the afterglow. This overlap would make it difficult, or even impossible, to isolate the early afterglow signal.
A detection of the early afterglow rise is possible with future missions. Observations of these predicted light curves would confirm the internal shocks scenario. They would also enable us to measure the initial Lorentz factor. Both these ingredients are essential in order to build a reliable source model. As long as the question of internal versus external shocks is not settled with high certainty, it is not clear whether the source driving the whole phenomenon operates for a millisecond (as required for the fireball needed in the external shock scenario) or for a hundred seconds (as required for internal shocks).
It is important to stress that a detection of a gap in the emission, by itself, is meaningless. The later emission that follows the gap could be just another peak in the complex GRB emission produced by the internal shocks. A comparison between this emission and the theoretical predictions given here is needed in order to unambiguously identify any delayed emission as the beginning of the afterglow rather than as a continuation of the burst. The spectra and light curves described here should be used to discriminate between an additional “delayed” peak, which is just a part of the internal shock burst, and emission coming from an external shock, which should be described by the smooth light curves given here.
A broad band detection of the spectrum (say at 1-1000keV) at the time that the afterglow peaks will enable us to compare the spectral properties of the GRB with those of the afterglow. In the internal-external picture these spectra are not closely related, and the typical synchrotron frequency and cooling frequency of the early afterglow can be either higher or lower than those of the burst. On the other hand, the burst and the afterglow should be similar if the burst itself is also produced by external shocks.
We have calculated the optical emission that is expected in the simplest scenario of creating GRBs. The emission in the optical regime is dominated by the reverse shock. We have shown that a strong optical flash is expected over a duration comparable to that of the GRB, or delayed by a few tens of seconds after it. We have used the terminology of the internal-external scenario, where the GRB is produced by internal shocks while the afterglow is produced by external shocks. However, even if the GRB is also produced by external shocks, our conclusions are still valid, with $`t_A`$ being the duration of the GRB itself. The problem in this case is that the assumption of a uniform surrounding medium may not be valid for models producing the GRB by external shocks.
The calculations regarding the reverse shock emission assumed that the shell is made of baryonic material. If instead it is magnetically dominated, with negligible energy in the rest mass of the baryons, a considerably lower emission is expected from the reverse shock. Our prediction relies heavily on the fact that the reverse and forward shocks carry the same amount of thermal energy. If the shell is initially very thin, and somehow does not spread, so that its thickness is kept significantly below $`R/\gamma ^2`$, the reverse shock will be Newtonian and will contain a small fraction of the system energy. The emission will be reduced accordingly. In the simplest model, where the shell was accelerated hydrodynamically, the back of the shell moves with a Lorentz factor smaller by a factor of a few than its front (this is what defines the shell), so that spreading to a thickness of $`R/\gamma ^2`$ is unavoidable. However, in more complicated forms of acceleration, one might think of shells that have a perfectly uniform Lorentz factor and therefore do not spread.
If the density of the surrounding material is very low, it might take a long time before the shell begins to decelerate, i.e. $`t_A`$ is very large. The reverse shock emission will then be spread over this long time, resulting in much fainter emission. However, this possibility of a long $`t_A`$ seems to be ruled out already by current observations, as the beginning of the X-ray decay was observed with BeppoSAX immediately following some bursts, like GRB 970228, GRB970508 and GRB971214.
Fast optical followup experiments often face a tradeoff between the limiting magnitude they can achieve and how fast they can operate. In this respect, an optical experiment which can detect emission simultaneous with the burst is preferred, since the reverse shock emission might die away soon after it. As there are many bursts with durations of 10 seconds or above, this might be the optimal response time for an optical follow up. Nevertheless, experiments with delays of 30–100 s should still be able to detect the reverse shock emission from sufficiently long bursts.
Finally, there is the possibility of extinction. At least in some bursts, like GRB 970508, extinction does not seem to play a very important role in the late afterglow. However, the early signal of GRB 970508 (before its peak at two days) is not well described by the theory. If this is evidence of some extinction which is important only at early times, it might reduce the optical flash predicted here.
This research was supported by the US-Israel BSF grant 95-328, by a grant from the Israeli Space Agency and by NASA grant NAG5-3516. Re’em Sari thanks the Sherman Fairchild Foundation for support. Tsvi Piran thanks Columbia University and Marc Kamionkowski for hospitality while this research was done.
# Cosmological Obscuration by Galactic Dust: Effects of Dust Evolution
## 1 Introduction
The recent discovery of large numbers of quasars at radio and X-ray frequencies with very red optical–to–near-infrared continua suggests that existing optical surveys may be severely incomplete (eg. Webster et al. 1995 and references therein). Webster et al. (1995) and Masci (1998) have argued that the anomalous colours are due to extinction by dust, although the location of the dust remains a highly controversial issue. Intervening dusty galaxies which happen to lie along the line-of-sight to otherwise normal blue quasars are expected to redden the observed optical continuum or, if the optical depth is high enough, to remove quasars from an optical flux-limited sample (eg. Wright 1990). As suggested by existing observational and theoretical studies of cosmic chemical evolution, however (Pei & Fall 1995 and references therein), one expects a reduction in the amount of dust towards high redshift. Consequently, one also expects the probability of a background object being either reddened or obscured to be reduced.
The effects of foreground dust on observations of objects at cosmological distances have been discussed by Ostriker & Heisler (1984); Heisler & Ostriker (1988); Fall & Pei (1989, 1993); Wright (1986, 1990) and Masci & Webster (1995). Using models of dusty galactic disks, these studies show that the line-of-sight to a high redshift quasar has a high probability of being intercepted by a galactic disk, particularly if the dust distribution is larger than the optical radius of the galaxy. Based on the dust properties of local galaxies, it is estimated that up to 80% of bright quasars to $`z\sim 3`$ may be obscured by dusty intervening systems. The principal point of these calculations was that realistic dust distributions in galaxies, which are ‘soft’ around the edges, will cause many quasars to appear reddened without removing them from a flux-limited sample.
None of the above studies, however, considered the effects of evolution in dust content. Cosmic evolution in dust is indirectly suggested by numerous claims of reduced chemical enrichment at $`z>2`$. Evidence is provided by observations of trace metals and their relative abundances in QSO absorption-line systems to $`z\sim 3`$ (Meyer & Roth 1990; Savaglio, D’Odorico & Möller 1994; Pettini et al. 1994; Wolfe et al. 1994; Pettini et al. 1997; Songaila 1997), which are thought to arise from intervening clouds or the haloes and disks of galaxies. These studies indicate mean metallicities $`\sim 10\%`$ and $`<1\%`$ solar at $`z\sim 2`$ and $`z\sim 3`$ respectively, and dust-to-gas ratios $`<8\%`$ of the galactic value at $`z\sim 2`$. These estimates are consistent with simple global evolution models of star formation and gas consumption rates in the universe (Pei & Fall 1995). If the observed metallicities in QSO absorption systems are common, then their interpretation as galactic disks implies that substantial evolution has taken place since $`z\sim 3`$. If the quantity of dust on cosmic scales also follows such a trend, then one may expect the effects of obscuration to high redshift to be reduced relative to non-evolving predictions.
In this paper, we continue to model the effects of intervening galactic dust on the background universe at optical wavelengths using a more generalised model where the dust content evolves. We explore the effects of our predictions on quasar number counts in the optical and their implication for quasar evolution.
This paper is organised as follows: The next section briefly describes the generalised model and assumptions. Section 3 describes the model parameters and their values assumed in our calculations. Model results are presented and analysed in Section 4. Implications on quasar statistics and evolution are discussed in Section 5. Other implications are discussed in Section 6 and all results are summarised in Section 7. Unless otherwise stated, all calculations assume a Friedmann cosmology with $`q_0=0.5`$, and Hubble parameter $`h_{50}=1`$ where $`H_0=50h_{50}\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$.
## 2 The Evolutionary Dust Model
We calculate the probability distribution in total dust optical depth from model galaxies along any random line-of-sight as a function of redshift by following the method presented in Masci & Webster (1995). This was based on a method introduced by Wright (1986) which did not include any effects of evolution with redshift. Here we generalise this model by considering the possibility of evolution in the dust properties of galaxies. In the discussion below and unless otherwise indicated by a subscript, we define $`\tau `$ to be the total optical depth encountered by emitted photons and measured in an observer’s $`B`$ bandpass (effectively at $`\lambda =4400`$Å).
We assume the following properties for individual absorbing galaxies. Following previous studies (eg. Wright 1986, Heisler & Ostriker 1988), we model galaxies as randomly tilted exponential disks, where the optical depth through a face-on disk decreases exponentially with distance $`r`$ from the center:
$$\tau (r,z)=\tau _0(z)e^{-r/r_0}.$$
(1)
$`r_0`$ is a characteristic radius and $`\tau _0(z)`$, the value of $`\tau `$ through the center of the galaxy $`(r=0)`$. The redshift dependence of $`\tau _0`$ is due to the increase in absorber rest frame frequency with redshift.
Since we wish to model the observed $`B`$-band optical depth to $`z<6`$, we require an extinction law $`\xi (\lambda )\tau _\lambda /\tau _B`$ that extends to wavelengths of $`630`$Å . We use the analytical fit for $`\xi (\lambda )`$ as derived by Pei (1992) for diffuse galactic dust in the range $`500\mathrm{\AA }<\lambda <25\mu `$m. The optical depth in an observer’s frame through an absorber at redshift $`z`$ ($`\tau _0(z)`$ in equation 1) can be written:
$$\tau _0(z)=\tau _B\xi \left(\frac{\lambda _B}{1+z}\right),$$
(2)
where $`\tau _B`$ is the rest frame $`B`$-band optical depth through the center of an individual galactic absorber.
### 2.1 Evolution
Equation (2) must be modified if the dust content in each galaxy is assumed to evolve with cosmic time. The optical depth seen through the center of a single absorber at some redshift, $`\tau _0(z)`$, will depend on the quantity of dust formed from past stellar processes. For simplicity, we assume all galaxies form simultaneously, maintain a constant space density, and increase in dust content at a rate that is uniform throughout. We also assume no evolution in the dust law $`\xi (\lambda )`$ with redshift. Even though a lower mean metallicity at high redshift may suggest a different wavelength dependence for the dust law, there is no evidence from local observations of the diffuse ISM to support this view (eg. Whittet 1992).
We parameterise evolution in dust content by following simulations of the formation of heavy metals in the cold dark matter scenario of galaxy formation by Blain & Longair (1993a, 1993b). These authors assume that galaxies form by the coalescence of gaseous protoclouds through hierarchical clustering as prescribed by Press & Schechter (1974). A fixed fraction of the mass involved in each merger event is converted into stars, leading to the formation of heavy metals and dust. It was assumed that the energy liberated through stellar radiation was absorbed by dust and re-radiated into the far-infrared. They found that such radiation can contribute substantially to the far-infrared background intensity, which they use to constrain a model for the formation of heavy metals as a function of cosmic time. Their models show that the comoving density of heavy metals created by some redshift $`z`$, given that star formation commenced at some epoch $`z_{SF}`$, follows the form
$$\mathrm{\Omega }_m(z)\mathrm{ln}\left(\frac{1+z_{SF}}{1+z}\right),\mathrm{where}z<z_{SF}.$$
(3)
We assume that a fixed fraction of heavy metals condense into dust grains so that the comoving density in dust, $`\mathrm{\Omega }_d(z)`$, follows a similar dependence as equation (3). The density in dust relative to the present closure density in $`n_0`$ exponential disks per unit comoving volume is given by
$$\mathrm{\Omega }_d=\frac{n_0M_d}{\rho _c},$$
(4)
where $`\rho _c=3H_0^2/8\pi G`$ and $`M_d`$ is the dust mass in a single exponential disk. This mass can be estimated using Eq.7-24 from Spitzer (1978) where the total density in dust, $`\rho _d`$, is related to the extinction $`A_V`$ along a path length $`L`$ in kpc by
$$\rho _d=\mathrm{\hspace{0.17em}1.3}\times 10^{-27}\rho _g\left(\frac{ϵ_o+2}{ϵ_o-1}\right)(A_V/L).$$
(5)
$`\rho _g`$ and $`ϵ_o`$ are the density and dielectric constant of a typical dust grain respectively, and the numerical factor has dimensions of $`\mathrm{gm}\mathrm{cm}^{-2}`$ (see Spitzer 1978). Using the exponential profile (equation 1), where $`\tau (r)\propto A_V(r)`$, and integrating along cylinders, the dust mass in a single exponential disk can be found in terms of the model parameters $`\tau _B`$ and $`r_0`$. We find that the comoving density in dust at some redshift scales as
$$\mathrm{\Omega }_d(z)\tau _B(z)n_0r_0^2,$$
(6)
where $`\tau _B(z)`$ is the central $`B`$-band optical depth and $`r_0`$ the dust scale radius of each disk. Thus, the central optical depth, $`\tau _B(z)`$, in any model absorber at some redshift is directly proportional to the mass density in dust or heavy metals as specified by equation (3):
$$\tau _B(z)\mathrm{ln}\left(\frac{1+z_{SF}}{1+z}\right).$$
(7)
The redshift dependence of optical depth observed in the fixed $`B`$-bandpass due to a single absorber now involves two factors: first, the extinction properties of the dust as defined by equation (2) and second, its evolution specified by equation (7). The star formation epoch $`z_{SF}`$ can also be interpreted as the redshift at which dust forms. From here on, we therefore refer to this parameter as $`z_{dust}`$ \- a hypothesised “dust formation epoch”. By convolving equations (2) and (7), and requiring that locally: $`\tau _0(z=0)=\tau _B`$, the observed optical depth through a single absorber at some redshift $`z<z_{dust}`$ now takes the form:
$$\tau _0(z)=\tau _B\xi \left(\frac{\lambda _B}{1+z}\right)\left[1-\frac{\mathrm{ln}(1+z)}{\mathrm{ln}(1+z_{dust})}\right].$$
(8)
Figure 1 illustrates the combined effects of evolution and increase in observed frame $`B`$-band extinction with redshift defined by equation (8). The extinction initially increases with $`z`$ due to a decrease in corresponding rest frame wavelength. Depending on the value for $`z_{dust}`$, it then decreases due to evolution in dust content. This latter effect dominates towards $`z_{dust}`$.
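The competition between these two factors can be illustrated with a minimal numerical sketch of equation (8). The extinction curve here is approximated by the crude $`\xi (\lambda )\propto 1/\lambda `$ form rather than the Pei (1992) fit actually used in the calculations (that approximation is adequate only longward of about 2500 Å, i.e. below $`z\sim 0.8`$ in the observed $`B`$ band), so the numbers are purely illustrative.

```python
import math

# Sketch of eq. (8): observed-frame B-band optical depth through the centre
# of a single absorber, with xi(lambda_B/(1+z)) approximated by (1+z)
# (a 1/lambda dust law); illustrative parameter values only.
def tau_0(z, tau_B=1.0, z_dust=10.0):
    xi = 1.0 + z
    evol = 1.0 - math.log(1.0 + z) / math.log(1.0 + z_dust)
    return tau_B * xi * max(evol, 0.0)

for z in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"z = {z:3.1f} : tau_0 = {tau_0(z):.2f}")
# The (1+z) rise of the extinction competes with the evolution factor,
# which drives tau_0 to zero as z approaches z_dust (cf. Fig. 1).
```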
The characteristic galactic dust radius $`r_0`$ defined in equation (1) is also given a redshift dependence in the sense that galaxies had smaller dust-haloes at earlier epochs. The following evolutionary form is adopted:
$$r_0(z)=r_0(1+z)^\delta ,\delta <0,$$
(9)
where $`\delta `$ gives the rate of evolution and $`r_0`$ is now a ‘local’ scale radius. Evolution in radial dust extent is suggested by dynamical models of star formation in an initially formed protogalaxy (Edmunds 1990 and references therein). These studies show that the star formation rate and hence metallicity in disk galaxies has a radial dependence that decreases outwards at all times. It is thus quite plausible that galaxies have an evolving effective ‘dust radius’ which follows chemical enrichment from stellar processes.
Our parameterisation for evolution in galactic dust (equations 7 and 9) is qualitatively similar to the ‘accretion models’ for chemical evolution of Wang (1991), where the effects of grain destruction by supernovae and grain formation in molecular clouds is taken into account. The above model is also consistent with empirical age-metallicity relationships inferred from spectral observations in the Galaxy (Wheeler, Snedin & Truran 1989), and models of chemical evolution on a cosmic scale implied by absorption-line observations of quasars (Lanzetta et al. 1995; Pei & Fall 1995).
## 3 Model Parameters and Assumptions
### 3.1 Model Parameters
Our model depends on four independent parameters which describe the characteristics and evolutionary properties of intervening galaxies. The parameters defined ‘locally’ are: the comoving number density of galaxies $`n_0`$, the characteristic dust radius $`r_0`$, and dust opacity $`\tau _B`$ at the center of an individual absorber. The evolution in $`\tau _B`$ and $`r_0`$ is defined by equations (7) and (9) respectively. Parameters defining their evolution are $`\delta `$ for $`r_0`$, and the ‘dust formation epoch’ $`z_{dust}`$ for $`\tau _B`$. Both $`n_0`$ and $`r_0`$ have been conveniently combined into the parameter $`\tau _g`$ where
$$\tau _g=n_0\pi r_0^2\frac{c}{H_0},$$
(10)
with $`\frac{c}{H_0}`$ being the Hubble length. This parameter is proportional to the number of galaxies and mean optical depth introduced along the line-of-sight (see Section 4). It also represents a ‘local’ covering factor in dusty galactic disks - the fraction of sky at the observer covered in absorbers.
In all calculations, we assume a fixed value for $`n_0`$. From equation (10), any evolution in the comoving number density $`n_0`$ is included in the evolution parameter $`\delta `$ for $`r_0`$ (equation 9). Thus in general, $`\delta `$ represents an effective evolution parameter for both $`r_0`$ and $`n_0`$. Our model is therefore specified by four parameters: $`\tau _g`$, $`\tau _B`$, $`\delta `$ and $`z_{dust}`$.
### 3.2 Assumed Parameter Values
Our calculations assume a combination of values for the parameters ($`\tau _g`$, $`\tau _B`$) and ($`\delta `$, $`z_{dust}`$) that bracket the range consistent with existing observations. The values ($`\tau _g`$, $`\tau _B`$) are chosen from previous studies of dust distributions and extinction in nearby spirals. From the studies of Giovanelli et al. (1994) and Disney & Phillipps (1995) (see also references therein) we assume the range in central optical depths: $`0.5<\tau _B<4`$, while dust scale radii of $`5<(r_0/\mathrm{kpc})<30`$ are assumed from Zaritsky (1994) and Peletier et al. (1995). For a nominal comoving galactic density of $`n_0=0.002h_{50}^3\mathrm{Mpc}^3`$ (eg. Efstathiou et al. 1988), these scale radii correspond to a range for $`\tau _g`$ (equation 10): $`0.01<\tau _g<0.18`$. These ranges are consistent with those assumed in the intervening galaxy obscuration models of Heisler & Ostriker (1988) and Fall & Pei (1993).
The values for ($`\delta `$, $`z_{dust}`$) were chosen to cover a range of evolution strengths for $`r_0`$ and $`\tau _B`$ respectively. To cover a plausible range of dust formation epochs, we consider $`6\le z_{dust}\le 20`$, consistent with a range of galaxy ‘formation’ epochs predicted by existing theories of structure formation (eg. Peebles 1989). The upper bound $`z_{dust}=20`$ corresponds to the star formation epoch considered in the galaxy formation models of Blain & Longair (1993b).
We assume values for $`\delta `$ similar to those implied by observations of the space density of metal absorption systems from QSO spectra as a function of redshift (Sargent, Boksenberg & Steidel 1988; Thomas & Webster 1990). These systems are thought to arise in gas associated with galaxies and their haloes and it is quite plausible that such systems also contain dust. Here we assume a direct proportionality between the amount of dust and heavy metal abundance in these systems.
In general, evolution in the number of metal absorption line systems per unit $`z`$, that takes into account effects of cosmological expansion, can be parameterised:
$$\frac{dN}{dz}=\frac{c}{H_0}n_z\pi r_0(z)^2(1+z)(1+2q_0z)^{-1/2}.$$
(11)
Evolution, such as a reduction in absorber numbers with redshift, can be interpreted as either a decrease in the comoving number density $`n_z`$, or effective cross-section $`\pi r_0(z)^2`$. With our assumption of a constant comoving density $`n(z)=n_0`$, and an evolving dust scale radius $`r_0`$ as defined by equation (9), we have $`dN/dz(1+z)^\gamma `$, where $`\gamma =0.5+2\delta `$ for $`q_0=0.5`$. Hence for no evolution, $`\gamma =0.5`$.
Present estimates of the evolution of absorber numbers with redshift are poorly constrained. Thomas & Webster (1990) have combined several datasets covering an increased range of absorption redshifts to give stronger constraints on evolution models. For CIV absorption ($`\lambda \lambda `$1548, 1551Å), which can be detected to redshifts $`z>3`$ in high resolution optical spectra, evolution has been confirmed for the highest equivalent width systems with $`W_0>0.6`$Å. It is more likely that these systems, rather than the lower equivalent width (presumably less chemically enriched) systems with $`W_0<0.3`$Å which show a trend consistent with no evolution, are the ones associated with dust. Their value of the evolution parameter $`\gamma `$ for the highest equivalent width systems is $`0.1\pm 0.5`$ at the $`2\sigma `$ level. Converting this $`2\sigma `$ range to our model parameter $`\delta `$ using the discussion above, we assume the range $`-0.5<\delta <-0.05`$.
### 3.3 Comparisons with QSO Absorption-Line Studies
We can compare our assumed ranges in evolutionary parameters, $`6\le z_{dust}\le 20`$ and $`-0.5<\delta <-0.05`$, with recent determinations of the heavy element abundance in damped Ly-$`\alpha `$ absorption systems and the Ly-$`\alpha `$ forest to $`z\sim 3`$. The damped Ly-$`\alpha `$ systems are interpreted as the progenitors of galactic disks (Wolfe et al. 1986), and recent studies by Pettini et al. (1994; 1997) deduce metal abundances and dust-to-gas ratios at $`z\sim 1.8-2.2`$ that are $`\sim 10\%`$ of the local value. The Lyman forest systems, however, are more numerous, and usually correspond to gas columns $`>10^7`$ times lower than those of damped Ly-$`\alpha `$ absorbers. High resolution metal-line observations by Songaila (1997) deduce metallicities $`<1.5\%`$ solar at $`z\sim 2.5-3.8`$.
To relate these metallicity estimates to cosmic evolution in dust content as specified by our model, we must first note that the metallicity at any redshift, $`Z(z)`$, is generally defined as the mass fraction of heavy metals relative to the total gas mass: $`Z(z)=\mathrm{\Omega }_m(z)/\mathrm{\Omega }_g(z)`$. At all redshifts, we assume a constant dust-to-metals ratio, $`\mathrm{\Omega }_d(z)/\mathrm{\Omega }_m(z)`$, where a fixed fraction of heavy elements is assumed to be condensed into dust grains. Therefore the metallicity $`Z(z)`$, relative to the local solar value, $`Z_{\odot }`$, can be written:
$$\frac{Z(z)}{Z_{\odot }}=\frac{\mathrm{\Omega }_d(z)}{\mathrm{\Omega }_d(0)}\frac{\mathrm{\Omega }_g(0)}{\mathrm{\Omega }_g(z)}.$$
(12)
From the formalism in section 2.1, the mass density in dust relative to the local density, $`\mathrm{\Omega }_d(z)/\mathrm{\Omega }_d(0)`$, can be determined and is found to be independent of the galaxy properties $`r_0`$ and $`\tau _B`$, depending only on our evolution parameters, $`\delta `$ and $`z_{dust}`$. This is given by
$$\frac{\mathrm{\Omega }_d(z)}{\mathrm{\Omega }_d(0)}=\left[1-\frac{\mathrm{ln}(1+z)}{\mathrm{ln}(1+z_{dust})}\right](1+z)^{2\delta }.$$
(13)
The gas ratio, $`\mathrm{\Omega }_g(0)/\mathrm{\Omega }_g(z)`$, is adopted from studies of the evolution in gas content of damped Ly-$`\alpha `$ systems. These systems are believed to account for at least $`80\%`$ of the gas content in the form of neutral hydrogen at redshifts $`z>2`$ (Lanzetta et al. 1991). We adopt the empirical fit of Lanzetta et al. (1995), who find that the observed evolution in $`\mathrm{\Omega }_g(z)`$ is well represented by $`\mathrm{\Omega }_g(z)=\mathrm{\Omega }_g(0)\mathrm{exp}(\alpha z)`$, where $`\alpha =0.6\pm 0.15`$ and $`0.83\pm 0.15`$ for $`q_0=0`$ and $`q_0=0.5`$ respectively.
Figure 2 shows the range in relative metallicity implied by our evolutionary dust model (equations 12 and 13) as a function of redshift for two values of $`q_0`$. The solid and dashed lines correspond to $`q_0=0`$ and $`q_0=0.5`$ respectively, and the regions between these lines correspond to the ranges assumed for our model parameters: $`6\le z_{dust}\le 20`$ and $`-0.5<\delta <-0.05`$. For comparison, the mean metallicities $`Z\sim 0.1Z_{\odot }`$ and $`Z\sim 0.01Z_{\odot }`$ observed in damped Ly-$`\alpha `$ systems at $`z\sim 2.2`$ and in the Lyman forest at $`z>2.5`$ respectively are also shown. These agree well with our model predictions, suggesting that our model assumptions provide a measure of dust evolution which is at least compatible with other indirect estimates.
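Equations (12) and (13) can be evaluated directly for the bracketing parameter sets; the short sketch below (illustrative only; $`\alpha =0.83`$ is the Lanzetta et al. 1995 value for $`q_0=0.5`$) reproduces the kind of comparison shown in Fig. 2.

```python
import math

# Sketch of eqs. (12)-(13): implied metallicity relative to solar for the
# two bracketing evolution cases assumed in the text.
def Z_rel(z, delta, z_dust, alpha=0.83):
    omega_d = (1.0 - math.log(1 + z) / math.log(1 + z_dust)) * (1 + z)**(2 * delta)
    return omega_d * math.exp(-alpha * z)    # Omega_g(0)/Omega_g(z) = exp(-alpha z)

for z in (2.2, 3.0):
    weak   = Z_rel(z, delta=-0.05, z_dust=20)   # weakest-evolution case
    strong = Z_rel(z, delta=-0.5,  z_dust=6)    # strongest-evolution case
    print(f"z = {z}: Z/Z_sun between ~{strong:.3f} and ~{weak:.3f}")
# At z ~ 2.2 this spans roughly 2%-15% solar, bracketing the ~10% solar
# measured in damped Ly-alpha systems (cf. Fig. 2).
```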
## 4 Results and Analysis
Using the formalism of Masci & Webster (1995) and replacing the parameters $`\tau _B`$ and $`r_0`$ by their assumed redshift dependence as defined in section 2.1, Fig. 3 shows probability density functions $`p(\tau |z)`$ for the total optical depth up to redshifts $`z=`$1, 3 and 5. Results are shown for two sets of galaxy parameters ($`\tau _g`$, $`\tau _B`$), with four sets of evolutionary parameters ($`\delta `$, $`z_{dust}`$) for each.
The area under any normalised curve in Fig. 3 gives the fraction of lines-of-sight to that redshift which have optical depths within some interval $`0`$ to $`\tau _{max}`$. Towards high redshifts, we find that obscuration depends most sensitively on the parameter $`\tau _g`$, in other words, on the covering factor of absorbers (equation 10). Figure 3 shows that as the amount of dust at high redshift decreases, ie., as $`\delta `$ and $`z_{dust}`$ decrease, the curves show little horizontal shift towards larger optical depths from $`z=1`$ to $`z=5`$. A significant shift becomes noticeable however for the weaker evolution cases, and is largest for ‘no evolution’ (solid lines). This behaviour is further investigated below.
In order to give a clearer comparison between the amount of obscuration and strength of evolution implied by our model parameters ($`\tau _g,\tau _B,\delta ,z_{dust}`$), we have calculated the mean and variance in total optical depth as a function of redshift. Formal derivations of these quantities are given in the appendix. Here we briefly discuss their general dependence on the model parameters.
A quantity first worth considering is the number of galaxies intercepted along the line-of-sight. In a $`q_0=0.5`$ ($`\mathrm{\Lambda }=0`$) universe, the average number of intersections within a scale length $`r_0`$ of a galaxy’s center by a light ray to some redshift is given by
$$\overline{N}(z)=\left(\frac{2}{3+4\delta }\right)\tau _g\left[(1+z)^{1.5+2\delta }-1\right].$$
(14)
where $`\delta `$ and $`\tau _g`$ are defined in equations (9) and (10) respectively.
In the case of no evolution, ie. $`\delta =0`$ and $`z_{dust}=\infty `$, and for a dust law that scales inversely with wavelength (ie. $`\xi _\lambda \propto 1/\lambda `$, which is a good approximation at $`\lambda >2500`$Å), exact expressions follow for the mean and variance in total optical depth along the line-of-sight. The mean optical depth can be written:
$$\overline{\tau }(z)=0.8\tau _g\tau _B\left[(1+z)^{2.5}-1\right],$$
(15)
and the variance:
$$\sigma _\tau ^2(z)=0.57\tau _g\tau _B^2\left[(1+z)^{3.5}-1\right].$$
(16)
The variance (equation 16), or ‘scatter’ about the mean to some redshift, provides a more convenient measure of reddening. The mean optical depth has a simple linear dependence on the parameters $`\tau _g`$ and $`\tau _B`$ and thus gives no indication of the degree to which each of these parameters contributes to the scatter. As seen from the probability distributions in Fig. 3, there is a relatively large scatter about the mean optical depth to any redshift. From equation (16), it is seen that the strongest dependence of the variance is on the central absorber optical depth $`\tau _B`$. Thus, larger values of $`\tau _B`$ (which imply ‘harder-edged’ disks) are expected to introduce considerable scatter amongst random individual lines of sight, even to relatively low redshift.
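The closed forms above can be checked against the underlying line-of-sight integrals (given explicitly in the appendix) in the no-evolution limit; the short sketch below does this numerically for illustrative parameter values.

```python
from scipy.integrate import quad

# Sketch: check eqs. (15)-(16) against the line-of-sight integrals in the
# no-evolution limit (delta = 0, z_dust -> infinity), with xi ~ 1/lambda so
# that xi(lambda_B/(1+z')) = 1+z', and q0 = 0.5.
tau_g, tau_B, z = 0.1, 1.0, 3.0

mean_num, _ = quad(lambda zp: 2 * tau_g * tau_B * (1 + zp)**1.5, 0, z)
var_num,  _ = quad(lambda zp: 2 * tau_g * tau_B**2 * (1 + zp)**2.5, 0, z)

mean_closed = 0.8  * tau_g * tau_B    * ((1 + z)**2.5 - 1)
var_closed  = 0.57 * tau_g * tau_B**2 * ((1 + z)**3.5 - 1)

print(mean_num, mean_closed)   # agree to the quoted precision
print(var_num,  var_closed)
```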
In Fig. 4, we show how the mean optical depth varies as a function of redshift for a range of evolutionary parameters. ‘Strong evolution’ is characterised by $`\delta =-0.5`$, $`z_{dust}=6`$ (dot-dashed curves), as compared to the ‘no’, ‘weak’ and ‘moderate’ evolution cases indicated. The mean optical depth flattens out considerably towards high redshift in the strong evolution case, and gradually steepens as $`\delta `$ and $`z_{dust}`$ are increased. Note that no such flattening is expected in mean reddening for the no evolution case (Fig. 4c). The mean optical depth to redshifts $`z>1`$ in evolution models can be reduced by factors of at least three, even for low to moderately low evolution strengths.
Figure 4d shows the scaling of mean optical depth with respect to the evolutionary parameters. It is seen that reddening depends most sensitively on the parameter $`\delta `$, which controls the rate of evolution in the galactic dust scale radius $`r_0`$. A similar trend is followed in Fig. 5, which shows the dependence of variance in optical depth on evolution as a function of redshift, for fixed ($`\tau _g`$, $`\tau _B`$). Considerable scatter is expected if the dust radius of a typical galaxy evolves slowly with cosmic time as shown for the ‘weakest’ evolution case $`\delta =-0.05`$ in Fig. 5.
Our main conclusion is that the inclusion of evolution in dust content, by amounts consistent with other indirect studies can dramatically reduce the redshift dependence of total reddening along the line-of-sight to $`z>1`$, contrary to non-evolving models.
## 5 Implications on QSO Number Counts
There are numerous observations suggesting that the space density of bright quasars declines beyond $`z\sim 3`$ (Sandage 1972; Schmidt, Schneider & Gunn 1988). This has been strongly confirmed from various luminosity function (LF) estimates to $`z\sim 4.5`$ (Hartwick & Schade 1990; Pei 1995 and references therein), where the space density is seen to decline by at least an order of magnitude from $`z=3`$ to $`z=4`$. Heisler & Ostriker (1988) speculate that the decline may be due to obscuration by intervening dust, which reduces the number of quasars observed by ever-increasing amounts towards high $`z`$. The results of Fall & Pei (1993) however show that the observed turnover at $`z\sim 2.5`$ and decline thereafter may still exist once the effects of intervening dust (mainly associated with damped Ly$`\alpha `$ systems) are corrected for. Since no evolution in dust content was assumed in either of these studies, we shall further explore the effects of intervening dust on inferred quasar evolution using our evolutionary galactic dust model.
Since we are mainly interested in “bright” quasars ($`M_B<-26`$) at high redshifts, a single power-law for the observed LF should suffice:
$$\varphi _o(L,z)=\varphi _o(z)L^{-\beta -1},$$
(17)
where $`\beta \approx 2.5`$. This power law model immensely simplifies the relation between observed and “true” LFs (corrected for obscuration by dust). In the presence of dust obscuration, inferred luminosities will be decreased by a factor of $`e^\tau `$. Since there is a probability $`p(\tau |z)`$ of encountering an optical depth $`\tau `$ as specified by our model (see Fig. 3), the observed LF can be written in terms of the true LF, $`\varphi _t`$ as follows:
$$\varphi _o(L,z)=\int _0^{\infty }d\tau \varphi _t(e^\tau L,z)e^\tau p(\tau |z)$$
(18)
The extra factor of $`e^\tau `$ in equation (18) accounts for a decrease in luminosity interval $`dL`$ in the presence of dust. Equations (17) and (18) imply that the true LF can be written
$$\varphi _t(L,z)=\varphi _t(z)L^{-\beta -1},$$
(19)
and the ratio of observed to true LF normalisation as
$$\frac{\varphi _o(z)}{\varphi _t(z)}=\int _0^{\infty }d\tau e^{-\beta \tau }p(\tau |z).$$
(20)
The observed comoving density of quasars brighter than some absolute magnitude limit $`M_{lim}`$ as a function of redshift is computed by integrating the LF:
$$N_o(z|M_B<M_{lim})=\int _{L_{lim}=L(M_{lim})}^{\infty }dL\varphi _o(L,z).$$
(21)
Thus, the true comoving number density $`N_t`$ can be easily calculated by replacing $`\varphi _o`$ in equation (21) by $`\varphi _t=(\varphi _t/\varphi _o)\varphi _o`$, leading to the simple result:
$$N_t(z|M_B<M_{lim})=\left(\frac{\varphi _o(z)}{\varphi _t(z)}\right)^{-1}N_o(z|M_B<M_{lim}),$$
(22)
where the normalisation ratio is defined by equation (20).
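Operationally, equations (20)-(22) reduce to an average over sightlines: for a power-law LF the counts along each sightline are suppressed by $`e^{-\beta \tau }`$, so the correction to the counts is the inverse of the mean suppression. The sketch below illustrates this; the exponential distribution used for $`\tau `$ is only a stand-in for the model $`p(\tau |z)`$ of Fig. 3, so the number printed is not a model prediction.

```python
import numpy as np

# Sketch: count correction for a power-law LF given Monte Carlo samples of
# the line-of-sight optical depth tau.  N_true/N_obs = 1 / <exp(-beta*tau)>.
rng = np.random.default_rng(0)
beta = 2.5
tau_samples = rng.exponential(scale=1.0, size=100_000)   # placeholder for p(tau|z)

obscuration = np.exp(-beta * tau_samples).mean()          # phi_o / phi_t
print(f"N_true / N_obs ~ {1.0 / obscuration:.1f}")
```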
Figure 6 shows both the observed and true comoving density of bright quasars (with $`M_B<-26`$) as a function of redshift. The observed trends are empirical fits deduced by Pei (1995). The true comoving density in all cases was determined by assuming relatively ‘weak’ evolution in the dust properties of intervening galaxies. Two sets of galactic dust parameters for each $`q_0`$ defined by $`(\tau _B,r_0)=(1,10\mathrm{k}\mathrm{p}\mathrm{c})`$ (Figs a and c) and $`(\tau _B,r_0)=(3,30\mathrm{k}\mathrm{p}\mathrm{c})`$ (Figs b and d) are assumed. We shall refer to these as our “minimal” and “maximal” dust models respectively, which bracket the range of parameters observed for local galaxies.
Comparing the ‘true’ QSO redshift distribution with that observed, two features are apparent. First, the true number density vs. $`z`$ relation has qualitatively the same behaviour as that observed. No flattening or increase in true quasar numbers with $`z`$ is apparent. Second, there appears to be a shift in the redshift, $`z_{peak}`$, where the quasar density peaks. This shift is greatest for our maximal dust model where $`z_{peak}`$ is increased by a factor of almost 1.5 relative to that observed. This implies that the bulk of quasars may have formed at earlier epochs than previously inferred from direct observation.
Our predictions for QSO evolution, corrected for obscuration by ‘evolving’ intervening dust, differ enormously from those of Heisler & Ostriker (1988). The major difference is that these authors neglected evolution in dust content with $`z`$. As shown in Fig. 4, non-evolving models lead to a rapid increase in dust optical depth with $`z`$, and this explains their claim of a continuous increase in the true QSO space density at $`z>3`$. As shown in Fig. 6, the inclusion of even a low-to-moderately low amount of evolution in dust content dramatically reduces the excess number of quasars at $`z>3`$ relative to that predicted by Heisler & Ostriker (1988).
We find that there is no significant difference in the characteristic timescale, $`t_{QSO}`$ for QSO formation at $`z>z_{peak}`$, where
$$t_{QSO}\equiv \left(\frac{N}{\stackrel{.}{N}}\right)_{z>z_{peak}}\simeq 1.5\mathrm{Gyr},$$
(23)
is found for both the observed and dust-corrected results in Fig. 6. We conclude that the decline in the space density of bright QSOs at redshifts $`z>3.5`$ is most likely real, a consequence of an intrinsically rapid turn-on of the QSO population with time. This is consistent with estimates of evolution inferred from radio-quasar surveys, where no bias from dust obscuration is expected (eg. Dunlop & Peacock 1990).
The increased space density of quasars at redshifts $`z>3`$ predicted by correcting for dust obscuration has implications for theories of structure formation in the Universe. Our minimal dust model (Figs. 6a and c) predicts that the true space density can be greater than that observed by almost two orders of magnitude, while our maximal dust model (Figs. 6b and d) predicts this factor to be greater than 5 orders of magnitude. These predictions can be reconciled with the quasar number densities predicted from hierarchical galaxy formation simulations involving cold dark matter (eg. Katz et al. 1994). It is found that there are $`>10^3`$ times more potential quasar sites at $`z>4`$ (associated with high density peaks) than required by current observations. Such numbers can easily be accommodated by our predictions if a significant quantity of line-of-sight dust is present.
To summarise, we have shown that with the inclusion of even weak to moderately weak amounts of evolution in dust content with $`z`$, the bias due to dust obscuration will not be enough to flatten the true redshift distribution of bright quasars beyond $`z=3`$. A significant excess however (over that observed) in quasar numbers is still predicted.
## 6 Discussion
Our model predictions may critically depend on the dust properties of individual galaxies and their assumed evolution. For instance, is it reasonable to give galaxies an exponential dust distribution? Such a distribution is expected to give a dust covering factor to some redshift considerably larger than if a clumpy distribution were assumed (Wright, 1986). A clumpy dust distribution (for spirals in particular) is expected, as dust is known to primarily form in dense, molecular star-forming clouds (Wang 1991 and references therein).
As noted by Wright (1986), “cloudy disks” with dust in optically-thick clumps can reduce the effective cross-section for dust absorption by at least a factor of five and hence, are less efficient at both obscuring and reddening background sources at high redshift. A dependence of the degree of dust ‘clumpiness’ on redshift, such as dust which is more diffuse at early epochs and becomes more clumpy with cosmic time, is unlikely to affect the results of this paper. This will only reduce the effective cross-section for absorption to low redshifts, leaving the effects to high redshift essentially unchanged. The numbers of reddened and/or obscured sources at high redshift relative to those expected in non-evolving dust models will, however, always be reduced, regardless of the dependence of the absorption cross-section on redshift.
Observations of the optical reddening distribution of quasars as a function of redshift may be used to test our predictions. Large and complete radio-selected samples with a high identification rate extending to high redshifts are, however, required. The reason for this is that, first, radio wavelengths are guaranteed to have no bias against obscuration by dust, and second, the statistics at high redshift need to be reasonably high in order to provide sufficient sampling of an unbiased number of random sight-lines.
The sample of Drinkwater et al. (1997) contains the highest quasar fraction ($`>70\%`$) of any existing radio sample, with a redshift distribution extending to $`z\sim 4`$. A large fraction of sources appear very red in $`B-K`$ colour compared to quasars selected by optical means. The dependence of $`B-K`$ colour on redshift is relatively flat, which may at first appear consistent with the predictions of figure 4, although the fraction of sources identified with $`z>2`$ is only $`\sim 5\%`$. Also, this sample is known to contain large numbers of sources which are reddened by mechanisms other than dust in the line-of-sight (eg. Serjeant & Rawlings 1996). The role of dust in reddening the optical–to–near-IR continua of radio-selected quasars, and whether it is extrinsic or not, still remains a controversial issue. One needs to isolate the intrinsic source properties before attributing any excess reddening to line-of-sight dust. Optical follow-up of sensitive radio surveys that detect large numbers of high redshift sources with known intrinsic spectral properties will be necessary to reliably constrain the rate of evolution in cosmic dust.
## 7 Summary and Conclusions
In this paper, we have modelled the optical depth in galactic dust along the line-of-sight as a function of redshift assuming evolution in dust content. Our model depends on four parameters which specify the dust properties of local galaxies and their evolution: the exponential dust scale radius $`r_0`$, central $`B`$-band optical depth $`\tau _B`$, “evolution strength” $`\delta `$ where $`r_0(z)=r_0(1+z)^\delta `$, and $`z_{dust}`$ \- a hypothesised dust formation epoch. Our evolution model is based on previous studies of the formation of heavy metals in the cold dark matter scenario of galaxy formation.
Our main results are:
1. For evolutionary parameters consistent with existing studies of the evolution of metallicity deduced from QSO absorption-line systems, a significant “flattening” in the mean and variance of observed $`B`$-band optical depth to redshifts $`z>1`$ is expected. The mean optical depth to $`z>1`$ is smaller by at least a factor of 3 compared to non-evolving model predictions. Obscuration by dust is not as severe as shown in previous studies if effects of evolution are accounted for.
2. By allowing for even moderately low amounts of evolution, line-of-sight dust is not expected to significantly affect existing optical studies of QSO evolution. Correcting for dust obscuration, evolving dust models predict the ‘true’ (intrinsic) space density of bright quasars to decrease beyond $`z2.5`$ as observed, contrary to previous non-evolving dust models where a continuous monotonic increase was predicted.
3. For moderate amounts of evolution, our models predict a mean observed $`B`$-band optical depth that scales as a function of redshift as $`\overline{\tau }\propto (1+z)^{0.1}`$. For comparison, non-evolving models predict a dependence $`\overline{\tau }\propto (1+z)^{2.5}`$. We believe future radio surveys of high sensitivity that reveal large numbers of optically reddened sources at high redshift will provide the necessary data to constrain these models.
## 8 Acknowledgments
The authors would like to thank Paul Francis for many illuminating discussions and the referee for providing valuable suggestions on the structure of this paper. FJM acknowledges support from an Australian Postgraduate Award.
## Appendix A Derivation of Mean Optical Depth
Here we derive expressions for the mean and variance in total optical depth as a function of redshift in our evolutionary galactic dust model discussed in section 3.2. The galaxies are modelled as exponential dusty disks, randomly inclined to the line-of-sight.
We first derive the average number of galaxies intercepted by a light ray emitted from some redshift $`z`$ (ie. equation 14). Given a ‘proper’ number density of galaxies at some redshift $`n_g(z)`$, with each galaxy having an effective cross-sectional area $`\mu \sigma `$ as viewed by an observer ($`\mu `$ is a random inclination factor, where $`\mu =\mathrm{cos}\theta `$ and $`\theta `$ is the angle between the sky plane and the plane of a galactic disk), the average number of intersections of a light ray along some path length $`ds`$ will be given by
$$dN=n_g(z)\mu \sigma ds.$$
(24)
In an expanding universe we have $`n_g=n_0(1+z)^3`$, where $`n_0`$ is a local comoving number density and is assumed to be constant. Units of proper length and redshift are related by
$$\frac{ds}{dz}=\left(\frac{c}{H_0}\right)\frac{1}{(1+z)^2(1+2q_0z)^{1/2}}$$
(25)
(Weinberg 1972). The effective cross-section projected towards an observer for a randomly inclined disk is found by averaging over the random inclination factor $`\mu `$, where $`\mu `$ is randomly distributed between 0 and 1, and integrating over the exponential profile assumed for each disk with scale radius $`r_0(z)`$ (see equations 1 and 9). The product $`\mu \sigma `$ in equation (24) is thus replaced by
$$\int _0^1\mu \,d\mu \int _0^{\infty }e^{-r/r_0(z)}\,2\pi r\,dr=\pi r_0^2(1+z)^{2\delta }.$$
(26)
Thus from equation (24), the average number of intersections to some redshift $`z`$ is given by
$`\overline{N}(z)`$ $`=`$ $`{\displaystyle \int _0^z}\mu \sigma n_g(z^{\prime })\left({\displaystyle \frac{ds}{dz^{\prime }}}\right)dz^{\prime }`$ (27)
$`=`$ $`n_0\pi r_0^2\left({\displaystyle \frac{c}{H_0}}\right){\displaystyle \int _0^z}{\displaystyle \frac{(1+z^{\prime })^{1+2\delta }}{(1+2q_0z^{\prime })^{1/2}}}dz^{\prime }.`$
With $`\tau _g`$ defined by $`n_0\pi r_0^2\left(\frac{c}{H_0}\right)`$, this directly leads to equation (14) for $`q_0=0.5`$.
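The equivalence of the integral (27) and the closed form (14) for $`q_0=0.5`$ is easily verified numerically; the sketch below does so for illustrative parameter values.

```python
from scipy.integrate import quad

# Sketch: numerical check that the integral (27) reproduces the closed
# form (14) for q0 = 0.5.
tau_g, delta, z = 0.1, -0.25, 3.0

integrand = lambda zp: (1 + zp)**(1 + 2 * delta) / (1 + zp)**0.5   # q0 = 0.5
N_num = tau_g * quad(integrand, 0, z)[0]
N_closed = (2.0 / (3 + 4 * delta)) * tau_g * ((1 + z)**(1.5 + 2 * delta) - 1)
print(N_num, N_closed)   # identical up to quadrature error
```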
The mean optical depth $`\overline{\tau }`$ is derived by a similar argument. If $`\tau _0(z)`$ is the optical depth observed through a face-on galaxy at some redshift $`z`$ (equation 8), then a galactic disk inclined by some factor $`\mu `$ will have its optical depth increased to $`\tau _0(z)/\mu `$. Multiplying this quantity by equation (24), the extinction suffered by a light ray along a path length $`ds`$ is given by
$$d\tau =n_g(z)\sigma \tau _0(z)ds.$$
(28)
Thus the mean optical depth to some redshift $`z`$ can be calculated from
$$\overline{\tau }(z)=\int _0^z\sigma n_g(z^{\prime })\tau _0(z^{\prime })\left(\frac{ds}{dz^{\prime }}\right)dz^{\prime }.$$
(29)
Given $`n_g(z)`$, $`\left(\frac{ds}{dz}\right)`$ and $`\sigma `$ (from the integral over $`r`$ in equation 26) above, and $`\tau _0(z^{\prime })`$ from equation (8), the mean optical depth follows the general form
$`\overline{\tau }(z)`$ $`=`$ $`2\tau _g\tau _B{\displaystyle \int _0^z}{\displaystyle \frac{(1+z^{\prime })^{1+2\delta }}{(1+2q_0z^{\prime })^{1/2}}}\xi \left({\displaystyle \frac{\lambda _B}{1+z^{\prime }}}\right)`$ (30)
$`\times `$ $`\left[1-{\displaystyle \frac{\mathrm{ln}(1+z^{\prime })}{\mathrm{ln}(1+z_{dust})}}\right]dz^{\prime }.`$
Similarly, the variance in the optical depth distribution is defined as follows:
$$\sigma _\tau ^2(z)=\langle \tau ^2\rangle -\langle \tau \rangle ^2=\int _0^z\sigma \,n_g(z^{\prime })\,\tau _0^2(z^{\prime })\left(\frac{ds}{dz^{\prime }}\right)dz^{\prime }.$$
(31)
In terms of our model dependent parameters, this becomes
$$\sigma _\tau ^2(z)=2\tau _g\tau _B^2\int _0^z\frac{(1+z^{\prime })^{1+2\delta }}{(1+2q_0z^{\prime })^{1/2}}\,\xi ^2\left(\frac{\lambda _B}{1+z^{\prime }}\right)\left[1-\frac{\mathrm{ln}(1+z^{\prime })}{\mathrm{ln}(1+z_{dust})}\right]^2dz^{\prime }.$$
(32)
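Equations (30) and (32) can be evaluated in the same way. The sketch below is only illustrative: the extinction-curve ratio $`\xi (\lambda )`$ is replaced by a crude $`1/\lambda `$ placeholder (the actual $`\xi `$ is defined earlier in the paper), and the values of $`\tau _g`$, $`\tau _B`$, $`\delta `$, $`q_0`$ and $`z_{dust}`$ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

LAMBDA_B = 0.44  # B-band wavelength in microns (assumed for the example)

def xi(lam):
    # Placeholder for the extinction-curve ratio xi(lambda), normalised so that
    # xi(LAMBDA_B) = 1; the real xi is defined earlier in the paper.
    return LAMBDA_B / lam

def tau_moments(z, tau_g=0.01, tau_B=0.5, delta=0.0, q0=0.5, z_dust=5.0):
    """Mean (eq. 30) and variance (eq. 32) of the optical depth out to redshift z."""
    def kernel(zp, power):
        geom = (1.0 + zp)**(1.0 + 2.0 * delta) / np.sqrt(1.0 + 2.0 * q0 * zp)
        dust = xi(LAMBDA_B / (1.0 + zp)) * (1.0 - np.log1p(zp) / np.log1p(z_dust))
        return geom * dust**power
    mean, _ = quad(kernel, 0.0, z, args=(1,))
    var, _ = quad(kernel, 0.0, z, args=(2,))
    return 2.0 * tau_g * tau_B * mean, 2.0 * tau_g * tau_B**2 * var

print(tau_moments(3.0))
```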
|
no-problem/9901/cond-mat9901186.html
|
ar5iv
|
text
|
# From the magnetic-field-driven transitions to the zero-field transition in two-dimensions
For more than a decade it was widely accepted that two-dimensional electrons are insulating at zero temperature and at zero magnetic-field. Experimentally it was demonstrated that, when placed in a strong perpendicular magnetic field, the insulating phase turns into a quantum-Hall state. While this transition was in accordance with existing theoretical models, the density-driven metal-insulator transition at zero magnetic-field, recently observed in high-quality two-dimensional systems, was unforeseen and, despite a considerable amount of effort, its origins are still unknown. In order to improve our understanding of the zero magnetic-field transition, we conducted a study of the insulator to quantum-Hall transition in a low-density, two-dimensional hole system in GaAs that exhibits the zero magnetic-field metal-insulator transition. We found that, in the low field insulating phase, upon increasing the carrier density towards the metal-insulator transition, the critical magnetic-field of the insulator to quantum-Hall transition decreases and converges to the zero magnetic-field metal-insulator transition. This implies a common origin for both the finite magnetic-field and the zero magnetic-field transitions.
In Fig. 1a we plot the resistivity ($`\rho `$) of one of our samples as a function of magnetic-field ($`B`$) at several temperatures, with the hole-density ($`p`$) held fixed . At $`B=0`$ the system is insulating as indicated by a rapidly increasing $`\rho `$ as the temperature ($`T`$) approaches zero. The insulating behavior is maintained for $`B<B_c^L`$. For $`B>B_c^L`$, a quantum-Hall (QH) state ($`\nu =1`$, where $`\nu `$ is the Landau-level filling factor) is observed with $`\rho `$ tending to zero upon lowering of $`T`$. We identify $`B_c^L`$, the point where the temperature coefficient of resistivity (TCR) changes its sign, with the critical point of the insulator-to-QH transition . At still higher $`B`$, beyond the $`\nu =1`$ QH state, the system turns insulating and a second $`T`$-independent transition point is seen at $`B_c^H`$. $`B_c^H`$ therefore marks the critical $`B`$ of the QH-to-insulator transition . Following the path set by earlier studies we focus, for now, on the low-$`B`$ transition and follow, in Figs. 1b-1e, the evolution of the critical point $`B_c^L`$ as we increase $`p`$.
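The location of such a $`T`$-independent crossing point can be extracted from two resistivity isotherms by finding where their difference changes sign. The following sketch is a hypothetical illustration of that procedure; the function name and the synthetic data are not part of the original analysis.

```python
import numpy as np

def crossing_field(B, rho_low_T, rho_high_T):
    """Estimate the T-independent crossing point B_c from two resistivity
    isotherms rho(B): the point where rho(high T) - rho(low T) changes sign,
    i.e. where the temperature coefficient of resistivity reverses."""
    diff = np.asarray(rho_high_T) - np.asarray(rho_low_T)
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]
    if idx.size == 0:
        return None                      # no crossing in the measured field range
    i = idx[0]
    # linear interpolation of the zero of diff between B[i] and B[i+1]
    return B[i] - diff[i] * (B[i + 1] - B[i]) / (diff[i + 1] - diff[i])

# Hypothetical illustration with synthetic isotherms
B = np.linspace(0.0, 2.0, 201)
print(crossing_field(B, 1.0 + 0.5 * B, 0.8 + 0.9 * B))
```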
Data obtained from the same sample at successive increases of the density are shown in Figs. 1b-1e. As in Fig. 1a, the insulator-to-QH transition point is evident in Fig. 1b, but the transition point “moves” to a lower $`B`$. This trend continues in Fig. 1c until finally, in Fig. 1d, the crossing point disappears. Along with this shift in $`B_c^L`$, we notice in Figs. 1a-1c, that the insulating behavior at $`B=0`$ becomes weaker until, in Fig. 1d, $`\rho `$ at $`B=0`$ is $`T`$-independent. In the next graph, Fig. 1e, the system has crossed over into its metallic phase and no transition, or $`T`$-independent point, is seen implying that the density of Fig. 1d ($`p=1.34\times 10^{10}`$ cm<sup>-2</sup>) is the critical density of the metal-insulator transition (MIT) at $`B=0`$. This $`B=0`$ transition is the MIT in two dimensions (2D) first reported by Kravchenko *et al.* for Si samples . Our main result can now be stated: Upon increasing $`p`$, $`B_c^L`$ gradually tends to lower $`B`$’s, eventually converging to the $`B`$=0 MIT which, for this sample, takes place at $`p=1.34\times 10^{10}`$ cm<sup>-2</sup>.
To complement our $`p`$-dependence study of the $`B`$-driven transitions we will next focus on the effect of a perpendicular $`B`$ on the $`p`$-driven transition. Our new starting point is the more conventional experimental demonstration of the $`B=0`$ MIT in 2D. In Fig. 2a we plot $`\rho `$ as a function of $`p`$ at several $`T`$’s and at $`B`$=0. A $`T`$-independent crossing point is seen here as well (at $`p_c=1.34\times 10^{10}`$ cm<sup>-2</sup>), marking the transition from insulating behavior for $`p<p_c`$ to metallic behavior for $`p>p_c`$. We then repeat, in Figs. 2b-2e, the measurement of Fig. 2a at different values of $`B`$. In Figs. 2b and 2c, $`p_c`$ shifts to a lower value, a trend which reverses for $`B\gtrsim 0.35`$ T (Figs. 2d-2e). This trend reversal of $`p_c(B)`$ is accompanied by the development of non-monotonic dependence of $`\rho `$ on $`p`$, which is a precursor to the quantum Hall effect (QHE). We now combine these $`p_c(B)`$ results with the $`B_c(p)`$ of Fig. 1, to plot a comprehensive phase diagram of our system in the $`B`$-$`p`$ plane.
The phase diagram obtained from our data is shown in Fig. 3, where we plot the $`B`$ and $`p`$ coordinates of each one of the transitions. Separate symbols are given to $`B_c^L`$ and $`B_c^H`$, defined in Fig. 1, and to $`p_c`$ from Fig. 2. Several points emerge from inspecting the resulting phase diagram. First, we note that the results obtained from the two data sets (fixed $`p`$ and fixed $`B`$ measurements) are mutually consistent. Second, for fixed $`p`$’s between $`0.88`$ and $`1.33\times 10^{10}`$ cm<sup>-2</sup> the low-$`B`$ insulating phase first turns metallic and then reappears at high $`B`$. This reentrant nature of the insulating phase is clearly reflected in the $`B`$ traces of Figs. 1a-1c. And third, the low $`B`$ region of the phase boundary reiterates the main result of our work and clearly depicts the continuous evolution of the transition from high-$`B`$ to the $`B`$=0 MIT. The relation between the transition at finite $`B`$ and the MIT transition at $`B`$=0, suggests that similar processes govern the transport for both transitions .
Fig. 3 also includes the high-$`B`$ side of the phase diagram ($`B_c^H`$). In fact, the low and high $`B`$ regimes are smoothly connected to form a single phase-boundary line. It is common practice to describe the finite-$`B`$ transitions in the language of quantum phase transitions . If we assume that the phase-boundary line of Fig. 3 comprises a set of quantum critical points, it is possible that universal features should be observed in its vicinity. To test this proposition we examine the value of $`\rho `$ at the transition points, $`\rho _c`$. In Fig. 4a we plot $`\rho _c`$ of our transitions as a function of $`B`$. Overall, $`\rho _c`$ is not constant, its value changing by almost a factor of 4 over our $`B`$ range. However, at very low as well as very high $`B`$, $`\rho _c`$ approaches a value close to $`h/e^2`$, the quantum unit of resistance. Although for our sample, at $`B`$=0, $`\rho _c`$ is close to $`h/e^2`$ we wish to point out that the value of $`\rho _c`$ at $`B`$=0 obtained from other samples varies by an order of magnitude, between 0.4 and 4$`h/e^2`$ .
So far we have shown that $`B_c^L`$, $`B_c^H`$ and $`p_c`$ define a common phase boundary line in the $`B`$-$`p`$ plane, and that in the intermediate $`B`$ range along this phase boundary $`\rho _c`$ significantly deviates from $`h/e^2`$, its value near $`B`$=0 and at high-$`B`$. It is instructive to consider the dependence of $`\rho _c`$ on $`p`$, rather than on $`B`$. In Figs. 1a-1c we can readily see the general trend: the values of $`\rho _c`$ of the low and high-$`B`$ transitions at fixed $`p`$ are very close to each other. To test this result for our entire range of $`p`$ we plot, in Fig. 4b, $`\rho _c`$ versus $`p`$ obtained from our data. We see that the two transitions have collapsed onto a single curve for our entire range of $`p`$. This result demonstrates that for a given carrier-density $`\rho _c`$ of the low and high-$`B`$ transitions is the same. This supports the notion of symmetry between these transitions .
A possible relation between different transitions in 2D systems was noted by Jiang *et al.* who pointed out the similarities between the insulator-to-QH and the insulator-to-superconductor transitions. Theoretical basis for such similarity was introduced in ref. . In our work we found a relation between the finite $`B`$ insulator-to-QH transition and the metal-insulator transition at $`B`$=0. Since both transitions are measured in the same 2D system, we were able to continuously transform one to the other. This raises the possibility that both transitions share a common physical origin.
Methods
The sample used in this study is a $`p`$-type, inverted semiconductor insulator semiconductor (ISIS) sample grown on (311)A GaAs substrate with Si as a $`p`$-type dopant. In an ISIS device the carriers are accumulated in an undoped GaAs layer lying on top of an undoped AlAs barrier, grown over a $`p^+`$ conducting layer. This $`p^+`$ conducting layer is separately contacted and serves as a back-gate. The hole carrier-density ($`p`$) is varied by applying voltage ($`V_g`$) to the back-gate. The sample was wet-etched to the shape of a standard Hall-bar and the measurements were done in a dilution refrigerator with a base $`T`$ of $`57`$ mK, using AC lock-in technique with an excitation current of $`1`$ nA flowing in the \[01$`\overline{1}`$\] direction.
Acknowledgments.
The authors wish to thank Efrat Shinshoni, M. Hilke and Amir Yacoby for very interesting discussions. This work was supported by the NSF, the BSF and by a grant from the Israeli Ministry of Science and The Arts.
|
no-problem/9901/hep-ph9901335.html
|
ar5iv
|
text
|
# High $`P_T`$ Leptons and $`W`$ Production at HERA
## 1 Introduction
The observation by the H1 experiment of a number of events containing high $`P_T`$ leptons in addition to large missing $`P_T`$, apparently in excess of the number expected from Standard Model processes, has aroused much recent interest and is outlined in section 2 of this article. The Standard Model background is expected to be dominated by $`W`$ production, a preliminary cross section for which has been measured by the ZEUS experiment. ZEUS have also searched for high $`P_T`$ tracks in events with missing $`P_T`$ using similar cuts to H1, the results of which are presented in section 3. A direct comparison between the H1 and ZEUS detector acceptances is shown in section 4. Finally, some theoretical speculations about possible sources of such events in the context of $`R_p`$-violating SUSY are presented in section 5.
## 2 H1 High $`P_T`$ Lepton Events
The H1 analysis is based on an inclusive search for events with a transverse momentum imbalance measured in the calorimeter, $`P_T^{\mathrm{calo}}`$, greater than $`25`$ $`\mathrm{GeV}`$. This cut minimizes the contributions from neutral current and photoproduction processes and has a well understood experimental efficiency. In the selected event sample, 124 events contain high energy tracks with transverse momentum above 10 $`\mathrm{GeV}`$ and polar angles with respect to the proton direction above $`10^{\circ }`$. The vast majority of these events are charged current events containing a high $`P_T`$ track close to the centre of an hadronic jet. The track isolation with respect to calorimetric deposits ($`\mathrm{D}_{\mathrm{jet}}`$) and with respect to other tracks ($`\mathrm{D}_{\mathrm{track}}`$) is quantified by the Cartesian distance in the $`\eta \varphi `$ plane (both H1 and ZEUS coordinate systems are right-handed with the $`Z`$-axis pointing in the proton beam direction and the horizontal $`X`$-axis pointing towards the centre of HERA; the pseudorapidity variable $`\eta `$ is related to the polar angle by $`\eta =-\mathrm{ln}(\mathrm{tan}(\theta /2))`$). Six events are found to contain isolated tracks with $`\mathrm{D}_{\mathrm{jet}}>1.0`$ and $`\mathrm{D}_{\mathrm{track}}>0.5`$ .
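For illustration, the isolation cuts quoted above can be written compactly as follows. This is only a schematic re-implementation: the representation of tracks and jets as simple $`(\eta ,\varphi )`$ pairs, and the function names, are assumptions made for the example.

```python
import numpy as np

def eta_from_theta(theta):
    """Pseudorapidity from the polar angle theta (radians)."""
    return -np.log(np.tan(theta / 2.0))

def delta_r(eta1, phi1, eta2, phi2):
    """Cartesian distance in the eta-phi plane, with the azimuth wrapped."""
    dphi = np.abs(phi1 - phi2)
    dphi = np.minimum(dphi, 2.0 * np.pi - dphi)
    return np.hypot(eta1 - eta2, dphi)

def is_isolated(track, jets, other_tracks, d_jet_min=1.0, d_track_min=0.5):
    """Apply the isolation cuts D_jet > 1.0 and D_track > 0.5 quoted in the text.
    Every object is an (eta, phi) pair; jets and other_tracks are lists of pairs."""
    d_jet = min((delta_r(*track, *j) for j in jets), default=np.inf)
    d_trk = min((delta_r(*track, *t) for t in other_tracks), default=np.inf)
    return d_jet > d_jet_min and d_trk > d_track_min

# Hypothetical example: one track, one jet, one other track
print(is_isolated((1.2, 0.3), [(0.1, 2.5)], [(2.0, 0.2)]))
```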
Lepton identification algorithms, based on the signal shape in the calorimeter and muon chamber hits, indicate that the six tracks in fact correspond to high $`P_T`$ leptons : one event contains an electron ($`e^-`$) and five events contain muons (2 $`\mu ^+`$, 2 $`\mu ^-`$ and one very energetic muon corresponding to a stiff track whose sign cannot be determined). The muon events are labelled $`\mu 1`$ to $`\mu 5`$ in the following. One of the muon events ($`\mu 3`$) also contains a positron with a lower transverse momentum $`P_T(e^+)=6.7`$ $`\mathrm{GeV}`$.
The lepton signature in each case has been investigated in detail and found to be consistent with the assigned hypothesis. For the electron candidate the shower pattern recorded in the calorimeter is compatible with the expectation for an electromagnetic shower, while the isolated track measured in the central tracker has a specific ionisation consistent with a single particle. The muon candidates are measured in the central tracking system, calorimeters and external iron yoke instrumented with muon chambers. For all tracks the specific ionisation in the central tracker is consistent with single minimum ionising particles. The energy depositions in the calorimeters sampled over more than 7 interaction lengths and the signals in the muon chambers are compatible in shape and magnitude with those expected from a minimum ionising particle. The probability that an isolated charged hadron would simulate a muon in both the calorimeter and the instrumented iron is estimated to be less than $`3\times 10^{-3}`$.
In all events a hadronic shower has been detected in the calorimeters. In the event $`\mu 5`$ no charged particles are found in the core of the high-$`P_T`$ hadronic jet. In all events an imbalance in the net transverse momentum indicates the presence of at least one undetected particle. This hypothesis is supported by the large value for the lepton-hadron acoplanarity observed in most of the events, defined as the angle in the transverse plane between the hadronic system and the direction opposite to that of the high $`P_T`$ lepton. The significance of the transverse momentum imbalance and acoplanarity is tested with data using neutral current (NC) events, which are expected to be intrinsically coplanar and balanced in $`P_T`$. For comparison to the muon events, the kinematics in the NC sample is reconstructed using the positron track parameters instead of calorimetric information. The six high $`P_T`$ lepton events are compared to the NC control sample in figure 1. The probability for an NC event to have both $`\mathrm{\Delta }\varphi `$ and $`P_T^{\mathrm{miss}}`$ values greater than those measured in a given candidate is estimated from a high statistics simulation to be 1% for $`\mu 1`$ and less than 0.1% for the other candidates.
The Standard Model predictions for processes yielding events with isolated leptons and missing energy have been investigated. The predicted rates are dominated by $`W`$ production via the reaction $`e^+p\to e^+W^\pm X`$, two diagrams for which are shown in figure 2, followed by the leptonic decay of the $`W`$. The cross section of around $`60`$ $`\mathrm{fb}`$ per charge state and leptonic decay channel for this process, calculated using the program EPVEC , gives an expected $`1.7\pm 0.5`$ events in the electron channel and $`0.5\pm 0.1`$ events in the muon channel. A recent next to leading order calculation of the resolved photon contribution to the cross section gives a total cross section for $`e^+p\to e^+W^\pm X`$ of $`0.97`$ $`\mathrm{pb}`$, consistent with the leading order EPVEC estimate . Other significant sources of events with isolated leptons and missing transverse momentum include neutral current DIS in the electron channel and the $`\gamma \gamma \to \mu ^+\mu ^-`$ process in the muon channel. The total predicted rates from all Standard Model processes are $`2.4\pm 0.5`$ events in the $`e^\pm `$ channel (compared with 1 event observed) and $`0.8\pm 0.2`$ events in the muon channel (compared with 5 events observed).
In figure 3 the observed events are compared to $`W`$ production Monte Carlo events in the plane of the transverse momentum of the hadronic system, $`P_T^X`$, versus the transverse mass of the lepton-neutrino system, $`M_T^{\ell \nu }`$. The electron event and two of the muon events ($`\mu 3`$ and $`\mu 5`$) are kinematically consistent with the Jacobian peak located around the $`W`$ mass and the low $`P_T^X`$ expected for $`W`$ production. Three muon events can only marginally be accommodated within this interpretation. None of the observed muon events are consistent with the distribution expected for $`\gamma \gamma \to \mu ^+\mu ^-`$, also shown in figure 3.
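The transverse mass used here follows the standard lepton-neutrino definition; a minimal helper is sketched below, with an arbitrary function name and hypothetical input values.

```python
import numpy as np

def transverse_mass(pt_lepton, pt_miss, delta_phi):
    """Transverse mass of the lepton-neutrino system,
    M_T = sqrt(2 * pT_lepton * pT_miss * (1 - cos(delta_phi)))."""
    return np.sqrt(2.0 * pt_lepton * pt_miss * (1.0 - np.cos(delta_phi)))

# Hypothetical example (GeV, radians): values near the Jacobian peak give M_T ~ M_W
print(transverse_mass(40.0, 40.0, np.pi))
```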
## 3 ZEUS Results on $`W`$ Production and High $`P_T`$ Leptons
The results of a search for $`W`$ production and leptonic decay in $`46.6`$ $`\mathrm{pb}^{-1}`$ of ZEUS $`e^+p`$ data have been presented elsewhere in these proceedings . The measured cross section from the electron channel of $`1.0_{-0.7}^{+1.0}`$ (stat) $`\pm 0.3`$ (syst) $`\mathrm{pb}`$ is in good agreement with the Standard Model prediction. The absence of any signal in the muon channel is consistent with the smaller efficiency for selecting events on the basis of calorimeter missing $`P_T`$, in turn a consequence of the soft hadronic $`P_T`$ spectrum for Standard Model $`W`$ production.
In order to avoid any hidden lepton identification inefficiencies, a separate search has been performed for isolated high $`P_T`$ vertex fitted tracks in events with large missing $`P_T`$, applying cuts similar to those outlined in . All events with a calorimeter $`P_T`$ greater than $`25`$ $`\mathrm{GeV}`$ are selected, with the exception of neutral current candidate events with an acoplanarity angle less than $`0.1`$ $`\mathrm{rad}`$. The isolation variables $`\mathrm{D}_{\mathrm{jet}}`$ and $`\mathrm{D}_{\mathrm{track}}`$ are defined for a given track, as in the H1 analysis, as the $`\eta \varphi `$ separation of that track from the nearest jet and the nearest remaining track in the event, respectively. Jets must have $`E_T>5`$ $`\mathrm{GeV}`$, an electromagnetic fraction less than $`0.9`$ and an angular size greater than $`0.1`$ $`\mathrm{rad}`$. All tracks with $`P_T>10`$ $`\mathrm{GeV}`$ in the selected events are plotted in the $`\{\mathrm{D}_{\mathrm{track}},\mathrm{D}_{\mathrm{jet}}\}`$ plane in figure 4. The $`4`$ tracks selected with $`\mathrm{D}_{\mathrm{jet}}>1.0`$ and $`\mathrm{D}_{\mathrm{track}}>0.5`$ agree well with the expectation of $`4.2\pm 0.6`$ tracks from combined Monte Carlo sources.
All four isolated tracks are in fact identified as positrons using standard electron finding algorithms and criteria described in , consistent with the $`2.4\pm 0.5`$ ($`1.5\pm 0.4`$) electron type (muon type) events expected from Monte Carlo. There is therefore no evidence of an excess rate of high $`P_T`$ tracks, whether identified as leptons or not, in the 1994 to 1997 ZEUS data.
## 4 Comparison of H1 and ZEUS Results
As pointed out in , the ZEUS muon data at large calorimeter missing $`P_T`$ disfavours high hadronic $`P_T`$ $`W`$ production as the source of all the H1 high $`P_T`$ muon events. This is consistent with the kinematic properties of the H1 events themselves. While the low statistics of the H1 and ZEUS observations cannot currently exclude a statistical fluctuation, it is nevertheless interesting to ask whether any source of events with a topology similar to the H1 events would be observed at ZEUS. In particular, the leptons in the H1 events are concentrated at small polar angles, close to where the ZEUS central tracking chamber track reconstruction efficiency is expected to fall off.
Using $`W`$ production Monte Carlo events passed through the H1 and ZEUS detector simulations, the efficiency with which muons from $`W\to \mu \nu `$ decay have a corresponding track reconstructed with $`P_T>10`$ $`\mathrm{GeV}`$ can be calculated. The efficiencies for both $`W^+`$ and $`W^-`$ production are plotted as a function of polar angle in figure 5. Also indicated are the polar angles of the H1 high $`P_T`$ muons, further details of which may be found in .
It can be seen that the H1 events lie in a region where the ZEUS track reconstruction efficiency is equally high, lending weight to the argument that a signal ought to have been seen in the ZEUS analyses presented here. However, the positions of the H1 and ZEUS turn on curves are significantly different and are currently being checked using suitable data samples.
Although more data will clearly be required to fully understand the source of the H1 high $`P_T`$ lepton events, it is nevertheless worthwhile at this point to consider new mechanisms that might give rise to events of this type.
## 5 Theoretical Speculations
To date, non-Standard Model production mechanisms for the isolated muon events have been proposed in and . In both papers the discussion is performed in the framework of the supersymmetric standard model with $`R_p`$-breaking. The primary process is the $`s`$-channel production of a single scalar top quark ($`\stackrel{~}{t}_1`$) in $`e^+d_k`$ collisions
$$e^+d_k\to \stackrel{~}{t}$$
(1)
through the $`R_p`$-breaking interaction Lagrangian
$$L=\lambda _{13k}^{\prime }\mathrm{cos}\theta _t(\stackrel{~}{t}_1\overline{d}_{kR}e_L+\stackrel{~}{t}_1^{*}\overline{e}_Ld_{kR})$$
(2)
where $`\lambda _{13k}^{\prime }`$ denotes the $`R_p`$-violating coupling to the down quark of the $`k`$-th generation. The angle $`\theta _t`$ denotes the mixing angle in the scalar top quark sector; a similar term involving the heavier stop $`\stackrel{~}{t}_2`$ is also present with $`\mathrm{cos}\theta _t`$ replaced by $`\mathrm{sin}\theta _t`$.
The interaction Lagrangian (2) originates from the general $`R_p`$-breaking ($`\not{R}_p`$) superpotential
$$W_{\not{R}_p}=\lambda _{ijk}L_iL_jE_k^c+\lambda _{ijk}^{\prime }L_iQ_jD_k^c+\lambda _{ijk}^{\prime \prime }U_i^cD_j^cD_k^c,$$
(3)
where the left-handed lepton (quark) superfield doublets are denoted by $`L`$ ($`Q`$), the right-handed lepton (quark) singlets by $`E`$ ($`U`$ and $`D`$), and $`i,j,k`$ are generation indices. The first two terms violate lepton number and the last term violates baryon number. The couplings $`\lambda `$, $`\lambda ^{\prime }`$ and $`\lambda ^{\prime \prime }`$ are subject to many constraints from low-energy and high-energy LEP, HERA and Tevatron data .
The production mechanism (1) is based on the resonant formation of $`\stackrel{~}{t}`$ in $`e^+p`$ collisions since the rate associated with virtual $`\stackrel{~}{t}`$ production would be too small. These phenomena could be related to a possible surplus of high $`Q^2`$, high $`x`$ events in neutral current scattering seen in the 1994-1996 HERA data. However, even if the NC events cannot be interpreted as $`\stackrel{~}{t}`$ resonance production (not necessarily one single resonance), or are interpreted as a statistical fluctuation, there is still room left for speculation regarding the source of the isolated $`\mu ^+`$ events in the SUSY sector based on top squarks in the mass range of 200 – 230 $`\mathrm{GeV}`$, so long as the branching ratio $`B_{eq}`$ for the $`R_p`$-violating decay $`\stackrel{~}{t}\to e^+q`$ is small and the $`R_p`$-conserving decay modes are dominant.
In , the $`\stackrel{~}{t}`$ is produced in collisions of positrons with valence $`d`$-quarks in the proton, i.e. $`k=1`$ in equation 2 (down-stop scenario), whereas in the case $`e^+s\to \stackrel{~}{t}`$ ($`k=2`$, strange-stop scenario) is considered in addition. The papers also differ in the assumed squark decay chains, shown in figure 6, that give rise to the characteristic features of the muon events.
For a stop mass in the range 100 – 200 $`\mathrm{GeV}`$ and with $`\lambda _{131}^{\prime }`$ large enough for stop production to be relevant, there is a wide range of parameters where the decay $`\stackrel{~}{t}\to b\stackrel{~}{\chi }_1^+`$ dominates over other decay modes. Then the decay chain shown in figure 6a, $`\stackrel{~}{t}\to b\stackrel{~}{\chi }_1^+`$, $`\stackrel{~}{\chi }_1^+\to \mu ^+\nu \stackrel{~}{\chi }_1^0`$ generates an isolated muon and a $`b`$ quark jet at large transverse momenta. Since the muon originates from the virtual $`W^+`$, similar events with isolated positrons should be observed. Moreover, to account for the topology of the events with large missing $`P_T`$, the neutralino $`\stackrel{~}{\chi }_1^0`$ must be assumed very long-lived ($`\mathrm{\Gamma }_{\stackrel{~}{\chi }_1^0}\lesssim 10^{-7}`$ $`\mathrm{eV}`$), so that it may escape detection despite the presence of $`R_p`$-conserving decay channels. Since parameters are not easily arranged that give rise to such a long lifetime, alternative decay channels have been considered in .
If trilinear lepton couplings $`\lambda LLE^c`$ are also present in supersymmetric theories with sleptons in the mass range of 100 to 200 $`\mathrm{GeV}`$, another possibility for stop and subsequent chargino decays is open, as shown in figure 6b. The chargino may decay into a neutrino and a slepton, followed by the $`R_p`$-violating slepton decay to a positively charged muon and a neutrino. Such a chain can account for the observed final state, i.e. a jet, a single positively charged muon and missing transverse momentum.
In the case of stop production in $`e^+d`$ collisions (down stop scenario), the chargino must be heavy with $`m_{\stackrel{~}{\chi }_1^+}\approx 180`$ – $`190`$ $`\mathrm{GeV}`$ to account for the required balance of $`R_p`$-conserving and violating stop decay modes implied by the low-energy, HERA and Tevatron data. By contrast, for the strange stop scenario $`e^+s\to \stackrel{~}{t}`$ with a larger value of $`\lambda _{132}^{\prime }`$ than $`\lambda _{131}^{\prime }`$, one finds a solution for lighter chargino masses $`m_{\stackrel{~}{\chi }_1^+}\approx 100`$ – 140 $`\mathrm{GeV}`$.
Assuming a given value for $`m_{\stackrel{~}{t}}`$, the mass $`m_{\stackrel{~}{\chi }_1^+}`$ recoiling against the hadronic $`b`$ jet can be estimated from the calculated 4-momentum of the top squark and the measured 4-momentum of the $`b`$ jet: $`m_{\stackrel{~}{\chi }_1^+}^2=\left(p_{\stackrel{~}{t}}-p_b\right)^2`$. The recoil masses must cluster for the observed events; if not, two-body decays of the stop resonance are not the origin of the events, or more than one stop is produced. It is amusing to observe that if both stops with masses $`m_{\stackrel{~}{t}_1}=200`$ $`\mathrm{GeV}`$ and $`m_{\stackrel{~}{t}_2}=230`$ $`\mathrm{GeV}`$ are responsible for the H1 events, the estimated recoiling mass $`m_{\stackrel{~}{\chi }_1^+}`$ falls in the range 130 – 140 $`\mathrm{GeV}`$, compatible with the strange stop scenario.
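A sketch of this recoil-mass estimate in terms of four-momenta is given below; the $`(E,p_x,p_y,p_z)`$ representation, the metric convention and the example numbers are assumptions made only for illustration.

```python
import numpy as np

def minkowski_square(p):
    """Invariant square of a four-vector p = (E, px, py, pz), metric (+,-,-,-)."""
    p = np.asarray(p, dtype=float)
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def recoil_mass(p_stop, p_bjet):
    """Mass recoiling against the b jet: m^2 = (p_stop - p_b)^2."""
    m2 = minkowski_square(np.asarray(p_stop, dtype=float) - np.asarray(p_bjet, dtype=float))
    return np.sqrt(max(m2, 0.0))

# Hypothetical four-momenta in GeV: a 200 GeV resonance at rest minus a light jet
print(recoil_mass((200.0, 0.0, 0.0, 0.0), (50.0, 0.0, 0.0, 50.0)))
```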
The branching ratio for the chargino decay $`\stackrel{~}{\chi }_1^+\to \nu _{\ell }\stackrel{~}{\ell }^+`$ can be expected to be close to 1/6. The subsequent decay $`\stackrel{~}{\ell }^+\to \mu ^+\nu `$ has to compete with other $`R_p`$-violating and also with $`R_p`$-conserving decay modes. The semi-quantitative discussion performed in suggests that a decay chain $`\stackrel{~}{t}\to b\stackrel{~}{\chi }_1^+\to b\nu \stackrel{~}{\ell }^+\to b\nu \nu \mu ^+`$, leading to the observed topology, could be realized in supersymmetric theories with $`R_p`$-breaking couplings. However, a large number of other final states with rather complex topologies should be observed at HERA in $`e^+p`$ collisions generated by the mixed $`R_p`$-conserving and violating decay modes. Single and multi-lepton states associated with one or more jets and, in most cases, missing transverse momentum due to escaping neutrinos can be expected,
$$\begin{array}{c}e^+p\to \stackrel{~}{t}\to \stackrel{~}{\chi }_1^+b\to \ell ^+\mathrm{j}\nu \nu ,\ \ell ^+\ell ^+\ell ^{-}\mathrm{j},\ \ell ^+\mathrm{jjj},\ \mathrm{jjj}\nu \hfill \\ e^+p\to \stackrel{~}{c}\to \stackrel{~}{\chi }_1^0c\to \ell ^+\ell ^{-}\mathrm{j}\nu ,\ \ell ^\pm \mathrm{jjj},\ \mathrm{jjj}\nu \hfill \end{array}$$
(4)
where $`\ell `$, $`\nu `$, j generically denote charged leptons, neutrinos and jets. However, not all combinations are possible in principle. For example single negatively charged lepton events can be accompanied by jets but not by neutrinos. Kinematical constraints imposed by the fixed masses of the intermediate supersymmetric particles can be exploited to check whether such hypothetical decay chains are realized or not.
From the above discussion it is clear that isolated $`\mu ^+`$ events in $`e^+p`$ scattering can occur in supersymmetric scenarios with $`R_p`$-violating interactions. The presence of both $`\lambda ^{}LQD^c`$ and $`\lambda LLE^c`$ terms in the superpotential provides a large variety of mechanisms. If true, a wealth of other interesting phenomena could be observed, not only at HERA.
## 6 Summary and Conclusions
The ZEUS results, along with the kinematic analysis of the events themselves, have shown that $`W`$ production alone is unlikely to account for all the H1 high $`P_T`$ lepton events. Moreover, it is likely that events of a similar topology to those observed by H1 would have been found by ZEUS in a similar high $`P_T`$ track based search. It is intriguing that high $`P_T`$ lepton events of the kind observed by H1 can naturally arise in certain $`R_p`$-violating SUSY scenarios. Nevertheless, only more data will allow the source of the events to be finally established.
JK has been partially supported by the Polish Committee for Scientific Research Grant 2 P03B 030 14. TM and DSW have been assisted by the British Council, Collaborative Research Project TOK/880/11/15.
|
no-problem/9901/math9901002.html
|
ar5iv
|
text
|
# The Classification of Three-Dimensional Gradient-Like Morse-Smale Dynamic Systems
A. O. Prishlyak
In papers \[1 — 4\] the topological classification of Morse-Smale vector fields on 2-manifolds and on 3-manifolds was obtained. A classification of three-dimensional Morse-Smale diffeomorphisms whose saddle points have non-intersecting stable and unstable manifolds has also been given.
In this paper a new approach to the classification problem of three-dimensional gradient-like Morse-Smale dynamic systems is presented. A criterion for the topological equivalence of such systems is given in terms of a homeomorphism between surfaces carrying two series of circles.
Invariants of vector fields and diffeomorphisms are constructed and a classification of gradient-like dynamic systems is obtained.
1. Basic definitions. A smooth dynamic system (a vector field or a diffeomorphism) is called a Morse-Smale system if:
1) it has a finite number of critical elements (periodic trajectories for a diffeomorphism; fixed points and closed orbits in the case of a vector field), and all of them are non-degenerate (hyperbolic);
2) the stable and unstable integral manifolds of the critical elements intersect transversally;
3) the limit set of every trajectory is a critical element. Heteroclinic trajectories of a diffeomorphism are trajectories lying in the intersection of the stable and unstable manifolds of critical elements of the same index.
A Morse-Smale dynamic system is called gradient-like if it has no closed orbits in the case of a vector field and no heteroclinic trajectories in the case of a diffeomorphism.
Vector fields are called topologically equivalent if there exists a homeomorphism of the manifold onto itself which maps integral trajectories into integral trajectories preserving their orientation. By a graph we understand a finite 1-dimensional CW-complex. An isomorphism of graphs is a cell homeomorphism (i.e., a graph homeomorphism which maps vertices into vertices and edges into edges).
2. The criterion of topological equivalence of vector fields. Let $`M^3`$ be a closed oriented manifold and let $`X`$ and $`X^{\prime }`$ be Morse-Smale vector fields on it. Let $`a_1,\ldots ,a_k`$ be the fixed 0-points of the field $`X`$ and $`a_1^{\prime },\ldots ,a_k^{\prime }`$ those of the field $`X^{\prime }`$; let $`b_1,\ldots ,b_n`$ and $`b_1^{\prime },\ldots ,b_n^{\prime }`$ be the fixed 1-points. Let $`K`$ be the union of the stable manifolds of the 0- and 1-points. We consider a tubular neighborhood $`U(K)`$ of this union.
We denote by $`N=\partial U(K)`$ the boundary of this neighborhood for the field $`X`$ and by $`N^{\prime }`$ that for the field $`X^{\prime }`$. These boundaries are surfaces which give a Heegaard splitting of the manifold $`M^3`$ \[9\].
We denote by $`v(x)`$ and $`u(x)`$ the stable and unstable manifolds of a fixed point $`x`$. Let $`u_i`$ be the circles obtained as the intersections of the unstable manifolds of the 1-points with $`N`$. Then the $`u_i`$ form a set of pairwise disjoint circles on the surface $`F`$.
If $`c_1,\ldots ,c_m`$ are the fixed 2-points, then the intersections $`v_i=v(c_i)\cap N`$ form another set of circles on the surface $`F`$. Analogously, for the field $`X^{\prime }`$ there are two sets of circles on the surface $`N^{\prime }`$.
If there is one 0-point and one 3-point, then these sets of circles are systems of meridians of the surface which form a Heegaard diagram of the manifold $`M^3`$ \[9\].
Lemma 1. The field $`X`$ is topologically equivalent to $`X^{\prime }`$ if and only if there is a homeomorphism of surfaces $`f:F\to F^{\prime }`$ which maps the first set of circles onto the first one and the second onto the second.
Proof. Necessity follows from the construction. We prove sufficiency. Suppose such a homeomorphism exists. Consider the disks which lie in the unstable integral manifolds $`U(b_i)`$, contain the points $`b_i`$ and are bounded by the circles $`u_i`$. We can extend the homeomorphism from the boundaries of these disks to homeomorphisms of the disks which map integral trajectories into integral trajectories (because each integral trajectory, except the fixed points, crosses the boundary of a disk).
Analogously, there exist homeomorphisms of the disks consisting of the parts of integral trajectories which begin on the circles of the second type and end at the fixed 2-points. The surface $`F`$ together with these disks cuts the 3-manifold into 3-disks, each of which contains one fixed point of index 0 or 3. Having the homeomorphisms of the boundaries of these 3-disks, we extend them to their interiors. Thus we have constructed a homeomorphism of the manifolds which establishes the topological equivalence of the vector fields.
3. Extension of graph isomorphisms to surface homeomorphisms.
Let $`G`$ be an oriented graph embedded in a surface $`F`$, and $`G^{\prime }`$ in $`F^{\prime }`$. If the graphs are isomorphic (possibly without preserving the orientation of edges), then there exists a finite number of different isomorphisms between them. Thus the question of the existence of a surface homeomorphism whose restriction to the graphs is a graph isomorphism is equivalent to the question of whether a given graph isomorphism can be extended to a surface homeomorphism.
Let $`g:G\to G^{\prime }`$ be a graph isomorphism which maps a vertex $`A_i`$ of the graph $`G`$ to a vertex $`A_i^{\prime }`$ and an edge $`B_j`$ to $`B_j^{\prime }`$. Denote by $`U(G)`$ a tubular neighborhood of the graph $`G`$ in the surface $`N`$ and let $`p`$ be the projection of its closure onto the graph. The complement $`N\setminus U(G)`$ consists of surfaces $`F_i`$ with boundary, and $`\partial F_i=\partial U(G)`$. Let us cut each circle of the boundary of a surface $`F_i`$ into arcs in such a way that each arc is mapped by the projection $`p`$ onto one edge of the graph $`G`$ and the preimage $`p^{-1}(B_j)`$ of each edge consists of two arcs taken over all the surfaces. Choose the orientation of the arcs so that the projection $`p`$ preserves orientation, and denote the arcs by the same letters as the corresponding edges.
We fix an orientation on each surface $`F_i`$ (compatible with the orientation of the surface $`F`$ if $`F`$ is oriented, and arbitrary otherwise). For each circle of the boundary of a surface we form a word consisting of the letters $`B_j^{\pm 1}`$ denoting the arcs (edges of the graph) lying on this circle. We write the letters in the order in which we meet them when going around the circle in the direction compatible with the orientation of the surface $`F_i`$. A letter has degree $`+1`$ if the orientation of the corresponding arc coincides with that of the circle and $`-1`$ otherwise. Two words are called equivalent if one can be obtained from the other by a cyclic permutation of letters; this corresponds to choosing another starting point of the circuit. Two words are called reverse if one can be obtained from the other by writing the letters in the reverse order with their degrees changed and, possibly, applying a cyclic permutation; this corresponds to going around the circle with the opposite orientation.
For each surface $`F_i`$ we compose a list consisting of the number $`n_i`$ equal to the genus of the surface $`F_i`$ and of the words written down when going around the boundary circles along the orientation. Two such lists are called equivalent if they have the same number $`n_i`$ and there is a one-to-one correspondence between the words such that corresponding words are equivalent. The lists are reverse if they have the same number $`n_i`$ and all corresponding words are reverse.
Thus for the surface $`N`$ and the graph $`G`$ we construct a collection of lists such that each list corresponds to one surface $`F_i`$. Two such collections are called equivalent if there is a one-to-one correspondence between the lists such that corresponding lists are equivalent or reverse.
Lemma 2. Let $`G`$ be an oriented graph embedded in a surface $`N`$ and $`G^{\prime }`$ in $`N^{\prime }`$, and let $`g:G\to G^{\prime }`$ be a graph isomorphism which maps a vertex $`A_i`$ of the graph $`G`$ to a vertex $`A_i^{\prime }`$ and an edge $`B_j`$ to $`B_j^{\prime }`$. Then the graph isomorphism can be extended to a surface homeomorphism if and only if, after replacing each $`B_j`$ by $`B_j^{\prime \pm 1}`$ (the sign being chosen according to whether the orientation of the edge is preserved) in the collection of lists for the pair $`(N,G)`$, we obtain a collection of lists equivalent to that of the pair $`(N^{\prime },G^{\prime })`$. Moreover, the homeomorphism preserves orientation if all corresponding lists are equivalent.
Proof. The necessity of the condition follows from the construction. Indeed, a surface homeomorphism gives a one-to-one correspondence between the lists and between the words in them; if we start the circuit at another point we obtain equivalent words and lists, and if we reverse the orientation we obtain reverse ones.
Sufficiency. Let $`U_i`$ be the connected components obtained after cutting the surface $`N`$ along the graph. They are homeomorphic to the interiors of the surfaces $`F_i`$. The surface $`N`$ can be obtained by gluing the surfaces $`F_i`$ to the graph $`G`$. Consider each boundary circle of a surface $`F_i`$ as an $`n`$-gon (where $`n`$ is the number of letters in the corresponding word). Each side of the $`n`$-gon corresponds to one letter of the word, and its attaching map sends it onto one edge of the graph $`G`$. Then the graph isomorphism and the equivalence of words give a natural homeomorphism between the surface boundaries. Since the genus and the number of boundary components are the same for the surfaces $`F_i`$ and $`F_i^{\prime }`$, these surfaces are homeomorphic. Moreover, there exists a homeomorphism which extends the given boundary homeomorphism. This means that the graph isomorphism can be extended to a surface homeomorphism.
4. Construction of the invariant of a gradient-like vector field.
As in section 2, for each vector field we construct a surface with two sets of circles on it. The graph is the union of these circles. The vertices of the graph are the intersection points of the circles together with one arbitrary point on each circle without intersections. The edges are the arcs between the vertices. The edges of the graph are decomposed into two sets according to which set of circles the corresponding circle belongs to. We fix an arbitrary orientation of this graph. For this embedded graph, as in section 3, we construct the collection of word lists with letters corresponding to the edges of the graph.
Definition. The graph constructed in this way, with the decomposition of its edges into two sets and with the collection of word lists, is called the distinguished graph of the vector field. Two distinguished graphs are called equivalent if there exists a graph isomorphism which preserves the decomposition of the edges into two sets and such that, after replacing the letters in the word list collection of the first graph by the corresponding letters of the second graph (with degree $`\pm 1`$ depending on orientation), we obtain a word list collection equivalent to that of the second graph.
Theorem 1. Two vector fields are topologically equivalent if and only if their distinguished graphs are equivalent.
The proof follows from Lemmas 1 and 2.
5. Topological conjugacy of diffeomorphisms. Let $`f:M^3\to M^3`$ be a gradient-like Morse-Smale diffeomorphism. As in section 2, we construct a surface with two sets of circles on it and then, as in section 4, a distinguished graph. The action of the diffeomorphism $`f`$ on the integral manifolds of the saddle points induces a map between the circles of the first type, a map between the circles of the second type, and an isomorphism of the distinguished graph onto itself, which we call inner.
Theorem 2. Two gradient-like Morse-Smale diffeomorphisms $`f`$ and $`g`$ are topologically conjugate if and only if there exists an isomorphism of their distinguished graphs which gives an equivalence between them and commutes with the inner graph isomorphisms.
Proof. Necessity follows from the construction. Let us prove sufficiency. As in Theorem 1, we construct a homeomorphism $`h`$ between the manifolds which maps stable manifolds onto stable ones and unstable onto unstable. In addition, $`h(f(U))=g(h(U))`$, where $`U`$ is a part of a stable or unstable manifold into which it is separated by the other manifolds. Analogously to the two-dimensional case in \[Grines\], this homeomorphism can be corrected to the required one.
6. Realization of a dynamic system with a given invariant.
Let us study the problem of when a distinguished graph represents a surface with two sets of circles on it and a gradient-like vector field. Let $`K`$ be the complex obtained by gluing the surfaces corresponding to the lists to the graph. Since each list of words determines a surface with boundary, the surface structure (being locally homeomorphic to the plane) can be broken only at the gluing points, that is, on the edges and at the vertices of the graph.
1) The condition that the complex $`K`$ is locally planar at interior points of edges is equivalent to exactly two pieces of the surface boundaries being glued to each edge. This means that each letter, or its inverse, occurs exactly twice in all the lists.
2) Suppose the first condition is satisfied. For each vertex we consider the set of incident edges. Two edges are called adjacent if they lie in the boundary of one of the glued surfaces and have a common vertex in it. This condition is equivalent to the existence of a word in which the corresponding letters are adjacent or are the first and last letters of the word. Two edges are called equivalent if there is a chain of adjacent edges connecting them. Then the condition that the complex $`K`$ is locally planar at a vertex is equivalent to all edges incident to this vertex being equivalent.
3) Suppose conditions 1) and 2) are satisfied. Then the distinguished graph determines a graph on a surface. This graph is a union of two sets of circles if and only if each vertex either is incident to four edges or is the beginning and end of one edge, with which it forms a loop. Furthermore, at a vertex with four incident edges the adjacent edges should belong to different sets of edges (corresponding to the two sets of circles).
Theorem 3. A distinguished graph is the graph of a gradient-like vector field if and only if
a) each letter occurs exactly twice in the whole set of word lists,
b) each vertex is incident to four edges or is the beginning and end of one edge,
c) each of the four edges is adjacent to two others from the other set of edges.
The proof follows from the discussion above.
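Condition a) of Theorem 3 is purely combinatorial and can be checked mechanically. The sketch below assumes a simple representation of the word lists as nested lists of letter strings such as 'B3' or 'B3^-1'; this data format is an assumption made only for the illustration.

```python
from collections import Counter

def letters_occur_twice(word_lists):
    """Check condition a) of Theorem 3: every edge letter occurs exactly twice
    over all words of all lists (a letter such as 'B3^-1' counts as 'B3')."""
    counts = Counter()
    for surface_list in word_lists:          # one list per glued surface F_i
        for word in surface_list:            # one word per boundary circle
            for letter in word:              # letters are strings like 'B3' or 'B3^-1'
                counts[letter.split('^')[0]] += 1
    return bool(counts) and all(c == 2 for c in counts.values())

# Example: the words B1 B2 and B1^-1 B2^-1 satisfy condition a)
print(letters_occur_twice([[['B1', 'B2']], [['B1^-1', 'B2^-1']]]))
```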
Literature.
1. Aranson S.H., Grines V.Z. Topological classification of flows on closed 2-manifolds // Advant. Math. Sc. (Russian), 41, N.1, 1986.- P.149-169.
2. Bolsinov A.V., Oshemkov A.A., Sharko V.V. On Classification of Flows on Manifolds. I // Methods of Funct. An. and Topology, v.2, N.2, 1996.- P.131-146.
3. Peixoto M. On the classification of flows on two-manifolds. In: Dynamical Systems, edited by M. Peixoto.- Academic Press.- 1973.- P.389-419.
4. Fleitas G. Classification of gradient-like flows of dimensions two and three // Bol. Soc. Brasil. Mat.- 9, 1975.- P.155-183.
5. Umanski Ya.L. The circuit of three-dimensional Morse-Smale dynamic system without the closed trajectories // Izvestiya of USSR Acad. of Sc., 230, N.6, 1976.- P.1286-1289.
6. Grines V.Z., Kalay I.N. Classification of three-dimensional gradient-like dynamic systems // Advant. Math. Sc. (Russian), v.49, N.2, 1994.- P.149-150.
7. Smale S. On gradient dynamical systems // Ann. Math. 74, 1961.- P.199-206.
8. Anosov D.V. Smooth dynamic systems 1 // Results of Science and Engineering. Modern Math. Problems. Fund. Directions, V.1, 1985 (Russian).- P.151-242.
9. Matveev S.V., Fomenko A.T. Algorithmic and computer methods in three-dimensional topology.- M.: MSU, 1991.- 301p.
10. Prishlyak A. On a graph embedded in a surface // Advant. Math. Sc. (Russian), V.56, N.4, 1997.- P.211-212.
Kiev University
e-mail: prish@mechmat.univ.kiev.ua
|
no-problem/9901/cond-mat9901290.html
|
ar5iv
|
text
|
# Observation of the distribution of molecular spin states by resonant quantum tunneling of the magnetization
## Abstract
Below 360 mK, Fe<sub>8</sub> magnetic molecular clusters are in the pure quantum relaxation regime and we show that the predicted “square-root time” relaxation is obeyed, allowing us to develop a new method for watching the evolution of the distribution of molecular spin states in the sample. We measure as a function of applied field $`H`$ the statistical distribution $`P(\xi _H)`$ of magnetic energy bias $`\xi _H`$ acting on the molecules. Tunneling initially causes rapid transitions of molecules, thereby “digging a hole” in $`P(\xi _H)`$ (around the resonant condition $`\xi _H`$ = 0). For small initial magnetization values, the hole width shows an intrinsic broadening which may be due to nuclear spins.
Strong evidence now exists for thermally-activated quantum tunneling of the magnetization (QTM) in magnetic molecules such as Mn<sub>12</sub>ac and Fe<sub>8</sub> . Crystals of these materials can be thought of as ensembles of identical, iso-oriented nanomagnets of net spin $`S`$ = 10 for both Mn<sub>12</sub>ac and Fe<sub>8</sub>, and with a strong Ising-like anisotropy. The energy barrier between the two lowest lying spin states with $`S_z`$ = $`\pm `$10 is about 60 K for Mn<sub>12</sub>ac and 25 K for Fe<sub>8</sub> . Theoretical discussion of thermally-activated QTM assumes that thermal processes (principally phonons) promote the molecules up to high levels, not far below the top of the energy barrier, and the molecules then tunnel inelastically to the other side. The transitions are therefore almost entirely accomplished via thermal excitations.
At temperatures below 360 mK, Fe<sub>8</sub> molecular clusters display a clear crossover from thermally activated relaxation to a temperature independent quantum regime, with a pronounced resonance structure of the relaxation time as a function of the external field . This can be seen for example by hysteresis loop measurements (Fig. 1). In this regime only the two lowest levels of each molecule are occupied, and only “pure” quantum tunneling through the anisotropy barrier can cause direct transitions between these two states. It was surprising however that the observed relaxation of the magnetization in the quantum regime was found to be non-exponential and the resonance width orders of magnitude too large . The key to understanding this seemingly anomalous behavior now appears to involve the ubiquitous hyperfine fields as well as the (inevitable) evolving distribution of the weak dipole fields of the nanomagnets themselves .
In this letter, we focus on the low temperature and low field limits, where phonon-mediated relaxation is astronomically long and can be neglected. In this limit, the $`S_z`$ = $`\pm `$10 spin states are coupled by a tunneling matrix element $`\mathrm{\Delta }_{\mathrm{tunnel}}`$ which is estimated to be about 10<sup>-8</sup> K . In order to tunnel between these states, the magnetic energy bias $`\xi _H=g\mu _BSH`$ due to the local magnetic field $`H`$ on a molecule must be smaller than $`\mathrm{\Delta }_{\mathrm{tunnel}}`$ implying a local field smaller than $`10^{-9}`$ T for Fe<sub>8</sub> clusters. Since the typical intermolecular dipole fields are of the order of 0.05 T, it seems at first that almost all molecules should be blocked from tunneling by a very large energy bias. Prokofev and Stamp have proposed a solution to this dilemma by assuming that fast dynamic nuclear fluctuations broaden the resonance, and the gradual adjustment of the dipole fields in the sample caused by the tunneling, brings other molecules into resonance and allows continuous relaxation . A crucial prediction of the theory is that at a given longitudinal applied field $`H`$, the magnetization should relax at short times with a square-root time dependence:
$$M(H,t)=M_{\mathrm{in}}+(M_{\mathrm{eq}}(H)-M_{\mathrm{in}})\sqrt{\mathrm{\Gamma }_{\mathrm{sqrt}}(\xi _H)t}$$
(1)
Here $`M_{\mathrm{in}}`$ is the initial magnetization at time $`t`$ = 0 (i.e. after a rapid field change), and $`M_{\mathrm{eq}}(H)`$ is the equilibrium magnetization. The rate function $`\mathrm{\Gamma }_{\mathrm{sqrt}}(\xi _H)`$ is proportional to the normalized distribution $`P(\xi _H)`$ of energy bias in the sample:
$$\mathrm{\Gamma }_{\mathrm{sqrt}}(\xi _H)=c\frac{\mathrm{\Delta }_{\mathrm{tunnel}}^2}{\hbar }P(\xi _H)$$
(2)
where $`\hbar `$ is Planck’s constant and $`c`$ is a constant of the order of unity which depends on the sample shape. If these simple relations are true, then measurements of the short time relaxation as a function of the applied field $`H`$ give experimentalists a powerful new method to directly observe the distribution $`P(\xi _H)`$. Indeed the predicted $`\sqrt{t}`$ relaxation (Eq. (1)) has been seen in preliminary experiments on fully saturated Fe<sub>8</sub> crystals . We show here that it is accurately obeyed for saturated and non-saturated samples (Fig. 2) and we find that a remarkable structure emerges in $`P(\xi _H)`$ as presented in the following.
In order to carefully study $`P(\xi _H)`$ and its evolution as the sample relaxes, we have developed a unique magnetometer consisting of an array of micro-SQUIDs on which we placed a single crystal of Fe<sub>8</sub> molecular clusters. The SQUIDs measure the magnetic field induced by the magnetization of the crystal (see inset of Fig. 1). The advantage of this magnetometer lies mainly in its high sensitivity and fast response, allowing short-time measurements down to 1 ms. Furthermore the magnetic field can be changed rapidly and along any direction.
Figure 2 shows a typical set of relaxation curves plotted against the square-root of time. However instead of saturating the sample before each relaxation measurement so that the initial magnetization $`M_{\mathrm{in}}=M_s`$ as described in , these measurements were made by rapidly quenching the sample from 2 K in zero field (ZFC), i.e. for an initial magnetization $`M_{\mathrm{in}}`$ = 0. The quench takes approximately one second and thus the sample does not have time to relax, either by thermal activation or by quantum transitions, so that the high temperature “thermal equilibrium” spin distribution is effectively frozen in. Once the temperature is stable (in this case 40 mK) a measuring field is applied, the timer is set to $`t`$ = 0, and the relaxation of the magnetization is recorded as a function of time. The entire procedure was repeated for each measuring field shown in Fig. 2. As can be seen for short times $`t<100`$ s the square root relaxation is well obeyed. Note that all curves extrapolate back to $`M`$ = 0 at $`t`$ = 0. A fit of the data to Eq. (1) determines $`\mathrm{\Gamma }_{\mathrm{sqrt}}`$.
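A minimal sketch of such a fit is the following: for short times the normalised magnetization is linear in $`\sqrt{t}`$, so a one-parameter least-squares fit through the origin yields $`\mathrm{\Gamma }_{\mathrm{sqrt}}`$. The function and variable names are illustrative and not those of the actual analysis.

```python
import numpy as np

def gamma_sqrt_from_fit(t, M, M_in, M_eq):
    """Least-squares estimate of Gamma_sqrt from short-time relaxation data obeying
    M(t) = M_in + (M_eq - M_in) * sqrt(Gamma_sqrt * t)  (Eq. 1): the normalised
    magnetization is fitted as a straight line through the origin in sqrt(t)."""
    y = (np.asarray(M, dtype=float) - M_in) / (M_eq - M_in)
    x = np.sqrt(np.asarray(t, dtype=float))
    slope = np.sum(x * y) / np.sum(x * x)
    return slope**2

# Hypothetical check with synthetic data generated from Eq. (1) itself
t = np.linspace(0.1, 100.0, 50)
M = 0.0 + (1.0 - 0.0) * np.sqrt(1e-4 * t)
print(gamma_sqrt_from_fit(t, M, M_in=0.0, M_eq=1.0))  # should recover ~1e-4
```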
A plot of $`\mathrm{\Gamma }_{\mathrm{sqrt}}`$ vs. $`H`$ is shown in Fig. 3 for the zero field cooled data, as well as distributions for three other values of the initial magnetization which were obtained by quenching in small fixed fields (field cooled FC magnetization). The distribution for an initial magnetization close to the saturation value is clearly the narrowest, reflecting the high degree of order of this starting state. The distributions become broader as the initial magnetization becomes smaller, reflecting the random fraction of reversed spins. The small satellite bumps are due to flipped nearest neighbor spins on the triclinic lattice as seen in computer simulations .
We can exploit this technique of measuring $`P(\xi _H)`$ in order to observe the evolution of molecular states in the sample during relaxation by quantum tunneling of the magnetization. We first field cooled the sample (thermally anneal) as described above in order to obtain the desired initial magnetization state. Then after applying a field $`H_t`$, we let the sample relax for a time $`t_t`$, which we call “tunneling field” and “tunneling time” respectively. During the tunneling time, a small fraction of the molecular spins tunnel and reverse their direction. Finally, we applied a small measuring field and record the short time relaxation which again can be fit to a square root law (Eq. (1)) yielding $`\mathrm{\Gamma }_{\mathrm{sqrt}}`$. The entire procedure is repeated many times for other measuring fields in order to probe the distribution as a function of field $`H`$, and thus we obtain the distribution $`P(\xi _H,H_t,t_t)`$ which we call a “tunneling distribution”.
Figure 4 shows tunneling distributions for field $`H_t`$ = 0 and for tunneling times between 1 and 250 s for the case that the initial magnetization starts from the fully saturated state. Note the rapid depletion of molecular spin states around the resonant field $`H_t`$ = 0 and how quickly the depletion depth and width increase with tunneling time. In effect, a “hole is dug” into the distribution function around $`H_t`$. The hole arises because only spins in resonance can tunnel. The hole is spread out because as the sample relaxes, the internal fields in the sample change such that spins which were close to the resonance condition may actually be brought into resonance. Notice however that “wings” are created on each side of the hole because other spins are pushed further away from resonance. These features are in good agreement with Monte Carlo simulations of the relaxation for non-spherical samples .
In Fig. 4(b) we see the extraordinary effect of sample annealing (i.e. for small values of the initial magnetization $`M_{\mathrm{in}}`$) on the evolution of $`P(\xi _H,H_t,t_t)`$ with time. Now the depletion proceeds over an extremely narrow bias range. This is virtually incontestable experimental proof that we are seeing tunneling relaxation. The narrowing of the hole is because in the annealed sample further incremental relaxation hardly changes the internal demagnetization field. Notice that the initial line shape of $`P(\xi _H)`$ is very accurately fit to a Gaussian for the annealed samples, exactly as predicted for the dipole field distribution of a dense set of randomly oriented spins .
Further investigation of the effect of sample annealing led us to another remarkable discovery (Fig. 5). Progressive annealing such that $`|M_{\mathrm{in}}|<|0.5M_s|`$ eventually leads to a hole linewidth which at short times is independent of further annealing, and has a half linewidth of 0.8 mT. It is interesting that such an intrinsic linewidth was predicted by Prokof’ev-Stamp . It is claimed to come from the nuclear spins which would give rise to a linewidth $`\xi _0`$ of roughly the same order (although only 2% of natural iron has a nuclear moment, there are other nuclei in the clusters that can contribute to the hyperfine fields, i.e. more than 100 hydrogen, 18 nitrogen and 8 bromine atoms!). We notice that any intrinsic linewidth due to the tunneling matrix element itself is 5 orders of magnitude smaller, and would be quite unobservable. According to Eq. (2), the ratio $`\mathrm{\Gamma }_{\mathrm{sqrt}}/E_D`$ (where $`E_D`$ is the Gaussian half-width of $`P(\xi _H)`$ for strongly annealed samples) should be a constant, and thus allows us to estimate $`\mathrm{\Delta }_{\mathrm{tunnel}}`$ from our relaxation measurements. We find that it is indeed a constant (even though $`E_D`$ and $`\mathrm{\Gamma }_{\mathrm{sqrt}}`$ vary with $`M_{\mathrm{in}}`$), and we extract $`\mathrm{\Delta }_{\mathrm{tunnel}}\approx 5\times 10^{-8}`$ K for $`|M_{\mathrm{in}}|<|0.5M_s|`$, assuming $`c=1`$.
In conclusion, we have developed a new measurement technique yielding $`P(\xi _H)`$ which is related to the internal dipole field distributions always present in crystals of molecular clusters. The distribution evolves during relaxation by tunneling in a non-trivial way, and can be monitored by our technique, revealing the details of how the tunneling is proceeding in the sample, which molecules are tunneling, and how the time-varying internal fields influence the relaxation. The shape of the hole for thermally annealed distributions indicates a fast dynamic relaxation over a field range of 0.8 mT which could correspond in the Prokof’ev-Stamp theory to the nuclear linewidth $`\xi _0`$. Although this is only indirect evidence of the nuclear mechanism, it is hard for us to see what else could be operating at these temperatures. Our evidence for the role of the dipole interactions is on the other hand very direct, and in good agreement with Monte Carlo simulations. We believe that our technique should work for other multi-particle spin systems in the quantum regime (like quantum spin glasses ), and could give quite new information on the non-ergodic relaxation behavior typical of these systems.
# Comment on entropy bounds and the generalized second law
## I Introduction
A cornerstone of black hole thermodynamics is the generalized second law (GSL), which asserts that in any process, the generalized entropy
$$S^{\prime }=S+S_{\mathrm{bh}}$$
(1)
never decreases, where $`S`$ denotes the entropy of matter outside of black holes and $`S_{\mathrm{bh}}=𝒜/4`$, where $`𝒜`$ denotes the total surface area of the black hole horizons. (Here and throughout this paper we use units where $`G=c=\mathrm{\hbar }=k=1`$.) The validity of the GSL is essential for the consistency of black hole thermodynamics and for the interpretation of $`𝒜/4`$ as representing the physical entropy of a black hole.
It was already recognized at the time the GSL was first postulated that a potential difficulty arises when one lowers a box initially containing energy $`E`$ and entropy $`S`$ toward a black hole . Classically, a violation of the GSL can be achieved if one lowers the box sufficiently close to the horizon. A resolution of this difficulty was proposed by Bekenstein by postulating that the entropy to energy ratio of any matter put into a box must be subject to the universal bound
$$S/E\leq 2\pi R$$
(2)
where $`E`$ denotes the energy in the box, and $`R`$ denotes some suitable measure of the size of the box. Naively, at least, such a bound would rescue the GSL by preventing one from lowering a box close enough to a black hole to violate it.
However, an alternative resolution of the apparent difficulty with the GSL was given in . There it was noted that there is a quantum “thermal atmosphere” surrounding a black hole, which produces a large “buoyancy force” on a box when it is slowly lowered very close to the horizon. When this buoyancy force is taken into account, the optimal place to drop such a box into a black hole no longer is at the horizon of the black hole but rather at the “floating point” of the box, which lies at a finite distance from the horizon. When the effects of the buoyancy force on energy balance are properly taken into account, it was found that the GSL always holds in this process .
The analysis of assumed, for simplicity, that the box was “thin” in the sense that its proper height, $`b`$, is small compared with the scale of variation of the redshift factor, $`\chi `$, i.e., $`b\ll \chi (d\chi /dl)^{-1}`$, where $`l`$ denotes proper distance from the horizon. This analysis was then generalized to the case of “thick boxes” in , although this generalization was done in the context of a slightly different process, wherein, rather than dropping the box into the black hole, the contents of the box are allowed to “leak out” as the box is raised. Nevertheless, several years ago Bekenstein argued that for boxes with $`b`$ at least as large as $`A^{1/2}`$ (where $`A`$ denotes the horizontal cross-sectional area of the box), the buoyancy effects of the thermal atmosphere are negligible. He then showed that the bound (2) must hold for such boxes in order that the GSL be valid, in apparent contradiction with the conclusions of the analysis of .
The purpose of this paper is to resolve this apparent contradiction. In the course of his analysis, Bekenstein made some assumptions concerning the nature of unconstrained thermal matter and the location of the floating point of the box. Under these assumptions, it is indeed necessary for the validity of the GSL that the bound (2) hold, as Bekenstein found. However, we shall show that eq. (2) holds automatically as a consequence of the same assumptions used to show that it is necessary for the validity of the GSL. In other words, if one had matter which violated eq. (2), then Bekenstein’s assumption about the nature of unconstrained thermal matter and/or his assumption about the location of the floating point of the box could not be correct.
In the next section, we show that—whether or not eq. (2) is satisfied—the GSL holds in any process where a (possibly “thick”) box, initially containing energy $`E`$ and entropy $`S`$, is lowered toward a black hole and then dropped in. Bekenstein’s arguments are then analyzed in Section 3.
## II Validity of the GSL for “thick” boxes
It was shown in that the bound (2) is not needed for the validity of the GSL for the case of a “thin” box. The analysis of was generalized to “thick” boxes in the Appendix of . However, a slightly different process was considered there (in response to criticisms of ), in which the contents of the box are allowed to slowly leak out as the box is raised. Consequently, the formulas of are not immediately applicable to the present situation where the box is dropped into the black hole. Thus, in this section, we shall extend the analysis and arguments given in the Appendix of to the present case.
To begin, in a given region of space outside of the black hole, unconstrained thermal matter is defined to be the state of matter that maximizes entropy at a fixed volume and energy (as measured at infinity).<sup>*</sup> (<sup>*</sup>By contrast, the terminology “thermal matter” would be used to denote matter which is in thermal equilibrium but which may have additional “constraints” resulting, e.g., from the presence of box walls (which may exclude some modes of excitation of the matter) or restrictions on the species of particles that are present.) It should be noted that the properties of unconstrained thermal matter may depend upon location, i.e., for unconstrained thermal matter the functional dependence of the entropy density, $`s`$, on energy density, $`e`$, may vary with position outside of the black hole. We make two assumptions about unconstrained thermal matter: (i) We assume that unconstrained thermal matter is (locally) homogeneous, so that the integrated Gibbs-Duhem relation holds
$$e+P-Ts=0$$
(3)
where $`T`$ is the temperature of the unconstrained thermal matter, and $`P`$ is its pressure. (ii) We assume that the “thermal atmosphere” of a black hole is described by unconstrained thermal matter, with locally measured temperature given by $`T=T_{\mathrm{bh}}/\chi `$, where $`T_{\mathrm{bh}}=\kappa /2\pi `$ is the Hawking temperature of the black hole. Both of these assumptions were also made in Bekenstein’s analysis.
Following and , we now compute the change in generalized entropy occurring when matter in a thick box is slowly lowered toward a black hole and then dropped in. Consider a box of cross-sectional area $`A`$ and height $`b`$, containing energy density $`\rho `$ and total entropy $`S`$. As the box is lowered toward the black hole, the energy density will depend on both the proper distance, $`l`$, of the center of the box from the horizon and the proper height, $`y`$, above the center of the box. Following , we adopt the abbreviation
$$\int f(y)\,dV\equiv A\int _{-b/2}^{b/2}f(y)\,dy.$$
(4)
The energy of the box as measured at infinity is
$$E_{\infty }(l)=\int \rho (l,y)\chi (l+y)\,dV$$
(5)
where $`\chi `$ is the redshift factor. The weight of the box at infinity is
$$w(l)=\int \rho (l,y)\frac{\partial \chi (l+y)}{\partial l}\,dV.$$
(6)
The condition that no extra energy is fed into or taken out of the box as it is lowered is

$$0=\frac{dE_{\infty }}{dl}-w=\int \frac{\partial \rho (l,y)}{\partial l}\chi (l+y)\,dV.$$

(7)

(If the box is filled with matter in thermal equilibrium, then the temperature in the box will follow the Tolman law $`T\propto 1/\chi `$. Using $`d\rho =T\,ds`$ (and, hence, $`\partial \rho /\partial l=T\,\partial s/\partial l`$), we see that eq. (7) is equivalent to requiring that the entropy of the box remain constant as it is lowered.)
Thus the work done by the weight of the box on the agent lowering it is
$$W_g(l)=-\int _{\infty }^{l}w(l^{\prime })\,dl^{\prime }=E_i-\int \rho (l,y)\chi (l+y)\,dV$$
(8)
where $`E_i`$ is the initial energy of the box.
Meanwhile, the thermal radiation exerts a buoyancy force on the box equal to
$$f_b(l)=A\left[(P\chi )_{l-b/2}-(P\chi )_{l+b/2}\right]$$
(9)
where $`P`$ is the radiation pressure of the unconstrained thermal matter. The work done by the buoyancy force on the agent at infinity is then
$$W_b(l)=\int _{\infty }^{l}f_b(l^{\prime })\,dl^{\prime }=-\int P(l,y)\chi (l+y)\,dV.$$
(10)
If the contents of the box are dropped into the black hole from position $`l`$, the increase in black hole entropy will be
$$\mathrm{\Delta }S_{\mathrm{bh}}=\frac{1}{T_{\mathrm{bh}}}(E_i-W_g-W_b)=\frac{1}{T_{\mathrm{bh}}}\int [\rho (l,y)+P(l,y)]\chi (l+y)\,dV.$$
(11)
Using eq. (3) together with $`T=T_{\mathrm{bh}}/\chi `$, we obtain
$$\mathrm{\Delta }S_{\mathrm{bh}}=\frac{1}{T_{\mathrm{bh}}}\int [\rho (l,y)-e(l,y)]\chi (l+y)\,dV+S_{\mathrm{th}}$$
(12)
where $`S_{\mathrm{th}}`$ is the entropy of the thermal radiation displaced by the box. Equation (12) is equivalent to eq. (20) of and it corresponds directly to eq. (A12) of for the process considered in that reference.
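For completeness, the step from eq. (11) to eq. (12) uses only the Gibbs-Duhem relation (3) and the fact that $`T\chi =T_{\mathrm{bh}}`$ in the thermal atmosphere:

$$\int [\rho +P]\chi \,dV=\int [\rho -e]\chi \,dV+\int (e+P)\chi \,dV=\int [\rho -e]\chi \,dV+T_{\mathrm{bh}}\int s\,dV,$$

and the last integral is just the entropy $`S_{\mathrm{th}}`$ of the displaced thermal radiation.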
It also follows from eq. (3) together with $`T=T_{\mathrm{bh}}/\chi `$ that
$$d(P\chi )=-e\,d\chi .$$
(13)
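Explicitly, eq. (13) follows from the differential of eq. (3) combined with the local first law $`de=T\,ds`$ (which together give $`dP=s\,dT`$) and the Tolman relation $`T=T_{\mathrm{bh}}/\chi `$ (so that $`dT=-(T/\chi )\,d\chi `$):

$$d(P\chi )=\chi \,dP+P\,d\chi =\chi s\,dT+P\,d\chi =-sT\,d\chi +P\,d\chi =-(e+P)\,d\chi +P\,d\chi =-e\,d\chi .$$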
Minimizing $`\mathrm{\Delta }S_{\mathrm{bh}}`$ with respect to $`l`$, and using (13), we obtain
$$\int \left[\rho (l_0,y)-e(l_0,y)\right]\frac{\partial \chi (l+y)}{\partial l}\,dV=0.$$
(14)
Thus, the entropy increase of the black hole is minimal when the contents are dropped in from the “floating point”, i.e. when the weight of the box is equal to the weight of the displaced thermal radiation. Equation (14) is identical to eq. (14) of and eq. (A13) of .
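The intermediate step behind eq. (14) is worth making explicit. Differentiating the integrand of eq. (11) with respect to $`l`$ and using eqs. (7) and (13),

$$\frac{d}{dl}\int [\rho +P]\chi \,dV=\int \left[\frac{\partial \rho }{\partial l}\chi +\rho \frac{\partial \chi }{\partial l}+\frac{\partial (P\chi )}{\partial l}\right]dV=\int [\rho -e]\frac{\partial \chi }{\partial l}\,dV,$$

since the first term integrates to zero by eq. (7) and the last term equals $`-e\,\partial \chi /\partial l`$ by eq. (13); setting this derivative to zero at $`l=l_0`$ gives eq. (14).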
To proceed further, we first consider an idealized situation in which we imagine that the box is filled with unconstrained thermal matter. (It should be emphasized that we are not assuming here that it is physically realistic to actually have a box filled with unconstrained thermal matter. The consideration of such a box is done here purely for mathematical purposes, to compare the generalized entropy change that would occur in this idealized process to that which occurs in the actual process; see below.) Let $`T_0`$ denote the temperature of the matter in the box at the start of the process. Then, when lowered to position $`l`$, the matter in the box will have a temperature distribution $`T=T_{\infty }(l)/\chi `$, where $`T_{\infty }(l)`$ is determined by $`T_0`$ and eq. (7). According to our analysis above, the optimal place (in the sense of minimizing $`\mathrm{\Delta }S_{\mathrm{bh}}`$) to drop such a box is at its “floating point”, which is easily seen to be the position, $`l_0`$, at which $`T_{\infty }(l_0)=T_{\mathrm{bh}}`$, since at this position we have $`\rho =e`$. By eq. (12), when the box is dropped into the black hole from its floating point, $`l=l_0`$, we have
$$\mathrm{\Delta }S_{\mathrm{bh}}=S_{\mathrm{th}}=S$$
(15)
and there is no change in the generalized entropy. Consequently, if the box is dropped from any position, $`l`$, we have
$$\mathrm{\Delta }S^{\prime }\geq 0$$
(16)
and the GSL holds in this idealized process.
Now consider the actual process in which the box is filled with some (arbitrary) distribution of matter, is lowered to an arbitrary position $`l`$ (not necessarily the floating point of the box) and then is dropped into the black hole. Let us compare the change in generalized entropy in this process with the change in generalized entropy that would occur in the above idealized process where we choose $`T_0`$ so that at position $`l`$ the energies as measured at infinity, $`E_{\infty }`$, of the two boxes agree. Then, it follows immediately from eqs. (5) and (12) that the change in black hole entropy, $`\mathrm{\Delta }S_{\mathrm{bh}}`$, is the same for both processes. However, since at position $`l`$ the boxes have the same energy at infinity, the entropy, $`S`$, contained in the box in the actual process cannot be larger than the entropy contained in the box in the idealized process. Consequently, the change in generalized entropy in the actual process cannot be smaller than the change in generalized entropy in the idealized process, which was shown above to be non-negative. This proves that the GSL cannot be violated in the actual process.
## III Bekenstein’s analysis
In Bekenstein purports to show that for thick boxes whose “height”, $`b`$, is not small compared with $`A^{1/2}`$ (where, as above, $`A`$ denotes the horizontal cross-sectional area of the box), the contents of the box must satisfy the entropy bound (2) if the GSL is to hold. We now briefly review Bekenstein’s assumptions and conclusions, and then reconcile them with the results of the previous section.
In his analysis, Bekenstein assumes that unconfined thermal matter can be modelled as an $`N`$-species mixture of noninteracting massless particles, so that
$$P=\frac{e}{3}=\frac{N\pi ^2T^4}{45}$$
(17)
Bekenstein then makes the approximation<sup>§</sup> (<sup>§</sup>Equation (18) is a good approximation sufficiently near the black hole. Bekenstein’s justification for this approximation is somewhat circular in nature, but eq. (18) is not the source of any difficulties in Bekenstein’s analysis.) that
$$\chi (l)\approx \kappa l$$
(18)
where $`\kappa `$ denotes the surface gravity of the black hole. Using this approximation, Bekenstein finds that the exact floating point condition (14) reduces to
$$\frac{(l_0^2-b^2/4)^3}{3l_0^2b^4+b^6/4}=\frac{NA}{720\pi ^2E(l_0)b^3}$$
(19)
where $`E(l_0)=\int \rho \,dV`$ is the locally-measured energy of the box at the floating point.
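Equation (19) can be retraced directly. With $`T=T_{\mathrm{bh}}/\chi \approx 1/(2\pi l)`$, eq. (17) gives $`e=N/(240\pi ^2l^4)`$, and the floating point condition (14) with $`\partial \chi /\partial l\approx \kappa `$ reduces to $`E(l_0)=\int e\,dV`$. Evaluating the integral,

$$E(l_0)=\frac{NA}{240\pi ^2}\int _{-b/2}^{b/2}\frac{dy}{(l_0+y)^4}=\frac{NA}{720\pi ^2}\,\frac{3l_0^2b+b^3/4}{(l_0^2-b^2/4)^3},$$

which rearranges to eq. (19).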
Bekenstein then argues that at the floating point, the quantity
$$\eta ^3\equiv \frac{NA}{720\pi ^2E(l_0)b^3}$$
(20)
must satisfy $`\eta \ll 1`$. In making this argument, Bekenstein makes two additional assumptions: (1) that $`b\gg 1/E`$ and (2) that $`N`$ is of order unity. (It is easy to see that these assumptions together with $`A\lesssim b^2`$ imply $`\eta \ll 1`$.) However, these assumptions are not innocuous ones since, in conjunction with eq. (17), they would imply that the entropy bound (2) is already satisfied by a wide margin for a box in Minkowski spacetime. Namely, since the box’s contents must have lower entropy than unconstrained thermal matter at the same energy and volume, we have for the model of unconstrained thermal matter assumed by Bekenstein,
$$\frac{S}{E}\leq \left(\frac{S}{E}\right)_{\mathrm{th}}\sim \frac{1}{T}$$
(21)
Hence, given that $`b\gg 1/E`$, $`N\sim 1`$, and $`A\lesssim b^2`$, we have
$$E\sim AbT^4\gg \frac{1}{b},$$
(22)
from which it follows that
$$\frac{S}{E}\ll (Ab^2)^{1/4}\lesssim b=2R.$$
(23)
Nevertheless, Bekenstein’s arguments correctly show that—irrespective of the above two additional assumptions—if the floating point of the box is very close to the horizon (in which case, by eqs. (19) and (20), we have $`\eta \ll 1`$), then buoyancy effects are negligible, and the bound (2) is needed for the validity of the GSL. However, we now show that if unconstrained thermal matter is described by (17), then any box that floats very close to the horizon must automatically satisfy (2). Once again, we use the fact that unconstrained thermal matter maximizes entropy at a fixed volume and energy at infinity,
$$S(E_{\infty },l_0)\leq S_{\mathrm{th}}(E_{\infty },l_0).$$
(24)
The unconstrained thermal matter is described by eq. (17) with $`T=T_{\infty }(l_0)/\chi `$, where $`T_{\infty }(l_0)`$ is determined by imposing $`\int e\chi \,dV=E_{\infty }`$. Evaluating this integral using the approximation (18), we find
$$[T_{\infty }(l_0)]^4=\frac{15(l_0^2-b^2/4)^2\kappa ^3E_{\infty }}{N\pi ^2Abl_0}$$
(25)
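This is an elementary integral: with $`e=N\pi ^2T^4/15`$, $`T=T_{\infty }(l_0)/(\kappa l)`$ and $`\chi \approx \kappa l`$,

$$E_{\infty }=\int e\chi \,dV=\frac{N\pi ^2[T_{\infty }(l_0)]^4A}{15\kappa ^3}\int _{-b/2}^{b/2}\frac{dy}{(l_0+y)^3}=\frac{N\pi ^2[T_{\infty }(l_0)]^4A\,b\,l_0}{15\kappa ^3(l_0^2-b^2/4)^2},$$

which inverts to give eq. (25).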
The entropy density of the thermal radiation is $`s=4e/3T`$, so
$$S_{\mathrm{th}}(E_{\infty })=\frac{4E_{\infty }}{3T_{\infty }(l_0)}.$$
(26)
It is convenient to express $`E_{\infty }`$ in terms of the position, $`l_{\mathrm{cm}}`$, of the center of mass of the box. Again applying the $`\chi \approx \kappa l`$ approximation, we obtain the simple relation
$$l_{\mathrm{cm}}\equiv \frac{\int (l+y)\rho \,dV}{E(l_0)}=\frac{E_{\infty }}{\kappa E(l_0)}.$$
(27)
By eqs. (24), (26) and (27), we have
$$\frac{S}{E}\leq \frac{8\pi }{3}\left(\frac{T_{\mathrm{bh}}}{T_{\infty }(l_0)}\right)l_{\mathrm{cm}},$$
(28)
and from eq. (25) and the definition, eq. (20), of $`\eta `$, we find
$$\left(\frac{T_{\mathrm{bh}}}{T_{\infty }(l_0)}\right)^4=\frac{3\eta ^3b^4l_0}{(l_0^2-b^2/4)^2l_{\mathrm{cm}}}.$$
(29)
Now, assuming $`\eta \ll 1`$, the floating point condition (19) yields $`l_0^2\approx (1/4+\eta )b^2`$. Consequently,
$$\frac{T_{\mathrm{bh}}}{T_{\infty }(l_0)}\approx \left(3\eta \frac{l_0}{l_{\mathrm{cm}}}\right)^{1/4},$$
(30)
and, finally, to leading order in $`\eta `$,
$$\frac{S}{E}\lesssim \frac{8\pi }{3}(3\eta \,l_{\mathrm{cm}}^3l_0)^{1/4}\lesssim \frac{8\pi }{3}b\,(3\eta )^{1/4}\ll b=2R.$$
(31)
Thus, we see that if the box floats very near the horizon, it follows that the entropy bound (2) is already satisfied by a wide margin. Consequently, the bound (2) does not have to be postulated as an additional requirement.
This research was supported in part by NSF grant PHY 95-14726 and ONR grant N00014-96-1-0127 to the University of Chicago.
# H𝛼 spectropolarimetry of B[e] and Herbig Be stars
## 1 Introduction
Classification of a star as a Be star has long been recognised as consignment to a loosely defined phenomenological group, rather than as a definition of the evolutionary status of the star. Presently the Be stars may be divided into three main groups: (i) the classical Be stars, generally thought of as the most rapidly rotating near main sequence B stars (see e.g. Slettebak, 1988); (ii) the Herbig Be stars, first identified by Herbig (1960) as stars in the B spectral type range whose association with star-forming regions and emission line character might indicate that they are very young; (iii) the B\[e\] stars (Allen & Swings, 1976; Zickgraf et al. 1985), noted for the presence of forbidden line as well as Hi emission in their spectra and strong IR continuum excesses (also seen in Herbig Be stars). While it was initially thought that B\[e\] stars are preferentially supergiants, recent work (Gummersbach et al. 1995) has demonstrated from deep LMC observations that B\[e\] characteristics may also be seen at significantly lower luminosities.
It is now widely accepted that classical Be stars are encircled at their equators by ionized, low opening angle, almost Keplerian disks that are optically-thick in H$`\alpha `$. By contrast, the circumstellar geometry of Herbig Be and B\[e\] stars is rather more of an open question. In this work, we will open up another avenue for exploring this issue. We present medium resolution spectropolarimetry across the H$`\alpha `$ line, a technique that, when polarization changes across the line are detected, can provide an answer to the most basic question “Is the ionized material around these stars spherically symmetric or not?”.
By comparing H$`\alpha `$ polarization with that of the continuum one can exploit the fact that line and continuum respectively form within a larger and smaller volume and subsequently ‘see’ different scattering geometries. Essentially, H$`\alpha `$ is not significantly scattered by the ionized envelope in which it forms, whereas the continuum arising primarily from the central star embedded in the envelope undergoes electron scattering. In the case that the ionized envelope’s projection on to the plane of the sky is non-circular, a net linear polarization is imprinted on the continuum light, but not on H$`\alpha `$ producing a drop in the polarization percentage across the line (‘line-effect’). The addition of further continuum polarization by either a dusty envelope or the ISM modifies this change and may even produce a net percentage rise across the line – but, significantly, it cannot nullify the change. For example, Schulte-Ladbeck et al. (1994) showed that the H$`\alpha `$ emission line of AG Car displayed enhanced polarization at one epoch while on other occasions a de-polarization across the line was observed. After the correction for ISP however, H$`\alpha `$ was de-polarized with respect to the continuum on all occasions.
The advantage of spectropolarimetry over broadband polarimetry is that a result can be obtained even where it is not possible to distinguish the various contributions to the total continuum polarization. Furthermore, by spectrally-resolving the H$`\alpha `$ line profile one can hope to pick out more subtle effects arising in cases where the assumption that the H$`\alpha `$ emission is unscattered, and hence unpolarized, breaks down. Qualitatively these were demonstrated in model calculations by Wood, Brown & Fox (1993). When there is significant scattering of H$`\alpha `$ the line profile in linear polarized light becomes a probe of the velocity field in the electron-scattering medium. Using this tool we have already shown in the case of the B\[e\] star, HD 87643, that there is direct evidence of a rotating and expanding outflow (Oudmaijer et al 1998).
The pioneering work in this area was made in the seventies, when Clarke & McLean (1974), Poeckert (1975) and Poeckert & Marlborough (1976, hereafter PM) conducted narrow-band polarimetric studies of Be stars that compared the linear polarization on and off H$`\alpha `$. Many instances of line de-polarization were found, showing that the envelopes of Be stars do not project as circles onto the sky. After this time, polarimetric studies were made of several classes of object, but due to observational difficulties it remained a specialist activity. However in the past few years there has been rising interest in the technique. Spectropolarimetry has been performed on several strong H$`\alpha `$ emitting evolved stars, such as AG Car and HR Car (Schulte-Ladbeck et al. 1994, Clampin et al. 1995), where the position angle of the spatially-unresolved flattened electron scattering region has been shown to agree with the observed extension of the optically visible nebulae surrounding these objects. Both the B\[e\] and Herbig Be stars are ideal objects to subject to this style of observation, since they are strong H$`\alpha `$ emitters and often optically bright enough to render studies at medium resolution with high photon counts feasible with 4-meter class telescopes. Furthermore there is a clear need for this type of observation since a change in the linear polarization across H$`\alpha `$ can be the only direct evidence of electron scattering operating on the scale of a few stellar radii as opposed to polarization by a dusty envelope.
In the first instance, the observations presented here were motivated by the aim of examining Herbig Be stars for the presence of ionized circumstellar disks. These reputedly intermediate mass objects present a phenomenology that suggests they are approaching or have recently achieved a main sequence location on the HR diagram – they are the higher mass counterparts of the T Tauri stars. The paradigm for star formation invokes a collapsing cloud and conservation of angular momentum that results in the formation of a flattened circumstellar (accretion) disk, that eventually accretes or is blown away by an outflow. However there is not yet a consensus that accretion disks are commonly associated with the known Herbig Be (and Ae) stars. There is a certain irony that T Tauri stars, their lower mass counterparts, are generally accepted to have disk-like envelopes (e.g. HH30 in Burrows et al. 1996), while evidence is accumulating that their higher mass counterparts, the optically obscured massive Young Stellar Objects (YSOs) are also surrounded by disk-like structures (e.g. Hoare & Garrington 1995, and references therein).
MERLIN radio data on MWC 297, a nearby radio-bright early Herbig Be star, also reveal an elongated (but ionized) structure on a spatial scale of $`\sim `$100 AU (Drew et al. 1997). More direct high resolution imaging is clearly worthwhile and of course spectropolarimetry can help identify interesting targets. Nevertheless, at present, there persists a debate as to whether the observed spectral energy distributions require dusty disks (e.g. Malfait, Bogaert & Waelkens, 1998) or whether they can be fit satisfactorily by spherically symmetric dusty envelopes (Miroshnichenko, Ivezić & Elitzur, 1997; see also the overview of this issue in Pezzuto, Strafella & Lorenzetti 1997). The recent direct detection of a rotating disk around a Herbig Ae star by Mannings, Koerner & Sargent (1997) indicates that at least some of these objects have disk-like geometries. Broad-band polarimetry of a number of Herbig stars has revealed variability of the polarization of the objects, which could imply deviations from spherical symmetry of the dusty envelopes (e.g. Grinin et al. 1994, who studied UX Ori; Jain & Bhatt, 1995). By contrast, the H$`\alpha `$ spectropolarimetry traces scales even closer to the star, the ionized material.
With regard to the B\[e\] stars, first picked out by Allen & Swings (1976), the argument for embedding them in disk-like equatorial structures has largely been won in that there is widespread acceptance of Zickgraf’s phenomenological model (Zickgraf et al 1985, 1986). This is because there is compelling spectral evidence of a fast, presumably polar, wind at UV wavelengths, that combines with a high emission measure, much more slowly expanding, presumably equatorial, flow traced by strong optical emission lines. Broad-band polarimetry by Zickgraf & Schulte-Ladbeck (1989) and Magalhães (1992) indicates that for a sub-sample of B\[e\] objects, the circumstellar dust, located at larger distances from the star, is distributed in a geometry deviating from spherically symmetric. The unresolved issue is how these axially-symmetric structures arise and indeed what the stellar evolutionary status of this object class really is. The fact that B\[e\] stars are far from being exclusively supergiants deepens the mystery. In this context, Herbig’s (1994) concern about the difficulty of distinguishing Herbig Be from B\[e\] stars becomes all the more intriguing. To progress in understanding how B\[e\] disks arise, a more complete description of the disk density and velocity field is highly desirable. It is in this respect that H$`\alpha `$ spectropolarimetry has the potential to provide unique insights.
Because of the problems of distinguishing between the B\[e\] and Herbig Be categories, there is always a significant probability that a Herbig Be sample contains some B\[e\] stars. Indeed, for Galactic B\[e\] stars it is often difficult to determine whether an object is a luminous evolved object or a less luminous pre-main sequence object (see e.g. the discussions on HD 87643; Oudmaijer et al.1998, MWC 137; Esteban & Fernández 1998, and HD 45677; de Winter & van den Ancker 1997). Here we exploit this in that our programme of H$`\alpha `$ spectropolarimetry programme includes as targets relatively clear-cut examples of post main sequence B\[e\] stars alongside undisputed Herbig Be stars and objects that might be either. In this paper we give an overview of our observing campaign to date. In Section 2, the way in which targets were selected and the observations are discussed. The results and their interpretation are presented on a case-by-case basis in Sec. 3. Sec. 4 contains a discussion on the power of spectropolarimetry and what we have learned from this program. We conclude in Sec. 5.
## 2 Observations
### 2.1 Sample selection
The target stars were selected from the catalogue of Thé, de Winter & Perez (1994) which lists all objects that had been at that time proposed to be Herbig Ae/Be objects, and provides tables of other emission type objects whose nature is not clear. The list of targets is provided in Table 1. The targets were not selected with foreknowledge of envelope asphericity, rather, they were chosen because of their relative brightness, their position on the sky, and their early (B-type) spectral types.
### 2.2 Spectropolarimetry
The optical linear spectropolarimetric data were obtained using the RGO Spectrograph with the 25cm camera on the 3.9-metre Anglo-Australian telescope during three observing runs in January 1995, December 1995 and December 1996 respectively. During the first two runs, the weather provided some spectacular views of lightning from the telescope, but only limited data. During clear time, we aimed at observing the brightest objects in order to make the best of lower-than-desired count rates. Nevertheless, the resulting polarization measurements proved to be very stable. The last run was mostly clear, opening the way for time to be spent on some of our fainter targets.
The instrumental set-up was similar during all observing runs and consisted of a rotating half-wave plate and a calcite block to separate the light into perpendicularly polarized light waves. Two holes of size 2.7 arcsec and separated by 22.8 arcsec in the dekker allow simultaneous observations of the object and the sky. Four spectra are recorded, the O and E rays of the target object and the sky respectively. One complete polarization observation consists of a series of consecutive exposures at four rotator positions. Per object, several cycles of observation at the four rotator positions were obtained in order to check on the repeatability of the results. Indeed, we find that multiple observations of the same star result in essentially the same polarization spectrum. To prevent the CCD from saturating on the peak of H$`\alpha `$, shorter integration times were adopted for those objects with particularly strong H$`\alpha `$ emission. Spectropolarimetric and zero-polarization standards were observed every night.
A 1024 $`\times `$ 1024 pixel TEK-CCD detector was used which, combined with the 1200V grating, yielded a spectral range of 400 $`\mathrm{\AA }`$, centered on H$`\alpha `$. Wavelength calibration was performed by observing a copper-argon lamp before or after each object was observed. In all observations reported here a slit width of 1.5<sup>′′</sup> was used. A log of the observations is provided in Table 1. Bias-subtraction, flatfielding, extraction of the spectra and wavelength calibration was performed in iraf (Tody 1993). The resulting spectral resolution as measured from arc lines is 60 km s<sup>-1</sup>. The E and O ray data were then extracted and imported into the Time Series/Polarimetry Package (tsp) incorporated in the figaro software package maintained by starlink. The Stokes parameters were determined and subsequently extracted.
A slight drift of a few degrees in position angle (PA) was calibrated by fitting its wavelength dependence in nightly 100% polarized observations of bright unpolarized stars (obtained by inserting an HN-22 filter in the light-path) and removed from the polarization spectra. The instrumental polarization deduced from observations of unpolarized standards proved to be smaller than 0.1% in all cases.
In July 1998, MWC 297 was observed in service time with the ISIS spectrograph and polarimetric optics on the 4.2m William Herschel Telescope, La Palma. The instrumental set-up included the 1200R grating and a 1124$`\times `$1124 TEK2 detector, providing a wavelength coverage of 400 $`\mathrm{\AA }`$ around H$`\alpha `$ and a spectral resolution of 40 km s<sup>-1</sup>. The data reduction was the same as for the AAT data.
Polarization accuracy is in principle only limited by photon-statistics. One roughly needs to detect 1 million photons per resolution element to achieve an accuracy of 0.1% in polarization (the fractional error goes as $`\sim 1/\sqrt{N}`$). However, although it is probably fair to say that the internal consistency of a polarization spectrum follows photon-statistics, the external consistency (i.e. the absolute value for the polarization, checking for variability) is limited due to systematic errors. For example, when calculating the polarization of a given spectral interval one can reach polarization percentages with a statistical error of several thousandths of a percent. However, instrumental polarization (less than 0.1%), scattered light and low-level intrinsic variability of the polarization standards may influence the zero-points. The quality and amount of data taken of spectropolarimetric standard stars is at present not yet sufficient to reach absolute accuracies below the 0.1% mark (see manual by Tinbergen & Rutten, 1997). A feeling for the possible accuracies in our data can be obtained by studying some of the objects that have been observed on different occasions. Seven and 6 independent observations of the Be star HD 76534 and the polarization standard HD 80558, respectively, yield a mean polarization and rotation of (0.49 % with an r.m.s. scatter of 0.03 %, 124<sup>o</sup> with a scatter of 3<sup>o</sup>) and (3.19 $`\pm `$ 0.11% , 162 $`\pm `$ 1.6<sup>o</sup>). It is encouraging to note that our independent continuum measurements stretching over more than a year are mostly within 0.1%, and often within 0.05%, in polarization.
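Spelled out, the photon-statistics requirement quoted above is simply

$$\sigma _P\approx 1/\sqrt{N}\quad \Rightarrow \quad N\approx \sigma _P^{-2}=(10^{-3})^{-2}=10^6$$

photons per resolution element for $`\sigma _P=0.1\%`$.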
## 3 Results
Some H$`\alpha `$ parameters and continuum polarizations are presented in Table 2. In the following, the results across the full range of targets observed are summarized. These are grouped such that we begin with those objects showing no discernable polarization changes across H$`\alpha `$ (§3.1), and then move on to objects which do show percentage changes and/or rotations (§3.2).
Unless specifically stated, we have made no attempt below to correct for the interstellar polarization (ISP). This decision is based on the following: the main goal of this study is the detection of polarimetric changes across the H$`\alpha `$ line. Since the wavelength dependence of the interstellar polarization only becomes apparent on wavelength ranges larger than our spectra provide, the ISP will only contribute a constant polarization vector in (Q,U) space to the observed spectra. A further reason to refrain from ISP corrections here, is that the methods commonly used for this (field-star method and continuum variability, see e.g. McLean & Clarke 1979) do not always return unambiguous values. However, in the absence of ISP correction, it is useful to remember the point raised in the introduction that the ISP can change what might otherwise be a reduction of the linear polarization percentage across the H$`\alpha `$ line into an increase in polarization, or an apparently constant polarization, but accompanied by a significant rotation in the position angle. The same effect can occur in the event of additional polarization due to circumstellar dust.
Nevertheless, regardless of the influence of the ISP and polarization due to circumstellar dust, it is possible to derive the intrinsic angle of the electron-scattering material (e.g. Schulte-Ladbeck et al.1994). Assuming the line is depolarized, the vector connecting the line- and continuum polarization in the QU plane will have a slope that is equivalent to the intrinsic angle of the scattering material responsible for the continuum polarization. Since the wavelength dependence of both circumstellar dust polarization and ISP is small, they add only a constant QU vector to all points in both line and continuum, and thus will not affect the difference in line-to-continuum polarization. This slope is measured as $`\mathrm{\Theta }`$ = 0.5$`\times `$atan($`\mathrm{\Delta }`$U/$`\mathrm{\Delta }`$Q).
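A minimal sketch of this estimate is given below (the function and variable names are ours, not those of any package used); using atan2 keeps the derived angle in the correct quadrant before folding it into the 0–180<sup>o</sup> range.

```python
import numpy as np

def intrinsic_pa(q_line, u_line, q_cont, u_cont):
    """Intrinsic position angle (deg) of the electron-scattering material from
    the line-to-continuum shift in the QU plane: Theta = 0.5 * atan(dU/dQ).
    Any constant ISP or circumstellar dust vector cancels in the difference."""
    dq = q_line - q_cont
    du = u_line - u_cont
    return (0.5 * np.degrees(np.arctan2(du, dq))) % 180.0
```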
### 3.1 Stars showing no clear change across H$`\alpha `$
In this subsection we discuss the objects that do not show a line-effect. In principle such an observation implies that the projection of the ionized region on the plane of the sky is (mostly) circular. We will find that this does not necessarily have to be the case. The objects falling into this group are Lk H$`\alpha `$ 218, Hen 3-230, AS 116, HD 52721, V380 Ori, HD 76534, $`\omega `$ Ori and MWC 297. Their polarization spectra are shown in Figure 1.
##### Hen 3-230, AS 116, Lk H$`\alpha `$ 218
These objects are relatively faint targets for which the signal-to-noise ratios in our data are not so high. Hence, the absence of change across H$`\alpha `$ for the time being should be viewed as an absence of any marked contrast. For example, in the case of Lk H$`\alpha `$ 218 there seems to be enhanced polarization at the position of the stronger redshifted emission component in the line. However this is not strictly even a 2$`\sigma `$ detection. Coarser binning can yield a 3$`\sigma `$ enhanced polarization in the line (at smaller resolution this is only present in one pixel however) but in truth it would appear that 100 minutes exposure is not enough for this object. The null results for Hen 3-230 and AS 116 are more secure. Both targets have extremely bright line emission. As previous observations of them are extremely sparse, their evolutionary status remains undetermined. Based on its low excitation spectrum, Stenholm & Acker (1987) argue that Hen 3-230 is not a Planetary Nebula, despite having been listed as one in many previous papers. AS 116 appeared in a catalogue of emission line stars by Miller & Merrill (1951); since then, little work on it has been published. The IRAS flux peaks at 25$`\mu `$m, which could point to a detached dust shell, but it was not detected in OH maser emission by Blommaert, van der Veen & Habing (1993).
##### HD 52721, V380 Ori
These are two quite convincing examples of no line effect. Both, nevertheless, present significant continuum polarizations. Since HD 52721 presents little of an infrared continuum excess (Hillenbrand et al. 1992), it might seem plausible that this star is a low-inclination classical Be star behind a significant interstellar column. Indeed the single-peaked H$`\alpha `$ emission is consistent with this, but it stands in contrast to the $`v\mathrm{sin}i`$ measurement of 400$`\pm `$40 km s<sup>-1</sup> reported by Finkenzeller (1985), which suggests a high inclination. V380 Ori, exhibiting a strong infrared continuum excess, would appear to be optically-veiled and hence sits more convincingly in the HAeBe object class. The absence of a line-effect in V380 Ori may simply imply a lower inclination to the line of sight.
##### HD 76534
The initial 1995 data on this source have already been presented by Oudmaijer & Drew (1997). There it was shown that these data did not indicate any changes across H$`\alpha `$. Our new data confirm this and show no hints of pronounced polarization variability in its modest $`\sim `$0.5% level. This is despite the source’s propensity for spectral variability clearly illustrated by the 11 January 1995 transformation of H$`\alpha `$ absorption into well-developed double-peaked emission within hours (Oudmaijer & Drew 1997). Table 2 lists the H$`\alpha `$ equivalent widths at the various occasions that the object was observed. The EW changes strongly, but situations similar to the January 1995 data were not observed.
##### $`\omega `$ Ori
It is not clear whether $`\omega `$ Ori should be considered a Herbig Be star, or simply a classical Be star (Sonneborn et al. 1988). The absence of a detectable change across H$`\alpha `$ (Fig. 1) stands in contrast to reports in the literature that the hydrogen recombination lines show de-polarization – PM find that H$`\alpha `$ shows a polarization dip while Clarke & Brooks (1984) find the same in H$`\beta `$. This difference is presumably connected with the stronger H$`\alpha `$ emission reported by PM (line to continuum ratio of 1.8 versus our figure of 1.4 - which was not binned to the same narrow band, and is thus a strong upper limit), and higher linear polarization (0.38% versus our 0.30%). Since classical Be stars and, indeed, Herbig Be stars are known to be emission line variables, this change is probably due to a lowering of the ionized emission measure of the equatorial disk around this star. Given the great disparity between the Thomson scattering and H$`\alpha `$ absorption cross-sections, a relatively modest drop in the H$`\alpha `$ equivalent width could well be accompanied by a collapse in the percentage of linearly-polarized scattered starlight. Hence it would seem that ISP contributes around 0.3% linear polarization in $`\omega `$ Ori, a figure not out of line with PM’s estimate of 0.24% .
##### MWC 297
The weak-to-non-existent effect across H$`\alpha `$ is startling in view of the evidence gathered by Drew et al. (1997) that this early Herbig Be star is viewed at relatively high inclination. Furthermore, the 5 GHz radio image (see Drew et al. 1997), which provides an extinction-free view of the ionized circumstellar medium around MWC 297, indicates an elongated geometry that would suggest a line effect ought to be apparent in such a bright emission line source.
Although at first sight very surprising, we consider two different hypotheses that may explain this apparent paradox. Firstly, we can conclude we are seeing the H$`\alpha `$ line directly, and that the line-forming region is indeed round. Since the H$`\alpha `$ line is formed in a potentially much smaller volume than the continuum 5 GHz radiation, the rounder appearance of the H$`\alpha `$ line-forming region indicates that the geometry changes between the near-stellar scale and the larger scale sampled at radio wavelengths. Spatial evolution of this type has been predicted for lower mass stars (see Frank & Mellema 1996).
Secondly, it may be that H$`\alpha `$ is formed in an edge-on disk like structure, but that the optical light does not reach us directly, and is completely obscured in the line-of-sight. The light that we see could then be ‘mirrored’ by scattering dust clouds located above and/or beneath the obscuring material. If the scattering dust-clouds ‘see’ a nearly circularly symmetric H$`\alpha `$ emitting region, it will not see any de-polarization across the line either. Consequently, the light reaching us will not show any polarization changes across H$`\alpha `$. That dust-scattering plays a role in this object is already suggested by the spectral energy distribution, which shows a notable excess in the $`U`$-band (Bergner et al. 1988; Hillenbrand et al 1992), possibly due to the ‘blueing’ effect.
We may therefore have a similar situation to that in the Red Rectangle (see e.g. Osterbart, Langer & Weigelt 1997, Waelkens et al 1996), where it was only recently realized that the central star is actually not the star itself, but its reflection against dusty knots located above and below a very optically thick dust lane. This finding explained the long-standing problem of the energy balance; the apparently absorbed light from a star with such a modest reddening ($`A_V`$ of order 1) is orders of magnitude less than that being re-radiated in the infrared.
If the circumstances are similar in MWC 297, the reddening ($`A_V`$ $`\sim `$ 8; see discussion of Drew et al. 1997) commonly assigned to this source on the basis of conventional extinction measurements is a severe underestimate. This would not be completely unexpected, as MWC 297 is in certain respects an intermediate object between the optically visible Herbig Be stars, and their more massive counterparts, the Becklin-Neugebauer (BN) type objects, which suffer from large optical extinctions ($`A_V`$ often in excess of 20). While in the optical, MWC 297 has much in common with the Herbig Be stars, at infrared and radio wavelengths it shows evidence of substantial mass loss associated with BN-type objects (Drew et al. 1997).
### 3.2 Objects displaying line effects
Here we present the objects for which the line-effect is observed. First, the Herbig Be stars in this sample (HD 259431, HD 53367 and HD 37806) are discussed; then we turn to MWC 137, a Herbig Be star that has recently been proposed to be a massive evolved B\[e\] star instead; and we end with the well-known B\[e\] objects HD 50138, HD 87643 and HD 45677.
##### HD 259431
We start with the least certain detection, HD 259431. This object (Fig. 2) shows a hint of de-polarization across H$`\alpha `$, from (1.1%, 102<sup>o</sup>) to 0.8% in the line center. The intrinsic polarization angle in QU space measured from the change from the continuum to line polarization, $`\mathrm{\Theta }`$ = 0.5$`\times `$atan($`\mathrm{\Delta }`$U/$`\mathrm{\Delta }`$Q), gives 17<sup>o</sup>, but with a large uncertainty. The length of this vector is small, of order 0.3%. Our two observations, taken one year apart, do not show any variability within the small error-bars. The compilation by Jain & Bhatt (1996), which contains broad-band polarimetric observations, only hints at a slight variability. The line-effect shows up in the high resolution data only when they are binned to errors of 0.1% or less.
##### HD 53367
The polarization spectrum of HD 53367 is shown in Fig. 2. Since there was no difference within the errorbars of the data taken on 2 consecutive nights, these were added to increase the signal-to-noise ratio. The H$`\alpha `$ line is clearly de-polarized with respect to the continuum, while no rotation across the line is present. The intrinsic polarization angle in QU space measured from the slope is 47<sup>o</sup>.
If the line center is assumed to be completely de-polarized, one can use this information to correct the observed polarization for the ISP (the ‘emission line method’). Reading off the polarization in the line-center (Q = 0.0%, U = 0.3%), and subtracting this value from the spectrum then gives an intrinsic polarization of 0.2 $`\pm `$ 0.01 % and a position angle of 44.5 $`\pm `$ 0.5<sup>o</sup> (measured in the bins 6400 – 6500 $`\mathrm{\AA }`$ and 6700 – 6800 $`\mathrm{\AA }`$. Note that the error-bar reflects the internal consistency and not the external consistency), consistent with the slope in the QU plane. This low value of intrinsic continuum polarization is what one would expect from modest electron-scattering (see e.g. PM).
Let us now comment on the significance of the measured polarization angle. On the sky, HD 53367 is located on the periphery of the Canis Majoris ‘hole’ noted for its low reddening sightlines (Welsh 1991). Herbst, Racine & Warner (1978) designated this star a member of the CMa R1 cluster, which they placed at a distance of 1150 pc, the same distance as to the CMa OB1 association (Claria 1974). This distance is suspect as it makes HD 53367 more luminous than a supergiant at the same B0 spectral type. A more reasonable distance estimate would be around 550 pc (adopting a dereddened $`V`$ magnitude of $`\sim `$4, Herbst et al. 1982, and $`M_V\approx -4.7`$ for a B0IV star, Schmidt-Kaler 1982). It is then less surprising that the Hipparcos catalogue (ESA, 1997) contains a finite, although very uncertain, parallax measurement for this star (see also van den Ancker, de Winter & Tjin A Djie 1998). In any event there is strong evidence that the cumulative interstellar extinction towards CMa R1 is not more than $`A_V\sim 0.2`$, implying that the remaining observed extinction is local (Herbst et al. 1982; Vrba, Baierlein & Herbst 1987).
More important, Vrba et al. (1987) demonstrate quite convincingly that within CMa R1 the polarization angle tends to follow the sweep of the southern dust arc up through Sh2 292, the Hii region ionized by HD 53367, and that the polarization is most likely due to grain alignment. At HD 53367 this angle is about 40<sup>o</sup> and entirely comparable to that of the non-emission line B3V star BD -10<sup>o</sup>1839 just $`\sim `$20 arcmin away, and also at a photometric distance of about 600 pc. The interesting feature of HD 53367 is that the separable intrinsic and foreground polarization angles are the same – as indicated by the lack of rotation across H$`\alpha `$ in the observed spectrum. This suggests an orderly star formation process in which the rotation axis of HD 53367 ‘remembers’ the larger scale circumstellar field direction, in preference to a more dynamical mechanism such as the ‘accretion induced collision’ merger model of Bonnell et al (1998), which would result in randomly oriented polarization angles.
##### HD 37806
The double peaked H$`\alpha `$ profile of this object is remarkably different during our two observing epochs (Fig. 3). In December 1996, both peaks were equally bright, while in January 1995 the blue peak is much weaker. Although the signal-to-noise of the January 1995 data is not high, it appears that the object does not show significant changes in polarization. But a rotation is present, which is especially visible in the December 1996 data when the source was observed for longer. The rotation almost exactly occurs in the central dip in H$`\alpha `$, rather than across the entire adequately-resolved line profile. The fact that we observe only rotation is interesting in its own right, as the principle of line de-polarization without superimposed foreground polarization would imply a constant angle across H$`\alpha `$. Clearly there is ISP and perhaps circumstellar polarization present.
If we assume that the underlying, rotated part, of the line-profile is unpolarized, we may attempt a correction for the intervening interstellar- and circumstellar dust polarization, to retrieve the intrinsic spectrum of this object. The (Q,U) vector measured in the central dip of the rotation, corresponds to (-0.06%,0.35%) or a polarization of 0.36% with PA 50<sup>o</sup>. Subtracting these values from the observed spectra results in Fig.4. This figure also shows the ‘polarized flux’ (polarization $`\times `$ intensity). The polarized flux indicates that the double-peaked H$`\alpha `$ line has the same polarization as the continuum, but – by virtue of the manner in which the ISP was corrected for – the central dip between the peaks is de-polarized. The 1995 spectrum shows the same behaviour. Despite its much lower signal-to-noise, the large Red/Blue ratio has virtually disappeared in the polarized flux spectrum, suggesting a large part of the red peak is not associated with the line-forming region responsible for the double peaks.
A consistency check can be made as to whether the choice of ISP for the correction is reasonable. We have searched the Matthewson et al. (1978) catalogue for field stars nearby the object and found 49 objects within a radius of 180 arcmin (one object with P = 12% was excluded for the analysis). The catalogue also provides photometric estimates of the total extinction $`A_V`$ to these objects. A relatively tight relation exists between the observed polarization and $`A_V`$. A least-squares fit to the data gives the relation P (%) = 1.5 $`\times `$ $`A_V`$(mag.) + 0.07. The PA shows mostly a scatter diagram, and gives a mean of 76<sup>o</sup>$`\pm `$ 39<sup>o</sup> for the total sample. The $`A_V`$ towards HD 37806 is ambiguous, but likely to be low: Van den Ancker et al. (1998) reclassify HD 37806 as an A2Vpe star and give its extinction as $`A_V=0.03`$. In contrast, Malfait, Bogaert and Waelkens (1998) find an E(B–V) of 0.14 ($`A_V`$= 0.43 if the ratio of total to selective reddening, $`R`$, is 3.1) for a B9 spectral type. Based on the extinction and the field stars, the ISP towards HD 37806 should be between 0.1% and 0.7%. The ‘emission line’ method gives 0.36%, a value that is at least consistent with the value returned from the field stars.
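The quoted range follows from inserting the two extinction estimates into the field-star relation:

$$P\approx 1.5\times 0.03+0.07\approx 0.1\%\qquad \mathrm{and}\qquad P\approx 1.5\times 0.43+0.07\approx 0.7\%.$$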
Taking the derived intrinsic polarization spectrum at face value, it appears that the H$`\alpha `$ line profile is a composite of two unrelated components: a double-peaked polarized component and a single, unpolarized component. Since both the photospheric continuum radiation and the double-peaked component are equally polarized, it would appear that they both appear point-like to the scattering material, while the single component is formed further away from the star, betraying the asymmetry of the scattering region. Perhaps due to the signal-to-noise in the data, no de-polarization is visible in the blue peak. This could suggest that the de-polarization in the red peak is not necessarily due to electron scattering, since one would expect the electron scatterers to be located close to the star. Instead, the data do not exclude the possibility that the red peak is located in an extended nebula (which is not resolved in our data however), while the underlying broader emission and the photosphere are polarized by circumstellar dust, which, by implication, is not distributed spherically symmetrically around the star.
Intriguingly, the line-to-continuum ratios of the red peak are constant (see Table 2), while broad-band photometry of this object also appears constant (Van den Ancker et al. 1998) so the blue peak has increased in strength. This fact, combined with the relatively equal red/blue ratio of the double-peaked line in polarized flux, suggests that the polarized red part of the line has also become stronger. This leads to the enigmatic situation that the polarized part of the red peak increased in strength while the unpolarized part of the red peak decreased in strength in such a way that their total has remained constant in time.
Clearly, this object needs further study, both spectropolarimetric, and from a modelling perspective to gain more understanding as to the origin of the observed polarization.
##### MWC 137
Although Thé et al. (1994) labelled this source as a probable Herbig Be star, in a recent study Esteban & Fernández (1998) argued that it is much more likely to be an evolved B\[e\] supergiant. Their arguments, based on a kinematical association with molecular clouds at more than 5 kpc, are reasonably convincing, but we note that the nature of this source has long been controversial (see references in Esteban & Fernández).
The polarization spectrum and QU diagram of MWC 137 are shown in Fig. 5. The continuum polarization of the object is very large (6%), and slight polarization changes across H$`\alpha `$ are visible. Much clearer is the broad, observed rotation of H$`\alpha `$, centered on the line peak. The rotation is at most only 3<sup>o</sup>, but it is real as is evident from the significant loop apparent between line and continuum in QU space (Fig. 5); the shift is along the QU vector (+0.3%, +0.5%), corresponding to an intrinsic angle in the continuum of 30<sup>o</sup> and a depolarization of $`\sim `$ 0.6% (measured from the length of the polarization vector, $`\sqrt{\mathrm{\Delta }Q^2+\mathrm{\Delta }U^2}`$). The same situation as for HD 37806 described above occurs in this case: the polarization from intervening material has transformed a de-polarized line into a rotated line. Slight changes in the observed polarization are still present in the wings of the emission, suggesting that the line wings have a different polarization from both the continuum and the centre of the emission line.
The interstellar reddening towards the object is very large. It was redetermined by Esteban & Fernández (1998) to be $`A_V`$ = 3.77. This is very high, but according to the authors consistent with a very large distance to the object (the authors mention 6 kpc). If true, then the large continuum polarization can be explained mostly by the interstellar reddening. The polarization of the field stars within a radius of 300 (taken from Matthewson et al. 1978) increases linearly with $`A_V`$ up to 4% at $`A_V`$ $`\sim `$ 2.2, the largest $`A_V`$ among the field stars in the sample. Unfortunately no information for stars more distant or more reddened is available, but it is clear that a large ISP can be expected for the object.
The rotation across the H$`\alpha `$ line is best interpreted as a depolarization across the line, but modified by the intervening ISP and circumstellar dust polarization. Using the ‘emission line’ method, we find a dust polarization of P=6.2%, $`\mathrm{\Theta }`$=159<sup>o</sup>, in agreement with the large expected ISP. The intrinsic PA of the electron scattering medium of 30<sup>o</sup> appears to be parallel with the bright North-Western component of the ring nebula around the object (see again Esteban & Fernández 1998). This may suggest that asymmetries at very small scales, traced by the electron scattering, and at large scales (from the image) are still aligned.
##### HD 50138
In their search for Herbig Ae/Be stars, Thé et al. (1994) found a subsample of objects with many Herbig characteristics, that nevertheless did not fulfil all their criteria. Based on the strong emission lines, they called this group ‘extreme emission line objects’. Most of these stars are also classified as B\[e\] stars, because of the presence of forbidden lines in the spectrum. HD 50138 is one of these.
Our two observations of HD 50138 (Fig. 6), taken two years apart in January 1995 and January 1997, show essentially the same behaviour. The emission line is double peaked, with a large red to blue ratio of the peaks. The velocity separation of the two peaks is 160 km s<sup>-1</sup> and the central minimum is at 18 km s<sup>-1</sup> (helio-centric), which is 15 km s<sup>-1</sup> blueshifted from the forbidden \[O i\] line at 6363 $`\mathrm{\AA }`$, for which we measure a central velocity of 33 $`\pm `$ 5 km s<sup>-1</sup>(helio-centric), in agreement with the radial velocity determination by Pogodin (1997).
The H$`\alpha `$ line shows strong de-polarization across the red peak, while the blue peak shows at most a slight de-polarization. The ‘intrinsic’ polarization angle, as deduced from the shift between line and continuum in QU space, is about 155<sup>o</sup>, close to the measured one, implying that the ISP, if any, has had no great effect on the observed polarization characteristics. This is more or less in keeping with the moderate reddening towards this source (van den Ancker et al. 1998 assign $`A_V=0.59`$, possibly an upper limit considering that $`B-V`$ for this mid-B star is close to zero) and the low interstellar polarization in the line of sight, as many objects around HD 50138 have very low polarizations for typical extinction values of about 0.5 (Matthewson et al. 1978).
The polarized flux spectrum (polarization $`\times `$ flux, Fig. 6) reveals equally strong blue and red peaks at both epochs. This may indicate that part of the red emission is formed in the same region as the blue peak, such as a rotating disk, but that the excess emission compared to the blue line is formed in a larger volume, resulting in the observed de-polarization. The most straightforward explanation to account for the ‘excess’ flux in the red peak, as in the case of HD 37806, is that the intensity spectrum is a composite of a rotating disk type of geometry close to the star, and an additional, extended single peaked component. A clue to the line formation may be provided by the spectrum taken around the lower opacity H$`\beta `$ line by Jaschek & Andrillat (1998). Their published H$`\beta `$ profile shows the blue and red peaks roughly equal, with some underlying photospheric absorption still visible. The much larger red-to-blue ratio in H$`\alpha `$ could imply that the excess red emission in H$`\alpha `$ is optically thin - since for thin Hi emitting gas the Balmer decrement may be substantially steeper than for optically-thick gas. The double-peaked part of the line could then be an optically-thick line formed very close to the star, while the unpolarized line is optically-thin.
It is clear that any picture of this object must be simplified since the spectrum of HD 50138 shows many more peculiarities. Grady et al. (1996) include this star in their sample of objects exhibiting $`\beta `$ Pic type infall phenomena. Pogodin (1997) drew attention to the variable Hei $`\lambda `$5876 line profile, which sometimes shows an inverse P Cygni behaviour. He also attributed this to infall of material.
##### HD 87643
HD 87643 is a B\[e\] star, for which evidence suggests that it is located at several kpc, indicating a massive and evolved nature of the object. This, and its spectroscopic and spectropolarimetric data, have been analysed in more detail in Oudmaijer et al. (1998). For completeness we show the data and the spectrum corrected for ISP and circumstellar polarization in Fig. 7. The polarization of this object shows some striking features. After correction for the intervening polarization, it turns out that most of this structure appears to be an artefact of the polarization vector additions; the spectrum corrected for ISP and circumstellar dust has a much smoother behaviour. A comparison with the schematic model calculations by Wood, Brown & Fox (1993) indicates that the polarization profile can be best reproduced with a circumstellar disk that is both rotating and expanding.
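The correction for intervening polarization amounts to subtracting a constant vector in the QU plane and recomputing the polarization percentage and position angle from the remainder, which is why apparent structure can vanish once the foreground contribution is removed. A minimal sketch of that bookkeeping is given below (an assumed, illustrative helper; the actual correction applied to HD 87643 is described in Oudmaijer et al. 1998).

```python
import numpy as np

def to_qu(p, theta_deg):
    """Convert a (P, theta) pair into Stokes Q and U (angles are doubled in the QU plane)."""
    t = np.radians(2.0 * theta_deg)
    return p * np.cos(t), p * np.sin(t)

def remove_foreground(p_obs, theta_obs, p_fg, theta_fg):
    """Subtract a constant foreground (ISP + circumstellar dust) polarization vector."""
    q, u = to_qu(np.asarray(p_obs), np.asarray(theta_obs))
    qf, uf = to_qu(p_fg, theta_fg)
    qi, ui = q - qf, u - uf
    return np.hypot(qi, ui), 0.5 * np.degrees(np.arctan2(ui, qi)) % 180.0
```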
##### HD 45677
From their UV - optical low resolution spectropolarimetry Schulte-Ladbeck et al. (1992) infer that HD 45677 is surrounded by a bipolar nebula. After correcting for ISP, the polarization angle of the blue/UV spectrum they present is rotated by about 90<sup>o</sup> with respect to the red part of the spectrum. The explanation advanced for this behaviour is that the red emission is scattered through a dust torus, while the blue emission, to which the dust is optically thick, comes from a scattering bipolar flow, perpendicularly oriented with respect to the torus.
The spectra taken in January 1995 and December 1996 are shown in Fig. 8. On both occasions the polarization across H$`\alpha `$ is enhanced with respect to the continuum. The variability of the continuum polarization is very strong, with the polarization changing from 0.4% to 0.2% and the position angle from 11<sup>o</sup> to 140<sup>o</sup>. The peak of H$`\alpha `$ shows in both cases roughly the same polarization and position angle ($`\sim `$0.8%, 75<sup>o</sup>). Although H$`\alpha `$ is enhanced in the observed polarization spectrum, this does not necessarily imply that the intrinsic spectrum exhibits the same effect. In the following we discuss the different elements (ISP, circumstellar dust scattering and electron scattering) that shape the observed polarization spectra. As explained below, we assume that the polarization changes across H$`\alpha `$ are due to electron-scattering.
The spectra are plotted in QU space in Fig. 9. Both circumstellar dust polarization and ISP add only a constant QU vector to all points. The continuum points cluster at different positions on the dates of observation, while the QU vectors across H$`\alpha `$ are almost parallel. A least squares fit through the QU-points between 6550 and 6570 $`\mathrm{\AA }`$ returns intrinsic polarization angles of 168 $`\pm `$ 3<sup>o</sup> and 163 $`\pm `$ 3<sup>o</sup> for the 1995 and 1996 spectra respectively. Depending on the quadrant where the intrinsic QU vectors are located, these values could be rotated by 90<sup>o</sup> and define a projected angle on the sky of $`\sim `$ 75<sup>o</sup>. The length of the vector on both occasions corresponds to P $`\sim `$ 0.95%, assuming the line to be unpolarized. The electron-scattering region is thus aspherical, with a PA of $`\sim `$ 75<sup>o</sup> on the sky and a relatively constant amplitude of about 0.95%.
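The least-squares step can be sketched as follows (an illustrative, assumed helper rather than the code actually used): the QU points across the line are fitted with a straight line, and the direction of that line fixes the polarization angle only up to the 90<sup>o</sup> quadrant ambiguity discussed above, which has to be resolved from other constraints.

```python
import numpy as np

def intrinsic_angle(q, u):
    """q, u: arrays of Stokes Q and U (per cent) sampled across the line profile."""
    slope, _ = np.polyfit(q, u, 1)               # direction of the QU excursion
    theta = 0.5 * np.degrees(np.arctan(slope)) % 90.0
    return theta, theta + 90.0                   # the two possible position angles on the sky
```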
How does this relate to the view taken by Schulte-Ladbeck et al. (1992) of their similar detection of enhanced H$`\alpha `$ linear polarization? In their ISP-corrected spectrum, H$`\alpha `$ is still enhanced with respect to its adjacent continuum. They explained this in terms of an ionized region much closer to the circumstellar dust than the stellar point source: the H$`\alpha `$ line then sees a larger solid angle of scattering material and is thus more polarized than the continuum. This seems to us an unlikely alternative to the conventional view that the ionized region, with a temperature of $`\sim `$10000 K, is located within a very much smaller volume around the star than the dust, which should have an equilibrium temperature below $`\sim `$1500 K, the dust condensation temperature (see e.g. the spectral energy distribution modelling of HD 45677 by Sorrell 1989). Furthermore, the ISP correction adopted by Schulte-Ladbeck et al. (1992) was the ad hoc value proposed by Coyne & Vrba (1976) on the basis that it should be comparable with the relatively steady observed polarization in the blue (the red is much more variable). Whatever this correction actually does correspond to, there is no reason to suppose that it accounts for both the ISP and circumstellar dust polarization. If both sources of foreground polarization can be removed with confidence, only then can it be discerned whether there is an intrinsic linear polarization enhancement across H$`\alpha `$. Hence, for the time being, we retain the more conventional view that the observed polarization change across H$`\alpha `$ is due to electron scattering in the ionized region, which may project as an oblate ‘disk’ or as a prolate ‘bipolar flow’.
Now we turn to consider the change in polarization of the continuum points. As it is likely that the ISP is constant in time, the additional, potentially variable, mechanisms involved are electron-scattering and circumstellar dust scattering. In the same manner as the polarization changes across H$`\alpha `$ move along a constant angle, temporal changes, due to a polarization mechanism that becomes stronger or weaker, may only affect the magnitude of polarization and not the PA. If we adopt this principle here, the intrinsic PA of the polarizing material responsible for the change in continuum polarization is then $`\sim `$ 20<sup>o</sup>.
Since the intrinsic PA of the electron scattering region is $`\sim `$ 75<sup>o</sup>, and its effect on the H$`\alpha `$ line appears not to have changed significantly, the variable component does not seem to be electron scattering. This leaves circumstellar dust scattering as the likely variable. Although the polarization variability of HD 45677 is well known, dating back to the paper in 1976 by Coyne & Vrba, it is only the high spectral resolution of the current data that allows us to discriminate between electron and dust scattering.
The major question remaining is why the apparent rotation of about 55<sup>o</sup> between the major axis of the (variable) dust polarization and the H$`\alpha `$ line forming region exists. In principle, one would expect multiple geometries, such as a combination of an equatorial disk and a bipolar flow, to show a rotation of 90<sup>o</sup>, regardless of the respective opening angles. The cancellation of perpendicularly-oriented polarization vectors tends to increase for larger opening angles, decreasing the observed polarization percentage, but the 90<sup>o</sup> rotation will remain intact. An answer may lie in the clumpiness of the dusty material around the object. HD 45677 is photometrically variable, a property attributed to the presence of various dust-clouds surrounding and even orbiting the object (de Winter & van den Ancker 1997). Clumpy material is also revealed by the spectroscopic variability. Grady et al. (1993) show the presence of variable redshifted absorption lines which are attributed to infalling and evaporating cometary bodies. These indicate patchiness of the circumstellar material as well.
The effect of such clumpy material on the polarization angle is relatively easy to understand. If (one of) the scattering regions is clumpy, a rotation of 90<sup>o</sup> is only retrieved if the clumps are symmetrically distributed around the central star. If not, not all perpendicularly oriented polarization vectors will cancel out, and the net effect is that the observed position angle does not represent the time-averaged mean orientation of the scattering material. A similar argument has been brought forward by Trammell et al. (1994) to explain the rotation of 70<sup>o</sup> (instead of 90<sup>o</sup>) in the spectrum of IRAS 08005-2356. It thus seems that the rotation between the dust ring and the H$`\alpha `$ line forming region, which is less than 90<sup>o</sup>, could be the result of scattering with incomplete cancellation in an inhomogeneous region.
A word of caution should be given here with regard to the ‘true’ orientation of the dusty material. The 20<sup>o</sup> measured between the continuum points in Fig. 9, and its comparison with the intrinsic angle of the electron scattering region, point to this clumpiness. However, the direction along which the variation occurred should not now be associated with the orientation of the scattering region, since we only measure the incomplete cancellation of dust clumps at changing positions. Only sequences of observations of this type may give a clue as to whether the variable dust component arises from a region perpendicular to or colinear with the electron scattering region.
The main result of the new observations of HD 45677 is that it is possible to discriminate between the electron scattering and the dust scattering regions. The former can be probed by the change across H$`\alpha `$ while the latter is probed by the variability of the polarization of the continuum.
## 4 Discussion
### 4.1 What the observations tell us
This paper concerned medium resolution spectropolarimetry of a relatively large sample of B\[e\] and Herbig Be stars, objects which so far have never been observed in this way. We have described the results of these exploratory observations in a qualitative way and we summarize the highlights of both non-detections and detections below, and then briefly discuss the implications of the results.
#### 4.1.1 ‘Non’-detections
The main goal of the observations was to answer the basic question “Does the ionized material around these stars project to circular symmetry on the sky, or not?” by investigating whether or not a ‘line-effect’ is seen across H$`\alpha `$. In principle, a non-detection should imply a circular projection. This encompasses three-dimensional geometries that are spherically-symmetric, or disk-like seen close to face-on.
Two of the non-detections failed in quite different ways to fit into this simple picture:
First, in the case of MWC 297 – a young B1.5 star in the Aquila Rift (Drew et al. 1997) – observation over two hours or so produced evidence of, at most, a subtle line effect. This was despite the fact that the radio image of this object shows a clearly elongated ionized gas distribution. An intriguing interpretation, testable by high resolution imaging, is that we view MWC 297 only indirectly at optical wavelengths. Even if the direct sightline to the disk were to reveal an edge-on structure, the disk may be obscured from view due to the large extinction; scattering dust clouds may then see a more circularly symmetric structure, such as a face-on disk, and reflect a polarization spectrum without a line effect to the observer.
Second, we have presented data on $`\omega `$ Ori, a star that has been reported twice before in the literature as showing an H$`\alpha `$ line effect (in both instances the observations were narrow-band rather than spectropolarimetric). The absence of any such effect in our data suggests that the ionized envelope is smaller than on previous occasions, because the optically thin electron scattering is more sensitive to changes in ionization than the optically thick H$`\alpha `$ emission. The non-detection in $`\omega `$ Ori implies that single epoch measurements are not always sufficient to provide a definitive answer on the circumstellar geometry of these objects.
In both cases, it is clear that H$`\alpha `$ spectropolarimetry is best judged in the context of other observational constraints on the target. Indeed, these cases warn against assuming too quickly that the ionized regions in stars without a line effect are face-on disks or spherically-symmetric.
#### 4.1.2 Objects displaying a line-effect
In contrast, the detection of a line-effect immediately tells us that the scattering region does not project to circular symmetry on the sky. The data presented in this paper provide a new, richer variety of line effects than has hitherto been seen in the literature. This is in part due to their relatively high spectral resolution.
The curious and somewhat similar cases of HD 37806 and HD 50138 offer great opportunities to understand the conditions close to the star. Both objects show double-peaked H$`\alpha `$ line profiles that have different $`V/R`$ ratios in normal intensity spectra, but which turn out to be of equal strength in the polarized flux data. Both stars also exhibit a superposed single component of H$`\alpha `$ emission that is also picked out by a change in linear polarization at much the same wavelength. This suggests that the line profile as a whole may be the result of two kinematically-distinct phenomena. These may be a rotating disk (or self-absorbed compact nebula), and a spatially more extended region of less-rapidly expanding Hii whose emission is only polarized by the ISM.
A particularly striking result to emerge from our data concerns the probable Herbig Be star, HD 53367. We have observed depolarization across H$`\alpha `$ without angle rotation. The co-alignment of the local interstellar magnetic field and the stellar rotation axis, together with the findings of Vrba et al. (1987), favours formation of this relatively massive star ($`M>10M_{\odot }`$) by collapse rather than by the merger of less massive stars.
The power of repeated observations is shown by the case of HD 45677. Due to the continuum variability and the constant polarization arising from electron scattering, it is possible to distinguish between the electron and circumstellar dust scattering mechanisms. This is the first time that it has been possible to do this. Since the measured intrinsic angles of the dusty and the ionized region differ by 55<sup>o</sup> rather than 90<sup>o</sup> (for perpendicular geometries) or 0<sup>o</sup> (for parallel geometries) we can conclude that the dust component is clumpy.
### 4.2 Implications for Herbig Be and B\[e\] star research
Among the probable Herbig Be stars we have observed, around half of them have shown no detectable line effect (LkH$`\alpha `$ 218, HD 52721, V380 Ori and MWC 297). This of course means that almost half have (namely HD 259431, HD 37806 and HD 53367) and should encourage further campaigns of this nature. Using narrow bands, Poeckert & Marlborough (1976) surveyed 48 Be stars and found that 21 showed the line effect at a 3$`\sigma `$ level, while a further 8 show the line effect at a 2-3$`\sigma `$ level. They investigated the relation between the intrinsic polarization of these Be stars (measured from the H$`\alpha `$ polarization change) with $`v\mathrm{sin}i`$ (as measure of inclination) and found that their observations could well be explained by inclination effects. Although based on very small number statistics, the comparable incidence of line effects in the Herbig Be stars observed does hint that flattened ionized circumstellar structures are quite common for this object class as well, and that the non-detections in our sample could be due to random sampling of the full range of inclinations. One of our non-detections is of course MWC 297, an object revealed by radio imaging to be non-circular (albeit on a scale of tens of AU).
It is important to appreciate that the H$`\alpha `$ polarization effect is sensitive to much smaller structures than are presently probed directly by imaging. Analytical calculations such as those by Cassinelli, Nordsieck & Murison (1987, their Fig. 7) demonstrate that the bulk of electron scattering occurs on scales as small as two to three stellar radii. If the Herbig Be stars we have observed are indeed young or even pre-main sequence objects, then the deviation from spherical symmetry of the ionized region is presumably a consequence of the way in which they have formed. Viewed in these terms, the structures that we detect now via these polarization measurements could well be accretion disks that reach to within a stellar radius or so of the stellar surface. This conclusion is at variance with the magnetospheric accretion model widely regarded as applicable to T Tau stars, wherein magnetic channeling inhibits the formation of the inner disk (see Shu et al. 1994). For Herbig Be stars it remains a more open question as to how far magnetic fields determine the accretion geometry. Since the main sequence destiny of these more massive stars is to possess radiative envelopes, it leaves more room to doubt that magnetic fields must play a big role at the stage in which we are able to observe them. It will be interesting to see if further H$`\alpha `$ spectropolarimetry continues to uncover plausible disk accretors.
A conceptual model of how these objects may look, still embedded in accretion disks reaching into the stellar surface, has recently been devised by Drew, Proga & Stone (1998, building on the work of Proga, Stone & Drew 1998). This work shows how observationally-significant disk winds, driven by radiation pressure, would be created. A further piece in the puzzle of Herbig Be and BN objects that these predicted flows can help explain is the high contrast, quite narrow Hi line emission often observed at earlier B spectral types (Drew, 1998).
A further very strong outcome of this study is that all objects that can be classified as (evolved) B\[e\] stars present significant polarization changes across H$`\alpha `$. A factor that clearly helps increase the likelihood of detecting a spectropolarimetric line effect in B\[e\] stars is that their H$`\alpha `$ profiles are typically extremely high contrast and often somewhat broader than in Herbig Be stars. Pre-eminent among our B\[e\] group is HD 87643 which has already been discussed in a separate paper (Oudmaijer et al. 1998). Here we have presented MWC 137 (probably an evolved B\[e\] star), HD 45677 and HD 50138. The fact that all observed Galactic B\[e\] stars in our sample show the line-effect in one incarnation or another lends strong support to the Zickgraf et al. (1985, 1986) model. The variety of line-effects observed in our data illustrates that the structures around these stars have their deviation from spherical symmetry in common, but that the details in each case are different. So far, the discussion of these data has been largely qualitative. As models simulating these phenomena begin to be calculated there will no doubt be a considerable sharpening of insight.
## 5 Final remarks
Apart from providing some striking insights into a number of the targets observed, the programme of observations we have described here has offered some lessons in how best to obtain single-line spectropolarimetric data. It is clear that the spectral resolution available to us ($`R\simeq 5000`$) has in most instances been just enough. As numerical modelling becomes more commonplace, the case for increased spectral resolution will become stronger. The main issue, nevertheless, is the achievement of high enough data quality. Our 8<sup>th</sup> magnitude and brighter objects have come out well in under an hour’s telescope time, while 10<sup>th</sup> to 12<sup>th</sup> magnitude objects require several hours observation with a 4-metre class telescope in at least middling weather conditions. Ultimately, these fainter sources will be best served by 8-metre class facilities where the shorter total integration times will be less subject to weather influence – presently they can be an uncertain struggle.
The overall conclusion of this study is that this relatively unexplored mode of observing does yield valuable new insights. In some instances we have encountered deepening mysteries that suggest conclusions drawn from other data have missed something. MWC 297 and HD 45677 are both good examples of this. At the same time, H$`\alpha `$ spectropolarimetry readily throws up examples that demand sophisticated numerical modelling of a type that is just beginning to become available (Hillier, 1996; Harries, 1996).
##### Acknowledgments
We thank the staff at the Anglo-Australian Telescope for their expert advice and support. Conor Nixon and Graeme Busfield are thanked for their help during some of the observing runs. The allocation of time on the Anglo-Australian Telescope was awarded by PATT, the United Kingdom allocation panel. RDO is funded by the Particle Physics and Astronomy Research Council of the United Kingdom. The data analysis facilities are provided by the Starlink Project, which is run by CCLRC on behalf of PPARC. Part of the observations are based on data obtained from the William Herschel Telescope, Tenerife, Spain, in the Isaac Newton Group service scheme. This research has made use of the Simbad database, operated at CDS, Strasbourg, France.
# Antiferromagnetism in doped anisotropic two-dimensional spin-Peierls systems
## Abstract
We study the formation of antiferromagnetic correlations induced by impurity doping in anisotropic two-dimensional spin-Peierls systems. Using a mean-field approximation to deal with the inter-chain magnetic coupling, the intra-chain correlations are treated exactly by numerical techniques. The magnetic coupling between impurities is computed for both adiabatic and dynamical lattices and is shown to have an alternating sign as a function of the impurity-impurity distance, hence suppressing magnetic frustration. An effective model based on our numerical results supports the coexistence of antiferromagnetism and dimerization in this system.
PACS: 75.10 Jm, 75.40.Mg, 75.50.Ee, 64.70.Kb
General interest in spin-Peierls (SP) systems was recently renewed by the discovery of CuGeO<sub>3</sub>, the first inorganic SP material. The SP transition is characterized by a freezing of the spin fluctuations below an energy scale given by the spin gap $`\mathrm{\Delta }_S`$, accompanied by a simultaneous lattice dimerization. Rich phase diagrams have been obtained experimentally upon doping this compound with non-magnetic impurities. In site-substituted systems such as (Cu<sub>1-x</sub>M<sub>x</sub>)GeO<sub>3</sub>, where M=Zn (Ref. ) or Mg (Ref. ), long range antiferromagnetic (AF) order is stabilized at low temperature while the dimerization still persists (D-AF phase). In Mg-doped compounds, for impurity concentrations larger than a critical value ($`x_c\simeq 0.02`$), a first order transition occurs between the D-AF phase and a uniform AF (U-AF) phase where the dimerization disappears. The coexistence of the two types of order in the D-AF phase is an intriguing phenomenon, since lattice dimerization favors the formation of spin singlets on the bonds while low energy spin fluctuations exist in an AF phase.
Theoretically, the effect of impurity doping in SP systems was considered for fixed-dimerized, adiabatic and quantum-dynamical lattices. A single nonmagnetic impurity releases a soliton in the chain which can be viewed as a kink in the lattice distortion. In the absence of interchain couplings, such an excitation can freely propagate away from the impurity. On the other hand, the interchain elastic coupling $`K_{\perp }`$ was shown to produce confinement within some distance from the impurity.
For a finite impurity concentration, the coexistence between SP and AF orders has been previously discussed either by considering randomly distributed domain walls in an $`XX`$ chain or by assuming small fluctuations of the magnetic exchange constants. Despite their success in describing some experimental results, these models are rather limited since they take into account neither the microscopic origin of the soliton formation nor the interchain couplings. In this paper, a realistic microscopic model with interchain magnetic and elastic couplings is considered to describe the formation of a region with AF correlations in the vicinity of each impurity, and which allows an estimation of the effective interaction between impurities in the two-dimensional (2D) system. Thus, we are able to construct and study an effective model in order to understand the effects of a finite impurity concentration.
In a first step, the spin-phonon coupling is treated in the adiabatic approximation. The Hamiltonian $`\mathcal{H}=\mathcal{H}_{\mathrm{mag}}+\mathcal{H}_{\mathrm{el}}`$ is:
$`\mathcal{H}_{\mathrm{mag}}=J_{\parallel }\sum _{i,a}(1+\delta _{i,a})\,\mathbf{S}_{i,a}\cdot \mathbf{S}_{i+1,a}+J_{\perp }\sum _{i,a,b}\mathbf{S}_{i,a}\cdot \mathbf{S}_{i,b},`$ (1)
$`\mathcal{H}_{\mathrm{el}}=\sum _{i,a}\{\frac{1}{2}K_{\parallel }\delta _{i,a}^2+K_{\perp }\delta _{i,a}\delta _{i,a+1}\},`$ (2)
where $`a`$ is a chain index and $`i`$ labels the sites along the chains. Atomic displacements are only considered along the chain direction, $`\delta _{i,a}`$ being here a classical variable related to the change of the bond length between sites $`(i,a)`$ and $`(i+1,a)`$. The magnetic part includes a magnetoelastic coupling $`J_{\parallel }`$ (hereafter set to unity) and an exchange interaction $`J_{\perp }`$ connecting nearest neighbor (NN) chains. We eventually include in our model a next NN exchange interaction along the chain whose relevance for CuGeO<sub>3</sub> has been emphasized. $`\mathcal{H}_{\mathrm{el}}`$ is the elastic energy. The interchain elastic interaction ($`K_{\perp }`$) is limited to NN chains. Stability of the lattice implies $`K_{\parallel }\geq 2|K_{\perp }|`$. Typical values of the parameters for CuGeO<sub>3</sub> are $`J_{\perp }\simeq 0.1`$ (Ref. ) and $`K_{\perp }/K_{\parallel }\simeq 0.2`$ (Ref. ).
In order to study numerically model (2), we treat exactly the single chain problem using exact diagonalization (ED) or Quantum Monte Carlo (QMC) methods, while the interchain magnetic coupling is treated in a self-consistent mean-field (MF) approximation. This is a standard procedure to include interchain couplings in the study of quasi-one-dimensional systems. Moreover, Inagaki and Fukuyama have used a similar MF approximation to treat the interchain coupling in the bosonized version of (2) within a self-consistent harmonic approximation. Thus, in our procedure, the interchain magnetic coupling is replaced by its MF form:
$`\mathcal{H}_{\mathrm{MF}}^{\prime }=J_{\perp }\sum _{i,a,b}\{S_{i,a}^z\langle S_{i,b}^z\rangle +\langle S_{i,a}^z\rangle S_{i,b}^z-\langle S_{i,a}^z\rangle \langle S_{i,b}^z\rangle \}.`$ (3)
By extending a similar approach previously applied to one-dimensional (1D) chains to the case of the 2D lattice, a sweep is performed in the transverse direction, i.e. $`a\to a+1`$. For each chain $`a`$, we compute the MF values $`\langle S_{i,a}^z\rangle `$ and the classical variables $`\{\delta _{i,a}\}`$ by energy minimization, which is achieved by solving iteratively the equations
$`\delta _{i,a}=-\{J_{\parallel }\langle \mathbf{S}_{i,a}\cdot \mathbf{S}_{i+1,a}\rangle +K_{\perp }(\delta _{i,a+1}+\delta _{i,a-1})\}/K_{\parallel }.`$ (4)
Then, these new values of the AF and SP order parameters enter as input for the chain $`a+1`$. This procedure is iterated until convergence is reached. In this way, we can study numerically finite clusters consisting of $`N`$ coupled chains with $`L`$ sites where, typically, $`N\times L=12\times 18`$ in ED and $`N\times L=6\times 40`$ in QMC, with toroidal boundary conditions.
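As an illustration of this procedure (and not the actual code used in this work), the sketch below shows the structure of the self-consistent sweep. `solve_chain` is a placeholder for the exact single-chain solver (Lanczos ED or QMC), and all function and variable names, as well as the boundary conventions, are assumptions made for this example.

```python
import numpy as np

def mean_field_sweep(delta, sz_mf, j_par, j_perp, k_par, k_perp,
                     solve_chain, tol=1e-6, max_iter=200):
    """delta, sz_mf: arrays of shape (n_chains, L) with the bond distortions
    and the mean-field magnetizations <S^z_{i,a}> of each chain."""
    n_chains, _ = delta.shape
    for _ in range(max_iter):
        old = delta.copy()
        for a in range(n_chains):
            # staggered mean field on chain a from its two neighbouring chains, cf. Eq. (3)
            h_mf = j_perp * (sz_mf[(a - 1) % n_chains] + sz_mf[(a + 1) % n_chains])
            # exact solution of chain a in the presence of h_mf and its current distortions
            sz_mf[a], bond_corr = solve_chain(delta[a], h_mf)
            # Eq. (4): minimize the energy with respect to the classical distortions
            delta[a] = -(j_par * bond_corr
                         + k_perp * (delta[(a + 1) % n_chains]
                                     + delta[(a - 1) % n_chains])) / k_par
        if np.max(np.abs(delta - old)) < tol:   # stop when the sweep has converged
            break
    return delta, sz_mf
```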
A similar MF approach can be adapted to study a model equivalent to (2) but with quantum phonon degrees of freedom. In this case, phonon operators $`b_{i,a}^{\dagger }`$ and $`b_{i,a}`$ are introduced on each bond and the displacements $`\delta _{i,a}`$ become $`g(b_{i,a}^{\dagger }+b_{i,a})`$, where $`g`$ is the magnetoelastic constant. Then, the classical elastic term $`\mathcal{H}_{\mathrm{el}}`$ is replaced by its quantum version,
$`\mathcal{H}_{\mathrm{ph}}=\mathrm{\Omega }\sum _{i,a}\{b_{i,a}^{\dagger }b_{i,a}+\mathrm{\Gamma }(b_{i,a}^{\dagger }+b_{i,a})(b_{i,a+1}^{\dagger }+b_{i,a+1})\},`$ (5)
where $`\mathrm{\Gamma }=K_{\perp }/(2K_{\parallel })`$, and the phonon frequency $`\mathrm{\Omega }`$ is related to $`g`$ by $`\mathrm{\Omega }=2g^2K_{\parallel }`$. The adiabatic limit (2) is recovered when $`\mathrm{\Omega }\to 0`$ (requiring $`g\to 0`$ also). Similarly to the interchain magnetic term, the interchain elastic term of (5) is then treated in mean-field by introducing a lattice order parameter $`\delta _{i,a}^{\mathrm{MF}}=g\langle b_{i,a}^{\dagger }+b_{i,a}\rangle `$. Then, the term (5) is replaced by:
$`\mathcal{H}_{\mathrm{ph},\mathrm{MF}}=\mathrm{\Omega }\sum _{i,a}\{b_{i,a}^{\dagger }b_{i,a}+\frac{\mathrm{\Gamma }}{g}(b_{i,a}^{\dagger }+b_{i,a})\delta _{i,a+1}^{\mathrm{MF}}\}.`$ (6)
Note that in this case it is not necessary to solve an equation similar to (4). To diagonalize the single chain Hamiltonian with $`L\le 8`$, a Lanczos algorithm is used. The phononic degrees of freedom are treated within a variational formalism previously introduced. Note that inelastic neutron scattering experiments on CuGeO<sub>3</sub> reveal a rather large phonon frequency $`\mathrm{\Omega }/J_{\parallel }\simeq 2`$, suggesting large lattice quantum effects in this material.
As a preliminary study, we apply the MF procedure to the case of a homogeneous system without impurities. In models (2) and (5), $`J_{\perp }`$ is expected to stabilize the AF state while a small $`K_{\parallel }`$ (or large magnetoelastic coupling $`g`$) tends to favour SP order. For each value of the couplings $`J_{\perp }`$ and $`K`$ (where $`K=K_{\parallel }-2|K_{\perp }|`$ is the relevant parameter in the SP phase) we obtain the ground state without imposing any restriction on the MF parameters. We found only two different phases, the SP phase where $`S_z=0`$ and $`\delta _{i,a}\ne 0`$ and the antiferromagnetic (AF) phase with $`S_z\ne 0`$ and $`\delta _{i,a}=0`$. Then, the phase diagram in the $`K`$–$`J_{\perp }`$ plane can be obtained in a more efficient way by a direct comparison of the energies of the Néel antiferromagnetic phase and of the uniformly dimerized phase. The phase diagram shown in Fig. 1 exhibits a transition line between AF and SP phases. In the adiabatic case, this line could be fitted by a law $`J_{\perp }=\frac{A}{K}+B`$ with $`A=0.3656`$ and $`B=-0.06`$ (this artificial small negative value may be a consequence of small finite size effects, see e.g., Ref. ). A phase boundary of the form $`J_{\perp }=\frac{A}{K}`$ has been predicted by Inagaki and Fukuyama. However, their bosonized approach does not fix unambiguously the value of $`A`$.
In the case of the adiabatic calculation, finite size effects were shown to be small for $`K<2`$. On the other hand, for the dynamical lattice, the calculation is reliable only for larger lattice couplings (i.e. smaller values of $`K`$) due to stronger finite size effects. As expected, for very small $`K`$, lattice quantum fluctuations are less effective in dimerizing the chain than the adiabatic lattice. Then, the phase boundary obtained with quantum phonons is located below the adiabatic one. This tendency becomes clear as $`\mathrm{\Omega }`$ is increased, as shown in the figure. On the other hand, it has been suggested that dynamical phonons induce an effective magnetic frustration. This frustration, which becomes relatively important for larger $`K`$, destabilizes the AF phase, thus moving the phase boundary upwards, as seen in the figure. Consistently with this behaviour, if a next NN exchange term is included in the Hamiltonian, in the adiabatic approximation, the SP phase is more stable and the phase boundary is located above the corresponding curve for $`\alpha =0`$. This behavior is shown in Fig. 1 for the realistic value of $`\alpha =0.36`$ obtained for CuGeO<sub>3</sub> (Ref. ), where $`\alpha `$ is the value of the next NN exchange coupling constant in units of $`J_{\parallel }`$. We have checked that the set of realistic parameters $`K20`$ and $`K_{\perp }`$, $`J_{\perp }`$ mentioned above corresponds to a point in the SP phase.
To start our analysis of impurity doping, we consider a single impurity in order to investigate the appearance of AF correlations in the SP phase. As mentioned above, the impurity releases in the chain a topological spin-1/2 solitonic excitation characterized by a change of parity of the dimerization order which occurs in a finite region of longitudinal size $`\xi _{\parallel }`$ given by the soliton width. The local magnetization on each chain $`a`$ can be decomposed into uniform and staggered components, $`S_{i,a}^z=M_{i,a}^{\mathrm{unif}}+(-1)^{i+a}M_{i,a}^{\mathrm{stag}}`$. In fact, the excess uniform component $`S_{\mathrm{sol}}^z=\pm \frac{1}{2}`$ and the soliton, characterized by a broad maximum of $`M_{i,a}^{\mathrm{stag}}`$, remain confined in the chain with the impurity. However, as seen in Fig. 2(a),
the interchain magnetic coupling $`J_{\perp }`$ generates a large staggered component with the same parity, i.e. $`M_{i,a}^{\mathrm{stag}}`$ keeping the same sign, in the neighboring chains. Simultaneously, the amplitude of the SP dimerization is significantly suppressed compared to the bulk value, i.e. far away from the impurity. Large AF correlations can be seen up to more than four chains away from the impurity chain for magnetic couplings as small as $`J_{\perp }=0.1`$, in particular in the vicinity of the SP–AF transition line of Fig. 1. The transverse range of the AF ‘polarization cloud’ around the impurity increases strongly with the transverse coupling $`J_{\perp }`$.
A crucial feature of the polarization surrounding the impurity-soliton area is that the sign of $`M_{i,a}^{\mathrm{stag}}`$ is unambiguously fixed by the orientation ($`S_{\mathrm{sol}}^z=\pm \frac{1}{2}`$) of the soliton and by the position $`(i_0,a_0)`$ of the impurity in such a way that $`\mathrm{sign}\{\mathrm{M}_{\mathrm{i},\mathrm{a}}^{\mathrm{stag}}\}=\mathrm{sign}\{\mathrm{S}_{\mathrm{sol}}^\mathrm{z}\}(-1)^{\mathrm{a}_0+\mathrm{i}_0+1}`$. This fact can be simply understood in the strong dimerization limit ($`\delta \to 1`$) where the introduction of the impurity on a given site releases a spin-1/2 on one of its neighboring sites by breaking a singlet bond. For smaller lattice coupling, the excess spin can effectively hop from site to site on the same sublattice (due to the underlying dimerization), hence producing AF correlations with the parity defined above.
Let us now consider two impurities introduced simultaneously on two sites $`(i_1,a_1)`$ and $`(i_2,a_2)`$ of different chains ($`a_1\ne a_2`$). When the two polarization clouds associated to each soliton-impurity ‘pair’ start to overlap, one expects their interaction to depend on the relative orientation of the two solitons. As seen in Figs. 2(c,d), quite different patterns correspond to the singlet and triplet arrangements of the two spin-1/2 solitons. As confirmed by our calculations, the lowest energy is always obtained for a spin state which leads to the same parity of the staggered magnetization associated to each impurity, i.e. which avoids completely magnetic frustration. The simple argument developed above for a single impurity then suggests that a triplet $`S=1`$ (singlet $`S=0`$) configuration is favored when the two impurities are located on the same sublattice (opposite sublattices). It is then appropriate to define an effective magnetic coupling between the AF clouds associated to each impurity by $`J_{\mathrm{eff}}=E_{\mathrm{S}=1}-E_{\mathrm{S}=0}`$. For a wide range of parameters leading to a SP state in the bulk, we have numerically found that $`J_{\mathrm{eff}}`$ is ferromagnetic (F) if the two impurities belong to the same sublattice and antiferromagnetic in the opposite case. This implies that the coupling between the two local moments associated to the impurities is either F or AF in such a way that no frustration occurs. The magnetic coupling, for physical values of the parameters, can be fairly extended in space as seen in Fig. 3(a). Its range is directly controlled by the overlap of the polarization clouds. It follows roughly a behavior like
$`J_{\mathrm{eff}}\simeq J_0(-1)^{\mathrm{\Delta }a+\mathrm{\Delta }i+1}\mathrm{exp}(-C_{\perp }\mathrm{\Delta }a/\xi _{\perp })\mathrm{exp}(-C_{\parallel }\mathrm{\Delta }i/\xi _{\parallel }),`$ (7)
where $`J_0\sim J_{\perp }`$, $`C_\alpha `$ are of order unity and $`\xi _\alpha \sim J_\alpha /\mathrm{\Delta }_S`$.
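For concreteness, the fitted form of Eq. (7) can be evaluated with a small helper like the one below (a sketch only: the prefactor and decay constants are placeholder values, not the fitted numbers of this work). The alternating sign factor makes the coupling ferromagnetic or antiferromagnetic according to the sublattice parity, so that the couplings are never frustrating.

```python
import numpy as np

def j_eff(delta_a, delta_i, j0=0.1, c_perp=1.0, c_par=1.0, xi_perp=1.0, xi_par=5.0):
    """Effective exchange between impurity moments separated by delta_a chains and delta_i sites."""
    sign = (-1) ** (delta_a + delta_i + 1)       # sublattice parity fixes F or AF coupling
    return j0 * sign * np.exp(-c_perp * delta_a / xi_perp) * np.exp(-c_par * delta_i / xi_par)
```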
To get an insight into how finite size effects might affect our results, we have increased our cluster size ($`12\times 18`$) in both directions. The change of the transverse dimension has very little effect because the polarization clouds are almost independent when the impurities are separated by more than four chains (see the very small values of $`J_{\mathrm{eff}}`$ for $`\mathrm{\Delta }a=4`$ in Fig. 3(a)). On the other hand, by increasing the length of each chain, some changes occur but only when the impurities are located at the largest distances. However, the exponential decay of the effective interactions with distance (see below) is not qualitatively changed, as can be seen in Fig. 3(a). Therefore, we expect that the numerical values of the fitting parameters would not be much affected by finite size effects, leaving the overall behaviour essentially unchanged. In particular, we believe that the presence of long range AF order in the effective model (see discussion below) is a robust feature not affected by finite size effects.
When two impurities are introduced on the same chain ($`a_1=a_2`$), two cases have to be distinguished. If the impurities are located on the same sublattice, a similar behavior is observed as described above (compare Fig. 2(b) and Fig. 2(d)), i.e. the effective interaction is ferromagnetic. However, the magnitude of $`|J_{\mathrm{eff}}|`$ is $`\simeq 0.4`$ (very slowly decaying as $`\mathrm{\Delta }i`$ increases) for the parameters of Fig. 3(a), i.e. much larger than the values corresponding to impurities in different chains. If the impurities belong to different sublattices, then a chain with an even number of sites is cut into two segments with an even number of sites each. In the lowest energy configuration ($`S=0`$) no soliton-antisoliton pair was observed for separations $`\mathrm{\Delta }i`$ up to $`20`$, in agreement with previous work, and the triplet excitation energy remains large ($`\sim \mathrm{\Delta }_S`$). Then, one can expect that for larger chains, when the formation of solitons becomes favourable, the effective interaction between them will be AF and its magnitude will be of the order of $`\mathrm{\Delta }_S`$. In summary, one can assume that the effective interaction between impurities on the same chain has the same form as Eq. (7), with $`\mathrm{\Delta }a=0`$ and $`J_0`$ being now $`\mathrm{\Delta }_S`$. This form is similar to the one adopted in Ref. for impurities in a single chain, except that these authors do not consider the sublattice sign alternation. Nevertheless, it should be noticed that this AF interaction should also decay for very large $`\mathrm{\Delta }i`$.
When lattice quantum fluctuations are introduced ($`\mathrm{\Omega }>0`$), the qualitative properties of the effective interaction $`J_{\mathrm{eff}}`$ are preserved. Consistently with the relatively larger stability of the AF phase in the small $`K`$ region shown in Fig. 1 with respect to the adiabatic case, we have found that lattice dynamics lead to an increase of the size of the AF cloud associated to each soliton. Therefore, the magnitude of the magnetic coupling $`J_{\mathrm{eff}}`$ increases with the phonon frequency $`\mathrm{\Omega }`$ as shown in Fig. 3(b).
The final part of our study is the analysis of a simple effective two-dimensional spin-1/2 Heisenberg model between impurities with a long range interaction given by Eq. (7). The ‘bare’ parameters are the same as in Fig. 3 and the parameters of the expression (7) have been obtained by fitting the curves shown in that figure and similar data for the case of $`\mathrm{\Delta }a=0`$. In the direction perpendicular to the chains we have neglected the effective interactions beyond a distance $`\mathrm{\Delta }a=5`$. We have also assumed that even segments are associated to a soliton-antisoliton pair.
A given number of spin-1/2 impurities, $`4\le N_{\mathrm{imp}}\le 16`$, is thrown at random on systems of coupled chains of sizes up to $`40\times 40`$. Then, the staggered magnetization $`M_{\mathrm{stag}}=(1/N_s^2)\langle (\sum _{i,a}(-1)^{i+a}S_{i,a}^z)^2\rangle `$, where $`N_s=N\times L`$, is computed and averaged over, typically, 12,000-16,000 random realizations. The square root of this quantity is shown in Fig. 4. By extrapolating to the bulk limit for a fixed impurity doping using a polynomial in (1/$`\sqrt{N_s}`$), we found that $`M_{\mathrm{stag}}`$ is finite, implying long range AF order, and slowly decreasing as $`x`$ goes to zero. This behaviour is consistent with experimental results suggesting that $`M_{\mathrm{stag}}`$ decays exponentially to zero as $`x\to 0`$.
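A schematic sketch of one disorder realization of this effective model is given below (illustrative assumptions throughout: the site labelling, the function names and the `solve_heisenberg` placeholder, which stands for the actual many-body solver returning the squared staggered moment of the impurity spins, are not the implementation used here).

```python
import numpy as np

def one_realization(n_chains, length, n_imp, coupling, solve_heisenberg, rng):
    """Contribution of one random impurity configuration to M_stag (cf. the definition above)."""
    sites = rng.choice(n_chains * length, size=n_imp, replace=False)
    a, i = np.divmod(sites, length)              # chain index and position of each impurity
    jmat = np.zeros((n_imp, n_imp))
    for m in range(n_imp):
        for n in range(m + 1, n_imp):
            jmat[m, n] = jmat[n, m] = coupling(abs(a[m] - a[n]), abs(i[m] - i[n]))
    stag_sq = solve_heisenberg(jmat, (-1) ** (a + i))   # <(sum_k (-1)^(a_k+i_k) S^z_k)^2>
    return stag_sq / (n_chains * length) ** 2
```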
In conclusion, spin-1/2 solitons released in 2D anisotropic SP systems by the introduction of impurities were shown to experience spatially extended F or AF exchange interactions depending on their relative positions. These exchange interactions coexisting with the SP order are calculated from realistic microscopic models and used to construct a simple effective model which in turn enables us to show the establishment of long range AF order and to compute the AF order parameter as a function of the impurity doping.
This work was supported in part by the ECOS-SECyT A97E05 program. We thank IDRIS (Orsay, France) and Florida State University for using their supercomputer facilities.
# Reply to “Comment on Correlation between Compact Radio Quasars and Ultra-High Energy Cosmic Rays”
In ref. we investigated the hypothesis that the highest energy cosmic rays are created in and travel undeflected from an extraordinary class of QSO’s, capable on physical grounds of producing the highest energy particles found anywhere in nature. This a priori hypothesis was motivated by theories of cosmic ray acceleration and the ansatz of a new, neutral, GZK-evading messenger particle. It is well known that many features of powerful AGN’s are not characteristics of every one of them and thus would not be suitable markers (e.g., blazars only look like blazars when viewed from a special direction). The class of compact radio quasars (CQSOs) is the only kind of quasar which holds any hope of at once accelerating particles to very high energy, and at the same time converting them into another, possibly long-lived new particle by interacting with material surrounding the AGN. The distinctive radio spectrum which provides an objective definition of the source class, is in fact produced by interaction with the surrounding material and is thus indicative of the conditions required.
As pointed out by Elbert and Sommers, 3C147 is a remarkable, uniquely suitable candidate source for the highest energy cosmic ray event FE320, once one sets aside the GZK distance limitation. No matter what further restriction on characteristics of the source class is made, 3C147 must be the source of FE320, given our hypothesis that the source is an AGN. Precisely for this reason, it is correct to include it in the analysis independently of any additional constraint ultimately imposed to remove background such as the radio spectrum of a CQSO.
Our energy cut was not arbitrary and was decided before examining the events. Clearly, one must provide some buffer against contamination by mismeasured protons piled up just at the GZK limit. Convoluting a rapidly falling distribution with a gaussian measurement error means the sample is preferentially drawn from the low-side of the distribution. With an $`E^{-p}`$ spectrum and $`p=\{2,3,4\}`$, an event with nominal energy 1(2)-sigma above $`E_{1(2)}^{\mathrm{cut}}`$ has a true energy lower than $`E_{1(2)}^{\mathrm{cut}}`$ {37%, 40%, 42%} ({5.0%, 5.2%, 5.5%}) of the time. In the analysis reported in we required that the event should have an energy of at least $`8\times 10^{19}`$ eV plus 1-sigma. This was motivated by the result of ref. that protons from nearby AGN’s ($`\lesssim `$ 100 Mpc) have energies degraded to the 5–8$`\times 10^{19}`$ eV range by interaction with CMBR, independently of their initial energy. In analyses of future data, we plan to impose an additional cut, that the nominal energy of the event be at least $`5\times 10^{19}`$ eV plus 2-sigma. For distant sources this would produce a $`95\%`$ clean sample. The existing set of events passes this cut as well. These cuts are soft in the sense that the $`8\times 10^{19}`$ eV starting point for defining $`E_1^{\mathrm{cut}}`$ is not precise – 7 or 9$`\times 10^{19}`$ eV could also have been chosen and would have provided weaker or stronger background rejection.
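The pile-up argument can be illustrated with a small Monte Carlo along the following lines. This is a sketch only: the 30% fractional energy error, the spectral range and the selection window are assumptions made here for illustration, not the inputs used to derive the percentages quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
p, frac_err, e_cut = 3.0, 0.3, 1.0               # energies in units of the cut energy

# draw true energies from dN/dE ~ E^-p on [0.3, 30] by inverse-transform sampling
e_lo, e_hi = 0.3, 30.0
u = rng.random(5_000_000)
e_true = (e_lo ** (1 - p) + u * (e_hi ** (1 - p) - e_lo ** (1 - p))) ** (1.0 / (1 - p))

# smear with a Gaussian measurement error; the quoted error scales with the measured energy
e_obs = e_true * (1.0 + frac_err * rng.standard_normal(e_true.size))
sigma = frac_err * e_obs

# events whose nominal energy lies about 1 sigma above the cut
sel = np.abs(e_obs - (e_cut + sigma)) < 0.02
print("fraction with true energy below the cut:", np.mean(e_true[sel] < e_cut))
```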
Ag110 has a central energy value of $`(1.1\pm 0.3)\times 10^{20}`$ eV and thus satisfies both cuts. Hoffmann adds a third significant figure to the energy and claims the event should be excluded from the analysis since 1.10 - 0.33 = 0.77. We disagree since the cut energy was defined at 1 significant figure. However our result is robust: excluding Ag110, the CQSO $`\chi ^2`$ probability is unchanged (0.53), the testQSO probability remains small ($`4\times 10^{-7}`$), and the probability that randomly distributed QSO’s would produce as low or lower $`\chi ^2`$ is still small (0.016). The possible contention over whether or not to include the event underlines the point emphasized in our paper: a definitive resolution of the issue requires more and more precise data.
Table I contains three typos: the RA error for HP120 should be 2.7 deg; $`\mathrm{\Delta }\mathrm{\Omega }`$ for FE320 and HP120 should be 1.9 and 6.7 deg<sup>2</sup> respectively. Correct values of all parameters were used in the analysis. Note that $`\mathrm{\Delta }\mathrm{\Omega }`$ is only a figure of merit and does not enter the analysis. The conventional difference between how conical errors and rectilinear errors are quoted is correctly incorporated in the formulae (see Eq. 2 and text below) but not explicitly discussed due to space limitations.
Hoffman’s Comment prompted us to analyze the three events which come closest to passing our cuts, keeping in mind that with the energies and errors of these three CR events and the falling cosmic ray spectrum, it is likely that at least one of them should be a proton because the probability that one of the three has a true energy below $`5\times 10^{19}`$ eV is {40%,60%}, for $`\{p=2,4\}`$, and at the source protons far outnumber the messengers for a given energy. Rather than diminishing the evidence in favor of the ansatz, the characteristics of these three events fit the CQSO hypothesis very well. Ya110 (RA = $`75.2\pm 10`$ deg, Dec = $`45.5\pm 4`$ deg) would have the same source that produced the highest energy event FE320 (3C147), with a $`\chi ^2`$ residual of 2.0 for 2 degrees of freedom; the CQSO (0.53) and random background probability (0.0058) are essentially unchanged by including Ya110. There is an excellent CQSO candidate for HP105 (B3 1325+436); it is an archetypal CQSO and has a $`\chi ^2`$ residual of 0.46. Adding this event in the analysis decreases the random background probability to 0.003 (0.0029 with Ya110 and 0.0028 without). There is no good candidate for HP102 within a cone of radius 5 deg so it is interpreted as a deflected proton.
We reiterate that whether the random background probability is 3% or 0.3% is not the essential point – either value is low enough that in coin-tossing experiments one would be surprised by it, but not low enough for a statistical fluctuation to be excluded. Our main message is that future detectors should aim for the best possible position resolution in order to settle this important question.
# Bars and boxy/peanut-shaped bulges: an observational point of view
## 1 Introduction
Boxy/peanut-shaped (B/PS) bulges have, as their name indicates, excess light above the plane. They are thus easily identified in edge-on systems and display many interesting properties: their luminosity excess, an extreme three-dimensional structure, probable cylindrical rotation, etc. However, the main importance of B/PS bulges resides in their incidence: at least 20-30% of all spiral galaxies possess a B/PS bulge. They are thus essential to our understanding of bulge formation and evolution.
Early theories on the formation of B/PS bulges were centered around accretion scenarios, where one or many satellite galaxies are accreted onto a preexisting bulge, and which lead to axisymmetric structures (e.g. Binney & Petrou 1985). However, such scenarios are restrictive, and it seems that the only viable path is the accretion of a small number of moderate-sized satellites. Thus, accretion probably plays only a minor role in the formation of B/PS bulges. A more attractive mechanism is the buckling of a bar, due to vertical instabilities. This process can form B/PS bulges even in isolated galaxies, and accounts easily for the fact that the fraction of B/PS bulges is similar to that of (strongly) barred spirals. Soon after a bar is formed, it buckles and settles with an increased thickness, appearing boxy or peanut-shaped depending on the viewing angle (e.g. Combes et al. 1990). Hybrid scenarios, where a bar is excited by an interaction and then buckles, have also been suggested.
To test as directly as possible the bar-buckling hypothesis, we have developed reliable bar diagnostics for edge-on spirals (Bureau & Athanassoula 1999; Athanassoula & Bureau 1999), and have searched for bars in a sample of edge-on galaxies with and without B/PS bulges (Bureau & Freeman 1999). This way, we can probe the exact relationship between bars and B/PS bulges.
## 2 Bar diagnostics in edge-on spiral galaxies: the periodic orbits approach
There is no reliable photometric way to identify a bar in an edge-on spiral galaxy. However, Kuijken & Merrifield (1995) showed that an edge-on barred disk produces characteristic double-peaked line-of-sight velocity distributions which would not occur in an axisymmetric disk. Following their work, we also developed bar diagnostics based on the position-velocity diagrams (PVDs) of edge-on disks, which show the projected density of material as a function of line-of-sight velocity and projected position. The mass model we adopted has a Ferrers bar, two axisymmetric components yielding a flat rotation curve, and four free parameters. All our models are two-dimensional.
We first used the families of periodic orbits in our mass model as building blocks to model real galaxies (Bureau & Athanassoula 1999). Such an approach provides essential insight into the (projected) kinematics of spirals. We showed that the global appearance of a PVD can be used as a reliable tool to identify bars in edge-on disks. Specifically, the presence of gaps between the signatures of the various periodic orbit families follows directly from the non-homogeneous distribution of orbits in a barred galaxy. The two so-called forbidden quadrants of the PVDs are also populated because of the elongated shape of the orbits. Figure 1 shows the surface density and projected PVD of a typical model. The bar is viewed at an angle of 45<sup>o</sup> from the major axis and only the major families of periodic orbits are considered. The signatures of the $`x_1`$ (parallel to the bar) and $`x_2`$ (perpendicular to the bar) orbits are particularly important to identify the bar and constrain the viewing angle. Because of streaming, the parallelogram-shaped signature of the $`x_1`$ orbits reaches very high radial velocities when the bar is seen end-on and only relatively low velocities when it is seen side-on. The opposite is true for the $`x_2`$ orbits.
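For orientation, a PVD of this kind can be assembled from a model snapshot along the lines sketched below (a schematic example, not the code used for Figure 1; the array names, the viewing-angle convention and the binning are illustrative assumptions). Particle positions and velocities in the disk plane are projected onto the line of sight at the chosen bar viewing angle, and a two-dimensional histogram of projected position versus line-of-sight velocity is accumulated.

```python
import numpy as np

def position_velocity_diagram(x, y, vx, vy, psi_deg, pos_bins, vel_bins):
    """psi_deg: angle between the bar major axis and the line of sight (degrees)."""
    psi = np.radians(psi_deg)
    x_proj = x * np.cos(psi) - y * np.sin(psi)       # position along the projected major axis
    v_los = vx * np.sin(psi) + vy * np.cos(psi)      # line-of-sight velocity
    pvd, _, _ = np.histogram2d(x_proj, v_los, bins=[pos_bins, vel_bins])
    return pvd   # projected density versus position and line-of-sight velocity

# usage with model particles:
# pvd = position_velocity_diagram(x, y, vx, vy, psi_deg=45.0,
#                                 pos_bins=np.linspace(-10.0, 10.0, 101),
#                                 vel_bins=np.linspace(-300.0, 300.0, 121))
```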
## 3 Bar diagnostics in edge-on spiral galaxies: hydrodynamical simulations
We also developed bar diagnostics using hydrodynamical simulations, targeting specifically the gaseous component of spiral galaxies (Athanassoula & Bureau 1999). The simulations are time-dependent and the gas is treated as ideal, isothermal, and non-viscous. We used the same mass model as above, without self-gravity, and modeled star formation and mass loss in a simplistic way. However, the collisional nature of the gas leads to better bar diagnostics than the periodic orbits approach.
The main feature of the PVDs is a gap, present at all viewing angles, between the signature of the nuclear spiral (associated with $`x_2`$ orbits) and that of the outer parts of the disks. There is very little gas in $`x_1`$-like flows. This gap unmistakably reveals the presence of a bar in an edge-on disk. It occurs because the large scale shocks which develop in bars drive an inflow of gas toward the centers, depleting the outer bar regions. If a galaxy has no inner Lindblad resonance (ILR; or, equivalently, has no $`x_2`$ orbits), there is no nuclear spiral and the entire bar region is depleted. Then, the use of stellar kinematics is probably preferable to identify a bar. We will develop such diagnostics in a future paper. Figure 2 shows the gas density distribution and PVD for the same model as above, which has ILRs. Although not shown, the PVDs again vary significantly with the viewing angle, the signature of the nuclear spiral reaching its highest velocities when the bar is seen close to side-on. We also ran simulations covering a large fraction of the parameter space likely to be occupied by real galaxies. The PVDs can then be used to somewhat constrain the mass distribution and bar properties of observed systems.
## 4 The nature of boxy/peanut-shaped bulges
The PVDs produced are directly comparable to kinematic observations of edge-on spiral galaxies. In the hope of understanding the formation mechanism of B/PS bulges, we searched for bars in a sample of 30 edge-on spirals with and without B/PS bulges, using emission line long-slit spectroscopy (Bureau & Freeman 1999). The objects were selected from existing catalogs and 2/3 have probable companions. Of the 24 galaxies with a B/PS bulge, 17 have extended emission lines and constitute our main sample. The remaining 6 galaxies all have extended emission and form a control sample.
In the main sample, 14 galaxies display a clear bar signature in their PVD, and only 3 may be axisymmetric or have suffered interactions. None of the galaxies in the control sample shows evidence for a bar. This means that most B/PS bulges are due to the presence of a thick bar viewed edge-on and only a few may be due to the accretion of external material. In addition, spheroidal bulges do appear axisymmetric. Thus, it seems that most B/PS bulges are edge-on bars and that most bars are B/PS when viewed edge-on. However, the strength of this converse is limited by the small size of the control sample. To illustrate our data, we show the PVD of two galaxies in the main sample in figure 3. Our association of bars and B/PS bulges is supported by the anomalous emission line ratios observed in many objects. These galaxies display large H$`\alpha `$/\[N II\] ratios, often associated with shocks, and these ratios correlate with kinematical structures in the disks. Constraining the viewing angle to the galaxies with our models, the observations also appear to confirm the general prediction of $`N`$-body simulations, that bars are peanut-shaped when seen side-on and boxy-shaped when seen end-on.
Our results are consistent with the current knowledge of the bulge of the Milky Way and strongly support the bar-buckling mechanism for the formation of B/PS bulges. However, we do not test directly for buckling, and other bar-thickening mechanisms and hybrid scenarios cannot be excluded. Nevertheless, it is clear that the influence of bars on the formation and evolution of bulges is of prime importance.
## 5 On-going studies
The bar diagnostics we have developed open up for the first time the possibility of studying the vertical structure of bars observationally. To this end, we have obtained $`K`$-band images of all the sample galaxies. We have also obtained absorption line spectroscopic data to study the stellar kinematics, and a more in-depth investigation of line ratios will give us a better understanding of the large scale effects of bars in disks.
###### Acknowledgements.
We would like to thank A. Bosma, A. Kalnajs, and L. Sparke for useful discussions at various stages of this work. We also thank J.-C. Lambert for computer assistance and G. D. Van Albada for the FS2 code.
|
no-problem/9901/hep-th9901163.html
|
ar5iv
|
text
|
NIKHEF 99-003
DAMTP-1999-17
hep-th/9901163
January 26, 1999
An index theorem for non-standard Dirac operators
Jan-Willem van Holten, Andrew Waldron
NIKHEF
P.O. Box 41882
1009 DB Amsterdam
The Netherlands
Kasper Peeters
DAMTP, Cambridge University
Silver Street
Cambridge CB3 9EW
United Kingdom
t$`(8n)`$@nikhef.nl, $`n=4,3,2`$
> Abstract
>
> On manifolds with non-trivial Killing tensors admitting a square root of the Killing-Yano type one can construct non-standard Dirac operators which differ from, but commute with, the standard Dirac operator. We relate the index problem for the non-standard Dirac operator to that of the standard Dirac operator. This necessitates a study of manifolds with torsion and boundary and we summarize recent results obtained for such manifolds.
On manifolds like the four-dimensional Kerr-Newman and Taub-NUT manifolds, the geodesic equations are integrable because of the existence of a symmetric second-rank Killing tensor $`K^{\mu \nu }`$ , allowing the construction of a constant of motion
$$K=\frac{1}{2}K^{\mu \nu }p_\mu p_\nu .$$
(1)
The Killing tensor condition
$$K_{(\mu \nu ;\lambda )}=0$$
(2)
is actually equivalent with the conservation of $`K`$, i.e. $`K`$ commutes with the worldline Hamiltonian
$$H=\frac{1}{2}g^{\mu \nu }p_\mu p_\nu $$
(3)
in the sense of Poisson brackets:
$$\{K,H\}=0.$$
(4)
Related to this, the Klein-Gordon equation with minimal electromagnetic coupling on these background spaces is soluble by separation of variables . Making use of the observation of Penrose and Floyd that such Killing tensors can have a square root of Killing-Yano-type :
$$K_{\mu \nu }=f_{\mu \lambda }f_\nu ^\lambda ,$$
(5)
with the properties
$$f_{\mu \nu }=-f_{\nu \mu },\qquad f_{\mu \nu ;\lambda }+f_{\mu \lambda ;\nu }=0,$$
(6)
Carter and McLenaghan showed the existence of a Dirac-type linear differential operator which commutes with the standard Dirac operator . Such non-standard Dirac operators take the form
$$D_f\equiv \gamma _5\gamma ^\lambda \left(f_\lambda ^\mu D_\mu -\frac{1}{3!}\sigma ^{\mu \nu }H_{\mu \nu \lambda }\right),$$
(7)
where (in contrast to the more familiar Dirac operators associated to covariantly constant complex structures) the second term is nonzero,
$$H_{\mu \nu \lambda }=f_{[\mu \nu ;\lambda ]}=f_{\mu \nu ;\lambda }.$$
(8)
The square brackets in the middle expression denote complete anti-symmetrization with unit weight; the last equality is obtained straightforwardly from the Killing-Yano conditions (6). In this letter we discuss the zero mode spectrum of such Dirac operators (see for a systematic classification of manifolds admitting solutions to (6)).
The covariant derivative $`D_\mu `$ includes the standard spin-connection term:
$$D_\mu =\partial _\mu -\frac{1}{2}\omega _\mu ^{ab}\sigma _{ab},$$
(9)
with the $`\sigma _{ab}`$ the usual Dirac representation of the generators of the Lorentz group on spinors. In the following we frequently use the representation of vectors and tensors in local Lorentz components, obtained by contraction with a vielbein $`e_\mu ^a`$ or the inverse vielbein $`e_a^\mu `$, e.g.
$$f_\mu ^a=f_\mu ^\nu e_\nu ^a,c_{abc}=e_a^\mu e_b^\nu e_c^\lambda H_{\mu \nu \lambda }.$$
(10)
In this notation one can write the Dirac operator (7) as
$$D_f=\gamma _5\gamma ^a\left(f_a^\mu D_\mu -\frac{1}{3!}\sigma ^{bc}c_{abc}\right).$$
(11)
The standard Dirac operator (without torsion) is
$$D=i\gamma ^ae_a^\mu D_\mu ;$$
(12)
then the Killing-Yano properties of $`f_a^\mu `$ guarantee the commutation relation
$$[D_f,D]=0.$$
(13)
Thus, the standard and non-standard Dirac operators can be diagonalized simultaneously, which is at the root of Chandrasekhar’s observation that the Dirac equation in the Kerr-Newman solution is separable . We stress that although one often formulates the commutation relation (13) in terms of Poisson brackets (for example, in classical studies of spinning particles), when examining index problems it is essential that one considers quantum commutators.
In even-dimensional spaces one can define the index of a Dirac operator as the difference in the number of linearly independent zero modes with eigenvalue +1 and $`1`$ under $`\gamma _5`$:
$$\mathrm{index}\left(D\right)=n_+^0-n_{-}^0.$$
(14)
The index is useful as a tool to investigate topological properties of the base-space, as well as in computing anomalies in quantum field theory; for a review see e.g. . Eq.(13) now leads to a simple but remarkable result for the index of the non-standard Dirac operator $`D_f`$, to wit the
theorem:
$$\mathrm{index}\left(D_f\right)=\mathrm{index}\left(D\right).$$
(15)
Below we sketch the proof of this theorem and discuss some of its possible implications.
Let $`|\lambda ,\mu \rangle `$ denote an orthonormal basis of simultaneous eigenvectors of the standard and non-standard Dirac operators:
$$D|\lambda ,\mu \rangle =\lambda |\lambda ,\mu \rangle ,\qquad D_f|\lambda ,\mu \rangle =\mu |\lambda ,\mu \rangle .$$
(16)
Now as $`\gamma _5`$ anti-commutes with each of the Dirac operators, all non-zero eigenvalues occur in pairs of opposite signs:
$$D\gamma _5|\lambda ,\mu \rangle =-\gamma _5D|\lambda ,\mu \rangle =-\lambda \gamma _5|\lambda ,\mu \rangle ,$$
(17)
and similarly for $`D_f`$. Projecting the states of fixed eigenvalue onto the eigenstates of $`\gamma _5`$:
$$|\lambda ,\mu ;\pm \rangle \equiv \frac{1}{2}\left(1\pm \gamma _5\right)|\lambda ,\mu \rangle ,$$
(18)
one obtains the equivalent results
$$\begin{array}{c}\gamma _5|\lambda ,\mu ;\pm \rangle =\pm |\lambda ,\mu ;\pm \rangle ,\hfill \\ \\ D|\lambda ,\mu ;\pm \rangle =\lambda |\lambda ,\mu ;\mp \rangle ,\qquad D_f|\lambda ,\mu ;\pm \rangle =\mu |\lambda ,\mu ;\mp \rangle .\hfill \end{array}$$
(19)
On the other hand, for the zero-modes a mismatch between positive and negative chirality states may occur. Note however, that even in the kernel of the Dirac operator $`D`$ the pairing of chirality eigenstates holds for those vectors which are non-zero modes of $`D_f`$:
$$D_f|0,\mu ;\pm \rangle =\mu |0,\mu ;\mp \rangle .$$
(20)
Therefore the only states in the kernel of $`D`$ which can contribute to the index $`\mathrm{index}(D)`$ are the simultaneous zero modes of $`D`$ and $`D_f`$, i.e. the states $`|0,0;\pm \rangle `$. Denoting the number of positive, resp. negative, chirality double-zero modes by $`n_\pm ^{(0,0)}`$, the index of the standard Dirac operator is
$$\mathrm{index}\left(D\right)=n_+^{(0,0)}-n_{-}^{(0,0)}.$$
(21)
Now the symmetry of the algebra of (anti-)commutation relations of $`D`$, $`D_f`$ and $`\gamma _5`$ under interchange of $`D`$ and $`D_f`$ implies, that again only the simultaneous zero-modes of $`D_f`$ and $`D`$ contribute to the index of the non-standard Dirac operator $`D_f`$; hence
$$\mathrm{index}\left(D_f\right)=n_+^{(0,0)}-n_{-}^{(0,0)}=\mathrm{index}\left(D\right).$$
(22)
This is the result we set out to prove.
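The pairing argument can be made tangible with a small finite-dimensional toy model: two Hermitian operators built from off-diagonal blocks anticommute with a chirality operator, commute with each other, and have equal indices even though their individual zero-mode counts differ. The block matrices below are arbitrary illustrative choices, not derived from any particular manifold.

```python
import numpy as np

# Off-diagonal blocks mapping the chirality +1 subspace (dim 3) to the
# chirality -1 subspace (dim 2); B is chosen with an extra zero mode.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])
B = np.array([[3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

def dirac_like(block):
    """Hermitian operator with the given block in off-diagonal position."""
    n, p = block.shape
    D = np.zeros((p + n, p + n))
    D[p:, :p] = block
    D[:p, p:] = block.T
    return D

def index(D, p):
    """n_+^0 - n_-^0 from the dimensions of the chiral kernels."""
    dim_ker = lambda M: M.shape[1] - np.linalg.matrix_rank(M)
    return dim_ker(D[p:, :p]) - dim_ker(D[:p, p:])

p = A.shape[1]
D, Df = dirac_like(A), dirac_like(B)
Gamma = np.diag([1.0] * p + [-1.0] * A.shape[0])

assert np.allclose(D @ Gamma + Gamma @ D, 0.0)    # {D, Gamma} = 0
assert np.allclose(Df @ Gamma + Gamma @ Df, 0.0)  # {D_f, Gamma} = 0
assert np.allclose(D @ Df - Df @ D, 0.0)          # [D, D_f] = 0
print(index(D, p), index(Df, p))                  # both equal 1
```

Here D_f has more zero modes than D, but the extra ones come in chirality pairs, so the two indices coincide, mirroring the argument above.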
We now comment on some interesting consequences of this theorem. It is well-known that the index of a Dirac operator can be computed using path-integral methods, being identical to the Witten index of a supersymmetric quantum mechanical model for a spinning particle with supercharge
$$Q=e_a^\mu \mathrm{\Pi }_\mu \psi ^a.$$
(23)
Here $`\psi ^a`$ is the anti-commuting spin variable of the particle, forming a $`d=1`$ supermultiplet with the base-space co-ordinates $`x^\mu `$ , and $`\mathrm{\Pi }_\mu `$ is the covariant momentum of the particle, which is related to the canonical momentum $`p_\mu `$ by
$$\mathrm{\Pi }_\mu =p_\mu +\frac{i}{2}\omega _{\mu ab}\psi ^a\psi ^b.$$
(24)
The existence of a Killing-Yano tensor and a non-standard Dirac operator on the base-manifold now manifests itself as a new non-standard supersymmetry with supercharge
$$Q_f=f_a^\mu \mathrm{\Pi }_\mu \psi ^a+\frac{i}{3!}c_{abc}\psi ^a\psi ^b\psi ^c.$$
(25)
The main algebraic properties of the supercharges under Dirac-Poisson brackets are
$$\{Q,Q\}=-2iH,\qquad \{Q_f,Q\}=0,\qquad \{Q_f,Q_f\}=-2iK,$$
(26)
where $`K`$ now represents the supersymmetric extension of the Killing constant (2); for example, in the case of Kerr-Newman space-time it is the supersymmetric extension of Carter’s constant . Let us again stress that for the study of the index, where one is interested in the spectrum of states, one must consider the quantization of the spinning particle model. This problem is taken up in detail in
One may expect that the equality of the indices of the standard and non-standard Dirac operators now translates to the equality of the Witten index for the standard and non-standard supersymmetry. However, this translation involves some subtleties, which we briefly discuss here. First, let us recall that the Witten index is defined as the difference between the number of odd and even fermion number zero modes of the Hamiltonian of a supersymmetric theory, which may be computed in regularized form as
$$\mathrm{index}_w=\underset{\beta \to 0}{lim}\text{Tr}\left((-1)^Fe^{-\beta H}\right).$$
(27)
Here $`H`$ is the Hamiltonian, obtained as the square of the standard supercharge. The formula in (27) makes only definite sense once one understands what is meant by the symbol “Tr”. Of course one should really view this as a sum over the spectrum of the regulator however, even then three distinct cases can occur . (I) The spectrum is discrete, in which case all states save for the zero modes cancel pairwise and the index is some $`\beta `$-independent integer counting the disparity between positive and negative chirality zero modes. (II) The spectrum includes a continuum separated from the (discrete) zero mode sector by a gap. Then, so long as one can show that the densities of positive and negative non-zero modes are equal, the result is again an integer independent of $`\beta `$. (III) The spectrum includes a continuum not separated from zero in which case the index is no longer guaranteed to be integer nor $`\beta `$-independent. For the case we are interested in, namely the chiral gravitational anomaly on manifolds with boundary, Atiyah, Patodi and Singer have shown how to impose boundary conditions for spinors which ensure a well-posed index problem (case (I) in fact). Essentially their non-local boundary conditions stipulate the spectrum of the bulk modes in terms of the discrete spectrum of the compact boundary manifold (see for further details).
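A minimal numerical illustration of case (I): for a purely discrete spectrum in which every non-zero level is paired between the two fermion-number sectors, the regularized trace is independent of $`\beta `$ and simply counts the unpaired zero modes. The spectra below are invented solely for illustration.

```python
import numpy as np

def witten_index(e_bosonic, e_fermionic, beta):
    """Regularized trace Tr[(-1)^F exp(-beta H)] over a discrete spectrum."""
    return np.sum(np.exp(-beta * e_bosonic)) - np.sum(np.exp(-beta * e_fermionic))

# Toy case (I): all non-zero levels occur once in each sector (supersymmetric
# pairing), plus two unpaired bosonic and one unpaired fermionic zero mode.
nonzero = np.arange(1, 2000, dtype=float)
e_b = np.concatenate(([0.0, 0.0], nonzero))
e_f = np.concatenate(([0.0], nonzero))

for beta in (0.1, 1.0, 10.0):
    print(beta, witten_index(e_b, e_f, beta))   # equals 1 for every beta
```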
Similarly, a quantity for the non-standard supersymmetry can be obtained by defining the regularized expression
$$\mathrm{index}_f=\underset{\beta \to 0}{lim}\text{Tr}\left((-1)^Fe^{-\beta K}\right).$$
(28)
The equality of the indices is then equivalent to the statement that, in the limit $`\beta \to 0`$, these two expressions are equal:
$$\mathrm{index}_f=\mathrm{index}_w.$$
(29)
Comparison of the two expressions shows that $`\mathrm{index}_f`$ is actually to be interpreted as the Witten index of a theory in which $`K`$ is the Hamiltonian. The relation between the theories in which $`H`$ and $`K`$, respectively, are the Hamiltonians was investigated in detail in . Here we recall in particular two results of that paper:
1. If $`K`$ is a Killing-Carter constant of motion with respect to the Hamiltonian $`H`$, then $`H`$ is a constant of motion in the theory with Hamiltonian $`K`$.
Corollary: if $`K_{\mu \nu }`$ is a symmetric Killing tensor on a manifold with metric $`g_{\mu \nu }`$, then $`g_{\mu \nu }`$ is a symmetric Killing tensor on a manifold with metric $`K_{\mu \nu }`$. Following we call this reciprocal relation between manifolds with metrics and Killing tensors interchanged a Killing duality.
2. The correspondence can be extended to supercharges in the following way: if $`Q_f`$ is a non-standard Killing-Yano supercharge in a theory with standard supercharge $`Q`$ and Hamiltonian $`H`$, then $`Q`$ is a Killing-Yano supercharge in a dual theory with supercharge $`Q_f`$ and Hamiltonian $`K`$; but if $`H`$ and $`K`$ are different then at least one of the manifolds is endowed with torsion. This is because of the inclusion of the totally anti-symmetric tensor $`c_{abc}`$ in the supercharge $`Q_f`$, as well as in the corresponding non-standard Dirac operator $`D_f`$.
In order to implement the definition (28) and prove the equality (29) it is therefore necessary to study index theorems on manifolds with torsion. Furthermore, typically the manifolds in question (e.g. Taub–NUT and Kerr–Newman) have a boundary. Index theorems on manifolds with boundary are the subject of the Atiyah–Patodi–Singer (APS) index theorem. The extension of their work to include torsion has recently been given in . This is a rather intricate subject. In particular, if one wishes to regard the Dirac operators $`D`$ and $`D_f`$ as operators on independent manifolds then one has to carefully study the Hilbert spaces in which they act. These and other detailed issues have been handled in depth in so here we simply provide a summary of the most important results.
On manifolds with boundary the index splits into three terms. The first is the bulk contribution which can be obtained by a heat kernel, Pauli–Villars or supersymmetric path integral approach. This term has been independently computed by several groups and the result is
$$\mathrm{index}(\mathrm{bulk})=\frac{1}{24\cdot 8\pi ^2}\int \left[R(e)^{mn}\wedge R(e)_{nm}-\frac{1}{2}F(A)\wedge F(A)-2\sqrt{g}D_\mu 𝒦^\mu \right].$$
(30)
Here $`F(A)`$ is the abelian field strength of the one form obtained by dualising the totally antisymmetric part of the torsion (remember that the Dirac operator couples only to the trace and totally antisymmetric parts of the contortion tensor). The vector $`𝒦^\mu `$ is given by
$$𝒦^\mu =\left(D^\nu D_\nu +\frac{1}{4}A^\nu A_\nu +\frac{1}{2}R\right)A^\mu .$$
(31)
The second term required for the index is a boundary correction term given by the APS $`\eta `$ invariant. The idea is quite simple, when the manifold is of product form near the boundary (i.e. a cylinder with the boundary manifold as cross section) the solutions of the Dirac equation can be derived directly from the solutions of the boundary Dirac operator. Therefore APS have proven that the additional correction can be constructed directly from the eigenvalues of the boundary Dirac operator
$$\text{index(boundary)}=-\frac{1}{2}\left(\eta (0)+h\right),$$
(32)
where $`h`$ denotes the number of zero modes of the boundary Dirac operator and the $`\eta `$ invariant is given by
$$\eta (s)=\underset{\{l\ne 0\}}{\sum }\frac{\mathrm{sign}(l)}{|l|^s}$$
(33)
with the sum running over the set of all non-vanishing eigenvalues $`\{l0\}`$ of the boundary Dirac operator. Observe that $`\eta (0)`$ essentially counts the disparity between the number of positive and negative boundary eigenvalues. Typically $`\eta (0)`$ must be computed on a case by case basis, however for boundary manifolds given by a squashed $`S^3`$ this computation has been performed some time ago by Hitchin . The generalization of this computation to the torsion-full case may again be found in .
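As a concrete, much simpler illustration of how $`\eta (0)`$ is obtained by analytic continuation, the sketch below evaluates the sum for the toy boundary spectrum $`l=n+a`$, $`n`$ integer (the twisted Dirac operator on a circle), using the Hurwitz zeta function. This toy spectrum is an assumption made for illustration only and is unrelated to the squashed $`S^3`$ case computed by Hitchin.

```python
import mpmath as mp

def eta_shifted_integers(a, s=0):
    """eta(s) for the spectrum l = n + a, n in Z, with 0 < a < 1.

    Positive eigenvalues are n + a (n >= 0) and negative ones are -(n + 1 - a),
    so eta(s) = zeta_H(s, a) - zeta_H(s, 1 - a) by analytic continuation.
    """
    return mp.zeta(s, a) - mp.zeta(s, 1 - a)

a = 0.3
print(eta_shifted_integers(a))   # numerical value of eta(0)
print(1 - 2 * a)                 # known closed form 1 - 2a for this spectrum
```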
Of course, one is not always lucky enough to have a manifold approaching a cylinder near the boundary so that in the most general case a third correction is necessary. In the torsion-free case this correction was found by Gilkey and amounts to the boundary integral over the difference between the Chern–Simons form of the product manifold obtained by extending the boundary three-manifold to a cylinder and the Chern–Simons form of the actual four-manifold in question. In the torsion-full case it is shown in how to extract a generalized Chern–Simons form from the integrand of (30). Denoting quantities computed on the product manifold by the sharp symbol, the generalized Chern–Simons correction to the index is given by
$$\text{index(CS correction)}=\frac{1}{24\cdot 8\pi ^2}\int \left[C^{\sharp }(A)-2𝒦^{\sharp }-C(A)+2𝒦\right].$$
(34)
Note that $`𝒦`$ denotes the three-form obtained by dualising the vector $`𝒦^\mu `$ in (31).
Finally, it is enlightening to see all these corrections computed in an explicit example. The case of Taub–NUT and its torsion-full dual has been analysed in . For the Taub–NUT manifold with metric
$$\mathrm{d}s^2=\frac{r+2m}{r}\left[\mathrm{d}r^2+r^2\mathrm{d}\theta ^2+r^2\mathrm{sin}^2\theta \mathrm{d}\varphi ^2\right]+\frac{4rm^2}{r+2m}\left[\mathrm{d}\psi +\mathrm{cos}\theta \mathrm{d}\varphi \right]^2,$$
(35)
the bulk contribution was calculated to be $`1/12`$ by Hawking . There is no clash with the expectation of integer counting of zero modes by the index since the boundary corrections of APS (as computed by Hitchin) and Gilkey were shown to exactly cancel the bulk term so that the final result for the index of the Taub–NUT manifold is exactly zero. Therefore, we would expect, from the considerations above, also a vanishing result for the index of the dual manifold with metric given by
$$\mathrm{d}s_f^2=\frac{r+2m}{r}\left[\mathrm{d}r^2+\frac{m^2r^2}{(r+m)^2}\left(\mathrm{d}\theta ^2+\mathrm{sin}^2\theta \mathrm{d}\varphi ^2\right)\right]+\frac{4rm^2}{r+2m}\left(\mathrm{d}\psi +\mathrm{cos}\theta \mathrm{d}\varphi \right)^2$$
(36)
The anticommutativity of the Dirac operators $`D`$ and $`D_f`$ holds when the dual manifold has antisymmetric torsion given by
$$T=\frac{4r^2m^2\mathrm{sin}\theta }{(r+m)^2}\mathrm{d}\theta \mathrm{d}\varphi \mathrm{d}\psi .$$
(37)
In fact the results for the three contributions (30), (32) and (34) to the index of the dual Dirac operator $`D_f`$ are given by each of the following three lines respectively
$`\mathrm{index}D_f`$ $`=`$ $`{\displaystyle \frac{r_b^8+12r_b^7m+86r_b^6m^2+340r_b^5m^3+753r_b^4m^4+872r_b^3m^5+408r_b^2m^6}{12(r_b+2m)^4(r_b+m)^4}}`$ (38)
$`-{\displaystyle \frac{r_b^8+12r_b^7m+86r_b^6m^2+308r_b^5m^3+569r_b^4m^4+552r_b^3m^5+264r_b^2m^6+48r_bm^7}{12(r_b+2m)^4(r_b+m)^4}}`$
$`-{\displaystyle \frac{32r_b^5m^3+184r_b^4m^4+320r_b^3m^5+144r_b^2m^6-48r_bm^7}{12(r_b+2m)^4(r_b+m)^4}}`$
$`=`$ $`0.`$
Note that the vanishing result holds for any positive radius to the boundary $`r_b`$ and this is a very strong test of the formalism presented. Let us conclude by remarking that in theories with local supersymmetry, torsion is inevitably present. Furthermore boundary physics in these theories has become increasingly important , a fact which highlights the interest of our work.
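The cancellation quoted in (38) is straightforward to verify symbolically; the snippet below simply checks that the three contributions as written above sum to zero for arbitrary $`r_b`$ and $`m`$.

```python
import sympy as sp

r, m = sp.symbols('r_b m', positive=True)
den = 12 * (r + 2 * m) ** 4 * (r + m) ** 4

bulk = (r**8 + 12*r**7*m + 86*r**6*m**2 + 340*r**5*m**3 + 753*r**4*m**4
        + 872*r**3*m**5 + 408*r**2*m**6) / den
aps = -(r**8 + 12*r**7*m + 86*r**6*m**2 + 308*r**5*m**3 + 569*r**4*m**4
        + 552*r**3*m**5 + 264*r**2*m**6 + 48*r*m**7) / den
cs = -(32*r**5*m**3 + 184*r**4*m**4 + 320*r**3*m**5 + 144*r**2*m**6
       - 48*r*m**7) / den

print(sp.simplify(bulk + aps + cs))   # -> 0, independent of r_b and m
```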
Acknowledgement For two of us (J.W.v.H. and A.W.) this work is part of the research programme of the Foundation for Fundamental Research of Matter (FOM).
|
no-problem/9901/hep-ph9901294.html
|
ar5iv
|
text
|
# 1 Charmed baryon masses used in the sum rules.
TUHE-9911
hep-ph/9901294
New experimental tests of sum rules
for charmed baryon masses
Jerrold Franklin
Department of Physics, Temple University,
Philadelphia, Pennsylvania 19122-6082
V5030E@VM.TEMPLE.EDU
January 13, 1999
## Abstract
New experimental measurements are used to test model independent sum rules for charmed baryon masses. Sum rules for medium-strong mass differences are found to be reasonably well satisfied with increasing accuracy, and the new measurements permit an improved prediction of $`2778\pm 9`$ MeV for the mass of the $`\mathrm{\Omega }_c^{*0}`$. But an isospin breaking sum rule for the $`\mathrm{\Sigma }_c`$ mass splittings is still in significant disagreement, posing a serious problem for the quark model of charmed baryons. Individual $`\mathrm{\Sigma }_c`$ mass splittings are investigated, using the new CLEO measurement of the $`\mathrm{\Xi }_c^{\prime }`$ mass splitting, but the accuracy is not yet sufficient for a good test.
PACS numbers: 12.40.Yx., 14.20.-c, 14.40.-n
Model independent sum rules\[1-3\] were derived some time ago for heavy-quark baryon masses using fairly minimal assumptions within the quark model. The sum rules depend on standard quark model assumptions, and an additional assumption that the interaction energy of a pair of quarks in a particular spin state does not depend on which baryon the pair of quarks is in (“baryon independence”). This is a somewhat weaker assumption than full SU(3) symmetry of the wave function, which would require the same spatial wave function for each octet baryon, and each individual wave function to be SU(3) symmetrized. Instead, we use wave functions with no SU(3) symmetry, as described in Ref.. The wave functions can also be different for different quarks. For instance, a u-s pair in the $`\mathrm{\Sigma }^+`$ hyperon can have a different spatial wave function than a u-d pair in the proton, but is assumed to have the same interaction energy as a u-s pair in the $`\mathrm{\Xi }^0`$ hyperon.
In deriving the sum rules, no assumptions are made about the type of potential, and no internal symmetry beyond baryon independence is assumed. The sum rules allow any amount of symmetry breaking in the interactions and individual wave functions, but do rest on baryon independence for each quark-quark interaction energy. Several of the sum rules \[Eqs. (4), (5), and (6) below\] also rely on the assumption that there is no orbital angular momentum so that the three spin-$`\frac{1}{2}`$ quark spins add directly to spin-$`\frac{1}{2}`$ or spin-$`\frac{3}{2}`$. More detailed discussion of the derivation of the sum rules is given in Refs. and .
We have previously tested these sum rules in Refs. and using early measurements of heavy-quark baryon masses. Those tests showed reasonable agreement within fairly large experimental errors for two sum rules for medium-strong charmed baryon mass differences and for one sum rule for bottom baryon mass differences. But there was a relatively large, and worrisome, discrepancy for the isospin breaking mass differences between the $`\mathrm{\Sigma }_c`$ charge states. Since those tests, there have been a number of new experiments [6-11] resulting in more accurate and more reliable values for some of the charmed baryon masses used in the sum rules. In this paper we look at the effect on the sum rules of these new experiments, especially the recent CLEO II measurement of the $`\mathrm{\Xi }_c^{\prime +}`$ and $`\mathrm{\Xi }_c^{\prime 0}`$ masses.
The measured charmed baryon masses that will be used in the sum rules are listed in table I for the expected baryon assignments. The $`\mathrm{\Xi }_c^+`$ baryon and the $`\mathrm{\Xi }_c^{\prime +}`$ baryon are distinguished, in the quark model, by having different spin states for the u-s quark pair. The $`\mathrm{\Xi }_c^+`$ is the spin-$`\frac{1}{2}`$ usc baryon having the u-s quarks in a spin zero state, and the $`\mathrm{\Xi }_c^{\prime +}`$ has the u-s quarks in a spin one state. A similar distinction is made for the d-s quark pair in the $`\mathrm{\Xi }_c^0`$ and $`\mathrm{\Xi }_c^{\prime 0}`$ charmed baryons. The numerical values in Table I are given in terms of appropriate mass differences when that corresponds to how the measurement was made. Where new experiments have given more accurate numbers since our previous test of the sum rules, a star has been put after the reference. Masses for light quark (u,d,s) baryons are all taken from the Review of Particle Physics.
The isospin breaking sum rule for the $`\mathrm{\Sigma }_c`$ masses is
$$\mathrm{\Sigma }^++\mathrm{\Sigma }^{-}-2\mathrm{\Sigma }^0=\mathrm{\Sigma }^{*+}+\mathrm{\Sigma }^{*-}-2\mathrm{\Sigma }^{*0}=\mathrm{\Sigma }_c^{++}+\mathrm{\Sigma }_c^0-2\mathrm{\Sigma }_c^+,$$
(1)
$`(1.7\pm .2)`$ $`(2.6\pm 2.1)`$ $`(-2.2\pm 1.2)`$
where we have written the experimental values in MeV below each equation. There is reasonable agreement for the $`\mathrm{\Sigma }`$–$`\mathrm{\Sigma }^{*}`$ sum rule, as well as for several other isospin breaking sum rules for light quark baryons. But the $`\mathrm{\Sigma }_c`$ isospin splitting combination is significantly different from the other two combinations in Eq. (1). As noted in Ref. , this disagreement poses a serious problem because it is difficult to see how any reasonable quark model of charmed baryons could lead to the relatively large negative value for the $`\mathrm{\Sigma }_c`$ combination in Eq. (1). A large number of specific quark model calculations of charmed baryon masses generally satisfy the $`\mathrm{\Sigma }_c`$ sum rule, and all predict large positive values for the $`\mathrm{\Sigma }_c`$ mass combination in Eq. (1).
The experimental input that has been used for this combination of $`\mathrm{\Sigma }_c`$ masses are the two separate mass difference measurements
$`\mathrm{\Sigma }_c^{++}-\mathrm{\Sigma }_c^0`$ $`=`$ $`0.6\pm .2\hspace{1em}\mathrm{Ref}.[5]`$ (2)
$`\mathrm{\Sigma }_c^+-\mathrm{\Sigma }_c^0`$ $`=`$ $`1.4\pm .6\hspace{1em}\mathrm{Ref}.[12].`$ (3)
The $`\mathrm{\Sigma }_c^{++}-\mathrm{\Sigma }_c^0`$ mass difference results from four separate experiments that are reasonably consistent with one another, while there is only one experiment that has measured the $`\mathrm{\Sigma }_c^+-\mathrm{\Sigma }_c^0`$ difference. There is no reason to question this experimental measurement of $`\mathrm{\Sigma }_c^+-\mathrm{\Sigma }_c^0`$, and the result of Ref. for $`\mathrm{\Sigma }_c^{++}-\mathrm{\Sigma }_c^0`$ agrees well with the other experiments. However, the extreme importance of the large discrepancy in the $`\mathrm{\Sigma }_c`$ sum rule of Eq. (1) should make a new experimental measurement of the mass difference $`\mathrm{\Sigma }_c^+-\mathrm{\Sigma }_c^0`$ a high priority.
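For reference, the quoted value of the $`\mathrm{\Sigma }_c`$ combination in Eq. (1) follows directly from (2) and (3) by propagating the errors in quadrature; a minimal check:

```python
import numpy as np

# Measured mass differences (MeV) from Eqs. (2) and (3)
d_pp_0 = (0.6, 0.2)    # Sigma_c^{++} - Sigma_c^0
d_p_0  = (1.4, 0.6)    # Sigma_c^{+}  - Sigma_c^0

# Sigma_c^{++} + Sigma_c^0 - 2 Sigma_c^+
#   = (Sigma_c^{++} - Sigma_c^0) - 2 (Sigma_c^+ - Sigma_c^0)
value = d_pp_0[0] - 2.0 * d_p_0[0]
error = np.hypot(d_pp_0[1], 2.0 * d_p_0[1])
print(f"{value:+.1f} +/- {error:.1f} MeV")   # -2.2 +/- 1.2 MeV
```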
The new experimental measurement of the $`\mathrm{\Xi }_c^{\prime }`$ masses makes it possible, in principle, to test sum rules for separate mass differences of the $`\mathrm{\Sigma }_c`$. These are
$`\mathrm{\Sigma }_c^{++}-\mathrm{\Sigma }_c^0`$ $`=`$ $`\mathrm{\Sigma }^+-\mathrm{\Sigma }^{-}+2[(\mathrm{\Xi }^{-}-\mathrm{\Xi }^0)+(\mathrm{\Xi }_c^{\prime +}-\mathrm{\Xi }_c^{\prime 0})]`$ (4)
$`(0.6\pm .2)`$ $`(6.2\pm 9.7)`$
$`\mathrm{\Sigma }_c^+-\mathrm{\Sigma }_c^0`$ $`=`$ $`\mathrm{\Sigma }^0-\mathrm{\Sigma }^{-}+(\mathrm{\Xi }^{-}-\mathrm{\Xi }^0)+(\mathrm{\Xi }_c^{\prime +}-\mathrm{\Xi }_c^{\prime 0})`$ (5)
$`(1.4\pm .6)`$ $`(4.2\pm 4.9)`$
Unfortunately, the experimental errors on the $`\mathrm{\Xi }_c^{\prime }`$ mass differences are still too large at this point to make an accurate comparison with the $`\mathrm{\Sigma }_c`$ mass differences.
Although the discrepancy noted above for the $`\mathrm{\Sigma }_c`$ mass differences puts any other quark model study of charmed baryons into question, we now look at sum rules for medium-strong mass differences, anticipating some eventual resolution (theoretical or experimental) of the difficulties posed by the $`\mathrm{\Sigma }_c`$ mass splittings. A new measurement of the masses of the $`\mathrm{\Sigma }_c^{*++}`$ and $`\mathrm{\Sigma }_c^{*0}`$ baryons makes possible a more accurate test of the sum rule
$`(\mathrm{\Sigma }_c^{*+}-\mathrm{\Lambda }_c^+)+{\displaystyle \frac{1}{2}}(\mathrm{\Sigma }_c^+-\mathrm{\Lambda }_c^+)`$ $`=`$ $`(\mathrm{\Sigma }^{*0}-\mathrm{\Lambda }^0)+{\displaystyle \frac{1}{2}}(\mathrm{\Sigma }^0-\mathrm{\Lambda }^0).`$ (6)
$`(319\pm 2)`$ $`(307)`$
We use the measured $`\mathrm{\Sigma }_c^{*++}`$ mass for the $`\mathrm{\Sigma }_c^{*+}`$ mass, but that difference is probably small. A corresponding sum rule for the b-quark baryons $`\mathrm{\Sigma }_b^0`$, $`\mathrm{\Sigma }_b^{*0}`$, $`\mathrm{\Lambda }_b^0`$ has not changed, and is in good agreement .
In Ref. we used a sum rule to predict $`2583\pm 3`$ MeV for the $`\mathrm{\Xi }_c^{\prime +}`$ mass. This mass has now been measured, and is listed in Table I. This permits a test of the sum rule, which we write here as
$`\mathrm{\Sigma }_c^{++}+\mathrm{\Omega }_c^0-2\mathrm{\Xi }_c^{\prime +}`$ $`=`$ $`\mathrm{\Sigma }^++\mathrm{\Omega }^{-}-\mathrm{\Xi }^0-\mathrm{\Xi }^{*0}`$ (7)
$`(10\pm 8)`$ $`(15)`$
The two sum rules in Eqs. (6) and (7) are satisfied to about the same extent as light-quark baryon sum rules relating spin-$`\frac{1}{2}`$ baryon masses to spin-$`\frac{3}{2}`$ baryon masses.
The new experimental measurements can be used to improve the accuracy of our previous prediction of the as yet unmeasured $`\mathrm{\Omega }_c^{*0}`$ mass
$$\mathrm{\Omega }_c^{*0}=\mathrm{\Omega }_c^0+2(\mathrm{\Xi }_c^{*+}-\mathrm{\Xi }_c^{\prime +})-(\mathrm{\Sigma }_c^{*++}-\mathrm{\Sigma }_c^{++})=2779\pm 9,$$
(8)
In conclusion, we can say that increasingly accurate experimental mass determinations are making the model independent sum rules discussed here increasingly useful tests of the quark model for charmed baryons. We see that sum rules for medium-strong energy differences are satisfied at least as well for heavy-quark baryons as for light-quark baryons. However there remains a serious disagreement for the $`\mathrm{\Sigma }_c`$ isospin breaking sum rule, which is violated by three standard deviations. Since sum rules in disagreement are of more concern than those which are satisfied, resolving the $`\mathrm{\Sigma }_c`$ mass differences is of prime importance. Thus far no theoretical suggestion has been forthcoming.
|
no-problem/9901/astro-ph9901177.html
|
ar5iv
|
text
|
# The Optical Light Curves of Cygnus X-2 (V1341 Cyg) and the Mass of its Neutron Star
## 1 Introduction
Reliable measurements of neutron star masses are important for placing constraints on the equation of state of dense nuclear matter. Currently, the most precise mass measurements for neutron stars come from studies of binary radio pulsars (Stairs et al. 1998; Thorsett & Chakrabarty 1998). The masses of the neutron stars in the binary radio pulsars likely reflect the mass at their formation since these particular neutron stars presumably have not accreted any mass since their formation. On the other hand, neutron stars in X-ray binaries have been accreting at large rates for extended periods of time. Hence the mass estimates for neutron stars in X-ray binaries should give us information on the range of allowed masses (van Kerkwijk, van Paradijs, & Zuiderwijk 1995a; van Paradijs 1998). Unfortunately, X-ray binaries are not as “clean” as binary radio pulsars and mass determinations derived from dynamical studies are subject to larger uncertainties (e.g. van Kerkwijk et al. 1995b; Stickland, Lloyd, & Radziun-Woodham 1997). In the case of the high-mass X-ray binaries, the observed radial velocity curves often show pronounced deviations from the expected Keplerian shapes, presumably due to tidal effects and non-radial oscillations in the high-mass secondary star. Since most of the mass of the binary resides in the high-mass O/B secondary star, the derived neutron star masses are quite sensitive to errors in the velocity curves of the visible stars. In the case of many low-mass X-ray binaries, it is often not possible to directly observe the secondary star optically since the accretion disc dominates the observed flux. Hence, reliable dynamical mass estimates are not available for many of these systems.
Cygnus X-2, which is one of the brightest low-mass X-ray binaries known, is one of the rare cases among the persistent low-mass X-ray binaries where the secondary star is easily observed. Cyg X-2 is known to contain a neutron star because Type I X-ray bursts have been observed (Kahn & Grindlay 1984; Kuulkers, van der Klis, & van Paradijs 1995; Wijnands et al. 1997; Smale 1998). The neutron star is believed to be accreting mass from its companion at a near Eddington rate (see Smale 1998). V1341 Cygni, its optical counterpart (Giacconi et al. 1967), is relatively bright, so reasonably precise spectroscopic and photometric observations can be obtained. The orbital period of $`P=9.844`$ days was determined by Cowley et al. (1979) and by Crampton & Cowley (1980) from the observed radial velocity variations of the companion star. Cowley et al. (1979) also reported a spectral type for the companion star in the range of A5 to F2 (they attributed the change in the observed spectral type to X-ray heating of the secondary). Casares, Charles, & Kuulkers (1998, hereafter CCK98) have recently refined the measurements of the orbital parameters. They determined a period of $`P=9.8444\pm 0.0003`$ days, an optical mass function of
$$f(M)\equiv \frac{PK_c^3}{2\pi G}=\frac{M_x\mathrm{sin}^3i}{(1+q)^2}=0.69\pm 0.03M_{\odot },$$
(1)
where $`M_x`$ is the mass of the neutron star, $`M_c`$ is the mass of the companion star, $`K_c`$ is the semi-amplitude of the companion’s radial velocity curve, and where $`q=M_c/M_x=0.34\pm 0.04`$. CCK98 also determined a spectral type for the companion star of A9III and reported no variation of the spectral type with orbital phase, in contradiction to Cowley et al. (1979).
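For later reference, once an inclination estimate is available the neutron star mass follows directly from Eq. (1); the sketch below propagates the quoted uncertainties with a simple Monte Carlo. The inclination value used here ($`62.5^{\circ }\pm 4^{\circ }`$) anticipates the light-curve fit discussed in Sect. 3 and is included only to show the procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

f = rng.normal(0.69, 0.03, N)              # optical mass function (M_sun)
q = rng.normal(0.34, 0.04, N)              # M_c / M_x
i = np.radians(rng.normal(62.5, 4.0, N))   # inclination, from the light-curve fit (Sect. 3)

Mx = f * (1.0 + q) ** 2 / np.sin(i) ** 3   # neutron star mass, from Eq. (1)
Mc = q * Mx                                # companion mass

print(f"M_x = {Mx.mean():.2f} +/- {Mx.std():.2f} M_sun")
print(f"M_c = {Mc.mean():.2f} +/- {Mc.std():.2f} M_sun")
```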
One needs a measurement of the orbital inclination in order to derive mass measurements from the orbital elements of CCK98. Models of the optical/IR light curves are the most “direct” method to determine the inclination (e.g. Avni & Bahcall 1975; Avni 1978). We have gathered $`U`$, $`B`$, and $`V`$ photometric data of Cyg X-2 from the literature with the goal of obtaining the mean light curve and deriving the inclination. We demonstrate that the derived mean orbital light curves show the familiar signature of ellipsoidal variations. The light curves are then modeled to place limits on the inclination. We also show that the photometric period is consistent with the spectroscopic orbital period. We describe below the analysis of the tabulated photometric data, period determination, and the ellipsoidal modelling. We conclude with a discussion of the mass of the neutron star and the distance to the source.
## 2 Cyg X-2 Light curves
### 2.1 Observations
We used all the photoelectric and photographic data as tabulated in the literature for our analysis. These are photoelectric ($`U`$, $`B`$, and $`V`$) data obtained between 1967 and 1984 (Kristian et al. 1967; Peimbert et al. 1968; Mumford 1970; Chevalier, Bonazzola & Ilovaisky 1976; Lyutyi & Sunyaev 1976; Ilovaisky et al. 1978; Kilyachkov 1978; Beskin et al. 1979; Goranskii & Lyutyi 1988), and photographic ($`B`$ and $`V`$) data obtained in 1974 and 1975 (Basko et al. 1976). The uncertainties of the photoelectric data are typically between 0.02–0.03 magnitudes in the $`V`$ and $`B`$ band, and 0.05–0.08 magnitudes in the $`U`$ band, while the photographic data have typical uncertainties between 0.08–0.15 magnitudes (see Goranskii & Lyutyi 1988). Close $`B`$ and $`V`$ photographic magnitudes in time with $`B`$ and $`V`$ photoelectric magnitudes show their measurements to be consistent with each other; we therefore combined them. The mean $`U`$, $`B`$ and $`V`$ band magnitudes (with the rms given in brackets) of Cyg X-2 from our sample are 14.95 (0.31), 15.16 (0.20), and 14.70 (0.21), respectively.
### 2.2 Period analysis
Although several periodicities had been reported between $``$0.25 and $``$14 days before 1979, none of them were consistent with the orbital period as determined from the spectroscopic observations by Cowley et al. (1979) and Crampton & Cowley (1980) (see also CCK98). After 1979 it was shown that folding the photoelectric and photographic data on the spectroscopic period gave an ellipsoidal (i.e. double-peaked) shaped light curve (Cowley et al. 1979; Goranskii & Lyutyi 1988). Still up to today, no period analysis of the Cyg X-2 light curves has given independent proof of the orbital variations.
We therefore subjected all the combined $`U`$ (469 points), $`B`$ (966 points), and $`V`$ (572 points) band data separately to a period analysis using various techniques (e.g., Lomb-Scargle and phase dispersion minimization). We searched the data for periodicities between 0.1 and 1000 days. Plots of the Lomb-Scargle periodograms can be found in the upper panels of Figure 1. In the figure we also give the 3$`\sigma `$ confidence level, above which we regard signals as significant. These confidence levels were determined from a cumulative probability distribution appropriate for our three data sets (see e.g. Homer et al. 1996). The most significant peak found in both the $`B`$ and $`V`$ band data was at a period of $`\sim 4.92`$ days, whereas no significant peak near this period was found in the U band data. In the lower panels of Figure 1 we give the power spectra of the corresponding window functions in the three passbands. Clearly, the most significant peak in the U band is due to the observing window.
We estimated the error on the periods found by employing a Monte-Carlo technique; we generated $``$10,000 data sets with the same variance, amplitude and period as the observed data. We then subjected the faked data sets to the Lomb-Scargle algorithm; the distribution of the most significant peaks then leads to 1$`\sigma `$ error estimates. We also fitted a sine wave to the data near the found periods, where the errors in the magnitude measurements were scaled so that the fit had a reduced $`\chi ^2`$ of $``$1; the 1$`\sigma `$ uncertainty in the period was determined using $`\mathrm{\Delta }\chi ^2=1`$. This resulted in similar error estimates. We derived $`P_V=4.9200\pm 0.0008`$ days, and $`P_B=4.9213\pm 0.0009`$ days. Clearly the period found in the $`B`$ and $`V`$ band data is half the (spectroscopic) orbital period. Our analysis therefore gives the first independent proof of (half) the orbital period.
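A sketch of this type of period search and Monte-Carlo error estimate, using scipy's Lomb-Scargle implementation; the synthetic epochs, amplitude, and noise level below are placeholders, not the actual Cyg X-2 data.

```python
import numpy as np
from scipy.signal import lombscargle

def best_period(t, mag, periods):
    """Period of the highest Lomb-Scargle peak (mean-subtracted magnitudes)."""
    omega = 2.0 * np.pi / periods
    power = lombscargle(t, mag - mag.mean(), omega)
    return periods[np.argmax(power)]

def period_error(t, period, amp, sigma, n_trials=200, seed=1):
    """Monte-Carlo 1-sigma error: refit fake sinusoids with the same sampling."""
    rng = np.random.default_rng(seed)
    grid = period * np.linspace(0.98, 1.02, 2001)
    found = []
    for _ in range(n_trials):
        fake = amp * np.sin(2.0 * np.pi * t / period + rng.uniform(0.0, 2.0 * np.pi))
        fake += rng.normal(0.0, sigma, t.size)
        found.append(best_period(t, fake, grid))
    return np.std(found)

# Illustrative usage with synthetic epochs (the real analysis uses the nightly means):
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 6000.0, 500))                     # JD-like epochs
mag = 0.03 * np.sin(2 * np.pi * t / 4.92) + rng.normal(0.0, 0.03, t.size)
P = best_period(t, mag, np.linspace(4.8, 5.0, 4001))
print(P, period_error(t, P, amp=0.03, sigma=0.03))
```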
No clear peak can be found near the $`78`$ day X-ray period (Wijnands, Kuulkers & Smale 1996; see also Kong, Charles & Kuulkers 1998). However, other significant peaks are found at e.g. $`12.1`$ and $`35.3`$ days in $`V`$, $`10.1`$ and $`39`$ days in $`B`$, $`10.1`$, $`12.1`$, $`35.3`$ and $`125`$ days in $`U`$, and at aliases between the different periods (e.g. the peak at $`0.82`$ days in $`V`$ is the alias of half the orbital period and the $`35.3`$ days period). We note that the $`35`$ day period is close to the second significant peak in X-rays reported by Wijnands et al. (1996).
### 2.3 Mean light curve
Since only the $`B`$ and $`V`$ band data shows significant orbital variations, we will concentrate only on mean light curves from these two bands. As noted by Goranskii & Lyutyi (1988), Cyg X-2 shows a strong concentration of data points towards the lowest magnitudes, which they called the “quiet state” (see Figure 1 of Goranskii & Lyutyi 1988 and Figure 2). On top of that Cyg X-2 displays increases in brightness on time-scales ranging from $`5`$ days to $`10`$ days, flares lasting less than a day, and drops in brightness for a few days (Goranskii & Lyutyi 1988).
As can be seen in Figure 2, the lower envelope of the data points shows most clearly the ellipsoidal modulations. Rather than constructing mean light curves from data selected from prolonged quiet states in the $`B`$ band (Goranskii & Lyutyi 1988), we used a more unbiased method.
We computed nightly averages of the observed times and magnitudes, in order to avoid larger weights to nights with many measurements. The nightly averages were phase folded on the ephemeris given by Crampton & Cowley (1980), where we used the definition for phase zero as inferior conjunction of the companion star, which is the time at which the companion star is closest to us. The nightly averages were binned into 20 phase bins, and we determined the lower envelope of the curve by taking the first 6 lowest magnitudes per bin. We then fitted a sine wave to the lower envelope, and subtracted it from the nightly averages. Finally, we discarded those data points which were greater than $`13`$ times and $`9`$ times the rms in the mean of the whole sine subtracted data sample, for the $`B`$ and $`V`$ band, respectively (there was very little change in the mean light curve when slightly different thresholds were used). The resulting mean $`B`$ and $`V`$ band folded light curves of the accepted nightly averages, which corresponds to the so-called “quiescent state”, are shown in Figure 3. We present the folded $`B`$ and $`V`$ light curves in tabular form in Tables 1 and 2, respectively.
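The folding and lower-envelope construction can be summarized as in the sketch below. Treating the "lower envelope" as the mean of the few points at the quiescent (faintest, least disc-contaminated) end of each phase bin is an interpretation of the procedure described above; the bin and point counts follow the numbers quoted there.

```python
import numpy as np

def quiescent_fold(jd, mag, t0, period, nbins=20, nlow=6):
    """Fold nightly means on the orbital ephemeris and trace the quiescent
    envelope: per phase bin, average the nlow points at the faint end."""
    phase = ((jd - t0) / period) % 1.0
    edges = np.linspace(0.0, 1.0, nbins + 1)
    env_phase, env_mag = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = mag[(phase >= lo) & (phase < hi)]
        if sel.size:
            env_phase.append(0.5 * (lo + hi))
            env_mag.append(np.sort(sel)[-nlow:].mean())   # faintest nlow points
    return np.array(env_phase), np.array(env_mag)
```

A sine fit to this envelope can then be subtracted from the nightly means before the outlier clipping described above.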
## 3 Ellipsoidal variations
### 3.1 Introduction and outline of model
The folded light curves shown in Figure 3 have the well-known signature of ellipsoidal modulations with maxima at phases 0.25 and 0.75 (the quadrature phases) and minima at phases 0.0 and 0.50 (the conjunction phases). The origin of ellipsoidal variations is easy to understand. The secondary star fills its critical Roche lobe (there is ongoing mass transfer) and hence is greatly distorted. As it moves around in its orbit its projected area on the sky (and hence the total observed flux) changes. If the secondary star fills its Roche lobe and is in synchronous rotation, then the amplitude of its ellipsoidal light curve is only a function of the orbital inclination ($`i=90^{}`$ for a system seen edge-on) and the binary mass ratio. Thus models of the ellipsoidal variations offer a way to measure the inclination and mass ratio. However, the observed light curve may not be totally due to ellipsoidal modulations from the secondary star. For example, the addition of a substantial amount of extra light from the accretion disc will reduce the overall observed light curve amplitude. X-ray heating of the secondary star can sometimes radically alter the shape of the observed light curve (Avni & Bahcall 1975; Avni 1978). One must therefore account for extra sources of light when modelling X-ray binary (optical) light curves. The folded light curves shown in Figure 3 show the secondary star in Cyg X-2 with the least contamination from the accretion disc (and possibly the least effects of the irradiation). It has already been shown that using the lower envelope of the folded light curve for inclination estimates is a good approximation, when compared to ellipsoidal variations with no disc or irradiation contamination (e.g. Pavlenko et al. 1996). We will discuss below models of the folded light curves shown in Figure 3 and the inclination (and hence mass) constraints we derive from them.
We modeled the light curves using the modified version of the Avni (1978) code described in Orosz & Bailyn (1997). This code uses full Roche geometry to describe the shape of the secondary star. In addition, the code accounts for light from a circular accretion disc and for extra light from the secondary star due to X-ray heating. The parameters for the model are the parameters which determine the geometry of the system: the mass ratio $`Q=M_x/M_c`$<sup>1</sup><sup>1</sup>1Following Avni, we use in this section the upper case $`Q`$ to denote the mass ratio defined by the mass of the compact star divided by the mass of the secondary star. CCK98 and others use the lower case $`q`$ to denote the inverse., the orbital inclination $`i`$, the Roche lobe filling factor $`f`$, and the rotational velocity of the secondary star; the parameters which determine the light from the secondary star: its polar temperature $`T_{\mathrm{pole}}`$, the linearized limb darkening coefficients $`u(\lambda )`$, and the gravity darkening exponent $`\beta `$; the parameters which determine the contribution of light from the accretion disc: the disc radius $`r_d`$, flaring angle of the rim $`\beta _{\mathrm{rim}}`$, the temperature at the outer edge $`T_d`$, and the exponent on the power-law radial distribution of temperature $`\xi `$, where $`T(r)=T_d(r/r_d)^\xi `$; and parameters related to the X-ray heating: the X-ray luminosity of the compact object $`L_x`$, the orbital separation (determined from the optical mass function, the mass ratio, and the inclination), and the X-ray albedo $`W`$.
For simplicity, we set many of the model parameters at reasonable values. For example, we assume the secondary star is in synchronous rotation and completely fills its Roche lobe since there is ongoing mass transfer. We initially fix the mass ratio at $`Q=2.94`$, the value found by CCK98. Thus the only remaining geometrical parameter is the inclination $`i`$. Based on the secondary star’s spectral type (A9III, CCK98) we fix its polar temperature at 7000K. We use limb darkening coefficients taken from Wade & Rucinski (1985). The secondary star has a radiative envelope, so the gravity-darkening exponent was set to 0.25 (von Zeipel 1924).
### 3.2 Upper and lower bounds on the inclination
We begin the light curve fitting by considering a simple model first. We fit the $`B`$ and $`V`$ light curves separately to a model with no disc light and no X-ray heating. The only free parameter is the inclination $`i`$. In this case, the best derived value of $`i`$ is a lower limit since the amplitude of the light curve gets smaller as the inclination is decreased or if extra light from the accretion disc is added. We find $`i\gtrsim 42^{\circ }`$ for $`B`$ and $`i\gtrsim 49^{\circ }`$ for $`V`$. Thus we adopt $`i\gtrsim 49^{\circ }`$ as a lower limit. The upper limit on $`i`$ based on the lack of X-ray eclipses is $`i\lesssim 73^{\circ }`$ (for a mass ratio of $`Q=2.94`$, see CCK98). We therefore conclude $`49^{\circ }\lesssim i\lesssim 73^{\circ }`$.
### 3.3 Addition of disc light
We now consider models where light from the disc is accounted for. The disc parameters discussed above specify the disc radius, thickness, the temperature of the rim, and temperature profile. At each phase, the disc is divided up into a grid of 6300 surface elements (90 divisions in azimuth times 70 divisions in the radial direction). The local intensities are computed assuming backbody spectra (with corrections for limb darkening) at 128 different wavelengths. The observed flux at each wavelength from the disc is obtained by summing over all of the visible surface elements, creating a model disc “spectrum.” The model disc spectrum is added to the model star spectrum (constructed in a similar way with local blackbody spectra and limb darkening corrections), creating the “composite” spectrum. This composite spectrum is integrated with the standard normalized $`UBVRI`$ filter response functions to obtain the final model $`UBVRI`$ fluxes. This method of computing the final fluxes is more precise than simply computing the fluxes at the “effective” wavelengths of the filters because the effective wavelengths of the filters depend on the exact shape of the input spectrum. This technique also can easily handle cases where the disc spectrum has a rather different shape than the stellar spectrum.
In cases where one has observations in several filters, the value of the radial exponent $`\xi `$ can be quite well constrained (Orosz & Bailyn 1997). However, in the case of Cyg X-2 where we only have $`B`$ and $`V`$ light curves, we found that the value of $`\xi `$ was not well constrained (additional observations in $`R`$ and $`I`$ would help constrain $`\xi `$, see below). We therefore initially fixed $`\xi `$ at $`\xi =-0.75`$, which is the value appropriate for a steady-state disc (Pringle 1981).
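A stripped-down version of the disc flux computation described above: multi-temperature blackbody annuli with $`T(r)=T_d(r/r_d)^\xi `$, integrated against a filter response curve. It ignores inclination, limb darkening and the disc rim, and all numerical values (radii, temperatures, the Gaussian "V" response) are illustrative, not the fitted Cyg X-2 parameters.

```python
import numpy as np

H_PLANCK, C_LIGHT, K_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs units

def planck_lambda(wave_cm, T):
    x = H_PLANCK * C_LIGHT / (wave_cm * K_B * T)
    return 2.0 * H_PLANCK * C_LIGHT**2 / wave_cm**5 / np.expm1(x)

def disc_spectrum(wave_cm, r_in, r_out, T_out, xi, n_r=200):
    """Face-on flux spectrum of a flat disc with T(r) = T_out (r/r_out)^xi,
    summing blackbody annuli."""
    r = np.linspace(r_in, r_out, n_r)
    T = T_out * (r / r_out) ** xi
    area = 2.0 * np.pi * r * np.gradient(r)
    return (planck_lambda(wave_cm[:, None], T[None, :]) * area[None, :]).sum(axis=1)

def band_flux(wave_cm, spectrum, response):
    """Integrate a spectrum against a normalized filter response curve."""
    w = response * np.gradient(wave_cm)
    return np.sum(spectrum * w) / np.sum(w)

wave = np.linspace(3000e-8, 9000e-8, 500)                         # cm
spec = disc_spectrum(wave, r_in=1e9, r_out=5e10, T_out=1.2e4, xi=-0.75)
resp_V = np.exp(-0.5 * ((wave - 5500e-8) / 450e-8) ** 2)          # crude "V" band
print(band_flux(wave, spec, resp_V))
```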
### 3.4 Addition of X-ray heating
We argue that even though Cyg X-2 is a strong X-ray source, X-ray heating is relatively unimportant with regards to the Cyg X-2 optical light curves. It is well known that strong X-ray heating of the secondary star in an X-ray binary can lead to distortions in the secondary star’s observed radial velocity curve (e.g. Bahcall, Joss, & Avni 1974) and to changes in the observed spectral type of the secondary as a function of orbital phase (e.g. Crampton & Hutchings 1974). However, in the case of Cyg X-2, no distortions in the radial velocity curve or changes in the spectral type were observed by CCK98. X-ray heating of the secondary star can also alter the observed optical light curve by adding light near the photometric phase 0.5 (Avni 1978). There is no evidence for any significant excess light near $`\varphi =0.5`$ in the folded light curves displayed in Figure 3. There are two main reasons why X-ray heating seems to be unimportant in Cyg X-2. First of all, Cyg X-2 has a much larger orbital separation than other X-ray binaries such as Her X-1, so the X-ray flux (i.e. the number of X-ray photons per unit surface area) at the surface of the Cyg X-2 secondary star will be smaller. Secondly, many X-ray binaries are thought to have relatively thick accretion discs (e.g. Motch et al. 1987; de Jong, Augusteijn & van Paradijs 1996). The thick discs may shield the secondary star from much (if not most) of the X-rays from the central source. The accretion disc may also be warped (e.g. Wijers & Pringle 1998), in which case the secondary star may be shielded from the X-rays from the central source.
Because X-ray heating is a small effect here, we will use a simplified computational procedure. We assume that all of the X-rays come from a point centred on the neutron star. The flux of X-rays on each point on the secondary star that can see the central X-ray source is
$$F_{\mathrm{irr}}=\mathrm{\Gamma }\frac{L_x}{d^2},$$
(2)
where $`d`$ is the distance between the point in question and the centre of the neutron star, $`L_x`$ is the X-ray luminosity of the central source, and where $`\mathrm{\Gamma }`$ is cosine of the angle between the surface normal and the direction to the central source. The rim of the accretion disc can shield the points on the secondary star that are near the orbital plane, preventing them from seeing the X-rays from the central source. $`F_{\mathrm{irr}}=0`$ in these cases. The flux of X-ray photons on a particular surface element on the secondary star (specified by the coordinates ($`x,y,z`$)) causes the local temperature to rise according to
$$T_{\mathrm{X}\mathrm{ray}}^4(x,y,z)=T_{\mathrm{pole}}^4\left[\frac{g(x,y,z)}{g_{\mathrm{pole}}(x,y,z)}\right]^{4\beta }+\frac{WF_{\mathrm{irr}}}{\sigma }.$$
(3)
$`W`$ is the X-ray albedo, $`g`$ is the gravity, and $`\sigma `$ is the Stefan-Boltzmann constant (Zhang et al. 1986; Orosz & Bailyn 1997). We see that the amount the local temperature is raised depends on the value of the product $`WF_{\mathrm{irr}}`$. Thus it is possible to derive different values of $`L_x`$ by using different values of $`W`$—the final model light curves are identical as long as the product $`WF_{\mathrm{irr}}`$ remains constant. It is also possible to obtain larger values of $`L_x`$ by using a thicker accretion disc, since thicker accretion discs would shield larger parts of the secondary star. However, in this case, the final model light curves do have subtle differences. We adopt an X-ray albedo of $`W=0.50`$ for definiteness and initially fix the X-ray luminosity at $`\mathrm{log}L_x=38.3`$, which is roughly the Eddington luminosity for a $`1.5M_{\odot }`$ neutron star.
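A schematic implementation of the heating prescription of Eqs. (2) and (3) for a single surface element. Note that the irradiating flux is written here with an explicit $`1/(4\pi d^2)`$ normalization, which is an assumption of this sketch rather than something spelled out in Eq. (2), and the example numbers are arbitrary.

```python
import numpy as np

SIGMA_SB = 5.67e-5   # erg cm^-2 s^-1 K^-4

def heated_temperature(T_pole, g_ratio, beta, L_x, d_cm, mu, W, shielded):
    """Local temperature of a surface element with X-ray heating (cf. Eq. 3).

    g_ratio  = g / g_pole
    mu       = cosine of the angle between surface normal and the X-ray source
    shielded = True if the disc rim blocks the element's view of the source
    The 1/(4*pi*d^2) factor below is this sketch's normalization choice.
    """
    T4 = T_pole**4 * g_ratio ** (4.0 * beta)
    if not shielded and mu > 0.0:
        F_irr = mu * L_x / (4.0 * np.pi * d_cm**2)
        T4 += W * F_irr / SIGMA_SB
    return T4 ** 0.25

# A shielded element keeps its gravity-darkened temperature; an unshielded
# element near the substellar point would be strongly heated, which is why
# the disc-rim shielding discussed in the text matters.
print(heated_temperature(7000.0, 0.8, 0.25, 10**38.3, 1.3e12, 0.9, 0.5, shielded=True))
```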
### 3.5 Light curve fitting and computation of confidence regions
We fit the $`B`$ and $`V`$ light curves simultaneously. Models using a wide range of the input parameters were computed and parameter sets which gave relatively low values of $`\chi ^2`$ were noted. Two different optimization routines were used: one based on the “grid search” algorithm and one based on the Levenberg-Marquardt algorithm (both adopted from Bevington 1969). The optimization routines were then started using each of these parameter sets as an initial guess. Our best-fitting model (arrived at after several weeks of computation) is shown in Figure 4. Table 3 gives the values of the free parameters (which are the inclination $`i`$, the radius of the disc as a fraction of the Roche lobe radius $`r_d`$, the opening angle of the disc $`\beta _{\mathrm{rim}}`$, and the temperature of the outer edge of the disc $`T_d`$). The model fits reasonably well, with $`\chi ^2=40.93`$ for 36 degrees of freedom. The standard deviations of the residuals are 0.024 mag for $`B`$ and 0.026 mag for $`V`$. We are reasonably certain the global $`\chi ^2`$ minimum was found, based on the large amount of parameter space searched.
The parameter errors given in Table 3 were computed with the Levenberg-Marquardt optimizer program. The sizes of the errors depend on the sizes of the parameter increments one uses to compute the numerical derivatives. As a check on these errors, we also estimated $`1\sigma `$ and $`2\sigma `$ confidence regions in the following way. A grid in the inclination-mass ratio plane was defined with inclinations between $`49^{\circ }`$ and $`74^{\circ }`$ in steps of $`0.25^{\circ }`$ and mass ratios of 2.66, 2.94 and 3.22. Then the light curves were fit with the inclination and mass ratio fixed at the values corresponding to each grid point. In each case, the other parameters were allowed to vary until $`\chi ^2`$ was minimized. This iteration was started at the point where $`i=62.46^{\circ }`$ and $`Q=2.94`$. Then the fit was optimized at a neighbour point using the parameters at the lowest neighbour point as the initial guess for the optimization routine. The iteration was continued until the entire grid was filled up. We found that the value of $`\chi ^2`$ did not depend strongly on the mass ratio for a given inclination. We show in Figure 5 $`\chi ^2`$ vs. $`i`$ for $`Q=2.94`$ (the solid line and filled points). We also show a parabolic fit to the $`\chi ^2`$ values between $`55^{\circ }`$ and $`70^{\circ }`$ (dash-dotted line). This fit shows that the $`\chi ^2`$ vs. $`i`$ curve is roughly parabolic near the minimum. We see that $`\chi ^2=\chi _{\mathrm{min}}^2+1`$ at $`i\approx 59^{\circ }`$ and at $`i\approx 65^{\circ }`$. Thus $`59^{\circ }\lesssim i\lesssim 65^{\circ }`$ is an approximate $`1\sigma `$ confidence region. A rough $`2\sigma `$ confidence region is $`56^{\circ }\lesssim i\lesssim 68^{\circ }`$ where $`\chi ^2=\chi _{\mathrm{min}}^2+4`$ at the endpoints. The value of $`\chi ^2`$ increases sharply as the inclination grows beyond $`68^{\circ }`$ since the model predicts deep eclipses that are not observed. The rough $`1\sigma `$ errors of $`\pm 3^{\circ }`$ are not too different from the $`1\sigma `$ errors of $`\pm 4^{\circ }`$ computed from the Levenberg-Marquardt program. We will adopt $`1\sigma `$ errors of $`\pm 4^{\circ }`$ for the sake of discussion below.
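The $`\mathrm{\Delta }\chi ^2=1`$ interval described above can be read off from a parabolic fit to $`\chi ^2(i)`$; a minimal sketch follows. The $`\chi ^2`$ values generated below are synthetic, produced only to demonstrate the procedure, not the actual fit results.

```python
import numpy as np

def chi2_interval(i_grid, chi2, delta=1.0):
    """Confidence interval from a parabolic fit to chi^2(i), reading off where
    chi^2 = chi^2_min + delta (delta = 1 for one interesting parameter)."""
    a, b, c = np.polyfit(i_grid, chi2, 2)
    i_best = -b / (2.0 * a)
    half_width = np.sqrt(delta / a)
    return i_best, i_best - half_width, i_best + half_width

# Illustrative chi^2 curve (a parabola plus noise)
i_grid = np.arange(55.0, 70.25, 0.25)
chi2 = 40.9 + ((i_grid - 62.5) / 3.0) ** 2
chi2 += np.random.default_rng(3).normal(0.0, 0.05, i_grid.size)
print(chi2_interval(i_grid, chi2))
```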
### 3.6 The system geometry: A grazing eclipse of the disc?
We show in Figure 6 a schematic diagram of the binary system (using the model parameters given in Table 3) as it appears on the plane of the sky at phase 0.0. In this geometry we expect a grazing eclipse of the disc. The grazing eclipse of the disc results in a $`V`$ light curve that is $`0.02`$ mag deeper at the photometric phases 0.0 and 0.5 than the uneclipsed light curve. In principle, one would expect to observe characteristic changes in the disc emission line profile (e.g. H$`\alpha `$) as a function of phase for eclipsing systems (Young & Schneider 1980). However, it is not clear if the predicted grazing eclipse in Cyg X-2 is deep enough to lead to easily observable changes in the H$`\alpha `$ line profile. Future spectroscopic observations at the correct orbital phases may help to further constrain the inclination if one could show that partial eclipses do or do not occur.
### 3.7 Changes in the disc temperature profile and secondary star temperature
We did some numerical experiments to explore how the model fits depend on the parameter $`\xi `$. The reason for these experiments is that the disc in Cyg X-2 is probably strongly irradiated by the central X-ray source, and it is likely that the temperature profile of the disc is changed. Vrtilek et al. (1990) showed that the temperature profile of an irradiated accretion disc is like $`T(r)\propto r^{-3/7}`$ rather than the familiar $`T(r)\propto r^{-3/4}`$ for a steady-state non-irradiated disc. [Recently, Dubus et al. (1998) argued that a non-warped disc irradiated by a point X-ray source powered by accretion is unchanged by the irradiation. The observation that the accretion discs in most LMXBs are affected by irradiation leads Dubus et al. (1998) to conclude that the discs are either warped or that the central X-ray source is not point-like.] We computed models where the parameter $`\xi `$ was fixed at several values between $`-0.425`$ and $`-0.750`$ and where the other parameters were adjusted to minimize $`\chi ^2`$ (the mass ratio was fixed at $`Q=2.94`$). As $`\xi `$ moves from $`-0.75`$ to $`-0.425`$, the value of $`\chi ^2`$ of the fit increases and the best-fitting value of the inclination $`i`$ decreases. When $`\xi =-3/7\approx -0.425`$, $`i=54.6^{\circ }`$ and $`\chi ^2=48.4`$ for 36 degrees of freedom. We did not estimate confidence regions for the inclination for the case when $`\xi =-3/7`$.
We also computed models where the polar temperature of the secondary star was slightly altered from its nominal value of 7000 K. When $`T_{\mathrm{pole}}=6700`$ K (corresponding roughly to a spectral type of F2), the best-fitting inclination was $`i=63.6^{}`$ with $`\chi ^2=40.90`$ for 36 degrees of freedom, slightly lower than the value of 40.93 found above for the model with $`T_{\mathrm{pole}}=7000`$ K. When $`T_{\mathrm{pole}}=7400`$ K (corresponding roughly to a spectral type of A7), the best-fitting inclination was $`i=61.5^{}`$ with $`\chi ^2=41.35`$ for 36 degrees of freedom. Thus our results do not depend strongly on the adopted value of the secondary star’s polar temperature.
### 3.8 Comparison of observed and model disc fraction
We have an independent check on the light curve models. Since the spectrum of the star and the spectrum of the disc are computed separately, we can easily predict what fraction of the flux at a given wavelength is due to the disc (we refer to this number as the “disc fraction” and denote it by $`k`$). Observationally, the disc fraction in a given bandpass can be measured using the optical spectrum of the source and the spectra of suitable template stars (Marsh, Robinson, & Wood 1994). We obtained two spectra of Cyg X-2 1997 on November 1 and 3 with the 2.7m telescope at the McDonald Observatory (Fort Davis, Texas) using the Large Cass Spectrograph, a 600 groove/mm grating (blazed at 4200 Å), and the TI1 $`800\times 800`$ CCD. The spectral resolution is $`3.5`$ Å (FWHM) with wavelength coverage 3525-4940 Å. The signal-to-noise ratios are $`50`$ per pixel near the He II emission line at 4686 Å. We also observed the A9III star HR 2489 on both nights. CCK98 found that the spectrum of this star best matched the absorption line spectrum of Cyg X-2. We normalized each spectrum to its continuum fit and applied the technique of Marsh, Robinson, & Wood (1994) to decompose the spectra of Cyg X-2 into the disc and stellar components. We found a disc fraction of $`k_B=0.36\pm 0.05`$ from the November 1 spectrum (orbital phase $`=0.65`$) and $`k_B=0.37\pm 0.05`$ for the November 3 spectrum (orbital phase $`=0.85`$). We note that these estimates of $`k_B`$ represent an average value over most of the $`B`$ bandpass. For comparison, J. Casares (private communication) finds from the much higher quality spectra published in CCK98 that the disc fraction in the $`B`$ and $`R`$ bands was variable, with $`0k_B0.30`$ and $`0k_R0.40`$ for the $`B`$ and $`R`$ bands, respectively. For this discussion we adopt $`k_B=0.30`$ and $`k_R=0.30`$.
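A schematic illustration of how such a disc fraction can be estimated from continuum-normalized spectra is given below (this is our own sketch of the general idea, not the actual procedure or data of Marsh, Robinson, & Wood 1994; the arrays and the structure metric are placeholders):

```python
import numpy as np

def disc_fraction(target, template, trial_fractions=np.linspace(0.0, 1.0, 101)):
    """Rough disc-fraction estimate from continuum-normalized spectra.

    target, template: 1-D arrays of the normalized object and template-star
    spectra on a common wavelength grid.  For each trial stellar fraction f,
    subtract the scaled photospheric line spectrum f*(template - 1); the f
    that best cancels the absorption lines (here: minimal residual variance,
    a crude proxy for "no remaining line structure") gives k = 1 - f.
    """
    best_f, best_score = 0.0, np.inf
    for f in trial_fractions:
        residual = target - f * (template - 1.0)
        score = np.sum((residual - np.median(residual)) ** 2)
        if score < best_score:
            best_f, best_score = f, score
    return 1.0 - best_f
```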
We show in Figure 7 the model disc fraction as a function of wavelength for two models: the best-fitting solution with $`\xi =-0.75`$ and the best-fitting solution with $`\xi =-0.425`$. We also show the standard $`B`$, $`V`$, and $`R`$ filter response functions. The disc fractions for the $`\xi =-0.75`$ model are $`k_B\approx 0.55`$ and $`k_R\approx 0.35`$. Both of these values are somewhat larger than the observed values. The disc fractions in $`B`$ and $`R`$ for the $`\xi =-0.425`$ model are slightly larger than $`0.3`$, much closer to the upper ranges of the observed values. However, the fit for the $`\xi =-0.425`$ model is worse than the fit for the $`\xi =-0.75`$ model ($`\chi ^2=48.44`$ compared to $`\chi ^2=40.92`$ for 36 degrees of freedom).
### 3.9 The puzzling lack of X-ray heating
We have argued above that X-ray heating of the secondary star in Cyg X-2 is not a large effect since there is no observed change in the spectral type as a function of orbital phase and because there does not appear to be a large amount of excess light at the photometric phase 0.5. The effect of the X-ray heating in the model given in Table 3 is small—there is about $`0.01`$ to 0.02 magnitudes of excess light added near phase 0.5 compared to the light curve without heating. This is in spite of the fact that the neutron star is accreting at nearly the Eddington rate (see Smale 1998). In our model the rim of the disc shields most of the secondary star from the central X-ray source \[although it is also likely that the disc is warped (Dubus et al. 1998), our model currently only uses an axisymmetric disc\].
It turns out that solving the “problem” of no X-ray heating of the secondary star leads to another puzzle. Namely, it is clear observationally that the disc in Cyg X-2 is slightly fainter in the optical than the secondary star (i.e. the observed disc fractions in $`B`$ and $`R`$ are $`\approx 0.3`$). However, Jan van Paradijs pointed out to us that the relative faintness of the disc is somewhat surprising since one would expect a substantial amount of light from the reprocessing of the X-rays absorbed by the disc. The He II $`\lambda 4686`$ Å line is in emission, so presumably there is at least some X-ray reprocessing. Based on the relations given in van Paradijs & McClintock (1994), the absolute $`V`$ magnitude of the accretion disc should be $`M_V=-2.02\pm 0.56`$. We compute $`M_V=+0.38\pm 0.35`$ for the disc, based on our model parameters (the secondary star by itself has $`M_V=-0.54\pm 0.24`$, see Section 4.2). Thus the disc in Cyg X-2 is about a factor of 9 fainter in $`V`$ than expected based on the simple scaling laws given in van Paradijs & McClintock (1994). It is possible that much of the reprocessed flux from the disc is emitted at shorter wavelengths than $`B`$, which might account for some of the mismatch between the observed and expected disc brightness.
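The quoted factor follows directly from the magnitude difference:

$$\Delta M_V=0.38-(-2.02)=2.40\ \mathrm{mag},\qquad 10^{0.4\,\Delta M_V}\approx 9.$$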
Presently, the code does not self-consistently account for optical flux from the disc due to reprocessing of absorbed X-rays. The brightness of the disc is set based on the temperature at the outer edge, the radial temperature profile, and the disc radius, and is independent on the adopted value of $`L_x`$. The code is “flexible” in the sense that the power-law exponent on the temperature radial profile can be adjusted to approximate the changes in the disc caused by irradiation. As we showed above, the model disc fractions are not too different than what is observed. It therefore appears that this lack of a self-consistent computation of the disc flux is not a major problem.
### 3.10 Summary of model fitting and discussion
The fits to the $`B`$ and $`V`$ light curves are better if we assume that the disc is in a steady state where $`T(r)\propto r^{-0.75}`$. However, in this case the model disc is bluer than what is observed since our predicted disc fraction in $`B`$ ($`k_B\approx 0.55`$) is larger than what is observed ($`k_B\approx 0.30`$). If we assume that the disc is strongly irradiated so that $`T(r)\propto r^{-0.425}`$, then the model disc is redder and the predicted disc fraction in $`B`$ ($`k_B\approx 0.32`$) is closer to what is observed. However, in this case, the $`\chi ^2`$ of the fit is much worse ($`\chi ^2=48.4`$ compared to $`\chi ^2=40.9`$ with 36 degrees of freedom for the steady-state case). Fortunately, the best-fitting values of the inclination are not that different in the two cases: $`i\approx 62.5^{\circ }`$ for the steady-state disc case and $`i\approx 54.6^{\circ }`$ for the irradiated case, which is in the $`2\sigma `$ range of the steady-state disc case. For the sake of the discussion in Section 4 we adopt the $`1\sigma `$ inclination from the steady-state case ($`i=62.5^{\circ }\pm 4^{\circ }`$, where we assume the errors are Gaussian) because of the lower $`\chi ^2`$.
It is important to recall here that observed optical light curves consist mainly of two components: the light from the distorted secondary star and the light from the accretion disc. We have argued that there is very little extra light due to X-ray heating of the secondary star. We assume that the light from the secondary star is modulated in phase while the light from the disc is not (with the possible exception of a grazing eclipse). Thus to model the observed light curves we should compute the relative amounts of disc light and secondary star light at each observed wavelength region for every observed phase.
In our current model we compute the disc light at every observed wavelength region by specifying four parameters: the disc radius in terms of the neutron star Roche lobe radius, the opening angle of the disc rim, the temperature profile of the disc, and the temperature of the disc rim. The flux at each grid point across the disc is computed from the local temperature assuming a blackbody spectrum. The code does not account for flux from the disc due to reprocessing of absorbed X-rays from the central source. Typically, the spectrum of the disc (in the optical) will have a rather different shape than the spectrum of the secondary star. Hence one should have observed light curves in as many wavelength bands as possible in order to better determine the shapes of the disc and stellar spectra. Eclipsing systems with well-defined and smooth light curves like GRO J1655-40 (Orosz & Bailyn 1997; van der Hooft et al. 1998) offer additional constraints on the disc radius, thickness, and temperature.
On the other hand, there is no particular reason to adopt our parameterization of the disc since the important quantity is the relative amount of disc and star light at a particular wavelength. In fact, by using suitably high quality spectra one could simply measure the disc fraction $`k_\lambda `$ at several different wavelengths covering the bandpasses of the observed light curves. In this case the model disc spectrum would be constructed from the model star spectrum since the quantity $`k_\lambda =f_{\mathrm{disc}}/(f_{\mathrm{disc}}+f_{\mathrm{star}})`$ is known an each wavelength point (here $`f_{\mathrm{disc}}`$ and $`f_{\mathrm{star}}`$ refer to the model fluxes from the disc and star, respectively, at a given orbital phase). Then, as before, the model disc spectrum is added to the model star spectrum and the resulting composite spectrum is integrated with the filter response curves to produce model fluxes in each bandpass. Thus one could eliminate the model parameters $`T_d`$ and $`\xi `$. We note that one still needs a model disc to account for the effects of X-ray shadowing by the disc rim and possibly the slight loss of flux due to the eclipse, so the model parameters $`\beta _{\mathrm{rim}}`$ and $`r_d`$ are still needed.
Thus, future modelling of the Cyg X-2 light curves can be improved by observing the light curves in more colours (i.e. at least in $`B`$, $`V`$, $`R`$, and $`I`$ and possibly also in the infrared), and by obtaining quasi-simultaneous spectroscopic observations of Cyg X-2 and template stars over a wide wavelength range. In practice, one needs observations over many orbital cycles in order to average out the variations in the observed disc fraction and to define the lower light curve envelopes.
## 4 Discussion
### 4.1 Mass of the neutron star
The masses of the neutron star and secondary star can be computed from the optical mass function, the mass ratio, and the inclination (CCK98):
$$M_x=\frac{f(M)(1+q)^2}{\mathrm{sin}^3i}=(1.24\pm 0.09\,M_{\odot })(\mathrm{sin}i)^{-3}.$$
(4)
Using our $`1\sigma `$ limits on the inclination ($`i=62.5^{\circ }\pm 4^{\circ }`$) we find $`M_x=1.78\pm 0.23M_{\odot }`$ and $`M_c=0.60\pm 0.13M_{\odot }`$. The extreme range of allowed inclinations ($`49^{\circ }\leq i\leq 73^{\circ }`$) implies an extreme mass range allowed for the neutron star of $`1.42\pm 0.10M_{\odot }\leq M_x\leq 2.88\pm 0.21M_{\odot }`$.
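A minimal numerical sketch of this mass determination (ours; Gaussian, independent errors on $`f(M)`$, $`q`$ and $`i`$ are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
fM = rng.normal(0.69, 0.03, n)                 # optical mass function [M_sun] (CCK98)
q  = rng.normal(0.34, 0.04, n)                 # mass ratio M_c/M_x (CCK98)
i  = np.radians(rng.normal(62.5, 4.0, n))      # inclination from the light-curve fit

Mx = fM * (1.0 + q) ** 2 / np.sin(i) ** 3      # neutron star mass
Mc = q * Mx                                    # secondary star mass

print("M_x = %.2f +/- %.2f M_sun" % (Mx.mean(), Mx.std()))
print("M_c = %.2f +/- %.2f M_sun" % (Mc.mean(), Mc.std()))
```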
Since the values of the optical mass function and the mass ratio were fairly well determined by CCK98, the largest uncertainty on $`M_x`$ is the value of the inclination $`i`$ one chooses. Thus it is instructive to see how the mass of the neutron star varies as a function of the inclination. We plot in Figure 8 the mass of the neutron star as a function of the inclination. We also indicate the $`1\sigma `$ errors at several different inclinations. The mass of the neutron star in Cyg X-2 is consistent at the $`1\sigma `$ level with the canonical neutron star mass of $`1.35M_{\odot }`$ (Thorsett & Chakrabarty 1998) for inclinations greater than $`70^{\circ }`$. However, the fits to the light curves get increasingly worse as the inclination grows larger than $`68^{\circ }`$ (i.e. the sharp increase in $`\chi ^2`$ displayed in Figure 5), so it is unlikely that the inclination of Cyg X-2 is much larger than $`68^{\circ }`$. At the lower value of the $`1\sigma `$ inclination range for our model with the unirradiated disc ($`i=58.5^{\circ }`$), the neutron star mass is $`M_x=2.00\pm 0.15M_{\odot }`$, which is more than $`4\sigma `$ larger than the canonical mass of $`1.35M_{\odot }`$.
For most of the parameter space in $`i`$, $`f(M)`$, and $`q`$, the mass of the neutron star in Cyg X-2 exceeds the average mass of the neutron stars in binary radio pulsars. Thus Cyg X-2 contains a rare example of a “massive” neutron star. Perhaps the best-known example of a massive neutron star is the high-mass X-ray binary Vela X-1 (4U 0900-40). Vela X-1 is an eclipsing system, and the neutron star is an X-ray pulsar. Dynamical mass measurements by van Kerkwijk et al. (1995b) gave a mass of $`M_x=1.9_{0.5}^{+0.7}`$ (95% confidence limits). A later analysis of IUE spectra by Stickland et al. (1997) gave a mass consistent with $`1.4M_{}`$ ($`1.34M_{}M_x1.53M_{}`$). However, a recent reanalysis of the UV data (Barziv et al. 1998, in preparation) gives $`M_x=1.9M_{}`$, consistent with the result of van Kerkwijk et al. (1995b). Barziv et al. (1998, in preparation) also find $`M_x=1.9M_{}`$ from optical spectra. The range of derived masses by different groups is an indication of the difficulty in analyzing the radial velocity curves of a high-mass companion star such as the one in Vela X-1. Another example of a possible massive neutron star is the eclipsing high-mass X-ray binary 4U 1700-37. The companion star (HD 153919) is an O6f star with a strong wind. Heap & Corcoran (1992) find $`M_x=1.8\pm 0.4M_{}`$. However, 4U 1700-37 does not pulse and its X-ray spectrum is harder than the spectra of typical X-ray pulsars. This lack of “neutron star signatures” has led Brown, Weingartner, & Wijers (1996) to speculate that 4U 1700-37 contains a “low-mass” black hole rather than a neutron star.
Recent computations of the neutron star and black hole initial mass function by Timmes, Woosley, & Weaver (1996) indicate that Type II supernovae give rise to a bimodal distribution of initial neutron star masses. The average masses for the two peaks are $`1.26\pm 0.06M_{}`$ and $`1.73\pm 0.08M_{}`$, respectively. Type Ib supernovae tend to produce neutron stars in the lower mass range. The masses derived by Timmes et al. (1996) do not include mass that may fall back onto the neutron star shortly after the supernova explosion. The mean mass for the lower-mass distribution of $`1.26\pm 0.06M_{}`$ agrees well with the mean mass of the binary radio pulsars of $`1.35\pm 0.04M_{}`$ determined by Thorsett & Chakrabarty (1998). The mass of the neutron star in Cyg X-2 seems to be significantly larger than both of these masses. However, the neutron star mass of $`M_x=1.78\pm 0.23M_{}`$ given above agrees well with the mean mass of $`1.73\pm 0.08M_{}`$ derived by Timmes et al. (1996) for the higher-mass peak of their bimodal distribution. Thus the current mass of the neutron star in Cyg X-2 might simply be the mass at its formation (within the framework of the models of Timmes et al. (1996)). Alternatively, Zhang, Strohmayer, & Swank (1997) point out that one would expect massive neutron stars to exist in systems where a neutron star formed at $`1.4M_{}`$ has been accreting at the Eddington rate for extended periods of time. If the kilohertz QPOs observed in Cyg X-2 and other neutron star LMXBs can be interpreted as the frequency of the last stable orbit of the inner accretion disc, then the neutron star masses in some X-ray binaries could be as large as $`2M_{}`$. Assuming the neutron stars were formed at $`1.4M_{}`$, the $`0.6M_{}`$ of extra matter is not an unreasonable amount to accrete in $`10^8`$ years (Zhang et al. 1997). One would basically have to know how long Cyg X-2 has been accreting at near Eddington rates in order to determine whether the neutron star formed at “low mass” ($`1.3M_{}`$) or “high mass” ($`1.7M_{}`$). It remains to be seen if a reliable age estimate can be derived from a binary evolution model of this system.
According to King et al. (1997), a companion star mass of at least $`0.75M_{}`$ is needed to maintain steady accretion in a neutron star low-mass X-ray binary like Cyg X-2. If $`M_c>0.75M_{}`$, then $`M_x>1.88M_{}`$ at the 95% confidence level (CCK98), which would require an inclination lower than about $`60^{}`$, near the lower end of the $`1\sigma `$ inclination range. We find $`M_c=0.60\pm 0.13M_{}`$ ($`1\sigma `$) using $`i=62.5^{}\pm 4^{}`$, which is $`1\sigma `$ smaller than minimum $`M_c`$ of King et al. (1997). The extreme range of allowed inclinations ($`49^{}i73^{}`$) implies an extreme mass range allowed for the secondary star of $`0.48\pm 0.09M_{}M_c0.98\pm 0.18M_{}`$. In principle, a measurement of the surface gravity of the secondary star would provide an independent estimate of $`M_c`$ since the density of the Roche-lobe filling secondary star in a semi-detached binary depends only on the orbital period to a good approximation (Pringle 1985). However, one would need to measure $`\mathrm{log}g`$ to better than $`0.03`$ dex to distinguish between $`M_c=0.60M_{}`$ and $`M_c=0.75M_{}`$.
### 4.2 The Distance to the Source
We can compute the distance to the source using the results of our model fitting. Once the inclination $`i`$ is given, we can compute the total mass of the system. The size of the semimajor axis $`a`$ is then computed from Kepler’s third law. We then use Eggleton’s (1983) formula to compute the effective radius of the secondary’s Roche lobe in terms of the orbital separation $`a`$:
$$\frac{R_{Rl}}{a}=\frac{0.49q^{2/3}}{0.6q^{2/3}+\mathrm{ln}(1+q^{1/3})}.$$
(5)
The intrinsic luminosity of the secondary star then follows from the Stefan-Boltzmann relation, where we assume $`T_{\mathrm{eff}}=7000\pm 250`$ K and a bolometric correction of $`0`$. To get the intrinsic luminosity of the entire system we must add light from the accretion disc. We do not have spectroscopic observations in the $`V`$ band available so we will interpolate between the measurements of $`k_B`$ and $`k_R`$ and adopt $`k_V=0.30\pm 0.05`$. Finally, the distance modulus can be computed after we account for interstellar extinction. The most complete study of the interstellar extinction in the direction of Cyg X-2 is that of McClintock et al. (1984). They derived a colour excess of $`E(B-V)=0.40\pm 0.07`$ based on spectra from the International Ultraviolet Explorer and on optical photometry and spectra of 38 nearby field stars. Assuming $`A_V=3.1E(B-V)`$, the $`1\sigma `$ $`A_V`$ range of McClintock et al. (1984) is $`1.02\leq A_V\leq 1.46`$. Using this $`A_V`$ range, we find a distance of $`d=7.2\pm 1.1`$ kpc, where we have adopted $`i=62.5^{\circ }\pm 4^{\circ }`$ and a mean apparent $`V`$ magnitude of 14.8 for the “quiescent” state (Figures 3, 4). The absolute $`V`$ magnitude of the system is $`M_V=-0.93\pm 0.25`$, and the absolute $`V`$ magnitudes of the components separately are $`M_V=+0.38\pm 0.35`$ for the disc, and $`M_V=-0.54\pm 0.24`$ for the secondary star, respectively. Finally, if we use the inclination derived using the irradiated disc ($`i=54.6^{\circ }`$), then we find a distance of $`d=7.9`$ kpc.
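The chain of steps can be reproduced with a short sketch (ours; central values only, a bolometric correction of zero as in the text, and $`A_V=1.24`$ taken as the midpoint of the McClintock et al. range):

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33                           # cgs
Lsun, sigma_sb = 3.828e33, 5.670e-5

Mx, Mc = 1.78 * Msun, 0.60 * Msun                      # adopted masses
P = 9.8444 * 86400.0                                   # orbital period [s]
a = (G * (Mx + Mc) * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)   # Kepler's third law

q = Mc / Mx                                            # Eggleton (1983) Roche radius
R_L = a * 0.49 * q**(2.0/3.0) / (0.6 * q**(2.0/3.0) + np.log(1.0 + q**(1.0/3.0)))

Teff = 7000.0                                          # K
L = 4.0 * np.pi * R_L**2 * sigma_sb * Teff**4
MV_star = 4.74 - 2.5 * np.log10(L / Lsun)              # M_bol(Sun) = 4.74, BC = 0

kV = 0.30                                              # adopted V-band disc fraction
MV_tot = MV_star + 2.5 * np.log10(1.0 - kV)            # add the disc light

V, AV = 14.8, 1.24                                     # mean quiescent V, extinction
d_pc = 10.0 ** ((V - MV_tot - AV + 5.0) / 5.0)
print("M_V(star) = %.2f, M_V(total) = %.2f, d = %.1f kpc"
      % (MV_star, MV_tot, d_pc / 1e3))
```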
Previous distance estimates include the distance derived by Cowley et al. (1979) of $`d=8.7\pm 2.0`$ kpc, where we have propagated their estimated error of about 0.5 mag on the bolometric magnitude (see Smale 1998). Our distance estimate is consistent with that of Cowley et al. (1979). Recently, Smale (1998) derived a distance of $`11.6\pm 0.3`$ kpc from observations of a type I radius-expansion burst in Cyg X-2, where he assumed a neutron star mass of 1.9 $`M_{\odot }`$ to derive the Eddington luminosity of the neutron star. Our derived distance is $`4\sigma `$ smaller than that of Smale (1998).
To facilitate a comparison between the various distance estimates, we plot in Figure 9 the distance to Cyg X-2 as a function of the visual extinction $`A_V`$. Since the distance to the source depends weakly on the assumed inclination and somewhat strongly on the assumed disc fraction, we show four $`d`$ vs. $`A_V`$ curves: (1) $`i=70^{}`$, $`k_V=0.3`$; (2) $`i=63^{}`$, $`k_V=0.3`$; (3) $`i=63^{}`$, $`k_V=0.5`$; and (4) $`i=63^{}`$, $`k_V=0.7`$. The computed distance to the source decreases sharply as $`A_V`$ grows larger. If the $`1\sigma `$ $`A_V`$ range of McClintock et al. (1984) is correct ($`1.02A_V1.46`$), then the disc fraction must be greater than $`k_V0.7`$ in order for the derived distance to be consistent with Smale’s (1998) measurement. However, a disc fraction of $`k_V=0.7`$ is much larger than what is observed: our spectra and the spectra of CCK98 give disc fractions of $`0.3`$ in the $`B`$ and $`R`$ bands. If the disc fraction of $`k_V=0.3`$ is correct, then the visual extinction must be rather small ($`A_V0.2`$ mag) in order to get a distance of $`11`$ kpc. However, we note that the distance derived by Smale (1998) depends on the assumed neutron star mass (and other parameters, e.g. Lewin, van Paradijs, & Taam 1993). If one assumes a neutron star mass of $`M_x=1.78M_{}`$, the distance would be $`d=11.2\pm 0.3`$ kpc (using standard burst parameters). This distance is marginally consistent with the estimate of Cowley et al. (1979), but still inconsistent with our distance estimate. One needs a rather small neutron star mass ($`M_x0.9M_{}`$) in order to get a distance from the type I radius-expansion burst consistent with our measurement. Thus it is quite difficult to reconcile the differences between our distance estimate and that of Smale (1998).
## 5 Summary
We have collected from the literature $`U`$, $`B`$, and $`V`$ light curves of Cyg X-2. The $`B`$ and $`V`$ light curves show significant periodicities in their power spectra. The most significant periodicities in the $`B`$ and $`V`$ light curves correspond to half of the orbital period of $`P=9.8444`$ days. The “quiescent” light curves derived from the lower envelopes of the folded $`B`$ and $`V`$ light curves are ellipsoidal. We fit ellipsoidal models to the “quiescent” light curves; from the best-fitting model we derive a $`1\sigma `$ inclination range of $`i=62.5^{\circ }\pm 4^{\circ }`$, and a lower limit on the inclination of $`i>49^{\circ }`$. The mass of the neutron star is $`M_x=1.78\pm 0.23M_{\odot }`$, where we have used previous determinations of the mass ratio ($`q=M_c/M_x=0.34\pm 0.04`$) and the optical mass function ($`f(M)=0.69\pm 0.03M_{\odot }`$), and the $`1\sigma `$ inclination range of $`i=62.5^{\circ }\pm 4^{\circ }`$. We find a distance of $`d=7.2\pm 1.1`$ kpc, which is significantly smaller than a recent distance determination of $`d=11.2\pm 0.3`$ kpc derived from an observation of a type I radius-expansion X-ray burst (assuming $`M_x=1.78M_{\odot }`$), but consistent with earlier estimates.
## Acknowledgments
This research has made use of the Simbad database, operated at CDS, Strasbourg, France. We thank Tariq Shahbaz, Jan van Paradijs, Phil Charles, and Lex Kaper for various useful discussions and Jorge Casares for making his measurements of the disc fraction available to us.
## 1 Introduction
The traditional Bekenstein-Hawking entropy of black hole, which is known to be proportional to the area $`A`$ of the horizon, is believed to be appropriate to all kinds of black holes, including the extreme black hole(EBH). However, recently, based upon the study of topological properties, it has been argued that the Bekenstein-Hawking formula of the entropy is not valid for the EBH. The entropy of four-dimensional(4D) extreme Reissner-Nordstrom(RN) black hole is zero regardless of its nonvanishing horizon area. Further study of the topology \[2-4\] shew that the Euler characteristic of such kind of EBH is zero, profoundly different from that of the nonextreme black hole (NEBH). From the relationship between the topology and the entropy obtained in ref., we see that this extreme topology naturally results in the zero entropy.
But these results meet some challenges. By means of the recalculation of the proper distance between the horizon and any fixed point, Zaslavskii argued that a 4D RN black hole in a finite size cavity can approach the extreme state as closely as one likes but its entropy as well as its temperature on the cavity are not zero. Entropy is still proportional to the area. Zaslavskii’s result has also been supported by string theorists’ results obtained by counting string states . The geometry of the EBH obtained in the approach of Zaslavskii has been studied in and it was claimed that its topology is of the nonextreme sector. Meanwhile the string theorists’ results were interpreted by summing over the topology , however this viewpoint has been refuted by Zaslavskii . These different results indicate that there is a clash in the understanding of the topology as well as the intrinsic thermodynamical properties of EBHs. Comparing and , this clash seems to come from two different treatments: one refers to Hawking’s treatment by starting with the original EBH and the other to Zaslavskii’s treatment by first taking the boundary limit and then the extreme limit to get the EBH from its nonextreme counterpart . Recently we have studied the geometry and intrinsic thermodynamics of the extreme Kerr black hole by using these two treatments and found that these different treatments approach two different topological objects and lead to drastically different intrinsic thermodynamical properties . Of course, all these results obtained are limited to the classical treatment of 4D black holes.
The motivation of the present paper is to extend 4D classical studies to two-dimensional (2D) black holes. We hope that the mathematical simplicity in 2D black holes can help us to understand the problem clearer and deeper. The results on 2D charged dilaton black hole topology and thermodynamics were only announced and briefly summarized in . To make the study general, we will study two kinds of 2D black holes, the 2D charged dilaton black hole as well as the 2D Lowe-Strominger black hole , by using two treatments mentioned above in detail. We will prove that these two treatments result in two different thermal results: Bekenstein-Hawking entropy and zero entropy for EBHs. Besides we will investigate the geometry and topology of 2D EBHs in detail and directly relate two treatments to topological properties of EBHs. We will clearly exhibit Euler characteristic values for 2D EBHs, especially for EBHs obtained from the nonextreme counterpart by Zaslavskii’s treatment. Different Euler characteristics are directly derived from different treatments, rather than by introducing some other conditions, such as the inner boundary condition as done in 4D cases .
The other objective of the present paper is to study this problem quantum mechanically. As early pointed out by t’Hooft, the fields propagating in the region just outside the horizon give the main contribution to the black hole entropy. Many methods, for example, the brick wall model, Pauli-Villars regular theory etc., have been suggested to study the quantum effects of entropy under WKB approximation or one-loop approximation. Suppose the black hole is enveloped by a scalar field, and the whole system, the hole and the scalar field, are filling in a cavity. Adopt the viewpoint that the entropy arises from entanglement\[21-23\], it is of interest to study the quantum effects of these two different treatments on the entropy of the scalar field on the EBHs’ backgrounds under WKB approximation, in particular, to investigate whether these two different treatments will offer two different values of entropy. In Sec.IV we will prove that the entropy of the scalar field depends on two different treatments as well. Some physical understanding concerning these results will also be given.
The organization of the paper is as the following: In Sec.II, the classical entropy of two kinds of 2D EBHs are derived by using two different treatments. And in Sec.III, the geometry and topology of these EBHs are investigated. The Euler characteristics are clearly exhibited. Sec.IV is devoted to the discussion of the entropy of the scalar field on the EBH background. The conclusions and discussions will be presented in the last section.
## 2 Classical entropy
We first study the 2D charged dilaton black hole(CDBH). The action is
$$I=\int _M\sqrt{g}\,e^{-2\varphi }\left[R+4(\nabla \varphi )^2+\lambda ^2-\frac{1}{2}F^2\right]-2\int _{\partial M}e^{-2\varphi }K$$
(1)
which has a black hole solution with the metric
$`\mathrm{d}s^2=-g(r)\mathrm{d}t^2+g^{-1}(r)\mathrm{d}r^2`$ (2)
$`g(r)=1-2me^{-\lambda r}+q^2e^{-2\lambda r}`$ (3)
$`e^{-2\varphi }=e^{-2\varphi _0}e^{\lambda r},A_0=\sqrt{2}qe^{-\lambda r}`$ (4)
where $`m`$ and $`q`$ are the mass and electric charge of the black hole respectively. The horizons are located at $`r_\pm =(1/\lambda )\mathrm{ln}(m\pm \sqrt{m^2-q^2})`$.
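As a quick consistency check (ours, not part of the original paper), the horizon locations can be verified symbolically in terms of $`u=e^{-\lambda r}`$:

```python
import sympy as sp

u, m, q = sp.symbols('u m q', positive=True)   # u = exp(-lambda*r)
g = 1 - 2*m*u + q**2*u**2                      # metric function of Eq. (3)

for sign in (+1, -1):
    u_h = 1 / (m + sign*sp.sqrt(m**2 - q**2))  # u at r = r_+ (sign=+1) and r_- (sign=-1)
    print(sp.simplify(g.subs(u, u_h)))         # both print 0
```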
Using the finite-space formulation of black hole thermodynamics, employing the grand canonical emsemble and putting the black hole into a cavity as usual, we calculate the free energy and entropy of the CDBH. To simplify our calculations, we introduce a coordinate transformation
$$r=\frac{1}{\lambda }\mathrm{ln}\left[m+\frac{1}{2}e^{\lambda (\rho +\rho _0^{})}+\frac{m^2-q^2}{2}e^{-\lambda (\rho +\rho _0^{})}\right]$$
(5)
where $`\rho _0^{}`$ is an integral constant, and rewrite Eq.(2) to a particular gauge
$$\mathrm{d}s^2=g_{00}(\rho )\mathrm{d}t^2+\mathrm{d}\rho ^2$$
(6)
The Euclidean action takes the form
$$I=_M\sqrt{\frac{1}{g_{11}}}e^{2\varphi }(\frac{1}{2}\frac{_1g_{00}}{g_{00}}2_1\varphi )$$
(7)
The dilaton charge is found to be
$`D=e^{-2\varphi _0}(m+{\displaystyle \frac{1}{2}}e^x+{\displaystyle \frac{m^2-q^2}{2}}e^{-x})`$ (8)
$`x=\lambda (\rho +\rho _0^{})`$ (9)
The free energy, $`F=I/\beta `$, where $`\beta `$ is the proper periodicity of Euclideanized time at a fixed value of the special coordinate and has the form $`\beta =1/T_w=\sqrt{g_{00}}/T_c`$, $`T_c`$ is the inverse periodicity of the Euclidean time at the horizon
$$T_c=\frac{\lambda \sqrt{m^2-q^2}}{2\pi (m+\sqrt{m^2-q^2})}$$
(10)
Using the formula of entropy
$$S=-\left(\frac{\partial F}{\partial T_w}\right)_D$$
(11)
we obtain
$$S=\frac{2\pi e^{-2\varphi _0}\left[m+\frac{e^x}{2}+\frac{(m^2-q^2)e^{-x}}{2}\right]\left[1+(m^2-q^2)e^{-2x}\right]\sqrt{m^2-q^2}\,(m+\sqrt{m^2-q^2})}{(m^2-q^2)+m\left[\frac{e^x}{2}+\frac{(m^2-q^2)e^{-x}}{2}\right]}$$
(12)
Taking the boundary limit $`x\to x_+=\lambda (\rho _++\rho _0^{})=\mathrm{ln}\sqrt{m^2-q^2}`$ in Eq.(12) to get the entropy of the hole, we find
$$S=4\pi e^{-2\varphi _0}(m+\sqrt{m^2-q^2})$$
(13)
This is just the result given by Nappi and Pasquinucci for the non-extreme CDBH, which confirms that our treatment above is right.
We are now in a position to extend the above calculations to the EBH. We are facing two limits, namely, the boundary limit $`x\to x_+`$ and the extreme limit $`q\to m`$. We can take the limits in different orders: (A) by first taking the boundary limit $`x\to x_+`$, and then the extreme limit $`q\to m`$ as the treatment adopted in ; and (B) by first taking the extreme limit $`q\to m`$ and then the boundary limit $`x\to x_+`$, which corresponds to the treatment of Hawking et al. by starting with the original EBH. To carry out the limit procedures mathematically, we may take $`x=x_++ϵ,ϵ\to 0^+`$ and $`m=q+\eta ,\eta \to 0^+`$, where $`ϵ`$ and $`\eta `$ are infinitesimal quantities with different orders of magnitude, and substitute them into Eq.(12). It can easily be shown that in treatment (A)
$$S_{CL}(A)=4\pi me^{-2\varphi _0}$$
(14)
which is just the Bekenstein-Hawking entropy. However, in treatment (B),
$$S_{CL}(B)=0$$
(15)
which is just the result given by refs..
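The order dependence can also be seen numerically from Eq. (12); the following sketch (ours) evaluates it for $`m=1`$ and $`e^{-2\varphi _0}=1`$ with the two orderings of the small parameters $`ϵ`$ and $`\eta `$:

```python
import numpy as np

def S(x, m, q, e2phi0=1.0):
    """Entropy of Eq. (12), with e^{-2 phi_0} = e2phi0."""
    s2 = m**2 - q**2
    num = (2*np.pi*e2phi0 * (m + np.exp(x)/2 + s2*np.exp(-x)/2)
           * (1 + s2*np.exp(-2*x)) * np.sqrt(s2) * (m + np.sqrt(s2)))
    den = s2 + m*(np.exp(x)/2 + s2*np.exp(-x)/2)
    return num / den

m = 1.0
# Treatment (A): boundary limit first (x -> x_+), then q -> m.
eta, eps = 1e-4, 1e-12
q = m - eta
x_plus = np.log(np.sqrt(m**2 - q**2))
print("A:", S(x_plus + eps, m, q))   # -> 4*pi*(m+sqrt(m^2-q^2)), i.e. 4*pi*m as eta -> 0
# Treatment (B): extreme limit first (q -> m), x kept finite.
print("B:", S(0.0, m, m - 1e-12))    # -> 0
```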
These peculiar results can also be found in the 2D Lowe-Strominger black hole. This 2D black hole is obtained in by introducing gauge fields through the dimensional compactification of the three-dimensional string effective action. The 2D action in this case has the form
$$I=\int _M\sqrt{g}\,e^{-2\varphi }\left[R+2\lambda ^2-\frac{1}{4}e^{4\varphi }F^2\right]-2\int _{\partial M}e^{-2\varphi }K$$
(16)
where $`\varphi `$ is a scalar field coming from the compactification and plays the role of dilaton for the 2D action. This action possesses the black hole solution
$`\mathrm{d}s^2`$ $`=`$ $`-(\lambda ^2r^2-m+{\displaystyle \frac{J^2}{4r^2}})\mathrm{d}t^2+{\displaystyle \frac{1}{\lambda ^2r^2-m+{\displaystyle \frac{J^2}{4r^2}}}}\mathrm{d}r^2`$ (17)
$`A_0`$ $`=`$ $`{\displaystyle \frac{J}{2r^2}}`$ (18)
$`e^{-2\varphi }`$ $`=`$ $`r`$ (19)
The parameter $`J`$ in this solution gives “charge” to this black hole. The horizons of this black hole locate at
$$r_\pm =\frac{1}{\lambda }\left\{\frac{m}{2}\left[1\pm \left(1-\left(\frac{\lambda J}{m}\right)^2\right)^{1/2}\right]\right\}^{1/2}$$
(20)
where $`r_+`$ is the event horizon and $`r_{-}`$ the inner Cauchy horizon. In the extreme limit $`\lambda J\to m`$, $`r_+`$ and $`r_{-}`$ degenerate.
Using the finite space formulation of black hole thermodynamics, employing the grand canonical ensemble and putting the hole into a cavity as usual, we calculate the free energy and the entropy of the hole. As done in 2D CDBH, we introduce a coordinate transformation to simplify the calculation
$$r^2=\frac{1}{2\lambda ^2}\left(m+\frac{1}{2}e^{2\lambda (\rho +\rho _0)}+\frac{m^2-\lambda ^2J^2}{2}e^{-2\lambda (\rho +\rho _0)}\right)$$
(21)
where $`\rho _0`$ is an integral constant, and rewrite Eq(17) to a particular gauge Eq(6). After transformation, the event horizon locates at
$$\rho _+=\frac{1}{2\lambda }\mathrm{ln}\sqrt{m^2-\lambda ^2J^2}-\rho _0$$
(22)
The Euclidean action can be evaluated as
$$I=_M[n^aF_{ab}A^be^{6\varphi }+2Ke^{2\varphi }]$$
(23)
The free energy can be obtained by the evaluation of (23). Employing the constant shift in the gauge potential $`A_aA_a+constant`$, in the equations of motion to avoid divergence in the gauge potential at the horizon as done in , we have the free energy
$$F=2\lambda D\frac{e^{2x}+(m^2-\lambda ^2J^2)e^{-2x}+2\sqrt{m^2-\lambda ^2J^2}}{e^{2x}-(m^2-\lambda ^2J^2)e^{-2x}}$$
(24)
where $`x=\lambda (\rho +\rho _0)`$, $`D`$ is the dilaton charge given by
$$D=\left[\frac{1}{2\lambda ^2}\left(m+\frac{1}{2}e^{2x}+\frac{m^2-\lambda ^2J^2}{2}e^{-2x}\right)\right]^{1/2}$$
(25)
Using Eq.(11), where $`T_w`$ is the inverse periodicity of the Euclideanized time at a fixed value of the special coordinate. $`T_c`$ is the inverse periodicity at the horizon, reads
$$T_c=\frac{\sqrt{2}\lambda \sqrt{m^2-\lambda ^2J^2}}{2\pi (m+\sqrt{m^2-\lambda ^2J^2})^{1/2}}$$
(26)
We find the entropy as
$`S`$ $`=`$ $`{\displaystyle \frac{4\pi \sqrt{m^2-\lambda ^2J^2}\left(m+{\displaystyle \frac{e^{2x}}{2}}+{\displaystyle \frac{m^2-\lambda ^2J^2}{2}}e^{-2x}\right)e^{-2x}\left({\displaystyle \frac{e^{2x}}{2}}+{\displaystyle \frac{m^2-\lambda ^2J^2}{2}}e^{-2x}+\sqrt{m^2-\lambda ^2J^2}\right)}{\sqrt{2}\lambda \left[\left(m+{\displaystyle \frac{e^{2x}}{2}}+{\displaystyle \frac{m^2-\lambda ^2J^2}{2}}e^{-2x}\right)^2-\lambda ^2J^2\right]}}`$
$`\times (m+\sqrt{m^2-\lambda ^2J^2})^{1/2}`$
Taking the boundary limit
$$x\to x_+=\lambda (\rho _++\rho _0)=\frac{1}{2}\mathrm{ln}\sqrt{m^2-\lambda ^2J^2}$$
in Eq(27), one has
$$S=\frac{4\pi }{\sqrt{2}\lambda }(m+\sqrt{m^2-\lambda ^2J^2})^{1/2}$$
(28)
This is just the result given in for the NEBH.
As in the 2D CDBH we are facing two limits to extend the above calculation to the EBH, namely, the boundary limit $`x\to x_+`$ and the extreme limit $`\lambda J\to m`$. And again there are two treatments: (A) Zaslavskii’s treatment by first taking the boundary limit $`x\to x_+`$, and then the extreme limit $`\lambda J\to m`$; and (B) Hawking et al.’s treatment by first taking the extreme limit $`\lambda J\to m`$ and then the boundary limit $`x\to x_+`$. It is easy to find for treatment (A)
$$S_{CL}(A)=\frac{2\sqrt{2}\pi }{\lambda }m^{1/2}$$
(29)
This is just the Bekenstein-Hawking entropy. It depends on the mass $`m`$ only.
However, in treatment (B), we find
$$S_{CL}(B)=0$$
(30)
This is just the result given by refs.
Therefore, through direct thermodynamical calculations for two kinds of extreme 2D black holes, we come to a conclusion that the different results of entropy in fact come from two different treatments.
Now we are facing a puzzle. In statistical physics and thermodynamics, entropy is a function of state only, and does not depend on the history or the process by which the system arrives at the equilibrium state, nor on the mathematical treatment used. But here, using two different orders of limits to arrive at the final state, namely, the EBH state satisfying the same extreme condition, we get two different values of the entropy. The puzzle is similar to that in the 4D RN cases and needs further discussion.
## 3 Geometry and topology
In this section, we study the relation between the geometrical properties and two different treatments of taking different limits in detail.
Consider the Euclidean metric of 2D CDBH
$$\mathrm{d}s^2=g(r)\mathrm{d}\tau ^2+g(r)^{-1}\mathrm{d}r^2$$
(31)
where $`g(r)`$ has the form of Eq.(3). Taking the new variable $`\tau _1=2\pi T_c\tau `$ where $`0\tau _12\pi `$, then Eq(31) becomes
$$\mathrm{d}s^2=(\beta /2\pi )^2\mathrm{d}\tau _1^2+\mathrm{d}l^2$$
(32)
$`\beta `$ is the inverse local temperature and $`l`$ is the proper distance. The equilibrium condition of the spacetime reads
$$\beta =\beta _0[g(r_B)]^{1/2},\qquad 1/\beta _0=T_c=g^{\prime }(r_+)/4\pi $$
(33)
Let us choose the coordinate according to
$$r-r_+=4\pi T_cb^{-1}\mathrm{sinh}^2(x/2),\qquad b=g^{\prime \prime }(r_+)/2$$
(34)
For the treatment (A), taking the limit $`r_+\to r_B`$ first, where the hole tends to occupy the entire cavity, the region $`r_+\leq r\leq r_B`$ shrinks and we can expand $`g(r)`$ in a power series $`g(r)=4\pi T_c(r-r_+)+b(r-r_+)^2+\mathrm{\dots }`$ near $`r=r_+`$. After substitution into Eqs.(32,33), and taking the extreme limit $`r_+=r_{-}=r_B`$, $`b=\lambda ^2`$, Eq(31) can be expressed as
$$\mathrm{d}s^2(A)=\lambda ^{-2}(\mathrm{sinh}^2x\mathrm{d}\tau _1^2+\mathrm{d}x^2)$$
(35)
This is just the 2D counterpart of the Bertotti-Robinson (BR) spacetime . However, for the treatment (B), we start from the original extreme 2D CDBH, $`g(r)=(1-me^{-\lambda r})^2`$. Introducing the variable $`r-r_+=r_B\rho ^{-1}`$ and expanding the metric coefficients near $`r=r_+`$, we obtain
$$\mathrm{d}s^2(B)=\rho ^{-2}(\lambda ^2r_B^2\mathrm{d}\tau ^2+\mathrm{d}\rho ^2/\lambda ^2)$$
(36)
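For completeness, the algebra behind Eq. (35) (our own filling-in, using the expansion of $`g`$ and the substitution (34)) is

$$g\simeq \frac{[g^{\prime }(r_+)]^2}{4b}\mathrm{sinh}^2x,\qquad \mathrm{d}l^2=\frac{\mathrm{d}r^2}{g}=\frac{\mathrm{d}x^2}{b},\qquad \left(\frac{\beta }{2\pi }\right)^2=\frac{\beta _0^2\,g}{4\pi ^2}=\frac{\mathrm{sinh}^2x}{b},$$

so that $`\mathrm{d}s^2=b^{-1}(\mathrm{sinh}^2x\,\mathrm{d}\tau _1^2+\mathrm{d}x^2)`$, which reproduces Eq. (35) once $`b=\lambda ^2`$ (and Eq. (45) below once $`b=4\lambda ^2`$ in the Lowe-Strominger case).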
Now we are in a position to discuss the properties of Eqs(35,36). The horizons of the EBH got in the treatment (A) is determined by
$$g=\frac{1}{4}g^{\prime }(r_+)b^{-1}\mathrm{sinh}^2x=0$$
(37)
So the horizon locates at finite $`x`$, say $`x=0`$. The proper distance between the horizon and any other point is finite. However for the original extreme 2D CDBH (36), the horizon is detected by
$$g=\lambda ^2r_B^2\rho ^{-2}=0$$
(38)
therefore, the horizon is at $`\rho =\mathrm{\infty }`$. The distance between the horizon and any other $`\rho <\mathrm{\infty }`$ is infinite. It is this difference that gives rise to the qualitatively different topological features of the EBHs.
To exhibit these different topological features, we calculate the Euler characteristic of these two EBHs directly. The formula for the calculation of the Euler characteristic in 2D cases is
$$\chi =\frac{1}{2\pi }\int _MR_{1212}\,e^1\wedge e^2$$
(39)
For the nonextreme 2D CDBH, adopting its metric, and substracting the asymptotically flat space’s influence , we arrive at
$$\chi =\frac{1}{2\pi }\beta _0[m\lambda e^{-\lambda r}-q^2\lambda e^{-2\lambda r}]_{r_+}=1$$
(40)
This result is in accordance with that of the multi-black-holes obtained in .
It is easy to extend the calculation of $`\chi `$ to the cases of EBHs. For the EBH developed from the treatment (A), the Euler characteristic can be directly got by taking $`r_+=r_B`$ first and $`m=q`$ afterwards,
$`\chi (A)`$ $`=`$ $`{\displaystyle \frac{1}{2\pi }}\beta _0[m\lambda e^{-\lambda r}-q^2\lambda e^{-2\lambda r}]_{r_+=r_B}|_{extr}`$ (41)
$`=`$ $`{\displaystyle \frac{2\pi (m+\sqrt{m^2-q^2})\lambda \sqrt{m^2-q^2}}{2\pi \lambda \sqrt{m^2-q^2}(m+\sqrt{m^2-q^2})}}|_{extr}=1`$
The same as that of the NEBH. However for the original extreme 2D CDBH, using the limit procedure (B) and from Eq.(40), we have
$$\chi (B)=\frac{1}{2\pi }\beta _0[m\lambda e^{-\lambda r}-m^2\lambda e^{-2\lambda r}]_{r_+}$$
(42)
The horizon for the original EBH is $`r_+={\displaystyle \frac{1}{\lambda }}\mathrm{ln}m`$, so
$$\chi (B)=0$$
(43)
It is quite different from that of the NEBH.
In order to obtain the property in general, we proceed in our discussion to the 2D Lowe-Strominger black hole. The Euclidean metric has the same form as Eq.(31), but now
$$g(r)=-m+\lambda ^2r^2+\frac{J^2}{4r^2}=\frac{\lambda ^2}{r^2}(r^2-r_+^2)(r^2-r_{-}^2)$$
(44)
Using the same treatment (A) as that of 2D CDBH, namely, taking the boundary condition first and then the extreme limit, we find that the metric can be expressed as
$$\mathrm{d}s^2(A)=(4\lambda ^2)^{-1}(\mathrm{d}\tau _1^2\mathrm{sinh}^2x+\mathrm{d}x^2)$$
(45)
It is a 2D counterpart of the BR spacetime again. While starting from the original EBH, $`g(r)={\displaystyle \frac{\lambda ^2}{r^2}}(r^2-r_+^2)^2`$, as for the original extreme 2D CDBH, the metric can be written as
$$\mathrm{d}s^2(B)=\rho ^{-2}(4\lambda ^2r_B^2\mathrm{d}\tau ^2+\frac{1}{4\lambda ^2}\mathrm{d}\rho ^2)$$
(46)
The location of the horizon can be found directly for these two expressions of metric. For Eq.(45),
from $`g={\displaystyle \frac{1}{4}}g^{\prime }(r_+)b^{-1}\mathrm{sinh}^2x=0`$, the horizon is at finite $`x`$, say $`x=0`$. While for Eq.(46), the horizon is determined by $`g=4\lambda ^2r_B^2\rho ^{-2}=0`$, so it is at $`\rho =\mathrm{\infty }`$. The proper distances between a horizon and any other point are finite and infinite for Eqs.(45,46) respectively.
Applying Eq(39), we can also get the results for the Euler characteristic for these two kinds of 2D Lowe-Strominger black holes. Substituting the metric in Eq(39), the Euler characteristic for the NEBH reads
$$\chi =\frac{\beta _0}{2\pi }[\lambda ^2r-\frac{J^2}{4r^3}]_{r_+}=1$$
(47)
where we have used Eq(17) and (20). The Euler characteristic for the EBHs are obvious. For the EBH got in the treatment (A), we obtain
$$\chi (A)=\frac{\beta _0}{2\pi }[\lambda ^2r-\frac{J^2}{4r^3}]_{r_+=r_B}|_{extr}=1$$
(48)
However, in treatment (B), we find
$$\chi (B)=\frac{\beta _0}{2\pi \lambda ^2}[\lambda ^4r-\frac{m^2}{4r^3}]_{r_+}$$
(49)
Considering $`r_+^2={\displaystyle \frac{m}{2\lambda ^2}}`$ for the original EBH, we finally get
$$\chi (B)=0$$
(50)
These results clearly show that the different treatments result in different geometrical and topological properties. The direct relation to the priority of taking limits, rather than introducing additional condition, makes the outcomes more concise and explicit. Different topological results obtained here make us easier to accept the different classical entropy derived for 2D EBHs. We find that in addition to 4D cases claimed in , in the 2D cases, the topology and the EBHs’ classical entropy are closely related.
## 4 Quantum entropy
An early suggestion by ’t Hooft was that the fields propagating in the region just outside the horizon give the main contribution to the black hole entropy. The entropy of the black hole system arises from entanglement\[21-23\]. Many methods, for example, the brick wall model, Pauli-Villars regular theory,etc., have been suggested to calculate the quantum effects of entropy in WKB approximation or in the one-loop approximation. Let’s first study the 2D CDBH and suppose the CDBH is enveloped by a scalar field, and the whole system, the hole and the scalar field, are filling in a cavity. The wave equation of the scalar field is
$$\frac{1}{\sqrt{g}}\partial _\mu (\sqrt{g}g^{\mu \nu }\partial _\nu \varphi )-M^2\varphi =0$$
(51)
Substituting the metric Eq.(2) into Eq.(51), we find
$$E^2(1-2me^{-\lambda r}+q^2e^{-2\lambda r})^{-1}f+\frac{\partial }{\partial r}\left[(1-2me^{-\lambda r}+q^2e^{-2\lambda r})\frac{\partial f}{\partial r}\right]-M^2f=0$$
(52)
Introducing the brick wall boundary condition
$`\varphi (x)=0\mathrm{at}r=r_++ϵ`$
$`\varphi (x)=0\mathrm{at}r=L`$
and calculating the wave number $`K(r,E)`$ and the free energy $`F`$, we get
$$K^2(r,E)=(1-2me^{-\lambda r}+q^2e^{-2\lambda r})^{-1}[(1-2me^{-\lambda r}+q^2e^{-2\lambda r})^{-1}E^2-M^2]$$
(53)
$$F_{QM}=\frac{\pi }{6\beta ^2\lambda }\left[\frac{1}{2}\mathrm{ln}(R^2-2mR+q^2)+\frac{m}{2\sqrt{m^2-q^2}}\mathrm{ln}\frac{R-m-\sqrt{m^2-q^2}}{R-m+\sqrt{m^2-q^2}}\right]$$
(54)
where $`R=e^{\lambda (r_++ϵ)}`$, and $`ϵ0`$ is the coordinate cutoff parameter. To extend the above discussion to EBH, we are facing two limits $`ϵ0`$ and $`qm`$ again. It can be proved that Eq(54) depends on the order of taking these two limits. We find for treatment (A) which we first take boundary limit $`ϵ0`$ and then the extreme limit $`qm`$
$$F_{QM}(A)=\frac{\pi }{6\beta ^2\lambda }\left(\mathrm{ln}\frac{1}{m\lambda ϵ}+\frac{m}{2\sqrt{m^2-q^2}}\mathrm{ln}\frac{2\sqrt{m^2-q^2}}{\lambda ϵ(m+\sqrt{m^2-q^2})}\right)$$
(55)
We just leave the term $`\sqrt{m^2q^2}`$ in the second term of Eq.(55) for the moment for the following discussions.
But for treatment (B) by first adopt the extreme limit and then the boundary limit
$$F_{QM}(B)=\frac{\pi }{6\beta ^2\lambda }(\frac{m}{m\lambda ϵ}+\mathrm{ln}\frac{1}{m\lambda ϵ})$$
(56)
Similar to the classical case, different expressions for the free energy appear here due to the different order of taking the limits. Through the entropy formula $`S=\beta ^2(\partial F/\partial \beta )`$, we obtain
$`S_{QM}(A)={\displaystyle \frac{\pi }{3\beta \lambda }}(\mathrm{ln}{\displaystyle \frac{1}{m\lambda ϵ}}+{\displaystyle \frac{m}{2\sqrt{m^2-q^2}}}\mathrm{ln}{\displaystyle \frac{2\sqrt{m^2-q^2}}{\lambda ϵ(m+\sqrt{m^2-q^2})}})`$ (57)
$`S_{QM}(B)={\displaystyle \frac{\pi }{3\beta \lambda }}({\displaystyle \frac{m}{m\lambda ϵ}}+\mathrm{ln}{\displaystyle \frac{1}{m\lambda ϵ}})`$ (58)
We conclude that the entropy on the black hole background also depends on two different treatments.
Now we turn to study 2D Lowe-Strominger model. We suppose the 2D black hole is enveloped by a scalar field, and the whole system are filling in a cavity. Substituting the metric Eq(17) into Eq(51), we get the radial equation as
$$E^2(-m+\lambda ^2r^2+\frac{J^2}{4r^2})^{-1}f+\frac{\partial }{\partial r}\left[(-m+\lambda ^2r^2+\frac{J^2}{4r^2})\frac{\partial f}{\partial r}\right]-M^2f=0$$
(59)
The wave number is
$$K^2=(-m+\lambda ^2r^2+\frac{J^2}{4r^2})^{-1}[(-m+\lambda ^2r^2+\frac{J^2}{4r^2})^{-1}E^2-M^2]$$
(60)
and the semiclassical quantization condition is
$$n\pi =\int _{r_++ϵ}^LdrK(r,E)$$
(61)
The free energy satisfies
$`\beta F`$ $`=`$ $`{\displaystyle \underset{n}{\sum }}\mathrm{log}(1-e^{-\beta E})`$
$`=`$ $`{\displaystyle \int dn\,\mathrm{log}(1-e^{-\beta E})}`$
$`=`$ $`-{\displaystyle \frac{\beta }{\pi }}{\displaystyle \int dE(e^{\beta E}-1)^{-1}f(r)}`$
where
$$f(r)=\int _{r_++ϵ}^Ldr(-m+\lambda ^2r^2+\frac{J^2}{4r^2})^{-1}\sqrt{E^2-M^2(-m+\lambda ^2r^2+\frac{J^2}{4r^2})}$$
(63)
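The $`\pi /6\beta ^2`$ prefactors in Eqs. (66) and (67) below originate from the standard thermal integral, which we spell out here for clarity:

$$\int _0^{\mathrm{\infty }}\frac{E\,\mathrm{d}E}{e^{\beta E}-1}=\frac{\pi ^2}{6\beta ^2}.$$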
Expanding to the powers of $`M`$ in the limit $`ϵ0`$, the leading contribution can be obtained as
$$f(r)_{r=r_++ϵ}=\frac{1}{\lambda ^2}\left(\frac{A}{2r_+}\mathrm{ln}\frac{ϵ}{2r_++ϵ}+\frac{B}{2r_{-}}\mathrm{ln}\frac{r_+-r_{-}+ϵ}{r_++r_{-}+ϵ}\right)$$
(64)
where
$$A=\frac{r_+^2}{r_+^2-r_{-}^2},\qquad B=\frac{r_{-}^2}{r_+^2-r_{-}^2}$$
(65)
For treatment (A),
$$F(A)=\frac{\pi }{6\beta ^2}\left(\frac{1}{4r_+\lambda ^2}\mathrm{ln}\frac{2r_+}{ϵ}+\frac{r_{-}}{2\lambda ^2(r_+^2-r_{-}^2)}\mathrm{ln}\frac{2r_+(r_+-r_{-})}{ϵ(r_+-r_{-})}\right)$$
(66)
As in Eq.(55), we leave $`r_+-r_{-}`$ for the moment.
For treatment (B), the free energy is
$$F(B)=\frac{\pi }{6\beta ^2}(\frac{1}{4\lambda ^2ϵ}+\frac{1}{4r_+\lambda ^2}\mathrm{ln}\frac{2r_+}{ϵ})$$
(67)
Using the thermodynamic formula we find
$`S_{QM}(A)={\displaystyle \frac{\pi }{3\beta \lambda ^2}}({\displaystyle \frac{1}{4r_+}}\mathrm{ln}{\displaystyle \frac{2r_+}{ϵ}}+{\displaystyle \frac{r_{-}}{2(r_+^2-r_{-}^2)}}\mathrm{ln}{\displaystyle \frac{2r_+(r_+-r_{-})}{ϵ(r_+-r_{-})}})`$ (68)
$`S_{QM}(B)={\displaystyle \frac{\pi }{3\beta \lambda ^2}}({\displaystyle \frac{1}{4ϵ}}+{\displaystyle \frac{1}{4r_+}}\mathrm{ln}{\displaystyle \frac{2r_+}{ϵ}})`$ (69)
respectively.
We find that two different values of entropies of scalar field on the backgrounds of extreme holes exhibit again. The results of 2D CDBH and Lowe-Strominger black holes tell us that the entropy of the scalar field on the EBHs’ background depend on the limits procedures as well.
## 5 Conclusions and discussions
Through direct thermodynamic calculations, we have shown that, corresponding to two different treatments of the orders of taking the boundary limit and the black hole extreme limit, the classical entropies of the 2D extreme CDBH and Lowe-Strominger black hole have different values, namely, zero and the Bekenstein-Hawking value. We have shown by direct proof that the geometrical and topological properties are also dependent on these two different treatments. Since the extreme conditions are all satisfied for the discussed EBHs, these profoundly different geometrical and topological properties lead us to the impression that there are two kinds of EBHs, classified by different topologies. One is the original EBH as Hawking et al. claimed; due to the peculiar topology of this EBH, it cannot be formed from its nonextreme counterpart and can only be prepared in the early universe. The other EBH, obtained by treatment (A) with the same topology as that of the NEBH, can be developed from its NEBH counterpart. Entropies and topological properties for EBHs studied in 4D cases as well as in the 2D cases of our paper suggest that there is a close relation between the topology and the classical entropy for EBHs. Our results have been supported recently by Pretorius, Vollick and Israel .
Using the brick wall model, we have shown that under WKB approximation the entropy of scalar field on the background of extreme 2D CDBH and Lowe-Strominger black hole have two values given by Eqs(57,58) and (68,69), respectively. These results support our argument from the quantum point of view that the backgrounds of EBHs are different due to different topologies.
Besides the usual ultraviolet divergence $`ϵ0`$, which has been found for both 4D and 2D NEBH cases \[18-23\] and was suggested capable of being overcome by diferent renormalization methods , a new divergent term for Zaslavskii’s treatment emerges in Eqs.(57,68) and is absent in Hawking’s treatment. To understand this difference, let’s go over the arguments of Hawking et al. and Zaslavskii again. Owing to different topological properties, Hawking et al.claimed that their EBH and its NEBH counterpart are completely different objects and their EBH is the original EBH, which can only be prepared at the beginning of the universe. In Hawking’s treatment, since we put $`m=q`$ and $`r_+=r_{}`$ first for 2D CDBH and Lowe-Strominger black hole respectively, and then calculate their entropies by using usual thermodynamical approachs , naturaly their quantum entropy includes ultraviolet divergence only. But for the EBH obtained by using Zaslavskii’s treatment, it can be developed from the nonextreme counterpart by taking the extreme limits, therefore new divergent terms appear here. Keeping in mind that the entropy of scalar fields in the EBH background was derived by WKB approximation, the divergent quantum entropy due to taking extreme limits reflects in fact the divergent quantum fluctuations of the entanglemnt entropy of the whole system including the EBH and the scalar field. In statistical physics we know that infinite fluctuation breaks down the rigious meanings of thermodynamical quantities and is just the characteristic of the point of phase transition. This conclusion is in good agreement with the previous studies about the phase transition of black holes by using Landau-Lifshitz theory \[32-34\]. So a phase transition will happen when a NEBH approaches to the EBH with nonextreme topology obtained by using Zaslavskii treatment. The new divergent terms appears in the quantum entropy here gives the quantum understanding of phase transition and supports previous classical arguments.
B. Wang is grateful to Prof. S. Randjbar-Daemi for inviting him to visit the ICTP (Italy), where part of this work was done. This work was supported by Shanghai Higher Education and the Shanghai Science and Technology Commission.
## 1 Introduction
The duality of $`𝒩=4`$, $`D=4`$ supersymmetric Yang-Mills theory to the type IIB string theory on the near-horizon geometry of the three-brane have led to considerable progress in understanding of its large-$`N`$ limit. Along the same lines, type 0B theory on the background of RR charged three-brane solution was proposed to give a dual description of non-supersymmetric $`D=4`$ gauge theory coupled to adjoint bosonic matter . Perturbatively unstable tachyon of type 0 theory was argued to be stabilized in the presence of the three-brane through interaction with background RR flux.
The low-energy theory on the world volume of $`N`$ coincident type 0B D3-branes is $`U(N)`$ gauge theory with six scalar fields in the adjoint representation. According to , this theory has a dual description in terms of type 0B strings on the background of the three-brane. The classical gravity approximation to the dual picture already involves qualitative features expected from the gauge theory, such as logarithmic dependence of the coupling on a scale with UV fixed point at zero coupling . This duality also predicts IR fixed point at infinity .
The gauge theory considered in has the same tree-level bosonic action as $`𝒩=4`$ SYM theory. So, the scalar potentials in both theories have the same flat directions. These flat directions correspond to transverse coordinates of D3-branes. But, unlike in $`𝒩=4`$ theory, in the non-supersymmetric case the flat directions are not protected from being lifted by quantum corrections, which reflects the fact that parallel D3-branes of type 0 strings interact with one another, while type II D-branes are BPS states and they can be moved apart at no energy cost. Qualitative arguments based on the string calculation of the interaction potential suggest that type 0 branes attract at large distances .
We study the interaction between type 0 D3-branes computing the one-loop effective potential in the world-volume field theory. Similar calculations for $`𝒩=4`$ SYM theory with the supersymmetry broken by the finite temperature were done in . We find that the potential has a maximum at zero separation between branes (at zero expectation values of scalar fields) and gains a minimum at finite separation due to Coleman-Weinberg mechanism .
## 2 Interaction potential
The tree-level action of the low-energy theory on the world volume of $`N`$ parallel D3-branes of the type 0 string theory is
$$S=\frac{1}{g_{\mathrm{YM}}^2}\int d^4x\,\mathrm{tr}\left\{\frac{1}{2}F_{\mu \nu }^2+\left(D_\mu \mathrm{\Phi }^i\right)^2-\frac{1}{2}[\mathrm{\Phi }^i,\mathrm{\Phi }^j]^2\right\}.$$
(2.1)
We consider the theory in the Euclidean space from the very beginning. Strictly speaking, the field theory with the action (2.1) is not renormalizable and requires counterterms quadratic and quartic in the scalar fields. In what follows we imply that all necessary counterterms are added to the action.
The scalar potential in (2.1) has a degenerate set of minima:
$$\mathrm{\Phi }_{\mathrm{cl}}^i=\mathrm{diag}(y_a^i),\qquad a=1,\mathrm{\dots },N.$$
(2.2)
The coordinates $`y_a^i`$, $`i=1,\mathrm{},6`$ describe positions of $`N`$ parallel static three-branes in nine-dimensional space. Since the potential in (2.1) does not depend on $`y_a^i`$, D-branes do not interact at the classical level.
However, one-loop corrections induce interaction between branes via the Coleman-Weinberg mechanism. We calculate the interaction potential expanding scalar fields around the classical background (2.2):
$$\mathrm{\Phi }^i=\mathrm{\Phi }_{\mathrm{cl}}^i+\varphi ^i$$
(2.3)
and integrating out quantum fluctuations. To integrate over the gauge fields, we add to the action the gauge fixing term:
$$S_{\mathrm{gf}}=\frac{1}{g_{\mathrm{YM}}^2\alpha }\int d^4x\,\mathrm{tr}\left(\partial _\mu A_\mu -\alpha [\mathrm{\Phi }_{\mathrm{cl}}^i,\mathrm{\Phi }^i]\right)^2,$$
(2.4)
where $`\alpha `$ is a gauge fixing parameter. The action for ghosts in the chosen gauge is
$$S_{\mathrm{gh}}=\frac{1}{g_{\mathrm{YM}}^2}\int d^4x\,\mathrm{tr}\left(\partial _\mu \overline{c}D_\mu c-\alpha [\mathrm{\Phi }_{\mathrm{cl}}^i,\overline{c}][\mathrm{\Phi }^i,c]\right).$$
(2.5)
Expanding the action to the second order in fluctuations and integrating them out we get the one-loop effective potential:
$`\mathrm{\Gamma }`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{Tr}\mathrm{ln}\left[\left(-\partial ^2+Y^2\right)\delta _{\mu \nu }+\left(1-{\displaystyle \frac{1}{\alpha }}\right)\partial _\mu \partial _\nu \right]-\mathrm{Tr}\mathrm{ln}\left(-\partial ^2+\alpha Y^2\right)`$ (2.6)
$`+{\displaystyle \frac{1}{2}}\mathrm{Tr}\mathrm{ln}\left[\left(-\partial ^2+Y^2\right)\delta ^{ij}-(1-\alpha )Y^iY^j\right].`$
The first term is the contribution of the gauge fields, the second is that of the ghosts, and the third of the scalars. By $`Y^i`$ we denote the following matrix in the adjoint representation of $`U(N)`$:
$$Y^i=[\mathrm{\Phi }_{\mathrm{cl}}^i,].$$
(2.7)
Taking into account that $`[Y^i,Y^j]=0`$, we find:
$`\mathrm{\Gamma }`$ $`=`$ $`4\mathrm{Tr}\mathrm{ln}\left(-\partial ^2+Y^2\right)=\mathrm{Vol}\cdot 4{\displaystyle \int \frac{d^4p}{(2\pi )^4}\mathrm{tr}\mathrm{ln}\left(p^2+Y^2\right)}`$ (2.8)
$`=`$ $`\text{quadratically divergent term}+\mathrm{Vol}{\displaystyle \frac{1}{8\pi ^2}}\mathrm{tr}Y^4\mathrm{ln}{\displaystyle \frac{Y^2}{M^2}},`$
where $`M`$ is a UV cutoff. The quadratic and the logarithmic divergences in the effective action should be canceled by appropriate counterterms.
The matrix $`Y^2`$ has eigenvalues $`(y_a-y_b)^2`$, so the one-loop corrections induce only two-body interactions of D-branes; the interaction potential is
$$\mathrm{\Gamma }=\mathrm{Vol}\frac{1}{4\pi ^2}\sum _{a<b}|y_a-y_b|^4\mathrm{ln}\frac{|y_a-y_b|^2}{\mathrm{\Lambda }^2},$$
(2.9)
where $`\mathrm{\Lambda }`$ is a non-perturbative mass scale of the world-volume theory.
## 3 Equilibrium configuration of D-branes.
The potential of interaction between two D-branes<sup>*</sup><sup>*</sup>*We use the string units, $`\alpha ^{}=1`$. (fig. 1):
$$V(r)=\frac{1}{4\pi ^2}r^4\mathrm{ln}\frac{r^2}{\mathrm{\Lambda }^2},$$
(3.1)
is such that D-branes repel at short distances. Thus, a stack of D-branes put on top of each other is unstable. The D-branes will tend to separate by distances of order $`\mathrm{\Lambda }`$. In equilibrium, the transverse coordinates of the D-branes satisfy the equations:
$$\sum _b(y_a^i-y_b^i)|y_a-y_b|^2\mathrm{ln}\frac{|y_a-y_b|^2}{\mathrm{\Lambda }^2\mathrm{e}^{-1/2}}=0.$$
(3.2)
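For two branes, eq. (3.2) reduces to $`\mathrm{ln}\left(|y_1-y_2|^2/\mathrm{\Lambda }^2\mathrm{e}^{-1/2}\right)=0`$, i.e. an equilibrium separation $`r=\mathrm{\Lambda }\mathrm{e}^{-1/4}`$. A minimal numerical check of this special case (an illustration added here, not part of the original text; Python, with $`\mathrm{\Lambda }=1`$):

```python
import numpy as np

# Locate the minimum of the two-brane potential V(r) = r^4 ln(r^2/Lambda^2)/(4 pi^2),
# eq. (3.1), with Lambda = 1, and compare with the analytic value Lambda*exp(-1/4).
r = np.linspace(1e-3, 2.0, 200001)
V = r**4 * np.log(r**2) / (4 * np.pi**2)
print("numerical minimum at r   =", r[np.argmin(V)])
print("analytic Lambda e^{-1/4} =", np.exp(-0.25))
```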
We are interested in the case when the number of D-branes, $`N`$, is large. In the $`N\to \infty `$ limit, the D-branes form a continuous, spherically symmetric distribution and equation (3.2) takes the form:
$$y_a^if(|y_a|^2)=0.$$
(3.3)
The function $`f`$, in principle, can have several zeros, but we adopt the rather natural assumption that for the configuration of D-branes of minimal energy, equation (3.3) has only one root, $`|y|^2=R^2`$. So, the D-branes in equilibrium form a spherical shell of radius $`R`$ in the six-dimensional transverse space, with a surface RR charge density $`\rho =N/\pi ^3R^5`$. From eqs. (3.2), (3.3) we find:
$$R=\mathrm{\Lambda }\mathrm{e}^{-\frac{589}{840}},$$
(3.4)
and the interaction energy per unit volume, which shifts the tension of a D-brane, is
$$\mathrm{\Delta }T=-\frac{7N^2\mathrm{\Lambda }^4}{48\pi ^2}\mathrm{e}^{-\frac{589}{210}}.$$
(3.5)
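The numerical factor in (3.4) can be recovered by a simple Monte Carlo average over a uniform distribution of branes on the 5-sphere. The sketch below (added for illustration, not part of the paper) assumes the pair potential (2.9) and the uniform-shell ansatz, and sets $`\mathrm{\Lambda }=1`$:

```python
import numpy as np

# Radial force balance on one brane of a uniform shell gives
#   ln(R^2/Lambda^2) = -1/2 - <(1-c)^2 ln 2(1-c)> / <(1-c)^2>,
# where c is the cosine of the angle between two branes on the shell.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200_000, 6))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # uniform points on S^5

c = np.sum(pts[::2] * pts[1::2], axis=1)            # cos(angle) of random pairs
w = (1.0 - c) ** 2
ratio = np.mean(w * np.log(2.0 * (1.0 - c))) / np.mean(w)

print("Monte Carlo R/Lambda =", np.exp(-0.25 - 0.5 * ratio))
print("eq. (3.4)   R/Lambda =", np.exp(-589.0 / 840.0))      # ~0.496
```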
## 4 Discussion
We have calculated the interaction potential between D3-branes of type 0B string theory at weak coupling in the world-volume field theory. (The definition of the YM coupling in terms of VEVs of the dilaton and the tachyon is discussed in .) As long as the one-loop approximation can be trusted, the field theory predicts repulsion of type 0 D3-branes at short distances and attraction at large ones. As a result, the D3-branes tend to spread over distances determined by a characteristic scale of the low-energy world-volume theory. Such behavior is not expected in the case of the dyonic branes discussed in , because the field theory on their world volume is conformal.
### Acknowledgments
This work was supported by NATO Science Fellowship and, in part, by INTAS grant 96-0524, RFFI grant 97-02-17927 and grant 96-15-96455 of the promotion of scientific schools.
no-problem/9901/astro-ph9901136.html | ar5iv | text
# Resonances and instabilities in symmetric multistep methods
## 1 Introduction
In many physical problems one has to solve second-order differential equations of the type
$$x^{\prime \prime }(t)=f(x,t),$$
(1)
where $`x(t)`$ is the position at time $`t`$ and $`f(x,t)`$ is the force, assumed to be independent of the velocity. Such an equation or set of equations can be solved efficiently by a multistep method
$$\alpha _kx_{n+k}+\mathrm{}+\alpha _0x_n=h^2\left(\beta _kf_{n+k}+\mathrm{}+\beta _0f_n\right),$$
(2)
where $`x_n`$ is the computed position at time step $`n`$ and $`h`$ is the stepsize. A popular class of such methods is the Störmer-Cowell class, for which $`\alpha _k=1`$, $`\alpha _{k-1}=-2`$, $`\alpha _{k-2}=1`$, and $`\alpha _{k-3}=\mathrm{}=\alpha _0=0`$, with the Störmer method being explicit ($`\beta _k=0`$) and the Cowell method implicit ($`\beta _k\ne 0`$). Störmer-Cowell methods have often been used for long-term integrations of planetary orbits (see Quinlan and Tremaine 1990 and references therein). But the Störmer-Cowell methods suffer from a defect, sometimes called an orbital instability, when the stepnumber $`k`$ exceeds 2: if a Störmer-Cowell method with $`k>2`$ is used to integrate a circular orbit, the radius does not remain constant, and the orbit spirals either inwards or outwards (the direction depends on $`k`$). This defect was recognized by Gautschi (1961) and Stiefel and Bettis (1969), who proposed modified multistep methods for orbital integrations. Their methods require a priori knowledge of the frequency of the solution, however, which is usually unknown or, at best, known only approximately.
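For illustration (this sketch is not from the paper), the recurrence (2) for an explicit method ($`\beta _k=0`$, $`\alpha _k=1`$) can be coded directly; the coefficient arrays are to be filled with whichever method is being tested, and the 2-step Störmer method is used below only as a simple example:

```python
import numpy as np

def multistep_integrate(alpha, beta, f, x_init, h, n_steps):
    """Advance x'' = f(x, t) with the explicit k-step method of eq. (2).

    alpha, beta : length k+1 coefficient arrays (alpha[k] = 1, beta[k] = 0);
    x_init      : the k starting positions x_0 ... x_{k-1} from a one-step starter.
    """
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    k = len(alpha) - 1
    x = list(x_init)
    for n in range(n_steps):
        acc = sum(beta[j] * f(x[n + j], (n + j) * h) for j in range(k))
        # alpha[k] = 1, so x_{n+k} follows directly from eq. (2)
        x.append(h * h * acc - sum(alpha[j] * x[n + j] for j in range(k)))
    return np.array(x)

# Example: the 2-step Störmer method (alpha = 1, -2, 1; beta_1 = 1) on x'' = -x.
traj = multistep_integrate([1.0, -2.0, 1.0], [0.0, 1.0, 0.0],
                           lambda x, t: -x, [1.0, np.cos(0.01)], h=0.01, n_steps=5000)
```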
Lambert and Watson (1976) showed that the orbital instability of the Störmer-Cowell methods can be avoided by choosing the coefficients of a multistep method to be symmetric, so that
$$\alpha _i=\alpha _{k-i},\beta _i=\beta _{k-i},i=0,\mathrm{},k.$$
(3)
Lambert and Watson analysed in detail the application of symmetric methods to the linear test equation
$$x^{\prime \prime }(t)=-\omega ^2x(t),$$
(4)
and showed that if $`\omega ^2h^2`$ lies within an interval $`(0,H_0^2)`$, which they called the interval of periodicity, the solution is guaranteed to be periodic. Quinlan and Tremaine (1990) extended the work of Lambert and Watson (1976) to derive high-order explicit symmetric methods suitable for the integration of planetary orbits, and compared these with high-order Störmer methods. The symmetric methods gave energy errors that did not grow with time, and position errors that grew only linearly with time, whereas the Störmer methods gave energy errors that grew linearly with time, and position errors that grew as the time squared.
Soon after Quinlan and Tremaine’s methods were published, however, Alar Toomre discovered a disturbing feature of the methods, an example of which is shown in Figure 1.
Panel (a) shows the maximum error in the energy of a circular Kepler orbit integrated with the 8th-order symmetric method of Quinlan and Tremaine and with the 8th-order Störmer method, plotted versus the stepsize used in the integration. The energy error decreases with the stepsize as $`h^9`$, as expected for an 8th-order method, and at most stepsizes the error from the symmetric method is much smaller than the error from the Störmer method. But there is a startling spike in the energy error from the symmetric method near a stepsize corresponding to 60 steps per orbit, a stepsize that is well within this method’s interval of periodicity. Figure 2 shows the time development of the instability for a circular orbit integrated with 60 steps per orbit.
The energy error grows exponentially for about the first 400 periods until it reaches a maximum value of about 0.25 (the value of the energy error at the instability depends on the formula used to compute the velocities: if the multistep equation is written in summed form (Henrici 1962; Quinlan 1994) and the velocities are computed from the summed accelerations, the maximum error is about 10 times smaller than shown in Figure 2, but apart from this difference the instability remains the same; in this paper the summed form was used only for the planetary integrations described in Section 6). It decreases over the next few hundred periods to a minimum value of about $`10^{-7}`$, and then oscillates between these extremes with a period of roughly 550 orbital periods. The longitude error (not shown in the figure) grows exponentially until it reaches a value of order unity and then stays at that level.
The problems are worse for eccentric orbits, as shown in panel (b) of Figure 1 for an orbit with $`e=0.2`$. The symmetric method is still much better than the Störmer method at most stepsizes, but now there are spikes in the energy error at a number of stepsizes. The spikes that appear at the stepnumbers (the number of steps per orbit) 90, 120, 150, etc. are similar to the spike at 60 steps per orbit: they are instabilities at which the error grows exponentially with time. The smaller spikes at stepnumbers that are multiples of 5 or 6 (54, 55, 65, 66, 70, 72, etc.) are resonances, and not instabilities, since at these stepsizes the energy error grows linearly with time, as shown in panel (b) of Figure 2.
The resonances and instabilities do not occur for the linear test equation (4) that has been used in previous discussions of symmetric multistep methods. The present paper is written to explain their origin, to see if anything can be done to reduce their severity, and to decide if they render the methods unsuitable for problems like the long-term planetary integrations considered by Quinlan and Tremaine (1990).
## 2 Symmetric multistep methods
We start by reviewing symmetric multistep methods and some properties of them that are needed for explaining the resonances and instabilities. These properties are described in more detail by Henrici (1962) and Lambert and Watson (1976). A survey of multistep methods and other methods for integrating second-order differential equations is given by Coleman (1993). Practical techniques for reducing roundoff error in long multistep integrations are described by Quinlan (1994).
We consider $`k`$-step multistep methods of the type (2), where without loss of generality we assume $`\alpha _k=1`$ and $`|\alpha _0|+|\beta _0|>0`$. With the multistep method we associate the linear difference operator
$$L[x(t);h]=\sum _{j=0}^{k}\left[\alpha _jx(t+jh)-h^2\beta _jx^{\prime \prime }(t+jh)\right].$$
(5)
If $`x(t)`$ has continuous derivatives of sufficiently high order then
$$L[x(t);h]=C_0x(t)+C_1x^{}(t)h+\mathrm{}+C_qx^{(q)}(t)h^q+\mathrm{},$$
(6)
where
$$C_q=\frac{1}{q!}(0^q\alpha _0+\mathrm{}+k^q\alpha _k)-\frac{1}{(q-2)!}(0^{q-2}\beta _0+\mathrm{}+k^{q-2}\beta _k)$$
(7)
(the second term on the right is absent if $`q<2`$). The order $`p`$ is the integer for which $`C_0=\mathrm{}=C_{p+1}=0`$, $`C_{p+2}\ne 0`$. A method is said to be consistent if its order is at least 1, i.e., if $`C_0=C_1=C_2=0`$.
To discuss the stability of multistep methods we introduce the polynomials
$`\rho (z)`$ $`=`$ $`\alpha _kz^k+\alpha _{k1}z^{k1}+\mathrm{}+\alpha _0,`$ (8)
$`\sigma (z)`$ $`=`$ $`\beta _kz^k+\beta _{k1}z^{k1}+\mathrm{}+\beta _0.`$ (9)
A method is said to be zero-stable if no root of $`\rho (z)`$ has modulus greater than one and if every root of modulus one has multiplicity not greater than two. A consistent method has $`\rho (1)=\rho ^{\prime }(1)=0`$, so for zero-stability $`\rho (z)`$ must have a double root at $`z=1`$. This is called the principal root; the other roots are called spurious. A method is convergent if and only if it is consistent and zero-stable (convergence means essentially that $`x_n\to x(t)`$ as $`h\to 0`$ with $`nh=t`$). The order of a convergent $`k`$-step method cannot be higher than $`k+2`$.
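The error constants and the order of a given set of coefficients are easy to check numerically; a minimal sketch (added for illustration, not from the paper) that simply evaluates eq. (7):

```python
import math

def error_constants(alpha, beta, q_max=16):
    """Error constants C_q of eq. (7) for a k-step method."""
    C = []
    for q in range(q_max):
        c = sum(j**q * a for j, a in enumerate(alpha)) / math.factorial(q)
        if q >= 2:
            c -= sum(j**(q - 2) * b for j, b in enumerate(beta)) / math.factorial(q - 2)
        C.append(c)
    return C

def order(alpha, beta):
    """Largest p with C_0 = ... = C_{p+1} = 0 and C_{p+2} != 0."""
    C = error_constants(alpha, beta)
    m = 0
    while m < len(C) and abs(C[m]) < 1e-12:
        m += 1              # C_0 ... C_{m-1} vanish
    return m - 2            # the order p satisfies p + 2 = m

# Example: the 2-step Störmer method has order 2.
print(order([1, -2, 1], [0, 1, 0]))
```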
The orbital instability of the Störmer-Cowell methods can be explained by a simple example (Lambert and Watson 1976). Consider equation (4), whose general solution is the periodic function $`x(t)=A\mathrm{cos}(\omega t)+B\mathrm{sin}(\omega t)`$. Applying the multistep method (2) to this equation we obtain the difference equation
$$\sum _{i=0}^{k}(\alpha _i+H^2\beta _i)x_{n+i}=0,\qquad H=h\omega ,$$
(10)
whose general solution is
$$x_n=\sum _{j=1}^{k}A_jZ_j^n,$$
(11)
where the $`Z_j`$ ($`j=`$1, 2, …, $`k`$) are the roots, assumed distinct, of the polynomial
$$\mathrm{\Omega }(Z;H^2)=\rho (Z)+H^2\sigma (Z).$$
(12)
The roots $`Z_j`$ of $`\mathrm{\Omega }`$ are perturbations of the roots $`z_j`$ of $`\rho `$; let $`Z_1`$ and $`Z_2`$ be the perturbations of the principal roots. The problem with the Störmer-Cowell methods is that, if $`k>2`$, the roots $`Z_1`$ and $`Z_2`$ do not lie on the unit circle. No matter how small $`H^2`$ is the orbit grows or shrinks, and the energy error increases linearly with time.
Lambert and Watson (1976) showed that this orbital instability can be avoided if the multistep coefficients are chosen to be symmetric, as in equation (3). A multistep method is said to have an interval of periodicity $`(0,H_0^2)`$ if for all $`0<H<H_0`$ the roots of $`\mathrm{\Omega }(Z;H^2)`$ satisfy
$$|Z_1|=|Z_2|=1,\qquad |Z_j|\le 1\quad (j=3,\mathrm{},k).$$
(13)
Lambert and Watson proved that a convergent multistep method with a non-zero interval of periodicity must be a symmetric method and must have an even order. For a symmetric method the $`\le `$ in (13) can be replaced by $`=`$, and hence inside the interval of periodicity the roots all lie on the unit circle. The solution (11) of equation (4) is then guaranteed to be periodic (or quasi-periodic).
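In practice this condition can be scanned numerically: for a given $`H^2`$, form $`\mathrm{\Omega }(Z;H^2)=\rho (Z)+H^2\sigma (Z)`$ and test whether all of its roots have unit modulus. A minimal sketch (added for illustration, not from the paper; the 2-step Störmer method, whose interval of periodicity is $`(0,4)`$, is used as the example):

```python
import numpy as np

def roots_on_unit_circle(alpha, beta, H2, tol=1e-9):
    """True if all roots of Omega(Z; H^2) = rho(Z) + H^2 sigma(Z) have |Z| = 1."""
    # numpy.roots expects the highest-degree coefficient first
    coeffs = (np.asarray(alpha, float) + H2 * np.asarray(beta, float))[::-1]
    Z = np.roots(coeffs)
    return np.all(np.abs(np.abs(Z) - 1.0) < tol)

alpha, beta = [1, -2, 1], [0, 1, 0]          # 2-step Störmer method
H2_grid = np.linspace(0.01, 6.0, 600)
inside = [H2 for H2 in H2_grid if roots_on_unit_circle(alpha, beta, H2)]
print("roots stay on the unit circle up to H^2 ≈", max(inside))
```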
The requirement that a $`k`$-step symmetric multistep method have order $`k`$ (for an explicit method) or $`k+2`$ (for an implicit method) does not determine the $`\alpha `$ and $`\beta `$ coefficients uniquely when $`k>2`$, because for a symmetric method the equations $`C_j=0`$ with $`j`$ odd are not independent of the equations with $`j`$ even. There are thus some free coefficients that can be chosen by the user, although their range is restricted by the requirement of zero-stability. Lambert and Watson (1976) gave examples of explicit methods with orders 2, 4, and 6, and implicit methods with orders 4, 6, and 8. Quinlan and Tremaine (1990) gave examples of explicit methods with orders 8, 10, 12, and 14. Table 1 lists the coefficients of five explicit methods that will be discussed in what follows. The methods SY8, SY10, and SY12 are the 8th-, 10th-, and 12th-order methods of Quinlan and Tremaine; SY8A and SY8B are 8th-order methods that have not previously been published. Table 2 lists the spurious roots of $`\rho (z)`$ for these five methods.
## 3 Origin of the resonances and instabilities
The origin of the resonances and instabilities will be explained using a Kepler orbit as an example. The resonances and instabilities occur for all nonlinear oscillatory problems, not just for the Kepler problem; other examples will be given later.
### 3.1 A simple explanation for the instabilities
We start with a simple explanation for the instability that occurs when a circular orbit is integrated with the method SY8 using 60 steps per orbit. The spurious roots of $`\rho (z)`$ for the method SY8 are located on the unit circle at angles $`\pm 4\pi /5`$, $`\pm 2\pi /5`$, and $`\pm 2\pi /6`$ (i.e., at $`\pm 144^{\circ }`$, $`\pm 72^{\circ }`$, and $`\pm 60^{\circ }`$). At 60 steps per orbit the spurious roots of $`\mathrm{\Omega }(Z;\omega ^2h^2)`$ differ little from those of $`\rho (z)`$; the difference will be ignored here. It is the roots at $`2\pi /5`$ and $`-2\pi /6`$ (or $`-2\pi /5`$ and $`2\pi /6`$) that cause the trouble. The root at $`2\pi /5`$ allows a 5-step oscillation: in the absence of any forces, equation (2) admits a solution $`x_n=\mathrm{exp}(2\pi in/5)`$. Similarly, the root at $`2\pi /6`$ allows a 6-step oscillation. By themselves the oscillations would be harmless for this problem as long as the start-up routine is accurate. The trouble arises when the 5- and 6-step oscillations can resonate.
Consider first the 5-step oscillation. The perturbations to the $`x`$ and $`y`$ coordinates can both oscillate with a 5-step period. If the $`y`$ oscillation is $`90^{\circ }`$ ahead of the $`x`$ oscillation, the perturbation goes around in a counter-clockwise sense with a 5-step period. Assume that the orbit also moves in a counter-clockwise sense. During one orbital period the perturbation goes around $`60/5=12`$ times. Because the perturbation goes around in the same sense as the orbit, however, the perturbation to the orbital radius completes only $`12-1=11`$ oscillations. Now consider the 6-step oscillation. This time assume that the $`y`$ oscillation is $`90^{\circ }`$ behind the $`x`$ oscillation, so that the perturbation goes around in a clockwise sense with a period of 6 steps. During one orbital period the perturbation goes around $`60/6=10`$ times, but the radial perturbation completes $`10+1=11`$ oscillations. This leads to the resonance: the radial perturbations from both the 5-step and 6-step oscillations can go around 11 times in one orbital period. The perturbation analysis given below shows that the resonance causes an instability.
The explanation was verified by checking that the instability can be enhanced by adding noise to the initial conditions with the right frequency and phase. When noise was added with a 5-step component polarized in a counter-clockwise sense and a 6-step component polarized in clockwise sense, the instability became obvious sooner, but when the polarizations were reversed, so that the 5-step component was clockwise and the 6-step component counter-clockwise, the instability was delayed.
The explanation predicts that instabilities will occur for other symmetric multistep methods at stepsizes at which the number of steps per orbit $`N`$ satisfies
$$\frac{N}{2\pi /\theta _l}+1=\frac{N}{2\pi /\theta _j}-1,$$
(14)
where $`\mathrm{exp}(i\theta _l)`$ and $`\mathrm{exp}(i\theta _j)`$ are spurious roots of $`\rho (z)`$. The order of the method must be at least six for an instability of this type to occur, since the polynomial $`\rho (z)`$ must have at least four spurious roots. According to the prediction the method SY10 of Quinlan and Tremaine should be unstable for circular orbits at 84 steps per orbit, and the method SY12 should be stable as long as the number of steps per orbit is at least 36; both predictions were verified, along with similar predictions for other symmetric multistep methods.
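A quick way to evaluate the criterion (14) (an added illustration, not from the paper): for a pair of spurious-root angles it gives $`N=4\pi /(\theta _j-\theta _l)`$, which for the SY8 roots at $`2\pi /6`$ and $`2\pi /5`$ is $`N=60`$:

```python
import numpy as np
from itertools import combinations

def unstable_stepnumbers(spurious_angles):
    """Stepnumbers N solving eq. (14) for each pair of spurious-root angles."""
    out = {}
    for th_l, th_j in combinations(sorted(spurious_angles), 2):
        out[(th_l, th_j)] = 4 * np.pi / (th_j - th_l)
    return out

# SY8: positive spurious-root angles 2*pi/6, 2*pi/5 and 4*pi/5
angles = [2 * np.pi / 6, 2 * np.pi / 5, 4 * np.pi / 5]
for pair, N in unstable_stepnumbers(angles).items():
    print(pair, "-> N =", round(N, 2))   # the (2pi/6, 2pi/5) pair gives N = 60
```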
For eccentric Kepler orbits equation (14) must be generalized to allow for the different frequency components present in the motion. The forces in an eccentric Kepler orbit (with unit major axis) can be expanded as (Kovalevsky 1967)
$`{\displaystyle \frac{x(t)}{r(t)^3}}`$ $`=`$ $`-{\displaystyle \sum _{q=1}^{\infty }}q\left[J_{q+1}(qe)-J_{q-1}(qe)\right]\mathrm{cos}(q\omega t),`$ (15)
$`{\displaystyle \frac{y(t)}{r(t)^3}}`$ $`=`$ $`\sqrt{1-e^2}{\displaystyle \sum _{q=1}^{\infty }}q\left[J_{q+1}(qe)+J_{q-1}(qe)\right]\mathrm{sin}(q\omega t),`$ (16)
where the $`J_q`$ are Bessel functions of the first kind. The forces can be built up from components with frequencies that are integral multiples of the fundamental frequency $`\omega `$. The “1” on the left-hand side of equation (14) must therefore be allowed to take on the integer values 1, 2, 3, …, corresponding to the fundamental frequency and the higher harmonics, and similarly the “1” on the right-hand side must be allowed to take on the same values, independently of the value assumed on the left-hand side. This leads to the prediction of instabilities at $`N=60`$, 90, 120, 150, etc. The width of the instability at small eccentricities (when plotted as in Figure 1) is found to vary with the eccentricity as $`e^0`$ at $`N=60`$, as $`e^1`$ at $`N=90`$, as $`e^2`$ at $`N=120`$, and so on, as expected since the different frequency components have amplitudes that vary as powers of the eccentricity.
### 3.2 Perturbation analysis for a circular orbit
A perturbation analysis can be used to show that circular orbits are unstable when the condition (14) is satisfied. Consider a circular orbit of radius $`R`$ in a two-dimensional axisymmetric potential $`\varphi (r)`$. The circular frequency $`\omega `$ is
$$\omega (r)^2=\varphi ^{\prime }(r)/r.$$
(17)
Let $`x_n`$ and $`y_n`$ be the $`x`$ and $`y`$ coordinates at the $`n`$th time step. Define $`z=x+iy`$, $`r=|z|`$, and write equation (2) as
$$\sum _{j=0}^{k}\alpha _jz_{n+j}=h^2\sum _{j=0}^{k}\beta _jF_{n+j},$$
(18)
where the complex force $`F`$ is
$$F=f_x+if_y=-\frac{z}{r}\varphi ^{\prime }(r).$$
(19)
Provided that the stepsize $`h`$ lies within the interval of periodicity, and that the radius is assumed to be fixed, so that equation (18) is linear in $`z`$, the principal roots $`Z_p^{\pm 1}`$ (in fact all the roots) of (18) will lie on the unit circle, and the numerical solution will have the form
$$z_n=RZ_p^n,\qquad Z_p\approx e^{i\omega h},$$
(20)
where $`Z_p`$ is the principal root of $`\mathrm{\Omega }(Z;\omega ^2h^2)`$ corresponding to the assumed counter-clockwise rotation.
Now consider a perturbed circular orbit
$$z_n=RZ_p^n(1+u_n),$$
(21)
where $`RZ_p^nu_n`$ is a small perturbation at time step $`n`$. The perturbed radius is
$$R_n=(z_nz_n^{*})^{1/2}\approx R[1+(u_n+u_n^{*})/2],$$
(22)
where the asterisk denotes complex conjugation. Substituting (21) into equation (18) and linearizing the resulting equation we find
$$\sum _{j=0}^{k}\alpha _jZ_p^ju_{n+j}=-h^2\sum _{j=0}^{k}\beta _jZ_p^j\left(\omega _1u_{n+j}-\omega _2u_{n+j}^{*}\right),$$
(23)
where
$`\omega _1`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[{\displaystyle \frac{1}{r}}\varphi ^{\prime }(r)+\varphi ^{\prime \prime }(r)\right]={\displaystyle \frac{1}{2}}(\kappa ^2-2\omega ^2),`$ (24)
$`\omega _2`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left[{\displaystyle \frac{1}{r}}\varphi ^{\prime }(r)-\varphi ^{\prime \prime }(r)\right]={\displaystyle \frac{1}{2}}(4\omega ^2-\kappa ^2),`$ (25)
and where $`\kappa `$ is the epicyclic frequency, given by (Binney and Tremaine 1987)
$$\kappa ^2(r)=r\frac{d\omega ^2}{dr}+4\omega ^2=\varphi ^{\prime \prime }(r)+\frac{3}{r}\varphi ^{\prime }(r).$$
(26)
A trial perturbation of the form
$$u_n=A_1S^n+A_2S^{*n}$$
(27)
leads to a solution for $`S`$ that is independent of the step number $`n`$ if the following two equations are satisfied:
$`A_1{\displaystyle \sum _{j=0}^{k}}\left(\alpha _jZ_p^jS^j+\omega _1h^2\beta _jZ_p^jS^j\right)`$ $`=`$ $`A_2^{*}{\displaystyle \sum _{j=0}^{k}}\omega _2h^2\beta _jZ_p^jS^j;`$ (28)
$`A_2{\displaystyle \sum _{j=0}^{k}}\left(\alpha _jZ_p^jS^{*j}+\omega _1h^2\beta _jZ_p^jS^{*j}\right)`$ $`=`$ $`A_1^{*}{\displaystyle \sum _{j=0}^{k}}\omega _2h^2\beta _jZ_p^jS^{*j}.`$ (29)
Taking the complex conjugate of the second equation, we get two equations for the two unknowns $`A_1`$ and $`A_2^{*}`$. The determinant of the system must vanish, which gives (using $`H=\omega h`$)
$$D(S)=\mathrm{\Omega }(SZ_p;H^2)\mathrm{\Omega }(S/Z_p;H^2)-\omega _2h^2\left[\mathrm{\Omega }(SZ_p;H^2)\sigma (S/Z_p)+\mathrm{\Omega }(S/Z_p;H^2)\sigma (SZ_p)\right]=0.$$
(30)
$`D(S)`$ is a symmetric polynomial of degree $`2k`$ in $`S`$, with real coefficients. There are several things to note about the roots of this polynomial. If $`S`$ is a root then so are $`S^{*}`$, $`1/S`$, and $`1/S^{*}`$. $`S=1`$ is always a root, since $`\mathrm{\Omega }(Z_p;H^2)=\mathrm{\Omega }(1/Z_p;H^2)=0`$. The root $`S=1`$ corresponds to unperturbed motion, because for this root $`A_1=-A_2^{*}`$ and hence the linearized perturbation to the radius is zero. If $`h=0`$ then $`D(S)=\rho (S)^2`$, whose roots are the same as those of $`\rho (z)`$, with each root $`S_i`$ having twice the multiplicity of the corresponding root $`z_i`$ of $`\rho (z)`$. Thus when $`h=0`$ the spurious roots of $`\rho (z)`$ (assumed to be distinct) are double roots of $`D(S)`$, and $`S=1`$ is a root of multiplicity four. If $`h`$ is small but nonzero the roots that were double roots of $`D(S)`$ when $`h=0`$ split, and are approximately $`Z_jZ_p^{\pm 1}`$ $`(j=3,\mathrm{},k)`$, where the $`Z_j`$ are the spurious roots of $`\mathrm{\Omega }(Z;H^2)`$. The root $`S=1`$ is a double root when $`h\ne 0`$, and two new roots appear at approximately $`Z_p^{\pm \kappa /\omega }`$, corresponding to the usual epicyclic oscillations with frequency $`\omega \pm \kappa `$.
As $`h`$ increases away from $`0`$ the roots of $`D(S)`$ move around the unit circle as just described. Problems arise, however, near stepsizes where $`Z_p^2Z_l/Z_j=1`$ for some spurious roots $`Z_l`$ and $`Z_j`$. When this happens the approximate solutions for two of the roots coincide, $`Z_lZ_p=Z_j/Z_p`$, and a more careful analysis is needed. The troublesome stepsizes can be estimated by replacing $`Z_p`$ by $`\mathrm{exp}(i\omega h)`$ and $`Z_l`$ and $`Z_j`$ by the corresponding roots of $`\rho (z)`$, which we write as $`z_l=\mathrm{exp}(i\theta _l)`$ and $`z_j=\mathrm{exp}(i\theta _j)`$. The troublesome stepsizes then occur when
$$\theta _j-\theta _l=2\omega h=4\pi /N,$$
(31)
where $`N`$ is the number of steps per orbit. This is the instability criterion (14) that was given earlier.
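The polynomial $`D(S)`$ is straightforward to scan numerically. The sketch below (added for illustration; it is not the code used in the paper) builds the coefficients of (30) for a circular orbit with $`\omega =1`$ and returns the largest root modulus. Supplied with the SY8 coefficients of Table 1 it reproduces the behaviour shown in Figure 3; the 2-step Störmer method appears only as a stand-in example:

```python
import numpy as np
from numpy.polynomial import polynomial as P   # coefficient arrays, lowest order first

def max_root_modulus(alpha, beta, N, kappa_over_omega=1.0):
    """Largest |S| among the roots of D(S), eq. (30), for N steps per orbit."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    j = np.arange(len(alpha))
    H = 2 * np.pi / N                          # H = omega*h, with omega = 1
    w2 = 0.5 * (4.0 - kappa_over_omega**2)     # omega_2 of eq. (25) for omega = 1
    # principal root of Omega(Z; H^2): the root closest to exp(i*H)
    Zroots = np.roots((alpha + H**2 * beta)[::-1])
    Zp = Zroots[np.argmin(np.abs(Zroots - np.exp(1j * H)))]
    O1 = (alpha + H**2 * beta) * Zp**j         # Omega(S*Zp) as a polynomial in S
    O2 = (alpha + H**2 * beta) * Zp**(-j)      # Omega(S/Zp)
    s1, s2 = beta * Zp**j, beta * Zp**(-j)     # sigma(S*Zp), sigma(S/Zp)
    D = P.polysub(P.polymul(O1, O2),
                  w2 * H**2 * P.polyadd(P.polymul(O1, s2), P.polymul(O2, s1)))
    return np.abs(P.polyroots(D)).max()

# Stand-in example (2-step Störmer, no instability); use the SY8 coefficients instead.
for N in (59, 60, 61):
    print(N, max_root_modulus([1, -2, 1], [0, 1, 0], N))
```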
Figure 3 shows the maximum magnitude of the roots of $`D(S)`$ for the method SY8 used near 60 steps per orbit with a Kepler potential $`\varphi (r)=-1/r`$.
The solid line is the exact solution found by numerical calculation of the roots. The dashed line is an approximate solution found by simplifying $`D(S)`$ by writing
$$\mathrm{\Omega }(SZ_p^{\pm 1};H^2)\approx \prod _{m=1}^{8}(SZ_p^{\pm 1}-Z_{m,r}),$$
(32)
where the $`Z_{m,r}`$ are the roots of $`\mathrm{\Omega }`$ at the resonant stepsize ($`N\approx 60.455`$), and by replacing $`SZ_p`$ by $`Z_{l,r}`$ and $`S/Z_p`$ by $`Z_{j,r}`$ everywhere except in the terms $`(SZ_p-Z_{l,r})`$ and $`(S/Z_p-Z_{j,r})`$; the same replacements are made in the $`\sigma `$ functions. We thus obtain from $`D(S)=0`$ a quadratic equation in $`S`$, whose roots are easily found. The exact and approximate solutions both show that there are roots of $`D(S)`$ that lie off the unit circle when $`N`$ is in the range 59.2–60.4. The amount by which the roots move off the unit circle depends on the value of $`\kappa ^2-4\omega ^2`$. For a harmonic oscillator potential $`\varphi (r)\propto r^2`$ the roots of $`D(S)`$ do not move off the unit circle, since $`\kappa ^2-4\omega ^2=0`$, and the instability does not occur.
### 3.3 Location of the resonances
The resonances in Figure 1 are easier to predict than the instabilities. Since the force acting on an eccentric Kepler orbit is a superposition of components with frequencies that are integral multiples of the fundamental frequency $`\omega `$, the resonances occur when the principal root $`Z_p\approx \mathrm{exp}(i\omega h)`$ raised to some integral power $`q`$ coincides with one of the spurious roots $`Z_j`$, i.e., when
$$N\approx \frac{2\pi q}{\theta _j},\qquad q=1,2,3,\mathrm{},$$
(33)
where $`z_j=\mathrm{exp}(i\theta _j)`$ is a spurious root of $`\rho (z)`$. This correctly predicts the resonances for the method SY8 at stepnumbers that are multiples of 5 and 6 (and also 2.5, although these resonances are usually too weak to be seen). The amplitude and growth rate of the resonance decrease as $`q`$ increases, because for small eccentricities the Bessel functions in equations (15) and (16) decrease rapidly with $`q`$.
## 4 Further examples
Three more examples will be given to show how the resonance and instability locations depend on the multistep method and on the equation being integrated.
### 4.1 Kepler orbits with a 12th-order symmetric method
The 12th-order symmetric method SY12 is expected to be better than the 8th-order method SY8 for circular Kepler orbits, since the 12th-order method is free of instabilities for these orbits as long as the integration takes at least 36 steps per orbit. This is confirmed in panel (a) of Figure 4 (the energy error of the symmetric method is caused by roundoff error at most stepsizes in this panel).
For eccentric orbits, however, the 12th-order method can be as troublesome as the 8th-order method, if not more so, because the extra spurious roots allow more opportunities for resonances and instabilities to occur, and because the spurious root at $`2\pi /9`$ on the unit circle leads to a resonance that can be excited even at low eccentricities. Panel (b) of Figure 4 shows resonances at stepnumbers that are multiples of 4.5, 6, and 9, and instabilities at stepnumbers that are multiples of 36, although these are not the only instabilities; there are others in the figure, and more would be present if the stepsizes had been sampled more finely.
### 4.2 Orbits in a logarithmic potential
The Kepler potential is special in that all bound orbits are closed and have $`\omega =\kappa `$, i.e., the azimuthal period $`T_a=2\pi /\omega `$ is equal to the radial period $`T_r=2\pi /\kappa `$ (equation (26) gives $`\kappa `$ for a circular orbit). For most potentials $`T_aT_r`$, and the analysis given previously must be modified. Consider an eccentric orbit in a logarithmic potential $`\varphi (r)=\mathrm{log}(r)`$, for which (in the limit of a circular orbit) $`\kappa =\sqrt{2}\omega `$. A Fourier analysis of the motion contains frequencies $`\omega +q\kappa `$ (where $`q`$ is an integer), and hence the prediction (33) for the resonant stepsizes must be modified to read
$$N\approx \frac{2\pi }{\theta _j}\left(1+\frac{q\kappa }{\omega }\right)=\frac{2\pi }{\theta _j}\left(1+\frac{qT_a}{T_r}\right),\qquad q=0,1,2,\mathrm{},$$
(34)
where $`N=T_a/h`$ is the number of steps per azimuthal period. This prediction is verified for the method SY8 in Figure 5, which shows results from integrating an orbit with initial conditions $`x_0=1`$, $`y_0=0`$, $`x_{}^{}{}_{0}{}^{}=0`$, $`y_{}^{}{}_{0}{}^{}=1.1`$, for which $`T_a/T_r=1.41536`$, close to the value $`\sqrt{2}`$ for a circular orbit (the stepsizes were not sampled as finely in this figure as in the other figures).
There is an instability at 60 steps per azimuthal period, just as with the Kepler potential, and resonances at the $`N`$ values predicted by equation (34), taking $`\theta _j`$ to be $`2\pi /5`$ (the short dashed lines in Figure 5) or $`2\pi /6`$ (the long dashed lines).
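For reference, the predicted resonance locations follow directly from (34); a minimal evaluation (added here, not from the paper) with the quoted value of $`T_a/T_r`$:

```python
import numpy as np

Ta_over_Tr = 1.41536                     # from the integrated orbit
for theta, name in [(2 * np.pi / 5, "2pi/5"), (2 * np.pi / 6, "2pi/6")]:
    N = [2 * np.pi / theta * (1 + q * Ta_over_Tr) for q in range(8)]
    print(name, [round(n, 1) for n in N])
# the first few values: 2pi/5 -> 5.0, 12.1, 19.2, ...;  2pi/6 -> 6.0, 14.5, 23.0, ...
```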
### 4.3 The one-dimensional hard spring
As a final example we consider the one-dimensional hard-spring equation
$$x^{\prime \prime }(t)=-x(t)^3.$$
(35)
While this is not an orbital equation, the motion is similar to an eccentric orbit in that it is periodic and contains frequencies that are integral multiples of the fundamental frequency, although in this case only the odd multiples are present. Figure 6 shows the errors obtained by integrating this equation with the method SY8.
As expected, there are resonances at stepnumbers that are odd multiples of 5 or 6, such as 65, 66, 75, 78, etc.; the even multiples are missing because of the missing frequencies in the motion. There are instabilities at 60 steps per orbit and at integer multiples of this number: 120, 180, etc.; the instabilities at 90, 150, etc., are missing, again because of the missing frequencies.
## 5 A search for better symmetric methods
A question that naturally arises is whether Quinlan and Tremaine (1990) could have chosen better multistep coefficients to reduce the resonance and instability problems. The answer is yes for the 8th- and 10th-order methods, but probably no for the 12th-order method.
There are three properties that we would like a symmetric multistep method to have:
1. The method should have a large interval of periodicity.
2. To avoid instabilities the spurious roots of $`\rho (z)`$ should be well spread out on the unit circle.
3. To avoid resonances the spurious roots of $`\rho (z)`$ should be far from $`z=1`$.
Fukushima (1998) has searched for multistep methods with large intervals of periodicity. It is properties 2 and 3 that concern us here. Unfortunately these properties are not compatible: the spurious roots cannot be well spread out on the unit circle and at the same time be far from $`z=1`$. A compromise must be made, which becomes more difficult the more spurious roots there are, i.e., the higher the order of the method.
A systematic search was made through symmetric multistep methods with $`\alpha `$ coefficients drawn from the set {0, $`\pm 1/8`$, $`\pm 2/8`$,…,$`\pm 7/8`$,$`\pm 1`$,$`\pm 2`$,$`\pm 4`$} (with $`\alpha _k=1`$). This set was chosen because the integration method suffers less from roundoff error if the $`\alpha `$’s are integral powers of 2; it is unlikely that much better methods would have been found by letting the $`\alpha `$’s take on arbitrary values. For each set of coefficients the roots of $`\rho (z)`$ were computed and checked to see if and where they lie on the unit circle. The most promising methods (based on the three properties given above) were compared in integrations of eccentric Kepler orbits.
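A sketch of the screening step in such a search (added for illustration; not the actual search code): given a trial set of $`\alpha `$ coefficients, find the roots of $`\rho (z)`$, check that none lies outside the unit circle, and record where the spurious roots sit on it:

```python
import numpy as np

def screen_alpha(alpha, tol=1e-6):
    """Roots of rho(z) and a (necessary) zero-stability check for a trial alpha set."""
    roots = np.roots(np.asarray(alpha, float)[::-1])   # alpha given as alpha_0 ... alpha_k
    mods = np.abs(roots)
    # necessary condition for zero-stability: no root outside the unit circle
    # (the double principal root at z = 1 is expected; repeated spurious roots
    # on the circle would also have to be excluded)
    stable = np.all(mods <= 1 + tol)
    spurious = [r for r in roots if abs(r - 1) > 1e-4]
    angles = sorted(abs(np.angle(r)) for r in spurious if abs(abs(r) - 1) < tol)
    return stable, np.degrees(angles)

# Example: the alternating choice 1, -2, 2, ..., -2, 1 for an 8-step method
# puts the spurious roots at +/-45, +/-90 and +/-135 degrees.
print(screen_alpha([1, -2, 2, -2, 2, -2, 2, -2, 1]))
```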
Consider first the 8th-order methods. For any even integration order $`k`$, the choice 1, $`-2`$, $`+2`$, $`-2`$, $`+2`$, …, for the $`\alpha `$ coefficients spreads the spurious roots on the unit circle as evenly as possible, and tends to minimize the instabilities. That choice was made for the method SY8A listed in Table 1, which has the largest interval of periodicity of the 8th-order methods that were tested. This method is much more stable than the method SY8; the integration results in Figure 7 show no signs of instabilities even for an orbit with an eccentricity $`e=0.3`$.
The drawback of the method SY8A, however, is the spurious root at $`2\pi /8`$, or $`45^{}`$, which leads to resonances (at stepnumbers $`N`$ that are multiples of 8) that are stronger than for SY8. The method SY8A is certainly an improvement over SY8, but the resonances are annoying.
The resonances can be reduced by picking a method whose spurious roots are farther from $`z=1`$. An example is the method SY8B, whose spurious root closest to $`z=1`$ is at $`76.96^{\circ }`$. (It is easy to find methods whose resonances are weaker than those of SY8B, but these methods have much poorer stability properties.) Figure 7 shows that the resonances are much weaker with this method than with SY8A and SY8: they fall off faster as the number of steps per orbit is increased, being almost unnoticeable when the $`e=0.15`$ orbit is integrated with more than 75 steps per orbit, and the $`e=0.30`$ orbit with more than 120 steps per orbit. But the price we pay for this gain is a decrease in stability, as revealed by the spikes in the error plot, especially at the higher eccentricity, $`e=0.3`$. The instabilities can be avoided if the stepsize is chosen small enough, however, and are not as bad as with the method SY8. The method SY8B is the most promising of the 8th-order methods that were tested.
The search for better 10th- and 12th-order symmetric methods was not as successful. The method SY10 of Quinlan and Tremaine (1990) is unstable at 84 steps per orbit, even for a circular orbit. The method SY10A (like SY8A, with the spurious roots spread out evenly on the unit circle) is much more stable, but has an annoying resonance at stepnumbers $`N`$ that are multiples of 10. None of the other 10th-order methods tested was much better than SY10A. For the 12th-order methods none was found to be better than SY12. The method SY12A (again, with the spurious roots spread out evenly on the unit circle) has an annoying resonance at stepnumbers that are multiples of 12, and its stability properties are no better than those of SY12.
Fukushima (1999) has tested some implicit symmetric methods to see if they are affected by the resonance and instability problems as badly as the explicit methods. The disadvantage of implicit symmetric methods is that the corrector step must be applied at least twice for the benefits of the symmetry to be realized, so that for the same stepsize an implicit method requires at least three times the number of force evaluations required by an explicit method. Unless implicit methods can be found with much weaker resonances and instabilities than the explicit methods, this extra work would be better spent using an explicit method with a smaller stepsize.
## 6 The use of symmetric methods for integrating planetary orbits
The examples considered so far have been chosen for their simplicity and pedagogical value. We now test the method SY12 on a more complex example, typical of those encountered in research problems, to get a better idea of how the method compares with a high-order Störmer method and how its effectiveness is reduced by the resonances and instabilities. The example is the long-term integration of a planetary system, the problem that was the motivation for the high-order symmetric multistep methods of Quinlan and Tremaine (1990), and a research problem on which the method SY12 has been used (the 3 Myr integration of all nine planets by Quinn, Tremaine, and Duncan 1991). We consider an idealized solar system containing only two planets, Jupiter and Saturn, as this is sufficiently complex to suggest how the method will work for a system with more than two planets.
The initial conditions for Jupiter and Saturn are taken from Standish (1990). The initial major axes are $`a_J\approx 5.203\mathrm{AU}`$ and $`a_S\approx 19.280\mathrm{AU}`$, giving orbital periods $`P_J\approx 4332.8`$ and $`P_S\approx 30905`$ (in days, the unit of time in this discussion). The initial eccentricities are $`e_J\approx 0.048`$ and $`e_S\approx 0.051`$. The orbits were integrated for 1 Myr using 1000 different stepsizes $`h`$ spaced equally in $`1/h`$ between $`h=50`$ and $`h=81`$, corresponding to approximately 86.6 and 53.5 steps per orbit for Jupiter. To speed the calculations the energy error was measured every 5th integration step, rather than every step. Jupiter’s longitude at the end of the integration was compared with an accurate value determined by an integration with a much smaller stepsize ($`h=10`$). The integration errors are plotted versus stepsize in Figure 8.
The two planets were first integrated separately, with no gravitational interaction between them. The maximum energy errors are shown in panel (a). The errors for Saturn in this panel are caused by roundoff error and are not interesting. The Störmer method is unstable if it is used for Jupiter’s orbit with a stepsize larger than $`h\approx 57`$. The symmetric method is stable for Jupiter’s orbit with stepsizes as large as $`h\approx 80`$, although the effects of resonances are seen when the number of steps per orbit is a multiple of 9 (near the $`h`$ values 53.5, 60.2, and 68.8, corresponding to 81, 72, and 63 steps per orbit), and also, to a lesser extent, a multiple of 6 (near $`h=72.2`$, or 60 steps per orbit). The spike in the error near $`h=80`$ (54 steps per orbit) is an instability, not a resonance, as there is a sudden growth in the error to a large value early in the integration. Away from the resonances the maximum energy error is more than 100 times smaller with the symmetric method than with the Störmer method.
The planets were then integrated together including their gravitational interaction, in the hope that it might detune the resonances and remove the spikes from the error plot. This is not what happened. Panel (b) shows a profusion of new resonances resulting from the high-frequency perturbation terms in the Jupiter-Saturn interaction. There is still the instability near $`h=80`$ that was present for Jupiter alone, plus a narrow instability near $`h=78.55`$ that was not present for Jupiter alone, but it is hard to identify the resonances at 60, 63, 72, and 81 steps per Jupiter orbit, as they are lost in the other resonances.
With the symmetric method the maximum energy error away from the resonance peaks is noticeably larger for the Jupiter-Saturn system than for the single-planet (Jupiter) system, whereas for the Störmer method the errors for the two systems are comparable. At first this suggests that the symmetric method is not much better than the Störmer method for the two planet system. But panels (c) and (d) suggest the opposite conclusion. With the symmetric method the average energy error is several orders of magnitude smaller than the maximum error, whereas with the Störmer method the average and maximum errors are comparable. The error in Jupiter’s longitude at the end of the integration is much smaller with the symmetric method than with the Störmer method. Even at the resonance peaks the longitude error is more than an order of magnitude smaller with the symmetric method than with the Störmer method, and away from the resonances the difference is more than three orders of magnitude. The narrowness of the resonances is difficult to appreciate from the figure. Out of the 310 stepsizes that were tested in the range $`h=60`$–70, only 16 (or 8) resulted in longitude errors larger than $`10^{-5}`$ (or $`10^{-4}`$).
These results show that the maximum energy error can be an unduly strict criterion to use for evaluating the performance of a symmetric method. In a planetary integration the position errors are determined by the average energy error, which for the symmetric methods is much smaller than the maximum error. The maximum error is important for calculations of instantaneous orbital elements that depend on the velocity, such as the major axis, eccentricity, and inclination. In the 3 Myr integration of the solar system by Quinn et al. (1991) these elements were digitally filtered to remove oscillations with periods smaller than $`500`$yr; the filtering probably removed any spurious oscillations caused by the symmetric method, and the errors in the output elements were probably closer to the average errors than to the maximum errors. The comparison that Quinn et al. made of their results with the results of a shorter but more accurate Störmer integration (with a smaller stepsize) showed satisfactory agreement.
Despite the resonances and instabilities, then, symmetric methods can still be a better choice than Störmer methods for long integrations of planetary orbits provided that the user is aware of the dangers.
## Acknowledgement
Scott Tremaine and I learned of the resonance and instability problems from Alar Toomre, who discovered them through numerical experiments soon after our 1990 paper was published. Alar’s detailed comments on an early draft of this paper (April 1991) improved my understanding and presentation of the problem; his further experiments and enthusiastic help since then have been greatly appreciated. Scott suggested the perturbation analysis given in Section 3.2, and helped in a number of other ways. This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC). Most of the research was completed while I was working at CITA; I thank Dick Bond for letting me use an office there to finish the paper.
no-problem/9901/astro-ph9901343.html | ar5iv | text
# Theoretical interpretation of the X-ray properties of GRB960720
## 1 Introduction
The gamma-ray emission from GRBs is probably produced by internal shocks in a relativistic wind whereas the afterglow (from X-rays to radio bands) is due to the external shock, i.e. the forward shock propagating in the ISM because of its interaction with the wind (Rees and Mészáros, rees (1994) ; Wijers et al., wijers (1997)). Simultaneously, a reverse shock propagates in the wind itself. We illustrate here the possible contribution of this reverse shock to an X-ray emission persisting immediately after the gamma-rays. Such an emission has been observed in the first GRB detected by Beppo-SAX, GRB960720. We use the detailed X-ray data made available by SAX for this burst (Piro et al., piro (1998)) to make a comparison with our theoretical results.
## 2 The internal shocks
GRB960720 has been observed both by BATSE and Beppo-SAX. It is a single-pulse burst, with a “FRED” profile. Its duration in the 50-700 keV band is around 2–3 s but the X-ray emission lasts longer: Piro et al. (piro (1998)) show that the power-law between the width of the pulse and the energy (already known in the gamma-ray range) is observed down to 2 keV. They find $`W(E)\propto E^{-0.46}`$.
We use a simple model to simulate internal shocks and build synthetic bursts: all pressure waves are neglected so that we consider only direct collisions between solid layers. In the shocked material, the magnetic field reaches equipartition values (10 – 1000 G) and the Lorentz factor of the electrons is obtained from the dissipated energy per proton $`ϵ`$ using the formula given by Bykov and Mészáros (bykov (1996)) who suppose that only a fraction $`\zeta `$ of the electrons is accelerated:
$$\mathrm{\Gamma }_e\simeq \left[\frac{\alpha _M}{\zeta }\frac{ϵ}{m_ec^2}\right]^\alpha \qquad (\alpha _M\simeq 0.11\mathrm{\ and\ }1\le \alpha \le 1.5)$$
(1)
For $`\zeta \simeq 1`$ the usual equipartition assumption yields values of $`\mathrm{\Gamma }_e`$ of a few hundreds: the gamma-ray emission is due to inverse Compton scattering on the synchrotron photons. Smaller values for the fraction of accelerated electrons ($`\zeta <10^{-2}`$) lead to larger Lorentz factors ($`\mathrm{\Gamma }_e`$ of a few thousands) so that the gamma-rays are directly produced by the synchrotron process, which is the assumption made here. Internal shocks have been shown to successfully reproduce the main temporal and spectral properties of GRBs (Daigne and Mochkovitch, daigne (1998)).
We model GRB960720 with a wind emitted during 4 s and consisting of a slow and a rapid part of equal mass (see figure 1). Two internal shocks are generated and we sum both contributions to the emission to construct the synthetic burst. The profile in the SAX 50-700 keV band looks very similar to GRB960720 as can be seen in figure 2. However the X-ray emission does not last long enough, so the power-law relating $`W(E)`$ and $`E`$ is not reproduced in this spectral range.
## 3 Effect of a medium of uniform density
We now consider the effect of the ISM, whose density $`n`$ is supposed to be uniform. The external shock (forward shock) produces the afterglow. Simultaneously a reverse shock propagates into the wind. Its strength is comparable to that of the internal shocks, which are mildly relativistic, while the external shock is initially very strong and relativistic. We therefore adopt the same assumptions to compute the emission of the shocked material behind the reverse and the internal shocks.
It is possible to derive a critical ratio of the total energy injected over the density (see Sari and Piran, sari (1995)) :
$$\frac{E^{inj}}{n}|_{crit}\simeq \frac{4\pi }{3}m_pc^5\overline{\mathrm{\Gamma }}^8T^3$$
(2)
for which the reverse shock will interfere with the internal shocks. Injecting the typical values used in our example (Lorentz factor $`\overline{\mathrm{\Gamma }}\simeq 300`$ and duration $`T=4`$ s) gives
$$\frac{E_{52}^{inj}}{n}|_{crit}\simeq 0.071\overline{\mathrm{\Gamma }}_{300}^8T_4^3,$$
(3)
where $`E_{52}^{inj}=\frac{E^{inj}}{10^{52}/4\pi \,\mathrm{erg/sr}}`$. Assuming an efficiency $`f_\gamma `$ for the conversion of wind energy into gamma-rays by internal shocks allows one to obtain $`E^{inj}=\frac{E_\gamma }{f_\gamma }`$ from the observed burst energy $`E_\gamma `$.
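As a quick numerical check of the coefficient in (3) (added here, not from the paper; cgs values are assumed):

```python
# Check the 0.071 coefficient of eq. (3) in cgs units.
m_p = 1.6726e-24                  # proton mass, g
c   = 2.9979e10                   # speed of light, cm/s
Gamma, T, n = 300.0, 4.0, 1.0     # fiducial Lorentz factor, duration (s), density (cm^-3)

E_crit = (4 * 3.14159265 / 3) * m_p * c**5 * Gamma**8 * T**3 * n   # erg
print(E_crit / 1e52)              # ~0.07, i.e. E_52^inj/n ~ 0.071 as in eq. (3)
```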
The total energy being fixed, if the density is smaller than the critical value, the reverse shock does not contribute in gamma-rays and only produces a delayed X–ray emission with an intensity which increases with $`n`$. If the density reaches or exceeds the critical value, the X–ray and the gamma-ray profiles become very affected.
We have represented in figure 2 the profiles for a ratio $`E_{52}^{inj}/n=0.5`$ ($`n\simeq 0.15n_{crit}`$). The gamma-ray profile is unchanged and the X-ray profiles are improved. The corresponding $`E`$–$`W(E)`$ diagram in figure 2 shows a well reproduced power-law over the complete energy range with an index $`-0.45`$, in agreement with the observations of GRB 960720. In the spectrum the contribution of the reverse shock to the late emission appears as an X-ray plateau, which is observed (and is even more extended) in GRB960720.
## 4 Conclusions
The reverse shock propagating into the wind following its interaction with the ISM has been shown to produce, for a sufficiently dense medium, a late X-ray emission. In the case of GRB960720, whose gamma-ray properties are well explained by internal shocks, taking into account this additional contribution greatly improves the X-ray profiles (the power-law between the profile width and the energy is reproduced) and the spectrum.
It is now important to study the case of more complex environments. In hypernova models, the progenitor is a massive Wolf-Rayet star with a dense wind ($`\dot{M}\simeq 3\times 10^{-5}M_{\odot }/yr`$ and $`v_{\mathrm{\infty }}\simeq 2000km/s`$ typically) leading to high densities ($`n\simeq \frac{\dot{M}}{4\pi r^2m_pv_{\mathrm{\infty }}}`$) near the source. Our first calculations show that in this case even the gamma-ray profile could be strongly affected by the reverse shock, which may represent a potential problem.
no-problem/9901/physics9901029.html | ar5iv | text
# The Micro Slit Gas Detector
## 1 Introduction
A new generation of high rate proportional gaseous detectors based on advanced printed circuit technology (PCB) has been introduced during the last year. Important efforts in the research and development of this kind of detectors are justified because of their low cost and robustness. Examples of these detectors are the Gas Electron Multiplier (GEM) , the Micro-Groove Detector (MGD) , and the WELL detector . They have in common the use of thin kapton foils and PCB techniques in order to implement the multiplication structure. The flexibility of the readout is another advantage of these detectors, allowing in some cases an intrinsic two dimensional device. Detector charging up and operation stability are important issues that need to be studied. We present here indications of good performance for the Micro Slit Gas Detector (MSGD).
## 2 Detector description
The development of kapton etching techniques (commonly used for GEM production) has made possible the easy construction of new detector geometries.
In this case one of the metallic layers, of a 50 $`\mu `$m thick kapton foil copper clad on both sides, is lithographically etched with a matrix of rectangular round-corner slits, 105 $`\mu `$m wide and 6 mm long (repeated in the transverse direction with a period of 200 $`\mu `$m). On the opposite side, a pattern of 30 $`\mu `$m wide strips with 200 $`\mu `$m pitch is etched, ensuring that the strips run along the slits (see Figure 1).
When kapton is removed, the final device has 30 $`\mu `$m strips suspended only by 200 $`\mu `$m kapton joints regularly spaced at 8 mm (to provide mechanical stiffness) (Figure 2). In this way a “substrate-free” MSGC is achieved, and the detector resembles a wire-chamber.
The first detector prototype, 10$`\times `$10 cm<sup>2</sup>, was enclosed in a gas volume, which was sealed symmetrically by two thin conductive foils, at 3 mm distance from the kapton plane (see Figure 3). The first provides the drift field towards the multiplication region (drift plane), and the second was given, in the test, a certain potential with respect to the anodes, which we discuss later.
## 3 Detector performance
The signal development takes place in a similar way as in a standard MSGC. Drifting electrons reach the E-field region between anode and cathode, and are then multiplied inside the rectangular slits. The electron avalanche produced in this region is collected by the anode strips. The ion charge is collected by the cathode and the drifting plane, in a proportion depending on the operating voltages. In this case anodes were grounded through a bias resistor while a negative potential was applied to the cathode.
The detector was irradiated with X rays coming from an Cr X-ray tube and the gas mixture used was composed by Ar and DME in different proportions.
The signal was extracted from an OR of 32 anodes and amplified by a ORTEC 142PC preamplifier followed by an AFT Research Amplifier Model 2025. The output was digitized in a Tektronix TDS 684A Oscilloscope.
### 3.1 Operation voltages
Typical operating voltages in the first prototype are very similar to those of MSGCs. Detector gains obtained are somewhat lower<sup>3</sup><sup>3</sup>3In a typical MSGC with 10 $`\mu `$m anodes and 100 $`\mu `$m cathodes with Ar-DME 50% a gain of approx. 1000 is achieved with a cathode potential of −550 V, while in the Micro Slit detector the gain is around 600. . This is understandable due to the width of the anode strips (still limited by the PCB production technique), and also as a consequence of the extended gap between anode and cathode because of the non planar geometry and the cathode width <sup>4</sup><sup>4</sup>4New prototypes are under development with wider cathodes. The detector gain exhibits an exponential dependence on the voltage applied to the cathode (Figure 4). In this Figure the maximum gains shown were limited by sparks in the chamber. In some tests afterwards the MSGD was exposed to severe sparking for hours but no damage in its structure was found.
A pulse height spectrum can be seen in Figure 5. The voltage applied to the backplane does not affect essentially the anode signal, as illustrated also in Figure 5.
Figure 6 shows spectra obtained with different values of the cathode voltage. Decreasing it by 10 V, for the drift voltage V<sub>d</sub>=-1600 V, produces a 20$`\%`$ drop in the gain.
In these spectra, the Argon scape peak is clearly separated from that corresponding to the K<sub>α</sub> photon energy at 5.4 KeV. The energy resolution for pulse height spectra measured with V<sub>d</sub>=-1500 V and V<sub>cat</sub>=-515 V is 16$`\%`$ FWHM, and in this field configuration 90% of the ions drift to the cathode electrode.
The dependence of the gain with the cathode voltage was also studied for different gas mixtures. The results of these studies (Figure 4) show that the highest gains were obtained with high argon content in the gas mixture.
Also the dependence on gain with the drift field is showed in Figure 7. Clearly an enhancement of gain is obtained with higher drift field values.
### 3.2 Short term gain variation
Gain variations during the first moments of operation typically appear in detectors that use insulating substrates. This is due to the accumulation of charge on the dielectric (charging-up) and to polarization, which modify the electric field and thus affect the amplification process. Normally this effect has been avoided using more conductive coatings (like LPVD diamond) or substrates (like S8900). In the GEM, for example, the kapton surface of the holes is clearly traversed by the dipole electric field, thus producing some charging up <sup>5</sup><sup>5</sup>5 A small admixture of water in the gas as well as straighter holes have been demonstrated to solve the problem. . In this geometry we have designed the electrodes in such a way that the area exposed to the E-field represents only around 1$`\%`$ of the total. This (see below) represents a major improvement in this type of device and also simplifies the production (no coating needed).
The effect of charging up on the MSGD gain was determined by registering the pulse height spectrum and comparing the maxima from consecutive periods. Figure 8 shows the evolution of the gain during the first 82 minutes of irradiation under a rate of 10<sup>3</sup> Hz mm<sup>-2</sup> beginning from a cold start (detector and beam initially switched off). Variations of the gain are less than 4$`\%`$.
In order to accelerate the effect of this possible charge accumulation, the MSGD was irradiated with a photon rate of $`\sim `$ 10<sup>6</sup> for about 10 minutes. Figure 9 compares the spectra before and after the high irradiation. No appreciable change occurs. This behaviour differs from that observed in detectors with a dielectric substrate, like standard MSGC or GEM.
## 4 Rate capability
The rate capability of this detector was determined by measuring the current in the group of instrumented anodes for different values of the incident photon flux. Driving the X-ray tube to its maximum current, we could reach up to 2.6 $`\times `$10<sup>6</sup> Hz mm<sup>-2</sup> incident photon flux, collimated over a surface of 3 mm<sup>2</sup>. No appreciable drop in gain was observed. In Figure 10 the relative changes in the detector gain during the irradiation test are shown. They were determined from the observed deviations with respect to a linear fit between X-ray intensity and anode current.
The advantage of the MSGD is the effective absence of any dielectric surface, avoiding the use of delicate high resistive coatings to reach values of 10<sup>6</sup> mm<sup>-2</sup> s<sup>-1</sup>.
## 5 Conclusions
A prototype of a new proportional gas detector, based on the PCB technology, has been designed and tested.
The first tests with this detector show important properties, related mainly to its high rate capability (up to 2.5 MHz mm<sup>-2</sup>) and the absence of charging up effects.
In spite of its similarity to the MSGC in the amplification process, the use of PCB technology considerably reduces the cost and material budget. Besides, it is important to note the suppression of the substrate supporting the anode structure.
Another interesting possibility is to set up a similar detector with a mirror cathode structure respect to the anode plane, thus having upper and lower drift regions and allowing to reduce the effective drift gap and charge collection time.
## 6 Acknowledgements
This work was only possible due to the invaluable help and collaboration of M. Sánchez (CERN EP/PES/LT Bureau d'études Electroniques) under the responsibility of Alain Monfort, L. Mastrostefano and D. Berthet (CERN EST/SM Section des Techniques Photomécaniques).
We also thank B. Adeva, Director of the Laboratory of Particle Physics in Santiago de Compostela, where part of this work has been carried out, for his strong support and careful reading of the manuscript.
We would like to thank A. Gandi, responsible for the Section des Techniques Photomécaniques, and A. Placci, responsible for the Technical Assistance Group TA1, for their encouragement and logistic support.
Figure Captions
Figure 1: Copper clad kapton design of the Micro Slit Gas Detector (top view).
Figure 2: Scheme of one slit (transverse section). The copper layer is 15 $`\mu `$m thick.
Figure 3: Schematic view of the tested prototype.
Figure 4: Behaviour of the gain as a function of the cathode voltage for different gas mixtures.
Figure 5: Pulse height spectra obtained with different values of the voltage in the backplane.
Figure 6: Effect of the cathode voltage on the response of the detector.

Figure 7: Dependence of the gain on the drift field.
Figure 8: Evolution of the gain during the first irradiation moments.
Figure 9: Pulse height spectra before and after high irradiation.
Figure 10: Rate capability of the Micro Slit Gas Detector.
# Theory of spin wave excitation in manganites
## Abstract
The role of the orbital degrees of freedom in the spin dynamics of $`R_{1-x}A_x`$MnO<sub>3</sub> is studied theoretically. Based on the mean-field solution, an RPA calculation has been performed, and it is found that the $`d_{x^2-y^2}`$-type orbital is essential for the double-exchange (DE) interaction, i.e., the DE is basically a two-dimensional interaction. Based on these results and a comparison with experiments, we propose that the orbital wavefunction is locally of $`d_{x^2-y^2}`$-type even in the metallic ferromagnetic state, fluctuating quantum mechanically. The good agreement of the estimate with experiments suggests that the Jahn-Teller phonon has little influence on the spin dynamics.
Doped manganites $`R_{1-x}A_x`$MnO<sub>3</sub> ($`R`$=La, Pr, Nd, Sm; $`A`$=Ca, Sr, Ba) have recently attracted considerable interest due to the colossal magnetoresistance (CMR) observed near the ferromagnetic (spin $`F`$-type) transition temperature $`T_c`$. It is now recognized that the most fundamental interaction in these materials is the double-exchange (DE) interaction, which connects the transport and the magnetism. The magnetism is therefore a key issue in revealing the mechanism of CMR. In particular, rich magnetic phase diagrams have been clarified over a wide range of the concentration $`x`$ and of the bandwidth. With increasing $`x`$, the parent insulator with a layered antiferromagnetism (spin $`A`$-type AF) changes into a ferromagnetic metal (FM). In addition to these well-known magnetic phases, a spin $`A`$-type AF (in La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, Pr<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>) and a rod-type antiferromagnetism (spin $`C`$-type AF, in Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>) were recently found in the moderately doped metallic region ($`0.5<x<0.8`$).
The neutron scattering experiments have revealed the spin wave excitations at low temperatures, which depend sensitively on the doping $`x`$ and the magnetic structure. In the spin $`A`$-type AF for small $`x`$, the dispersion is two-dimensional, while it becomes isotropic in the FM state. The spin stiffness, however, stays almost constant up to $`x\simeq 0.125`$, where the phase transition between the insulating and metallic ferromagnetic states occurs. This phase transition is accompanied by a transition of the orbital structure. In the FM state, the orbital ordering disappears and the spin stiffness begins to increase. In the metallic $`A`$-type AF (AFM) state at higher $`x`$, the dispersion again becomes two-dimensional. In this paper we report the calculation of the spin wave dispersion as a function of $`x`$, taking into account the orbital structure. The calculated $`x`$-dependence of the spin stiffness agrees quantitatively with the observed one for $`x\gtrsim 0.2`$, where the double-exchange interaction dominates. This $`x`$-dependence strongly supports a large orbital polarization, which is $`d_{x^2-y^2}`$ at least locally. Therefore the double-exchange interaction is basically two-dimensional.
We previously reported a mean-field theory (MFT) for the phase diagram of doped manganites in terms of a model including the strong on-site repulsion, the orbital degeneracy, and the anisotropic covalency. Based on this MFT, we first present the spin wave dispersion in terms of the random-phase approximation (RPA). This reproduces qualitatively the $`x`$-dependence of the stiffness and the anisotropy due to the crossover from the superexchange interaction (SE) to the DE. In particular, in the doped region the DE becomes appreciable and the in-plane spin stiffness grows rapidly only when the orbital configuration becomes $`d_{x^2-y^2}`$, i.e., for $`x\gtrsim 0.2`$. The observed values of the in-plane spin stiffness agree quantitatively with the value estimated with $`d_{x^2-y^2}`$-orbital ordering. This is understood in terms of the orbital liquid picture and implies that the Jahn-Teller (JT) phonon has little influence on the spin dynamics.
We start with the Hamiltonian
$`H=\sum _{\sigma \gamma \gamma ^{\prime }ij}t_{ij}^{\gamma \gamma ^{\prime }}d_{i\sigma \gamma }^{\dagger }d_{j\sigma \gamma ^{\prime }}`$ (1)

$`-J_H\sum _i\vec{S}_{t_{2g}i}\cdot \vec{S}_{e_gi}`$ (2)

$`+J_S\sum _{\langle ij\rangle }\vec{S}_{t_{2g}i}\cdot \vec{S}_{t_{2g}j}+H_{\text{on-site}}`$ (3)
where $`\gamma `$ \[$`=a(d_{x^2-y^2}),b(d_{3z^2-r^2})`$\] specifies the orbital and the other notations are standard. The transfer integral $`t_{ij}^{\gamma \gamma ^{\prime }}`$ depends on the pair of orbitals $`(\gamma ,\gamma ^{\prime })`$ and the direction of the bond $`(i,j)`$. The spin operator for the $`e_g`$ electron is defined as $`\vec{S}_{e_gi}=\frac{1}{2}\sum _{\gamma \alpha \beta }d_{i\gamma \alpha }^{\dagger }\vec{\sigma }_{\alpha \beta }d_{i\gamma \beta }`$ with the Pauli matrices $`\vec{\sigma }`$, while the orbital isospin operator is defined as $`\vec{T}_i=\frac{1}{2}\sum _{\gamma \gamma ^{\prime }\sigma }d_{i\gamma \sigma }^{\dagger }\vec{\sigma }_{\gamma \gamma ^{\prime }}d_{i\gamma ^{\prime }\sigma }.`$ $`J_H`$ is the Hund's coupling between $`e_g`$ and $`t_{2g}`$ spins, and $`J_S`$ is the AF coupling between nearest-neighboring $`t_{2g}`$ spins. $`H_{\text{on-site}}`$ represents the on-site Coulomb interactions between $`e_g`$ electrons. Coulomb interactions induce both the spin and orbital isospin moments, and actually $`H_{\text{on-site}}`$ can be written as
$$H_{\text{on-site}}=-\sum _i\left(\tilde{\beta }\vec{T}_i^2+\tilde{\alpha }\vec{S}_{e_gi}^2\right),$$
(4)
where the coefficients of the spin and isospin operators, i.e., $`\tilde{\alpha }`$ and $`\tilde{\beta }`$, are given by $`\tilde{\alpha }=U-\frac{J}{2}>0`$ and $`\tilde{\beta }=U-\frac{3J}{2}>0`$. The parameters $`\tilde{\alpha },\tilde{\beta },t_0`$ used in the numerical calculation are chosen as $`t_0\simeq 0.72`$ eV, $`U=6.3`$ eV, and $`J=1.0`$ eV, which are relevant to the actual manganites.
In the path-integral quantization, we introduce the Stratonovich-Hubbard fields $`\vec{\phi }_S`$ and $`\vec{\phi }_T`$, representing the spin and orbital fluctuations, respectively. With the large values of the electron-electron interactions above, both $`\vec{\phi }_S`$ and $`\vec{\phi }_T`$ are almost fully polarized. The MFT corresponds to the saddle-point configuration of $`\vec{\phi }_S`$ and $`\vec{\phi }_T`$. We consider four kinds of spin alignment in the cubic cell: $`F`$-, $`A`$-, $`C`$- and $`G`$-type. As for the orbital degrees of freedom, we consider two sublattices $`I`$ and $`II`$, on each of which the orbital is specified by the angle $`\theta _{I,II}`$ as
$`|\theta _{I,II}\rangle =\mathrm{cos}\frac{\theta _{I,II}}{2}|d_{x^2-y^2}\rangle +\mathrm{sin}\frac{\theta _{I,II}}{2}|d_{3z^2-r^2}\rangle .`$ (5)
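As a small numerical illustration of this parameterization (a sketch added here for convenience, not part of the original calculation), the weight of the $`d_{3z^2-r^2}`$ component of $`|\theta _{I,II}\rangle `$ is simply $`\mathrm{sin}^2(\theta _{I,II}/2)`$, so that $`\theta =0`$ is the pure $`d_{x^2-y^2}`$ state and $`\theta =180`$ the pure $`d_{3z^2-r^2}`$ state:

```python
import numpy as np

# |theta> = cos(theta/2)|x^2-y^2> + sin(theta/2)|3z^2-r^2>
for theta_deg in [0, 60, 80, 180]:
    theta = np.radians(theta_deg)
    w_3z2r2 = np.sin(theta / 2.0) ** 2   # weight of the d_{3z^2-r^2} component
    print(f"theta = {theta_deg:3d} deg : |<3z^2-r^2|theta>|^2 = {w_3z2r2:.3f}")
```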
We also consider four types for the orbital ordering, i.e., $`F`$-, $`A`$-, $`C`$-, and $`G`$-type. Henceforth, we use a notation such as spin $`A`$, orbital $`G`$ ($`\theta _I,\theta _{II}`$), etc. In the MFT, the most stable ordering is given by
| $`x=0.0`$ | Spin A | Orbital C: ($`60,-60`$) |
| --- | --- | --- |
| $`x=0.1`$ | Spin F | Orbital C: ($`80,-80`$) |
| $`x=0.2`$–$`0.4`$ | Spin A | Orbital F: ($`0,0`$) |
| $`x=0.5`$–$`0.9`$ | Spin C | Orbital F: ($`180,180`$) |
As for $`x=0`$, we further introduced the JT effect by including the observed distortion of the MnO<sub>6</sub> octahedra.
The RPA corresponds to the Gaussian fluctuations around the MFT, and the contribution to the spin wave effective action from the $`e_g`$ electrons, $`S_{\mathrm{SW}}`$, is obtained as the expansion around the saddle point:
$`S_{\mathrm{SW}}=\sum _{q,\mathrm{\Omega }}K_\pi (\vec{q},\mathrm{\Omega })\pi (\vec{q}_S+\vec{q},\mathrm{\Omega })\pi (\vec{q}_S-\vec{q},\mathrm{\Omega })`$ (6)

$`+\sum _{q,\mathrm{\Omega }}K_\times (\vec{q},\mathrm{\Omega })\vec{\pi }(\vec{q}_S+\vec{q},\mathrm{\Omega })\cdot \left\{\vec{n}\times \vec{\pi }(\vec{q}_S-\vec{q},\mathrm{\Omega })\right\}.`$ (7)
where $`\vec{q}_S`$ is the wavevector of the magnetic ordering, $`\vec{n}`$ $`(|\vec{n}|=1)`$ is the direction of the ordered magnetic moment, and $`\vec{\pi }`$ is the fluctuation perpendicular to it. Because the spin wave is the Goldstone boson, the conditions $`K_\pi (0,0)=0`$ and $`K_\times (0,0)=0`$ can be derived. The coefficient of the diagonalized quadratic form is obtained as $`K_{(\pm )}=K_\pi \pm iK_\times `$, whose zero $`\left(K_{(\pm )}(\vec{q},\mathrm{\Omega }=i\omega )=0\right)`$ gives the dispersion relation of the excitation, $`\omega =\omega (\vec{q})`$. However, in this paper we focus on the static spin stiffness rather than on the dynamic spin wave velocity because (a) at $`x=0`$ the spin stiffness is correctly reproduced to be of the order of $`J`$ in the RPA, while the spin wave velocity scales with $`t`$, and (b) for the metallic region, $`x\ne 0`$, the Landau damping is not properly treated in our calculation, where the Brillouin zone is discretized and thus the gapless individual excitations are not correctly evaluated. The static spin stiffness $`C_\alpha `$ corresponds to the static response function for small $`|\vec{q}|`$ as
$$\frac{K_\sigma \left(\vec{q},\mathrm{\Omega }=0\right)}{\tilde{\alpha }}\simeq \sum _{\alpha =x,y,z}C_\alpha q_\alpha ^2,$$
(8)
and roughly reflects the exchange interaction as a function of $`x`$, where $`\sigma =1`$ $`(-1)`$ corresponds to spin up (down), respectively.
In the RPA calculation the SE corresponds to the contribution from the inter-band transitions, while the DE corresponds to that from the intra-band ones. In this way, the present calculation describes both the SE and DE interactions, and hence their crossover, in a unified way. The contribution from $`J_S`$ should also be considered; its value is determined in the following way. We require that the experimentally observed anisotropy ratio of the spin stiffness, $`R=\left(\frac{D_{x,y}}{D_z}\right)^2`$, is reproduced when the calculated contribution from the $`e_g`$ electrons and that from $`J_S`$ are added. The observed value $`R=7.6`$ for LaMnO<sub>3</sub> leads to the estimate $`J_S=0.997`$ meV. As for the AFM at $`x=0.3`$, the observed value $`R=10.4`$ for Nd<sub>0.45</sub>Sr<sub>0.55</sub>MnO<sub>3</sub> gives $`J_S=1.4`$ meV. These estimates are consistent with $`J_S\simeq 0.8`$ meV, estimated from the Néel temperature of CaMnO<sub>3</sub>, rather than with the earlier mean-field estimate $`J_S\simeq 8`$ meV. Using these estimates, $`J_S\simeq 1`$ meV, we can estimate the spin wave stiffness for $`x=0`$ as $`J_{\mathrm{total}}^xS_{\mathrm{total}}^2=1.05`$ meV, including the contribution from the $`t_{2g}`$ orbitals. The corresponding experimental value is $`J_{\mathrm{total}}^xS_{\mathrm{total}}^2=3.91`$ meV in LaMnO<sub>3</sub>, with the reported lattice constants and the magnitude of the spin moment, $`S_{\mathrm{total}}=3/2+(1/2)(1-x)`$. The discrepancy may be attributed to the complex lattice deformations, such as the Mn-O bond length (JT-type distortion) and the Mn-O-Mn bond angle (orthorhombic distortion), observed at $`x=0`$, which can also be an origin of the anisotropy.
We now turn to the doped case $`x\ne 0`$. Fig. 1 shows the doping dependence of the total stiffness calculated for the optimized spin/orbital structure at each $`x`$. First, the spin stiffness due to the double-exchange interaction scales roughly with $`x`$ because the orbital is almost fully polarized, while in the absence of orbital polarization it would scale with the electron density $`(1-x)`$ rather than with the hole density $`x`$ for small $`x`$. The observed enhancement of the stiffness with increasing $`x`$, even in the metallic region, therefore also supports the large orbital polarization due to the strong Coulomb interactions. As $`x`$ increases, the spin structure changes from the spin $`A`$-type insulator at $`x=0`$ to the nearly isotropic FM, then to the AFM with two-dimensional $`d_{x^2-y^2}`$ orbital alignment, and finally to the spin $`C`$ metal with the $`d_{3z^2-r^2}`$ orbital. Accordingly, the in-plane stiffness shows an increase, moderate at the beginning and then rapid in the region of the AFM. This reflects the fact that the DE is most effective with, and prefers, the $`d_{x^2-y^2}`$ orbital, i.e., the DE is basically two-dimensional with the $`e_g`$ orbitals. In the spin-$`C`$ metal for $`x>0.4`$, the one-dimensional orbital ordering along the (001) direction gives rise to a steep increase of the stiffness in this direction.
The observed anisotropy of the spin stiffness is determined by the long-range ordering of the orbitals. Fig. 1 also represents the crossover of the dimensionality which we proposed in a previous report. Yoshizawa et al. observed the reentrance of such a two-dimensional anisotropy of the stiffness for Nd<sub>0.45</sub>Sr<sub>0.55</sub>MnO<sub>3</sub>, consistent with our result. A quasi-one-dimensional anisotropy is predicted for Nd<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub> ($`x>0.6`$).
The in-plane spin stiffness $`J_{\mathrm{total}}^{x\left(y\right)}S_{\mathrm{total}}^2`$ in Fig. 1 can be compared with the experiments. In La<sub>1-x</sub>Sr<sub>x</sub>MnO<sub>3</sub>, Endoh et al. observed a plateau of the velocity $`v_x`$ in the orbital-ordered insulating state up to $`x\simeq 0.12`$, and then the velocity increases in the FM phase. Comparing this with the calculation above, it seems that the moderate increase up to $`x\simeq 0.15`$ in Fig. 1 corresponds to the plateau, while the rapid increase for $`x>0.15`$ corresponds to the increasing velocity observed by Endoh. The orbital-ordered FM state in Fig. 1 then corresponds to the insulating spin $`F`$ phase in the experiments. Both the FM and AFM phases in the experiments, on the other hand, seem to correspond to the AFM with $`d_{x^2-y^2}`$ orbital ordering in the calculation. This fits well with the orbital liquid picture of Ishihara et al.; in a perfectly cubic system the orbital state in the FM is described as a resonance among $`d_{x^2-y^2}`$, $`d_{y^2-z^2}`$, and $`d_{z^2-x^2}`$. In the actual CMR compounds, however, the slight lattice distortion may break the cubic symmetry and stabilize $`d_{x^2-y^2}`$, though it is still accompanied by large fluctuations around it.
Now we turn to the absolute value of the spin stiffness in the FM phase. With the reported lattice constants, the experimental values of the spin stiffness $`J_{\mathrm{total}}^xS_{\mathrm{total}}^2`$ are 11.61 meV for La<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub> and 10.24 meV for Nd<sub>0.7</sub>Sr<sub>0.3</sub>MnO<sub>3</sub>, respectively. These are in good agreement with $`J_{\mathrm{total}}^xS_{\mathrm{total}}^2`$ = 10.53 meV estimated here by RPA with $`x=0.3`$ and $`d_{x^2-y^2}`$-orbital ordering (a simple tight-binding estimate with the $`d_{x^2-y^2}`$ orbital also gives a similar value). This agreement implies a large orbital polarization in the FM phase, with $`d_{x^2-y^2}`$ at least locally.
This orbital liquid picture also explains the spin wave softening and the spin canting observed in this system. Some theoretical works show that orbital fluctuations in such an orbital liquid state lead to a softening of the spin wave dispersion near the zone boundary with an anisotropic feature (the softening almost disappears along the $`(\pi ,\pi ,\pi )`$ direction). As for the spin canting, the canting observed in the metallic region (Nd<sub>0.5</sub>Sr<sub>0.5</sub>MnO<sub>3</sub>) with the FM/AFM transition cannot be explained unless the planar orbital $`d_{x^2-y^2}`$ is realized in the FM phase (the observed slight lattice anisotropy cannot stabilize such a planar orbital without the occurrence of the orbital liquid state).
An important conclusion from the agreement between the experiments and the RPA calculation of the stiffness constant is that the polaron effect on the spin dynamics is small in the metallic state. The polaron should reduce the DE interaction in the doped region via a bandwidth reduction by a factor of $`<X_i^{\dagger }X_j>=\mathrm{exp}[-\sum _q|u_q|^2/2]`$ ($`u_q=(g_q/\omega _q)(e^{iqR_i}-e^{iqR_j})`$), where $`X_i=\mathrm{exp}[\sum _qe^{iqR_i}(g_q/\omega _q)(b_q-b_q^{\dagger })]`$ is a factor encountered in the canonical transformation eliminating the coupling between the electrons and the polaronic bosons, $`\sum _{i,\sigma }\sum _qg_q(b_q+b_q^{\dagger })d_{i\sigma }^{\dagger }d_{i\sigma }`$, with the coupling constant $`g_q`$ and phonon frequency $`\omega _q`$. On the other hand, for $`x=0`$, the SE in the presence of the coupling to the polaron is given by
$$J=4|t_{ij}|^2\int _0^\beta d\tau \,G_0^2(\tau )\langle X_i^{\dagger }(\tau )X_j(\tau )X_j^{\dagger }(0)X_i(0)\rangle ,$$
(9)
where $`G_0(\tau )=e^{-U\tau /2}`$ is the Green's function for the localized electrons. Because we are interested in the large-$`U`$ case, the integral is determined by the small-$`\tau `$ region, where $`\langle X_i^{\dagger }(\tau )X_j(\tau )X_j^{\dagger }(0)X_i(0)\rangle \simeq e^{-\tilde{\mathrm{\Delta }}\tau }`$ ($`\tilde{\mathrm{\Delta }}=\sum _q\omega _q|u_q|^2`$). The polaronic effect is then to replace $`U`$ by $`U+\tilde{\mathrm{\Delta }}`$ in the expression for $`J`$, which is a minor correction when $`U\gg \tilde{\mathrm{\Delta }}`$, in sharp contrast to the DE discussed above. The polaronic effect should therefore reduce the RPA estimate of the stiffness enhancement with increasing $`x`$. The agreement between the observed and estimated stiffness for the DE therefore implies that the spin dynamics is not strongly affected by the polaron. This was also pointed out by Quijada et al.
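The statement that the polaron merely shifts $`U`$ to $`U+\tilde{\mathrm{\Delta }}`$ in $`J`$ can be checked directly by carrying out the $`\tau `$ integral with the approximations above. The following minimal sketch uses illustrative parameter values ($`\tilde{\mathrm{\Delta }}`$ in particular is a placeholder, not a value taken from the text) and takes the zero-temperature limit of the integral.

```python
import numpy as np

t, U, Delta = 0.72, 6.3, 0.5   # eV; Delta is an illustrative polaron energy

# J = 4|t|^2 * integral_0^infty dtau G_0(tau)^2 <X...X>,
# with G_0(tau) = exp(-U*tau/2) and the correlator approximated by exp(-Delta*tau).
tau, dtau = np.linspace(0.0, 50.0 / (U + Delta), 500000, retstep=True)
integrand = 4.0 * t**2 * np.exp(-(U + Delta) * tau)
J_numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dtau   # trapezoidal rule
J_analytic = 4.0 * t**2 / (U + Delta)
print(J_numeric, J_analytic)   # the polaron simply replaces U by U + Delta in J = 4 t^2 / U
```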
In summary, we have studied the role of the orbitals in the spin dynamics of $`R_{1-x}A_x`$MnO<sub>3</sub>. Comparing the experiments with the RPA calculation based on the mean-field theory, we conclude the following: (a) the $`x`$-dependence of the stiffness enhancement suggests a large orbital polarization; (b) the double-exchange interaction prefers the $`d_{x^2-y^2}`$ orbital and is basically a two-dimensional interaction, which leads to the large anisotropy of the spin dynamics; (c) the agreement between the experiments and the RPA results strongly suggests that the spin dynamics is not strongly affected by the JT polaron.
The authors would like to thank K. Hirota, Y. Endoh, I. Solovyev, K. Terakura, R. Kajimoto, H. Yoshizawa, T. Kimura, D. Khomskii, A. Millis, and Y. Tokura for their valuable discussions. This work was supported by Priority Areas Grants from the Ministry of Education, Science and Culture of Japan.
Scenario of baryogenesis
D.L. Khokhlov
Sumy State University, R.-Korsakov St. 2,
Sumy 244007, Ukraine
E-mail: others@monolog.sumy.ua
## Abstract
A scenario of baryogenesis is considered in which the primordial plasma, starting from the Planck scale, consists of primordial particles that are the precursors of electrons and of clusters of particles that are the precursors of protons. The equilibrium between the precursors of protons and the precursors of electrons is set by the proton-electron mass difference. At a temperature equal to the electron mass, the primordial particles transit into protons, electrons, and photons.
The standard scenario of baryogenesis proceeds as follows. In the thermodynamically equilibrated primordial plasma, all the particles with $`m\ll T`$ exist in equal abundances per spin degree of freedom. This means that, at $`T>m_p`$, there exist approximately equal abundances of protons $`p`$ and antiprotons $`\overline{p}`$. An excess of baryonic charge arises in the processes of X, Y-boson decays under the following conditions: non-conservation of the baryonic charge and CP violation. At $`T<m_p`$, $`p\overline{p}`$ pairs annihilate, and the excess of $`p`$ survives.
Let us consider another scenario of baryogenesis, in which the primordial plasma, starting from the Planck scale $`T=m_{Pl}`$, consists of primordial fermions. At $`T=m_e`$, the primordial fermions transit into protons, electrons, and photons.
Let us assume that the primordial plasma consists of primordial fermions which have only a spin quantum number, that is, the primordial plasma consists of primordial fermions with spin up ($`\uparrow `$) and spin down ($`\downarrow `$). A suitable sequence of transformations then transits a particle into itself. Let us identify a single particle with the precursor of the electron and a cluster of particles in a definite spin configuration with the precursor of the proton. The probability of finding such a cluster is given by
$$w=\left(\frac{1}{2}\right)^5.$$
(1)
Equilibrium between the precursors of protons and the precursors of electrons is defined by the proton-electron mass difference. The proton-electron ratio is given by
$$\frac{N_p}{N_e}=\left(\frac{1}{2}\right)^5\left(\frac{m_e}{m_p}\right)^2$$
(2)
Let us assume that, at $`T=m_e`$, the precursors of electrons annihilate into photons, and the precursors of protons transit into protons, while an equal number of precursors of electrons transit into electrons. The extraction of protons and electrons leads to the appearance of the electric charge. The baryon-photon ratio at $`T=m_e`$ is given by
$$\frac{N_b}{N_\gamma }=\frac{1}{2}\times \frac{3}{4}\times \frac{N_p}{N_e}$$
(3)
where the factor $`1/2`$ accounts for the surviving electrons, and the factor $`3/4`$ accounts for the relation between fermions and bosons. The calculation yields the value $`N_b/N_\gamma =3.5\times 10^{-9}`$.
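The chain of factors in equations (1)–(3) can be checked with a few lines of arithmetic; the sketch below uses the standard electron and proton masses (values assumed here, not quoted in the text).

```python
m_e, m_p = 0.511, 938.272                          # MeV
N_p_over_N_e = (1.0 / 2.0) ** 5 * (m_e / m_p) ** 2   # equation (2)
N_b_over_N_gamma = 0.5 * 0.75 * N_p_over_N_e         # equation (3)
print(f"N_p/N_e     = {N_p_over_N_e:.2e}")           # ~9.3e-9
print(f"N_b/N_gamma = {N_b_over_N_gamma:.2e}")       # ~3.5e-9
```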
The observed value of $`N_b/N_\gamma `$ lies in the range $`(2-15)\times 10^{-10}`$. Let us assume that most of the baryonic matter decays into non-baryonic matter during the evolution of the universe. We estimate the baryon number density at $`T=m_e`$ from the modern total mass density of the universe. According to the model of the universe with the linear evolution law, the mass density of the universe is given by
$$\rho =\frac{3}{4\pi Gt^2}.$$
(4)
The modern age of the universe is given by
$$t_0=t_{Pl}\alpha \left(\frac{T_{Pl}}{T_0}\right)^2.$$
(5)
From this, the modern age of the universe is equal to $`t_0=1.06\times 10^{18}\mathrm{s}`$, and the modern mass density of the universe is equal to $`\rho _0=3.19\times 10^{-30}\mathrm{g}\mathrm{cm}^{-3}`$. Then the baryon number density at $`T=m_e`$ is $`n_b=\rho _0/m_p=1.9\times 10^{-6}\mathrm{cm}^{-3}`$. Adopting the observed photon number density $`n_\gamma =550\mathrm{cm}^{-3}`$, the baryon-photon ratio at $`T=m_e`$ is equal to $`N_b/N_\gamma =3.5\times 10^{-9}`$.
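The quoted numbers follow from equations (4) and (5); the following minimal sketch of the arithmetic assumes standard values of the physical constants and takes $`\alpha `$ in equation (5) to be the fine-structure constant, a choice that reproduces the quoted $`t_0`$.

```python
import math

G = 6.674e-8            # cm^3 g^-1 s^-2
m_p = 1.673e-24         # g
t_Pl = 5.39e-44         # s
alpha = 1.0 / 137.036   # assumed: fine-structure constant
T_Pl = 1.417e32         # K
T_0 = 2.73              # K

t_0 = t_Pl * alpha * (T_Pl / T_0) ** 2            # equation (5), ~1.06e18 s
rho_0 = 3.0 / (4.0 * math.pi * G * t_0 ** 2)      # equation (4), ~3.2e-30 g cm^-3
n_b = rho_0 / m_p                                 # ~1.9e-6 cm^-3
print(t_0, rho_0, n_b, n_b / 550.0)               # last value ~3.5e-9
```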
In the standard theory of primordial nucleosynthesis, the neutron-proton ratio freezes out at $`T\simeq 1\mathrm{MeV}`$. If baryons arise at $`T=m_e`$, the neutron-proton ratio freezes out at $`T\simeq m_e\simeq 0.5\mathrm{MeV}`$, which leads to a decrease of the frozen-out neutron-proton ratio.
# The Photo-Evaporation of Dwarf Galaxies During Reionization
## 1 Introduction
The formation of galaxies is one of the most important, yet unsolved, problems in cosmology. The properties of galactic dark matter halos are shaped by gravity alone, and have been rigorously parameterized in hierarchical Cold Dark Matter (CDM) cosmologies (e.g., Navarro, Frenk, & White 1997). However, the complex processes involving gas dynamics, chemistry and ionization, and cooling and heating, which are responsible for the formation of stars from the baryons inside these halos, have still not been fully explored theoretically.
Recent theoretical investigations of early structure formation in CDM models have led to a plausible picture of how the formation of the first cosmic structures leads to reionization of the intergalactic medium (IGM). The bottom-up hierarchy of CDM cosmologies implies that the first gaseous objects to form in the Universe have a low mass, just above the cosmological Jeans mass of $`\sim 10^4M_{\odot }`$ (see, e.g., Haiman, Thoul, & Loeb 1996, and references therein). The virial temperature of these gas clouds is only a few hundred K, and so their metal-poor primordial gas can cool only due to the formation of molecular hydrogen, $`\mathrm{H}_2`$. However, $`\mathrm{H}_2`$ molecules are fragile, and were easily photo-dissociated throughout the Universe by trace amounts of starlight (Stecher & Williams 1967; Haiman, Rees, & Loeb 1996) that were well below the level required for complete reionization of the IGM. Following the prompt destruction of their molecular hydrogen, the early low-mass objects maintained virialized gaseous halos that were unable to cool or fragment into stars. Most of the stars responsible for the reionization of the Universe formed in more massive galaxies, with virial temperatures $`T_{\mathrm{vir}}\gtrsim 10^4`$ K, where cooling due to atomic transitions was possible. The corresponding mass of these objects at $`z\sim 10`$ was $`\sim 10^8\mathrm{M}_{\odot }`$, typical of dwarf galaxies.
The lack of a Gunn-Peterson trough and the detection of Ly$`\alpha `$ emission lines from sources out to redshifts $`z=5.6`$ (Weymann et al. 1998; Dey et al. 1998; Spinrad et al. 1998; Hu, Cowie, & McMahon 1998) demonstrate that reionization due to the first generation of sources must have occurred at yet higher redshifts; otherwise, the damping wing of Ly$`\alpha `$ absorption by the neutral IGM would have eliminated the Ly$`\alpha `$ line in the observed spectrum of these sources (Miralda-Escudé 1998). Popular CDM models predict that most of the intergalactic hydrogen was ionized at a redshift $`8\lesssim z\lesssim 15`$ (Gnedin & Ostriker 1997; Haiman & Loeb 1998a,c). The end of the reionization phase transition resulted in the emergence of an intense UV background that filled the Universe and heated the IGM to temperatures of $`1`$–$`2\times 10^4`$ K (Haiman & Loeb 1998b; Miralda-Escudé, Haehnelt, & Rees 1998). After ionizing the rarefied IGM in the voids and filaments on large scales, the cosmic UV background penetrated the denser regions associated with the virialized gaseous halos of the first generation of objects. Since a major fraction of the collapsed gas had been incorporated by that time into halos with a virial temperature $`\lesssim 10^4`$ K, photoionization heating by the cosmic UV background could have evaporated much of this gas back into the IGM. No such feedback was possible at earlier times, since the formation of internal UV sources was suppressed by the lack of efficient cooling inside most of these objects.
The gas reservoir of dwarf galaxies with virial temperatures $`\lesssim 10^4`$ K (or equivalently a 1D velocity dispersion $`\lesssim 10\mathrm{km}\mathrm{s}^{-1}`$) could not be immediately replenished. The suppression of dwarf galaxy formation at $`z>2`$ has been investigated both analytically (Rees 1986; Efstathiou 1992) and with numerical simulations (Thoul & Weinberg 1996; Quinn, Katz, & Efstathiou 1996; Weinberg, Hernquist, & Katz 1997; Navarro & Steinmetz 1997). The dwarf galaxies which were prevented from forming after reionization could have eventually collected gas at $`z=1`$–2, when the UV background flux declined sufficiently (Babul & Rees 1992; Kepner, Babul, & Spergel 1997). The reverse process during the much earlier reionization epoch has not been addressed in the literature. (However, note that the photo-evaporation of gaseous halos was considered by Bond, Szalay, & Silk (1988) as a model for Ly$`\alpha `$ absorbers at lower redshifts $`z\lesssim 4`$.)
In this paper we focus on the reverse process by which gas that had already settled into virialized halos by the time of reionization was evaporated back into the IGM due to the cosmic UV background which emerged first at that epoch. The basic ingredients of our model are presented in §2. In order to ascertain the importance of a self-shielded gas core, we include a realistic, centrally concentrated dark halo profile and also incorporate radiative transfer. Generally we find that self-shielding has a small effect on the total amount of evaporated gas, since only a minor fraction of the gas halo is contained within the central core. Our numerical results are described in §3. In particular, we show the conditions in the highest mass halo which can be disrupted at reionization. We also use the Press-Schechter (1974) prescription for halo abundance to calculate the fraction of gas in the Universe which undergoes the process of photo-evaporation. Our versatile semi-analytic approach has the advantage of being able to yield the dependence of the results on a wide range of reionization histories and cosmological parameters. Clearly, the final state of the gas halo depends on its dynamical evolution during its photo-evaporation. We adopt a rough criterion for the evaporation of gas based on its initial interaction with the ionizing background. The precision of our results could be tested in specific cases by future numerical simulations. In §4 we discuss the potential implications of our results for the state of the IGM and for the early history of low-mass galaxies in the local Universe. Finally, we summarize our main conclusions in §5.
## 2 A Model for Halos at Reionization
We consider gas situated in a virialized dark matter halo. We adopt the prescription for obtaining the density profiles of dark matter halos at various redshifts from the Appendix of Navarro, Frenk, & White (1997, hereafter NFW), modified to include the variation of the collapse overdensity $`\mathrm{\Delta }_c`$. Thus, a halo of mass $`M`$ at redshift $`z`$ is characterized by a virial radius,
$$r_{\mathrm{vir}}=0.756\left(\frac{M}{10^8h^{-1}M_{\odot }}\right)^{1/3}\left[\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}\frac{\mathrm{\Delta }_c}{200}\right]^{-1/3}\left(\frac{1+z}{10}\right)^{-1}h^{-1}\mathrm{kpc},$$
(1)
or a corresponding circular velocity,
$$V_c=\left(\frac{GM}{r_{\mathrm{vir}}}\right)^{1/2}=31.6\left(\frac{r_{\mathrm{vir}}}{h^{-1}\mathrm{kpc}}\right)\left[\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}\frac{\mathrm{\Delta }_c}{200}\right]^{1/2}\left(\frac{1+z}{10}\right)^{3/2}\mathrm{km}\mathrm{s}^{-1}.$$
(2)
The density profile of the halo is given by
$$\rho (r)=\frac{3H_0^2}{8\pi G}(1+z)^3\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}\frac{\delta _c}{cx(1+cx)^2},$$
(3)
where $`x=r/r_{\mathrm{vir}}`$ and $`c`$ depends on $`\delta _c`$ for a given mass $`M`$. We include the dependence of halo profiles on $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, the current contributions to $`\mathrm{\Omega }`$ from non-relativistic matter and a cosmological constant, respectively (see Appendix A for complete details).
Although the NFW profile provides a good approximation to halo profiles, there are indications that halos may actually develop a core (e.g., Burkert 1995; Kravtsov et al. 1998; see, however, Moore et al. 1998). In order to examine the sensitivity of the results to model assumptions, we consider several different gas and dark matter profiles, keeping the total gas fraction in the halo equal to the cosmological baryon fraction. The simplest case we consider is an equal NFW profile for the gas and the dark matter. In order to include a core, instead of the NFW profile of equation (3) we also consider the density profile of the form fit by Burkert (1995) to dwarf galaxies,
$$\rho (r)=\frac{3H_0^2}{8\pi G}(1+z)^3\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}\frac{\delta _c}{(1+bx)\left[1+(bx)^2\right]}$$
(4)
where $`b`$ is the inverse core radius, and we set $`\delta _c`$ by requiring the mean overdensity to equal the appropriate value, $`\mathrm{\Delta }_c`$, in each cosmology (see Appendix A). We also consider two cases where the dark matter follows an NFW profile but the gas is in hydrostatic equilibrium with its density profile determined by its temperature distribution. In one case, we assume the gas is isothermal at the halo virial temperature, given by
$$T_{\mathrm{vir}}=\frac{\mu V_c^2}{2k_B}=36100\frac{\mu }{0.6m_p}\left(\frac{r_{\mathrm{vir}}}{h^{-1}\mathrm{kpc}}\right)^2\frac{\mathrm{\Omega }_0}{\mathrm{\Omega }(z)}\frac{\mathrm{\Delta }_c}{200}\left(\frac{1+z}{10}\right)^3\mathrm{K},$$
(5)
where $`\mu `$ is the mean molecular weight as determined by ionization equilibrium, and $`m_p`$ is the proton mass. The spherical collapse simulations of Haiman, Thoul, & Loeb (1996) find a post-shock gas temperature of roughly twice the value given by equation (5), so we also compare with the result of setting $`T=2T_{\mathrm{vir}}`$. In the second case, we let the gas cool for a time equal to the Hubble time at the redshift of interest, $`z`$. Gas above $`10^4`$K cools rapidly due to atomic cooling until it reaches a temperature near $`10^4`$K, where the cooling time rapidly diverges. In this case, hydrostatic equilibrium yields a highly compact gas cloud when the halo virial temperature is greater than $`10^4`$K. In reality, of course, a fraction of the gas may fragment and form stars in these halos. However, this caveat hardly affects our results since only a small fraction of the gas which evaporates is contained in halos with $`T_{\mathrm{vir}}>10^4`$K. Throughout most of our subsequent discussion we consider the simple case of identical NFW profiles for both the dark matter and the gas, unless indicated otherwise.
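To make the scalings of equations (1), (2), and (5) concrete, the following minimal sketch evaluates them for an illustrative halo. The specific mass and redshift, as well as the simplifications $`\mathrm{\Omega }(z)\approx 1`$ and $`\mathrm{\Delta }_c=200`$, are assumptions made only for this example.

```python
def virial_quantities(M8, z, Omega0=1.0, Omega_z=1.0, Delta_c=200.0, mu=0.6):
    """Evaluate eqs. (1), (2), (5). M8 is the halo mass in units of 1e8 h^-1 Msun;
    mu is the mean molecular weight in units of m_p."""
    f = (Omega0 / Omega_z) * (Delta_c / 200.0)
    r_vir = 0.756 * M8 ** (1.0 / 3.0) * f ** (-1.0 / 3.0) * ((1 + z) / 10.0) ** (-1)  # h^-1 kpc
    V_c = 31.6 * r_vir * f ** 0.5 * ((1 + z) / 10.0) ** 1.5                           # km/s
    T_vir = 36100.0 * (mu / 0.6) * r_vir ** 2 * f * ((1 + z) / 10.0) ** 3             # K
    return r_vir, V_c, T_vir

# Example: a 1e8 h^-1 Msun halo at z = 8
print(virial_quantities(M8=1.0, z=8))   # roughly (0.84 h^-1 kpc, 23 km/s, 1.9e4 K)
```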
We assume a helium mass fraction of $`Y=0.24`$, and include it in the calculation of the ionization equilibrium state of the gas as well as its cooling and heating (see, e.g., Katz, Weinberg, & Hernquist 1996). We adopt the various reaction and cooling rates from the literature, including the rates for collisional excitation and dielectronic recombination from Black (1981); the recombination rates from Verner & Ferland (1996), and the recombination cooling rates from Ferland et al. (1992) with a fitting formula by Miralda-Escudé (1998, private communication). Collisional ionization rates are adopted from Voronov (1997), with the corresponding cooling rate for each atomic species given by its ionization rate multiplied by its ionization potential. We also include cooling by Bremsstrahlung emission with a Gaunt factor from Spitzer & Hart (1971), and by Compton scattering off the microwave background (e.g., Shapiro & Kang 1987).
In assessing the effect of reionization, we assume for simplicity a sudden turn-on of an external radiation field with a specific intensity per unit frequency, $`\nu `$,
$$I_{\nu ,0}=10^{-21}I_{21}(z)(\nu /\nu _L)^{-\alpha }\text{ erg cm}^{-2}\text{ s}^{-1}\text{ sr}^{-1}\text{ Hz}^{-1},$$
(6)
where $`\nu _L`$ is the Lyman limit frequency. Our treatment of the response of the cloud to this radiation, as outlined below, is not expected to yield different results with a more gradual increase of the intensity with cosmic time. The external intensity $`I_{21}(z)`$ is responsible for the reionization of the IGM, and so we normalize it to have a fixed number of ionizing photons per baryon in the Universe. We define the ionizing photon density as
$$n_\gamma =\int _{\nu _L}^{\mathrm{\infty }}\frac{4\pi I_{\nu ,0}}{h\nu c}\frac{\sigma _{HI}(\nu )}{\sigma _{HI}(\nu _L)}d\nu ,$$
(7)
where the photoionization efficiency is weighted by the photoionization cross section of $`HI`$, $`\sigma _{HI}(\nu )`$, above the Lyman limit. The mean baryon number density is
$$n_b=2.25\times 10^{-4}\left(\frac{1+z}{10}\right)^3\left(\frac{\mathrm{\Omega }_bh^2}{0.02}\right)\text{ cm}^{-3}.$$
(8)
Throughout the paper we refer to proper densities rather than comoving densities. As our standard case we assume a post-reionization ratio of $`n_\gamma /n_b=1`$, but we also consider the effect of setting $`n_\gamma /n_b=0.1`$. For example, $`\alpha =1.8`$ and $`n_\gamma /n_b=1`$ yield $`I_{21}=1.0`$ at $`z=3`$ and $`I_{21}=3.5`$ at $`z=5`$, close to the values required to satisfy the Gunn-Peterson constraint at these redshifts (see, e.g., Efstathiou 1992). Note that $`n_\gamma /n_b\gtrsim 1`$ is required for the initial ionization of the gas in the Universe (although this ratio may decline after reionization).
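For the power-law spectrum of equation (6), and approximating the H I cross section as $`\sigma _{HI}(\nu )\propto \nu ^{-3}`$ (an approximation introduced here only for illustration), the integral in equation (7) can be done in closed form, $`n_\gamma =4\pi \times 10^{-21}I_{21}/[hc(\alpha +3)]`$, so that the condition $`n_\gamma /n_b=1`$ fixes $`I_{21}(z)`$. A minimal sketch of this normalization, which reproduces values close to those quoted above, is:

```python
import math

h_planck = 6.626e-27   # erg s
c = 2.998e10           # cm/s

def I21_for_photon_to_baryon(z, alpha=1.8, n_gamma_over_nb=1.0, Omega_b_h2=0.02):
    n_b = 2.25e-4 * ((1 + z) / 10.0) ** 3 * (Omega_b_h2 / 0.02)       # eq. (8), cm^-3
    # eq. (7) with I_nu = 1e-21 I_21 (nu/nu_L)^-alpha and sigma_HI ~ (nu/nu_L)^-3:
    prefactor = 4 * math.pi * 1e-21 / (h_planck * c * (alpha + 3.0))  # n_gamma per unit I_21
    return n_gamma_over_nb * n_b / prefactor

print(I21_for_photon_to_baryon(3))   # ~1.1, compare with I_21 = 1.0 quoted at z = 3
print(I21_for_photon_to_baryon(5))   # ~3.7, compare with I_21 = 3.5 quoted at z = 5
```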
We assume that the above uniform UV background illuminates the outer surface of the gas cloud, located at the virial radius $`r_{\mathrm{vir}}`$, and penetrates from there into the cloud. The radiation photoionizes and heats the gas at each radius to its equilibrium temperature, determined by equating the heating and cooling rates. The latter assumption is justified by the fact that both the recombination time and the heating time are initially shorter than the dynamical time throughout the halo. At the outskirts of the halo the dynamics may start to change before the gas can be heated up to its equilibrium temperature, but this simply means that the gas starts expanding out of the halo during the process of photoheating. This outflow should not alter the overall fraction of evaporated gas.
The process of reionization is expected to be highly non-uniform due to the clustering of the ionizing sources and the clumpiness of the IGM. As time progresses, the HII regions around the ionizing sources overlap, and each halo is exposed to ionizing radiation from an ever increasing number of sources. While the external ionizing radiation may at first be dominated by a small number of sources, it quickly becomes more isotropic as its intensity builds up with time (e.g., Haiman & Loeb 1998a,b; Miralda-Escudé, Haehnelt, & Rees 1998). The evolution of this process depends on the characteristic clustering scale of ionizing sources and their correlation with the inhomogeneities of the IGM. In particular, the process takes more time if the sources are typically embedded in dense regions of the neutral IGM which need to be ionized first before their radiation shines on the rest of the IGM. However, in our analysis we do not need to consider these complications since the total fraction of evaporated gas in bound halos depends primarily on the maximum intensity achieved at the end of the reionization epoch.
In computing the effect of the background radiation, we include self-shielding of the gas which is important at the high densities obtained in the core of high redshift halos. For this purpose, we include radiative transfer through the halo gas and photoionization by the resulting anisotropic radiation field in the calculation of the ionization equilibrium. We also include the fact that the ionizing spectrum becomes harder at inner radii, since the outer gas layers preferentially block photons with energies just above the Lyman limit. We neglect self-shielding due to helium atoms. Appendix B summarizes our simplified treatment of the radiative transfer equations.
Once the gas is heated throughout the halo, some fraction of it acquires a sufficiently high temperature that it becomes unbound. This gas expands due to the resulting pressure gradient and eventually evaporates back to the IGM. The pressure gradient force (per unit volume) $`k_B\mathrm{\nabla }(T\rho /\mu )`$ competes with the gravitational force of $`\rho GM/r^2`$. Due to the density gradient, the ratio between the pressure force and the gravitational force is roughly the ratio between the thermal energy $`k_BT`$ and the gravitational binding energy $`\mu GM/r`$ (which is $`\sim k_BT_{\mathrm{vir}}`$ at $`r_{\mathrm{vir}}`$) per particle. Thus, if the kinetic energy exceeds the potential energy (or roughly if $`T>T_{\mathrm{vir}}`$), the repulsive pressure gradient force exceeds the attractive gravitational force and expels the gas on a dynamical time (or faster for halos with $`T\gg T_{\mathrm{vir}}`$).
We compare the thermal and gravitational energy (both of which are functions of radius) as a benchmark for deciding which gas shells are expelled from each halo. Note that infall of fresh IGM gas into the halo is also suppressed due to its excessive gas pressure, produced by the same photo-ionization heating process.
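This shell-by-shell comparison can be illustrated with a simplified sketch. The code below is only a schematic version of the calculation: it ignores radiative transfer, assumes the heated gas is isothermal, and uses illustrative values for the halo mass, virial radius, concentration, and temperature. It flags as unbound the NFW-distributed gas whose thermal energy $`(3/2)k_BT`$ exceeds $`\mu |\varphi (r)|`$.

```python
import numpy as np

G = 4.301e-6            # gravitational constant in kpc (km/s)^2 / Msun
k_B = 1.381e-16         # erg/K
m_p = 1.673e-24         # g
mu = 0.6 * m_p          # mean molecular weight in grams

def unbound_gas_fraction(M_vir, r_vir, c, T):
    """Fraction of gas (assumed to trace an NFW profile of concentration c) inside r_vir
    whose thermal energy (3/2) k_B T exceeds mu*|phi(r)|. M_vir in Msun, r_vir in kpc, T in K."""
    r_s = r_vir / c
    m = lambda x: np.log(1.0 + x) - x / (1.0 + x)           # NFW cumulative mass shape
    M_s = M_vir / m(c)                                      # 4*pi*rho_s*r_s^3 in Msun
    phi_abs = lambda x: G * M_s / r_s * np.log(1.0 + x) / x # |phi| in (km/s)^2, x = r/r_s
    thermal = 1.5 * k_B * T / mu / 1.0e10                   # (3/2) k_B T / mu, in (km/s)^2
    x = np.linspace(1e-4, c, 20000)
    unbound = thermal > phi_abs(x)
    x_crit = x[unbound][0] if unbound.any() else c          # innermost unbound radius
    return (m(c) - m(x_crit)) / m(c)

# Illustrative halo, roughly a 3e7 Msun object at z ~ 8, heated to 2e4 K:
print(unbound_gas_fraction(M_vir=3e7, r_vir=0.75, c=5.0, T=2.0e4))
```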
This situation stands in contrast to feedback due to supernovae, which depends on the efficiency of converting the mechanical energy of the supernovae into thermal energy of the halo gas. The ability of supernovae to disrupt their host dwarf galaxies has been explored in a number of theoretical papers (e.g., Larson 1974; Dekel & Silk 1986; Vader 1986, 1987). However, numerical simulations (Mac-Low & Ferrara 1998) find that supernovae produce a hole in the gas distribution through which they expel the shock-heated gas, leaving most of the cooler gas still bound. In the case of reionization, on the other hand, energy is imparted to the gas directly by the ionizing photons. A halo for which a large fraction of the gas is unbound by reionization is thus prevented from further collapse and star formation.
When the gas in each halo is initially ionized, an ionization shock front may be generated (cf. the discussion of Ly$`\alpha `$ absorbers by Donahue & Shull 1987). The dynamics of such a shock front have been investigated in the context of the interstellar medium by Bertoldi & McKee (1990) and Bertoldi (1989). Their results imply that the dynamics of gas in a halo are not significantly affected by the shock front unless the thermal energy of the ionized gas is greater than its gravitational potential energy. Furthermore, since gas in a halo is heated to the virial temperature even before reionization, the shock is weaker when the gas is ionized than a typical shock in the interstellar medium. Also, as noted above, the ionizing radiation reaching a given halo builds up in intensity over a considerable period of time. Thus, we do not expect the ionization shock associated with the first encounter of ionizing radiation to have a large effect on the eventual fate of gas in the halo.
## 3 Results
We assume the most popular cosmology to date (Garnavich et al. 1998) with $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. We illustrate the effects of cosmological parameters by displaying the results also for $`\mathrm{\Omega }_0=1`$, and for $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. The models all assume $`\mathrm{\Omega }_bh^2=0.02`$ and a Hubble constant $`h=0.5`$ if $`\mathrm{\Omega }_0=1`$ and $`h=0.7`$ otherwise (where $`H_0=100h\text{ km s}^{-1}\text{ Mpc}^{-1}`$).
Figure 1 shows the temperature of the gas versus its baryonic overdensity $`\mathrm{\Delta }_b`$ relative to the cosmic average (cf. Efstathiou 1992). The curves are for $`z=8`$ and assume $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. We include intergalactic radiation with a flux given by equation (6) for $`\alpha =1.8`$ and $`n_\gamma /n_b=1`$. The dotted curve shows $`t_H=t_{cool}`$ with no radiation field, where $`t_H`$ is the age of the Universe, approximately equal to $`6.5\times 10^9h^{-1}(1+z)^{-3/2}\mathrm{\Omega }_0^{-1/2}`$ years at high redshift. This curve indicates the temperature to which gas has time to cool through atomic transitions before reionization. This temperature is always near $`T=10^4`$K since below this temperature the gas becomes mostly neutral and the cooling time is very long. The solid curve shows the equilibrium temperature for which the heating time $`t_{heat}`$ due to a UV radiation field equals the cooling time $`t_{cool}`$. The decrease in the temperature at $`\mathrm{\Delta }_b<10`$ is due to the increased importance of Compton cooling, which is proportional to the gas density rather than its square. At a given density, gas is heated at reionization to the temperature indicated by the solid curve, unless the net cooling or heating time is too long. The dashed curves show the temperature where the net cooling or heating time equals $`t_H`$. By definition, points on the solid curve have an infinite net cooling or heating time, but there is also a substantial regime at low $`\mathrm{\Delta }_b`$ where the net cooling or heating time is greater than $`t_H`$. However, this regime has only a minor effect on halos, since the mean overdensity inside the virial radius of a halo is of order 200. On the other hand, if gas leaves the halo and expands it quickly enters the regime where it cannot reach thermal equilibrium.
Figure 2 presents an example for the structure of a halo with an initial total mass of $`M=3\times 10^7M_{\odot }`$ at $`z=8`$. We assume the same cosmological parameters as in Figure 1. The bottom plot shows the baryon overdensity $`\mathrm{\Delta }_b`$ versus $`r/r_{\mathrm{vir}}`$, and reflects our assumption of identical NFW profiles for both the dark matter and the baryons. The middle plot shows the neutral hydrogen fraction versus $`r/r_{\mathrm{vir}}`$, and the top plot shows the ratio of thermal energy per particle ($`\mathrm{TE}=\frac{3}{2}k_BT`$) to potential energy per particle ($`\mathrm{PE}=\mu |\varphi (r)|`$, where $`\varphi (r)`$ is the gravitational potential) versus $`r/r_{\mathrm{vir}}`$. The dashed curves assume an optically thin halo, while the solid curves include radiative transfer and self-shielding. The self-shielded neutral core is apparent from the solid curves, but since the point where $`\mathrm{TE}/\mathrm{PE}=1`$ occurs outside this core, the overall unbound fraction does not depend strongly on the radiative transfer in this case. Its value is $`67\%`$ assuming an optically thin halo, and $`64\%`$ when radiative transfer is included and only a fraction of the external photons make their way inside. Even when the opacity at the Lyman limit is large, some ionizing radiation still reaches the central parts of the halo because (i) the opacity drops quickly above the Lyman limit, and (ii) the heated gas radiates ionizing photons inwards.
Figure 3 shows the unbound gas fraction after reionization as a function of the total halo mass. We assume $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, and $`n_\gamma /n_b=1`$. The three pairs of curves shown consist of a solid line (which includes radiative transfer) and a dashed line (which assumes an optically thin halo). From right to left, the first pair is for $`\alpha =1.8`$ and $`z=8`$, the second is for $`\alpha =5`$ and $`z=8`$, and the third is for $`\alpha =1.8`$ and $`z=20`$. In each case the self-shielded core lowers the unbound fraction when we include radiative transfer (solid vs dashed lines), particularly when the unbound fraction is sufficiently large that it includes part of the core itself. High energy photons above the Lyman limit penetrate deep into the halo and heat the gas efficiently. Therefore, a steepening of the spectral slope from $`\alpha =1.8`$ to $`\alpha =5`$ decreases the temperature throughout the halo and lowers the unbound gas fraction. This is only partially compensated for by our UV flux normalization, which increases $`I_{21}`$ with increasing $`\alpha `$ so as to get the same density of ionizing photons in equation (7). Increasing the reionization redshift from $`z=8`$ to $`z=20`$ increases the binding energy of the gas, because the high redshift halos are denser. Although the corresponding increase of $`I_{21}`$ with redshift (at a fixed $`n_\gamma /n_b`$) counteracts this change, the fraction of expelled gas is still reduced due to the deeper potential wells of higher redshift halos.
From plots similar to those shown in Figure 3, we find the total halo mass at which the unbound gas fraction is $`50\%`$. We henceforth refer to this mass as the $`50\%`$ mass. Figure 4 plots this mass as a function of the reionization redshift for different spectra and cosmological models. The solid line assumes $`\alpha =1.8`$ and the dotted line $`\alpha =5`$, both for $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The other lines assume $`\alpha =1.8`$ but different cosmologies. The short-dashed line assumes $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ and the long-dashed line assumes $`\mathrm{\Omega }_0=1`$. All assume $`n_\gamma /n_b=1`$. Gas becomes unbound when its thermal energy equals its potential binding energy. The thermal energy depends on temperature, but the equilibrium temperature does not change much with redshift since we increase the UV flux normalization by the same $`(1+z)^3`$ factor as the mean baryonic density. With this prescription for the UV flux, the $`50\%`$ mass occurs at a value of the circular velocity which is roughly constant with redshift. Thus for each curve, the change in mass with redshift is mostly due to the change in the characteristic halo density, which affects the relation between circular velocity and mass.
The cosmological parameters have only a modest effect on the $`50\%`$ mass, and change it by up to $`35\%`$ at a given redshift. Lowering $`\mathrm{\Omega }_0`$ reduces the characteristic density of a halo of given mass, and so a higher mass is required in order to keep the gas bound. Adding a cosmological constant reduces the density further through $`\mathrm{\Delta }_c`$ \[see equations (10) and (11)\]. For the three curves with $`\alpha =1.8`$, the circular velocity of the $`50\%`$ mass equals $`13\mathrm{km}\mathrm{s}^{-1}`$ at all redshifts, up to variations of a few percent.
The spectral shape of the ionizing flux affects modestly the threshold circular velocity corresponding to the $`50\%`$ mass, because assuming a steeper spectrum (i.e. with a larger $`\alpha `$) reduces the gas temperature and thus requires a shallower potential to keep the gas bound. A higher flux normalization has the opposite effect of increasing the threshold circular velocity. The left panel of Figure 5 shows the variation of circular velocity with spectral shape, for two normalizations ($`n_\gamma /n_b=1`$ and $`n_\gamma /n_b=0.1`$ for the solid and dashed curves, respectively). The right panel shows the complementary case of varying the spectral normalization, using two values for the spectral slope ($`\alpha =1.8`$ and $`\alpha =5`$ for the solid and dashed curves, respectively). All curves assume an $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ cosmology.
Obviously, $`50\%`$ is a fairly arbitrary choice for the unbound gas fraction at which halos evaporate. Figure 3 shows that for a given halo, the unbound gas fraction changes from $`10\%`$ to $`90\%`$ over a factor of $`60`$ in mass, or a factor of $`4`$ in velocity dispersion. When $`50\%`$ of the gas is unbound, however, the rest of the gas is also substantially heated, and we expect the process of collapse and fragmentation to be inhibited. In the extreme case where the gas expands until a steady state is achieved where it is pressure confined by the IGM, less than $`10\%`$ of the original gas is left inside the virial radius. However, continued infall of dark matter should limit the expansion. Numerical simulations may be used to define more precisely the point at which gas halos are disrupted. Clearly, photo-evaporation affects even halos with masses well above the $`50\%`$ mass, although these halos do not completely evaporate. Note that it is also clear from Figure 3 that not including radiative transfer would have only a minor effect on the value of the $`50\%`$ mass (typically $`5\%`$).
Given the values of the unbound gas fraction in halos of different masses, we can integrate to find the total gas fraction in the Universe which becomes unbound at reionization. This calculation requires the abundance distribution of halos, which is readily provided by the Press-Schechter mass function for CDM cosmologies (relevant expressions are given, e.g., in NFW). The high-mass cutoff in the integration is given by the lowest mass halo for which the unbound gas fraction is zero, since halos above this mass are not significantly affected by the UV radiation. The low-mass cutoff is given by the lowest mass halo in which gas has assembled by the reionization redshift. We adopt for this low-mass cutoff the linear Jeans mass, which we calculate following Peebles (1993, §6). The gas temperature in the Universe follows the cosmic microwave background temperature down to a redshift $`1+z_t\approx 740(\mathrm{\Omega }_bh^2)^{2/5}`$, at which the baryonic Jeans mass is $`1.9\times 10^5(\mathrm{\Omega }_bh^2)^{-1/2}M_{\odot }`$. After this redshift, the gas temperature goes down as $`(1+z)^2`$, so the baryon Jeans mass acquires a factor of $`[(1+z)/(1+z_t)]^{3/2}`$. Until now we have considered baryons only, but if we add dark matter then the mean density (or the corresponding gravitational force) is increased by $`\mathrm{\Omega }_0/\mathrm{\Omega }_b`$, which decreases the baryonic Jeans mass by $`(\mathrm{\Omega }_0/\mathrm{\Omega }_b)^{3/2}`$. The corresponding total halo mass is $`\mathrm{\Omega }_0/\mathrm{\Omega }_b`$ times the baryonic mass. Thus the Jeans cutoff before reionization corresponds to a total halo mass of
$$M_J=6.9\times 10^3\left(\frac{\mathrm{\Omega }_0h^2}{0.2}\right)^{-\frac{1}{2}}\left(\frac{\mathrm{\Omega }_bh^2}{0.02}\right)^{-\frac{3}{5}}\left(\frac{1+z}{10}\right)^{\frac{3}{2}}M_{\odot }.$$
(9)
This value agrees with the numerical spherical collapse calculations of Haiman, Thoul, & Loeb (1996).
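The prefactor of equation (9) can be recovered by combining the steps just described; a minimal sketch of that arithmetic (using the fiducial $`\mathrm{\Omega }_0h^2=0.2`$ and $`\mathrm{\Omega }_bh^2=0.02`$, and $`1+z=10`$) is:

```python
Omega0_h2, Omegab_h2, z = 0.2, 0.02, 9.0

M_J_baryon_zt = 1.9e5 * Omegab_h2 ** -0.5    # baryonic Jeans mass at z_t, in Msun
one_plus_zt = 740.0 * Omegab_h2 ** 0.4       # redshift below which T_gas ~ (1+z)^2
growth = ((1 + z) / one_plus_zt) ** 1.5      # Jeans mass scaling after z_t
dm_factor = (Omegab_h2 / Omega0_h2) ** 1.5   # adding dark matter lowers the baryonic Jeans mass
to_total = Omega0_h2 / Omegab_h2             # convert the baryonic mass to a total halo mass

M_J_total = M_J_baryon_zt * growth * dm_factor * to_total
print(M_J_total)    # ~6.9e3 Msun, the prefactor of equation (9)
```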
We thus calculate the total fraction of gas in the Universe which is bound in pre-existing halos, and the fraction of this gas which then becomes unbound at reionization. In Figure 6 we show the fraction of the collapsed gas which evaporates as a function of the reionization redshift. The solid line assumes $`\alpha =1.8`$, and the dotted line assumes $`\alpha =5`$, both for $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The other lines assume $`\alpha =1.8`$, the short-dashed line with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ and the long-dashed line with $`\mathrm{\Omega }_0=1`$. All assume $`n_\gamma /n_b=1`$ and a primordial $`n=1`$ (scale invariant) power spectrum. In each case we normalized the CDM power spectrum to the present cluster abundance, $`\sigma _8=0.5\mathrm{\Omega }_0^{-0.5}`$ (see, e.g., Pen 1998), where $`\sigma _8`$ is the root-mean-square amplitude of mass fluctuations in spheres of radius $`8h^{-1}`$ Mpc. The fraction of collapsed gas which is unbound is $`0.4`$–0.7 at $`z=6`$ and it increases with redshift. This fraction clearly depends strongly on the halo abundance but is relatively insensitive to the spectral slope $`\alpha `$ of the ionizing radiation. In hierarchical models, the characteristic mass (and binding energy) of virialized halos is smaller at higher redshifts, and a larger fraction of the collapsed gas therefore escapes once it is photoheated. Among the three cosmological models, the characteristic mass at a given redshift is smallest for $`\mathrm{\Omega }_0=1`$ and largest for $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$.
In Figure 7 we show the total fraction of gas in the Universe which evaporates at reionization. The solid line assumes $`\alpha =1.8`$, and the dotted line assumes $`\alpha =5`$, both for $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$. The other lines assume $`\alpha =1.8`$, the short-dashed line with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ and the long-dashed line with $`\mathrm{\Omega }_0=1`$. All assume $`n_\gamma /n_b=1`$. For the different cosmologies, the total unbound fraction goes up to 20–25$`\%`$ if reionization occurs as late as $`z=6`$–$`7`$; in this case a substantial fraction of the total gas in the Universe undergoes the process of expulsion from halos. However, this fraction typically decreases at higher redshifts. Although a higher fraction of the collapsed gas evaporates at higher $`z`$ (see Figure 6), a smaller fraction of the gas in the Universe lies in halos in the first place. The latter effect dominates except for the open model up to $`z\sim 7`$. As is well known, the $`\mathrm{\Omega }_0=1`$ model produces late structure formation, and indeed the collapsed fraction decreases rapidly with redshift in this cosmological model. The low $`\mathrm{\Omega }_0`$ models approach the $`\mathrm{\Omega }_0=1`$ behavior at high $`z`$, but this occurs faster for the flat model with a cosmological constant than for the open model with the same value of $`\mathrm{\Omega }_0`$.
Changing the dark matter and gas profiles as discussed in §2 has a modest effect on the results. For example, with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, and $`z=8`$, and for our standard model where the gas and dark matter follow identical NFW profiles, the total unbound gas fraction is $`19.8\%`$ and the halo mass which loses $`50\%`$ of its baryons is $`5.25\times 10^7M_{\odot }`$. If we let the mass and the baryons follow the profile of equation (4) the corresponding results are $`20.0\%`$ and $`5.31\times 10^7M_{\odot }`$ for $`b=10`$ in equation (4) and $`20.9\%`$ and $`6.84\times 10^7M_{\odot }`$ for $`b=5`$ (i.e. a larger core). With an NFW mass profile but gas in hydrostatic equilibrium at the virial temperature, the unbound fraction is $`19.2\%`$, and the $`50\%`$ mass is $`4.33\times 10^7M_{\odot }`$. If we let the gas temperature be $`T=2T_{\mathrm{vir}}`$, the unbound fraction is $`22.0\%`$ and the $`50\%`$ mass is $`1.18\times 10^8M_{\odot }`$. For clouds of gas which condense by cooling for a Hubble time, the unbound fraction is $`18.2\%`$, and the $`50\%`$ mass is $`3.38\times 10^7M_{\odot }`$. We conclude that centrally concentrated gas clouds are in general more effective at retaining their gas, but the effect on the overall unbound gas fraction in the Universe is modest, even for large variations in the profile. If we return to the NFW profile but adopt $`f=0.01`$ instead of $`f=0.5`$ in the NFW prescription for finding the collapse redshift (see Appendix A), we find an unbound fraction of $`20.3\%`$, and a $`50\%`$ mass of $`6.06\times 10^7M_{\odot }`$. Finally, lowering $`\mathrm{\Omega }_b`$ by a factor of 2 changes the unbound fraction to $`19.0\%`$ and the $`50\%`$ mass to $`5.44\times 10^7M_{\odot }`$. Our predictions appear to be robust against variations in the model parameters.
## 4 Implications for the Intergalactic Medium and for Low Redshift Objects
Our calculations show that a substantial fraction of gas in the Universe may lie in virialized halos before reionization, and that most of it evaporates out of the halos when it is photoionized and heated at reionization. The resulting outflows of gas from halos may have interesting implications for the subsequent evolution of structure in the IGM. We discuss some of these implications in this section.
In the pre-reionization epoch, a fraction of the gas in the dense cores of halos may fragment and form stars. Some star formation is, of course, needed in order to produce the ionizing flux which leads to reionization. These population III stars produce the first metals in the Universe, and they may make a substantial contribution to the enrichment of the IGM. Numerical models by Mac-Low & Ferrara (1998) suggest that feedback from supernovae is very efficient at expelling metals from dwarf galaxies of total mass $`3.5\times 10^8M_{\odot }`$, although it ejects only a small fraction of the interstellar medium in these hosts. Obviously, the metal expulsion efficiency depends on the presence of clumps in the supernova ejecta (Franco et al. 1993) and on the supernova rate – the latter depending on the unknown star formation rate and the initial mass function of stars at high redshifts. Reionization provides an alternative method for expelling metals efficiently out of dwarf galaxies by directly photoheating the gas in their halos, leading to its evaporation along with its metal content. Note that we have assumed zero metallicity in calculating cooling. Even if some metals had already been mixed into the IGM, the metallicity of newly formed objects was likely too low to affect cooling since even at $`z\sim 3`$ the typical metallicity of the Lyman alpha forest has been observed to be $`<0.01`$ solar (Songaila & Cowie 1996; Tytler et al. 1995).
Gas which falls into halos and is expelled at reionization attains a different entropy than if it had stayed at the mean density of the Universe. Gas which collapses into a halo is at a high overdensity when it is photoheated, and is therefore at a lower entropy than if it were heated to the same temperature at the mean cosmic density. However, the overall change in the entropy density of the IGM is small for two reasons. First, even at $`z=6`$ only about $`25\%`$ of the gas in the Universe undergoes evaporation. Second, the gas remains in ionization equilibrium and is photoheated during its initial expansion. For example, if $`z=6`$, $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`n_\gamma /n_b=1`$, and $`\alpha =1.8`$, then the recombination time becomes longer than the dynamical time only when the gas expands down to an overdensity of 26, at which point its temperature is 22,400 K compared to an initial (non-equilibrium) temperature of 19,900 K for gas at the mean density. The resulting overall reduction in the entropy is the same as would be produced by reducing the temperature of the entire IGM by a factor of 1.6. This factor reduces to 1.4 if we increase $`z`$ to 8 or increase $`\alpha `$ to 5. Note that Haehnelt & Steinmetz (1998) showed that differences in temperature by a factor of $`3`$$`4`$ result in possibly observable differences in the Doppler parameter distribution of Ly$`\alpha `$ absorption lines at redshifts 3–5.
When the halos evaporate, recombinations in the gas could produce Ly$`\alpha `$ lines or radiation from two-photon transitions to the ground state of hydrogen. However, a simple estimate shows that the resulting luminosity is too small for direct detection unless these halos are illuminated by an internal ionizing source. In an externally illuminated $`z=6`$, $`10^8M_{\odot }`$ halo our calculations imply a total of $`1\times 10^{50}`$ recombinations per second. Note that the number of recombinations is dominated by the high density core, and if we did not include self-shielding we would obtain an overestimate by a factor of $`15`$. If each recombination releases one or two photons with a total energy of $`10.2`$ eV, then for $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ the observed flux is $`5\times 10^{-20}`$ erg s<sup>-1</sup> cm<sup>-2</sup>. This flux is well below the sensitivity of the planned Next Generation Space Telescope, even if part of this flux is concentrated in a narrow line.
The photoionization heating of the gaseous halos of dwarf galaxies resulted in outflows with a characteristic velocity of $`20`$–$`30\mathrm{km}\mathrm{s}^{-1}`$. These outflows must have induced peculiar velocities of a comparable magnitude in the IGM surrounding these galaxies. The effect of the outflows on the velocity field and entropy of the IGM at $`z=5`$–10 could in principle be searched for in the absorption spectra of high redshift sources, such as quasars. These small-scale fluctuations in velocity and the resulting temperature fluctuations have been seen in recent simulations by Bryan et al. (1998). However, the small halos responsible for these outflows were only barely resolved even in these high resolution simulations of a small volume.
The evaporating galaxies could contribute to the high column density end of the Ly$`\alpha `$ forest (cf. Bond, Szalay, & Silk 1988). For example, shortly after being photoionized, a $`z=8`$, $`5\times 10^7M_{\odot }`$ halo has a neutral hydrogen column density of $`2\times 10^{16}`$ cm<sup>-2</sup> at an impact parameter of $`0.5r_{\mathrm{vir}}=0.66`$ kpc, $`6\times 10^{17}`$ cm<sup>-2</sup> at $`0.25r_{\mathrm{vir}}`$, and $`9\times 10^{20}`$ cm<sup>-2</sup> (or $`9\times 10^{18}`$ cm<sup>-2</sup> if we do not include self-shielding) at $`0.1r_{\mathrm{vir}}`$ (assuming $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$, $`\alpha =1.8`$, and $`n_\gamma /n_b=1`$). These column densities will decline as the gas expands out of the host galaxy. Abel & Mo (1998) have suggested that a large fraction of the Lyman limit systems at $`z\sim 3`$ may correspond to mini-halos that survived reionization. Remnant absorbers due to galactic outflows can be distinguished from large-scale absorbers in the IGM by their compactness. Close lines of sight due to quasar pairs or gravitationally lensed quasars (see, e.g., Crotts & Fang 1998; Petry, Impey, & Foltz 1998, and references therein) should probe different HI column densities in galactic outflow absorbers but similar column densities in the larger, more common absorbers. Follow-up observations with high spectroscopic resolution could reveal the velocity fields of these outflows.
Although much of the gas in the Universe evaporated at reionization, the underlying dark matter halos continued to evolve through infall and merging, and the heated gas may have accumulated in these halos at lower redshifts. This latter process has been discussed by a number of authors, with an emphasis on the effect of reionization and the resulting heating of gas. Thoul & Weinberg (1996) found a reduction of $`50\%`$ in the collapsed gas mass due to heating, for a halo of $`V_c=50\mathrm{km}\mathrm{s}^{-1}`$ at $`z=2`$, and a complete suppression of infall below $`V_c=30\mathrm{km}\mathrm{s}^{-1}`$. The effect is thus substantial on halos with virial temperatures well above the gas temperature. Their interpretation is that pressure support delays turnaround substantially and slows the subsequent collapse. Indeed, as noted in §2, the ratio of the pressure force to the gravitational force on the gas is roughly equal to the ratio of its thermal energy to its potential energy. For a given enclosed mass, the potential energy of a shell of gas increases as its radius decreases. Before collapse, each gas shell expands with the Hubble flow until its expansion is halted and then reversed. Near turnaround, the gas is weakly bound and the pressure gradient may prevent collapse even for gas below the halo virial temperature. On the other hand, gas which is already well within the virial radius is tightly bound, which explains our lower value of $`V_c\simeq 13\mathrm{km}\mathrm{s}^{-1}`$ for halos which lose half their gas at reionization.
Three dimensional numerical simulations (Quinn, Katz, & Efstathiou 1996; Weinberg, Hernquist, & Katz 1997; Navarro & Steinmetz 1997) have also explored the question of whether dwarf galaxies could re-form at $`z\lesssim 2`$. The heating by the UV background was found to suppress infall of gas into even larger halos ($`V_c\sim 75\mathrm{km}\mathrm{s}^{-1}`$), depending on the redshift and on the ionizing radiation intensity. Navarro & Steinmetz (1997) noted that photoionization reduces the cooling efficiency of gas at low densities, which suppresses further the late infall at redshifts below 2. We note that these various simulations assume an isotropic ionizing radiation field, and do not calculate radiative transfer. Photoevaporation of a gas cloud has been calculated in a two dimensional simulation (Shapiro, Raga, & Mellema 1998), and methods are being developed for incorporating radiative transfer into three dimensional cosmological simulations (e.g., Abel, Norman, & Madau 1999; Razoumov & Scott 1999).
Our results have interesting implications for the fate of gas in low-mass halos. Gas evaporates at reionization from halos below $`V_c\simeq 13\mathrm{km}\mathrm{s}^{-1}`$, or a velocity dispersion $`\sigma \simeq 10\mathrm{km}\mathrm{s}^{-1}`$. A similar value of the velocity dispersion is also required to reach a virial temperature of $`10^4`$ K, allowing atomic cooling and perhaps star formation before reionization. Thus, halos with $`\sigma \gtrsim 10\mathrm{km}\mathrm{s}^{-1}`$ could have formed stars before reionization. They would have kept their gas after reionization, and could have had ongoing star formation subsequently. These halos were the likely sites of population III stars, and could have been the progenitors of dwarf galaxies in the local Universe (cf. Miralda-Escudé & Rees 1998). On the other hand, halos with $`\sigma \lesssim 10\mathrm{km}\mathrm{s}^{-1}`$ could not have cooled before reionization. Their warm gas was completely evaporated from them at reionization, and could not have returned to them until very low redshifts, possibly $`z\lesssim 1`$, so that their stellar population should be relatively young.
It is interesting to compare these predictions to the properties of dwarf spheroidal galaxies in the Local Group which have low central velocity dispersions. At first sight this appears to be a difficult task. The dwarf galaxies vary greatly in their properties, with many showing evidence for multiple episodes of star formation as well as some very old stars (see the recent review by Mateo 1998). Another obstacle is the low temporal resolution of age indicators for old stellar populations. For example, if $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ then the age of the Universe is $`43\%`$ of its present age at $`z=1`$ and $`31\%`$ at $`z=1.5`$. Thus, stars that formed at these redshifts may already be $`10`$ Gyr old at present, and are difficult to distinguish from stars that formed at $`z>5`$.
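For reference, the quoted age fractions follow from a one-line integral; a minimal Python check, assuming a flat Universe with these parameters (variable names are ours), is given below.

```python
import numpy as np
from scipy.integrate import quad

# Check of the quoted age fractions for Omega_0 = 0.3, Omega_Lambda = 0.7:
# t(z)/t(0) should come out ~43% at z = 1 and ~31% at z = 1.5.
def age_integrand(z, om0=0.3, oml=0.7):
    E = np.sqrt(om0 * (1 + z) ** 3 + oml)
    return 1.0 / ((1 + z) * E)

def age_fraction(z):
    t_z, _ = quad(age_integrand, z, np.inf)
    t_0, _ = quad(age_integrand, 0.0, np.inf)
    return t_z / t_0

print(age_fraction(1.0), age_fraction(1.5))   # ~0.43 and ~0.31
```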
Nevertheless, one of our robust predictions is that most early halos with $`\sigma \lesssim 10\mathrm{km}\mathrm{s}^{-1}`$ could not have formed stars in the standard hierarchical scenario. Globular clusters belong to one class of objects with such a low velocity dispersion. Peebles & Dicke (1968) originally suggested that globular clusters may have formed at high redshifts, before their parent galaxies. However, in current cosmological models, most mass fluctuations on globular cluster scales were unable to cool effectively and fragment until $`z\lesssim 10`$, and were evaporated subsequently by reionization. We note that Fall & Rees (1985) proposed an alternative formation scenario for globular clusters involving a thermal instability inside galaxies with properties similar to those of the Milky Way. Globular clusters have also been observed to form in galaxy mergers (e.g., Miller et al. 1997). It is still possible that some of the very oldest and most metal poor globular clusters originated from $`z\gtrsim 10`$, before the UV background had become strong enough to destroy the molecular hydrogen in them. However, primeval globular clusters should have retained their dark halos, but observations suggest that globular clusters are not embedded in dark matter halos (Moore 1996; Heggie & Hut 1995).
Another related population is the nine dwarf spheroidals in the Local Group with central velocity dispersions $`\sigma \lesssim 10\mathrm{km}\mathrm{s}^{-1}`$, including five below $`7\mathrm{km}\mathrm{s}^{-1}`$ (e.g., Mateo 1998). In the hierarchical clustering scenario, the dark matter in a present halo was most probably divided at reionization among several progenitors which have since merged. The velocity dispersions of these progenitors were likely even lower than that of the final halo. Thus the dwarf galaxies could not have formed stars at high redshifts, and their formation presents an intriguing puzzle. There are two possible solutions to this puzzle: (i) the ionizing background dropped dramatically at low redshifts, allowing the dwarf galaxies to form at $`z\lesssim 1`$, or (ii) the measured stellar velocity dispersions of the dwarf galaxies are well below the velocity dispersions of their dark matter halos.
Unlike globular clusters, the dwarf spheroidal galaxies are dark matter dominated. The dark halo of a present-day dwarf galaxy may have virialized at high redshifts but accreted its gas at low redshift from the IGM. However, for dark matter halos accumulating primordial gas, Kepner, Babul, & Spergel (1997) found that even if $`I_{21}(z)`$ declines as $`(1+z)^4`$ below $`z=3`$, only halos with $`V_c\gtrsim 20\mathrm{km}\mathrm{s}^{-1}`$ can form atomic hydrogen by $`z=1`$, and $`V_c\gtrsim 25\mathrm{km}\mathrm{s}^{-1}`$ is required to form molecular hydrogen.
Alternatively, the dwarf dark halos could have accreted cold gas at low redshift from a larger host galaxy rather than from the IGM. As long as the dwarf halos join their host galaxy at a redshift much lower than their formation redshift, they will survive disruption due to their high densities. The subsequent accretion of gas could result from passages of the dwarf halos through the gaseous tidal tail of a merger event or through the disk of the parent galaxy. In this case, retainment of cold, dense, and possibly metal enriched gas against heating by the UV background requires a shallower potential well than accumulating warm gas from the IGM. Simulations of galaxy encounters (Barnes & Hernquist 1992; Elmegreen, Kaufman, & Thomasson 1993) have found that dwarf galaxies could form but with small amounts of dark matter. However, the initial conditions of these simulations assumed parent galaxies with a smooth dark matter distribution rather than clumpy halos with dense sub-halos inside them. Simulations by Klypin et al. (1999) suggest that galaxy halos may have large numbers of dark matter satellites, most of which have no associated stars. If true, this implies that the dwarf spheroidal galaxies might be explained even if only a small fraction of dwarf dark halos accreted gas and formed stars.
A common origin for the Milky Way’s dwarf satellites (and a number of halo globular clusters), as remnants of larger galaxies accreted by the Milky Way galaxy, has been suggested on independent grounds. These satellites appear to lie along two (e.g., Majewski 1994) or more (Lynden-Bell & Lynden-Bell 1995, Fusi-Pecci et al. 1995) polar great circles. The star formation history of the dwarf galaxies (e.g., Grebel 1998) constrains their merger history, and implies that the fragmentation responsible for their appearance must have occurred early in order to be consistent with the variation in stellar populations among the supposed fragments (Unavane, Wyse, & Gilmore 1996; Olszewski 1998). Observations of interacting galaxies (outside the Local Group) also suggest the formation of “tidal dwarf galaxies” (e.g., Duc & Mirabel 1997).
Finally, there exists the possibility that the measured velocity dispersion of stars in the dwarf spheroidals underestimates the velocity dispersion of their dark halos. Assuming that the stars are in equilibrium, their velocity dispersion could be lower than that of the halo if the mass profile is shallower than isothermal beyond the stellar core radius. As discussed in §2, halo profiles are thought to vary from being shallow in a central core to being steeper than isothermal at larger distances. The velocity dispersion and mass to light ratio of a dwarf spheroidal could also appear high if it is non-spherical or the stellar orbits are anisotropic. Some dwarf spheroidals may even not be dark matter dominated if they are tidally disrupted (e.g., Kroupa 1997). The observed properties of dwarf spheroidals require a central mass density of order $`0.1M_{\odot }`$ pc<sup>-3</sup> (e.g., Mateo 1998), which is $`7\times 10^5`$ times the present critical density. The stars therefore reside either in high-redshift halos or in the very central parts of low redshift halos. Detailed observations of the velocity dispersion profiles of these stars could be used to discriminate between these possibilities.
## 5 Conclusions
We have shown that the photoionizing background radiation which filled the Universe during reionization likely boiled most of the virialized gas out of CDM halos at that time. The evaporation process probably lasted of order a Hubble time due to the gradual increase in the UV background as the HII regions around individual sources overlapped and percolated until the radiation field inside them grew up to its cosmic value – amounting to the full contribution of sources from the entire Hubble volume. The precise reionization history depends on the unknown star formation efficiency and the potential existence of mini-quasars in newly formed halos (Haiman & Loeb 1998a).
The total fraction of the cosmic baryons which participate in the evaporation process depends on the reionization redshift, the ionizing intensity, and the cosmological parameters, but is not very sensitive to the precise gas and dark matter profiles of the halos. The central core of halos is typically shielded from the external ionizing radiation by the surrounding gas, but this core typically contains $`<20\%`$ of the halo gas and has only a weak effect on the global behavior of the gas. We have found that halos are disrupted up to a circular velocity $`V_c\simeq 13\mathrm{km}\mathrm{s}^{-1}`$ for a shallow, quasar-like spectrum, or $`V_c\simeq 11\mathrm{km}\mathrm{s}^{-1}`$ for a stellar spectrum, assuming the photoionizing sources build up a density of ionizing photons comparable to the mean cosmological density of baryons. At this photoionizing intensity, the value of the circular velocity threshold is nearly independent of redshift. The corresponding halo mass changes, however, from $`10^8M_{\odot }`$ at $`z=5`$ to $`10^7M_{\odot }`$ at $`z=20`$, assuming a shallow ionizing spectrum.
Based on these findings, we expect that both globular clusters and Local Group dwarf galaxies with velocity dispersions $`\lesssim 10\mathrm{km}\mathrm{s}^{-1}`$ formed at low redshift, most probably inside larger galaxies. The latter possibility has been suggested previously for the Milky Way’s dwarf satellites based on their location along polar great circles.
###### Acknowledgements.
We are grateful to Jordi Miralda-Escudé, Chris McKee, Roger Blandford, Lars Hernquist, and David Spergel for useful discussions. We also thank Renyue Cen and Jordi Miralda-Escudé for assistance with the reaction and cooling rates. RB acknowledges support from Institute Funds. This work was supported in part by the NASA NAG 5-7039 grant (for AL).
APPENDIX A: Halo profile
We follow the prescription of NFW for obtaining the density profiles of dark matter halos, but instead of adopting a constant overdensity of 200 we use the fitting formula of Bryan & Norman (1998) for the virial overdensity:
$$\mathrm{\Delta }_c=18\pi ^2+82d-39d^2$$
(10)
for a flat Universe with a cosmological constant and
$$\mathrm{\Delta }_c=18\pi ^2+60d-32d^2$$
(11)
for an open Universe, where $`d\equiv \mathrm{\Omega }(z)-1`$. Given $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, we define
$$\mathrm{\Omega }(z)=\frac{\mathrm{\Omega }_0(1+z)^3}{\mathrm{\Omega }_0(1+z)^3+\mathrm{\Omega }_\mathrm{\Lambda }+(1-\mathrm{\Omega }_0-\mathrm{\Omega }_\mathrm{\Lambda })(1+z)^2}.$$
(12)
In equation (3) $`c`$ is determined for a given $`\delta _c`$ by the relation
$$\delta _c=\frac{\mathrm{\Delta }_c}{3}\frac{c^3}{\mathrm{ln}(1+c)-c/(1+c)}.$$
(13)
The characteristic density is given by
$$\delta _c=C(f)\mathrm{\Omega }(z)\left(\frac{1+z_{coll}}{1+z}\right)^3.$$
(14)
For a given halo of mass $`M`$, the collapse redshift $`z_{coll}`$ is defined as the time at which a mass $`M/2`$ was first contained in progenitors more massive than some fraction $`f`$ of $`M`$. This is computed using the extended Press-Schechter formalism (e.g. Lacey & Cole 1993). NFW find that $`f=0.01`$ fits their $`z=0`$ simulation results best. Since we are interested in high redshifts when mergers are very frequent, we adopt the more natural $`f=0.5`$ but also check the $`f=0.01`$ case. \[For example, the survival time of a $`z=8`$, $`5\times 10^7M_{\odot }`$ halo before it merges is $`30`$–$`40\%`$ of the age of the Universe at that redshift (Lacey & Cole 1993).\] In both cases we adopt the normalization of NFW, which is $`C(0.5)=2\times 10^4`$ and $`C(0.01)=3\times 10^3`$.
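The prescription above is straightforward to implement numerically; the Python sketch below collects the Bryan & Norman fit (equations 10–11), $`\mathrm{\Omega }(z)`$ (equation 12), and the inversion of equation (13) for the concentration. The function names and the root-bracketing interval are ours.

```python
import numpy as np
from scipy.optimize import brentq

def omega_z(z, om0, oml):
    """Equation (12)."""
    return om0 * (1 + z) ** 3 / (om0 * (1 + z) ** 3 + oml
                                 + (1 - om0 - oml) * (1 + z) ** 2)

def delta_vir(z, om0, oml):
    """Bryan & Norman virial overdensity fit, equations (10)-(11)."""
    d = omega_z(z, om0, oml) - 1.0
    if oml > 0.0:   # flat Universe with a cosmological constant
        return 18 * np.pi ** 2 + 82 * d - 39 * d ** 2
    return 18 * np.pi ** 2 + 60 * d - 32 * d ** 2   # open Universe

def concentration(delta_c, z, om0=0.3, oml=0.7):
    """Solve equation (13) for c given the characteristic density delta_c."""
    Dv = delta_vir(z, om0, oml)
    f = lambda c: Dv / 3.0 * c ** 3 / (np.log(1 + c) - c / (1 + c)) - delta_c
    return brentq(f, 1e-3, 1e3)

# Example: characteristic density from equation (14) with C(0.5) = 2e4
z, z_coll = 8.0, 10.0
dc = 2e4 * omega_z(z, 0.3, 0.7) * ((1 + z_coll) / (1 + z)) ** 3
print(concentration(dc, z))
```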
APPENDIX B: Radiative Transfer
We neglect atomic transitions of helium atoms in the radiative transfer calculation. We only consider halos for which $`k_BT`$ is well below the ionization energy of hydrogen, and so following Tajiri & Umemura (1998) we assume that recombinations to excited levels do not result in further ionizations. On the other hand, recombinations to the ground state result in the emission of ionizing photons all of which are in a narrow frequency band just above the Lyman limit frequency $`\nu =\nu _L`$. We follow separately these emitted photons and the external incoming radiation. The external photons undergo absorption with an optical depth at the Lyman limit determined by
$$\frac{d\tau _{\nu _L}}{ds}=\sigma _{HI}(\nu _L)n_{HI}.$$
(15)
The emitted photons near $`\nu _L`$ are propagated by the equation of radiative transfer,
$$\frac{dI_\nu }{ds}=-\sigma _{HI}(\nu )n_{HI}I_\nu +\eta _\nu .$$
(16)
Assuming all emitted photons are just above $`\nu =\nu _L`$, we can set $`\sigma _{HI}(\nu )=\sigma _{HI}(\nu _L)`$ in this equation and propagate the total number flux of ionizing photons,
$$F_1\equiv \int _{\nu _L}^{\infty }\frac{I_\nu }{h\nu }d\nu .$$
(17)
The emissivity term for this quantity is
$$\int _{\nu _L}^{\infty }\frac{\eta _\nu }{h\nu }d\nu =\frac{\omega }{4\pi }\alpha _{HI}n_{HII}n_e,$$
(18)
where $`\alpha _{HI}`$ is the total recombination coefficient to all bound levels of hydrogen and $`\omega `$ is the fraction of recombinations to the ground state. In terms of Table 5.2 of Spitzer (1978), $`\omega =(\varphi _1-\varphi _2)/\varphi _1`$. We find that a convenient fitting formula up to $`64,000`$ K, accurate to $`2\%`$, is (with $`T`$ in K)
$$\omega =0.205-0.0266\mathrm{ln}(T)+0.0049\mathrm{ln}^2(T).$$
(19)
When these photons are emitted they carry away the kinetic energy of the absorbed electron. When the photons are re-absorbed at some distance from where they were emitted, they heat the gas with this extra energy. Since $`k_BT\ll h\nu _L`$ we do not need to compute the exact frequency distribution of these photons. Instead we solve a single radiative transfer equation for the total flux of energy (above the ionization energy of hydrogen) in these photons,
$$F_2\equiv \int _{\nu _L}^{\infty }\frac{I_\nu }{h\nu }(h\nu -h\nu _L)d\nu .$$
(20)
The emissivity term for radiative transfer of $`F_2`$ is
$$\int _{\nu _L}^{\infty }\frac{\eta _\nu }{h\nu }(h\nu -h\nu _L)d\nu =\frac{2.07\times 10^{-11}}{T^{1/2}}\frac{\chi _1(\beta )-\chi _2(\beta )}{4\pi }n_{HII}n_e\text{ erg cm}^{-3}\text{ s}^{-1}\text{ sr}^{-1},$$
(21)
where $`\beta =h\nu _L/kT`$, $`T`$ is in K, and the functions $`\chi _1`$ and $`\chi _2`$ are given in Table 6.2 of Spitzer (1978). We find a fitting formula up to $`64,000`$K, accurate to $`2\%`$ (with $`T`$ in K):
$$\chi _1(T)-\chi _2(T)=\{\begin{array}{cc}0.78\hfill & \text{if }T<10^3\text{ K}\hfill \\ -0.172+0.255\mathrm{ln}(T)-0.0171\mathrm{ln}^2(T)\hfill & \text{otherwise.}\hfill \end{array}$$
(22)
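For convenience, the two fitting formulas, equations (19) and (22), can be transcribed directly; the Python function names below are ours.

```python
import numpy as np

def omega_ground(T):
    """Equation (19): fraction of recombinations directly to the ground state."""
    lnT = np.log(T)
    return 0.205 - 0.0266 * lnT + 0.0049 * lnT ** 2

def chi1_minus_chi2(T):
    """Equation (22): kinetic-energy factor entering the emissivity of equation (21)."""
    if T < 1e3:
        return 0.78
    lnT = np.log(T)
    return -0.172 + 0.255 * lnT - 0.0171 * lnT ** 2

for T in (1e3, 1e4, 6.4e4):
    print(T, omega_ground(T), chi1_minus_chi2(T))
```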
From each point we integrate along all lines of sight to find $`\tau _{\nu _L}`$, $`F_1`$ and $`F_2`$ as a function of angle. Because of spherical symmetry, we do this only at each radius, and the angular dependence only involves $`\theta `$, the angle relative to the radial direction. We then integrate to find the photoionization rate. For each atomic species, the rate is
$$\mathrm{\Gamma }_{\gamma i}=\int _0^{4\pi }d\mathrm{\Omega }\int _{\nu _i}^{\infty }\frac{I_\nu }{h\nu }\sigma _i(\nu )d\nu \mathrm{s}^{-1},$$
(23)
where $`\nu _i`$ and $`\sigma _i(\nu )`$ are the threshold frequency and cross section for photoionization of species $`i`$, given in Osterbrock \[1989; see Eq. (2.31) for HI, HeI and HeII\]. For the external photons the UV intensity is $`I_{\nu ,0}e^{-\tau _\nu }`$, with the boundary intensity $`I_{\nu ,0}=I_{\nu _L,0}(\nu /\nu _L)^{-\alpha }`$ as before, and $`\tau _\nu `$ approximated as $`\tau _{\nu _L}(\nu /\nu _L)^{-3}`$. Since $`\sigma _i(\nu )`$ has the simple form of a sum of two power laws, the frequency integral in $`\mathrm{\Gamma }_{\gamma i}`$ can be done analytically, and only the angular integration is computed numerically (cf. the similar but simpler calculation of Tajiri & Umemura 1998). There is an additional contribution to photoionization for HI only, from the emitted photons just above $`\nu =\nu _L`$, given by $`\int _0^{4\pi }d\mathrm{\Omega }\sigma _i(\nu _L)F_1`$. The photoheating rate per unit volume is $`n_iϵ_i`$, where $`n_i`$ is the number density of species $`i`$ and
$$ϵ_i=\int _0^{4\pi }d\mathrm{\Omega }\int _{\nu _i}^{\infty }\frac{I_\nu }{h\nu }\sigma _i(\nu )(h\nu -h\nu _L)d\nu \mathrm{ergs}\mathrm{s}^{-1}.$$
(24)
The rate for the external UV radiation is calculated for each atomic species similarly to the calculation of $`\mathrm{\Gamma }_{\gamma i}`$. The emitted photons contribute to $`ϵ_{HI}`$ an extra amount of $`\int _0^{4\pi }d\mathrm{\Omega }\sigma _i(\nu _L)F_2`$.
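As an illustration of why the frequency integral in $`\mathrm{\Gamma }_{\gamma i}`$ is analytic, the sketch below evaluates it symbolically for a boundary intensity proportional to $`(\nu /\nu _L)^{-\alpha }`$ and a single power-law piece of the cross section proportional to $`(\nu /\nu _L)^{-s}`$, ignoring the attenuation factor; symbol names are ours, and the attenuated case instead yields an incomplete gamma function of $`\tau _{\nu _L}`$.

```python
import sympy as sp

# The frequency integral collapses to a single factor 1/(alpha + s)
# when the integration variable is x = nu / nu_L.
alpha, s = sp.symbols('alpha s', positive=True)
x = sp.symbols('x', positive=True)          # x = nu / nu_L
freq_integral = sp.integrate(x ** (-alpha - s - 1), (x, 1, sp.oo))
print(sp.simplify(freq_integral))           # 1/(alpha + s)
```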
# The twisted parsec-scale structure of 0735+178
## 1. Introduction
0735+178 was first classified as a BL Lacertae object by Carswell et al. (carswell 74 (1974)), who identified a pair of absorption lines in an otherwise featureless optical spectrum, thereby obtaining a lower limit to the redshift of the source $`z_{\mathrm{min}}=0.424`$. Hence, any observed apparent speeds in 0735+178 calculated using this redshift are strictly lower limits.
Radio maps of this source at milliarcsecond resolution have been obtained by using VLBI arrays at centimeter wavelengths, and show a compact core and a jet of emission extending to the northeast. Polarimetric VLBI observations of this source were first performed by Gabuzda, Wardle & Roberts (De89 (1989)) at a wavelength of 6 cm: the corresponding polarized intensity image shows a jet magnetic field that is predominantly perpendicular to the jet axis, together with a dramatic change in the polarized flux by 40% over a period of 24 hours. The multi-epoch 6 cm VLBI observations of 0735+178 presented by Bääth & Zhang (BZ91 (1991)) indicated superluminal motion in their component *C0* at an apparent velocity of $`\sim `$7.9 $`h^{-1}c`$ ($`H_0`$= 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0`$= 0.5), moving between the core and a stationary component, which they labeled *B*, situated about 4.2 mas from the core. This situation resembles that found in the quasar 4C 39.25 (Alberdi et al. An93 (1993) and references therein), where a very strong component moves superluminally between the core and an outer stationary component, and its increasing brightness as it moves is interpreted as Doppler enhancement caused by a bend toward the line of sight. Gabuzda et al. (1994a ; hereafter G94) confirmed the motion of the superluminal component *C0*, which they designated as *K2*, at a velocity of 7.4 $`h^{-1}c`$, and detected two new superluminal components with apparent transverse speeds of 5.0 $`h^{-1}c`$ and 4.2 $`h^{-1}c`$. The observations of G94 were near the time of intersection of the moving component *K2* with the stationary one *K1* (component *B* of Bääth & Zhang BZ91 (1991)), which took place at epoch $`\sim `$1989.8. Their images do not show any evidence for a violent interaction between these components, and they interpret this in terms of a model similar to that suggested by Bääth & Zhang (BZ91 (1991)), in which the stationary component *K1* is associated with a bend in the jet toward the line of sight. However, if this is the case, we might expect a deceleration of *K2* as it approaches *K1*, along with an increase of its flux due to enhancement of Doppler boosting, as observed and simulated in the case of 4C 39.25 (Alberdi et al. An93 (1993)). Neither of these were observed in 0735+178. If *K1* corresponds to a bend toward the observer, it also remains unclear why G94 measured component *K1* to have shifted $`\sim `$ 0.7 mas outward sometime between 1981.9 and 1983.5, after its collision with *K2*, and to have remained in that position afterward. One explanation for this “dragging” of *K1* is that we are dealing with an interaction between a moving shock (*K2*) and a standing conical shock (*K1*), as Gómez et al. (Go97 (1997)) have modeled through numerical simulations. In this scenario, an increase in the Mach number of the flow causes the stationary shock to reset to a new site downstream of its original position.
Higher resolution VLBI maps obtained by Bääth et al. (Bal91 (1991)) at 1.3 cm reveal a complex structure consisting of several distinct superluminal components moving in a rather straight jet extending to the northeast. However, a fit to the kinematical properties of the superluminal components led Bääth et al. (Bal91 (1991)) to suggest a bend of the jet in the inner milliarcsecond. Indeed, a quite complex structure in the inner milliarcsecond of 0735+178 appears in the 1.3 cm VLBI maps presented by Zhang & Bääth (ZB91 (1991)). Over a time span of about 1.2 yrs, the jet structural position angle was found to change from $`\sim `$ 25° to $`\sim `$ 73°. It is unclear whether Zhang & Bääth (ZB91 (1991)) observed different components tracing a bent common path, or components being ejected from the core with different position angles, as G94 suggested based on their observations. The first direct evidence of a curved structure in the inner jet of 0735+178 was presented by Kellermann et al. (Ke98 (1998)), as part of a Very Long Baseline Array (VLBA) survey at 15 GHz. The VLBA is an instrument of the National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
In this paper we present the first polarimetric VLBI observations of 0735+178 at 1.3 cm and 7 mm, obtained with the VLBA. The images show an apparently quite twisted structure in the inner two milliarcseconds of the jet, in very good agreement with the total intensity 2 cm VLBA image obtained by Kellermann et al. (Ke98 (1998)). We analyze and interpret the magnetic field structure and jet morphology, and discuss the possibility of a precessing jet in 0735+178.
## 2. Observations
The observations were performed on 1996 November 11 and December 22 using the VLBA at 1.3 cm and 7 mm in snapshot mode. The data were recorded in 1-bit sampling VLBA format with 32 MHz bandwidth per circular polarization. The reduction of the data was performed within the NRAO Astronomical Image Processing System (AIPS) software in the usual manner (e.g., Leppänen et al. Le95 (1995)). Delay differences between the right- and left-handed systems were estimated over a short scan of cross-polarized data of a strong calibrator (3C 454.3). The instrumental polarization was determined using the feed solution algorithm developed by Leppänen et al. (Le95 (1995)). We refer the readers to Gómez et al. (Go98 (1998)) for further details about the reduction and calibration of the data.
Due to poor weather conditions in some of the antenna locations during the second run, especially at Brewster, much of the 7 mm data had to be deleted, leading to a significantly poorer total intensity image and the loss of all polarization at this epoch.
## 3. Results
Figures 1 and 2 show the VLBA total and linearly polarized intensity images of 0735+178 at 1.3 cm and 7 mm, respectively. Tables 1 and 2 summarize the physical parameters obtained for 0735+178 at 1.3 cm and 7 mm, respectively. Tabulated data corresponds to position in right ascension and declination relative to the core component, total flux, polarized flux, magnetic field position angle, degree of polarization, and separation and structural position angle relative to the core component. Components in the total intensity maps were analyzed by model-fitting the uv data with circular Gaussian components using the Difmap software (Shepherd sh97 (1997)). Components are labeled from east to west using upper-case letters for the images at 1.3 cm, and are marked in Figs. 1 and 2. Due to the potentially large uncertainties associated with the process of model-fitting, only components well above the noise level were considered. For example, there is some weak evidence for the existence of components *d* and *c1* in the 1.3 cm maps, but the errors in the model-fitting are sufficiently large that we have decided not to include these in the final model. Component *B* is best fit at the second 7 mm epoch by two separate components labeled *b2* and *b1*, the combined flux of which compares quite well with that of component *b* in the 7 mm image at the first epoch. To reduce the errors in the model-fitting we have fit the emission from the section of the jet beyond component *B* as a single component, labeled *A*.
### 3.1. The Twisted Structure of the Inner Jet of 0735+178
Bääth et al. (Bal91 (1991)) and Bääth & Zhang (BZ91 (1991)) suggested the existence of bent structure in the jet of 0735+178 in order to explain the apparent acceleration of superluminal components as they moved further from the core. G94 found that components seem to move along linear trajectories, but to be ejected with different position angles, which could lead to apparent curved structure in the VLBI jet.
Our observations directly confirm the existence of a very twisted structure in the inner region of the jet in 0735+178, in good agreement with the observations presented by Kellermann et al. (Ke98 (1998)). In our 1.3 cm maps we observe two sharp 90° bends in the projected trajectory of the jet, one near the position of component *C* and the other close to *B*. At 7 mm the bend near the region of component *c2* (component *C* at 1.3 cm) is also quite evident, while the second bend can only be traced at the second epoch, since the emission beyond component *b* is resolved out in November 1996.
Through a comparison with our maps, we can offer an explanation for the otherwise puzzling structure Zhang & Bääth (ZB91 (1991)) observed in their 1.3 cm VLBI maps. It is possible that they observed distinct components at different positions of a common curved path, consistent with that traced in our images. However, in the 1.3 cm VLBI map by Bääth et al. (Bal91 (1991)) they observed in the inner milliarcsecond a straight jet extending northeast at a position angle of $`\sim `$45°, in contrast with the structure we observe in our images. Furthermore, G94 measured different ejection position angles for components in 0735+178.
In order to allow an easier comparison with our images, we have marked in our November 1996 map of Fig. 1 the expected position for components *K3*, *K4*, *K5*, and *K6* of G94 using their measured proper motions and assuming ballistic trajectories from the core since their last observing epoch in 1992.44. For component *K5* we have used the position reported for their 1990.47 epoch, otherwise the extrapolation would place this component far too east of *C*, outside of the structure we have mapped. No proper motion data is available for component *K6*, but assuming a similar value to that observed for the other components, we have used 0.3 mas yr<sup>-1</sup>. Figure 1 shows that this extrapolation places the G94 components within the structure found on our images, suggesting ballistic motion with systematically changing component ejection directions as the most plausible explanation for the curved structure of 0735+178.
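As a rough consistency check of the proper-motion scale used in this extrapolation, the short Python sketch below converts an angular proper motion into an apparent transverse speed for the cosmology quoted in the Introduction ($`H_0`$= 100 $`h`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, $`q_0`$= 0.5, i.e. an Einstein–de Sitter angular-diameter distance) at $`z_{\mathrm{min}}=0.424`$; the function name and constants are ours, and the result is only a lower limit because $`z_{\mathrm{min}}`$ is a lower limit.

```python
import numpy as np

C_KM_S = 2.998e5
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)
YR_TO_S = 3.156e7
KM_PER_MPC = 3.086e19

def beta_apparent(mu_mas_per_yr, z=0.424, h=1.0):
    """Apparent transverse speed in units of c (scales as h^-1)."""
    # Einstein-de Sitter angular-diameter distance in Mpc
    d_a_mpc = (2.0 * C_KM_S / (100.0 * h)) * (1.0 - 1.0 / np.sqrt(1.0 + z)) / (1.0 + z)
    v_app_km_s = (mu_mas_per_yr * MAS_TO_RAD / YR_TO_S) * d_a_mpc * KM_PER_MPC * (1.0 + z)
    return v_app_km_s / C_KM_S

print(beta_apparent(0.3))   # ~4.6, i.e. 0.3 mas/yr maps to ~4.6 h^-1 c
```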
However, the data is also consistent with motion of components following a common curved path. Component *K3* was observed to change drastically its position angle relative to the core between 1987.41 and 1990.47, from 75° to 44° (see G94). Component *K5* also experienced a change in its position angle between 1990.47 and 1992.44, with its velocity vector becoming more aligned toward the direction of component *C*, between the position of components *c3* and *c2*. The positions of the G94 components in the inner milliarcsecond are consistent with the twisted structure we have mapped; therefore we cannot rule out the possibility of a common curved funnel through which components flow. Note also that this curved funnel could change with time if it were produced by precession of the nozzle in 0735+178. In the case of ballistic motion, the direction of ejection of new components should have changed progressively from 54°, to 35°, and then to 15° from 1984 to 1991.25 to explain the different structural position angles observed for the birth of *K4*, *K5*, and *K6*. The direction of ejection would then have needed to change to 78° within about 4 yr, if we assume that component *C* was ejected around the beginning of 1995, coincident with an outburst in total flux at 22 and 37 GHz measured at Metsähovi (Teräsranta et al. Hi98 (1998)). Since then, it would have had to remain at a similar orientation (at least on the plane of the sky) to give birth to components *c3* and *d*, which according to two small flares in the Metsähovi data (Harri Teräsranta, private communication) took place around 1995.8 and 1996.7 (just before our first epoch), respectively. Note that the apparently curved jet of 0735+178 would appear more pronounced if components are ejected with different speeds, as seems to be the case.
Because of the short time range between our two epochs, the differences in the positions of the components lie within the errors in the model-fitting; therefore we can only rely on the observed magnetic field structure, and previous detections of component motions, to study the possibility of ballistic motions for the components in 0735+178.
### 3.2. Polarization
The peak polarization at both epochs and wavelengths is located at the core, with a degree of polarization between 2 and 4%, very close to the values measured by Gabuzda, Wardle, & Roberts (De89 (1989)) and G94 at 6 and 3.6 cm. However, these values should be regarded as approximate, since maxima in the images of polarized intensity are not always precisely coincident with maxima in the total intensity (see also Gómez et al. Go98 (1998)). Greater differences between the peaks in total and polarized intensity are found in the 7 mm map. Some polarized emission is also visible further from the core, especially at the first epoch at 1.3 cm. In this map the jet between components *C* and *B* can be traced in polarization, where it has a rather uniform intensity. For the second epoch at 1.3 cm the core and component *C* are clearly detected in polarization, and there is also weaker polarized flux detected at the position of *B*. In the 7 mm polarization map the core and component *c2* are clearly visible. Another component appears close to the core, and we have tentatively identified this component as the polarized counterpart of *d*. Component *c3* is also observed in polarization, while *c1* is just marginally detected.
A very similar orientation of the magnetic field vector in the jet is observed at both epochs in the 1.3 cm maps and in the 7 mm map of November 1996. The magnetic field in the core at our first epoch is oriented toward the east, in the direction of component *C*. This is consistent with the values found by Gabuzda, Wardle, & Roberts (De89 (1989)), although opacity effects due to the different wavelengths of observation may render this comparison meaningless. If the core is optically thick at both wavelengths, we should consider an extra rotation of 90° in the polarization angle when comparing to the remaining optically thin jet. At epoch November 1996, adding the flux of the core and component *d* at 7 mm, we obtain a very similar value (the same to within the uncertainty) as that measured for the core at 1.3 cm. The same is true for the second epoch. The core polarization angle in the 7 mm map shows a slightly different value than at 1.3 cm, while component *d* shows a magnetic field more aligned with the jet. These differences between the 1.3 cm and 7 mm maps are probably due to differences in resolution, although they may reflect a change in the opacity of the core. G94 found a rotation of the polarization angle in the core when observed at different epochs and frequencies. These changes in the polarization angle may be due to ejections of new components with polarization angles different from that of the core. Because of the lower resolution, these may appear in their maps as variations of a single blended component (their core), similar to what we have experienced when comparing our 1.3 cm and 7 mm maps.
The magnetic field structure in the jet between *C* and *B* is complex, and difficult to interpret unambiguously. One obvious possibility is that the magnetic field follows the direction of the apparently twisted jet flow. In this case, the magnetic field throughout this part of the jet would be longitudinal. However, another interpretation is possible: that different features move from the core in different structural position angles, and that the magnetic field vectors are transverse to the flow direction in *C* and along the flow direction from the core in *B*. This alternative point of view is also consistent with the fact that the magnetic field in component *c3* also points back toward the core. In either of these interpretations the magnetic field in the jet north of *C* is longitudinal.
## 4. Theoretical Interpretations and Conclusions
Although 0735+178 represents the most abruptly bent jet observed at milliarcsecond scales to our knowledge, many jets in BL Lacs and quasars appear to be curved. While this is amplified by projection effects, since relativistic jets in blazars must be pointing close to the line of sight to explain superluminal motion and related phenomena, it is not clear what causes the bends of at least 5° that must be present to account for the observed twists.
One idea (Hardee Ha87 (1987)), which has supporting observational evidence (Denn & Mutel DM98 (1998)), is that the jet precesses, with the velocity vectors of the flow following the bends in the jet axis. Numerical three-dimensional magnetohydrodynamical simulations (Hardee, Clarke, & Rosen Ha97 (1997)) show how Kelvin-Helmholtz (K-H) helical instabilities can develop from an initial induced precession at the jet inlet, resulting in a twisted jet with a helical geometry and magnetic field configuration. Such K-H instabilities can also arise from an external pressure gradient not aligned with the initial direction of the jet flow.
Another possibility for jet bending is ballistic precession, such that the flow velocity is directed radially away from the jet apex, in the manner of SS 433; this may be suggested by the trajectories of components *K3*, *K4*, *K5*, and *K6* (G94). Alternatively, the nozzle of the jet could change direction in a more erratic fashion than for precession. Bends can also occur in response to gradients in the pressure of the external medium that confines the jet. Another possibility, also suggested for 0735+178 by G94, is that components only illuminate part of a broad jet funnel, giving the illusion of a bent trajectory.
Each possibility for jet bending results in specific predictions. For example, in the case of erratic change in the nozzle direction, knots should separate from the core following ballistic trajectories. For ballistic precession of the nozzle, the different sections of the jet should connect smoothly and the magnetic field direction should relate to the radius vector from the core, rather than follow the jet curvature. For precession in which the flow velocity vector changes direction with the jet axis, moving components should follow the bends, as should the magnetic field vector. In addition, the flux density and apparent velocity of the components should change as the angle between the velocity and the line of sight vary, as in 4C 39.25 (Alberdi et al. An93 (1993)). A precessing jet can be distinguished from one that changes because of external pressure gradients by the regular pattern of the bend, which should follow the geometry of a helix in projection.
With only 41 days between our two epochs, the expected motions of the components in the jet of 0735+178 lie within the errors in our model-fitting, hence we cannot yet conclude whether components on these scales follow ballistic trajectories or the direction of the jet axis. A ballistic extrapolation of the position of the components observed in G94 shows that ballistic motions could explain the observed twisted structure for the jet. The information provided by the direction of the magnetic field, which follows the bend near component *B*, suggests that the jet in 0735+178 precesses such that the flow velocity is parallel to the jet axis. Indeed, the curved jet axis is consistent with the projection of a helix, which would have to experience an increase in the helical wavelength beyond component *B* to account for the rather straight jet observed at larger scales (see G94). However, the interpretation of the magnetic field structure in *B* and in the jet between *B* and *C* is not entirely clear. It may be that the magnetic field in this part of the jet is associated with radial flow from the core along different position angles, and is longitudinal -directed back toward the core, suggestive of ballistic motion.
Further high resolution polarimetric VLBI observations of 0735+178 are necessary to measure proper motions and determine whether the apparent twisted structure in this source represents a common curved path for the jet flow or a superposition of independently ejected ballistically moving components. Such observations would also make it possible to search for systematic changes in the jet emission of the sort expected if the nozzle of the jet of 0735+178 precesses.
This research was supported in part by Spain’s Dirección General de Investigación Científica y Técnica (DGICYT), grants PB94-1275 and PB97-1164, by NATO travel grant SA.5-2-03 (CRG/961228), and by U.S. National Science Foundation grant AST-9802941.
## 1 Introduction
The most popular regularization method for perturbative calculations in gauge theories is dimensional regularization . In conjunction with the minimal subtraction (MS) scheme, it leads to renormalized Green functions satisfying the Slavnov-Taylor identities. It has problems, however, in supersymmetric gauge theories, because invariance under supersymmetry transformations depends on the specific dimensionality of the objects involved. Hence, in general, dimensional regularization does not preserve supersymmetry. To improve the situation, Siegel proposed a modified version of the method , called regularization by dimensional reduction (or dimensional reduction, for short). It treats integral momenta (or space-time points) as $`d`$-dimensional vectors but takes all fields to be four-dimensional tensors or spinors. The relation between the four-dimensional and $`d`$-dimensional spaces is given by dimensional reduction from 4 to $`d`$ dimensions, i.e., the 4-dimensional space is decomposed into a direct sum of $`d`$- and (4-$`d`$)-dimensional subspaces in the following sense (we always work in Euclidean space) : the 4-dimensional metric tensor $`\delta _{\mu \nu }`$ (satisfying the properties $`\delta _{\mu \nu }\delta _{\nu \rho }=\delta _{\mu \rho }`$ and $`\delta _{\mu \mu }=4`$) and the $`d`$-dimensional one $`\stackrel{~}{\delta }_{\mu \nu }`$ (satisfying $`\stackrel{~}{\delta }_{\mu \nu }\stackrel{~}{\delta }_{\nu \rho }=\stackrel{~}{\delta }_{\mu \rho }`$ and $`\stackrel{~}{\delta }_{\mu \mu }=d`$) are related by
$$\delta _{\mu \nu }\stackrel{~}{\delta }_{\nu \rho }=\stackrel{~}{\delta }_{\mu \rho }.$$
(1)
Although this method is known to be inconsistent , the inconsistencies are in many cases under control and dimensional reduction is actually the preferred regularization method for explicit calculations in supersymmetric theories. (In Ref. some modifications were proposed which make the scheme consistent at the price of breaking supersymmetry at higher orders.)
Differential renormalization is a position-space method that performs regularization and subtraction in one step by substituting ill-defined expressions by derivatives of well-defined ones . Recently, a new version aiming to preserve gauge invariance and supersymmetry has been developed and automatized at the one-loop level. This version, called constrained differential renormalization (CDR), is based on a set of rules that determine the renormalization of the Green functions. In Ref. , T. Hahn and one of the authors (M.P.V.) argued and explicitly checked that, to one loop, CDR and dimensional reduction in the MS scheme render the same results, up to a redefinition of the renormalization scales. Our purpose here is to discuss this in greater detail. In Section 2 we do it in position space. Its counterpart in momentum space is briefly discussed in Section 3. In Section 4, we illustrate the equivalence by comparing the calculation of the anomalous magnetic moment of a charged lepton in supergravity when CDR and dimensional reduction are employed. Finally, Section 5 is devoted to conclusions.
## 2 CDR and dimensional reduction in position space
CDR, as the usual differential renormalization method, is naturally formulated in position space. CDR renormalizes each Feynman graph by reducing it to a linear combination of basic functions (and their derivatives) which are then replaced by their renormalized expressions. The renormalization of the basic functions is fully determined by four rules to be described below, which are significant for the fulfilment of Ward identities. A generic (one-loop) basic function is a string of propagators, with a differential operator $`𝒪`$ acting on the last one. $`𝒪`$ is either the identity or a “product” of space-time derivatives. Basic functions with differential operators with contracted or uncontracted indices are considered independent, because it turns out that contraction of Lorentz indices does not commute with CDR. For this reason, to decompose a Feynman graph into basic functions one must simplify all the (Dirac) algebra and contract all Lorentz indices. Notice that reducing the renormalization of a Feynman graph to the renormalization of the basic functions is equivalent to linearity and compatibility of CDR with the Leibniz rule for the derivative of a product (which is used in the decomposition).
The renormalization of the basic functions is determined by four rules :
1. Differential reduction: singular expressions are substituted by derivatives of regular ones. We distinguish two cases:
1. Functions with singular behaviour worse than logarithmic ($`x^{-4}`$) are reduced to derivatives of logarithmically singular functions without introducing extra dimensionful constants.
2. Logarithmically singular functions are written as derivatives of regular functions. This requires introducing an arbitrary dimensionful constant.
2. Formal integration by parts: derivatives act formally by parts on test functions. In particular,
$$[\partial F]^R=\partial F^R,$$
(2)
where $`F`$ is an arbitrary function and $`R`$ stands for renormalized.
3. Delta function renormalization rule:
$$[F(x,x_1,\dots ,x_n)\delta (x-y)]^R=[F(x,x_1,\dots ,x_n)]^R\delta (x-y).$$
(3)
4. The general validity of the propagator equation:
$$\left[F(x,x_1,\dots ,x_n)(\Box ^x-m^2)\mathrm{\Delta }_m(x)\right]^R=\left[F(x,x_1,\dots ,x_n)(-\delta (x))\right]^R,$$
(4)
where $`\mathrm{\Delta }_m(x)=\frac{1}{4\pi ^2}\frac{mK_1(mx)}{x}`$ and $`K_1`$ is a modified Bessel function.
Rule 1 reduces the “degree of singularity”, connecting singular and regular expressions. Rule 2 is essential to make sense of rule 1, for otherwise the right-hand-side of it would not be a well-defined distribution. These two rules are the essential prescriptions of the method of differential renormalization. Forbidding the introduction of dimensionful scales outside logarithms, we completely fix the scheme. This prescription simplifies calculations and the renormalization group equation. Nevertheless, in all cases we have studied (scalar and spinor QED and QCD) the inclusion of dimensionful constants outside logarithms does not spoil gauge invariance, as long as the other rules are respected. Note that the last three rules are valid mathematical identities among tempered distributions when applied to a well-behaved enough function $`F`$. The rules formally extend their range of applicability to arbitrary functions.
Rule 1 specifies the renormalization of any one-loop expression up to arbitrary local terms. The other rules lead to a system of algebraic equations for these local terms . It turns out that a solution exists, and this solution is unique once an initial condition is given (apart from the requirement in rule 1a of not introducing extra dimensionful constants, which is also an initial condition):
$$\left[\mathrm{\Delta }_0(x)^2\right]^R=\left[\left(\frac{1}{4\pi ^2x^2}\right)^2\right]^R=-\frac{1}{(4\pi ^2)^2}\frac{1}{4}\Box \frac{\mathrm{log}x^2M^2}{x^2}.$$
(5)
This is the most general realization of rule 1b for $`\mathrm{\Delta }_0(x)^2`$, and introduces the unique dimensionful constant of the whole process, $`M`$, which has dimensions of mass and plays the role of renormalization group scale.
The decomposition of Feynman graphs into basic functions can be performed in both dimensional regularization and dimensional reduction in exactly the same way as we have described for CDR. Although in the dimensional methods this prescription is not necessary (for in $`d`$ dimensions everything is well-defined), we shall assume that all Lorentz indices have been contracted before identifying the basic functions. These contain only $`d`$-dimensional objects both in dimensional regularization and in dimensional reduction. Indeed, although in the latter contractions with the 4-dimensional metric tensor are performed, Eq. 1 projects them into the $`d`$-dimensional subspace. Hence, the regulated basic functions are identical in these two methods. On the other hand, expressions dimensionally regulated satisfy rules 2 to 4 because they are well-defined distributions. They also satisfy rule 1a for the same reason (which agrees with the scaling property of $`d`$-dimensional integrals, which forbids the appearance of new dimensionful constants). A renormalization scale $`\mu `$ is introduced to keep the coupling constant with a fixed dimension and appears only inside logarithms. Rule 1b is never needed because the (formal) degree of divergence is non-integer in the dimensional methods. Instead, the use of rule 1a in expressions which diverge logarithmically when $`ϵ=\frac{4-d}{2}\to 0`$, gives rise to poles of the form $`\frac{1}{ϵ}`$. In particular, the regularized value of $`\mathrm{\Delta }_0(x)^2`$ is
$$\mu ^{2ϵ}\frac{\mathrm{\Gamma }^2(\frac{d}{2}-1)}{4^2\pi ^d}x^{4-2d}=\frac{1}{(4\pi ^2)^2}\left[\pi ^2\frac{1}{ϵ}\delta ^{(d)}(x)-\frac{1}{4}\Box \frac{\mathrm{log}(x^2\mu ^2\pi \gamma _Ee^2)}{x^2}\right]+O(ϵ),$$
(6)
where we have included the global factor $`\mu ^{2ϵ}`$ to have a dimensionless argument in the logarithm, expanded in $`ϵ`$ and used the $`d`$-dimensional equalities
$$x^{-p}=\frac{\Box x^{-p+2}}{(-p+2)(d-p)}$$
(7)
and
$$\Box \left[\frac{\mathrm{\Gamma }(\frac{d}{2}-1)}{4\pi ^{d/2}}x^{2-d}\right]=-\delta ^{(d)}(x),$$
(8)
to rewrite it. $`\gamma _E=1.781\mathrm{\ldots }`$ is Euler’s constant.
Now, since the dimensionally regularized basic functions satisfy the CDR rules, they must also be a solution to the set of algebraic equations discussed above, but with the initial condition given by Eq. 6. This is true for each order of the Laurent series in $`ϵ`$. Therefore, subtracting the $`\frac{1}{ϵ}`$ poles, which always multiply a local term, and taking the limit $`ϵ\to 0`$ (i.e., using the MS scheme) one obtains renormalized basic functions that are $`ϵ`$-independent solutions of the equations. In particular, the renormalization of $`\mathrm{\Delta }_0(x)^2`$ is given by Eq. 5 with
$$M^2=\mu ^2\pi \gamma _Ee^2.$$
(9)
Once the initial condition is completely fixed, the solution to the set of equations is unique, so it must be the same in CDR and in dimensional regularization or dimensional reduction. Summarizing, the renormalized basic functions in dimensional regularization, dimensional reduction and CDR are identical if the MS scheme is used for the former methods and Eq. 9 holds.
This does not mean that the renormalized Feynman diagrams are the same in the three methods, because in the dimensional ones the subtraction must be performed after multiplying by the coefficients outside the basic functions. Then, if these coefficients contain $`O(ϵ)`$ pieces, the structure of the Laurent series can be spoiled. Extra local $`O(ϵ^0)`$ terms are picked up from the $`\frac{1}{ϵ}`$ poles of the basic functions, and the final result does not, in general, coincide with the CDR one. However, this does not occur in the case of dimensional reduction because, if the decomposition of the diagram has been performed as in CDR, there are no $`O(ϵ)`$ pieces outside the basic functions. The reason is that all the coefficients are 4-dimensional and are never projected into $`d`$ dimensions since all contractions with $`d`$-dimensional objects were already performed and included in the definition of the basic functions. In other words, with this decomposition all external indices are 4-dimensional (and all internal ones are $`d`$-dimensional). Therefore, renormalized Feynman diagrams in CDR and in dimensional reduction with MS coincide, if Eq. 9 holds. This is not true in dimensional regularization because the dimension $`d`$ can appear explicitly outside the basic functions, for everything is considered $`d`$-dimensional. For example, $`\stackrel{~}{\delta }_{\mu \mu }=d`$ can appear outside the basic functions. Dimensional regularization only coincides with dimensional reduction and CDR at the level of basic functions.
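The mechanism can be illustrated with a toy symbolic computation (a sketch only; the pole and the coefficient below are schematic placeholders, not actual basic functions):

```python
# Sketch: an O(eps) piece in a coefficient outside a basic function combines with the
# 1/eps pole of that function and leaves behind an extra finite local term after MS.
import sympy as sp

eps, L = sp.symbols('epsilon L', positive=True)   # L stands for a finite logarithm
pole = 1/eps + L                                  # schematic basic function: pole + finite part
d = 4 - 2*eps                                     # d-dimensional coefficient (e.g. delta_{mu mu})

full = sp.series(d*pole, eps, 0, 1).removeO()     # what dimensional regularization produces
naive = 4*pole                                    # coefficient kept strictly 4-dimensional
print(sp.simplify(full - naive))                  # -> -2: the extra local O(eps^0) term
```

This finite mismatch is precisely the kind of local term that distinguishes dimensional regularization from dimensional reduction and CDR.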
## 3 CDR and dimensional reduction in momentum space
Obviously, if CDR and dimensional reduction give the same renormalized amplitudes in position space, they do too in momentum space, because the Fourier transforms of well-defined distributions are uniquely determined. The decomposition into basic functions is performed exactly as in position space, the basic functions corresponding now to (tensor) basic integrals of a set of internal momenta times a product of propagators. This can be done because the linearity of CDR in position space (together with rule 2) implies linearity in momentum space. Then, one just has to substitute divergent basic integrals by the Fourier transforms of the corresponding renormalized expressions in position space. In the case of dimensional reduction this is the same as doing the full calculation in momentum space. The Fourier transform of the initial condition Eq. 5 is
$$\left[\int \frac{d^4k}{(2\pi )^4}\frac{1}{k^2(k-p)^2}\right]^R=\frac{1}{16\pi ^2}\mathrm{log}\frac{\overline{M}^2}{p^2},$$
(10)
where $`\overline{M}=2M/\gamma _E`$. The relation between this $`\overline{M}`$ and $`\mu `$ is given by
$$\overline{M}^2=\frac{4\pi \mu ^2}{\gamma _E}e^2=\overline{\mu }^2e^2,$$
(11)
where $`\overline{\mu }`$ is the renormalization scale of the $`\overline{\text{MS}}`$ scheme. This is the relation found in Ref. .
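As a quick numerical cross-check that Eqs. (9)–(11) are mutually consistent (a sketch only; the value chosen for $`\mu `$ is arbitrary and purely illustrative):

```python
import math

gamma_E = math.exp(0.5772156649015329)   # gamma_E = 1.781..., as defined below Eq. (8)
mu = 1.3                                 # arbitrary renormalization scale

M2 = mu**2*math.pi*gamma_E*math.e**2     # Eq. (9)
Mbar2 = (2*math.sqrt(M2)/gamma_E)**2     # Mbar = 2 M / gamma_E
mubar2 = 4*math.pi*mu**2/gamma_E         # MS-bar scale
print(Mbar2, mubar2*math.e**2)           # Eq. (11): the two numbers coincide
```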
## 4 A physical example: $`(g-2)_l`$ in supergravity
The calculation of the anomalous magnetic moment of a charged lepton, $`(g-2)_l`$, in supergravity is a convenient place to compare CDR with dimensional regularization and dimensional reduction. First, $`(g-2)_l`$ is an observable; second, it is power counting divergent (and hence regularization dependent); and third, supersymmetry requires it to vanish . The diagrams giving $`e\kappa ^2`$ corrections, where $`e`$ is the electric charge and $`\kappa ^2=8\pi G_N`$, with $`G_N`$ Newton’s constant, are depicted in Fig. 1.
This calculation has been discussed in detail several times . Table 1 gathers the different contributions in dimensional regularization, dimensional reduction and CDR.
Although the contribution of each diagram diverges, the sum of all diagrams where a graviton (D1-D5) is interchanged is finite (in all methods), as is the sum of those with a gravitino interchange (D6-D10), and hence the total sum. Whereas dimensional regularization breaks supersymmetry and gives a non-zero result , a vanishing correction is obtained both in dimensional reduction and in CDR . We see that CDR and dimensional reduction in MS do give the same results for each diagram if the renormalization scales are related by Eq. 11. The total graviton (gravitino) contribution being finite, it is identical in both methods. (In Ref. the scale-independent parts of the CDR result have errors due to the omission of a local term in one basic function, but the total graviton and gravitino contributions are correct because that local term cancels in the sums.)
## 5 Conclusions
We have discussed the one-loop equivalence of CDR and dimensional reduction in the MS ($`\overline{\text{MS}}`$) scheme. The result also applies in the presence of anomalies, for both methods can be used with the same computing rules in position (momentum) space: Feynman diagrams are decomposed completely into basic functions (integrals), doing all the algebra in 4 dimensions, and then the singular (divergent) expressions are replaced by the renormalized ones. In the two methods, chiral anomalies appear as ambiguities in the writing of the external tensors: it is possible to add pieces which vanish in 4 dimensions but change the decomposition into basic functions (integrals), and this can affect the final result due to the non-commutation of renormalization with contraction of Lorentz indices. In dimensional reduction this can be also understood as the fact that these pieces are projected into $`d`$ dimensions, where they do not vanish any longer. In Ref. we showed how the right ABJ anomaly was recovered in the context of CDR and checked that a democratic treatment of the traces located all the anomaly in the chiral current. Exactly the same applies to dimensional reduction.
CDR has so far been developed only at the one-loop level, but an extension of the method to higher orders, based on the same rules 1–4 or a suitable generalization of them, is in principle possible. It does not follow from our discussion that such a method should be equivalent to dimensional reduction. On the one hand, dimensional reduction might not obey the extended rules; on the other, the mere presence of subdivergences changes the simple procedure discussed here. Ideally, the extended CDR would preserve gauge invariance and supersymmetry, and not suffer from inconsistencies such as the ones found in dimensional reduction.
## Acknowledgements
We thank J.I. Latorre for discussions, and the organizers for a pleasant meeting and for their patience. This work has been supported by CICYT, under contract number AEN96-1672 and by Junta de Andalucía, FQM101.
# Theory of sound attenuation in glasses: The role of thermal vibrations.
## Abstract
Sound attenuation and internal friction coefficients are calculated for a realistic model of amorphous silicon. It is found that, contrary to previous views, thermal vibrations can induce sound attenuation at ultrasonic and hypersonic frequencies that is of the same order or even larger than in crystals. The reason is the internal-strain induced anomalously large Grüneisen parameters of the low-frequency resonant modes.
Sound attenuation in glasses is poorly understood. This is because many competing factors lead to sound wave damping. Most important are thermally activated structural relaxation, hypothetical tunneling states, topological defects, and thermal vibrations. Sorting out different contributions for a given temperature $`T`$ and sound wave frequency $`\nu =\mathrm{\Omega }/2\pi `$ is a difficult task.
Experiments show the following features: (i) At temperatures $`T\lesssim 10`$ K and ultrasonic frequencies (10 MHz to 1 GHz) the sound attenuation coefficient $`\mathrm{\Gamma }(T)`$ exhibits a small, frequency-dependent peak . (ii) At higher temperatures, between 10 and 200 K, another peak in $`\mathrm{\Gamma }(T)`$ develops whose center increases only moderately when $`\nu `$ increases. The peak broadens at hypersonic frequencies and is not seen above 100 GHz . As a function of frequency, $`\mathrm{\Gamma }(\nu )\propto \nu `$ at the peak temperatures . (iii) At hypersonic frequencies $`\mathrm{\Gamma }(T)`$ appears to be almost independent of (or slightly increasing with) $`T`$ above the peak (ii) to at least 300 K . (iv) At room temperature $`\mathrm{\Gamma }(\nu )\propto \nu ^2`$ from at least 200 MHz ; this dependence continues up to about 300 GHz and seems valid for any temperature above the peak (ii). Finally, (v) the attenuation coefficients for longitudinal ($`\mathrm{\Gamma }_L`$) and transverse ($`\mathrm{\Gamma }_T`$) waves are similar .
While the low-temperature behavior (i) of $`\mathrm{\Gamma }`$ is understood based on the interaction of sound waves with tunneling states , features (ii) through (v) lack consistent theoretical justification. The higher-temperature peak (ii) shows many attributes of a thermally activated relaxational process, but a calculation shows that to fit experiment, different sets of relaxational processes are needed for different $`\nu `$ . Also the plateau region (iii) is difficult to explain by a thermally-activated relaxation process since numerical fits require unphysically large attempt frequencies. Further, thermal relaxation processes give attenuation that increases more slowly than quadratically with increasing $`\nu `$, contradicting (iv). Thermal vibrations have been overlooked as a sound-wave damping factor on grounds that vibrational modes would need unreasonably large Grüneisen parameters ($`\gamma \sim 200`$ for vitreous silica ) to account for the measured $`\mathrm{\Gamma }`$. Until now, however, there has been no numerical study to test this argument.
In this paper we examine the role thermal vibrations play in the sound attenuation in glasses. We will use the term “vibron” to refer to any quantized vibrational mode in a glass. Our analysis is restricted to the region $`\mathrm{\Omega }\tau _{\mathrm{in}}\ll 1`$ (the so-called Akhiezer regime), where $`\tau _{\mathrm{in}}`$ is the inelastic lifetime or thermal equilibration time of a thermal vibron. We show that the unusually strong coupling (measured by Grüneisen parameters $`\gamma `$) between sound waves and the low-frequency resonant modes explains the features (iii) through (v). As for the interpretation of (ii), our calculation shows that confusion arises because there actually are two different peaks. One is caused by relaxational processes (not addressed here) and dominates below 1 GHz, and another is due to thermal vibrations and dominates at the lowest hypersonic frequencies. A double peak structure should be expected at intermediate frequencies. There is some indication for such structure in measurements on vitreous silica. Our calculation is also a prediction: the existing measurements on amorphous Si report $`\mathrm{\Gamma }`$ at too low frequencies (300 MHz) to see contributions of thermal vibrations. But even at higher frequencies (say, 30 GHz) one may expect traces of thermally activated peaks due to various defects. Recently discovered amorphous Si with 1 at. % H in which tunneling (and perhaps also relaxational) processes are inhibited would be excellent to test our results.
In the Akhiezer regime a sound wave passing through a solid can be attenuated by two processes. First, if the wave is longitudinal, periodic contractions and dilations in the solid induce a temperature wave via thermal expansion. Energy is dissipated by heat conduction between regions of different temperatures. Second, dissipation occurs as the gas of vibrons tries to reach an equilibrium characterized by a local (sound-wave induced) strain. This is the internal friction mechanism. To establish the relative importance of the two processes, consider order-of-magnitude formulas $`\mathrm{\Gamma }_h\sim (\mathrm{\Omega }^2/\rho v^3)(\kappa T\alpha ^2\rho ^2v^2/C^2)`$ and $`\mathrm{\Gamma }_i\sim (\mathrm{\Omega }^2/\rho v^3)(CT\tau _{\mathrm{in}}\gamma ^2)`$ for the heat conductivity and internal friction processes, respectively. Here $`\rho `$ is density, $`C`$ specific heat per unit volume, $`v`$ sound velocity, $`\kappa `$ thermal conductivity, and $`\alpha `$ is the coefficient of thermal expansion. The ratio $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_i\sim (\kappa \alpha ^2\rho ^2v^2)/(C^3\tau _{\mathrm{in}}\gamma ^2)`$ becomes more intuitive when putting $`\alpha \sim C\gamma /B`$ ($`B\sim \rho v^2`$ is the bulk modulus) and $`\kappa \sim CD`$ where $`D`$ is diffusivity. Then $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_i\sim D/(v^2\tau _{\mathrm{in}})`$. The factor $`v^2\tau _{\mathrm{in}}`$ measures the ability of vibrons to absorb energy from a sound wave of velocity $`v`$. The difference between a glass and a crystal lies in the values of $`D`$ and $`\tau _{\mathrm{in}}`$. In crystals $`D\sim v^2\tau _{\mathrm{in}}`$, that is, energy is carried by phonon wave packets with group velocity $`v`$. The ratio $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_i`$ is then of order unity. In glasses energy is transferred by diffusion (spreading rather than ballistic propagation of wave packets) and $`D`$ is not related to $`\tau _{\mathrm{in}}`$. One of the reasons the contribution to $`\mathrm{\Gamma }_i`$ of thermal vibrons was previously underestimated is that $`\tau _{\mathrm{in}}`$ was guessed from thermal conductivity ; this gave too small $`\tau _{\mathrm{in}}`$. For amorphous Si $`D\approx 10^{-6}`$ $`\mathrm{m}^2/s`$, $`v\approx 8\times 10^3`$ m/s, and $`\tau _{\mathrm{in}}\approx 10^{-12}`$ s give $`\mathrm{\Gamma }_h/\mathrm{\Gamma }_i\approx 0.02`$. Since these are typical values, $`\mathrm{\Gamma }_h`$ can be neglected. This is consistent with experiment: compared with crystals glasses have smaller $`\kappa `$ and yet $`\mathrm{\Gamma }`$ can be larger .
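Plugging the quoted numbers into this estimate (a one-line check; the values are just the representative ones given above):

```python
D, v, tau_in = 1e-6, 8e3, 1e-12      # m^2/s, m/s, s (representative values for amorphous Si)
print(D/(v**2*tau_in))               # ~0.016, i.e. of order 0.02
```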
Internal friction leads to sound-wave energy attenuation $`\mathrm{\Gamma }=(\mathrm{\Omega }^2/\rho v^3q^2)\eta _{\alpha \beta \gamma \delta }q_\alpha e_\beta q_\gamma e_\delta `$, where $`\eta _{\alpha \beta \gamma \delta }`$ is the internal friction tensor with cartesian coordinates $`\alpha \mathrm{}\delta `$ and $`𝐪`$ ($`𝐞`$) is the wave vector (polarization) of the sound wave. Summation over repeated indexes is assumed. We will evaluate $`\mathrm{\Gamma }`$ for both longitudinal ($`L`$) and transverse ($`T`$) sound waves with wave vectors averaged over all directions:
$`\mathrm{\Gamma }_L={\displaystyle \frac{\mathrm{\Omega }^2}{15\rho v_L^3}}(\eta _{\alpha \alpha \beta \beta }+2\eta _{\alpha \beta \alpha \beta }),`$ (1)
$`\mathrm{\Gamma }_T={\displaystyle \frac{\mathrm{\Omega }^2}{30\rho v_T^3}}(3\eta _{\alpha \beta \alpha \beta }-\eta _{\alpha \alpha \beta \beta }).`$ (2)
The coefficients $`\eta _{\alpha \beta \gamma \delta }`$ are the real part of a complex tensor $`\overline{\eta }_{\alpha \beta \gamma \delta }`$ which can be obtained by solving a kinetic equation in relaxation time approximation ,
$`\overline{\eta }_{\alpha \beta \gamma \delta }={\displaystyle \underset{j}{\sum }}Tc_j\tau _j{\displaystyle \frac{\gamma _{\alpha \beta }^j\gamma _{\gamma \delta }^j-\left(\overline{\gamma }_{\alpha \beta }\gamma _{\gamma \delta }^j+\gamma _{\alpha \beta }^j\overline{\gamma }_{\gamma \delta }\right)/2}{1-i\mathrm{\Omega }\tau _j}}.`$ (3)
The summation is over all vibrational modes $`j`$; $`c_j`$ and $`\tau _j`$ denote mode specific heat and relaxation time. The Grüneisen tensor $`\gamma _{\alpha \beta }^j`$ is the relative shift of mode frequency $`\omega _j`$ per unit strain $`ϵ_{\alpha \beta }`$; $`\overline{\gamma }`$ is the mode average of $`\gamma ^j`$ weighted with $`c_j/(1-i\mathrm{\Omega }\tau _j)`$. The applicability of kinetic theory to the problem of internal friction was justified by DeVault and coworkers who obtained $`\eta `$ from a microscopic theory as an autocorrelation function of the momentum current density operator. Remarkably, the microscopic theory shows that the momentum current in a solid is not monopolized by ballistically propagating vibrational modes as in the case of the energy current. Nonpropagating (even localized) modes can contribute as much as propagating ones to the momentum current. One consequence is that the concept of “minimum kinetic coefficient,” as introduced for electrical or heat conductivity of disordered systems, is not realized for internal friction. We generalized DeVault’s theory to include internal strain, the atomic rearrangements in a strained solid. We found that internal strain affects internal friction only by modifying $`\gamma ^j`$, as in the case of thermal expansion: $`\gamma ^j`$ now reflects the change between the initial mode frequency and the frequency of the mode after the rescaling (scaling parameter $`1+ϵ`$) plus the rearranging of atomic positions (to achieve a new equilibrium at strain $`ϵ`$). Internal strain is very important for thermal expansion of glasses; we will show that it is important for $`\eta `$ (and $`\mathrm{\Gamma }`$) as well.
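The mode sums in Eqs. (1)–(3) are straightforward to evaluate numerically once the mode specific heats, lifetimes, and Grüneisen tensors are available. The following is only a schematic sketch of such an evaluation, following Eq. (3) as written above; the array names and input data are placeholders to be supplied from a lattice-dynamics calculation:

```python
import numpy as np

def internal_friction(Omega, T, c, tau, gamma):
    """Real internal friction tensor eta_{abcd} from Eq. (3).
    c[j]     : mode specific heats
    tau[j]   : mode relaxation times
    gamma[j] : 3x3 Grueneisen tensors gamma^j_{ab}
    """
    w = c/(1 - 1j*Omega*tau)                          # weights c_j/(1 - i Omega tau_j)
    gbar = np.einsum('j,jab->ab', w, gamma)/w.sum()   # weighted mode average of gamma^j
    eta = np.zeros((3, 3, 3, 3), dtype=complex)
    for j in range(len(c)):
        g = gamma[j]
        num = np.einsum('ab,cd->abcd', g, g) \
            - 0.5*(np.einsum('ab,cd->abcd', gbar, g) + np.einsum('ab,cd->abcd', g, gbar))
        eta += T*c[j]*tau[j]*num/(1 - 1j*Omega*tau[j])
    return eta.real                                   # eta_{abcd} entering Eqs. (1), (2)

def attenuation(Omega, eta, rho, vL, vT):
    tr1 = np.einsum('aabb', eta)                      # eta_{aabb}
    tr2 = np.einsum('abab', eta)                      # eta_{abab}
    GL = Omega**2/(15*rho*vL**3)*(tr1 + 2*tr2)        # Eq. (1)
    GT = Omega**2/(30*rho*vT**3)*(3*tr2 - tr1)        # Eq. (2)
    return GL, GT
```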
We calculate $`\eta `$ and $`\mathrm{\Gamma }`$ for the model of amorphous Si based on the Wooten-Winer-Weaire atomic coordinates and Stillinger-Weber interatomic forces , with 1000 atoms arranged in a cube of side 27.549 Å with periodic boundary conditions. Diagonal Grüneisen parameters $`\gamma _{\alpha \alpha }^j/3\equiv (\gamma _{11}^j+\gamma _{22}^j+\gamma _{33}^j)/3`$ for this model were given in Ref. , while the transverse $`\gamma _{\alpha \beta }`$ are calculated here. Vibrational lifetimes $`\tau _j`$ are extracted from their values in the 216-atom version of the model (see also Ref. ). The model has sound velocities $`v_L=7640`$ m/s and $`v_T=3670`$ m/s.
Figure 1 shows the calculated $`\mathrm{\Gamma }(\nu )`$ for longitudinal and transverse sound waves in amorphous Si from 10 MHz to 1 THz at 300 K. The attenuation $`\mathrm{\Gamma }\propto \nu ^2`$ up to about 100 GHz, where the condition for the applicability of kinetic theory, $`\mathrm{\Omega }\tau _{\mathrm{in}}\ll 1`$, reaches its limit ($`\tau _{\mathrm{in}}\approx 1`$ ps). Our calculation is not valid beyond this point. In comparison, the measured attenuation of longitudinal waves in vitreous silica grows quadratically with $`\nu `$ up to at least 400 GHz suggesting that $`\tau _{\mathrm{in}}`$ in vitreous silica is several times smaller than in amorphous Si. This is not surprising since Si is remarkably harmonic: room temperature heat conductivity of crystalline Si is larger by an order of magnitude than that of quartz, and a similar relation may hold for the corresponding $`\tau _{\mathrm{in}}`$ of the glassy phases.
More surprising is the comparison with crystalline Si. Figure 1 shows that $`\mathrm{\Gamma }_L`$ is similar for the amorphous and crystalline cases (measured $`\mathrm{\Gamma }`$ for vitreous silica is several times larger than for quartz). One would naively expect the sound attenuation in a glass to be much smaller than in the corresponding crystal since, owing to a distribution of bond lengths and bond angles, anharmonicity of the glass is higher (and $`\tau _{\mathrm{in}}`$ smaller). The same interatomic potential, for example, yields $`\tau _{\mathrm{in}}`$ for high-frequency phonons in crystalline Si at 300 K about five times larger than in amorphous Si. The reason why $`\mathrm{\Gamma }`$ in glasses can be of the same order or even higher than in crystals is the internal-strain induced anomalously large Grüneisen parameters of the resonant modes (see also Fig.3). (Resonant modes are low-frequency extended modes whose amplitude is unusually large at a small, typically undercoordinated region .) Atomic rearrangements caused by internal strain are largest in the same regions of undercoordination where the resonant modes have largest amplitude . This leads to high sensitivity (measured by $`\gamma `$) of the frequencies of these modes to strain. If the internal strain is neglected, the sound attenuation is an order of magnitude smaller, as seen in Fig. 1. (Since the resonant modes have low frequencies, their $`\tau _j`$ is longer than an average $`\tau _{\mathrm{in}}`$; this adds even more weight to these modes.) Fewer than one percent of the modes are capable of increasing $`\mathrm{\Gamma }`$ by a decade! We believe the measured $`\mathrm{\Gamma }`$ for vitreous silica is also caused by the strong coupling of sound waves and resonant modes. Vitreous silica is a much more open structure than amorphous Si so the number of resonant modes should be higher, bringing $`\mathrm{\Gamma }`$ above the crystalline value.
Another interesting feature in Fig. 1 is the relative attenuation strength for longitudinal and transverse sound waves. While our model of amorphous Si gives $`\mathrm{\Gamma }_L/\mathrm{\Gamma }_T\approx 1/3`$ at 300 K, the measured ratio for crystalline Si is reversed: $`\mathrm{\Gamma }_L/\mathrm{\Gamma }_T\approx 3`$ . This again shows how differently sound is attenuated in glasses and in crystals. The ratio $`\mathrm{\Gamma }_L/\mathrm{\Gamma }_T`$ can be written as $`(v_T/v_L)^3(\gamma _L^2/\gamma _T^2)`$, where $`\gamma _L`$ and $`\gamma _T`$ are effective Grüneisen parameters. A crude way to estimate $`\gamma _L^2`$ and $`\gamma _T^2`$, suggested by Eqs. 1 and 2, is to take mode averages of $`(\gamma _{\alpha \alpha }^j\gamma _{\beta \beta }^j+2\gamma _{\alpha \beta }^j\gamma _{\alpha \beta }^j)/15`$ and $`(3\gamma _{\alpha \beta }^j\gamma _{\alpha \beta }^j-\gamma _{\alpha \alpha }^j\gamma _{\beta \beta }^j)/30`$. Our model gives $`\gamma _L^2\approx 3`$ and $`\gamma _T^2\approx 1`$. The ratio $`\mathrm{\Gamma }_L/\mathrm{\Gamma }_T`$ is then about 1:3, in accord with the full calculation. Assuming the same ratio $`\gamma _L^2/\gamma _T^2\approx 3`$ for vitreous silica ($`v_L=5800`$ m/s and $`v_T=3800`$ m/s), transverse and longitudinal waves are attenuated about equally. This is observed in experiment. The explanation of the measured $`\mathrm{\Gamma }_L/\mathrm{\Gamma }_T`$ in crystalline Si can be found in Ref. .
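The arithmetic behind these ratios is elementary; as a quick check (using the sound velocities and effective Grüneisen parameters quoted above):

```python
gL2, gT2 = 3.0, 1.0                      # effective Grueneisen parameters (model values)

print((3670/7640)**3*gL2/gT2)            # amorphous Si:    ~0.33, i.e. Gamma_L/Gamma_T ~ 1/3
print((3800/5800)**3*gL2/gT2)            # vitreous silica: ~0.84, i.e. roughly equal attenuation
```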
In Fig. 2 we plot $`\mathrm{\Gamma }(T)`$ for different $`\nu `$. A remarkable feature is a peak at about 20 K at 1 MHz and below. As $`\nu `$ increases, the peak shifts towards higher $`T`$ and vanishes above 4-5 GHz. Two factors cause the peak. (a) The sum $`\sum _jc_j(\gamma ^j)^2`$ saturates at much lower temperatures (about 50 K) than the model Debye temperature $`T_D\approx 450`$ K. This is because the relevant $`j`$ are resonant modes with small frequencies. (b) For low-frequency modes $`T\tau _j`$, after increasing linearly, develops a peak before becoming constant \[much like $`\mathrm{\Gamma }(T)`$ itself\]. As the temperature dependence of $`\mathrm{\Gamma }`$ follows $`\sum _jc_j(\gamma ^j)^2T\tau _j`$, the peak appears. At large $`\nu `$ the peak vanishes because of the factor $`1/(1+\mathrm{\Omega }^2\tau ^2)`$ in Eq. 3. At $`T`$ above 100 K $`\mathrm{\Gamma }(T)`$ is nearly constant, as observed in experiment as a plateau (iii). This again follows from (a) and (b).
We are not aware of any experiment with which we could compare our calculations. The measurement of $`\mathrm{\Gamma }(T)`$ of sputtered amorphous Si films reported in Ref. , for example, was performed at 300 MHz. This is too low to see any contributions from thermal vibrations. The whole temperature spectrum is dominated by a single peak of the type (ii), except at very low temperatures. This peak is expected to increase linearly with $`\nu `$, until thermal vibrations become relevant (roughly at 10 GHz), causing a plateau (iv) that increases as $`\nu ^2`$ at higher frequencies. Even at smaller frequencies one may see some vibrational contribution to $`\mathrm{\Gamma }(T)`$ at large enough $`T`$, since the thermally activated peak decreases as $`1/T`$ at large $`T`$.
Anomalous low $`T`$ thermal expansion already suggested very large $`\gamma `$ values for low $`\omega `$ modes. Our large $`\gamma `$ values agree nicely with trends in $`\alpha (T)`$. Like thermal expansion, $`\mathrm{\Gamma }`$ should be strongly sample and model dependent. There is evidence that our highly homogeneous model of amorphous Si becomes free of resonant modes when the number of atoms grows to infinity. That means an infinite model would predict $`\mathrm{\Gamma }`$ about a decade smaller than calculated here. Amorphous silicon, however, can be prepared only in thin films where voids and other inhomogeneities are unavoidable. Voids loosen the strict requirements of a tetrahedral random network (for example by introducing free boundary conditions). Then, as in our finite models, regions of undercoordinated atoms will allow the formation of resonant modes. While this issue for amorphous silicon will be ultimately settled by experiment, our calculation combined with the existing data on vitreous silica strongly suggests the reality of resonant modes.
Our final note concerns the mode dependence of transverse Grüneisen parameters like $`\gamma _{12}`$. Similarly to volumetric $`\gamma _{\alpha \alpha }/3`$, transverse $`\gamma _{12}`$ in Fig. 3 ($`\gamma _{13}`$ and $`\gamma _{23}`$ look the same) is unusually large for resonant modes and has scattered values for high-frequency localized modes. (More resonant modes have $`\gamma _{12}`$ negative than positive which suggests that resonant modes are trapped at highly anisotropic undercoordinated regions whose sizes change under shear.) The $`15-70`$ meV vibrons (diffusons) have $`\gamma _{12}\approx 0`$ (average magnitude 0.02), while the corresponding $`\gamma _{\alpha \alpha }/3`$ are of order unity. Such small values (zeros in an infinite model) are characteristic for diffusons, which are extended modes whose polarization directions (atomic displacements) point, in general, at random. There remains only a short-range correlation between polarization directions which determines the diffuson’s frequency $`\omega _d`$. If a shear, say, $`ϵ_{12}`$ is applied, $`\omega _d`$ changes to $`\omega _d^{\prime }(ϵ_{12})`$. Since long-range order in the diffuson polarization is absent, $`\omega _d^{\prime }(ϵ_{12})=\omega _d^{\prime }(-ϵ_{12})`$, and $`\gamma _{12}`$, which is a linear coefficient in the expansion of $`\omega _d^{\prime }`$ in $`ϵ_{12}`$, must vanish.
We thank J. L. Feldman for helpful discussions. The work was supported by NSF Grant No. DMR 9725037. J. F. acknowledges also support from the U.S. ONR.
# Frequency Dependent Specific Heat of Amorphous Silica: A Molecular Dynamics Computer Simulation
## Introduction
The dynamics of supercooled liquids can be studied by many different techniques, such as light and neutron scattering, dielectric measurements, NMR, or frequency dependent specific heat measurements, to name a few vigo . In order to arrive at a better understanding of these systems, various types of computer simulations have also been used to supplement the experimental data. However, essentially all of these simulations have focussed on the investigation of static properties or have studied the time dependence of structural quantities, like the mean squared displacement of a tagged particle or the decay of the intermediate scattering function. What these simulations have not addressed so far, apart from a notable study of Grest and Nagel grest87 , is the time dependence of thermodynamic quantities, like the specific heat. The reason for the lack of simulations in this direction is that the accurate determination of this quantity in a simulation is very demanding in computer resources because of its collective nature. This fact is of course very regrettable since one of the simplest ways to determine the glass transition temperature in a real experiment is to measure the (static) specific heat. Using ac techniques it is today also possible to measure the frequency dependent specific heat, $`c(\nu )`$, and thus to gain more insight into this observable birge85\_jeong95 . What is so far not possible in real experiments is to measure $`c(\nu )`$ at frequencies higher than 1 MHz, and thus the influence of the microscopic dynamics, which is in the THz range, cannot be investigated. For computer simulations it is, however, no problem to study $`c(\nu )`$ at these high frequencies as well, and in this paper we report the outcome of such an investigation for the strong glass former silica.
## Model and Details of the Simulation
The silica model we use is the one proposed by van Beest et al. beest90 . In this model the interaction $`\varphi (r_{ij})`$ between two particles $`i`$ and $`j`$ a distance $`r_{ij}`$ apart is given by a two body potential of the form
$$\varphi (r_{ij})=\frac{q_iq_je^2}{r_{ij}}+A_{ij}\mathrm{exp}(-B_{ij}r_{ij})-\frac{C_{ij}}{r_{ij}^6}.$$
(1)
The values of the partial charges $`q_i`$ and the constants $`A_{ij}`$, $`B_{ij}`$ can be found in Ref. beest90 . Since the quantity we want to investigate, $`c(\nu )`$, is a collective one, it is necessary to average it over many independent realizations. Thus the system sizes we used are rather small, 336 ions, despite the fact that the dynamics of such a small system will show appreciable finite size effects horbach96 . However, exploratory runs with larger systems showed that these effects do not change the results substantially. The simulations were done at constant volume using a box size of 16.8 Å, thus at a density of 2.36 g/cm<sup>3</sup>, close to the experimental value of 2.2 g/cm<sup>3</sup>. The equations of motion have been integrated with the velocity form of the Verlet algorithm with a time step of 1.6 fs. The temperatures investigated were 6100 K, 4700 K, 4000 K, 3580 K, 3250 K, and 3000 K. At all temperatures the system was first equilibrated for a time which is significantly longer than the typical $`\alpha `$-relaxation time of the system at that temperature.
## Results
In real experiments the frequency dependent specific heat is usually measured in the $`NPT`$ ensemble. Although algorithms exist with which the static equilibrium properties of a system can be measured in a simulation in this ensemble, these algorithms introduce an artificial dynamics of the particles and are therefore not suited to investigate the dynamical properties of the system in this ensemble. Hence we calculated the frequency dependent specific heat in the microcanonical ensemble. Whereas in the $`NPT`$ ensemble the specific heat is related to the fluctuations of the enthalpy, in the $`NEV`$ ensemble it is related to the fluctuations of the kinetic energy grest87 ; scheidler99 . It can be shown that in this ensemble the specific heat at frequency $`\nu `$ is given by
$$c(\nu )=\frac{k_B}{2/3-K(t=0)+i2\pi \nu \int _0^{\infty }dt\,\mathrm{exp}(-i2\pi \nu t)K(t)},$$
(2)
where $`K(t)`$ is the autocorrelation function of the kinetic energy $`E_{\mathrm{kin}}`$ and is defined as
$$K(t)=\frac{N}{\overline{E}_{\mathrm{kin}}^2}\left\langle \left(E_{\mathrm{kin}}(t)-\overline{E}_{\mathrm{kin}}\right)\left(E_{\mathrm{kin}}(0)-\overline{E}_{\mathrm{kin}}\right)\right\rangle .$$
(3)
Here $`\overline{E}_{\mathrm{kin}}`$ is the mean kinetic energy and $`N`$ is the total number of ions. The derivation of Eq. (2) can be found in Ref. scheidler99 .
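For completeness, a minimal sketch of how Eqs. (2) and (3), as written above, can be evaluated from a simulated kinetic-energy time series; the trajectory array, time step, frequency grid and particle number are placeholders:

```python
import numpy as np

def specific_heat(ekin, dt, nu, N=336, kB=1.0):
    """c(nu) per particle from Eqs. (2)-(3).
    ekin : array with E_kin(t) sampled every dt;  nu : array of frequencies."""
    de = ekin - ekin.mean()
    # kinetic-energy autocorrelation K(t), Eq. (3)
    K = np.correlate(de, de, mode='full')[len(de)-1:]/np.arange(len(de), 0, -1)
    K *= N/ekin.mean()**2
    t = np.arange(len(K))*dt
    # Fourier-Laplace transform entering Eq. (2)
    lap = np.trapz(np.exp(-2j*np.pi*nu[:, None]*t[None, :])*K[None, :], t, axis=1)
    return kB/(2.0/3.0 - K[0] + 2j*np.pi*nu*lap)
```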
From Eq. (2) we see that the relevant quantity is the autocorrelation function $`K(t)`$. The time dependence of this quantity, normalized by its value at time $`t=0`$, is shown in Fig. 1 for all the temperatures we investigated.
From this figure we recognize that for high temperatures $`K(t)/K(0)`$ decays very quickly to a value around 0.13 and then goes to zero like a stretched exponential. With decreasing temperature the function shows a plateau at intermediate times, the length of which increases quickly with decreasing temperature. Such a time and temperature dependence is very similar to the one found for the relaxation behavior of structural quantities, such as the intermediate scattering function horbach98 . Apart from these features the curves for the lowest temperatures show also a local minimum at around 0.02 ps, the depth of which increases with decreasing temperatures. The existence of this dip, as well as the observed high frequency oscillations, can be understood by realizing that within the harmonic approximation, which will be valid at even lower temperatures, the correlator is closely related to the autocorrelation function of the velocity, which is well known to show such a dip.
Using Eq. (2) we calculated from $`K(t)`$ the frequency dependent specific heat $`c(\nu )`$. The real and imaginary part of this quantity are shown in Fig. 2 for all temperatures investigated. Let us first discuss $`c^{\prime }(\nu )`$: For very high frequencies we expect this function to go to the ideal gas value of 1.5, since the configurational
degrees of freedom are not able to take up energy at such high frequencies. With decreasing $`\nu `$ the function shows a fast increase in the frequency range which corresponds to the microscopic vibrations. For low temperatures this regime is followed by a plateau, the height of which corresponds to the static specific heat the system would have if no relaxation took place, i.e. to the specific heat of the vibrational degrees of freedom. Since, however, on the time scale of the $`\alpha `$-relaxation time the system relaxes, $`c^{\prime }(\nu )`$ shows a further upward step at the corresponding frequencies. This feature is related to the fact that for very long times, or small frequencies, those configurational degrees of freedom which are not of vibrational type are relaxing and thus can take up energy. At even smaller frequencies the curves then show a plateau, the height of which is the static specific heat of the system. We see that with decreasing temperature this value is decreasing but is always significantly above the harmonic value given by the Dulong-Petit value of 3.0, since the relaxing configurational degrees of freedom give rise to an enhancement of the static specific heat.
We also note that the height of the first step in $`c^{\prime }(\nu )`$ \[coming from the low frequency side\] is the configurational part of the specific heat. We see that at the lowest temperature this height is rather small, 0.7 $`k_B`$ per particle, in agreement with the experimental observation for strong glass-formers.
All these features can also be seen well in the imaginary part of $`c(\nu )`$. At high frequencies we have a microscopic peak which corresponds to the vibrational degrees of freedom. The location and the height of this peak is essentially independent of temperature. At intermediate and low temperatures a second peak is seen at low frequencies, the so-called $`\alpha `$-peak. Its position depends strongly on temperature in agreement with the observation that the $`\alpha `$-relaxation time of structural quantities increases quickly with decreasing temperatures horbach98 . From the location of this peak we can read off $`\nu _{\mathrm{max}}`$, the frequency scale of the relaxation of the specific heat. As it will be shown elsewhere scheidler99 , the product of $`\nu _{\mathrm{max}}`$ with $`\tau (T)`$, the $`\alpha `$-relaxation time of the intermediate scattering function, is essentially independent of the temperature, thus showing the intimate connection between the frequency dependent specific heat and the structural relaxation, in agreement with the prediction of Götze and Latz gotze89 .
To summarize we can say that we have presented the results of a large scale molecular dynamics computer simulation of a realistic model of viscous silica to investigate the frequency dependence of the specific heat. In the frequency regime which is also accessible to experiments, our results are in qualitative agreement with the experimental data birge85\_jeong95 . At higher frequencies we see the influence of the vibrational degrees of freedom on $`c(\nu )`$. Since no experimental data for $`c(\nu )`$ are available for silica, we are not able to compare the results of the present simulations with reality. However, in a previous investigation we have shown that the present model gives very good quantitative agreement of the static specific heat with that of real silica horbach99 , and thus it is not unreasonable to assume that the results of the simulation for the dynamic quantity are reliable as well.
## ACKNOWLEDGMENTS
We thank U. Fotheringham for suggesting this work and A. Latz for many helpful discussions. Part of this work was supported by Schott Glaswerke, by SFB 262/D1 of the Deutsche Forschungsgemeinschaft, and BMBF Project 03 N 8008 C.
It is well known that the $`S`$-matrix of an integrable two-dimensional quantum field theory factorises into products of two-particle amplitudes. Then, the property of factorisation itself and the usual ‘axioms’ of $`S`$-matrix theory constrain the allowed form of the $`S`$-matrix to such an extent that it becomes possible to conjecture its form. Namely, consistency with factorisation translates into the ‘Yang Baxter equation’ while, on general grounds, it is assumed that the $`S`$-matrix exhibits ‘unitarity’, ‘crossing symmetry’, ‘analyticity’, and satisfies the ‘bootstrap equations’ (for a nice recent review see and the references therein).
In addition to this, and especially when the $`S`$-matrix is diagonal, many works on factorised $`S`$-matrices assume ‘Real analyticity’. This means that two-particle amplitudes are not only analytic functions of their arguments, but they take complex-conjugate values at complex-conjugate points. However, this is not required by the usual $`S`$-matrix axioms.
The aim of this letter is to review the consequences of analyticity for two-dimensional $`S`$-matrix amplitudes, and to serve as a reminder that Real analyticity is not essential. In the framework of $`S`$-matrix theory, it is just a special case of a general condition known as ‘Hermitian analyticity’ which is valid only when the theory happens to be time-reversal symmetric. For the two-particle $`S`$-matrix amplitudes, we will deduce the form of the Hermitian analyticity constraints which are given by eq. (21) and constitute our main result.
We provide several examples of diagonal $`S`$-matrices which are Hermitian analytic but not Real analytic. All of them correspond to non-parity invariant integrable field theories. The simplest is a fermion model proposed by Federbush in 1960 . The others are different quantum field theories associated with the non-abelian affine Toda equations recently constructed in . In all these cases, the scattering amplitudes between different particles are not Real analytic. Instead, those amplitudes connected through a parity transformation are related in such a way that Real analyticity would be satisfied if the theory becomes parity invariant. This unusual analyticity condition is just a consequence of Hermitian analyticity.
Other cases where Real analyticity is not satisfied arise in the quantum group approach to factorised $`S`$-matrices and, in particular, to (abelian) affine Toda field theories with imaginary coupling constant . There, the $`S`$-matrix is a quantum group $`R`$-matrix times a Real analytic scalar factor, which implies that those amplitudes whose matrix structure is trivial are Real analytic. In this approach, the relationship between the analytic continuations of the different amplitudes is dictated by the properties of the $`R`$-matrix and, in general, Hermitian analyticity is not satisfied either.
At this point it is important to mention that, in the framework of standard $`S`$-matrix theory where Hermitian analyticity is formulated, the axiom of ‘unitarity’ actually means that the $`S`$-matrix is unitary: $`S^{\dagger }S=1`$. In contrast, in the quantum group approach the role of unitarity is played by another condition called ‘$`R`$-matrix unitarity’ (RU). In fact, Takács and Watts have recently highlighted that some of the resulting $`S`$-matrices are not unitary, which does not prevent them from describing physically relevant (non-unitary) models . We will show that Hermitian analyticity ensures that RU is equivalent to physical unitarity without any extra requirements. In other words, Hermitian analyticity is a sufficient condition to guarantee that a factorised $`S`$-matrix obtained through the quantum group approach is unitary.
An excellent classical review of analyticity in $`S`$-matrix theory is provided by the book of Eden et al. , which will be our main reference in the following. Analyticity is the assumption that the physical $`S`$-matrix amplitudes are real boundary values of analytic functions as a consequence of causality and the existence of macroscopic time. On top of this, the unitarity equations are expected to evaluate the discontinuities of those analytic functions across their normal-threshold cuts. This requires that the physical $`S`$\- and $`S^{\dagger }`$-matrix amplitudes are opposite boundary values of the same analytic functions, which states the property known as Hermitian analyticity .
Let us consider a generic integrable theory whose spectrum consists of several degenerate multiplets labelled by a set of finite dimensional vector spaces $`V_A,V_B,\mathrm{\ldots }`$ with different masses $`m_A,m_B,\mathrm{\ldots }`$. Particles in the same multiplet will be distinguished by a flavour index ‘$`i`$’ and, for simplicity, we will assume that all these particles are bosonic. Since the theory is integrable, the only non-vanishing (connected) two-particle $`S`$-matrix amplitudes are of the form
$$\langle \vec{k}_{A_k},\vec{k}_{B_l}|S-1|\vec{p}_{A_i},\vec{p}_{B_j}\rangle =(2\pi )^2\delta ^{(2)}(p_{A_i}+p_{B_j}-k_{A_k}-k_{B_l})\,i\mathcal{M}_{ijkl}^{AB},$$
(1)
where $`|\vec{p}_{A_i},\vec{p}_{B_j}\rangle `$ is the state of two particles with mass $`m_A`$ and $`m_B`$, and momentum $`\vec{p}_{A_i}`$ and $`\vec{p}_{B_j}`$:
$$|\vec{p}_{A_i},\vec{p}_{B_j}\rangle =a_{A_i}^{\dagger }(\vec{p}_{A_i})a_{B_j}^{\dagger }(\vec{p}_{B_j})|0\rangle .$$
(2)
Lorentz invariance allows one to decompose the scattering amplitude into scalar and pseudoscalar parts:
$$\mathcal{M}_{ijkl}^{AB}=M_{ijkl}^{AB}(s)+4\,ϵ_{\mu \nu }p_{A_i}^\mu p_{B_j}^\nu P_{ijkl}^{AB}(s),$$
(3)
where $`M_{ijkl}^{AB}(s)`$ and $`P_{ijkl}^{AB}(s)`$ are functions of the Mandelstam variable $`s=(p_{A_i}+p_{B_j})^2`$ only.
Analyticity postulates that the scalar and pseudoscalar components of the scattering amplitudes, $`M_{ijkl}^{AB}`$ and $`P_{ijkl}^{AB}`$, are boundary values of analytic functions. This means that they can be continued to complex values of $`s`$, and that the resulting functions are analytic. In this case, since the theory is integrable, they should exhibit only two cuts along $`s\geq (m_A+m_B)^2`$ and $`s\leq (m_A-m_B)^2`$ on the real axis with square root branching points, corresponding to the physical processes in the $`s`$\- and $`t`$-channel, respectively. Then, the physical $`s`$-channel amplitudes are given by the limit onto the cut from the upper-half complex $`s`$-plane,
$$M_{ijkl}^{AB^{\mathrm{phys}}}(s)=\underset{ϵ\to 0^+}{lim}M_{ijkl}^{AB}(s+iϵ),\qquad P_{ijkl}^{AB^{\mathrm{phys}}}(s)=\underset{ϵ\to 0^+}{lim}P_{ijkl}^{AB}(s+iϵ),$$
(4)
which is the generalization of the well known Feynman’s $`iϵ`$ prescription in perturbation theory.
Hermitian analyticity goes one step beyond. It postulates that the physical $`S`$\- and $`S^{}`$-matrix amplitudes are opposite boundary values of the same analytic functions, a property that has been proved in perturbation theory , in potential theory , and using $`S`$-matrix theory alone (see and the references therein). Since
$$\langle \vec{k}_{A_k},\vec{k}_{B_l}|S^{\dagger }-1|\vec{p}_{A_i},\vec{p}_{B_j}\rangle =\langle \vec{p}_{A_i},\vec{p}_{B_j}|S-1|\vec{k}_{A_k},\vec{k}_{B_l}\rangle ^{*},$$
(5)
this condition can be written as
$$\left[M_{klij}^{AB^{\mathrm{phys}}}(s)\right]^{*}=\underset{ϵ\to 0^+}{lim}M_{ijkl}^{AB}(s-iϵ),$$
(6)
and a similar equation for $`P_{ijkl}^{AB}`$. Therefore, taking into account that both $`M_{ijkl}^{AB}(s)`$ and $`M_{klij}^{AB}(s)`$ are analytic functions of $`s`$, and using that if $`f(z)`$ is analytic so also is $`g(z)=[f(z^{*})]^{*}`$, Hermitian analyticity results in the following relationships:
$$M_{ijkl}^{AB}(s)=\left[M_{klij}^{AB}(s^{*})\right]^{*},\qquad P_{ijkl}^{AB}(s)=\left[P_{klij}^{AB}(s^{*})\right]^{*}.$$
(7)
An immediate and vital consequence of Hermitian analyticity is that the unitarity equations $`S^{\dagger }S=1`$ evaluate the discontinuities of $`M_{ijkl}^{AB}(s)`$ and $`P_{ijkl}^{AB}(s)`$ across the two-particle cuts .
In two-dimensions, it is customary to use rapidities instead of momenta,
$$(p_{A_i}^0,\vec{p}_{A_i})=(m_A\mathrm{cosh}\theta _{A_i},m_A\mathrm{sinh}\theta _{A_i}).$$
(8)
Then, the (real) Mandelstam variable $`s`$ is a function of the absolute value of the rapidity difference of the colliding particles $`\theta =|\theta _{A_i}-\theta _{B_j}|>0`$,
$$s=(p_a+p_b)^2=m_a^2+m_b^2+\mathrm{\hspace{0.25em}2}m_am_b\mathrm{cosh}\theta ,$$
(9)
and the two-particle amplitudes become functions of $`\theta `$. Understood as complex variables, the change of variables between $`s`$ and $`\theta `$ allows one to open the two cuts. Hence, $`M_{ijkl}^{AB}(\theta )`$ and $`P_{ijkl}^{AB}(\theta )`$ are meromorphic and the physical sheet is mapped into the region $`0\leq \mathrm{Im}\theta \leq \pi `$, which is the first Riemann sheet in the complex $`\theta `$-plane.
Regarding analyticity, notice that
$$\underset{ϵ\to 0^+}{lim}M_{ijkl}^{AB}(s\pm iϵ)=M_{ijkl}^{AB}(\pm \theta ),\qquad \theta >0.$$
(10)
Therefore, since the amplitudes are meromorphic functions of $`\theta `$, the Hermitian analyticity relationships (7) translate into
$$M_{ijkl}^{AB}(\theta )=\left[M_{klij}^{AB}(-\theta ^{*})\right]^{*},\qquad P_{ijkl}^{AB}(\theta )=\left[P_{klij}^{AB}(-\theta ^{*})\right]^{*}.$$
(11)
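The correspondence in Eq. (10) between the two lips of the $`s`$-channel cut and the sign of $`\theta `$ is easy to verify numerically (a small sketch; the masses and the rapidity are arbitrary illustrative values):

```python
import numpy as np

mA, mB, theta0, delta = 1.0, 1.5, 0.7, 1e-6
s = lambda th: mA**2 + mB**2 + 2*mA*mB*np.cosh(th)   # Eq. (9)

print(np.imag(s(+theta0 + 1j*delta)))   # > 0 : +theta approaches the cut from above (s + i*eps)
print(np.imag(s(-theta0 + 1j*delta)))   # < 0 : -theta approaches it from below (s - i*eps)
```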
Finally, let us consider the full $`S`$-matrix amplitude corresponding to (1):
$$\langle \vec{k}_{A_k},\vec{k}_{B_l}|S|\vec{p}_{A_i},\vec{p}_{B_j}\rangle =4(2\pi )^2\delta (\theta _{A_i}-\theta _{A_k})\delta (\theta _{B_j}-\theta _{B_l})𝒮_{ijkl}^{AB},$$
(12)
where
$`𝒮_{ijkl}^{AB}=\delta _{ik}\delta _{jl}+i{\displaystyle \frac{\mathcal{M}_{ijkl}^{AB}}{4m_Am_B\mathrm{sinh}\theta }}`$
$`=\delta _{ik}\delta _{jl}+i\left({\displaystyle \frac{M_{ijkl}^{AB}(\theta )}{4m_Am_B\mathrm{sinh}\theta }}+P_{ijkl}^{AB}(\theta )\mathrm{sign}(\theta _{A_i}-\theta _{B_j})\right),`$ (13)
and we have used that
$$4ϵ_{\mu \nu }p_{A_i}^\mu p_{B_j}^\nu =4m_Am_B\mathrm{sinh}(\theta _{A_i}-\theta _{B_j})=4m_Am_B\mathrm{sinh}\theta \,\mathrm{sign}(\theta _{A_i}-\theta _{B_j}),$$
(14)
together with the standard relativistic normalization
$$\langle \vec{p}_{B_j}|\vec{p}_{A_i}\rangle =\delta _{AB}\delta _{ij}(2\pi )\,2p_{A_i}^0\,\delta (\vec{p}_{A_i}-\vec{p}_{B_j}).$$
(15)
Using the Heaviside function $`\vartheta (x)=0`$ if $`x<0`$ and $`=1`$ if $`x>0`$, eq. (13) can be written as
$$𝒮_{ijkl}^{AB}=\vartheta (\theta _{A_i}-\theta _{B_j})S_{AB}{}_{ij}{}^{kl}(\theta )+\vartheta (\theta _{B_j}-\theta _{A_i})S_{BA}{}_{ji}{}^{lk}(\theta ),$$
(16)
where
$$S_{AB}^{}{}_{ij}{}^{kl}(\theta )=\delta _{ik}\delta _{jl}+i\left(\frac{M_{ijkl}^{AB}(\theta )}{4m_Am_B\mathrm{sinh}\theta }+P_{ijkl}^{AB}(\theta )\right),$$
(17)
is the scattering amplitude of the process where particle $`A_i`$ initially is on the left-hand side of particle $`B_j`$, while
$$S_{BA}{}_{ji}{}^{lk}(\theta )=\delta _{ik}\delta _{jl}+i\left(\frac{M_{ijkl}^{AB}(\theta )}{4m_Am_B\mathrm{sinh}\theta }-P_{ijkl}^{AB}(\theta )\right),$$
(18)
is the amplitude of the process where $`A_i`$ initially is on the right-hand side of $`B_j`$. These amplitudes can be seen as the matrix elements of two maps
$$S_{AB}(\theta ):V_A\otimes V_B\to V_B\otimes V_A,\qquad S_{BA}(\theta ):V_B\otimes V_A\to V_A\otimes V_B,$$
(19)
where $`\theta `$ is the rapidity difference of the incoming particles. Equivalently, in the symbolic algebraic notation commonly used to describe two-dimensional factorised $`S`$-matrix theories , they correspond to
$`A_i(\theta )B_j(\theta ^{\prime })=\underset{k,l}{\sum }S_{AB}{}_{ij}{}^{kl}(\theta -\theta ^{\prime })B_l(\theta ^{\prime })A_k(\theta ),`$
$`B_j(\theta )A_i(\theta ^{\prime })=\underset{k,l}{\sum }S_{BA}{}_{ji}{}^{lk}(\theta -\theta ^{\prime })A_k(\theta ^{\prime })B_l(\theta ).`$ (20)
The two-particle amplitudes $`S_{AB}^{}{}_{ij}{}^{kl}`$ and $`S_{BA}^{}{}_{ji}{}^{lk}`$ are analytic functions of $`\theta `$. Moreover, taking into account (11), (17), and (18), they satisfy
$$S_{AB}{}_{ij}{}^{kl}(\theta )=\left[S_{BA}{}_{lk}{}^{ji}(-\theta ^{*})\right]^{*}$$
(21)
which summarises Hermitian analyticity in two-dimensional factorised $`S`$-matrix theories and is our central result. Eq. (21) means that the two maps defined in (19) are related according to $`S_{AB}(\theta )=S_{BA}^{\dagger }(-\theta ^{*})`$, where the dagger stands for Hermitian conjugation.
A direct consequence of (21) is that the scattering amplitudes will not be Real analytic functions unless they exhibit additional symmetry properties. To spell this out, recall the behaviour of the two-particle $`S`$-matrix amplitudes with respect to parity (P) and time-reversal (T) transformations:
$$\mathrm{P}:\;S_{AB}{}_{ij}{}^{kl}(\theta )\to S_{BA}{}_{ji}{}^{lk}(\theta ),\qquad \mathrm{T}:\;S_{AB}{}_{ij}{}^{kl}(\theta )\to S_{BA}{}_{lk}{}^{ji}(\theta ).$$
(22)
This shows that Real analyticity is a special case of Hermitian analyticity which is valid only when the amplitude happens to be symmetric with respect to time-reversal transformations, a conclusion that could have been anticipated on general grounds .
Another important consequence of (21) concerns the formulation of the unitarity condition. For real $`\theta >0`$, the unitarity of the $`S`$-matrix, $`S^{\dagger }S=1`$, translates into
$$\underset{k,l}{\sum }S_{AB}{}_{ij}{}^{kl}(\theta )\left[S_{AB}{}_{i^{\prime }j^{\prime }}{}^{kl}(\theta )\right]^{*}=\delta _{ii^{\prime }}\delta _{jj^{\prime }}.$$
(23)
However, using the Hermitian analyticity condition (21), unitarity can be equivalently written as
$$\underset{k,l}{\sum }S_{AB}{}_{ij}{}^{kl}(\theta )S_{BA}{}_{lk}{}^{j^{\prime }i^{\prime }}(-\theta )=\delta _{ii^{\prime }}\delta _{jj^{\prime }},$$
(24)
which is nothing else than the condition of ‘$`R`$-matrix unitarity’ (RU) that arises naturally in the quantum group approach to factorised $`S`$-matrices . Actually, to be precise, RU is the analytic continuation of (24) to the complex $`\theta `$-plane. Therefore, we conclude that there is no difference between physical unitarity and the quantum group inspired $`R`$-matrix unitarity if the $`S`$-matrix amplitudes exhibit Hermitian analyticity.
In order to validate the Hermitian analyticity condition given by eq. (21), it is necessary to check whether it is preserved by the bootstrap equations. Suppose that $`S_{AB}{}_{ij}{}^{kl}(\theta )`$ has a simple pole at $`\theta =iu_{AB}^C`$ on the physical strip corresponding to a bound state in the multiplet $`V_{\overline{C}}`$. Thus, its residue is provided by the projector of $`V_A\otimes V_B`$ into $`V_{\overline{C}}\subset V_B\otimes V_A`$ and, near the pole, the amplitude will be of the form
$$S_{AB}{}_{ij}{}^{kl}(\theta )\simeq \frac{i}{\theta -iu_{AB}^C}\underset{a}{\sum }G_{ij}^aH_{lk}^{a\,*},$$
(25)
where, in the symbolic notation of already used in eq. (20), the coupling constants $`G_{ij}^a`$ and $`H_{ji}^a`$ are defined through the identities <sup>1</sup><sup>1</sup>1In these equations, we use the standard notation for the fusion angles such that $`\overline{u}_{AB}^C=\pi -u_{AB}^C`$, $`\overline{u}_{AC}^B+\overline{u}_{BC}^A+\overline{u}_{AB}^C=\pi `$, and $`m_{\overline{C}}=\mathrm{e}^{+i\overline{u}_{AC}^B}m_A+\mathrm{e}^{-i\overline{u}_{BC}^A}m_B`$.
$`\underset{\theta _1-\theta _2\to iu_{AB}^C}{lim}\left(\theta _1-\theta _2-iu_{AB}^C\right)A_i(\theta _1)B_j(\theta _2)=i\underset{a}{\sum }G_{ij}^a\overline{C}_a\left({\displaystyle \frac{\theta _1-i\overline{u}_{AC}^B+\theta _2+i\overline{u}_{BC}^A}{2}}\right),`$
$`\overline{C}_a(\theta )=\underset{kl}{\sum }H_{lk}^{a\,*}B_l(\theta -i\overline{u}_{BC}^A)A_k(\theta +i\overline{u}_{AC}^B).`$ (26)
Correspondingly, using the Hermitian analyticity condition,
$$S_{BA}{}_{lk}{}^{ji}(\theta )=\left[S_{AB}{}_{ij}{}^{kl}(-\theta ^{*})\right]^{*}\simeq \frac{i}{\theta -iu_{AB}^C}\underset{a}{\sum }H_{lk}^aG_{ij}^{a\,*},$$
(27)
which shows that the amplitude $`S_{BA}{}_{lk}{}^{ji}(\theta )`$ also exhibits a simple pole at the same location. Moreover, it provides an equivalent definition of the coupling constants through the projection of $`V_B\otimes V_A`$ into $`V_{\overline{C}}\subset V_A\otimes V_B`$:
$`\underset{\theta _1-\theta _2\to iu_{AB}^C}{lim}\left(\theta _1-\theta _2-iu_{AB}^C\right)B_l(\theta _1)A_k(\theta _2)=i\underset{a}{\sum }H_{lk}^a\overline{C}_a\left({\displaystyle \frac{\theta _1-i\overline{u}_{BC}^A+\theta _2+i\overline{u}_{AC}^B}{2}}\right),`$
$`\overline{C}_a(\theta )=\underset{ij}{\sum }G_{ij}^{a\,*}A_i(\theta -i\overline{u}_{AC}^B)B_j(\theta +i\overline{u}_{BC}^A),`$ (28)
In other words, $`G_{ij}^a`$ and $`H_{ji}^a`$ are the coupling constants of the fusions $`A_iB_j\to \overline{C}_a`$ and $`B_jA_i\to \overline{C}_a`$, respectively, and, since the amplitudes are not always parity symmetric, $`G_{ij}^a\ne H_{ji}^a`$ in general. Eqs. (25) and (27) imply that, near the pole, the map $`S_{AB}(\theta )`$ is of the form
$$S_{AB}(\theta )\simeq \frac{i}{\theta -iu_{AB}^C}P_{BA}^{C\,\dagger }P_{AB}^C,$$
(29)
where $`P_{AB}^C:V_A\otimes V_B\to V_{\overline{C}}`$ is a projection operator. Eq. (29) is explicitly consistent with the Hermitian analyticity condition and manifests that the poles in $`S_{AB}(\theta )`$ and $`S_{BA}(\theta )`$ correspond to particles in the same multiplet $`V_{\overline{C}}`$.
The bootstrap equations express the fact that there is no difference whether the scattering process with any particle in, say, $`V_D`$ occurs before or after the fusion of particles $`A_i`$ and $`B_j`$ into particle $`\overline{C}_a`$. In our case, using eqs. (26) and (28), this condition allows one to write four different but equivalent expressions for the scattering amplitudes involving the particles in $`V_{\overline{C}}`$ and $`V_D`$. For our purposes, it will be enough to consider only the following two
$`\underset{b}{\sum }S_{\overline{C}D}{}_{am}{}^{bn}(\theta )H_{lk}^{b\,*}`$ $`=`$ $`\underset{ij}{\sum }H_{ji}^{a\,*}S_{AD}{}_{im}{}^{kp}(\theta +i\overline{u}_{AC}^B)S_{BD}{}_{jp}{}^{ln}(\theta -i\overline{u}_{BC}^A),`$
$`\underset{b}{\sum }H_{lk}^bS_{D\overline{C}}{}_{nb}{}^{ma}(\theta )`$ $`=`$ $`\underset{ij}{\sum }S_{DB}{}_{nl}{}^{pj}(\theta -i\overline{u}_{BC}^A)S_{DA}{}_{pk}{}^{mi}(\theta +i\overline{u}_{AC}^B)H_{ji}^a.`$ (30)
Then, if the scattering amplitudes for the particles in $`V_A`$, $`V_B`$, and $`V_D`$ satisfy the condition (21), it is straightforward to check that
$$\underset{b}{\sum }S_{\overline{C}D}{}_{am}{}^{bn}(\theta )H_{lk}^{b\,*}=\underset{b}{\sum }\left[H_{lk}^bS_{D\overline{C}}{}_{nb}{}^{ma}(-\theta ^{*})\right]^{*},$$
(31)
which proves that the amplitudes $`S_{\overline{C}D}^{}{}_{am}{}^{bn}`$ and $`S_{D\overline{C}}^{}{}_{nb}{}^{ma}`$ obtained by means of the bootstrap principle are also Hermitian analytic.
It is worth noticing that the Hermitian analyticity constraints given by eq. (21) cannot be satisfied in all possible bases on $`V_A,V_B,\mathrm{\ldots }`$. Consider the following change of basis on $`V_A`$ and $`V_B`$: $`A_i(\theta )\to \stackrel{~}{A}_i(\theta )=\underset{p}{\sum }L_{A}{}_{i}{}^{p}(\theta )A_p(\theta )`$ and $`B_j(\varphi )\to \stackrel{~}{B}_j(\varphi )=\underset{q}{\sum }L_{B}{}_{j}{}^{q}(\varphi )B_q(\varphi )`$, where $`L_A(\theta )`$ and $`L_B(\varphi )`$ are invertible but not necessarily unitary matrices. In the new basis, the scattering amplitude $`S_{AB}{}_{ij}{}^{kl}(\theta -\varphi )`$ becomes
$$S_{\stackrel{~}{A}\stackrel{~}{B}}^{}{}_{pq}{}^{rs}(\theta -\varphi )=\underset{i,j,k,l}{}L_{A}^{}{}_{p}{}^{i}(\theta )L_{B}^{}{}_{q}{}^{j}(\varphi )S_{AB}^{}{}_{ij}{}^{kl}(\theta -\varphi )L_{B}^{-1}{}_{l}{}^{s}(\varphi )L_{A}^{-1}{}_{k}{}^{r}(\theta ).$$
(32)
Then, if $`S_{\stackrel{~}{A}\stackrel{~}{B}}^{}{}_{ij}{}^{kl}`$ satisfies the Hermitian analyticity condition (21) it is straightforward to check that
$$\underset{p,q}{}M_{A}^{}{}_{i}{}^{p}(\theta )M_{B}^{}{}_{j}{}^{q}(\varphi )S_{AB}^{}{}_{pq}{}^{kl}(\theta -\varphi )=\underset{r,s}{}\left[S_{BA}^{}{}_{sr}{}^{ji}(\varphi ^{}-\theta ^{})\right]^{}M_{A}^{}{}_{r}{}^{k}(\theta )M_{B}^{}{}_{s}{}^{l}(\varphi ),$$
(33)
where
$$M_{A}^{}{}_{k}{}^{i}(\theta )=\underset{p}{}L_{A}^{}{}_{p}{}^{i}(\theta )\left[L_{A}^{}{}_{p}{}^{k}(\theta ^{})\right]^{}\mathrm{and}M_{B}^{}{}_{l}{}^{j}(\varphi )=\underset{p}{}L_{B}^{}{}_{p}{}^{j}(\varphi )\left[L_{B}^{}{}_{p}{}^{l}(\varphi ^{})\right]^{}.$$
(34)
Therefore, given a set of two-particle scattering amplitudes, a sufficient condition to ensure that there is a basis where they are Hermitian analytic is that there exist two matrices $`M_A`$ and $`M_B`$ of the form given by eq. (34) such that the constraints (33) hold. Notice that $`M_A`$ and $`M_B`$ are Hermitian positive definite matrices for real values of $`\theta `$. We will refer to this condition as ‘weak’ Hermitian analyticity, thus making reference to the fact that it generalizes a condition found by Liguori, Mintchev and Rossi in the context of exchange algebras . There, the amplitudes $`S_{AB}^{}{}_{ij}{}^{kl}(\theta )`$ for real values of $`\theta `$ provide the exchange factors, and weak Hermitian analyticity arises as a sufficient condition to allow the construction of a unitary scattering operator in a Fock representation of the algebra.
In the rest of the letter, we provide several examples of diagonal $`S`$-matrix theories where Hermitian analyticity holds but Real analyticity is not satisfied. In all these cases the multiplets are not degenerate and, hence, no flavour indices are needed. Then, the Hermitian analyticity condition (21) simplifies to
$$S_{AB}(\theta )=\left[S_{BA}(-\theta ^{})\right]^{}.$$
(35)
It is worth noticing that, in the diagonal case, the P and T transformations of the $`S`$-matrix amplitudes are identical. This explains why all our examples involve non-parity invariant theories.
Our first example will be the Federbush model , which was studied in great detail, among others, by Schroer, Truong, and Weisz . The Federbush model describes two massive Dirac fields $`\psi _I`$ and $`\psi _{II}`$ whose interaction Lagrangian is
$$L_{\mathrm{FM}}=\mathrm{\hspace{0.25em}2}\pi \lambda ϵ_{\mu \nu }J_I^\mu J_{II}^\nu ,$$
(36)
where $`J_I^\mu `$ and $`J_{II}^\nu `$ are the conserved vector currents of the Dirac fields. The two-particle $`S`$-matrix amplitudes of the Federbush model are particularly simple and can be written as
$$S_{I,I}(\theta )=S_{II,II}(\theta )=\mathrm{\hspace{0.25em}1},S_{I,II}(\theta )=\mathrm{e}^{-2\pi \lambda i},S_{II,I}(\theta )=\mathrm{e}^{+2\pi \lambda i}.$$
(37)
Since they are given just by rapidity independent phase factors, the amplitudes $`S_{I,II}`$ and $`S_{II,I}`$ are clearly not Real analytic. However, it is straightforward to check that they satisfy eq. (35) or, in other words, that the $`S`$-matrix of the Federbush model is Hermitian analytic.
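Spelling this out explicitly (a one-line check, written here with the diagonal condition in the form of eq. (35); the algebra uses only eq. (37)):
$`\left[S_{II,I}(-\theta ^{})\right]^{}=\left[\mathrm{e}^{+2\pi \lambda i}\right]^{}=\mathrm{e}^{-2\pi \lambda i}=S_{I,II}(\theta ),\left[S_{I,II}(-\theta ^{})\right]^{}=\mathrm{e}^{+2\pi \lambda i}\ne S_{I,II}(\theta ),`$
so Hermitian analyticity holds while Real analyticity fails, as claimed.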
The second example is provided by the integrable perturbation of the $`SO(3)_k`$ Wess-Zumino-Witten model discussed by Brazhnikov in <sup>2</sup><sup>2</sup>2Using the construction of ref. , the model of Brazhnikov can be described as a Symmetric Space sine-Gordon (SSSG) theory associated with the compact type-I symmetric space $`SU(3)/SO(3)`$, and it is expected that many other SSSG theories exhibit similar properties. The spectrum of stable fundamental (or basic) particles consists just of two particles $`\psi `$ and $`\vartheta `$ associated with the two simple roots of $`su(2)`$, the Lie algebra of $`SO(3)`$, whose mass can be taken to be different. The two-particle scattering amplitude for $`\psi `$ and $`\vartheta `$ has been calculated at tree-level in and, properly normalized, the result is
$$S_{\psi \vartheta }(\theta )=\mathrm{\hspace{0.25em}1}+\frac{2i}{k\mathrm{sinh}(\theta -\theta _0)}+\mathrm{}=\left[S_{\vartheta \psi }(-\theta ^{})\right]^{},$$
(38)
where $`\theta _0`$ is a non-vanishing real constant whose value depends on the coupling constants of the model. These tree-level amplitudes are not Real analytic but, on the contrary, they satisfy the Hermitian analyticity condition eq. (35).
As our last example, let us consider the Homogeneous sine-Gordon (HSG) theories constructed in . There is a HSG theory for each simple compact Lie group $`G`$ that corresponds to an integrable perturbation of the conformal field theory (CFT) associated with the coset $`G_k/U(1)^{\times r_g}`$, where $`r_g`$ is the rank of $`G`$, or, equivalently, of the theory of level-$`k`$ $`G`$-parafermions . The semiclassical spectrum of stable particles of the HSG theories has been obtained in . If the group $`G`$ is simply laced, the spectrum consists of $`k-1`$ particles for each simple root $`\stackrel{}{\alpha }_i`$ of $`g`$, the Lie algebra of $`G`$, whose masses are given by
$$M_{\stackrel{}{\alpha }_i}(n)=\frac{k}{\pi }m_{\stackrel{}{\alpha }_i}\mathrm{sin}\left(\frac{\pi n}{k}\right),i=\mathrm{\hspace{0.25em}1},\mathrm{},r_g,n=\mathrm{\hspace{0.25em}1},\mathrm{},k-1.$$
(39)
In this equation,
$$m_{\stackrel{}{\alpha }_i}=\mathrm{\hspace{0.25em}2}m\sqrt{(\stackrel{}{\alpha }_i\stackrel{}{\lambda }_+)(\stackrel{}{\alpha }_i\stackrel{}{\lambda }_{})},$$
(40)
the constant $`m`$ is the only dimensionful parameter of the theory, and $`\stackrel{}{\lambda }_\pm `$ are continuous vector coupling constants taking values in the fundamental Weyl chamber of the Cartan subalgebra of $`g`$. For a generic choice of $`\stackrel{}{\lambda }_\pm `$, all these masses will be different, and an exact diagonal $`S`$-matrix for these theories has been recently proposed . For our purposes, it will be enough to quote the result for $`G=SU(3)`$. For the fundamental particles, corresponding to $`n=1`$ in (39), the two-particle amplitudes can be written as
$`S_{\stackrel{}{\alpha }_j,\stackrel{}{\alpha }_j}(\theta )={\displaystyle \frac{\mathrm{sinh}\frac{1}{2}\left(\theta +\frac{2\pi }{k}i\right)}{\mathrm{sinh}\frac{1}{2}\left(\theta -\frac{2\pi }{k}i\right)}},j=\mathrm{\hspace{0.25em}1},2,`$
$`S_{\stackrel{}{\alpha }_1,\stackrel{}{\alpha }_2}(\theta )=\mathrm{e}^{ϵ\frac{\pi }{k}i}{\displaystyle \frac{\mathrm{sinh}\frac{1}{2}\left(\theta -\sigma -\frac{\pi }{k}i\right)}{\mathrm{sinh}\frac{1}{2}\left(\theta -\sigma +\frac{\pi }{k}i\right)}}=\left[S_{\stackrel{}{\alpha }_2,\stackrel{}{\alpha }_1}(-\theta ^{})\right]^{},`$ (41)
where the value of the real parameter $`\sigma `$ depends on the coupling constants $`\stackrel{}{\lambda }_\pm `$, and $`ϵ`$ can be taken to be $`+1`$ or $`1`$. Eq. (41) provides a set of diagonal two-particle $`S`$-matrix amplitudes that satisfy unitarity, crossing symmetry, and the bootstrap equations . However, they are Hermitian analytic and not Real analytic.
To sum up, our main point was to recall that Real analyticity is not essential in $`S`$-matrix theory; it is just a special case of a general property called Hermitian analyticity. Then, we have derived the constraints implied by Hermitian analyticity for the two-particle scattering amplitudes in two-dimensional factorised $`S`$-matrix theories, which are summarised by eq. (21). These constraints are consistent with the bootstrap equations and agree with the properties of the scattering amplitudes of several non-parity invariant theories whose $`S`$-matrix is diagonal already discussed in the literature . In addition, they also manifest that Real analyticity is recovered only for those amplitudes which are time-reversal invariant.
An important consequence of Hermitian analyticity is that it ensures the equivalence between the genuine unitarity of the $`S`$-matrix and the condition of ‘unitarity’ satisfied by the $`S`$-matrices derived from the quantum group construction of refs. . In this construction, the $`S`$-matrix is a quantum group $`R`$-matrix times a Real analytic scalar factor, and the relationship between the analytic continuations of the different amplitudes is dictated by the properties of the $`R`$-matrix. As an example, using the results of Gandenberger in , one can check that all the $`S`$-matrix amplitudes corresponding to the affine Toda field theory associated with $`a_2^{(1)}`$ satisfy <sup>3</sup><sup>3</sup>3I thank Gustav Delius for providing a proof that this relationship will hold for other affine Toda theories. When the quantum parameter $`q`$ is a pure phase, it follows from the fact that complex conjugation replaces $`q`$ by $`q^1`$, which exchanges the two sides in the quantum coproduct, together with the time-reversal invariance of the $`R`$-matrices and the Real analyticity of the scalar factor (see ).
$$S_{AB}^{}{}_{ij}{}^{kl}(\theta )=\left[S_{AB}^{}{}_{kl}{}^{ij}(-\theta ^{})\right]^{}$$
(42)
instead of (21). This shows that the amplitudes with a trivial matrix structure will be Real analytic, but Hermitian analyticity will not be satisfied unless the $`S`$-matrix exhibits additional symmetry properties, like parity invariance if (42) holds. All this results in the non-unitarity of these $`S`$-matrices reported in .
In the same article, Takács and Watts pointed out the possibility that some of these $`S`$-matrices could be conjugate to unitary matrices by means of a rapidity-dependent change of basis of the one-particle states. However, they found it rather difficult to check directly and proposed to investigate instead whether the two- and three-particle $`S`$-matrices have pure phase eigenvalues, which is a necessary condition. Following this method, they have singled out a number of $`S`$-matrix theories where such changes of basis should exist . Concerning this, we have obtained a sufficient condition for the existence of a basis where Hermitian analyticity is satisfied; we call it ‘weak Hermitian analyticity’, and it is summarized by eqs. (33) and (34). It would be interesting to use weak Hermitian analyticity to characterise those $`S`$-matrix theories that become unitary in some particular basis and, in any case, to investigate the physical meaning of such a basis.
Acknowledgments
I wish to thank D. Olive for drawing my attention to the role of Hermitian analyticity in $`S`$-matrix theory during the Durham’98 TMR Conference. I would also like to thank J. Sánchez Guillén and J.M. Sánchez de Santos for valuable discussions, and to G. Delius and G. Watts for their clarifying comments about affine Toda theories and the quantum group approach to factorised $`S`$-matrices. This research is supported partially by CICYT (AEN96-1673), DGICYT (PB96-0960), and the EC Commission via a TMR Grant (FMRX-CT96-0012).
# A Deep 12 Micron Survey with ISO
Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) and with the participation of ISAS and NASA.
## 1 Introduction
Whenever a radically new wavelength or sensitivity regime is opened in astronomy, new classes of object and unexplained phenomena are discovered. The IRAS satellite, in opening the mid- to far-IR sky, revealed that the bolometric luminosities of many galaxies are dominated by this spectral region. This has raised questions concerning the evolution of galaxies in the mid- to far-IR, the role of dust-extinction in the formation of galaxies, the relationship between quasars and galaxies, and the nature of galaxy formation itself. Limited as they were to fluxes not much smaller than 1Jy, the IRAS surveys were constrained to the fairly local universe for the vast majority of the detected objects. The evolutionary properties of IR galaxies, both the normal galaxy population and the unusual ultra- and hyper-luminous objects (see e.g. Clements et al. 1996a), are thus still mostly unknown.
Recent work in the visible and near-IR has had a major impact on our understanding of the star formation history of galaxies (eg. Steidel et al., 1996, Cowie et al., 1994, Giavalisco et al., 1996, Lilly et al., 1996). It appears that the star formation rate in the universe peaked at around z=1 and has been declining since (Madau et al. 1998). Many of the objects studied in deep, high redshift fields appear to be distorted, and are possibly undergoing tidal interaction or merging (Abraham et al., 1996). It is well–known that tidal interactions and mergers in the local universe produce significant amounts of star formation (Joseph & Wright, 1985, Clements et al., 1996b, Lawrence et al., 1989), and that these are usually associated with a significant luminosity in the mid- to far-IR. The dust responsible for this emission is heated by the star formation process, which it also obscures. We must thus consider that the view of the universe obtained in the visible and near-IR, corresponding to the rest-frame optical and near-UV of many of the objects observed, may well be biased by such obscuring material. The question of how much of the star-formation in the universe is obscured by dust thus becomes important. This issue can only be properly addressed by selecting objects in the mid- or far-IR which are less affected by such obscuration.
Previous work in the mid- and far-IR used data from IRAS, with all-sky sensitivities of $``$0.1 Jy in the mid-IR bands (12 and 25 $`\mu `$m), and $``$ 0.3–1 Jy at 60 and 100 $`\mu `$m. These typically allow the detection of galaxies out to z=0.2, though a few exceptional objects, such as the gravitationally lensed z=2.286 galaxy IRAS10214+4724 (Rowan-Robinson et al., 1991, Serjeant et al., 1995), have also been detected.
Most evolutionary studies with IRAS have concentrated on the 60$`\mu `$m waveband (Sanders et al., 1990, Hacking & Houck, 1987, Bertin et al., 1997). This work has found evidence for strong evolution in the 60$`\mu `$m population, at rates similar to those of optically selected AGN, but the nature of this evolution is still unclear, and it is difficult to extrapolate to higher redshift with any confidence.
At mid-IR wavelengths, the IRAS mission has produced both large-area surveys of fairly nearby objects (eg. Rush et al., 1993), and small-area, deep surveys in repeatedly scanned regions (eg. Hacking & Houck, 1987). The former surveys do not probe sufficiently deeply into the universe to be able to say much about galaxy evolution, but they do have the advantage that plentiful data exists for the nearby galaxy samples they produce. The latter surveys are plagued by stellar contamination. The vast majority of the 12$`\mu `$m objects in the survey of Hacking et al., for example, are stars – there are only five galaxies in their entire survey.
The Infrared Space Observatory (ISO, Kessler et al. 1996) provides a major improvement to our observational capabilities beyond those of the IRAS satellite. For observations in the mid-IR, the ISOCAM instrument (Cesarsky et al. 1996) allows us to reach flux limits $``$100 times fainter than those achieved by IRAS while observing fairly large areas ($``$0.1 sq. degree) in integration times of only a few hours. We can thus probe flux regimes that were previously impossible to study.
This paper presents the results of a survey of four high galactic latitude fields using the LW10 (12$`\mu `$m) filter, which was specifically designed to match the 12$`\mu `$m filter on the IRAS satellite. The present results can thus be compared to existing IRAS data with minimal model-dependent K- corrections. This survey is much deeper than any based on IRAS data, and is sufficiently deep to detect distant galaxies. The survey region is also fairly small and at high galactic latitude, so that stellar contamination should not be a major problem.
There are of course other studies in the mid-IR underway using the ISO satellite. These include the DEEP and ULTRADEEP surveys (Elbaz et al., in preparation), the ELAIS survey (Oliver et al., in preparation) and the ISOHDF project (Oliver et al., 1997; Desert et al., 1998 (Paper I); Aussel et al. 1998). All of these programmes use the LW2 6.7$`\mu `$m and/or LW3 15$`\mu `$m filter on ISO. Only the ISOHDF results have been published to date. At 15$`\mu `$m these observations reach a flux limit of $``$0.1 mJy, about 5 times deeper than the observations discussed here, but cover only 1/24th of the area. There are also deep surveys at longer wavelengths using the PHOT instrument at 175$`\mu `$m (Kawara et al., 1998, and Puget et al., 1998). Future missions will also be probing this part of the electromagnetic spectrum. The first of these will be the WIRE mission (Fang et al., 1996) which will obtain a large area, deep survey at 12 and 25$`\mu `$m. The SIRTF project (Werner & Bicay, 1997) and IRIS satellite (Okuda, 1998) will also be used for deep number counts, and should be able to probe significantly deeper than ISO. Finally the planned NGST (Stockman & Mather, 1997) will provide incomparable performance in this cosmologically interesting waveband. The present work provides the first results of the exploration of the distant universe at these mid-IR wavelengths, and can provide a guideline for future missions, useful for their planning and preparation.
The paper is organised as follows. Section 2 describes the observations, data reduction and calibration. Section 3 provides details on identifications of the 12$`\mu `$m source population, star-galaxy separation, and on individual source properties. Section 4 discusses number counts, comparison with statistics at other wavelengths, and with model predictions. Section 5 summarises our conclusions.
## 2 Observations and Data Reduction
The observations presented here were part of a survey of cometary dust trails (ISO project name JDAVIES/JKDTRAIL). The original goal was to observe the width and structures of the cometary trails, which are produced by large particles, ejected from the comet into independent heliocentric orbits but with very similar orbital elements and very small radiation pressure effects. The comet 7P/Pons-Winnecke was selected because of its similarity to other comets with dust trails, the detection of a dust trail by IRAS (Sykes & Walker, 1992) and its suitability for observation by ISO. Four fields were imaged, each field being a raster map centred on the ephemeris prediction for particles with the same orbital elements as the nucleus except for the mean anomaly of the orbit, which was shifted by $`+1^{}`$ (ahead) and $`0.5^{}`$, $`1^{}`$, and $`2^{}`$ (behind). Each raster was 11 by 7 pointings, with a spacing of $`60^{\prime \prime }`$ by $`48^{\prime \prime }`$. The pixel field of view was $`6^{\prime \prime }`$, so that there was substantial overlap between individual frames of the 32 by 32 pixel ISOCAM LW array (Cesarsky et al., 1996) to ensure good flatfielding and high observational redundancy. A typical position on the sky was visited 12 times during the raster, each time with a different pixel of the array. The rasters were rotated such that the predicted cometary trail would run parallel to the short axis of the raster. Unfortunately for the cometary trails programme, the observations took place on 17 August 1997, one day later than assumed for the ephemeris calculations and therefore the trail is predicted to run horizontally across the very bottom edge of the image. Based on the results of observations of the trail of another comet (P/Kopff) from the same observing program (Davies et al., 1997), we expect that the trail would occupy at most the lower $`1^{}`$ of the image and that it would be relatively smooth. The presence of the dust trail in the field should not have any effect on the observations presented here, though an excess of sources in the bottom of the raster could potentially be related to structures in the dust trail. No such excess is seen. All observations presented here are in the LW10 filter, which is very similar to the IRAS 12 $`\mu `$m filter in wavelength-dependent response. Since these fields are at high galactic latitude ($`>`$50 degrees), and, in the absence of cometary trails, are effectively blank fields, they become ideal for a deep survey of the extragalactic 12$`\mu `$m source population. The positions of the fields are given in Table 1.
The data reduction process is described in detail in Paper 1. Basically, for each AOT file (total 4) the raw data (CISP format) and pointing history (IIPH format) are read and merged. The raw data cube (typically 1244 readouts of 32 by 32 pixels) is deglitched for fast and slow cosmic ray impacts. A transient correction is applied to recover the linearity and the nominal flux response of the camera. A triple–beam method is then applied to the processed data cube, in order to find the best (ON- (OFF1+OFF2)/2) differential value of the sky brightness for each pixel and each raster position, where OFF1 and OFF2 refer to the previous and next raster position value for the same camera pixel. The resulting low–level reduced cube is then simply flat–fielded (there is no need for a dark correction because we perform a differential measurement on the sky). The flat–fielded reduced cube along with an error cube is made up of 77 values for each of the 32 by 32 pixels of the camera. The 2 cubes are then projected onto the sky using an average effective position for ISO during each raster position, with a $`1/\sigma ^2`$ optimal weighting. A noise map is thus calculated as well as a sky differential map. Any given source will leave 2 negative half–flux ghosts 60 arcseconds away on both sides along the raster scan direction. This is a trade–off in order to beat the 1/f noise regime that is reached by the camera for long integrations per position. On the final map, shown in Fig. 1, we search for point sources with a Gaussian fitting algorithm that uses the noise map for weighting the pixels and deducing the noise of the final flux measurement. A FWHM of 9 arcseconds was used. The final internal flux is converted to $`\mu `$Jy by using the ISOCAM cookbook value (Cesarsky et al., 1994). The present understanding of ISOCAM calibration is that no additional factor should be applied to the pre-flight sensitivity estimates for the determination of surface brightness.
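As an illustration of the two key steps—the triple-beam differential estimate and the $`1/\sigma ^2`$-weighted projection—a minimal sketch is given below. It is not the pipeline actually used here; the array shapes, function names and the treatment of the raster end points are assumptions of the example.

```python
import numpy as np

def triple_beam(cube):
    """ON - (OFF1 + OFF2)/2 differential signal for each camera pixel.

    cube : array of shape (n_raster, 32, 32) holding one deglitched,
    transient-corrected sky value per pixel per raster position.  The first
    and last raster positions have no two-sided reference and are dropped.
    """
    on = cube[1:-1]
    off = 0.5 * (cube[:-2] + cube[2:])
    return on - off

def coadd(values, sigmas):
    """Inverse-variance (1/sigma^2) weighted mean of all measurements landing
    on the same sky pixel, together with the propagated noise estimate."""
    w = 1.0 / np.asarray(sigmas) ** 2
    flux = np.sum(w * np.asarray(values)) / np.sum(w)
    noise = 1.0 / np.sqrt(np.sum(w))
    return flux, noise
```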
We have devised a scheme to assess the reproducibility of the sources, in order to test for false sources that would be due to undetected glitches. This is described in detail in Paper I, but consists of breaking the observations of each object into a number of independent subsurveys. Sources are deemed reliable if they are detectable, with suitably reduced significance, in each of these subsurveys. Of the 193 sources above the 3$`\sigma `$ limit only 7 fail the reproducibility test (4%) and these are not considered in the following. Visual screening helped in removing a further 38 residual companions of strong sources due to imperfect fitting. Visual screening also showed that two sources, F1\_0 and F2\_0, were significantly extended at 12$`\mu `$m. We thus use aperture photometry to obtain an accurate flux for these objects. The aperture used had a diameter of 20 arcsecs.
Simulation of the expected PSF from ISO after the same processing reveals that part of the flux is missed by the optimised Gaussian fitting. We therefore correct the fluxes and errors by a factor of 1.52 determined from this modelling. The final absolute photometry should be in error by no more than an estimated 30%. The fluxes are given at the nominal wavelength of 12 $`\mu `$m (i.e. an additional correction of $`1.04=12/11.5`$ is applied since the nominal ISOCAM calibration is for a wavelength of 11.5$`\mu `$m), in order to facilitate the comparison with previous IRAS observations. This assumes a flat spectrum in $`\nu F_\nu `$ as was used for IRAS calibration. The flux prediction for known stars should thus be colour corrected, since they have a Rayleigh-Jeans spectral index, in order for comparison to ISOCAM measurements: the real flux at 12$`\mu `$m should be divided by 0.902. An additional factor comes from the fact that the PSF is smaller for stellar sources (which are dominated by the short wavelength part of the broad filter) than for the assumed extragalactic sources which have a broader spectrum. Thus the real flux should also be multiplied by a supplementary factor of 1.13 (see Section 3.1 for this a priori calibration and the comparison with flux measurements of known stars in the fields). The basic calibration of ISOCAM, before the corrections for point sources are applied, can be checked by comparing the integrated surface brightness of these fields with values interpolated from the DIRBE experiment on COBE (Hauser et al. 1997a). The ISO surface brightnesses agree with the DIRBE values to better than 5%.
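In code, the chain of photometric corrections quoted above amounts to a few multiplicative factors (a sketch; the function names are ours, and the conversion from internal units to $`\mu `$Jy is assumed to have been applied already):

```python
PSF_CORRECTION = 1.52                  # flux missed by the optimised Gaussian fit
WAVELENGTH_CORRECTION = 12.0 / 11.5    # quote fluxes at 12 um rather than 11.5 um

def catalogue_flux(fit_flux_ujy, fit_error_ujy):
    """Turn a Gaussian-fit flux and error into the quoted 12 um catalogue values."""
    scale = PSF_CORRECTION * WAVELENGTH_CORRECTION
    return fit_flux_ujy * scale, fit_error_ujy * scale

def isocam_prediction_for_star(true_12um_flux_ujy):
    """Expected ISOCAM measurement for a star of known 12 um flux: divide by the
    colour correction (0.902, Rayleigh-Jeans spectrum across the broad LW10 band)
    and multiply by 1.13 (narrower effective PSF for a stellar SED)."""
    return true_12um_flux_ujy / 0.902 * 1.13
```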
The sensitivity that is achieved in the central area of the fields is about $`1\sigma =100\mu `$Jy. This was achieved with a total integration time of 4 minutes for each camera field–of–view. Astrometry was assessed by matching ISO sources to bright stars in the fields. We estimate the astrometric accuracy to be $``$6” (2 $`\sigma `$).
## 3 The 12$`\mu `$m Source Population
A total of 186 candidate 12$`\mu `$m sources are found in the survey to a 3$`\sigma `$ flux limit of $``$ 300$`\mu `$Jy. Visual inspection then removes 38 of these sources as fragments of brighter sources incorrectly identified as separate objects, giving a master list of 148 objects. For the remainder of this paper we shall restrict ourselves to discussion of only those sources detected at 5$`\sigma `$ sensitivity or above in this master list. This is for several reasons. Firstly, a number of uncertainties remain in the identification of the weakest sources. These are the sources most likely to be affected by the remnants of subtracted glitches or by weak, undetected glitches. Further examination of the detailed time histories and reproducibilities of these sources is underway, and a full catalogue reaching to the faintest flux limits can then be constructed. Secondly, the problems of Malmquist bias (Oliver, 1995) are most easily controlled in catalogues detected with significances $`5\sigma `$. A source list using a 5$`\sigma `$ detection threshold is thus best suited to our examination of the 12$`\mu `$m source counts. 50 objects are detected at 5$`\sigma `$ or greater significance. Details of these objects are given in Table 2, and they are discussed further in the following sections.
### 3.1 Optical Identifications and Star-Galaxy Separation
Comparison of the 12$`\mu `$m ISOCAM images with Digital Sky Survey (DSS)<sup>1</sup><sup>1</sup>1Based on photographic data of the National Geographic Society – Palomar Observatory Sky Survey (NGS-POSS) obtained using the Oschin Telescope on Palomar Mountain. The NGS-POSS was funded by a grant from the National Geographic Society to the California Institute of Technology. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166. images shows that a number of the sources are associated with bright stars. Before we are able to analyse the galaxy component of the 12$`\mu `$m source population, these and any other contaminating stars must be identified and removed. This was achieved by using the US Naval Observatory (USNO) all-sky photometric catalogue (Monet et al., 1996). The database was searched for all optical objects within 12 arcseconds of each ISO position. 12 sources were immediately identified with HST Guide Star Catalogue (GSC) stars, though inspection of the DSS images shows that one of these is in fact a galaxy (03 01 06.16 -10 44 23.6, the GSC ‘star’ 5290\_640). B and R band photometry was extracted for 29 of the 32 optically identified objects – three of the GSC stars were too bright to allow photometry from the B survey plates used by the USNO catalogue. These magnitudes were then corrected for the estimated galactic extinction. A comparison of the final F<sub>B</sub>/F<sub>R</sub> and F<sub>12</sub>/F<sub>R</sub> flux ratios was then made. Figure 2 shows the optical/ISO colour-colour diagram, together with a Black Body colour track. Simple stars, without associated dust or stellar companions, should lie on or near to this colour track. As can be seen, almost all of the GSC stars and several other objects lie near to the Black Body line. This allows us to remove all those stars that have not been identified in the GSC. Three such objects are removed. One star (F3\_9: 03 09 42.14 -08 35 44.6) seems to be anomalously blue (B-R = -0.8 from the USNO catalogue). However, this object and another bright star (F3\_0: 03 09 42.6 -08 35 33.4) are so close to one another that accurate photographic photometry is likely to be difficult, resulting in the anomalous colours. These objects are removed from further analysis.
Of the 32 optically identified 12$`\mu `$m sources we thus conclude that 13 are stars and that the remaining 19 are optically identified galaxies. 18 sources, all probably galaxies, thus remain without optical identifications to the limits of the USNO-A catalogue ie. around 20th magnitude in B and R.
The colour-colour plot also allows us to check the calibration for the 12$`\mu `$m survey. We can use the B-R colours to provide a rough spectral type for all stars in the survey. This can then be cross-referenced to the surface temperature of that stellar type. The 12$`\mu `$m flux can then be extrapolated from the R band flux, assuming a simple Black Body spectrum. This approach suggests that the flux calibration is accurate at the $``$20% level (see Table 3). We have also checked these results using detailed spectral energy distributions (SEDs) based on the Kurucz stellar atmosphere codes instead of a simple Black Body extrapolation, and arrive at very similar conclusions. The main source of uncertainty here is the treatment of undersampled unresolved sources in the ISOCAM reduction systems. As more data becomes available on the details of the ISOCAM PSF, this systematic uncertainty will be reduced. There is also the possibility that one or more of the stars in the survey have genuine IR excesses. Ground-based near- to mid-IR photometry will be required to confirm this.
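The blackbody extrapolation used for this check is simple enough to sketch (a minimal example; the adopted effective R-band wavelength of 0.70$`\mu `$m is an assumption of the sketch, not a value taken from the survey):

```python
import numpy as np

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_nu(wavelength_m, temperature_k):
    """Blackbody spectral radiance B_nu evaluated at the frequency c/lambda."""
    nu = C / wavelength_m
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (KB * temperature_k))

def predicted_f12(f_r_jy, temperature_k, lam_r=0.70e-6, lam_12=12.0e-6):
    """Extrapolate an R-band flux density (Jy) to 12 um for a pure blackbody of
    the given temperature; the ratio of flux densities is the ratio of B_nu."""
    return f_r_jy * planck_nu(lam_12, temperature_k) / planck_nu(lam_r, temperature_k)
```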
We are then able to remove all 13 stars from the 12$`\mu `$m source lists generated in this survey, and can thus examine the statistics of faint galaxies at 12$`\mu `$m.
### 3.2 Individual Sources
We discuss here individual sources of note in this survey.
IRAS 03031-0943 This lies at 03 05 36.4 -09 31 27.0 (J2000) and is an IRAS source identified with a B=18.6 galaxy at z=0.112 (Clements et al., 1996a). It is associated with object F1\_0 in the present survey. This galaxy has IRAS fluxes of 0.85 and 0.51 Jy at 100 and 60$`\mu `$m respectively, and limits of 0.15 and 0.095 Jy at 25 and 12$`\mu `$m, consistent with the measured ISO 12$`\mu `$m flux of 12.1$`\pm `$0.2 mJy. This galaxy has a 60$`\mu `$m luminosity of $`10^{11}L_{\odot }`$ ($`H_0=100\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, $`q_0=0.5`$) which places it among the high luminosity IRAS galaxies but at lower luminosity than the ultraluminous class (see eg. Sanders & Mirabel, 1996). Its optical spectrum contains strong emission from the H$`\alpha `$-NII blend and SII, but the redshift measurement spectrum is of too low a resolution to provide any emission line diagnostics (Clements, private communication). We thus do not know what sort of power source is energetically dominant in this object — starburst or AGN.
NPM1G-10.0117 This lies at 03 01 06.2 -10 44 24 (J2000) and is a B=16.63 galaxy used in a proper-motion survey (Klemola et al., 1987). It is associated with object F2\_0 in this survey, and has a 12$`\mu `$m flux of 9.10$`\pm `$0.26 mJy. It is also identified with HST Guide Star GSC 5290\_640, but is clearly a galaxy in the Digitised Sky Survey images. Little else is currently known about it.
Altogether, we can say relatively little about the galaxies identified so far in this survey since the identification programme has only just started. Nevertheless, it is a useful check of the processing to note that the only IRAS galaxy within the survey region has been detected by ISOCAM.
## 4 The 12 $`\mu `$m Number Counts
Integral number counts from a survey with homogeneous sensitivity are calculated by summing up the number of sources to a given flux limit, and then dividing by the survey area:
$$N(>S)=\underset{flux>S}{}\frac{1}{\mathrm{\Omega }}$$
(1)
where $`\mathrm{\Omega }`$ is the area of the survey. However, in our case the noise in the survey is inhomogeneous (it has a bowl-like shape) since the border pixels were observed with smaller integration times. We thus have to make a correction to equation 1 to account for this. If we define $`\eta (\sigma )=\mathrm{\Omega }/\mathrm{\Omega }(s)`$, where $`\mathrm{\Omega }(s)`$ is the area where the $`1\sigma `$ sensitivity is better than $`s`$, then the corrected number counts are given by:
$$N(>S)=\underset{flux>S}{}\frac{1}{\mathrm{\Omega }}\times \frac{1}{\eta (S/n)}$$
(2)
where n is the detection threshold of the survey. In the present paper we consider only those sources detected at $`>5\sigma `$ confidence, so n=5. We plot the area coverage, $`\eta `$, in Fig. 3.
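In practice this correction can be written directly in terms of the accessible area, i.e. the solid angle over which a source of a given flux exceeds the detection threshold (a minimal sketch in the spirit of eq. (2); the function and variable names are ours):

```python
import numpy as np

def integral_counts(source_fluxes, flux_grid, accessible_area, n_sigma=5.0):
    """Area-corrected integral counts N(>S), in sources per unit solid angle.

    source_fluxes   : catalogue fluxes (all detected at >= n_sigma)
    flux_grid       : flux values S at which N(>S) is evaluated
    accessible_area : function s -> solid angle with 1-sigma noise better than s
                      (i.e. the total survey area times the coverage of Fig. 3)
    Each source is weighted by the inverse of the area over which it could have
    been detected at n_sigma significance.
    """
    fluxes = np.asarray(source_fluxes, dtype=float)
    weights = 1.0 / np.array([accessible_area(f / n_sigma) for f in fluxes])
    return np.array([weights[fluxes > s].sum() for s in flux_grid])
```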
A correction must also be applied to account for Malmquist bias (Oliver, 1995). This bias arises when looking at number counts for a population with rapidly increasing numbers at fainter fluxes, as is the case for our 12$`\mu `$m galaxy sample. In the presence of observational noise, some galaxies close to, but below, the flux limit will be scattered above the flux limit by noise and will appear in the final catalogue. Similarly some galaxies close to but above the flux limit will be scattered out of the catalogue. However, since there are many more galaxies at fainter fluxes, more galaxies will be scattered above the flux limit than below it. Number counts that are uncorrected for this bias thus show a steep rise in counts towards the faintest flux levels. In the case of Gaussian noise and a Euclidean count slope, which approximates to the present case, Murdoch et al (1973) tabulated the effects of this bias to a detection level of 5$`\sigma `$, allowing for the observed fluxes to be corrected. Oliver (1995) provides a numerical version of this correction which we apply here. For observations probing below the 5$`\sigma `$ limit this simple correction cannot be applied, and a more complex Monte Carlo approach must be adopted (eg. Bertin et al. 1997).
Figure 4 shows the Malmquist-bias corrected integral number count plot from the present work and from 12$`\mu `$m IRAS surveys, along with some other information. The first thing to notice in this diagram is that we have reached flux limits almost 100 times fainter than the deepest IRAS number counts at these wavelengths. We are thus able to see much deeper into the universe than the IRAS surveys and can provide considerably more powerful sampling of the 12$`\mu `$m galaxy population.
Secondly, our survey is the first flux limited 12$`\mu `$m survey to be dominated by galaxies rather than stars. The deepest IRAS sample (Hacking & Houck 1987) included $``$50 objects of which only 5 were galaxies. As discussed above, the present survey contains 50 objects above the 5$`\sigma `$ flux limit, of which only 13 are stars.
The integral counts of stellar identifications in the 12 $`\mu `$m survey are shown in Fig. 5. We find good agreement with model counts by Franceschini et al. (1991) based on the Bahcall and Soneira galactic model and on a stellar luminosity function scaled from the V band to $`\lambda =12\mu m`$ according to Hacking & Houck (1987). This agreement suggests that no major new stellar component is emerging at faint fluxes with respect to those detected in the optical.
We have so far shown the integral number counts for the galaxies in our survey. A more statistically meaningful way to compare observed and theoretical number counts is to examine them in a differential form, i.e. $`dN(S_{12})/dS_{12}`$ versus $`S_{12}`$. This is done in Fig. 6, where we report the Euclidean-normalised differential counts from our 12$`\mu `$m survey compared to the 15$`\mu `$m number counts derived from ISOCAM observations of the Hubble Deep Field (Paper I). The lines correspond to predictions for Euclidean-normalised counts based on both non-evolving and strongly evolving population models.
The no-evolution model is based on the local luminosity function (Saunders et al., 1990) at 60$`\mu `$m and on the Rush et al (1993) results at 12$`\mu `$m. (for more details see Franceschini et al. 1997). This minimal curve significantly under predicts the observed counts from ISO. We thus appear to have detected evolution in the 12$`\mu `$m source population at a $`3.5\sigma `$ significance level.
On the other hand, our observed 12$`\mu `$m counts are matched by a model assuming an evolving luminosity function. This is described in terms of two populations, which we assume dominate the extragalactic sky at these wavelengths:
(1) Gas-rich systems, i.e. spiral, irregular and starbursting galaxies, with luminosity functions evolving with cosmic time as $`N(L,z)=N(L,z=0)exp(k\tau (z))`$, where $`N(L,z=0)`$ is the locally observed distribution (see above), $`\tau (z)`$ is the lookback time $`(t_0t)/t_0`$, where t is the age of the universe at a redshift z and $`t_0`$ is the present age, and $`k=3`$ is the evolution parameter. This corresponds to density evolution, yielding an average increase in galaxy co-moving number density of a factor of 5.8 at z=1 (for an assumed $`q_0`$=0.15 value of the cosmological deceleration parameter).
(2) Active Galactic Nuclei, which are described by a model based again on the Rush et al. (1993) local luminosity function and assuming pure luminosity evolution: $`N(L,z)=N(L_0,z)`$, where $`L(z)=L_0\mathrm{exp}(k\tau [z])`$ with $`k=3`$.
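Both populations involve the same evolution factor $`exp(k\tau (z))`$. A minimal sketch of how it can be evaluated is given below; the open, matter-dominated expansion history with $`\mathrm{\Omega }_M2q_0`$ used for the lookback time is an assumption of the sketch, not a statement of the model's full ingredients.

```python
import numpy as np
from scipy.integrate import quad

def lookback_fraction(z, omega_m=0.3):
    """tau(z) = (t0 - t(z)) / t0 for an open, matter-dominated model
    (omega_m ~ 2 * q0 is an assumption of this sketch)."""
    e = lambda zz: np.sqrt(omega_m * (1 + zz) ** 3 + (1 - omega_m) * (1 + zz) ** 2)
    age = lambda z_low: quad(lambda zz: 1.0 / ((1 + zz) * e(zz)), z_low, np.inf)[0]
    return 1.0 - age(z) / age(0.0)

def evolution_factor(z, k=3.0, omega_m=0.3):
    """exp(k * tau(z)), the enhancement applied to the density (population 1)
    or luminosity (population 2)."""
    return np.exp(k * lookback_fraction(z, omega_m))

# evolution_factor(1.0) gives a factor of several at z = 1, of the order of
# the 5.8 quoted above for these parameters.
```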
Note that the same model with the additional contribution of a population of high-redshift starbursts (forming elliptical and S0 galaxies as described by Franceschini et al. 1994) accounts nicely for the cosmological far-IR background recently detected in the far-infrared and submillimeter wavebands by Puget et al. (1996), Hauser et al. (1997b) and Fixsen et al (1998).
It is also interesting to compare the 12$`\mu `$m counts described here with the 15$`\mu `$m counts from the HDF. These counts are derived from two different ISOCAM filters (LW10 and LW3) with rather different response functions. As can be seen in Fig. 6, there appears to be a clear offset between the two differential galaxy counts by roughly a factor 2 – 4 (though the two bins with overlapping fluxes are in formal agreement). No simple model can explain this shift in the counts by such a large factor over such a narrow flux interval. We interpret this shift as probably not due to actual changes in the counts, but to the different responses of the LW10 and LW3 filters to the complex SEDs of galaxies in the mid-IR. Specifically the 7$`\mu `$m PAH emission feature, which enters the LW3 15$`\mu `$m band at z$``$0.5 to 1, and the 10$`\mu `$m absorption feature. Unfortunately, the strength of these mid-IR spectral features varies considerably from object to object (see eg. Elbaz et al., 1998). A full understanding of the effects of these features on mid-IR number counts thus awaits a better theoretical treatment, a better understanding of the variation of these features locally, and a better idea of the nature of objects making up the milli–Jansky 12$`\mu `$m source population. Xu et al. (1998) used a three component model including cirrus, starburst and AGN contributions to fit the mid-IR SEDs of a large sample of local galaxies. They then extrapolate from this to predict the effects of the mid-IR SEDs on number counts under various evolutionary assumptions. Such an approach may be useful for understanding the present work and its relation to the ISOHDF data. However, assumptions would have to be made about the nature of the faint mid-IR galaxy population, and whether it was significantly different from the local galaxies studied in detail by IRAS and ISO. There are already suggestions from the ISOHDF that there are more and bigger starbursts in the faint mid-IR selected galaxies than in the local population (Rowan-Robinson et al., 1997). At this stage we lack redshifts, and thus luminosity and star-formation-rate estimates, for our 12$`\mu `$m galaxies. A large number of assumptions would thus need to be made about these objects for an empirical approach similar to Xu et al. (1998) to be applied. There would thus be considerable uncertainties in such an analysis.
A proper test of models of this population is thus even more critically dependent on obtaining the redshifts of individual sources than similar work at optical or far-IR wavelengths. We have therefore begun a followup programme to identify and determine the redshifts for all the 12$`\mu `$m galaxies discussed in this paper. Once this data is available, we will be able to draw firmer conclusions about the nature and evolution of the mJy 12$`\mu `$ source population.
Our number counts can directly set a lower limit to the extragalactic infrared background light between 8 and 15 $`\mu `$m. By integrating the light from the galaxies with a flux larger than 0.5 mJy, we find that $`\nu I_\nu (\mathrm{EBL}_{12\mu \mathrm{m}})>0.50\pm 0.15\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$. An upper limit of $`468\mathrm{nW}\mathrm{m}^{-2}\mathrm{sr}^{-1}`$ has been reported by Hauser et al. (1998) from DIRBE (COBE) measurements, which are hampered by the zodiacal light.
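The conversion from summed source fluxes to a resolved background is a short unit exercise (a sketch; the flux list and solid angle are placeholders to be filled with the survey values):

```python
import numpy as np

C = 2.998e8      # m/s
JY = 1.0e-26     # W m^-2 Hz^-1

def resolved_background(fluxes_mjy, omega_sr, wavelength_um=12.0):
    """nu * I_nu (in nW m^-2 sr^-1) from summing source flux densities (mJy)
    detected over a solid angle omega_sr."""
    nu = C / (wavelength_um * 1.0e-6)                                 # Hz
    i_nu = np.sum(np.asarray(fluxes_mjy)) * 1.0e-3 * JY / omega_sr    # W m^-2 Hz^-1 sr^-1
    return nu * i_nu * 1.0e9                                          # nW m^-2 sr^-1

# e.g. for a 0.1 sq. degree field: omega_sr = 0.1 * (np.pi / 180.0) ** 2
```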
## 5 Conclusions
We have performed a deep survey at 12$`\mu `$m using the CAM instrument on the ISO satellite. We have detected 50 objects to a 5$`\sigma `$ flux threshold of $`500\mu `$Jy in a 0.1 sq. degree area, of which 13 appear to be stars on the basis of optical images and optical-IR colours. The remaining 37 objects appear to be galaxies. We have examined the source count statistics for this population and find evidence for evolution, while for stars the counts are consistent with current galactic structure models and extrapolations of the optical luminosity functions to the mid-IR.
Our galaxy counts, when compared with the deep ISOCAM counts in the Hubble Deep Field using the LW3 15$`\mu `$m filter, also show evidence for significant effects from the complex mid-infrared features in the spectral energy distributions of galaxies.
###### Acknowledgements.
It is a pleasure to thank Herve Aussel, David Elbaz, Matt Malkan and Jean-Loup Puget for helpful comments and contributions. The Digitised Sky Survey was produced at the STSCI under US Government Grant NAG W–2166. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The USNO-A survey was of considerable help, and we would like to express our thanks to all those who helped to produce it. DLC and ACB are supported by an ESO Fellowship and by the EC TMR Network programme, FMRX-CT96-0068.
# COSMOLOGY UPDATE 1998
## 1 1998, A Remarkable Year for Cosmology
The birth of the hot big-bang model dates back to the work of Gamow and his collaborators in the 1940s. The emergence of the hot big-bang cosmology began in 1964 with the discovery of the microwave background radiation. By the 1970s, the black-body character of the microwave background radiation had been established and the success of big-bang nucleosynthesis demonstrated, and the hot big-bang was being referred to as the standard cosmology. Today, it is universally accepted and provides an accounting of the Universe from a fraction of a second after the beginning, when the Universe was a hot, smooth soup of quarks and leptons to the present, some $`14\mathrm{Gyr}`$ later. Together with the standard model of particle physics and ideas about the unification of the forces, it provides a firm foundation for speculations about the earliest moments of creation.
The standard cosmology rests upon three strong observational pillars: the expansion of the Universe; the cosmic microwave background radiation (CBR); and the abundance pattern of the light elements, D, <sup>3</sup>He, <sup>4</sup>He, and <sup>7</sup>Li, produced seconds after the bang (see e.g., Peebles et al, 1991; or Turner & Tyson, 1999). In its success, it has raised new, more profound questions: the origin of the matter/antimatter asymmetry, the origin of the smoothness and flatness of the Universe, the nature and origin of the primeval density inhomogeneities that seeded all the structure in the Universe, the quantity and composition of the dark matter that holds the Universe together, and the nature of the big-bang event itself. This has motivated the search for a more expansive cosmological theory.
In the 1980s, born of the inner space/outer space connection, a new paradigm emerged, one deeply rooted in fundamental physics with the potential to extend our understanding of the Universe back to $`10^{-32}\mathrm{sec}`$ and to address the fundamental questions posed, but not addressed by the hot big-bang model. That paradigm, known as inflation \+ cold dark matter, holds that most of the dark matter consists of slowly moving elementary particles (cold dark matter), that the Universe is flat and that the density perturbations that seeded all the structure seen today arose from quantum-mechanical fluctuations on scales of $`10^{-23}\mathrm{cm}`$ or smaller.
This could prove to be a watershed year in cosmology, as important as 1964, when the CBR was discovered. The crucial new data include a precision measurement of the density of ordinary matter and of the total amount of matter, both derived from a measurement of the primeval deuterium abundance and the theory of BBN; fine-scale (down to $`0.3^{}`$) measurements of the anisotropy of the CBR; and a measurement of the deceleration of the Universe based upon distance measurements of type Ia supernovae out to redshift of close to unity.
Together, these measurements, which are harbingers for the precision era of cosmology that is coming, provide the first plausible, complete accounting of the matter/energy density in the Universe and evidence that the primeval density perturbations arose from quantum fluctuations during inflation. In addition, there exists a large body of cosmological data – from measurements of large-scale structure to the evolution of galaxies and clusters – that supports the cold dark matter theory of structure formation.
The accounting of matter and energy goes like this (in units of the critical density for $`H_0=65\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$): light neutrinos, between 0.3% and 15%; stars and related material, between 0.3% and 0.6%; baryons (total), $`5\%\pm 0.5\%`$; matter (total), $`40\%\pm 10\%`$; and vacuum energy (or something similar), $`80\%\pm 20\%`$; within the uncertainties, a total equalling the critical density (see Fig. 1).
The recently measured primeval deuterium abundance (Burles & Tytler, 1998a,b) and the theory of big-bang nucleosynthesis now accurately pin down the baryon density (Schramm & Turner, 1998; Burles et al, 1999), $`\mathrm{\Omega }_B=(0.019\pm 0.0012)h^{-2}\simeq 0.05`$ (for $`h=0.65`$); see Fig. 2. Using the cluster baryon fraction, determined from x-ray measurements (Mohr et al, 1998; Evrard, 1996) and SZ measurements (Carlstrom, 1999), $`f_B=M_{\mathrm{baryon}}/M_{\mathrm{TOT}}=(0.07\pm 0.007)h^{-3/2}`$, and assuming that clusters provide a fair sample of matter in the Universe, $`\mathrm{\Omega }_B/\mathrm{\Omega }_M=f_B`$, it follows that $`\mathrm{\Omega }_M=(0.3\pm 0.05)h^{-1/2}\simeq 0.4\pm 0.1`$. Other direct measurements of the matter density are consistent with this (see e.g., Turner, 1999).
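The arithmetic behind this accounting is short enough to write down explicitly (a sketch; the input numbers are just those quoted above):

```python
h = 0.65                      # H0 / (100 km/s/Mpc)

omega_b = 0.019 / h**2        # BBN + primeval deuterium:  ~0.045
f_b = 0.07 / h**1.5           # cluster baryon fraction:   ~0.13
omega_m = omega_b / f_b       # fair-sample assumption:    ~0.34, consistent
                              # with the quoted 0.4 +/- 0.1
```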
That $`\mathrm{\Omega }_M\gg \mathrm{\Omega }_B`$ is strong, almost incontrovertible, evidence for nonbaryonic dark matter; the leading particle candidates are axions, neutralinos and neutrinos. The recent evidence for neutrino oscillations, based upon atmospheric neutrino data presented by the SuperKamiokande Collaboration, indicates that neutrinos contribute at least as much mass as bright stars; the failure of the top-down, hot dark matter scenario of structure formation restricts the contribution of neutrinos to be less than about 15% of the critical density (see e.g., Dodelson et al, 1996; White, Frenk & Davis, 1983). Because relic axions and neutralinos behave like cold dark matter (i.e., move slowly), they are the prime particle dark-matter candidates.
The position of the first acoustic peak in the angular power spectrum of temperature fluctuations of the CBR is a sensitive indicator of the curvature of the Universe: $`l_{\mathrm{peak}}\simeq 200/\sqrt{\mathrm{\Omega }_0}`$, where $`R_{\mathrm{curv}}^2=H_0^{-2}/|\mathrm{\Omega }_0-1|`$. CBR anisotropy measurements now span multipole number $`l=2`$ to around $`l=1000`$ (see Figs. 3 and 4); while the data do not yet speak definitively, it is clear that $`\mathrm{\Omega }_0\simeq 1`$ is preferred. Several experiments (Python V, Viper, MAT and Boomerang) with new results around $`l=30`$–700 should be reporting in soon. Ultimately, the MAP (launch in 2000) and Planck (launch in 2007) satellites will cover $`l=2`$ to $`l=3000`$ with precision limited essentially by sampling variance, and should determine $`\mathrm{\Omega }_0`$ to a precision of 1% or better.
The same angular power spectrum that indicates $`\mathrm{\Omega }_01`$ also provides evidence that the primeval density perturbations are of the kind predicted by inflation. The inflation-produced Gaussian curvature fluctuations lead to an angular power spectrum with a series of well defined acoustic peaks. While the data at best define the first peak, they are good enough to exclude many models where the density perturbations are isocurvature (e.g., cosmic strings and textures): in these models the predicted spectrum is devoid of acoustic peaks (Allen et al, 1997; Pen et al, 1997).
The oldest approach to determining $`\mathrm{\Omega }_0`$ is by measuring the deceleration of the expansion. Sandage’s deceleration parameter, $`q_0\equiv -(\ddot{R}/R)_0/H_0^2=\frac{\mathrm{\Omega }_0}{2}[1+3p/\rho ]`$, depends upon both $`\mathrm{\Omega }_0`$ and the equation of state, $`p(\rho )`$. Because distant objects are seen at an earlier epoch, by measuring the (luminosity) distance to objects as a function of redshift the deceleration of the Universe can be determined. (If the Universe is slowing down, distant objects should be moving faster than predicted by Hubble’s law, $`v_0=H_0d`$.) Accurate distance measurements to some fifty supernovae of type Ia (SNe Ia) carried out by two groups (Riess et al, 1998; Perlmutter et al, 1998) indicate that the Universe is speeding up, not slowing down (i.e., $`q_0<0`$). The simplest explanation is a cosmological constant, with $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 0.6`$.
The concordance of the three measurements that bear on the quantity and composition of matter and energy in the Universe is illustrated in Fig. 5. The SN Ia results are sensitive to the acceleration (or deceleration) of the expansion and constrain the combination $`\frac{4}{3}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$. (Note, $`q_0=\frac{1}{2}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$; $`\frac{4}{3}\mathrm{\Omega }_M-\mathrm{\Omega }_\mathrm{\Lambda }`$ corresponds to the deceleration parameter at redshift $`z\simeq 0.4`$, the median redshift of these samples). The (approximately) orthogonal combination, $`\mathrm{\Omega }_0=\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }`$ is constrained by CBR anisotropy. Together, they define a concordance region around $`\mathrm{\Omega }_0\simeq 1`$, $`\mathrm{\Omega }_M\simeq 1/3`$, and $`\mathrm{\Omega }_\mathrm{\Lambda }\simeq 2/3`$. The constraint to the matter density alone, $`\mathrm{\Omega }_M=0.4\pm 0.1`$, provides a cross check, and it is consistent with these numbers. Cosmic concordance!
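At the concordance values these combinations are easy to evaluate (a small illustrative calculation; the exact numbers are only meant to show the signs):

```python
omega_m, omega_lambda = 1.0 / 3.0, 2.0 / 3.0    # concordance values quoted above

q0 = 0.5 * omega_m - omega_lambda               # -0.5: the expansion is accelerating
q_sn = (4.0 / 3.0) * omega_m - omega_lambda     # -0.22: the SN-constrained combination
omega_0 = omega_m + omega_lambda                # 1.0: the CBR-constrained combination
```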
While the evidence for inflation + cold dark matter is not definitive and we should be cautious, 1998 could well mark a turning point in cosmology as important as 1964. Recall, after the discovery of the CBR it took a decade or more to firmly establish the cosmological origin of the CBR and the hot big-bang cosmology as the standard cosmology.
## 2 Inflation + Cold Dark Matter
Inflation has revolutionized the way cosmologists view the Universe and provides the current working hypothesis for extending the standard cosmology to much earlier times. It explains how a region of size much, much greater than our Hubble volume could have become smooth and flat without recourse to special initial conditions (Guth 1981), as well as the origin of the density inhomogeneities needed to seed structure (Hawking, 1982; Starobinsky, 1982; Guth & Pi, 1982; and Bardeen et al, 1983). Inflation is based upon well defined, albeit speculative physics – the semi-classical evolution of a weakly coupled scalar field – and this physics may well be connected to the unification of the particles and forces of Nature.
It would be nice if there were a standard model of inflation, but there isn’t. What is important, is that almost all inflationary models make three very testable predictions: flat Universe, nearly scale-invariant spectrum of Gaussian density perturbations, and nearly scale-invariant spectrum of gravitational waves. These three predictions allow the inflationary paradigm to be decisively tested. While the gravitational waves are an extremely important and challenging test, I will not mention them again here (see e.g., Turner 1997a).
The tremendous expansion that occurs during inflation is key to its beneficial effects and robust predictions: A small, subhorizon-sized bit of the Universe can grow large enough to encompass the entire observable Universe and much more. Because all that we can see today was once so extraordinarily small, it began flat and smooth. This is unaffected by the expansion since then and so the Hubble radius today is much, much smaller than the curvature radius, implying $`\mathrm{\Omega }_0=1`$. Lastly, the tremendous expansion stretches quantum fluctuations on truly microscopic scales ($`\lesssim 10^{-23}\mathrm{cm}`$) to astrophysical scales ($`\sim `$ millions of light years).
The curvature perturbations created by inflation are characterized by two important features: 1) they are almost scale-invariant, which refers to the fluctuations in the gravitational potential being independent of scale – and not the density perturbations themselves; 2) because they arise from fluctuations in an essentially noninteracting quantum field, their statistical properties are that of a Gaussian random field.
Scale invariance specifies the shape of the spectrum of density perturbations. The normalization (overall amplitude) depends upon the specific inflationary model (i.e., scalar-field potential). Ignoring numerical factors for the moment, the overall amplitude is specified by the fluctuation in the gravitational potential, $`\delta \varphi \sim (\delta \rho /\rho )_{\mathrm{HOR}}\sim V^{3/2}/m_{\mathrm{PL}}^3V^{\prime }`$, which is also equal to the amplitude of density perturbations when they cross the horizon. To be consistent with the COBE measurement of CBR anisotropy on the $`10^{\circ }`$ scale, $`\delta \varphi `$ must be around $`2\times 10^{-5}`$. Not only did COBE produce the first evidence for the existence of the density perturbations that seeded all structure (Smoot et al, 1992), but also, for a theory like inflation that predicts the shape of the spectrum of density perturbations, it fixed the amplitude of density perturbations on all scales. The COBE normalization began precision testing of inflation.
## 3 Cold Dark Matter – A Cosmological Necessity!
While we don’t know what the cold dark matter consists of, there is overwhelming evidence that it must be there. (Generically, cold dark matter refers to particles that comprise the bulk of the matter density, move very slowly, interact feebly with ordinary matter (baryons) and are not comprised of ordinary matter.) The biggest surprise in cosmology that I can imagine is the nonexistence of cold dark matter. Here is a brief summary of the most compelling evidence for cold dark matter:
* For more than a decade there has been growing evidence that the total amount of matter is significantly greater than what baryons can account for. Today, the discrepancy is about a factor of eight: $`\mathrm{\Omega }_M=0.4\pm 0.1`$ and $`\mathrm{\Omega }_B=(0.02\pm 0.002)h^{-2}\simeq 0.05`$. Unless BBN is grossly misleading us and/or determinations of the matter density are way off, most of the matter must be nonbaryonic. (The discovery of dark energy does nothing to change this fact; it explains the discrepancy between the matter density, $`\mathrm{\Omega }_M=0.4`$, inferred from matter that clusters, and the total density, $`\mathrm{\Omega }_0=1`$, inferred from CBR anisotropy.)
* We now know that galaxies formed at redshifts of order 2 to 4 (see Fig. 6), that clusters formed at redshifts of 1 or less and that superclusters are forming today. That is, structure formed from the bottom up, as predicted if the nonbaryonic matter is cold. (Hot dark matter leads to a top-down sequence of structure formation; see White, Frenk, and Davis, 1983.)
* The cold dark matter model of structure formation is consistent with an enormous body of data: CBR anisotropy, large-scale structure, abundance of clusters, the clustering of galaxies and clusters, the evolution of clusters and galaxies and their clustering, the structure of the Lyman-$`\alpha `$ forest, and a host of other data.
* The only plausible candidate for the bulk of the dark matter in the halo of our own galaxy is cold dark matter particles. The last-hope baryonic candidate, dark stars or MACHOs, can account for only about half the mass of the halo and probably much less. This follows from the fact that the microlensing rate toward the Magellanic Clouds is only about 30% of that expected if the halo were comprised entirely of MACHOs, and from the growing evidence that the Magellanic lenses are in the clouds themselves or in other nonhalo components of the Galaxy.
The two leading particle candidates for cold dark matter are the axion and the neutralino. Both are well motivated by fundamental physics concerns and both have a predicted relic abundance that is comparable to the critical density. There are other “dark-horse” candidates including primordial black holes and superheavy relic particles, which should not be forgotten (Kolb, 1999). As far as cosmological infrastructure goes, they would be every bit as good as axions and neutralinos.
## 4 Neutrinos by the Numbers
Cosmic neutrinos are almost as abundant as CBR photons: $`n_{\nu \overline{\nu }}=\frac{3}{11}n_\gamma `$ (per species) $`\simeq 112\,\mathrm{cm}^{-3}`$. Cosmologists are confident of their relic abundance (at least within the standard model of particle physics) because they were in thermal equilibrium until the Universe was a second old; thereafter, their weak interactions were too “weak” to keep them in thermal equilibrium and their temperature decreased as $`R^{-1}`$. Shortly after neutrinos “decoupled,” electrons and positrons annihilated, raising the photon temperature slightly so that $`T_\nu /T_\gamma =(4/11)^{1/3}`$. Because the yields of big-bang nucleosynthesis are so sensitive to the phase-space distribution of neutrinos, the success of BBN is also a confirmation of the standard cosmic history of neutrinos.
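As a quick check of these numbers, here is a minimal sketch in Python, assuming a present CBR temperature of 2.725 K (a standard value, not quoted in the text):

```python
import math

T_gamma = 2.725                      # assumed present CBR temperature (K)
zeta3 = 1.2020569

# Photon number density: n_gamma = (2*zeta(3)/pi^2) * (k_B*T/(hbar*c))^3
kT_over_hbarc = T_gamma * 1.380649e-23 / (1.054571817e-34 * 2.99792458e8)   # m^-1
n_gamma = (2 * zeta3 / math.pi**2) * kT_over_hbarc**3 * 1e-6                # cm^-3

T_nu = (4.0 / 11.0) ** (1.0 / 3.0) * T_gamma    # relic neutrino temperature
n_nu = (3.0 / 11.0) * n_gamma                   # nu + nubar, per species

print(f"n_gamma ~ {n_gamma:.0f} cm^-3")          # ~ 411 cm^-3
print(f"T_nu    ~ {T_nu:.2f} K")                 # ~ 1.95 K
print(f"n_nu    ~ {n_nu:.0f} cm^-3 per species") # ~ 112 cm^-3
```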
The sensitivity of BBN to neutrinos allowed Steigman, Schramm and Gunn (1977) to use the yield of <sup>4</sup>He to constrain the number of light neutrino species. Their original limit, $`N_\nu <7`$, bettered the laboratory limit at the time by almost a factor of 1000. A recent analysis (Burles et al, 1999) finds $`N_\nu =2.84\pm 0.3`$ (95% cl; see Fig. 7), not quite as good as the LEP determination of $`3.07\pm 0.24`$, but still very impressive. Since we are convinced that there are just three standard neutrinos, both the LEP and BBN determinations are now used to search for the existence of new particles; in the case of BBN, light (mass less than about 1 MeV) species with sufficiently potent interactions to be present in significant numbers around the time of BBN. The current BBN limit, with the prior $`N_\nu \ge 3`$, is: $`N_\nu <3.2`$ (95% cl).
Neutrinos were the first candidate for nonbaryonic dark matter (motivated by since refuted evidence for a 30 eV electron neutrino mass in 1978), and this led to the hot dark matter theory of structure formation. While many of its features are qualitatively correct, e.g., the existence of voids, walls and sheets, it predicted “top down” formation of structure (White, Frenk & Davis, 1983) and it is now very clear that structure formed from the “bottom up.”
Because they are known to exist and are so abundant, neutrinos may well make up a significant part of the mass budget and be an interesting cosmic spice. Here are the numbers:
$`\mathrm{\Omega }_\nu =\frac{m_\nu }{90h^2\,\mathrm{eV}}\simeq \frac{m_\nu }{40\,\mathrm{eV}}`$
$`\mathrm{\Omega }_\nu /\mathrm{\Omega }_B=\frac{m_\nu }{1.7\,\mathrm{eV}}`$
$`\mathrm{\Omega }_\nu /\mathrm{\Omega }_{*}=\frac{m_\nu }{0.3h\,\mathrm{eV}}\simeq \frac{m_\nu }{0.2\,\mathrm{eV}}`$
The SuperKamiokande data, which indicate a neutrino mass-squared difference of around $`10^{-2}\,\mathrm{eV}^2`$, put the neutrino contribution to the cosmic mass budget at an amount at least comparable to that of bright stars. WOW! If the $`\sim 0.1\,\mathrm{eV}`$ mass corresponds to the lightest neutrino species, or if the mass difference squared arises from nearly degenerate neutrino species, the total could be even greater – $`\sum _im_{\nu _i}\simeq 1.7\,\mathrm{eV}`$ would make neutrinos as important as baryons.
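A back-of-the-envelope version of this bookkeeping, using the relations above (a sketch only; h = 0.65 is an assumed value, not taken from the text):

```python
import math

h = 0.65                         # assumed Hubble parameter (H_0 = 100 h km/s/Mpc)
dm2 = 1e-2                       # SuperKamiokande mass-squared difference (eV^2)
m_nu = math.sqrt(dm2)            # ~0.1 eV: lower bound on the heaviest neutrino mass

Omega_nu = m_nu / (90.0 * h**2)          # relic neutrino density, from the relation above
ratio_stars = m_nu / (0.3 * h)           # Omega_nu / Omega_stars
ratio_baryons = m_nu / 1.7               # Omega_nu / Omega_B

print(f"m_nu >~ {m_nu:.2f} eV  ->  Omega_nu >~ {Omega_nu:.4f}")
print(f"Omega_nu / Omega_stars  >~ {ratio_stars:.2f}")    # at least comparable to bright stars
print(f"Omega_nu / Omega_B      >~ {ratio_baryons:.3f}")  # reaches ~1 only if sum(m_nu) ~ 1.7 eV
```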
A neutrino mass of a few tenths of an eV is already very interesting from the point of view of large-scale structure formation. Hu et al (1998) have shown that a neutrino species of this mass can have a potentially detectable influence on large-scale structure, one which could well be measurable with Sloan Digital Sky Survey data. (When CBR anisotropy data are folded in as well, the detection mass-limit might well be even lower.) At the other extreme, the effect on the formation of large-scale structure is so profound (and bad) that $`\mathrm{\Omega }_\nu >0.15`$ ($`m_\nu \gtrsim 4\,\mathrm{eV}`$) can already be ruled out (Dodelson et al, 1996).
Finally, in the context of physics beyond the standard model, cosmology still has much to tell us about neutrinos. The properties of massive, decaying neutrinos can be severely constrained by BBN (Dodelson et al, 1994) and the CBR (Lopez et al, 1998). Further, a massive tau neutrino that decays a few seconds after the bang or later and produces relativistic particles can change the balance of matter and radiation, leading to an interesting variant of cold dark matter called $`\tau `$CDM (see below, and Dodelson et al, 1996).
## 5 Inflation + CDM in the Era of Precision Cosmology
As we look forward to the abundance (avalanche!) of high-quality observations that will test inflation + CDM, we have to make sure the predictions of the theory match the precision of the data. In so doing, CDM + inflation becomes a ten (or more) parameter theory. For astrophysicists, and especially cosmologists, this is daunting, as it may seem that a ten-parameter theory can be made to fit any set of observations. This is not the case when one has the quality and quantity of data that will be coming. The standard model of particle physics offers an excellent example: it is a nineteen-parameter theory and because of the high-quality of data from experiments at Fermilab’s Tevatron, SLAC’s SLC, CERN’s LEP and other facilities it has been rigorously tested and the parameters measured to a precision of better than 1% in some cases. My worry as an inflationist is not that many different sets of parameters will fit the upcoming data, but rather that no set will!
In fact, the ten parameters of CDM + inflation are an opportunity rather than a curse: Because the parameters depend upon the underlying inflationary model and fundamental aspects of the Universe, we have the very real possibility of learning much about the Universe and inflation. The ten parameters can be organized into two groups: cosmological and dark-matter (Dodelson et al, 1996).
Cosmological Parameters
1. $`h`$, the Hubble constant in units of $`100\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$.
2. $`\mathrm{\Omega }_Bh^2`$, the baryon density. BBN implies: $`\mathrm{\Omega }_Bh^2=0.019\pm 0.0024`$ (95% cl).
3. $`n`$, the power-law index of the scalar density perturbations. CBR measurements indicate $`n=1.1\pm 0.2`$; $`n=1`$ corresponds to scale-invariant density perturbations. Most models predict $`n\simeq 0.90`$–$`0.98`$; the range of predictions runs from $`0.7`$ to $`1.2`$ (Lyth & Riotto, 1996).
4. $`dn/d\mathrm{ln}k`$, “running” of the scalar index with comoving scale ($`k=`$ wavenumber). Most models predict a value of $`𝒪(\pm 10^{-3})`$ or smaller (Kosowsky & Turner, 1995).
5. $`S`$, the overall amplitude squared of density perturbations, quantified by their contribution to the variance of the CBR quadrupole anisotropy.
6. $`T`$, the overall amplitude squared of gravity waves, quantified by their contribution to the variance of the CBR quadrupole anisotropy. Note, the COBE normalization determines $`T+S`$.
7. $`n_T`$, the power-law index of the gravity wave spectrum. Scale-invariance corresponds to $`n_T=0`$; for inflation, $`n_T`$ is given by $`-\frac{1}{7}\frac{T}{S}`$.
Dark-matter Parameters
1. $`\mathrm{\Omega }_\nu `$, the fraction of critical density in neutrinos. While the hot dark matter theory of structure formation is not viable, it is possible that a small fraction ($`\mathrm{\Omega }_\nu <0.15`$) of the matter density exists in the form of neutrinos.
2. $`\mathrm{\Omega }_X`$, the fraction of critical density in a smooth component of unknown composition and negative pressure ($`w_X\lesssim -0.3`$). The SN Ia results and CBR anisotropy provide strong evidence for such a component, with the simplest example being a cosmological constant ($`w_X=-1`$).
3. $`g_{*}`$, the quantity that counts the number of ultra-relativistic degrees of freedom around the time of matter-radiation equality. In the standard cosmology/standard model of particle physics $`g_{*}=3.3626`$ (photons in the CBR + 3 massless neutrino species). The amount of radiation controls when the Universe became matter dominated and thus affects the present spectrum of matter inhomogeneity.
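The quoted value of $`g_{*}`$ is easy to reproduce; a one-line check (not from the text):

```python
# Photons (2 polarizations) plus 3 species of massless neutrinos and antineutrinos
# (fermions carry a 7/8 factor), with the energy density per neutrino degree of freedom
# diluted by (T_nu/T_gamma)^4 = (4/11)^(4/3).
g_star = 2 + (7.0 / 8.0) * 2 * 3 * (4.0 / 11.0) ** (4.0 / 3.0)
print(round(g_star, 4))   # 3.3626
```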
As mentioned, the parameters involving density and gravity-wave perturbations depend directly upon the inflationary potential. In particular, they can be expressed in terms of the potential and its first three derivatives:
$`S\equiv \frac{5|a_{2m}|^2}{4\pi }\simeq 2.2\,\frac{V_*/m_{\mathrm{Pl}}^4}{(m_{\mathrm{Pl}}V_*^{\prime }/V_*)^2}`$
$`n-1=-\frac{1}{8\pi }\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)^2+\frac{m_{\mathrm{Pl}}}{4\pi }\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)^{\prime }`$
$`\frac{dn}{d\mathrm{ln}k}=-\frac{1}{32\pi ^2}\left(\frac{m_{\mathrm{Pl}}^3V_*^{\prime \prime \prime }}{V_*}\right)\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)+\frac{1}{8\pi ^2}\left(\frac{m_{\mathrm{Pl}}^2V_*^{\prime \prime }}{V_*}\right)\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)^2-\frac{3}{32\pi ^2}\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)^4`$
$`T\equiv \frac{5|a_{2m}|^2}{4\pi }\simeq 0.61\,(V_*/m_{\mathrm{Pl}}^4)`$
$`n_T=-\frac{1}{8\pi }\left(\frac{m_{\mathrm{Pl}}V_*^{\prime }}{V_*}\right)^2`$
where $`V(\varphi )`$ is the inflationary potential, prime denotes $`d/d\varphi `$, and $`V_*`$ is the value of the scalar potential when the present horizon scale crossed outside the horizon during inflation.
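To illustrate how these expressions are used, here is a sketch (not the author's code); the quartic potential and the field value roughly 50 e-folds before the end of inflation are assumptions made only for this example:

```python
import math

m_pl = 1.0                      # work in Planck units
lam = 1e-14                     # assumed quartic self-coupling (sets the overall amplitude only)
phi_star = 4.4 * m_pl           # assumed field value ~50 e-folds before the end of inflation

V = lambda p: lam * p**4

def d(f, x, h=1e-5):            # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

steep = lambda p: m_pl * d(V, p) / V(p)        # the "steepness" m_Pl V'/V

S = 2.2 * (V(phi_star) / m_pl**4) / steep(phi_star)**2
T = 0.61 * (V(phi_star) / m_pl**4)
n = 1.0 - steep(phi_star)**2 / (8 * math.pi) + (m_pl / (4 * math.pi)) * d(steep, phi_star)
n_T = -steep(phi_star)**2 / (8 * math.pi)

# T/S = -7 n_T is the consistency relation discussed in the text
print(f"n = {n:.3f}   T/S = {T / S:.2f}   -7*n_T = {-7 * n_T:.2f}")
```

For this toy case the scalar spectrum comes out mildly tilted (n near 0.95) with an appreciable gravity-wave contribution, and the consistency relation is satisfied by construction.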
If one can measure $`S`$, $`T`$, and $`(n-1)`$, one can recover the value of the potential and its first two derivatives (see e.g., Turner 1993; Lidsey et al, 1997)
$`V_*=1.65\,T\,m_{\mathrm{Pl}}^4,`$ (1)
$`V_*^{\prime }=\pm \sqrt{\frac{8\pi }{7}\frac{T}{S}}\,V_*/m_{\mathrm{Pl}},`$ (2)
$`V_*^{\prime \prime }=4\pi \left[(n-1)+\frac{3}{7}\frac{T}{S}\right]V_*/m_{\mathrm{Pl}}^2,`$ (3)
where the sign of $`V_*^{\prime }`$ is indeterminate (under the redefinition $`\varphi \to -\varphi `$ the sign changes). If, in addition, the gravity-wave spectral index can also be measured, the consistency relation, $`T/S=-7n_T`$, can be used to test inflation.
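The inverse step is equally simple; a sketch of Eqs. (1)–(3) for hypothetical measured values (the numbers below are placeholders chosen for illustration, not results from the text):

```python
import math

# Hypothetical measured values (illustration only)
S, T_over_S, n = 3e-11, 0.2, 0.95
T = T_over_S * S
m_pl = 1.0                                                                  # Planck units

V_star = 1.65 * T * m_pl**4                                                 # Eq. (1)
V_prime = math.sqrt(8 * math.pi / 7 * T_over_S) * V_star / m_pl             # Eq. (2), sign indeterminate
V_dprime = 4 * math.pi * ((n - 1) + 3.0 / 7.0 * T_over_S) * V_star / m_pl**2  # Eq. (3)

print(f"V*^(1/4) = {V_star**0.25:.2e} m_Pl  (~{V_star**0.25 * 1.22e19:.1e} GeV)")
print(f"V*'  = +/- {V_prime:.2e} m_Pl^3")
print(f"V*'' = {V_dprime:.2e} m_Pl^2")
```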
Bunn & White (1997) have used the COBE four-year dataset to determine $`S`$ as a function of $`T/S`$ and $`n1`$; they find
$`\frac{V_*/m_{\mathrm{Pl}}^4}{(m_{\mathrm{Pl}}V_*^{\prime }/V_*)^2}=\frac{S}{2.2}=(1.7\pm 0.2)\times 10^{-11}\,\frac{\mathrm{exp}[-2.02(n-1)]}{\sqrt{1+\frac{2}{3}\frac{T}{S}}}`$ (4)
From which it follows that
$$V_*<6\times 10^{-11}\,m_{\mathrm{Pl}}^4,$$
(5)
equivalently, $`V_*^{1/4}<3.4\times 10^{16}\,\mathrm{GeV}`$. This indicates that inflation must involve energies much smaller than the Planck scale. (To be more precise, inflation could have begun at a much higher energy scale, but the portion of inflation relevant for us, i.e., the last 60 or so e-folds, occurred at an energy scale much smaller than the Planck energy.)
Finally, it should be noted that the ‘tensor tilt,’ deviation of $`n_T`$ from 0, and the ‘scalar tilt,’ deviation of $`n-1`$ from zero, are not in general equal; they differ by the rate of change of the steepness. The tensor tilt and the ratio $`T/S`$ are related: $`n_T=-\frac{1}{7}\frac{T}{S}`$, which provides a consistency test of inflation.
### 5.1 Present status of Inflation + CDM
A useful way to organize the different CDM models is by their dark-matter content; within each CDM family, the cosmological parameters vary. One classification is (Dodelson et al, 1996):
1. sCDM (for simple): Only CDM and baryons; no additional radiation ($`g_{*}=3.36`$). The original standard CDM is a member of this family ($`h=0.50`$, $`n=1.00`$, $`\mathrm{\Omega }_B=0.05`$), but is now ruled out (see Fig. 9).
2. $`\tau `$CDM: This model has extra radiation, e.g., produced by the decay of an unstable massive tau neutrino (hence the name); here we take $`g_{*}=7.45`$.
3. $`\nu `$CDM (for neutrinos): This model has a dash of hot dark matter; here we take $`\mathrm{\Omega }_\nu =0.2`$ (about 5 eV worth of neutrinos).
4. $`\mathrm{\Lambda }`$CDM (for cosmological constant): This model has a smooth component in the form of a cosmological constant; here we take $`\mathrm{\Omega }_\mathrm{\Lambda }=0.6`$.
Figure 9 summarizes the viability of these different CDM models, based upon CBR measurements and current determinations of the present power spectrum of inhomogeneity derived from redshift surveys (see Fig. 10). sCDM is only viable for low values of the Hubble constant (less than $`55\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$) and/or significant tilt (deviation from scale invariance); the region of viability for $`\tau `$CDM is similar to sCDM, but shifted to larger values of the Hubble constant (as large as $`65\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$). $`\nu `$CDM has an island of viability around $`H_0\simeq 60\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}`$ and $`n\simeq 0.95`$. $`\mathrm{\Lambda }`$CDM can tolerate the largest values of the Hubble constant.
### 5.2 The best fit Universe!
Considering other relevant data too – e.g., age of the Universe, determinations of $`\mathrm{\Omega }_M`$, measurements of the Hubble constant, and limits to $`\mathrm{\Omega }_\mathrm{\Lambda }`$ – $`\mathrm{\Lambda }`$CDM emerges as the ‘best-fit CDM model’ (Krauss & Turner, 1995; Ostriker & Steinhardt, 1995; Liddle et al, 1996; Turner, 1997b); see Fig. 11. Moreover, its ‘smoking-gun signature,’ accelerated expansion, has apparently been confirmed (Riess et al, 1998; Perlmutter et al, 1998) and it provides an excellent fit to the current CBR anisotropy data (see Fig. 4).
Despite my general enthusiasm, I would caution that it is premature to conclude that $`\mathrm{\Lambda }`$CDM is anything but the model to take aim at. Further, it should be noted that the SN Ia data do not yet discriminate between a cosmological constant and something else with large, negative pressure (e.g., rolling scalar field or frustrated topological defects).
## 6 Checklist for the Next Decade
As I have been careful to stress, the basic tenets of inflation + CDM have not yet been confirmed definitively. However, a flood of high-quality cosmological data is coming, and could make the case in the next decade. Here are some of the important aspects of inflation + CDM that will be addressed by these data:
* Map of the Universe at 300,000 yrs. COBE mapped the CMB with an angular resolution of around $`10^{\circ }`$; two new satellite missions, NASA’s MAP (launch 2000) and ESA’s Planck Surveyor (launch 2007), will map the CMB with 100 times better resolution ($`0.1^{\circ }`$). From these maps of the Universe as it existed at a simpler time, long before the first stars and galaxies, will come a gold mine of information: Among other things, a definitive measurement of $`\mathrm{\Omega }_0`$; a determination of the Hubble constant to a precision of better than 5%; a characterization of the primeval lumpiness; and possible detection of the relic gravity waves from inflation. The precision maps of the CMB that will be made are crucial to establishing inflation + cold dark matter.
* Map of the Universe today. Our knowledge of the structure of the Universe is based upon maps constructed from the positions of some 30,000 galaxies in our own backyard. The Sloan Digital Sky Survey will produce a map of a representative portion of the Universe, based upon the positions of a million galaxies. The Anglo-Australian 2-degree Field survey will determine the position of several hundred thousand galaxies. These surveys will define precisely the large-scale structure that exists today, answering questions such as, “What are the largest structures that exist?” Used together with the CMB maps, this will definitively test the CDM theory of structure formation, and much more.
* Present expansion rate $`H_0`$. Direct measurements of the expansion rate using standard candles, gravitational time delay, SZ imaging and the CMB maps will pin down the elusive Hubble constant once and for all. It is the fundamental parameter that sets the size – in time and space – of the observable Universe. Its value is critical to testing the self consistency of CDM.
* Cold dark matter. A key element of the theory is the cold dark matter particles that hold the Universe together; until we actually detect cold dark matter particles, it will be difficult to argue that cosmology is solved. Experiments designed to detect the dark matter that holds our own galaxy together are now operating with sufficient sensitivity to detect both neutralinos and axions (see e.g., Sadoulet, 1999; or van Bibber et al, 1998). In addition, experiments at particle accelerators (Fermilab and CERN) will be hunting for the neutralino and its other supersymmetric cousins.
* Nature of the dark energy. If the Universe is indeed accelerating, then most of the critical density exists in the form of dark energy. This component is poorly understood. Vacuum energy is only the simplest possibility for the smooth dark component; there are other possibilities: frustrated topological defects (Vilenkin, 1984; Pen & Spergel, 1998) or a rolling scalar field (see e.g., Ratra & Peebles, 1998; Frieman et al, 1995; Coble et al, 1997; Caldwell et al, 1998; Turner & White, 1997). Independent evidence for the existence of this dark energy, e.g., by CMB anisotropy, the SDSS and 2dF surveys, or gravitational lensing, is crucial for verifying the accounting of matter and energy in the Universe I have advocated. Additional measurements of SNe Ia could help shed light on the precise nature of the dark energy. The dark energy problem is not only of great importance for cosmology, but for fundamental physics as well. Whether it is vacuum energy or quintessence, it is a puzzle for fundamental physics and possibly a clue about the unification of the forces and particles.
## 7 New Questions; Some Surprises?
Will cosmologists look back on 1998 as a year that rivals 1964 in importance? I think it is quite possible. In any case, the flood of data that is coming will make the next twenty years in cosmology very exciting. It could be that my younger theoretical colleagues will get their wish – inflation + cold dark matter is falsified and it’s back to the drawing board. Or, it may be that it is roughly correct, but the real story is richer and even more interesting. This happened in particle physics. The quark model of the 1960s was based upon an approximate global $`SU(3)`$ flavor symmetry, which shed no light on the dynamics of how quarks are held together. The standard model of particle physics that emerged and which provides a fundamental description of physics at energies less than a few hundred GeV, is based upon the $`SU(3)`$ color gauge theory of quarks and gluons (QCD) and the $`SU(2)\otimes U(1)`$ gauge theory of the electroweak interactions. The difference between global and local $`SU(3)`$ symmetry was profound.
Even if inflation + cold dark matter does pass the series of stringent tests that will confront it in the next decade, there will be questions to address and issues to work out. Exactly how does inflation work and fit into the scheme of the unification of the forces and particles? Does the quantum gravity era of cosmology, which occurs before inflation, leave a detectable imprint on the Universe? What is the topology of the Universe and are there additional spatial dimensions? Precisely how did the excess of matter over antimatter develop? What happened before inflation? What does inflation + CDM teach us about the unification of the forces and particles of Nature? Last, but certainly not least, we must detect and identify the cold dark matter particles.
We live in exciting times!
## References
|
no-problem/9901/cond-mat9901097.html
|
ar5iv
|
text
|
# Berry Phase and the Symmetry of the Vibronic Ground State in Dynamical Jahn-Teller Systems
## Acknowledgement
We thank Arnout Ceulemans, Brian R. Judd, Erio Tosatti, and Lu Yu for useful discussions.
|
no-problem/9901/hep-ph9901318.html
|
ar5iv
|
text
|
# References
## Figure Captions
Fig. 1: The diagram that generates the “$`ϵ`$” entries of the quark and lepton mass matrices.
Fig. 2: The diagram that generates the “$`\sigma `$” entries of the mass matrices $`L`$ and $`D`$.
Fig. 3: The diagrams that produce small masses for the first-family quarks and leptons. The family index $`j`$ on the spinor field takes the values 2 or 3, giving, respectively, the $`\delta `$ and $`\delta ^{}`$ terms of the mass matrices. Different $`SO(10)`$ vector Higgs, denoted with superscript $`j`$, are exchanged in the two cases.
Fig. 4: A comparison of the model with experiment for the Wolfenstein parameters ($`\rho `$, $`\eta `$). The axes are $`\rho `$ and $`\eta `$ multiplied by the central value of $`\left|s_{12}V_{cb}\right|`$. The central values allowed by the model lie on the bold dashed circular arc, cf. Eq. (6). The constraints following from $`|V_{ub}|,B`$-mixing and $`ϵ`$ extractions from experimental data are shown in the lightly shaded regions. The experimentally allowed region is indicated by the heavily shaded central region. A typical unitarity triangle allowed by both data and the model is shown.
|
no-problem/9901/cond-mat9901054.html
|
ar5iv
|
text
|
# Hole-Density Evolution of the One-Particle Spectral Function in Doped Ladders
## Abstract
The spectral function $`A(𝐪,\omega )`$ of doped $`t-J`$ ladders is presented on clusters with up to $`2\times 20`$ sites at zero temperature, applying a recently developed technique that uses up to $`6\times 10^6`$ rung-basis states. Similarities with photoemission results for the 2D cuprates are observed, such as the existence of a gap at $`(\pi ,0)`$ near half-filling (caused by hole pair formation) and flat bands in its vicinity. These features should be observable in ARPES experiments on ladders. The main result of the paper is the nontrivial evolution of the spectral function from a narrow band at $`x=0`$ to a quasi-noninteracting band at $`x\sim 0.5`$. It was also observed that the low-energy peaks of the cluster spectra acquire finite line-widths as their energies move away from the chemical potential.
PACS numbers: 74.20.-z, 74.20.Mn, 75.25.Dw
Copper-oxide ladder compounds are currently under much investigation. Among their interesting properties are a spin-liquid ground state in the undoped limit, and the existence of superconductivity upon hole doping. Recently, the first angle-resolved photoemission (ARPES) studies of ladder materials have been reported. Both the doped and undoped ladder $`\mathrm{Sr}_{14}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ have been analyzed, finding one-dimensional metallic characteristics. Studies of the ladder compound $`\mathrm{La}_{1-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{CuO}_{2.5}`$ found similarities with $`\mathrm{La}_{2-\mathrm{x}}\mathrm{Sr}_\mathrm{x}\mathrm{CuO}_4`$, including a Fermi edge. Core-level photoemission experiments for $`(\mathrm{La},\mathrm{Sr},\mathrm{Ca})_{14}\mathrm{Cu}_{24}\mathrm{O}_{41}`$ documented its chemical shift against hole concentration. Note that the importance of ARPES studies for other materials such as the high-Tc cuprates is by now clearly established. Using this technique the evolution with doping of the Fermi surface has been discussed, including the existence of flat bands near momenta $`(0,\pi )`$–$`(\pi ,0)`$.
This plethora of experimental results for the cuprates should be compared against theoretical predictions. However, the calculation of the ARPES response even for simple models is a formidable task. The most reliable computational tools for these calculations are the Exact Diagonalization (ED) method, restricted to small clusters, and the Quantum Monte Carlo (QMC) technique supplemented by Maximum Entropy, limited in doped systems to high temperatures due to the sign problem. Currently, on ladders dynamical properties can be exactly calculated at all densities only on clusters of size $`2\times 8`$, while the QMC technique in the realistic regime of large $`U/t`$ (Hubbard model) has been applied on $`2\times 16`$ lattices only at half-filling and with 1 hole, the latter using an anisotropic ladder since for the isotropic case the sign-problem is severe.
Due to the limitations of these techniques, an important issue that remains unclear is the evolution of the one-particle spectral function between the undoped limit, dominated by antiferromagnetic (AF) fluctuations both on ladders and planes, and the high hole-density regime where those fluctuations are negligible. While both extreme cases are properly treated by previously available numerical methods, the transition from one to the other as the hole density $`x`$ grows is still unknown. This evolution is expected to be highly nontrivial. For instance, the presence of hole-pairs in lightly doped ladders suggests the opening of a gap in ARPES, similar to the pseudogap of underdoped high-Tc cuprates. Shadow-band features in undoped ladders, which are absent at higher hole densities, add to the complexity of this evolution.
Motivated by this challenging problem, in this paper the density evolution of the spectral function $`A(𝐪,\omega )`$ of doped 2-leg $`t-J`$ ladders is presented. The calculation is carried out at zero temperature on clusters with up to $`2\times 20`$ sites, increasing by a substantial factor the current resolution of the ED techniques. These intermediate size clusters were reached by working with a small fraction of the total Hilbert space of the system. The method is variational, although accurate as shown below. The improvement over previous efforts lies in the procedure used to select the basis states of the problem. The generation of the new basis is in the same spirit as any technique of the renormalization-group (RG) family. If the standard $`S_z`$-basis is used (3 states per site), experience shows that a large number of states is needed to reproduce qualitatively the spin-liquid characteristics of the undoped ladders. The reason is that in the $`S_z`$-basis one of the states with the highest weight in the ground state is still the Néel state, in spite of the existence of a short AF correlation length $`\xi _{AF}`$. A small basis built up around the Néel state incorrectly favors long-range spin order. However, if the Hamiltonian of the problem is exactly rewritten in, e.g., the $`rung`$-basis (9 states/rung for the $`t-J`$ model) before the expansion of the Hilbert space is performed, then the tendency to favor a small $`\xi _{AF}`$ is natural since one of the dominant states in this basis for the undoped case corresponds to the direct product of singlets in each rung, $`|S\rangle `$, which has $`\xi _{AF}=0`$ along the chains. Fluctuations of the Resonant-Valence-Bond (RVB) variety around $`|S\rangle `$ appear naturally in this new representation of the Hamiltonian, leading to a finite $`\xi _{AF}`$. Note that $`|S\rangle `$ is just $`one`$ state of the rung-basis, while in the $`S_z`$-basis it is represented by $`2^{N_r}`$ states with $`N_r`$ the number of rungs of the 2-leg ladder. In general a few states in the rung-basis are equivalent to a large number of states in the $`S_z`$-basis. Expanding the Hilbert space in the new representation is equivalent to working in the $`S_z`$-basis with a number of states larger than can be reached directly with present day computers. Here for simplicity this technique will be referred to as the Optimized Reduced Basis Approximation (ORBA).
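A toy illustration of the counting argument behind the rung basis (a sketch, not the authors' code):

```python
from itertools import product

# Single-site states of the t-J model (no double occupancy): empty, up, down
site = ['0', 'u', 'd']
rung_states = [a + b for a, b in product(site, site)]      # a rung = two sites
print(len(rung_states), "states per rung:", rung_states)   # 9 states per rung

# The rung singlet (|ud> - |du>)/sqrt(2) is a single state of the rung basis,
# but the product of singlets over all rungs expands into 2**N_r S_z configurations.
N_r = 10                                                    # rungs of a 2x10 ladder
print("rung-basis states for the singlet product :", 1)
print("S_z configurations contained in it        :", 2 ** N_r)
```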
As a first step, let us compare ORBA predictions for equal-time observables against DMRG results for the same clusters. Here a coupling $`J/t=0.4`$ is used. Its particular value is important: if $`J/t`$ is smaller, then pairs are lost while if it is larger superconducting correlations are important. Only in a small window of $`J/t`$ can the ground state be considered as formed by weakly interacting hole pairs, a regime that we want to investigate in this paper for its possible connection with the phenomenology of high-Tc at finite temperature. Fig.1a contains the ground state energy per site $`e_{GS}`$ vs $`x`$ using 2–3$`\times 10^6`$ states in the rung-basis. The DMRG energies are obtained with $`m=200`$ states and open boundary conditions (OBC). Both sets of data are in good agreement. On $`2\times 20`$ clusters, the rung-basis approach allowed us to study up to 6 holes which has a full space of $`10^{14}`$ states (for zero momentum and total spin), while the largest previously reported exact study on a $`2\times 10`$ cluster and 2 holes needs a $`5\times 10^5`$ basis. Calculating the binding energy, or the chemical potential $`\mu `$ vs $`x`$, from $`e_{GS}`$ supplemented by the energies for an odd number of electrons, a tendency to pair formation at low hole-density was observed. Fig.1b contains the hole-hole correlations at several densities, compared (in one case) with PBC DMRG results. In Fig.1c spin-spin correlations are shown. The rung-basis properly reproduces the existence of a small $`\xi _{AF}`$ in the ground state, that decreases as $`x`$ grows. Size effects are not large for the clusters studied here, and good agreement with DMRG is observed. This technique captures the essence of the ground state behavior.
To produce dynamical results Ref. was followed, namely 10–20% of the states of the reduced-basis $`N`$-hole ground state $`|\psi _0\rangle `$ were considered, and the reduced subspace with, e.g., $`N+1`$ holes and $`𝐪`$-momentum was obtained through $`\widehat{O}_𝐪^{\dagger }|\psi _0\rangle `$ ($`\widehat{O}_𝐪^{\dagger }=\sum _𝐣e^{i𝐪\cdot 𝐣}\overline{c}_𝐣^{\dagger }`$, with $`\overline{c}_𝐣^{\dagger }`$ the hole creation operator at $`𝐣`$, dropping the spin index, in the rung basis). All states generated by this procedure were kept, and we worked in such a subspace in the subsequent iterations of the continued fraction expansion. Only the bonding-band subspace is discussed here. The $`\delta `$-functions have a width $`0.1t`$ throughout the paper.
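For readers unfamiliar with the continued-fraction step, the sketch below shows a generic Lanczos evaluation of a zero-temperature spectral function with the poles broadened into Lorentzians. It is an illustration under stated assumptions (a dense or sparse Hamiltonian matrix in the target subspace, schematic sign convention for the frequency), not the authors' implementation:

```python
import numpy as np

def spectral_function(H, E0, O_dag_psi0, omegas, eta=0.1, n_iter=60):
    """Lanczos / continued-fraction sketch of a T=0 spectral function.
    H          : Hamiltonian in the (N+1)-hole, momentum-q target subspace
    E0         : ground-state energy of the N-hole state |psi_0>
    O_dag_psi0 : the vector O_q^dagger |psi_0>
    eta        : Lorentzian width standing in for the 0.1t broadening."""
    v = np.asarray(O_dag_psi0, dtype=complex)
    weight = np.vdot(v, v).real                 # total spectral weight in this channel
    v = v / np.sqrt(weight)
    a, b = [], []
    v_prev, beta = np.zeros_like(v), 0.0
    for _ in range(n_iter):                     # Lanczos tridiagonalization
        w = H @ v - beta * v_prev
        alpha = np.vdot(v, w).real
        w = w - alpha * v
        a.append(alpha)
        beta = np.linalg.norm(w)
        if beta < 1e-12:
            break
        b.append(beta)
        v_prev, v = v, w / beta
    b = b[:len(a) - 1]                          # keep the tridiagonal matrix square
    Tmat = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    theta, s = np.linalg.eigh(Tmat)
    poles = theta - E0                          # pole positions (sign convention is schematic)
    residues = weight * np.abs(s[0, :]) ** 2    # pole strengths
    omegas = np.asarray(omegas, dtype=float)
    # Each delta-function pole is broadened into a Lorentzian of width eta (0.1t in the paper)
    return sum(r * (eta / np.pi) / ((omegas - p) ** 2 + eta ** 2)
               for p, r in zip(poles, residues))
```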
Fig.2a corresponds to the undoped limit. A sharp peak is observed at the top of the PES spectra, maximized at momenta $`q_x=7\pi /10`$, i.e. close to the Fermi momentum for noninteracting electrons $`q_x^F=0.66\pi `$. The band defined by those peaks has a small bandwidth, as in 2D models, due to the interaction of the injected holes with the spin background. Note that all peaks at momenta $`q_x\gtrsim \pi /2`$ carry a similar weight and the dispersion is almost negligible. This unusual result is caused by strong correlation effects. The PES weight above $`q_x^F`$, e.g. at $`q_x=\pi `$, is induced by the finite but robust $`\xi _{AF}`$, and its existence resembles the antiferromagnetically induced “shadow” features discussed before in 2D models.
Fig.2b contains results at low but finite hole-density. Several interesting details are observed: (i) the PES band near $`\mu `$ is flat. This should be an ARPES-observable result resembling experiments in 2D cuprates, and it adds to the growing evidence linking the physics of ladders and planes; (ii) $`q_x=\pi `$ ($`\pi /2`$) PES has lost (gained) weight compared with $`x=0`$; (iii) the total PES bandwidth has increased; and (iv) the IPES band is intense near $`q_x=\pi `$, and it is separated from the PES band by a gap. The observed gap is $`\mathrm{\Delta }\simeq 0.4t`$ and it is caused by hole pairing. The DMRG/PBC binding energy calculated for the same cluster and density is $`0.32t`$ ($`m=200`$, truncation error $`10^{-4}`$). In the overall energy scale of the ARPES spectra, this difference is small and does not affect the study of the evolution of the dispersion shown here. Note that the results of Fig.2b are similar to those observed near $`(\pi ,0)`$ using ARPES in the normal state of the 2D cuprates.
Fig.3a contains results at $`x=0.1875`$. The trends observed at $`x=0.1`$ continue, the most dramatic being the reduction of the $`q_x=\pi `$ PES weight caused by the decrease in $`\xi _{AF}`$. The lost weight appears in the $`q_x=\pi `$ IPES signal. The gap is still observed in the spectrum. Weak BCS-like features both in PES and IPES near $`q_x^F`$ can be seen. Fig.3b contains data at $`x=0.3125`$, and up to $`6\times 10^6`$ states. The Hilbert space is maximized at this density for the $`2\times 16`$ cluster. Now the result more closely resembles a noninteracting system on a discrete lattice. The IPES signal is no longer very flat, and the IPES band now has a clear energy minimum near the momentum where PES is maximized. Fig.3c contains results for $`x=0.5`$ where a quasi-non-interacting dispersion is obtained using about $`3\times 10^6`$ states in $`|\psi _0\rangle `$. The inset shows that the trend continues at lower electronic densities. The bandwidth evolves from being dominated by $`J`$ near half-filling, to having $`t`$ as the natural scale at $`x\simeq 0.3`$ or larger. This evolution is smooth, yet nontrivial, following the reduction of $`\xi _{AF}`$ with doping.
A conceptually interesting issue in the context of finite-cluster spectra of electronic models is whether finite line-widths for the dominant peaks can be obtained by such a procedure. Studying the small clusters reached by ED techniques, it naively seems that those peaks are usually generated by just one $`\delta `$-function (one pole). However, in the bulk limit, peaks away from the Fermi level should have an intrinsic width. How can we reach such a limit from finite clusters? One possibility is that as the cluster grows, the number of poles $`N_p`$ in a small energy window centered at the expected peak position must grow also, with their individual intensities becoming smaller such that the combined strength remains approximately constant. While this idea seems reasonable, it still has no explicit verification, but the intermediate size clusters reached in this study allow us to test it. Consider as an example Fig.3a where the actual energy and intensity of the poles contributing to the main features are shown explicitly. As the peaks move away from the top of the PES band, $`N_p`$ was indeed found to increase, providing evidence compatible with the conjecture made above.
Fig.4a contains the main-peak weights in the PES band vs density. Size effects are small. The weight at $`q_x=\pi `$ diminishes rapidly with $`x`$, following the strength of the spin correlations of Fig.1c. Overall the region affected the most by spin correlations is approximately $`x\lesssim 0.25`$. Fig.4b summarizes the main result of the paper, providing the reader with the evolution with $`x`$ of the dominant ladder peaks in $`A(𝐪,\omega )`$. The areas of the circles are proportional to the peak intensities. At small $`x`$ a hole-pairing-induced gap centered at $`\mu `$ is present in the spectrum, both the PES and IPES spectra are flat near $`(\pi ,0)`$, and the band is narrow. The PES flat regions at high momenta exist also in the undoped limit, where they are caused by the short-range spin correlations. Actually the resolution in densities and momenta achieved in this study allows us to reach the conclusion that the undoped and lightly doped regimes are smoothly connected. As $`x`$ grows to $`\sim 0.3`$, the flat regions rapidly lose intensity near $`(\pi ,0)`$, and the gap collapses.
The many similarities between ladders and planes discussed in previous literature suggest that our results may also be of relevance for 2D systems along the line $`(0,0)`$–$`(\pi ,0)`$. For instance, the abnormally flat regions near $`(\pi ,0)`$ (Fig.2b) are similar to ARPES data for the 2D cuprates, and they should appear in high resolution photoemission experiments for ladders as well. Note that in the regime studied with pairs in the ground state, the flat bands do not cross $`\mu `$ with doping but they simply melt. When $`x`$ is between 0.3 and 0.4, a quasi-free dispersion is recovered. The results of Fig.4b resemble a Fermi level crossing at $`x\simeq 0.3`$ and beyond, while at small hole density no crossing is observed. It is remarkable that the same qualitative behavior appeared in the ARPES results observed recently in underdoped and overdoped LSCO. These common trends on ladders and planes suggest that the large energy scale (LES) pseudogap ($`\sim 0.2\,\mathrm{eV}`$) of the latter may be caused by similar long-lived $`d`$-wave-like tight hole pairs in the normal state, as it happens in doped ladders, where the pairing is caused by the spin-liquid RVB character of the ground state. In the 2D case a similar effect may originate in the finite $`\xi _{AF}`$ observed in the underdoped finite temperature regime (although there is no clear evidence in the 2D layered materials of a spin-liquid ground state). A consequence of this idea is that the pairs and LES ARPES pseudogap are correlated and they should exist as long as $`\xi _{AF}`$ is non-negligible, a prediction that can be tested experimentally.
Summarizing, the bonding-band spectral function of the 2-leg $`t-J`$ model has been calculated, and results can be used to guide future ARPES experiments for ladder compounds. These experiments should observe flat bands and gap features near $`(\pi ,0)`$ in the normal state. The data was found to be remarkably similar to experimental results for the 2D cuprates along the $`(0,0)`$–$`(\pi ,0)`$ line. A common explanation for these features was proposed. Finally, note that the ORBA method discussed here introduces a new way to calculate dynamical properties of spin and hole models on intermediate size clusters. The method can be applied to a variety of strongly correlated electronic models.
The authors specially thank J. Riera for many useful suggestions. The financial support of the NSF grant DMR-9520776, CNPq-Brazil, CONICET-Argentina, and the NHMFL In-House Research Program (DMR-9527035) is acknowledged.
|
no-problem/9901/astro-ph9901369.html
|
ar5iv
|
text
|
# The Spectra of Main Sequence Stars in Galactic Globular Clusters II. CH and CN Bands in M71

(Based on observations obtained at the W.M. Keck Observatory, which is operated jointly by the California Institute of Technology and the University of California.)
## 1 INTRODUCTION
The major issue I intend to explore in this series of papers is that of star-to-star variations in abundances within a single globular cluster at and below the level of the main sequence turnoff. In the first paper of this series (Cohen 1999), I presented an overview of this subject and the historical background for the present work, which began with the study of the giants in M13 by Suntzeff (1981). To summarize very briefly, star-to-star variations in several light elements, particularly C and N, are seen on the giant and subgiant branches of many globular clusters. The behavior of these variations within those globular clusters studied in most detail is in many cases consistent with what is expected for mixing as the explanation for most of the observed variations. 47 Tuc was the only galactic globular cluster studied at the level of the main sequence, where mixing should not yet have occurred, yet Cannon et al. (1998) and references therein find detectable variations in the CH and CN bands for main sequence stars in 47 Tuc, and these variations are anti-correlated.
Globular cluster main sequence stars should not yet have synthesized through internal nuclear burning any elements heavier than He (and Li and Be) and hence will be essentially unpolluted by the internal nuclear burning and production of various heavy elements that occur in later stages of stellar evolution. Theory predicts that these stars are unaffected by gravitational settling and that their surfaces should be a fair representation of the gas from which the globular cluster formed. Thus the persistence of variations in C and N to such low luminosities in 47 Tuc (Cannon et al. 1998, and references therein) is surprising. The major recent reviews in this area are those of Kraft (1994) and Briley, Hesser & Smith (1994), while the more general reviews of McWilliam (1997) and of Pinsonneault (1997) are also relevant.
In paper 1 I analyzed spectra of 50 stars at or below the main sequence turnoff in M13. I did not find any detectable variation in the strength of the CN or CH features in this sample. However, M13 is a quite metal poor globular cluster, and hence here I present an analysis of the spectra of 79 stars on the main sequence of M71, a globular cluster with metallicity comparable to that of 47 Tuc.
## 2 THE SAMPLE OF STARS
M71 was chosen as the second cluster in this study because of its metallicity (Cohen 1983) and because it is nearby, hence the turnoff stars will be relatively bright. This globular cluster is located at galactic latitude b<sup>II</sup> = $`-4.6^{\circ }`$ and has a reddening $`E(B-V)=0.27`$ mag (Reed, Hesser & Shawl 1988).
Short exposure images in $`B`$ and $`R`$ were taken with the Low Resolution Imaging Spectrograph (Oke et al. 1995) at the Keck Observatory centered on the cluster. Photometry was obtained with DAOPHOT (Stetson 1987) using these short exposures calibrated on the system of Landolt (1992). The zero point for each color is uncertain by $`\pm 0.05`$ mag. A sample of main sequence stars was chosen based on their position on the locus of the main sequence as defined by this photometry. Each candidate was inspected for crowding and stars were chosen for the spectroscopic sample on the basis of minimum crowding. Table 1 gives the object’s coordinates (B1950), $`R`$ mag, $`B-R`$ color, and indices (together with their errors) for two molecular bands, the G band of CH at 4305 Å and the CN band near 3880 Å, for the M71 main sequence stars in the spectroscopic sample. The magnitudes and colors given in Table 1 have not been corrected for extinction.
Since the fields are very crowded, in addition to providing the star coordinates, we provide an identification chart (Figure 1) for a few stars in the M71 sample, from which, given the accurate relative coordinates, the rest of the stars can be located. Relative stellar coordinates are defined from the LRIS images themselves assuming the telescope pointing recorded in the image header is correct and taking into account the optical distortions in the images. The astrometry of Cudworth (1985) is used to fix the absolute coordinates.
Figure 2 presents a color-magnitude diagram for the main sequence stars in the M71 sample. The stars that have been observed spectroscopically are displaced by $`0.6`$ mag in $`B-R`$ color and are shown as filled circles.
The color-magnitude diagram of the field of M71 shown in Figure 2 reflects the low galactic latitude of this cluster. There are many field stars, most redder and presumably more distant than the globular cluster itself. Because the radial velocity of M71 is $`-27`$ km s<sup>-1</sup> (Cohen 1980), it is not possible to isolate a sample of cluster members using radial velocity measurements from low dispersion spectra. My sample will therefore have some field star contamination. To estimate the fraction of field stars in such a sample, Figure 3 shows a histogram in color of all stars with $`17.3<R<17.6`$ mag. The sharp rise at the blue end of the distribution arises from the cluster, while the extended tail to the red is from field stars.
The M13 main sequence shown in Paper 1 is very narrow in color. Some of the apparent spread in color seen in the case of M71 may be due to variations in reddening. These are easily detected at the level of $`\sim 25\%`$ of E$`(B-V)`$ (and corresponding amounts in other colors) from multicolor photometry of stars on the red giant branch in more heavily reddened galactic globular clusters (Cohen & Sleeper 1995). In addition, the field of M71 is very crowded, even more so than either of the two fields in M13 studied in Paper 1. This may lead to photometric errors which might produce an apparent spread in color of the M71 main sequence.
Based on Figure 3 one might estimate a field star contamination which at worst does not exceed 25% of the sample.
## 3 SPECTROSCOPIC OBSERVATIONS AND MEASUREMENT OF BAND INDICES
Three slitmasks were designed containing 79 stars from the M71 main sequence star sample. These were used at relatively low dispersion with the LRIS (300 g/mm grating, 2.46Å/pixel, 0.7 arcsec slit width) for a spectral resolution of 8Å. The CCD detector is digitized at 2 electrons/DN with a readout noise of 8 electrons. Two 800 sec exposures were taken with each slitmask under conditions of good seeing and dark sky in the summer of 1998. The data were reduced in a straightforward manner as described in Cohen et al. (1999) using Figaro (Shortridge 1988) except that the wavelength calibration came from arc lamp exposures, rather than from night sky lines on the spectra themselves. The spectra are not fluxed.
The definition of the CH and uvCN indices follows that of Paper 1, except that the wavelengths are shifted to take into account the difference in mean radial velocity between M13 and M71. The CH index again uses continuum bandpasses on both sides of the G band at 4305Å, with a feature bandpass chosen to avoid H$`\gamma `$. The CH and uvCN indices thus measured, together with their $`1\sigma `$ errors calculated assuming Poisson statistics, are given in Table 1. The values are the fraction of absorption from the continuum, and are not in magnitudes. Recall that the errors given in Table 1 do not include the effect of cosmic rays or of the background signal from the night sky, both of which are small.
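A rough illustration of how such an index and its Poisson error can be formed from a spectrum (a sketch only; the exact bandpass limits are not given here, so the wavelength windows in the usage comment are placeholders):

```python
import numpy as np

def band_index(wave, counts, feature, blue_cont, red_cont):
    """Fractional absorption of a molecular band below the interpolated continuum,
    with a 1-sigma error from Poisson statistics (a sketch; counts assumed in electrons)."""
    def band_mean(lo, hi):
        sel = (wave >= lo) & (wave <= hi)
        m = counts[sel].mean()
        return m, np.sqrt(m / sel.sum())        # Poisson error on the mean
    f, sf = band_mean(*feature)
    cb, scb = band_mean(*blue_cont)
    cr, scr = band_mean(*red_cont)
    cont, scont = 0.5 * (cb + cr), 0.5 * np.hypot(scb, scr)
    index = 1.0 - f / cont                      # fraction of absorption, not magnitudes
    err = (f / cont) * np.hypot(sf / f, scont / cont)
    return index, err

# Placeholder bandpasses (Angstroms) around the 4305 A G band, chosen only for illustration:
# idx, sig = band_index(wave, counts, feature=(4285, 4315), blue_cont=(4230, 4260), red_cont=(4350, 4380))
```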
The M71 main sequence stars in my sample are somewhat brighter than those of M13, and the spectra are thus of even higher signal-to-noise than those of my main sequence sample in M13. Illustrative examples of the latter are shown in Figure 3 of Paper 1.
The continuum level in one star of the 79 in the M71 main sequence sample fell slightly below the minimum value (700 DN/pixel) set in Paper 1 for accepting the uvCN index measurements. However the features are so much stronger in this globular cluster that even the value for this object was accepted.
## 4 ANALYSIS
Figure 4 shows the CH index plotted as a function of $`R`$ mag for the 79 stars in my M71 main sequence sample. The 1$`\sigma `$ error bars are shown for each star. It is immediately clear that there is a large range in CH strength at all luminosities, which range is many times the measurement uncertainties.
Four of the stars in my M71 sample have very strong CH for their magnitude and are believed to be field stars. They are indicated by “x” symbols and are the only objects within the rectangle at the upper right of the figure. Ignoring these four stars, a second-order least-squares fit of the CH index as a function of $`R`$ mag was carried out. Objects that lie above the mean fit are shown in Figure 4 as open circles, while objects that lie below this curve are shown as filled circles.
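The fit-and-classify step can be written compactly (a sketch; the cut used to flag the four probable field stars is not specified numerically in the text, so it is passed in as a mask):

```python
import numpy as np

def classify_ch(R, CH, field_mask):
    """Second-order least-squares fit of CH index vs R mag with probable field stars
    excluded, then a split of the members about the mean relation (a sketch)."""
    member = ~field_mask
    coeff = np.polyfit(R[member], CH[member], deg=2)
    mean_fit = np.polyval(coeff, R)
    ch_strong = member & (CH > mean_fit)        # plotted as open circles in Fig. 4
    ch_weak = member & (CH <= mean_fit)         # plotted as filled circles
    return coeff, ch_strong, ch_weak
```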
Ignoring the four probable field stars, the distribution of CH indices now appears to be approximately bimodal for the M71 sample.
Figure 5 show the results for the uvCN indices in the 79 M71 main sequence stars, again plotted as a function of $`R`$ mag. As was the case for the CH band indices, a large range in the strength of the uvCN band at a fixed luminosity is seen for the M71 main sequence. The overall appearance of the distribution is that it is bimodal. The same symbols are used in Figure 5 as in Figure 4. Comparing the two figures, it is immediately apparent that the CH and uvCN indices are anti-correlated. There are approximately equal numbers of CH strong/CN weak and CH weak/CN strong stars. Langer, Suntzeff & Kraft (1992) find this fraction determined from spectra of the red giants to vary from cluster to cluster among a set of three galactic globular clusters (M3, M13 and M79) of intermediate metallicity.
The good correlations seen in Figures 4 and 5 provide evidence that my M71 main sequence sample is not seriously contaminated by field stars.
## 5 DISCUSSION
In Paper 1 I showed that my data provide no evidence for variations of CH or CN band strengths among the 50 main sequence stars in our sample in M13. Here in the case of M71 I have found a range in CH and uvCN indices at a given luminosity along the M71 main sequence which is much larger than the measurement errors. Furthermore, the CH and uvCN indices are anti-correlated, and both appear to be bimodal.
It may well be that even though the measurement errors in M13 are small, the low metallicity of the cluster and the definition of such relatively crude molecular band indices conspire to hide any variation that may actually be present. The next paper in this series (Briley & Cohen 1999) will explore this possibility and will attempt to provide a guess as to the range of C and N variations that may be present in each of the two globular clusters, M71 and M13.
The next step will be to analyze other light elements in M71 to see if variations along the main sequence can be detected in Na, Ca or Mg, for example. Na and to a lesser extent Mg are known to vary among the giants and subgiants in several globular clusters. Suitable data from the LRIS at the Keck Observatory consisting of spectra of significantly higher dispersion than those analyzed here or in Paper 1 are already in hand for both the M71 and the M13 sample. These spectra can also be used to eliminate at least some of the field stars in the M71 sample through radial velocities.
Now that it is clear that variations of C and N are strong in at least two metal rich galactic globular clusters at the level of the main sequence, a major effort needs to be mounted to differentiate between mixing and primordial variations. Understanding the origin of these star-to-star variations at the level of the main sequence is an issue of importance not only to the field of globular cluster studies, but also has ramifications throughout many areas of stellar evolution and galaxy halo ages. I will return to this issue in future papers in this series.
## 6 SUMMARY
I have determined the strength of the CH and CN bands from spectra of 79 main sequence stars in M71. Significant variations in the strength of the G band of CH at 4305 Å and of the ultraviolet CN band at 3885 Å are seen from star to star at a fixed luminosity on the main sequence of M71. Both the CH indices and the uvCN indices appear to be bimodal and they are anti-correlated. There are approximately equal numbers of CN weak/CH strong and CN strong/CH weak main sequence stars in M71.
This is in contrast to the case of M13 discussed in Paper 1, where no variations are seen for C and N in M13 at the level of the main sequence turnoff and below it. I suggest that the variations may actually be present in M13 but cannot be detected with the molecular band indices I am using due to the low metallicity of M13 and to the lack of sensitivity of the molecular band indices themselves. The origin of this behavior, whether it is due mixing or to primordial variations or to some combination of these two factors, is not yet clear.
###### Acknowledgements.
The entire Keck/LRIS user community owes a huge debt to Jerry Nelson, Gerry Smith, Bev Oke, and many other people who have worked to make the Keck Telescope and LRIS a reality and to operate and maintain the Keck Observatory. We are grateful to the W. M. Keck Foundation, and particularly its late president, Howard Keck, for the vision to fund the construction of the W. M. Keck Observatory. I also thank Kevin Richberg for help with the data reduction.
|
no-problem/9901/astro-ph9901126.html
|
ar5iv
|
text
|
# WATER ICE, SILICATE AND PAH EMISSION FEATURES IN THE ISO SPECTRUM OF THE CARBON-RICH PLANETARY NEBULA CPD–56°8032

(Based on observations with ISO, an ESA project with instruments funded by ESA Member States (especially the PI countries: France, Germany, the Netherlands and the United Kingdom) with the participation of ISAS and NASA.)
## 1 Introduction
CPD–56°8032 (hereafter CPD) belongs to the rare class of late WC-type nuclei of planetary nebulae and is classified as \[WC10\] in the scheme of Crowther, De Marco & Barlow (1998). It is thought that such objects may be one result of helium shell-flashes in low- and intermediate-mass stars on the Asymptotic Giant Branch (AGB). For a certain fraction of these double-shell burning stars, a helium shell flash may have ejected or ingested essentially all the remaining hydrogen-rich outer envelope. The resulting star could then be H-poor, like the late WC-type (\[WCL\]) planetary nebula nuclei (PNNs) whose spectra essentially mimic those of bona fide population I Wolf-Rayets, although mostly with lower wind velocities.
The large near- and mid-infrared excess of CPD has been known for over twenty years (Webster & Glass 1974; Cohen & Barlow 1980; Aitken et al. 1980), and has been attributed to emission by dust grains. Mid-infrared emission bands, most often attributed to PAHs (e.g., Allamandola, Tielens, & Barker 1989) were detected in ground-based 8–13-$`\mu `$m spectra by Aitken et al. (1980) and in airborne 5–8-$`\mu `$m spectra by Cohen et al. (1989). Longer wavelength PAH bending modes were identified by Cohen, Tielens & Allamandola (1985) from 7.7–22.7 $`\mu `$m spectroscopy of CPD with the IRAS Low Resolution Spectrometer (hereafter LRS). CPD has the highest measured luminosity fraction in the 7.7-$`\mu `$m band of any object (Cohen et al. 1989) and its nebula is characterized by a gas-phase C/O number ratio of 13 (De Marco, Barlow & Storey 1997; hereafter DMBS97), the joint highest gas-phase C/O ratio measured for a planetary nebula.
We present Infrared Space Observatory (ISO) Long Wavelength Spectrometer (LWS; Clegg et al. 1996; Swinyard et al. 1996) 43–197-$`\mu `$m full grating spectra of CPD, combined with Short Wavelength Spectrometer (SWS; de Graauw et al. 1996) 2.4–45-$`\mu `$m grating spectra of this object, obtained in the LWS Guaranteed Time program. Preliminary results on the ISO spectra of CPD were presented by Barlow (1998).
## 2 The ISO spectrum of CPD-56<sup>o</sup>8032
Full wavelength coverage grating mode LWS01 spectra of CPD were secured during ISO revolution 84. The spectral resolution was 0.6-$`\mu `$m in first order (84–197-$`\mu `$m) and 0.3-$`\mu `$m in second order (43–93-$`\mu `$m). The spectra consisted of eight fast scans, each comprising a 0.5-s integration ramp at each grating position, sampled at 1/4 of a spectral resolution element. Our low-resolution 2.4–45-$`\mu `$m SWS grating spectrum of CPD was taken during ISO revolution 273. The SWS01 AOT was used at Speed 1, yielding a mean spectral resolving power of 200–300 over the whole spectrum. Standard pipeline processing (LWS OLP6.0 and SWS OLP6.1) was used to extract, reduce and calibrate all the separate spectral fragments. The ISAP and SIA packages provided the capability to examine the spectral fragments in detail.
Fig. 1 presents our complete wavelength coverage of CPD, combining SWS and LWS data. First we spliced all the SWS and LWS subspectra separately, following the methods described by Cohen, Walker & Witteborn (1992; CWW), then joined the composite SWS and LWS portions, which required scaling the LWS spectrum by 0.99$`\pm `$0.01 to register it with the SWS spectrum. The resultant 2.4–197-$`\mu `$m spectrum was normalized to the Point Source Catalog (PSC) photometry (see CWW) in all 4 IRAS bands. This necessitated a further rescaling of the total spectrum of CPD by a factor 1.05$`\pm `$0.04.
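The splicing and scaling just described can be illustrated with a short sketch (Python). The spectra below are synthetic placeholders rather than the actual SWS/LWS data, and the median flux ratio in the overlap region is simply one reasonable estimator of the relative scale factor between two segments.

```python
import numpy as np

def overlap_scale(wave_a, flux_a, wave_b, flux_b):
    """Median ratio flux_a/flux_b over the wavelength overlap of two segments."""
    lo, hi = max(wave_a.min(), wave_b.min()), min(wave_a.max(), wave_b.max())
    grid = np.linspace(lo, hi, 50)
    fa = np.interp(grid, wave_a, flux_a)
    fb = np.interp(grid, wave_b, flux_b)
    return np.median(fa / fb)

# Synthetic example: the second segment is the same spectrum offset by ~1 per cent.
wave_sws = np.linspace(2.4, 45.0, 500)                                   # micron
flux_sws = 100.0 * np.exp(-0.5 * ((wave_sws - 10.0) / 8.0) ** 2) + 5.0   # Jy, toy shape
wave_lws = np.linspace(43.0, 197.0, 400)
flux_lws = 1.01 * (100.0 * np.exp(-0.5 * ((wave_lws - 10.0) / 8.0) ** 2) + 5.0)

scale = overlap_scale(wave_sws, flux_sws, wave_lws, flux_lws)
print(f"LWS-to-SWS scale factor ~ {scale:.3f}")   # ~0.99 for this toy case
```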
\[Fig. 1 (not reproduced here); the lower panel covers 38–90 $`\mu `$m.\]
The ISO spectrum of CPD in Fig. 1 exhibits unresolved emission lines of \[C $`\mathrm{ii}`$\] 158-$`\mu `$m, \[O $`\mathrm{i}`$\] 63- and 146-$`\mu `$m and CO rotational lines between J = 14–13 at 186.0 $`\mu `$m and J = 19–18 at 137.2 $`\mu `$m, which are all excited in the nebular photodissociation region. The canonical spectrum of PAH emission bands dominates the peak of Fig. 1 below 15 $`\mu `$m but the most striking aspect is that, despite the carbon-dominated stellar and nebular chemistry, the spectrum longwards of 15 $`\mu `$m is dominated by emission features usually associated with the circumstellar envelopes of O-rich stars (Glaccum 1995, Waters et al. 1996, Waelkens et al. 1996). Waters et al. (1998) have presented SWS spectra of two other C-rich PNe with \[WCL\] nuclei, which also show both PAH and crystalline silicate emission features. The ISO spectrum of CPD shown here has an additional remarkable property in that crystalline water ice features are present in emission at 43 and 62 $`\mu `$m (see below).
To identify and quantify the intensities of the many apparent emission bands, we have removed all obvious features from the SWS+LWS spectrum to define a set of continuum points longwards of about 4.5 $`\mu `$m. The simplest fit to this continuum was the sum of two blackbodies, with their temperatures and solid angles optimized by least-squares fitting to provide a lower envelope to the observed spectrum. This lower bound was achieved by ensuring that the difference spectrum (observed-minus-continuum) is negative only over small wavelength intervals in order to avoid truncating any emission features. The best fit is achieved for temperatures of 470$`\pm `$5K and 135$`\pm `$5K, which we interpret as grains within the ionized nebula heated by direct starlight (470K) or by resonantly trapped Lyman$`\alpha `$ photons. Subtraction of this simple continuum yields the CPD “excess” spectrum. Below 5 $`\mu `$m, the steep excess can be attributed to hot grains ($`\sim `$1600K). It is far above the stellar continuum radiation calculated by De Marco & Crowther (1998) and reddened using DMBS97’s value of E(B-V)=0.68, which contributes no more than 5% of the emission at 2.4 $`\mu `$m and rapidly diminishes with increasing wavelength.
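A schematic version of such a lower-envelope fit is sketched below (Python). The input spectrum is synthetic, and the asymmetric weighting factor, starting values and parameter bounds are illustrative choices; only the idea of penalising the model wherever it rises above the data reflects the procedure described above.

```python
import numpy as np
from scipy.optimize import least_squares

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16            # cgs constants

def planck_nu(wave_um, T):
    """B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1 at a wavelength given in microns."""
    nu = C / (wave_um * 1.0e-4)
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * T))

def model_jy(p, wave_um):
    # p = (log10 Omega_1 [sr], T_1 [K], log10 Omega_2 [sr], T_2 [K])
    return (10**p[0] * planck_nu(wave_um, p[1]) +
            10**p[2] * planck_nu(wave_um, p[3])) * 1.0e23

def envelope_resid(p, wave_um, flux_jy):
    r = flux_jy - model_jy(p, wave_um)
    # Penalise the model exceeding the data, so the fit hugs the lower envelope.
    return np.where(r < 0.0, 10.0 * r, r)

# Toy "continuum point" spectrum: two blackbodies plus weak positive features.
wave = np.geomspace(4.5, 190.0, 120)
truth = model_jy((-13.0, 470.0, -11.0, 135.0), wave)
data = truth * (1.0 + 0.05 * np.abs(np.sin(wave)))

fit = least_squares(envelope_resid, x0=(-13.5, 500.0, -11.5, 120.0),
                    args=(wave, data),
                    bounds=([-16, 300, -14, 80], [-10, 700, -9, 200]))
print("best-fit blackbody temperatures:", fit.x[1], fit.x[3])
```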
The standard PAH emission bands (Cohen et al. 1989) dominate the excess (Fig. 1) with features at 3.3, 5.2, 6.2, 6.9, 7.7, 8.7 and 11.3 $`\mu `$m, together with the underlying emission plateaus at 6–9 $`\mu `$m and 11–14 $`\mu `$m. Between 16 and 40 $`\mu `$m (Fig. 2a), several emission features characteristic of crystalline silicates (Glaccum 1995, Waters et al. 1996) are recognizable, most prominently at 19, 24, 28, and 33 $`\mu `$m. The longest wavelength region (Fig. 2b) exhibits emission bands at 41, 43, 47.5, and 69 $`\mu `$m, along with a very broad, low-level, emission hump centered near 62 $`\mu `$m, under the \[O $`\mathrm{i}`$\] 63-$`\mu `$m line. We identify the 43- and 62-$`\mu `$m bands with crystalline water ice, as first detected by Omont et al. (1990) in the KAO spectrum of the Frosty Leo nebula.
Following Glaccum (1995) and Waters et al. (1996), we have attempted to identify the emission features seen above the continuum of CPD using (a) data for clinopyroxene, orthopyroxene and 100% forsterite (Koike et al. 1993); and (b) optical constants for both amorphous and crystalline water ice recently measured in the laboratory (Trotta 1996, Schmitt et al. 1998). We chose the 100% pure forsterite because of its weak but definite feature near 69 $`\mu `$m, shown (according to Koike et al. 1993) only by the pure form of this material. In Fig. 2 we distinguish the separate contributions of these four materials, and their total. We modeled the optically thin case, for small spherical grains (0.1 or 1.0 $`\mu `$m radius), so that we could neglect scattering and represent Q<sub>ext</sub> by Q<sub>abs</sub>. By trying to match the relative band strengths, we found plausible temperatures for each component. We made no attempt to constrain these separate temperatures, but three were found to be identical (forsterite, clinopyroxene, and crystalline ice: 65 K) and the orthopyroxene rather similar (90 K), perhaps suggestive of a common physical location, or of core-mantle grains. At these temperatures, the cold silicates produce no measurable emission below the 19-$`\mu `$m band. Note that without combined SWS+LWS coverage, we could not constrain these temperatures.
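The emission model described here amounts, for each optically thin component, to a term proportional to its solid angle times $`Q_{abs}(\lambda )B_\lambda (T)`$. A toy version is sketched below (Python); the Gaussian opacity bumps, their widths and their relative strengths are schematic stand-ins for the laboratory data of Koike et al. (1993) and Schmitt et al. (1998), which are not reproduced here, while the feature wavelengths and grain temperatures are those quoted above.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants

def planck_lambda(wave_um, T):
    lam = wave_um * 1.0e-4                   # cm
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def q_abs_toy(wave_um, centers_um, widths_um):
    """Schematic absorption efficiency: a smooth small-grain fall-off plus
    Gaussian bumps standing in for the measured crystalline resonances."""
    q = 0.01 * (10.0 / wave_um)
    for mu, sig in zip(centers_um, widths_um):
        q = q + 0.05 * np.exp(-0.5 * ((wave_um - mu) / sig) ** 2)
    return q

wave = np.linspace(15.0, 90.0, 400)
components = {
    "forsterite (65 K)":      (65.0, [24.0, 33.0, 69.0], [1.0, 1.5, 1.0]),
    "orthopyroxene (90 K)":   (90.0, [19.0, 28.0],       [1.5, 1.5]),
    "crystalline ice (65 K)": (65.0, [43.0, 62.0],       [2.0, 5.0]),
}
total = np.zeros_like(wave)
for name, (T, centers, widths) in components.items():
    total = total + q_abs_toy(wave, centers, widths) * planck_lambda(wave, T)
# 'total' is proportional to the summed feature spectrum (arbitrary normalisation).
print("toy feature spectrum peaks near %.1f micron" % wave[np.argmax(total)])
```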
Our dust modeling indicates that forsterite causes the 24- and 33-$`\mu `$m bands, and the weak “wing” at 36 $`\mu `$m. To match the 28-$`\mu `$m band requires crystalline silicates such as orthopyroxene, which also contributes the 19-$`\mu `$m bending mode. Crystalline ice produces features near 43- and 62-$`\mu `$m, due to the transverse optical and longitudinal acoustic vibrational branches respectively. As noted by Omont et al. (1990), the 62-$`\mu `$m band is not shown by amorphous ice (see the laboratory spectra of Smith et al. 1994), so the observation of this band demands the presence of crystalline ice. The prominent feature near 41 $`\mu `$m is probably dominated by clinopyroxene and this material also contributes a broad emission feature centered near 66-$`\mu `$m that merges with the longer wavelength ice band.
The sum of our separate component emissions provides only a qualitative match to CPD’s spectrum (Fig. 2), but is surely indicative of the circumstellar materials present, and confirms the striking presence of oxygen-rich materials around a carbon-rich PN. Typically, interband and wing emissions associated with the laboratory features in Fig. 2a,b contribute $`5\%`$ of the peak emission. Our lower envelope has not removed such a large continuum so it appears from Fig. 2 that laboratory “crystalline” materials yield features much too broad compared with CPD’s features. Before drawing this conclusion we would prefer to incorporate these materials into a physically realistic radiative transfer code, including grain geometry and secondary aspects like grain mantle structure and porosity. Note that none of the materials we used to fit the spectrum has a feature matching the one observed at 47.5 $`\mu `$m (Fig. 2b).
## 3 Discussion
Using current estimates for the luminosity and distance of CPD, we can deduce the angular extent of each of the emitting dust components around it. Adopting a distance of 1.53 kpc (DMBS97; De Marco & Crowther 1998), implies that, if they emit as equilibrium blackbodies, the 470 K and 135 K grains we use to represent our dust continuum must lie 36 AU (0.02 arcsec) and 400 AU (0.26 arcsec) from the star, respectively. We have derived the distance from the star of the crystalline silicates using optical constants in the UV and visible by Scott & Duley (1996). These materials are transparent in portions of the short wavelength spectrum but highly absorbing near the UV peak of the energy distribution of the WCL nucleus of this nebula. A realistic energy balance between the UV grain absorption at this peak (we used the NLTE model atmosphere of De Marco & Crowther 1998) and IR re-emission indicates that the 65–90 K oxygen-rich constituents must lie 1000 AU (0.005 pc) away, i.e., 0.60 arcsec. Thus even the warmest silicates and ices lie near the periphery of the ionized nebula, based on the HST images (DMBS97) which show all the nebular H$`\beta `$ emission to be confined to 1.6$`\times `$2.1 arcsec. These dimensions, and the normalization factors involved when we fit the emission spectra of grains to the excess emission in CPD, also yield rough estimates of the mass in the crystalline components. We obtain about 1.6, 1.3, 0.3, and 0.6 $`\times 10^4M_{}`$ for forsterite, clinopyroxene, orthopyroxene, and water ice, respectively, independent of grain size, for grain radii $`<3\mu `$m.
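The radii quoted for the 470 K and 135 K grains follow from radiative equilibrium for a blackbody grain, $`r=[L/(16\pi \sigma T^4)]^{1/2}`$. A minimal numerical check is given below (Python); the stellar luminosity of $`10^4L_{}`$ is an assumed round number, not a value stated in this section, but with it the quoted distances and angular offsets are recovered to within roughly ten per cent.

```python
import numpy as np

SIGMA_SB = 5.670e-5          # erg cm^-2 s^-1 K^-4
L_SUN    = 3.828e33          # erg s^-1
AU       = 1.496e13          # cm

def blackbody_grain_distance_cm(L_star, T_grain):
    """Distance at which a blackbody grain equilibrates at T_grain."""
    return np.sqrt(L_star / (16.0 * np.pi * SIGMA_SB * T_grain**4))

L_cpd = 1.0e4 * L_SUN        # assumed luminosity (order of magnitude only)
d_pc  = 1530.0               # adopted distance of 1.53 kpc (DMBS97)

for T in (470.0, 135.0):
    r_au = blackbody_grain_distance_cm(L_cpd, T) / AU
    theta_arcsec = r_au / d_pc            # 1 AU at 1 pc subtends 1 arcsec
    print(f"T = {T:3.0f} K: r ~ {r_au:4.0f} AU ~ {theta_arcsec:.3f} arcsec")
```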
Roche, Allen & Bailey (1986) found the 3.3-$`\mu `$m PAH emission to have an angular size of 1.3 arcsec, i.e., within the ionized boundary of the nebula, while our postulated 470 K and 135 K blackbody grains lie well within the carbon-rich nebular ionized gas, suggesting that they too are likely to be carbon-rich (though we cannot prove this). The very hot dust responsible for the excess continuum below 5 $`\mu `$m must be only a few AU from the star and may be condensing in the wind of the WC10 central star, like the dust found to form in the outflows from Population-I WC9 stars (e.g., Cohen, Barlow & Kuhi 1975). Due to the absence of hydrogen in the wind, PAHs cannot be present there; the condensation of hydrogen-deficient soots must occur by pathways that bypass acetylenic chains and emphasize grain formation via fullerenes and “curling” of graphite sheets (e.g., Curl and Smalley 1988). Once such grains later penetrate into the H-rich nebula, partially hydrogenated PAHs may be created. Some nebular PAHs may also have been created previously in a C-rich AGB outflow, before it became completely hydrogen-depleted.
CPD is not the first PN to have the signatures of both C-rich and O-rich material recognized. IRAS 07027–7934, found by Menzies & Wolstencroft (1990) to be a low-excitation planetary nebula with a C-rich \[WCL\] central star (WC10 in the scheme of Crowther et al. 1998), exhibits PAH features in its IRAS LRS spectrum. Yet Zijlstra et al. (1991) discovered it hosts a strong 1612 MHz OH maser, normally associated only with O-rich material. The Type I bipolar PN NGC 6302 exhibits weak 8.7- and 11.3-$`\mu `$m PAH bands (Roche & Aitken 1986), a weak OH maser (Payne, Phillips & Terzian 1988), and silicate features at 19 $`\mu `$m (Barlow 1993) and longer wavelengths (Waters et al. 1996). Its LWS spectrum exhibits prominent crystalline water ice emission bands (Barlow 1998, Lim et al. in preparation). Waters et al. (1998) have detected crystalline silicate emission features in the SWS spectra of the strongly-PAH emitting C-rich objects BD+30<sup>o</sup>3639 and He 2-113 (both PNe with cool WC nuclei).
Amongst hypotheses to explain the simultaneous existence of C- and O-rich particles around C-rich stars, two (Little-Marenin 1986; Willems & de Jong 1986) are of possible relevance to CPD and similar nebulae: (a) a recent thermal pulse has converted an O-rich outflow to one that is C-rich; (b) the silicate grains are in orbit around the system and existed well before the current evolutionary phase.
If a recent thermal pulse converted an O-rich mass loss outflow to a C-rich one, the O-rich grains should be further out and cooler than the C-rich particles, in agreement with the properties of the dust around CPD. The chief objection raised to this scenario, in the context of carbon stars showing warm silicate emission, has been that such a transition should occur only once during the lifetime of a star, so the probability of finding a carbon star with silicate grains in its outflow, still sufficiently close and warm to exhibit a 10-$`\mu `$m feature, should be extremely small. This objection is weakened for extended nebulae, since the cooler nebular particles probe a longer look-back time than do 10-$`\mu `$m emitting silicate grains in a carbon star outflow. The expansion velocity of CPD’s nebula is 30 km s<sup>-1</sup> (DMBS97), so that nebular material now 1000 AU from the star, the deduced location of the O-rich grains, must have been ejected only 160 years ago. Even with a lower ($``$10 km s<sup>-1</sup>) typical AGB outflow velocity, the timescale is small compared to the predicted interval of 6$`\times 10^4`$ yrs between successive thermal pulses for a 0.62 M core (Boothroyd & Sackmann 1988). Thus a sufficiently recent O-rich to C-rich transition by CPD (as well as by He 2-113 and BD+30) appears statistically improbable, unless, as suggested by Waters et al. (1998), such stars are somehow particularly susceptible to a thermal pulse during their immediate post-AGB phase.
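The look-back time invoked here is simple kinematics, the present radius divided by the expansion velocity. A short sketch, using the 1000 AU radius and the two outflow speeds quoted above:

```python
AU, KM, YR = 1.496e13, 1.0e5, 3.156e7         # cm, cm, s

def ejection_age_yr(radius_au, v_kms):
    """Time since ejection for material now at radius_au, expanding at v_kms."""
    return radius_au * AU / (v_kms * KM) / YR

for v in (30.0, 10.0):                        # nebular and typical AGB outflow speeds
    print(f"v = {v:4.0f} km/s : ejected ~{ejection_age_yr(1000.0, v):4.0f} yr ago")

pulse_interval = 6.0e4                        # yr, interval quoted for a 0.62 Msun core
print("fraction of an interpulse interval:",
      round(ejection_age_yr(1000.0, 30.0) / pulse_interval, 4))
```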
Because of the above difficulty, Lloyd-Evans (1990) and Barnbaum et al. (1991) proposed that carbon stars showing silicate emission are binaries containing a disk in which O-rich grains have accumulated during an earlier evolutionary phase, with the extended disk lifetimes allowing silicate emission to persist. For silicate grains to be warm enough to exhibit a 10-$`\mu `$m emission feature, the inner edge of such a disk could be at no more than 4–5 stellar radii from a carbon star (Barnbaum et al.), i.e., 5–6 AU for a 3000 L, 2400 K star. An O-rich dust disk with these parameters cannot be present around CPD, since it shows no trace of 10-$`\mu `$m silicate emission. At a radius of $``$1000 AU, the cool (65–90 K) O-rich grains would not be in a conventional mass-transfer circumstellar disk around one component of a wide binary, although Fabian & Hansen (1979) have suggested a mechanism whereby a wide binary system might focus matter from one component into a helical trajectory. An alternative possibility that would allow the pre-existing grains hypothesis to be retained would be if they resided in a Kuiper belt or inner Oort comet cloud around the star. A radius of 1000 AU is comparable to current estimates for the outer edge of the Kuiper belt around our own Sun (Weissman 1995) – the interaction of cometary nuclei in such a belt with CPD’s mass outflow and ionization front might provide the conditions needed to liberate the small particles ($`<310\mu `$m radius) that are required in order to explain the observed far-infrared silicate and ice bands. The annealing and recrystallization of silicates and ice grains liberated from comets, leading to the required highly ordered structures with correspondingly “sharp” emission features, could result from the sudden increase in the UV photon flux from CPD during its post-AGB evolution. Difficulties for the comet-cloud hypothesis include (a) the relatively large mass ($``$130 Earth masses) of crystalline silicates derived for CPD, high compared to current, though still rather uncertain, estimates for the mass of the Solar System Kuiper Belt and Oort Cloud. However, the more massive progenitor stars of current PNe could have appreciably higher mass comet systems; (b) the apparent correlation between the presence of crystalline silicate grains around PNe and the presence of both PAHs and a WCL central star seems to implicate a chemistry change between an O-rich and C-rich AGB outflow as the cause. However, since this correlation is still based on a small number of objects, the results for a larger sample of nebulae should help clarify whether either hypothesis fits better.
We thank the referee, Dr. R. Waters, for his useful comments. MC thanks NASA for support under grant NAS5-4884 to UC Berkeley.
# The Accretion of Lyman Alpha Clouds onto Gas–Rich Protogalaxies; A Scenario for the Formation of Globular Star Clusters
## 1. Introduction
Globular clusters (GCs) are at once objects of interest to students of the Galaxy and to cosmologists — to the former, as probes of galaxy formation and as dynamically pure objects of curiosity, and to the latter because of their great age, and for what they may tell us about conditions in the early Universe. Attempts to explain them have included models based on the accretion of protogalactic clumps (Searle & Zinn 1978; Searle 1980), two–phase collapse (Fall & Rees 1985; Murray et al. 1993), and cloud collisions (Murray & Lin 1989; 1993; Harris & Pudritz 1991; McLaughlin & Pudritz 1996). In this paper I suggest a dissipative scenario in which Lyman $`\alpha `$ clouds are viewed as the progenitors of the GCs.
Studies of high–redshift Lyman $`\alpha `$ clouds (e.g., Lu, Wolfe, & Turnshek 1991; Bechtold 1994) show that the high line density of clouds at $`z3`$ decreases rapidly with time. A major constituent of the Universe at high–redshift, these clouds are not likely to have disappeared without a trace. Binney (1976) first suggested that the earliest star formation in a collapsing protogalaxy would be within transient sheets of compressed gas generated by the collision of infalling gas. Much of the halo may have been created in this way. But the formation of GCs requires a more extraordinary variant of this accretion process. If clouds are centrally condensed (e.g., dark matter (DM) held), this would help them to survive the infall intact, possibly leading to a super–concentrated burst of star formation, and hence possibly a GC. The proposed GC formation scenario utilizes the kinetic energy of infall to compress clouds to densities comparable to the central regions of GCs.
Below, in referring to galaxian quantities and galactocentric distances, I use upper–case letters, and for cloud quantities, lower–case letters. When this convention isn’t applicable, I will use subscripts.
## 2. Anecdotal Evidence
Zaritsky (1995) has shown that there are abundance gradient anomalies, and asymmetries in the distribution of HI at large galactocentric radii in $`1/3`$ to $`1/2`$ of nearby spiral galaxies. This was cautiously interpreted as being the result of the accretion of low metallicity HI clouds within the last few $`\mathrm{Gyr}`$. More recently a measure of lopsidedness and anisotropy has been developed from the Fourier analysis of the surface brightness distributions of disk galaxies (Zaritsky & Rix 1997; Rudnick & Rix 1998; Jiang & Binney 1998) which has been found to correlate well with the locations of enhanced star formation and H I anomalies. These findings support the conclusions of Zaritsky’s (1995) earlier work.
A source for these apparent accretion events may have been found in a population of low redshift HI clouds, discovered in Hubble Space Telescope (HST) spectra of low redshift quasar and active galaxy spectra (Bahcall et al. 1991; Morris et al. 1991; Morris et al. 1993, Lanzetta et al. 1995; Stocke et al. 1995; Shull, Stocke & Penton 1996; Tripp, Lu, & Savage 1998), and at redshifts out to $`z0.6`$ (Chen et al. 1998). Many of these clouds appear to be clustered about luminous ($`M_V19`$) field galaxies. Recently, Tripp et al. (1998), adding data from two QSO sightlines, have summarized recent developments, showing that of those clouds which are found within $`2h_{75}^1\mathrm{Mpc}`$ of a major galaxy, most have a projected distance of less than 700 kpc. However, nearest–galaxy distances are seen to be greater than $`2h_{75}^1\mathrm{Mpc}`$ for as many as $`40\%`$ of the clouds, prompting Tripp et al. (1998) to refer to this population as “void clouds”.
In a study of high velocity clouds (HVCs) in the Local Group, Blitz et al. (1998) note that the internal velocity dispersions of remote HVCs are strongly peaked, with $`\sigma 20\pm 6.5\mathrm{km}\mathrm{s}^1`$, which argues for a generally homogeneous population that is only mildly subgalactic in its virial temperature. Their dust–to–gas ratios are at least a factor of three below that of normal Galactic clouds, and heavy–element abundances well below solar values. Wakker et al. (1998) have found that the “C” cloud, the closest HVC, has a metallicity, $`Z=0.07\pm 0.02Z_{}`$, well below what would be expected for a Galactic fountain or tidal tail. I therefore assume that these apparently infalling clouds are indeed representatives of the extragalactic Lyman $`\alpha `$ population.
It therefore seems likely that the more clustered cloud population is a major source of the impactors hypothesized by Zaritsky (1995), and others. They apparently share the same physical region occupied by dwarf galaxies that cluster about dominant field galaxies. If clouds share the kinematic characteristics of dwarfs as well, then we can expect that clouds will cluster about their primary within velocities of $`400`$ km/s (Zaritsky et al. 1997, and references therein). One reason why this might be expected is if the dwarf galaxies, and the clouds, were both representative of a single spectrum of systems that inhabit the neighborhood of giant field galaxies.
It is reasonable to propose that if there is a continuity between the nature of low, and high–redshift Lyman alpha absorption systems, then the large population of HI clouds at high–redshift may likewise have been loosely clustered about protogalaxies, and thus have been subject to accretion to the more dominant gravitational potential in their midst. At modest redshifts ($`z0.5`$), Chen et al. (1998) has found that clouds of Ly$`\alpha `$ equivalent width $`W0.3\mathrm{\AA }`$ lie at a characteristic distance of $`160h^1\mathrm{kpc}`$ ($`hH_o/100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$), consistent with more local data (Lanzetta et al. 1995; Tripp et al. 1998). It does not seem reasonable that these clouds are rotationally supported, for the velocity dispersion of galaxies within $`5h^1\mathrm{Mpc}`$ of the Local Group is on the order of only $`5060\mathrm{km}\mathrm{s}^1`$ (Giraud 1986; Schlegel et al. 1994), so that clouds of extragalactic origin with tangential velocities of $`200\mathrm{km}\mathrm{s}^1`$ would seem to be highly unlikely. For clouds not rotationally supported, however, the infall time is only $`1/4`$ that of the look–back time to $`z0.5`$. Therefore I conclude that these clouds, and those at higher redshifts, must be being accreted to the galaxies about which they cluster, and further, the distribution must be being replenished to some significant degree since roughly the same distributions maintain at $`z0.6`$ as locally, as we have seen.
At redshifts in the range of 2 to 3, Fernandez-Soto et al. (1996) have shown that there is strong clustering within $`250\mathrm{km}\mathrm{s}^1`$ among clouds that have associated metal lines, while in the range, $`2.7z3.7`$, Chernormordik (1995) has found weak cloud clustering within velocity scales of $`150\mathrm{km}\mathrm{s}^1`$ for clouds with columns $`\mathrm{N}10^{14}\mathrm{cm}^2`$. Using cross correlations of absorption systems in double and group quasars at redshifts of $`2`$, Crotts & Fang (1989) found that, Lyman $`\alpha `$ systems with $`Wo0.4\mathrm{\AA }`$ cluster on scales as large as $`0.7(1.2)h^1\mathrm{Mpc}`$ for $`\mathrm{q}_{}=0.5(0.0)`$, though there is an apparent significant variation with environment.
These findings are broadly consistent with the physical association of clouds with galaxies seen at low redshift, and we may therefore postulate that high–redshift clouds may accrete to their more massive neighbors on time–scales of $`1^+\mathrm{Gyr}`$, their dissipated remains precipitating bursts of star formation.
### 2.1. The Density of GCs and Clouds
If Lyman $`\alpha `$ clouds are to cause the formation of GCs at high–redshift, then the comoving density of Lyman alpha clouds at high–redshift should be comparable to the density of GCs locally. The density of GCs can be calculated from the luminosity function and the luminosity–weighted specific globular cluster frequency, $`\mathrm{S}_\mathrm{N}\equiv N_t/L_{15}`$ (Harris & van den Bergh 1981), where $`N_t`$ is the number of GCs in the galaxy, and $`L_{15}`$ is the luminosity in units of a galaxy with $`M_V=-15`$. The number density of GCs is given by the integral of the median specific GC frequency times the luminosity–weighted Schechter (1976) function. Given the parameterization, $`X=L/L^{}`$, we find the comoving density of GCs is,
$$n_{gc}=S_NL_{15}^{}\int X\mathrm{\Phi }(X)\,dX\simeq S_NL_{15}^{}\mathrm{\Phi }^{},$$
(1)
where the result follows from setting $`\alpha =-1`$. I make the following substitutions: For a local sample of galaxies (central–cluster ellipticals excluded), $`S_N\approx 4`$ (Harris 1991), $`L_{15}^{}\approx 100`$ if $`M_V^{}=-20.0`$, and $`\mathrm{\Phi }^{}\approx 0.014h^3\mathrm{Mpc}^{-3}`$ (Loveday et al. 1992). Thus,
$$n_{gc}\approx 5.5\mathrm{h}^3\mathrm{Mpc}^{-3}.$$
(2)
For clouds at high–redshift, the attributed comoving density of clouds of radius $`r_{cl}`$, and line density $`dN/dz`$ is, $`n_{cl}=\frac{dN}{dz}\left(\frac{dz/dR}{\pi r_{cl}^2(1+z)^3}\right)`$. I assume a low $`\mathrm{\Omega }`$ FRW cosmology ($`\mathrm{\Lambda }=0`$), for which $`dz/dR\approx \frac{H_0}{c}(1+z)^2`$, where $`H_0`$ is the Hubble constant. This yields,
$$n_{cl}\approx \frac{dN/dz}{\pi R_or_{cl}^2(1+z)},$$
(3)
where $`R_o=c/H_o`$. A closed model doubles the expected density when $`z=3`$.
We must decide what fiducial value to substitute for $`r_{cl}`$. A numerical simulation of high–redshift self–shielded primordial clouds, in which cold, isothermally distributed DM comprises $`90\%`$ of cloud mass (Manning 1992), showed that the clouds responsible for most absorption lines were large, just short of the size at which thermal instability from self–shielding would occur in the central regions. These clouds were found to have H I columns of $`\mathrm{Log}N=14(13)`$ at projected radii of $`20(45)h^{-1}\mathrm{kpc}`$, core radii on the order of $`2\mathrm{kpc}`$, and core baryonic masses of 1–4$`\times 10^6\mathrm{M}_{}`$. They have idealized thermally broadened velocity dispersions in the range of $`\sigma \approx 17`$–$`22\mathrm{km}\mathrm{s}^{-1}`$. Meanwhile, Shull et al. (1996) have estimated the radii of local clouds to be $`\sim 100\mathrm{kpc}`$ at columns of $`N\approx 10^{13}\mathrm{cm}^{-2}`$, and Blitz et al. (1998) have found the upper limit cloud radius, constrained by the tidal field of the Local Group, to be $`\sim 25`$ kpc at columns of $`N\approx 10^{19}\mathrm{cm}^{-2}`$. While these figures may make the simulated cloud sizes at high–redshift seem low, the 50–fold drop in the far–UV metagalactic flux at the Lyman limit from its value at $`z=2.5`$ (Haardt & Madau 1996) will help to account for this apparent disparity in absorption cross section. In a study of the lensed quasar $`Q2345+007`$A,B, Foltz et al. (1984) have placed a lower limit on the characteristic $`diameters`$ of clouds at $`z=1.95`$ of 5–25 kpc (the uncertainty stems from the unknown redshift of the lens). These values simply reflect the range of possible distances between beams. However, a Monte Carlo analysis of the correlated absorption systems in the spectra of Q$`1343+2640`$A, B (Dinshaw et al. 1994), which has a similar ratio of “hits” to “misses” (h/m $`\approx 2`$), found an inferred cloud diameter 2.5 times that of the median beam separation. If this correction is applied to the Foltz et al. (1984) data, the cloud range in $`radii`$ becomes 7.5–62.5 $`h^{-1}`$ kpc. However, recent work correlating absorption systems of double and group quasars (e.g., Fang et al. 1996; Crotts & Fang 1998) find much larger values (typically few$`\times `$100s of kpc), but it is not clear that these are large individual clouds rather than the larger–scale clustering of much smaller individual clouds that we have been discussing, which could be expected to give “cloud diameters” of from 300–500 $`h^{-1}`$ kpc, depending on the equivalent widths probed and the method of analysis – consistent with their results.
For high redshift clouds ($`z23`$) I adopt a fiducial radius of 25 $`h^1\mathrm{kpc}`$ for individual H I clouds at columns of $`\mathrm{Log}N14`$. A line density of $`d\mathrm{N}/d\mathrm{z}100`$ at a redshift of $`z3`$ is assumed, based on data normalized to an equivalent width limit of $`W_o=0.24\mathrm{\AA }`$ (Weymann et al. 1998), which, for $`\mathrm{T}2\times 10^4`$K, is quite close to a column density, $`N=10^{14}\mathrm{cm}^2`$. Substituting these values into Eq. 3, we find,
$$n_{cl}(z=3)\approx 4.2\mathrm{h}^3\mathrm{Mpc}^{-3}.$$
(4)
For $`\mathrm{\Omega }=1`$, $`n_{cl}=8.4\mathrm{h}^3\mathrm{Mpc}^{-3}`$. Comparison of Eqs. 2 and 4 shows $`n_{gc}\approx n_{cl}`$. If there is a $`spectrum`$ of cloud sizes (I find the largest dominated the absorption line–density), then since cloud number density, $`n_{cl}\propto r_{cl}^{-2}`$ (Eq. 3), the total number density of clouds may be significantly larger, which may help the numbers to look better for open cosmologies. Also, if replenishment is occurring at high–redshift by the condensation of clouds in voids and subsequent movement toward neighboring galaxies, then the total number of potential impactors could be substantially greater than estimated above.
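For convenience, the two density estimates of Eqs. (1)–(4) can be reproduced with a few lines (Python, low-$`\mathrm{\Omega }`$ case; lengths are kept in $`h^{-1}`$ units so that the densities emerge in h^3 Mpc^-3):

```python
import numpy as np

# Globular clusters: n_gc ~ S_N * L15* * Phi*   (Eqs. 1-2, with alpha = -1)
S_N, L15_star, Phi_star = 4.0, 100.0, 0.014    # Phi* in h^3 Mpc^-3
n_gc = S_N * L15_star * Phi_star
print(f"n_gc ~ {n_gc:.1f} h^3 Mpc^-3")         # ~5.6, cf. Eq. (2)

# Lyman-alpha clouds: n_cl ~ (dN/dz) / (pi R_0 r_cl^2 (1+z))   (Eq. 3)
dN_dz, z = 100.0, 3.0
R_0  = 3.0e3           # c/H_0 in h^-1 Mpc
r_cl = 25.0e-3         # fiducial cloud radius in h^-1 Mpc
n_cl = dN_dz / (np.pi * R_0 * r_cl**2 * (1.0 + z))
print(f"n_cl ~ {n_cl:.1f} h^3 Mpc^-3 (doubled if Omega = 1)")   # ~4.2, cf. Eq. (4)
```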
## 3. The Physical Requirements for Cloud Survival
For a modest sized protogalaxy, the infall velocities of an intergalactic cloud can be expected to be in excess of 60 km/sec. In order to form a cluster, therefore, some small fraction of the plunging cloud must survive the supersonic shock in a compact form. Numerical simulations of homogeneous clouds subjected to interstellar shocks have found that clouds are destroyed on time scales of a sound crossing time (e.g., Klein, McKee & Colella 1994; Murakami & Ikeuchi 1993). However, numerical studies of the survival of intergalactic, cold dark matter–held, “mini–halo” clouds (Rees 1986) subjected to supersonic flows (Murakami & Ikeuchi 1994) have shown that the cores of clouds confined by a dark halo may survive extended periods of exposure to supersonic wind as long as the central cloud density exceeds that of the ambient medium through which it passes. In the context of accretion to a galaxy, this will require that the central density of the cloud should increase in time in order that it may survive the increasing densities it will encounter as it falls inward. This in turn will require cooling. For a stable compression process, therefore, the cooling time scale, $`\tau _c`$, must remain smaller than the time–scale for the increase in the ambient protogalaxy gas density, $`\tau _d`$, at each stage of the cloud’s journey. The timescale for change of density is expressible by the equation,
$$\tau _d=\left(\frac{1}{\rho }\frac{d\rho }{dt}\right)^{-1}.$$
(5)
I presume, for the sake of simplicity, that the gas and the DM are distributed in an isothermal profile with density law, $`\rho =𝒦/(4\pi R^2)`$, where $`R`$ is the galactocentric distance, and $`𝒦`$ is a constant with units of mass per unit length. $`𝒦`$ will be referred to as the system mass–distribution constant. For an $`L^{}`$ galaxy, $`𝒦=𝒦^{}\approx 1.14\times 10^7\mathrm{M}_{}\mathrm{pc}^{-1}`$. I also presume that the distribution is truncated at a distance $`R_t`$. Then for $`R<R_t`$, $`M(R)=𝒦R`$, and for $`R>R_t`$, $`M(R)=𝒦R_t`$. Manipulating Eq. 5, we find,
$$\tau _d=\frac{R}{2v_i},$$
(6)
where $`v_i`$ is the infall velocity. With the assumed mass distribution, we can calculate the potential:
$$\mathrm{\Phi }(R<R_t)=-G𝒦\left(ln\left(\frac{R_t}{R}\right)+\left(1-\frac{R_t}{R_0}\right)\right),$$
(7)
where $`R_0`$ is the location, presumed in this case to be $`2`$ Mpc, of the turn–around radius, and at which the potential is set to zero. $`R_t`$ is assumed to be 500 kpc for an $`L^{}`$–type object. Conservation of energy requires that the infall velocity obey $`v_i(R<R_t)\le \sqrt{-2\mathrm{\Phi }(R<R_t)}`$, the inequality stemming from dissipative effects. Substituting this into Eq. 6, and using the value, $`𝒦^{}`$ (above), we find that the timescale for change of density is,
$$\tau _d\approx 1.38\times 10^8\left(\frac{R_{kpc}}{160}\right)\mathrm{yr},$$
(8)
where $`R_{kpc}`$ is the galactocentric radius in kiloparsecs. At a distance of $`15`$ kpc, $`\tau _d\approx 1.3\times 10^7`$ yr.
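A numerical sketch of Eqs. (5)–(8) for the adopted truncated isothermal halo ($`𝒦=𝒦^{}`$, $`R_t=500`$ kpc, $`R_0=2`$ Mpc) is given below (Python). Because $`v_i`$ varies with $`R`$, the exact numbers differ by a few tens of per cent from the linear-in-$`R`$ scaling written in Eq. (8), but the value near 15 kpc and the overall order of magnitude agree.

```python
import numpy as np

G_PC   = 4.301e-3        # G in pc (km/s)^2 / Msun
K_STAR = 1.14e7          # Msun/pc, mass-distribution constant of an L* galaxy
R_T, R_0 = 5.0e5, 2.0e6  # truncation and turn-around radii in pc
PC_KM, YR_S = 3.086e13, 3.156e7

def v_infall_kms(R_pc):
    """Maximum infall speed sqrt(-2 Phi) for the truncated isothermal potential, Eq. (7)."""
    phi = -G_PC * K_STAR * (np.log(R_T / R_pc) + (1.0 - R_T / R_0))   # (km/s)^2
    return np.sqrt(-2.0 * phi)

def tau_d_yr(R_pc):
    """Density-change timescale tau_d = R / (2 v_i), Eq. (6)."""
    return R_pc * PC_KM / (2.0 * v_infall_kms(R_pc)) / YR_S

for R_kpc in (160.0, 15.0):
    print(f"R = {R_kpc:5.0f} kpc: v_i ~ {v_infall_kms(R_kpc * 1e3):4.0f} km/s, "
          f"tau_d ~ {tau_d_yr(R_kpc * 1e3):.1e} yr")
```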
Dissipation is expected to begin in earnest at an estimated galactocentric distance of $`160h^{-1}\mathrm{kpc}`$, for that is the projected distance at which lines of sight encounter EWs of $`0.3\AA `$, or columns of $`10^{14}\mathrm{cm}^{-2}`$ (Chen et al. 1998). It will be important that the rate of cooling keep pace with the time–scale for compaction. I assume that dissipation balances the rate of change of the cloud potential energy so that the velocity remains at a constant $`50\mathrm{km}\mathrm{s}^{-1}`$. Thus we have,
$$v\frac{GM}{R^2}\int _V\rho _{cl}\,dV=n^2kT\beta _B\int _VdV,$$
(9)
where we will insert values for an $`L^{}`$ galaxy, and cloud baryonic density, $`\rho _{cl}=n\mu m_H`$. The case B recombination cooling coefficient may be expressed as, $`\beta _B=9.17\times 10^{-14}(T/20000)^{0.5}\mathrm{cm}^3\mathrm{s}^{-1}`$. From this relation and Eq. 9, we derive,
$$T=\frac{1.127\times 10^4}{n^{2/3}}\left(\frac{15}{R_{kpc}}\right)\mathrm{K}.$$
(10)
The cooling time–scale of the cloud can be defined,
$$\tau _c=\frac{\frac{3}{2}nkT}{\mathrm{\Lambda }}.$$
(11)
With $`\mathrm{\Lambda }=n^2kT\beta _B`$, we find
$$\tau _c\approx \left(\frac{2.313\times 10^{15}}{nT^{1/2}}\right)\mathrm{s}.$$
(12)
Combining Eqs. 10 and 12 we find,
$$\tau _c=\frac{6.9\times 10^5}{n^{2/3}}\left(\frac{R_{kpc}}{15}\right)^{\frac{1}{2}}\mathrm{yr}.$$
(13)
The pressures behind the head of the bow shock will result in baryon densities $`\rho _{cl}\sim 4\mathcal{M}^2\rho _{gal}(R)(T_{gal}/T_{cl})`$ during the compressional stage, where $`\mathcal{M}`$ is the Mach number, expected to range up to $`\sim 10`$. Assuming an $`L^{}`$ galaxy, at $`R\approx 15\mathrm{kpc}`$, and assuming $`90\%`$ DM, then the galactic baryon number density is, $`n_{gal}\sim 10^{-2}`$. Thus, we may expect $`n\gtrsim 1`$, and should eventually reach densities of order $`10^4\mathrm{cm}^{-3}`$. This will assure that $`\tau _c`$ is much less than $`\tau _d`$.
For galactocentric radii of interest, the cooling time scale within the core of the cloud appears to be comfortably less than $`\tau _d`$. During the final stages of accretion, another term must be added to Eq. 9 to represent the deceleration of the cloud. This will compensate for the decline of the first term, leading to only modest changes in the derived cooling time–scale. It must be emphasized that there *should* be a comfortable gap between $`\tau _d`$ and $`\tau _c`$, for we don’t expect real galaxies to have entirely smooth density profiles.
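The comparison can be made explicit with Eqs. (10), (12) and (13). The short sketch below (Python) evaluates the cooling time at $`R\approx 15`$ kpc for a few densities and compares it with the $`\tau _d`$ quoted above; for $`n\gtrsim 1\mathrm{cm}^{-3}`$ the cooling time is indeed one to two orders of magnitude shorter.

```python
YR_S = 3.156e7

def temp_K(n_cm3, R_kpc):
    """Temperature set by the balance of Eq. (9), as written in Eq. (10)."""
    return 1.127e4 / n_cm3**(2.0 / 3.0) * (15.0 / R_kpc)

def tau_cool_yr(n_cm3, R_kpc):
    """Recombination cooling time, Eq. (12), evaluated with Eq. (10)."""
    T = temp_K(n_cm3, R_kpc)
    return 2.313e15 / (n_cm3 * T**0.5) / YR_S

tau_d_15kpc = 1.3e7          # yr, from Eq. (8) at R ~ 15 kpc
for n in (1.0, 1.0e2, 1.0e4):
    print(f"n = {n:7.0f} cm^-3 : T ~ {temp_K(n, 15.0):7.0f} K, "
          f"tau_c ~ {tau_cool_yr(n, 15.0):9.2e} yr  (tau_d ~ {tau_d_15kpc:.1e} yr)")
```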
## 4. The Mechanism for Cloud Compaction
In order to satisfy the requirements that these events produce globular clusters, the cloud must be compressed to densities comparable to, or in excess of that found in globular cluster cores. Powered by the kinetic energy of the cloud, the combination of pressurization and radiative cooling is capable of compacting the cloud. In this section I discuss three important facets of the GC formation scenario.
### 4.1. Dynamical Features of the Interaction
Let us now suppose that a centrally condensed cloud (core $`3`$–$`5\mathrm{kpc}`$ diameter) is drawn into the potential well of a gas–rich protogalaxy. When the cloud begins to encounter the dissipated gaseous halo at velocities of $`\sim `$Mach 10, a bow shock is established, forming a shocked shell about the cloud. In the frame of the shock, galactic gas is quickly decelerated by the shock within this shell, imparting a momentum to the infalling cloud. Only gas of density comparable to that of the wind is stripped — a result of the small mean free path in relation to cloud size — while the dense core is found to survive in excess of $`3\mathrm{Gyr}`$ (Murakami & Ikeuchi 1994). Accordingly, we should expect only the core to survive intact, carrying $`\mathrm{few}\times 10^6\mathrm{M}_{}`$ of baryons, as noted in §2.1, in the range of $`10^{-2}`$ to $`5\times 10^{-4}`$ of the total cloud baryonic mass. The shocked and stripped remainder, much of which would be caught up in eddies, may participate in star formation in the manner suggested by Binney (1976), contributing to the halo population at relatively low efficiencies. The remnants of this would settle toward the disk.
While the DM component is thought to play an important role in containing the cloud in the IGM (Blitz et al. 1998; Shull et al. 1996), once the ram pressure, integrated over the surface of the cloud, exceeds the maximum force between the baryonic cloud and the DM cloud, the DM halo would be lost. This is expected to occur in the early stages of intense dissipation, at distances less than $`160h^1\mathrm{kpc}`$ (see §3). When the deceleration on the cloud begins to dwarf its self–gravity, the cloud will shift and compress to approximate an exponential atmosphere with a scale–height determined by the deceleration rate, which in turn will heat the cloud.
Within this environment, two forces act upon the cloud which tend to induce vorticity, a shift of the higher density gas within the cloud toward the shock front, displacing lower density gas toward the side, and the shear force due to the stripping flow inside the bow shock. While by itself, molecular viscosity is not sufficient to appreciably affect the vorticity, a turbulent viscosity will be produced. The shock will stand off from the cloud due to the high pressures immediately behind the shock, but the dense, cool cloud immediately “downstream” of this region will be subject to Rayleigh–Taylor instabilities since cloud deceleration will result in denser material overlying less dense material. As these perturbations grow they will become subject to the shear flow closer to the shock, transforming them into Kelvin-Helmholtz (K–H) instabilities. These instabilities would continue to grow along the “fetch” of the interface of galactic and cloud gas, causing a turbulent viscosity and an increased transferral of angular momentum to the cloud. The combined effect of these forces would be to induce vorticity into a toroidal volume whose axis is aligned with the vector of the cloud’s motion. During the stage of intense dissipation, it is better to compare it to an inverted convection cell than to a vortex ring, for unlike the vortex ring, the cloud is contained from without by the shock and by the deceleration, rather than from within by a vacuum.
During the final stage of intense dissipation, the increased deceleration means that gas which has been stripped need not be irrevocably lost to the cloud. Two factors are responsible: 1) stripped material will be pushed toward the central axis of motion at $``$sonic speeds because of the near vacuum behind the cloud, 2) the deceleration of the cloud will result in a relative (negative) acceleration between the stripped material and the shock front. The re–accretion of this gas to the cloud would reinforce the pattern of motion established in the early adjustment of the cloud. It is an important feature of this scenario that the stripped, and eventually re-incorporated material will contain metals present in the protogalaxy, which have been mixed with cloud gas by K–H instabilities. The well–established observational constraint that the metallicity in almost all GCs is uniform to within 0.05 dex (Suntzeff 1993), requires that turbulent mixing should occur well before star formation begins (Murray and Lin 1993). In this scenario, the enhancement of the metallicity, and the mixing, occur as a result of the natural dissipative process of accretion outlined above, comfortably preceding fragmentation (see §4.4).
### 4.2. The Stability of the Plunging Cloud
The modeling of intergalactic clouds held by DM (see §2.1) indicates they may be centrally condensed. Furthermore, centrally condensed clouds have been found to survive shocks (Murakami & Ikeuchi 1994). Therefore, if this model of Lyman-$`\alpha `$ clouds is correct, they should be expected to survive the gentle onset of ram pressure. The vorticity induced by the shear flow inside the shock does not appear to be a destabilizing influence, because the cloud is contained by the shock shell and by deceleration, as described above. However, when the cloud decelerates to subsonic velocities, it will evolve toward an ordinary, albeit large, vortex ring. Here, the orderliness of the motion will be critical in determining the length of time the cell should survive. Studies of vortex rings at relatively high Reynolds numbers show that sinusoidal bending modes may grow, and eventually destroy the ring (Widnall & Sullivan 1974). However, these modes, whose number is small when vorticity is widely distributed, would be damped by the shock which wraps around the cell during the period of intense dissipation, but may grow after it is no longer supersonic. At the very minimum, the cell will survive for an eddy turn–over time, given that the ordered motion will produce a vacuum in a ring which will cause the self–induced motion characteristic of vortex rings. For a cell of dimension $`\sim 10`$ pc, the eddy turn–over time is,
$$\tau _e\approx 7.6\times 10^5\left(\frac{c_s}{v}\right)\mathrm{yr},$$
(14)
where $`v`$ is the peak tangential velocity of the vortex cell. The self-inducing motion of the vortex ring is likely to cause it to last many times greater than this. However, accurate estimates of this must await a careful numerical simulation, as energy and time–scale arguments are held hostage to uncertainties in the levels of turbulent viscosity during the stage of intense dissipation.
### 4.3. Energetic Requirements for Cloud Compression
If radiative cooling can keep pace with heating, we may assume isothermal pressurization. If the cloud core has a baryonic mass of $`m_{cl}`$, an initial baryonic density of $`\rho _i`$, and a final density of $`\rho _f`$, the required work for compression is,
$$W=m_{cl}c_s^2ln\left(\frac{\rho _f}{\rho _i}\right).$$
(15)
Values for $`W`$ are insensitive to variations by factors of a few in the initial or final densities. Equating this work to the kinetic energy of the cloud core we find that the required infall velocity $`v_i`$, is given by,
$$v_i=c_s\sqrt{2ln(\rho _f/\rho _i)},$$
(16)
where $`c_s`$ is the sound speed of the cloud core. For a final density equal to that of the median Galactic globular cluster core density, $`10^3\mathrm{M}_{}\mathrm{pc}^{-3}`$ (Djorgovski 1993), a factor of compression in the range of $`10^6`$–$`10^8`$ is expected, and yields a required infall velocity, $`v_i/c_s\approx 5.0`$–$`6.0`$. For a temperature of $`2.0\times 10^4`$ K, the minimum required infall velocity is 60–75 km/s, a velocity attainable even in dwarf systems (see e.g., Wyse & Silk 1985).
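Equations (15)–(16) in numbers: the sketch below (Python) adopts $`c_s\approx 12`$ km/s, which is the sound speed implied by the quoted $`v_i/c_s`$ and velocity values rather than a figure stated explicitly above.

```python
import numpy as np

def required_infall_speed_kms(c_s_kms, compression):
    """Minimum infall speed for isothermal compression by a given factor, Eq. (16)."""
    return c_s_kms * np.sqrt(2.0 * np.log(compression))

c_s = 12.0                                    # km/s, roughly T ~ 2e4 K gas
for factor in (1.0e6, 1.0e8):
    v = required_infall_speed_kms(c_s, factor)
    print(f"compression 10^{np.log10(factor):.0f}: v_i/c_s = {v / c_s:.1f}, "
          f"v_i ~ {v:.0f} km/s")
```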
### 4.4. The Transformation of Cloud to Cluster
The final stage of the transformation is reached when the velocity approaches subsonic levels. During the supersonic period of dissipation we expect that the cloud size will be decreasing – not so much due to stripping, but because of its compaction. When the ram pressure, $`\rho v^2`$, begins to decline, the cell will gradually transform from one contained by the shock shell to one contained by its own vorticity. This will be occasioned by some expansion, and as it expands, the cloud deceleration will increase, leading to a relatively rapid change of state. Though the density of the surrounding medium should by this time be rather large, it is likely — indeed it is imperative — that the density in the central axis of the vortex ring should be orders of magnitude greater. The disappearance of the shock will allow the rapid cooling of cloud gas at the head of the shock, where it is densest. Yet the pressure on this gas is sustained by pressures from all directions: the gas near the axis is subjected to an axially symmetric centrifugal force from the vortex ring; toward the host galaxy there remains the ram pressure of the cloud at near sonic velocities; and finally, pressure is exerted by material which has been stripped, but is re–joining its cloud, now falling with increased velocity due to the increased deceleration.
To assess the plausibility and efficiency of star formation at this point we need to know the collapse time–scale for gas at the expected densities, the transit–time scale for gas in the axis of the vortex, and some notion of the duration of the vortex cell, which is responsible for maintaining the radial pressure along the axis of the cell. The time–scale for cloud collapse is,
$$\tau _{coll}\approx 1/\sqrt{G\rho }\approx 4.7\times 10^5\mathrm{yr},$$
(17)
for the central density cited in §4.3. The time–scale for transit down an axis of $`\sim 10`$ pc is $`\tau _e`$, given by Eq. 14, where, by assumption, $`v<c_s`$. As the cell is slowed, this velocity will fall, in time providing a sure opportunity for gravitational collapse. Note that the cooling time–scale (Eq. 13) for baryon densities of order the central GC densities ($`n_{cl}\approx 4\times 10^4\mathrm{cm}^{-3}`$) is well within these time–scales. The cell itself should be stable for well over an eddy turn–over time–scale (Eq. 14), in keeping with the large durations of more modest–sized vortex rings which may travel distances that are many times their diameters (Widnall & Sullivan 1973) before disruption. A more severe threat to the survival of the vortex cell is condensation of gas and star formation, but by then, we don’t care.
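The three timescales invoked in this argument, collapse (Eq. 17), eddy turn-over (Eq. 14) and cooling at GC core densities (Eq. 13), can be compared directly. A minimal sketch (Python), using the fiducial values quoted above:

```python
import numpy as np

G_CGS, MSUN_G, PC_CM, YR_S = 6.674e-8, 1.989e33, 3.086e18, 3.156e7

rho = 1.0e3 * MSUN_G / PC_CM**3                    # g cm^-3, median GC core density
tau_coll = 1.0 / np.sqrt(G_CGS * rho) / YR_S
print(f"tau_coll ~ {tau_coll:.1e} yr")             # ~5e5 yr, cf. Eq. (17)

c_s_kms, size_pc = 12.85, 10.0
tau_eddy = size_pc * PC_CM / (c_s_kms * 1.0e5) / YR_S
print(f"tau_eddy ~ {tau_eddy:.1e} yr (for v ~ c_s)")   # ~7.6e5 yr, cf. Eq. (14)

n_cl = 4.0e4                                       # cm^-3, central GC baryon density
tau_cool = 6.9e5 / n_cl**(2.0 / 3.0)               # yr, Eq. (13) at R ~ 15 kpc
print(f"tau_cool ~ {tau_cool:.0f} yr")             # a few hundred yr
```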
Star formation rates and efficiencies are now of concern. In a study of 97 normal and star–forming galaxies, Kennicutt (1998) found that the disk–averaged star formation rates fit a Schmidt (1959) law with index $`N=1.4\pm 0.15`$. It was found that,
$$\mathrm{\Sigma }_{SFR}=(2.5\pm 0.7)\times 10^{-4}\left(\frac{\mathrm{\Sigma }_{gas}}{1\mathrm{M}_{}\mathrm{pc}^{-2}}\right)^{1.4\pm 0.15}\mathrm{M}_{}\mathrm{yr}^{-1}\mathrm{kpc}^{-2}.$$
(18)
If we imagine gas with a density equal to peak central GC densities (§4.3), distributed over a 10 pc cube, then the indicative star formation density is $`0.25\mathrm{M}_{}\mathrm{yr}^{-1}\mathrm{pc}^{-2}`$, and would yield $`1.25\times 10^7\mathrm{M}_{}`$ within a plausible time-scale of $`5\times 10^5\mathrm{yr}`$, a mass larger than the $`10^6\mathrm{M}_{}`$ contained in the cube. Star formation efficiency would vary with index $`N-1`$, or, $`\mathrm{\Sigma }_{gas}^{0.4}`$. At projected final cloud densities, this implies star formation efficiencies $`\sim 40`$ times greater than that of a normal disk with a gas surface density of $`1\mathrm{M}_{}\mathrm{pc}^{-2}`$. This would rival that of central starbursts (Kennicutt 1998).
## 5. Implications for Field Galaxy Formation
A model in which galaxies are subjected to periodic accretion of low metallicity clouds is consistent with the “chaotic” scenario of galaxy formation of Searle & Zinn (1978), and Searle (1980), now often referred to as hierarchical structure formation. The GC formation scenario requires a picture in which galaxy growth is a protracted process. That the accretors are probably DM–held, and generally relaxed, seems required by the specific scenario developed here. Shull et al. (1996) estimate that these clouds have a total mass (baryons plus DM) of $`M_{tot}=10^{9.8}\mathrm{M}_{}\mathrm{R}_{100}\mathrm{T}_{4.3}`$, where $`R_{100}`$ is the radius in units of 100 kpc at which the column density of neutral hydrogen is $`10^{13}\mathrm{cm}^2`$ , and $`T`$ is in units of $`10^{4.3}\mathrm{K}`$. This value is consistent with a mass distribution constant, $`𝒦10^5\mathrm{M}_{}\mathrm{pc}^1`$, with a circular velocity $`v_c20\mathrm{km}\mathrm{s}^1`$, as observed by Blitz et al. (1998). Somewhat smaller clouds may also be realistic.
The accretion of such a cloud would produce great turbulence. Might there be low–redshift counterparts of proto–late–type galaxies? High resolution VLA observations of 5 actively star forming blue compact dwarfs (BCDs) reveal kinematically distinct clumps of H I, and turbulent outer H I envelopes (on scales of a few$`\times `$100 pc). This is broadly suggestive of the accretion of H I clouds. That the median gas depletion time scale is on the order of 1 $`\mathrm{Gyr}`$ also argues for extragalactic replenishment. Thus these young galaxies may resemble our Galaxy in its earliest formative stages. It is easily seen that the accretion of one or two hundred clouds of $`10^{9.8}\mathrm{M}_{}`$ can account for the total mass of an $`L^{}`$ galaxy within $`R_{gc}100\mathrm{kpc}`$.
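Two quick consistency checks on the numbers in the last two paragraphs (Python): the circular velocity implied by $`𝒦\approx 10^5\mathrm{M}_{}\mathrm{pc}^{-1}`$, and the number of such clouds needed to assemble an $`L^{}`$ galaxy within $`\sim 100`$ kpc:

```python
import numpy as np

G_PC = 4.301e-3                      # G in pc (km/s)^2 / Msun

K_cloud = 1.0e5                      # Msun/pc, cloud mass-distribution constant
print(f"cloud v_c ~ {np.sqrt(G_PC * K_cloud):.0f} km/s")       # ~21 km/s

M_cloud = 10**9.8                    # Msun, total (baryons + DM) mass per cloud
K_star, R_pc = 1.14e7, 1.0e5         # L* mass-distribution constant; 100 kpc in pc
print(f"clouds needed ~ {K_star * R_pc / M_cloud:.0f}")        # ~180
```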
While the formation of GCs is somewhat incidental to the process of galaxy formation as outlined above, they remain potentially sensitive probes of the state of the Galaxy at its early formative stages. The GC formation scenario suggests that among the requirements for cluster formation are two constraints on the (proto)galaxy – that it must have a large gas column density, and a mass distribution constant large enough to support required circular velocities of $`\approx 40\mathrm{km}\mathrm{s}^{-1}`$, a value somewhat below that characteristic of dwarf spiral galaxies. The former follows by the observation that ram pressure must slow the cloud to subsonic velocities: $`\int \rho _{gal}v^2S(t)\,dt\ge \int m_{cl}(t)\frac{d\mathrm{\Phi }(t)}{dR}\,dt`$, where $`\rho _{gal}`$ is the galaxy baryon density, $`S(t)`$ is the cloud surface area, and stripping will reduce the cloud baryonic mass, $`m_{cl}`$, in time. Alternatively,
$$_{R_o}^R\rho _{gal}(R)v(R)S(R)𝑑R_{R_o}^R\frac{GM(R)m_{cl}(R)}{R^2}𝑑R.$$
A detailed numerical simulation will be required to properly evaluate these integrals, but clearly, if the column of galactic gas, $`\rho (R)\pi r_{cl}^2(R)𝑑R`$ is too small, then the equation cannot be met. The mass–distribution requirement, together with the equation, $`V_{rot}=\sqrt{G𝒦}`$, is essentially that $`𝒦1.2\times 10^6\mathrm{M}_{}\mathrm{pc}^1`$. This threshold for GC formation results from the observation that an infall velocity of $`60\mathrm{km}\mathrm{s}^1`$, which is at the lower end of the range satisfying the energetic requirements for cloud compression, is $`\sqrt{2}`$ times the circular velocity at that distance. Once the galaxy mass is great enough, GCs would be formed. However, in response to an increased specific star formation rate accompanying the formation of the bulge and the disk, the gas column should decrease, signaling an end of the halo and GC formation epoch. During the stage of this retreat, we would expect that, if it relaxed into a thick, slowly rotating disk, then GCs would have a lower probability of being formed if the cloud were plunging normal to the disk, due to the lower gas columns. Thus, the metal–rich “disk” globulars, with their flattened distribution (Armandroff 1993), and the thick disk itself, may be evidence favoring this interpretation.
## 6. Implications for Cosmology
While the redshifts of formation implied by this scenario may strike the reader as quite low, recent findings, largely based on *Hipparcos* data, the inclusion of helium diffusion into the cores of stars, and an improved equation of state, yield new distance determinations based on the main sequence turnoff, which in turn imply that the oldest globular clusters may be much closer to $`12\mathrm{Gyr}`$ old. This is a $`20\%`$ reduction from pre-*Hipparcos* values (for a recent review, see Chaboyer 1998). If we allow that these oldest GCs were formed at $`z=3`$, then given this GC age, we may derive $`H_o`$ as a function of cosmology. Accordingly, we find that if $`\mathrm{\Omega }=1`$, we would require $`H_o=47.7\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$. If $`\mathrm{\Omega }=0`$, $`H_o=61.3\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$. For a flat $`\mathrm{\Lambda }`$ model with $`\lambda =0.7`$, $`H_o=67.8\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$. If the formation epoch instead were at $`z4`$, then the attributed Hubble constant is increased by less than $`6.5\%`$ for all three models.
It has been reported that GCs may have a significant range of ages. Recent estimates are in the range of $`\sim 5\mathrm{Gyr}`$ (Fusi-Pecci et al. 1995; Chaboyer, Demarque & Sarajedini 1996). Using standard astrophysical formulae, I find that the time spanned by the plausible redshift range $`z=1.5`$–$`3`$, the expected range of GC formation redshifts, is 1.79, 2.44 and 2.20 $`\mathrm{Gyr}`$ respectively for the $`\mathrm{\Omega }=1`$, 0, and $`\mathrm{\Lambda }=0.7`$ cosmologies, where for self–consistency I have used the specific value of $`H_o`$ derived for each case. These ranges are smaller than the observed value cited above, but not so much as to cause alarm at this early stage.
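The cosmological arithmetic behind these $`H_o`$ values and age ranges can be checked by integrating the Friedmann equation for the three cases $`(\mathrm{\Omega }_m,\mathrm{\Omega }_\mathrm{\Lambda })`$ = (1, 0), (0, 0) and (0.3, 0.7). The sketch below (Python) recovers the quoted numbers to within a couple of per cent; the residual differences reflect rounding and the details of the integration.

```python
import numpy as np
from scipy.integrate import quad

KMS_MPC_PER_INV_GYR = 977.8        # 1 Gyr^-1 expressed in km s^-1 Mpc^-1

def lookback_h0t(z, om, ol):
    """H_0 * (lookback time to redshift z) for a matter + Lambda (+ curvature) model."""
    ok = 1.0 - om - ol
    integrand = lambda zz: 1.0 / ((1.0 + zz) *
                 np.sqrt(om * (1.0 + zz)**3 + ok * (1.0 + zz)**2 + ol))
    return quad(integrand, 0.0, z)[0]

t_gc = 12.0                        # Gyr, adopted age of the oldest clusters
cases = {"Omega=1": (1.0, 0.0), "Omega=0": (0.0, 0.0), "flat Lambda=0.7": (0.3, 0.7)}
for name, (om, ol) in cases.items():
    h0_gyr = lookback_h0t(3.0, om, ol) / t_gc                  # H_0 in Gyr^-1
    dt = (lookback_h0t(3.0, om, ol) - lookback_h0t(1.5, om, ol)) / h0_gyr
    print(f"{name:15s}: H_0 ~ {h0_gyr * KMS_MPC_PER_INV_GYR:4.1f} km/s/Mpc, "
          f"Delta t(z = 1.5-3) ~ {dt:.2f} Gyr")
```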
## 7. Conclusion
High resolution HST spectra at low redshift have disclosed the existence of a surprisingly large population of Lyman $`\alpha `$ clouds, most of which are thought to be clustered within 2 $`h_{75}^{-1}\mathrm{Mpc}`$ of bright field galaxies. The juxtaposition of evidence for cloud clustering, and for major accretion events of low metallicity gas onto large field galaxies, suggests a *causal* relationship. By following the commonalities of cloud clustering among the low–redshift population out to high–redshift, it becomes clear that clouds at high–redshift certainly had the opportunity to accrete to the protogalaxies about which they are thought to cluster. The dramatic disappearance of Lyman $`\alpha `$ clouds at high–redshifts provides the link between the clustering of clouds and the heightened star formation rates in field galaxies seen at $`z\approx 1`$–$`3`$ (Madau et al. 1998). It is suggested that the central regions of many of these clouds might plausibly have been transformed into GCs, while the less–strongly held gas may have contributed to the formation of the stellar halo. It has been shown that the energetics of cloud compression are favorable, as are the numerical coincidences of the comoving densities of GCs, and clouds at high–redshift. The juxtaposition of the projected redshifts of formation with new globular cluster ages implies values of $`H_o`$ that are reasonable. However, the predicted cosmology-specific age range of GCs appears to be lower than recent work would imply. This interesting fact promises to be a goad to stimulate future work.
I would like to thank the anonymous referee for many comments and suggestions that have improved this paper. In addition, I thank Hyron Spinrad, Daniel Stern, Christopher F. McKee, Robert Fisher, and Dean McLaughlin for helpful suggestions during the planning and production of this paper.
## 1 Introduction
Measurements of both solar and atmospheric neutrino fluxes provide strong although still indirect evidence for neutrino oscillations. The Super-Kamiokande collaboration has measured the magnitude and angular distribution of the $`\nu _\mu `$ flux originating from cosmic ray induced atmospheric showers. Especially, but not only, the angular distribution of the $`\nu _\mu `$ flux calls for an interpretation of the data in terms of large angle ($`\theta >32^{}`$) neutrino oscillations, with $`\nu _\mu `$ disappearing to $`\nu _\tau `$ or a singlet neutrino and $`\mathrm{\Delta }m_{atm}^2`$ close to $`10^{-3}\text{eV}^2`$. Five independent solar neutrino experiments, using three detection methods, have measured solar neutrino fluxes which differ significantly from expectations. The data are consistent with $`\nu _e`$ disappearance neutrino oscillations, occurring either inside the sun, with $`\mathrm{\Delta }m_{}^2`$ of order $`10^{-5}\text{eV}^2`$, or between the sun and the earth, with $`\mathrm{\Delta }m_{}^2`$ of order $`10^{-10}\text{eV}^2`$. The combination of data on atmospheric and solar neutrino fluxes therefore suggests a hierarchy of neutrino mass splittings: $`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{}^2`$.
Although this is the physical picture to which we stick in this lecture, two caveats have to be remembered. A problem in one of the solar neutrino experiments or in the Standard Solar Model could still allow comparable mass differences for $`\mathrm{\Delta }m_{atm}^2`$ and $`\mathrm{\Delta }m_{}^2`$ . Furthermore, another experimental result exists , interpretable as due to neutrino oscillations. The problem with it is that its description together with the atmospheric and solar neutrino anomalies in terms of oscillations of the 3 standard neutrinos is impossible, even if $`\mathrm{\Delta }m_{atm}^2\mathrm{\Delta }m_{}^2`$ is allowed .
In this lecture I consider theories with three neutrinos. Ignoring the small contribution to the neutrino mass matrix which gives $`\mathrm{\Delta }m_{}^2`$, there are three possible forms for the neutrino mass eigenvalues:
$`\text{“Hierarchical”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}0& & \\ & 0& \\ & & 1\end{array}\right)`$ (1)
$`\text{“Pseudo-Dirac”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}1& & \\ & 1& \\ & & \alpha \end{array}\right)`$ (2)
$`\text{“Degenerate”}\overline{m}_\nu `$ $`=`$ $`m_{atm}\left(\begin{array}{ccc}0& & \\ & 0& \\ & & 1\end{array}\right)+M\left(\begin{array}{ccc}1& & \\ & 1& \\ & & 1\end{array}\right)`$ (3)
where $`m_{atm}`$ is approximately $`0.03`$ eV, the scale of the atmospheric oscillations. The real parameter $`\alpha `$ is either of order unity (but not very close to unity) or zero, while the mass scale $`M`$ is much larger than $`m_{atm}`$. I have chosen to order the eigenvalues so that $`\mathrm{\Delta }m_{atm}^2=\mathrm{\Delta }m_{32}^2`$, while $`\mathrm{\Delta }m_{\odot }^2=\mathrm{\Delta }m_{21}^2`$ vanishes until perturbations much less than $`m_{atm}`$ are added. An important implication of the Super-Kamiokande atmospheric data is that the mixing $`\theta _{\mu \tau }`$ is large. It is remarkable that this large mixing occurs between states with a hierarchy of $`\mathrm{\Delta }m^2`$.
What lies behind this pattern of neutrino masses and mixings? The conventional paradigm for models with flavour symmetries is the hierarchical case with hierarchically small mixing angles, typically given by $`\theta _{ij}\sim (m_i/m_j)^{\frac{1}{2}}`$. If the neutrino mass hierarchy is moderate, and if the charged and neutral contributions to $`\theta _{atm}`$ add, this kind of approach is probably not excluded by the data . It looks more interesting to think, however, that the neutrino masses and mixings do not follow this conventional pattern, since this places considerable constraints on model building. An attractive possibility is that a broken flavour symmetry leads to the leading order masses of (1), (2) or (3), to a large $`\theta _{atm}`$, and to acceptable values for $`\theta _{\odot }`$ and $`\mathrm{\Delta }m_{\odot }^2`$. For this purpose it is essential that the charged lepton mass matrix is discussed at the same time. Although it would be interesting to consider also the quark mass matrices, this problem is only briefly mentioned here.
It turns out that it is simpler to construct flavour symmetries which lead to (1) or (2) with large $`\theta _{atm}`$, as illustrated in Sect 2. In both hierarchical and pseudo-Dirac cases, the neutrino masses have upper bounds of $`(\mathrm{\Delta }m_{atm}^2)^{\frac{1}{2}}`$. In these schemes the sum of the neutrino masses is also bounded, $`\mathrm{\Sigma }_im_{\nu i}\lesssim 0.1`$ eV, implying that neutrino hot dark matter has too low an abundance to be relevant for any cosmological or astrophysical observation. By contrast, it is more difficult to construct theories with flavour symmetries for the degenerate case , where the total sum $`\mathrm{\Sigma }_im_{\nu i}=3M`$ is unconstrained by any oscillation data. While non-Abelian symmetries can clearly obtain the degeneracy of (3) at zeroth order, the difficulty is in obtaining the desired lepton mass hierarchies and mixing angles, which requires flavour symmetry breaking vevs pointing in very different directions in group space. I propose a solution to this vacuum misalignment problem in Sect 4, which can be used to construct a variety of models, some of which predict $`\theta _{atm}=45^{}`$. Along these lines one can also construct a model with bimaximal mixing having $`\theta _{atm}=45^{}`$ and $`\theta _{12}=45^o`$ .
## 2 ”Hierarchical” or ”Pseudo-Dirac” neutrino masses
A large $`\theta _{\mu \tau }`$ mixing angle can simply be attributed to an abelian symmetry which does not distinguish between the muon and the tau left-handed lepton doublets. Since this does not constrain the right handed singlets, some asymmetry between them can be responsible for the $`\mu \tau `$ mass difference. This is straightforward, but not enough, however. As emphasized above, the $`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{\odot }^2`$ hierarchy, if real and significant, has to be explained as well.
At least two textures for the neutrino mass matrices have been singled out which can be responsible for the ”Pseudo-Dirac” or ”Hierarchical” cases respectively and give at the same time a large $`\theta _{atm}`$ :
$$\lambda _\nu ^{(0)I}=\left(\begin{array}{ccc}0& B& A\\ B& 0& 0\\ A& 0& 0\end{array}\right)\lambda _\nu ^{(0)II}=\left(\begin{array}{ccc}0& 0& 0\\ 0& \frac{B^2}{A}& B\\ 0& B& A\end{array}\right)$$
(4)
The important point is that both textures $`I`$ and $`II`$ can be obtained by the seesaw mechanism with abelian symmetries and no tuning of parameters. For example, a simple model for texture $`II`$ has a single heavy Majorana right-handed neutrino, $`N`$, with interactions $`l_{2,3}NH+MNN`$, which could be guaranteed, for example, by a $`Z_2`$ symmetry with $`l_{2,3},N`$ odd and $`l_1`$ even. A simple model for texture $`I`$ has two heavy right-handed neutrinos which form the components of a Dirac state and have the interactions $`l_1N_1H+l_{2,3}N_2H+MN_1N_2`$. These interactions could result, for example, from a U(1) symmetry with $`N_1,l_{2,3}`$ having charge +1, and $`N_2,l_1`$ having charge $`1`$. In both cases, the missing right-handed neutrinos can be heavier, and/or have suitably suppressed couplings.
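As a concrete check of the texture $`II`$ example just described, the following minimal numerical sketch (not taken from any reference; all numbers are illustrative assumptions) shows how integrating out a single heavy Majorana neutrino coupled only to $`l_{2,3}`$ produces a rank-one light mass matrix with exactly the pattern of $`\lambda _\nu ^{(0)II}`$, via the usual seesaw estimate $`m_\nu \sim \lambda \lambda ^Tv^2/M`$ (overall sign and order-one coefficients dropped).

```python
import numpy as np

# Illustrative couplings of the single heavy neutrino N to (l_1, l_2, l_3);
# l_1 decouples because it is even under the Z_2 symmetry of the text.
lam = np.array([0.0, 0.7, 1.0])
v = 174.0    # GeV, electroweak vev (assumed scale)
M = 1.0e14   # GeV, heavy Majorana mass (assumed scale)

# Seesaw estimate: m_nu ~ (v^2 / M) * lambda_i lambda_j  -> rank-one texture II
m_nu = (v**2 / M) * np.outer(lam, lam)
print(np.round(m_nu / m_nu[2, 2], 3))          # pattern [[0,0,0],[0,B^2/A,B],[0,B,A]]/A
print("rank =", np.linalg.matrix_rank(m_nu))   # 1: only one light state is massive
```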
It actually makes sense to speak of neutrino mass textures only in association with corresponding charged lepton mass textures. In turn, these textures have to be obtainable by the same symmetries responsible for $`\lambda _\nu ^{(0)I}`$ or $`\lambda _\nu ^{(0)II}`$. It is easy to see how $`\lambda _\nu ^{(0)I}`$ or $`\lambda _\nu ^{(0)II}`$ can be coupled to
$$\lambda _E^{(0)}=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& B\\ 0& 0& A\end{array}\right)$$
(5)
I have rotated the right-handed charged leptons so that the entries of the first two columns vanish. As in (4), I take $`A`$ and $`B`$ to be non-zero and comparable, since they both occur consistently with the unbroken flavour symmetries. Both in (4) and in (5) the label <sup>(0)</sup> denotes the fact that suitable perturbations have to be added to obtain fully realistic masses and mixings.
Finally, it is straightforward to extend these solutions to quarks, in particular to SU(5) unification, where each field is replaced by its parent SU(5) multiplet $`(l_{2,3}\rightarrow \overline{F}_{2,3},etc)`$. Some qualitatively successful relations are actually implied by this extension, like, e.g., $`V_{cb}=O(m_s/m_b)`$.
## 3 Textures for quasi-degenerate neutrinos
What are the possible textures for the degenerate case in the flavour basis? These textures will provide the starting point for constructing theories with flavour symmetries. In passing from flavour basis to mass basis, the relative transformations on $`e_L`$ and $`\nu _L`$ give the leptonic mixing matrix $`V`$. Defining $`V`$ by the charged current in the mass basis, $`\overline{e}V\nu `$, I choose to parameterize $`V`$ in the form
$$V=R(\theta _{23})R(\theta _{13})R(\theta _{12})$$
(6)
where $`R(\theta _{ij})`$ represents a rotation in the $`ij`$ plane by angle $`\theta _{ij}`$, and diagonal phase matrices are left implicit. The angle $`\theta _{23}`$ is necessarily large as it is $`\theta _{atm}`$. In contrast, the Super-Kamiokande data constrains $`\theta _{13}\lesssim 20^{}`$ , and if $`\mathrm{\Delta }m_{atm}^2>2\times 10^{-3}\text{eV}^2`$, then the CHOOZ data requires $`\theta _{13}\lesssim 13^{}`$ . For small angle MSW oscillations in the sun, $`\theta _{12}\approx 0.05`$, while other descriptions of the solar fluxes require large values for $`\theta _{12}`$.
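For orientation, a short numerical sketch of the parameterization (6) is given below; the sign convention of the rotations and the angle values are illustrative assumptions only, and the diagonal phase matrices are dropped as in the text.

```python
import numpy as np

def rot(i, j, theta):
    """3x3 rotation by angle theta in the (i, j) plane (0-indexed); phases omitted."""
    r = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    r[i, i] = r[j, j] = c
    r[i, j], r[j, i] = s, -s
    return r

# Illustrative angles: maximal atmospheric mixing, small theta_13 and theta_12
t23, t13, t12 = np.radians(45.0), np.radians(5.0), 0.05
V = rot(1, 2, t23) @ rot(0, 2, t13) @ rot(0, 1, t12)   # V = R(th23) R(th13) R(th12)
print(np.round(V, 3))
```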
Which textures give such a $`V`$ together with the degenerate mass eigenvalues of eq. (3)? In searching for textures, I require that in the flavour basis any two non-zero entries are either independent or equal up to a phase, as could follow simply from flavour symmetries. This allows just three possible textures for $`m_\nu `$ at leading order
$`\text{“A”}m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right)`$ (7)
$`\text{“B”}m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 1\end{array}\right)`$ (8)
$`\text{“C”}m_\nu `$ $`=`$ $`M\left(\begin{array}{ccc}1& 0& 0\\ 0& 0& 1\\ 0& 1& 0\end{array}\right)+m_{atm}\left(\begin{array}{ccc}0& 0& 0\\ 0& 1& 1\\ 0& 1& 1\end{array}\right)`$ (9)
Alternatives for the perturbations proportional to $`m_{atm}`$ are possible. Each of these textures will have to be coupled to corresponding suitable textures for the charged lepton mass matrix $`m_E`$, defined by $`\overline{e_L}m_Ee_R`$. For example, in cases (A) and (B), the big $`\theta _{23}`$ rotation angle will have to come from the diagonalization of $`m_E`$.
To what degree are the three textures A,B and C the same physics written in different bases, and to what extent can they describe different physics? Any theory with degenerate neutrinos can be written in a texture A form, a texture B form or a texture C form, by using an appropriate choice of basis. However, for certain cases, the physics may be more transparent in one basis than in another, as illustrated later.
## 4 Degenerate neutrinos from broken non-abelian symmetries
The near degeneracy of the three neutrinos requires a non-abelian flavour symmetry, which I take to be $`SO(3)`$, with the three lepton doublets, $`l`$, transforming as a triplet. This is for simplicity – many discrete groups, such as a twisted product of two $`Z_2`$s, would also give zeroth order neutrino degeneracy .
Following ref , I work in a supersymmetric theory and introduce a set of “flavon” chiral superfields which spontaneously break SO(3). For now I just assign the desired vevs to these fields; later I show how to construct potentials which force these orientations. Also, for simplicity I assume one set of flavon fields, $`\chi `$, couple to operators which give neutrino masses, and another set, $`\varphi `$, to operators for charged lepton masses. Fields are labelled according to the direction of the vev, e.g. $`\varphi _3=(0,0,v)`$. For example, texture A, with
$$m_E=\left(\begin{array}{ccc}0& 0& 0\\ 0& \delta _2& D_2\\ 0& \delta _3& D_3\end{array}\right)m_{II},$$
(10)
results from the superpotential
$$W=(ll)hh+(l\chi _3)^2hh+(l\varphi _3)\tau h+(l\varphi _2)\tau h+(l\varphi _3)\xi _\mu \mu h+(l\varphi _2)\xi _\mu \mu h$$
(11)
where the coefficient of each operator is understood to be an order unity coupling multiplied by the appropriate inverse power of the large flavour mass scale $`M_f`$. The lepton doublet $`l`$ and the $`\varphi ,\chi `$ flavons are all $`SO(3)`$ triplets, while the right-handed charged leptons ($`e,\mu ,\tau `$) and the Higgs doublets, $`h`$, are $`SO(3)`$ singlets. The form of eqn. (11) may be guaranteed by additional Abelian flavour symmetries; in the limit where these symmetries are exact, the only charged lepton to acquire a mass is the $`\tau `$. These symmetries are broken by vevs of flavons $`\xi _{e,\mu }`$, which are $`SO(3)`$ and standard model singlet fields. The hierarchy of charged fermion masses is then generated by $`\frac{\xi _{e,\mu }}{M_f}`$. The ratios $`\varphi _{2,3}/M_f`$ and $`\chi /M_f`$ generate small dimensionless $`SO(3)`$ symmetry breaking parameters. The first term of (11) generates an $`SO(3)`$ invariant mass for the neutrinos corresponding to the first term in (7). The second term gives the second term of (7) with $`m_{atm}/M=\chi _3^2/M_f^2`$. The remaining terms generate the charged lepton mass matrices. Note that the charged fermion masses vanish in the $`SO(3)`$ symmetric limit — this is the way I reconcile the near degeneracy of the neutrino spectrum with the hierarchical charged lepton sector.
In this example we see that the origin of large $`\theta _{atm}`$ is due to the misalignment of the $`\varphi `$ vev directions relative to that of the $`\chi `$ vev. This is generic. In theories with flavour symmetries, large $`\theta _{atm}`$ will always arise because of a misalignment of flavons in charged and neutral sectors. To obtain $`\theta _{atm}=45^{}`$, as preferred by the atmospheric data, requires however a very precise misalignment, which can occur as follows. In a basis where the $`\chi `$ vev is in the direction $`(0,0,1)`$, there should be a single $`\varphi `$ field coupling to $`\tau `$ which has a vev in the direction $`(0,1,1)`$, where an independent phase for each entry is understood. As we shall now discuss, in theories based on $`SO(3)`$, such an alignment occurs very easily, and hence should be viewed as a typical expectation, and certainly not as a fine tuning.
Consider any 2 dimensional subspace within the $`l`$ triplet, and label the resulting 2-component vector of $`SO(2)`$ as $`\mathrm{}=(\mathrm{}_1,\mathrm{}_2)`$. At zeroth order in SO(2) breaking only the neutrinos of $`\mathrm{}`$ acquire a mass, and they are degenerate from $`\mathrm{}\mathrm{}hh`$. Introduce a flavon doublet $`\chi =(\chi _1,\chi _2)`$ which acquires a vev to break $`SO(2)`$. If this field were real, then one could do an $`SO(2)`$ rotation to set $`\chi _2=0`$. However, in supersymmetric theories $`\chi `$ is complex and a general vev has the form $`\chi _i=a_i+ib_i`$. Only one of these four real parameters can be set to zero using $`SO(2)`$ rotations. Hence the scalar potential can determine a variety of interesting alignments. There are two alignments which are easily produced and are very useful in constructing theories:
$$\text{“SO(2)” Alignment:}W=X(\chi ^2-M^2);m_\chi ^2>0;\chi =M(0,1).$$
(12)
The parameter $`M`$, which could result from the vev of some $`SO(2)`$ singlet, can be taken real and positive by a phase choice for the fields. The parameter $`m_\chi ^2`$ is a soft mass squared for the doublet $`\chi `$.
The second example is:
$$\text{“U(1)” Alignment:}W=X\phi ^2;m_\phi ^2<0;\phi =V(1,i)\text{ or }V(1,-i).$$
(13)
It is now the negative soft mass squared which forces a magnitude $`\sqrt{2}|V|`$ for the vev.
The vev of the $`SO(2)`$ alignment, (12), picks out the original $`SO(2)`$ basis; however, the vev of the $`U(1)`$ alignment, (13), picks out a new basis $`(\phi _+,\phi _{-})`$, where $`\phi _\pm =(\phi _1\pm i\phi _2)/\sqrt{2}`$. If $`(\phi _1,\phi _2)\propto (1,i)`$, then $`(\phi _{-},\phi _+)\propto (1,0)`$. An important feature of the $`U(1)`$ basis is that the $`SO(2)`$ invariant $`\phi _1^2+\phi _2^2`$ has the form $`2\phi _+\phi _{-}`$. In the SO(3) theory, we usually think of $`(ll)hh`$ as giving the unit matrix for neutrino masses as in texture A. However, if we use the $`U(1)`$ basis for the 12 subspace, this operator actually gives the leading term in texture B, whereas if we use the $`U(1)`$ basis in the 23 subspace we get the leading term in texture C.
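A two-line numerical check of this alignment is sketched below (the value of $`V`$ is an arbitrary illustrative choice): the vev $`\phi =V(1,i)`$ indeed has $`\phi _+=0`$, and the $`SO(2)`$ invariant $`\phi _1^2+\phi _2^2`$ coincides with $`2\phi _+\phi _{-}`$.

```python
import numpy as np

V = 1.0 + 0.5j                      # arbitrary complex magnitude (illustrative)
phi1, phi2 = V * 1.0, V * 1.0j      # the "U(1)"-aligned vev  phi = V(1, i)
phi_plus = (phi1 + 1j * phi2) / np.sqrt(2)
phi_minus = (phi1 - 1j * phi2) / np.sqrt(2)
print(phi_minus, phi_plus)          # (sqrt(2) V, 0): only phi_- is non-zero
print(np.isclose(phi1**2 + phi2**2, 2 * phi_plus * phi_minus))  # SO(2) invariant
```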
Using this trick, it is simple to write down a variety of models for quasi degenerate neutrinos and charged leptons which naturally give rise to $`\theta _{23}=\theta _{atm}=45^0`$ and possibly also to $`\theta _{12}=45^o`$, up to corrections vanishing as $`m_\mu /m_\tau `$ or $`m_e/m_\mu `$ go respectively to zero . In particular, quasi-degenerate neutrinos with $`\theta _{23}=\theta _{12}=45^0`$ and $`\theta _{13}=0`$ are known to meet the condition that makes them evade not only the upper bound on the absolute mass from oscillation experiments but also the constraint from neutrinoless Double Beta decay.
## 5 Conclusions
The hierarchy $`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{\odot }^2`$ and the large $`\theta _{23}`$ mixing angle, as suggested by neutrino oscillation experiments, can be accounted for by a variety of lepton flavour models. A dichotomy emerges.
Models where all neutrino masses are bounded by $`m_{atm}\equiv (\mathrm{\Delta }m_{atm}^2)^{\frac{1}{2}}\approx 0.03\mathrm{eV}`$ are easily constructed, based on abelian flavour symmetries, even though special attention has to be paid to the origin of $`\mathrm{\Delta }m_{atm}^2\gg \mathrm{\Delta }m_{\odot }^2`$. These models can be straightforwardly extended to quarks and, in particular, to SU(5) unification.
On the contrary, models of quasi-degenerate neutrinos are more likely based on non-abelian flavour symmetries. In the limit of exact flavour symmetry, the three neutrinos are massive and degenerate, while the three charged leptons are massless. Such zeroth-order masses result when the three lepton doublets form a real irreducible representation of some non-Abelian flavour group — for example, a triplet of $`SO(3)`$. A sequential breaking of the flavour group then produces both a hierarchy of charged lepton masses and a hierarchy of neutrino $`\mathrm{\Delta }m^2`$. The problem of extending these non-abelian symmetries to the quark sector is an open one.
An independent indication in favour of a non-abelian flavour symmetry may come from one or maybe two of the mixing angles being close to $`45^{}`$. A feature of the $`SO(3)`$ symmetry breaking is that it may follow a different path in the charged and neutral sectors, leading to a vacuum misalignment with interesting consequences. Mixing angles of $`45^{}`$ do arise from the simplest misalignment potentials. Such mixing can explain the atmospheric neutrino data, and can result in rotating away the $`\beta \beta _{0\nu }`$ process, allowing significant amounts of neutrino hot dark matter.
## Acknowledgements
This work summarizes the results and the conclusions obtained by discussing and collaborating with various people. Special thanks go to Lawrence Hall, Alessandro Strumia and Graham Ross.
|
no-problem/9901/chao-dyn9901030.html
|
ar5iv
|
text
|
# Intrinsically localized chaos in discrete nonlinear extended systems
## Abstract
The phenomenon of intrinsic localization in discrete nonlinear extended systems, i.e. the (generic) existence of discrete breathers, is shown not to be restricted to periodic solutions but to extend also to more complex (chaotic) dynamical behaviour. We illustrate this with two different forced and damped systems exhibiting this type of solutions: In an anisotropic Josephson junction ladder, we obtain intrinsically localized chaotic solutions by following periodic rotobreather solutions through a cascade of period-doubling bifurcations. In an array of forced and damped van der Pol oscillators, they are obtained by numerical continuation (path-following) methods from the uncoupled limit, where their existence is trivially ascertained, following the ideas of the anticontinuum limit.
Discrete homogeneous arrays of (hamiltonian and non-hamiltonian) nonlinear oscillators (or rotors) exhibit generic solutions which are time-periodic and (typically exponentially) localized in space. These solutions are called discrete breathers by analogy with non-topological localized solutions of certain PDE’s. In contrast with continuous ”bona fide” breathers, discrete breathers possess a remarkable structural stability, and thus genericity. This localization is often referred to as intrinsic to stress the fact that the system is homogeneous (no impurities or disorder are present). For an updated and comprehensive review on discrete breathers, see.
A general schematic way to describe a discrete breather in a one-dimensional lattice is the following: Let us consider the phase space $`\mathrm{\Gamma }_s`$ of a single oscillator (or rotor), so that the phase space $`\mathrm{\Gamma }`$ of the network is the cartesian product of the single site phase spaces. Let us denote by $`A`$, $`B`$ periodic orbits in $`\mathrm{\Gamma }_s`$, possibly projections of trajectories of $`\mathrm{\Gamma }`$ onto $`\mathrm{\Gamma }_s`$. A discrete breather is a solution
$$\{\varphi _i(t)\}\equiv \{\mathrm{\dots },B_{-2},B_{-1},A,B_1,B_2,\mathrm{\dots }\}$$
(1)
with $`lim_{|i|\rightarrow \mathrm{\infty }}B_i=B_{\mathrm{\infty }}`$, and $`B_{\mathrm{\infty }}\ne A`$. Archetypical examples are Klein-Gordon hamiltonian breathers, where $`A`$ is a periodic cycle of frequency $`\omega _b`$ in the $`(\varphi ,\dot{\varphi })`$ phase space, $`B_{\mathrm{\infty }}`$ is the rest solution $`(0,0)`$, and $`B_i`$ are $`\omega _b`$-cycles with exponentially decreasing amplitude. In the case of forced and damped arrays, $`A`$ and $`B_{\mathrm{\infty }}`$ are usually $`\omega _b`$-cycles of different amplitude. If $`A`$ is an $`\omega _b`$-cycle non-homotopic to zero (i. e. the central oscillator rotates), the term rotobreather is used.
In this letter we present numerical evidence as well as plausibility arguments strongly supporting the conclusion that the phenomenon of intrinsic localization in discrete nonlinear extended systems is not restricted to time-periodic solutions, but it extends to more complex (chaotic) behaviour in a generic way for damped and forced systems. More specifically, we show below examples of solutions of the type schematized in (1), where $`A`$ is a chaotic trajectory, $`B_{\mathrm{\infty }}`$ is a ”regular” $`\omega _b`$-cycle, and $`B_i`$ are ”noisy” cycles, with ”noise intensity” exponentially decreasing to zero as $`|i|`$ grows. The first example concerns the operation of a Josephson junction device, the Josephson junction ladder, which has recently received some attention from both theoretical and experimental sides, in connection to the relevance of nonlinear dynamics of discrete systems in Condensed Matter Physics. The second example, though also experimentally realizable, serves us to illuminate possible pathways towards a rigorous characterization of the genericity of intrinsically localized chaos in discrete nonlinear extended systems, in the spirit of the ideas of the, so called, anticontinuum limit approach to intrinsic localization. We end with a short discussion on the implausibility of existence of this type of solutions as exact ones in hamiltonian arrays. Earlier numerical observations of localized chaotic solutions seem to have been reported in (coupled map lattices) and (domain walls in a parametrically excited lattice of oscillators). Our results establish a precise (and very general) link between situations of spatio-temporal complex behaviour in spatially extended discrete systems and the emergent new results and powerful methods of intrinsic localization.
Recent theoretical analyses of the dynamics of an anisotropic Josephson junction ladder (see figure 1) with injected ac currents have shown the existence of discrete breathers as attracting solutions of the equations of motion describing the dynamics of the system in the framework of the resistively and capacitively shunted junction (RCSJ) approach . The existence of discrete breathers in Josephson junction arrays should indeed be regarded as generic, given the connection between the general description of these systems in terms of the superconducting Ginzburg-Landau order parameter $`\mathrm{\Psi }(\stackrel{}{x})=|\mathrm{\Psi }(\stackrel{}{x})|\mathrm{exp}(i\theta (\stackrel{}{x}))`$, where $`\stackrel{}{x}`$ denotes the superconducting island position, and the *discrete nonlinear Schrödinger equation*, for the case of ideal (perfect insulating) junctions . In fact, the quantum Hamiltonian of a single ideal Josephson junction corresponds to the problem of two coupled anharmonic quantum oscillators, for which the asymmetric classical breather solutions have been shown to persist in the quantum regime as very long lifetime states (see also ). When the energy cost to add an extra Cooper pair on a neutral superconducting island (*charging energy* $`E_c`$) is much lower than the tunneling energy (*Josephson energy* $`E_J`$) the superconducting phase $`\theta (\stackrel{}{x})`$ becomes a good (very weakly fluctuating) variable to describing the island state, thus validating the RCSJ approach . This is the situation when the superconducting islands are of macroscopic size. The validity of the RCSJ approach in the regime $`E_c/E_J1`$ is a well established issue and its predictions fit very well with experiments.
$`\theta _i`$ and $`\theta _i^{}`$ will denote, respectively, the phases of upper and lower islands at site $`i`$ in the ladder; the currents $`I(t)=I_{ac}\mathrm{cos}(\omega t)`$ are injected into the islands in the upper row and extracted from those in the lower row; $`(J_x,ϵ_x)`$ are the junction characteristics for junctions in horizontal links and $`(J_y,ϵ_y)`$ for junctions in vertical links. With the change of variables $`\chi _i=\frac{1}{2}(\theta _i+\theta _i^{})`$, $`\varphi _i=\frac{1}{2}(\theta _i\theta _i^{})`$, the RCSJ equations are
$`\ddot{\chi }_i=`$ $`J_x\left[\mathrm{sin}(\chi _{i+1}-\chi _i)\mathrm{cos}(\varphi _{i+1}-\varphi _i)+\mathrm{sin}(\chi _{i-1}-\chi _i)\mathrm{cos}(\varphi _{i-1}-\varphi _i)\right]`$ (2)
$`+ϵ_x\left(\dot{\chi }_{i+1}+\dot{\chi }_{i-1}-2\dot{\chi }_i\right)`$
$`\ddot{\varphi }_i=`$ $`J_x\left[\mathrm{cos}(\chi _{i+1}-\chi _i)\mathrm{sin}(\varphi _{i+1}-\varphi _i)+\mathrm{cos}(\chi _{i-1}-\chi _i)\mathrm{sin}(\varphi _{i-1}-\varphi _i)\right]`$ (3)
$`+ϵ_x\left(\dot{\varphi }_{i+1}+\dot{\varphi }_{i-1}-2\dot{\varphi }_i\right)-J_y\mathrm{sin}(2\varphi _i)-2ϵ_y\dot{\varphi }_i-I(t)`$
With uniform initial conditions in the ”center of mass” coordinates and momenta: $`\chi _i`$ and $`\dot{\chi }_i`$ independent of $`i`$, equations (2) have the solution $`\chi _i(t)=\mathrm{\Omega }t+\alpha `$ for all $`i`$; this effectively decouples equations (3) for the $`\varphi _i`$ variables from equations (2) for the $`\chi _i`$ variables. Then, using efficient continuation methods from the uncoupled (anticontinuum) limit ($`J_x=ϵ_x=0`$), one easily computes discrete breather solutions; these turn out to be attractors of the dynamics of the ladder in a wide range of parameter values.
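For readers who wish to experiment with this kind of solution, a minimal sketch of the right-hand side of eq. (3) with the uniform $`\chi `$ solution inserted (so that all cosine factors equal 1) is given below. The lattice size, the open boundary conditions and the initial kick are assumptions made here only for illustration; they are not specifications taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter values quoted later in the text for the chaotic rotobreather
Jx, Jy, ex, ey = 0.05, 0.5, 0.03, 0.01
Iac, omega = 0.72, 1.623
N = 31   # number of vertical junctions (assumed)

def rhs(t, y):
    phi, dphi = y[:N], y[N:]
    lap = np.zeros(N); vlap = np.zeros(N)
    # coupling and dissipative terms for interior sites (open boundaries assumed)
    lap[1:-1] = np.sin(phi[2:] - phi[1:-1]) + np.sin(phi[:-2] - phi[1:-1])
    vlap[1:-1] = dphi[2:] + dphi[:-2] - 2.0 * dphi[1:-1]
    ddphi = (Jx * lap + ex * vlap - Jy * np.sin(2.0 * phi)
             - 2.0 * ey * dphi - Iac * np.cos(omega * t))
    return np.concatenate([dphi, ddphi])

y0 = np.zeros(2 * N); y0[N + N // 2] = 5.0    # kick the central rotor (illustrative)
sol = solve_ivp(rhs, (0.0, 200.0), y0, max_step=0.05)
```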
We will concentrate on the *rotobreather* type of solutions, in which the phase half-difference $`\varphi _j^{}`$ through a vertical junction rotates, while the rest $`\varphi _i`$ ($`ij^{}`$) oscillate, and the ”center of mass” variables $`\chi _i`$ remain uniformly at rest ($`\mathrm{\Omega }=\alpha =0`$; note that any other values for these parameters, fixed by the uniform initial conditions, would show the same behavior). The period of the rotobreather solution is $`T_b=2\pi /\omega _b=4\pi /\omega `$, where $`\omega `$ is the frequency of the external currents.
By performing the Floquet analysis of rotobreather solutions, one can determine the regions of linear stability in parameter space, whose borders correspond to different types of bifurcations. One of them (which occurs typically when varying the external frequency $`\omega `$) is a period-doubling bifurcation: The (destabilizing) eigenvector of the Floquet matrix, which is associated with the eigenvalue exiting the unit circle (in the complex plane) at $`-1`$, is (exponentially) localized at the center of the rotobreather and then, a new (linearly stable) rotobreather with frequency $`\omega _b/2`$ exists past the bifurcation. This new rotobreather can be easily and safely obtained by slightly perturbing the unstable rotobreather along the direction of the destabilizing eigenvector. In other words, although one cannot continue the localized solution through a bifurcation, local bifurcation analysis helps to throw a bridge over the bifurcation, so arriving safely at the new localized solution on the other side.
Continuously varying the external frequency $`\omega `$, further period doubling bifurcations are often found leading to a chaotic solution. In order to characterize this solution unambiguously as chaotic, we have computed its Lyapunov spectrum $`\{\lambda _i\}`$, which is shown in figure 2.
There is only one positive Lyapunov exponent, $`\lambda _1=0.049`$bits/s. As we are dealing here with a continuous time dynamical system, a null exponent is also present. The rest of the spectrum is negative. Thus, there is only one expanding direction (degree of freedom) in phase space. The estimated Lyapunov dimension, $`D_L`$, defined as
$$D_L=j+\frac{1}{|\lambda _{j+1}|}\underset{i=1}{\overset{j}{\sum }}\lambda _i$$
(4)
with $`j`$ such that $`\sum _{i=1}^j\lambda _i>0`$ and $`\sum _{i=1}^{j+1}\lambda _i<0`$ (exponents are ordered in decreasing order), is $`D_L=4.7`$.
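A minimal sketch of eq. (4), the Kaplan–Yorke estimate used here, is the following (the spectrum fed to it below is a made-up placeholder, not the spectrum of figure 2):

```python
import numpy as np

def lyapunov_dimension(spectrum):
    """Kaplan-Yorke dimension of eq. (4) from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(spectrum))[::-1]       # decreasing order
    csum = np.cumsum(lam)
    j = np.max(np.nonzero(csum > 0.0)[0]) + 1       # largest j with a positive partial sum
    if j >= len(lam):
        return float(len(lam))                      # partial sums never turn negative
    return j + csum[j - 1] / abs(lam[j])

# Illustrative spectrum: one positive and one (nearly) zero exponent
print(lyapunov_dimension([0.049, 0.0, -0.01, -0.02, -0.05, -0.1]))
```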
A look at the profile (at different times) of the Lyapunov vector associated with the positive Lyapunov exponent reveals that it is strongly localized in space. As the period doubling bifurcations leading to the chaotic solution are driven by exponentially localized eigenvectors of the Floquet matrix, it is not surprising that this chaotic solution is exponentially localized. In figure 3 we show the Poincaré (stroboscopic, with period $`2T_b`$) section of the central rotor trajectory $`\varphi _0(t)(mod2\pi )`$ of the intrinsically localized chaotic solution for parameter values $`J_x=0.05`$, $`J_y=0.5`$, $`ϵ_x=0.03`$, $`ϵ_y=0.01`$, $`\omega =1.623`$ and $`I_{ac}=0.72`$. As shown also in figure 3, the trajectories $`\varphi _i(t)`$ for $`|i|>0`$ are noisy (or chaotically perturbed) oscillations. As a rough measure of ”noise intensity”, we adopt the radius $`r_i`$ of the smallest circle containing the Poincaré section of the $`i`$th oscillator. This quantity decreases exponentially, $`r_i\approx C\mathrm{exp}(-|i|/\xi )`$ $`(\xi \approx 1.13)`$, as evidenced in figure 3.
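The localization length quoted above can be extracted from the radii $`r_i`$ by a straight-line fit of $`\mathrm{log}r_i`$ versus $`|i|`$; a short sketch with synthetic placeholder data (not the measured radii of figure 3) is:

```python
import numpy as np

# Synthetic radii obeying r_i ~ C exp(-|i|/xi), with xi = 1.13 and 5% noise
i = np.arange(1, 8)
r = 0.8 * np.exp(-i / 1.13) * (1.0 + 0.05 * np.random.randn(i.size))
slope, intercept = np.polyfit(i, np.log(r), 1)   # log r_i = log C - |i|/xi
print("xi =", -1.0 / slope)                      # close to 1.13 for this toy data
```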
Vaguely speaking, one could say that the uniformly oscillating solution is robust enough to exponentially damp out the penetration of the chaotic perturbation produced by the central rotor; equivalently, one could say that the uniformly oscillating solution possesses a finite coherence length $`\xi `$, so that an oscillator does not feel the effect of any sustained local perturbation located at distances much greater than $`\xi `$ (lattice units) from it. On intuitive grounds, it is clear that finite coherence length is required for intrinsic localization to occur.
Now we turn to the question on genericity, i. e. should one expect that these intrinsically localized chaotic solutions exist generically in discrete arrays of coupled nonlinear oscillators? Though arguably there is little doubt that finite coherence length is ubiquitous in discrete nonlinear extended systems, at least some degree of robustness of the chaotic trajectory in the central oscillator (not to speak of the mere possibility of a chaotic behaviour) is also needed. In an attempt to pave the way towards rigorous answers to the question, we have considered the perspective on intrinsic localization opened by the ”anticontinuum limit” approach, as explained below.
Let us consider a chain of forced and damped identical uncoupled oscillators, and assume that there is coexistence of a chaotic attractor and an attracting cycle in the single oscillator phase space. Now, consider the cartesian product of a (central site) chaotic attractor and attracting regular cycles in the rest of lattices sites. This set is an attractor in the phase space of the uncoupled chain, which could plausibly be continued when coupling is turned on.
In order to check this idea, we have chosen a chain of harmonically coupled, forced van der Pol oscillators:
$$\ddot{\varphi }_i=-\mu (\varphi _i^2-1)\dot{\varphi _i}-\varphi _i+b\mathrm{cos}(\omega t)+C(\varphi _{i+1}-2\varphi _i+\varphi _{i-1})$$
(5)
For $`\mu =4.033`$, $`b=9.0`$ and $`\omega =\pi `$, the single forced van der Pol oscillator phase space shows coexistence of two strange attractors and a periodic cycle of frequency $`\omega /3`$ (see ). We have numerically continued the solution of the uncoupled chain in which the central oscillator follows a chaotic trajectory in one of the strange attractors, while the rest of the oscillators follow the periodic cycle, for non-zero values of the coupling constant $`C`$, up to values of the order of $`0.5\times 10^{-3}`$, which are small but significantly different from zero.
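A naive version of this continuation, sufficient to convey the idea, is sketched below: the chain is integrated at $`C=0`$ with only the central oscillator kicked onto a chaotic orbit, and $`C`$ is then ramped up in small steps, reusing the final state of each run as the next initial condition. The lattice size, the boundary conditions and the ramping schedule are illustrative assumptions; the actual path-following used in this work is a proper numerical continuation, not this simple ramp.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, b, w = 4.033, 9.0, np.pi
N = 15   # chain length (assumed)

def rhs(t, y, C):
    x, v = y[:N], y[N:]
    lap = np.zeros(N)
    lap[1:-1] = x[2:] - 2.0 * x[1:-1] + x[:-2]   # open chain (assumption)
    a = -mu * (x**2 - 1.0) * v - x + b * np.cos(w * t) + C * lap
    return np.concatenate([v, a])

# Ramp the coupling from the uncoupled (anticontinuum) limit upwards
y = np.zeros(2 * N); y[N // 2] = 2.0             # kick only the central site
for C in np.linspace(0.0, 5e-4, 6):
    sol = solve_ivp(rhs, (0.0, 100.0 * 2 * np.pi / w), y, args=(C,), max_step=0.02)
    y = sol.y[:, -1]                              # reuse the end state as the next seed
```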
The continuation from the uncoupled limit provides a systematic way of obtaining intrinsically localized chaotic solutions, provided the coexistence of strange and periodic attractors for a single oscillator. It may also serve, like in the simpler case of periodic discrete breathers, as a basis for the construction of a proof of existence which we see as a difficult problem. Indeed, Mackay already mentioned this approach for the case of the Plykin attractor, where continuation is ensured due to uniform hyperbolicity; unfortunately, as usual in chaos theory, strong conditions which simplify mathematical proofs do not seem to fit easily into realistic physical models.
The examples we have shown here concern systems of forced and damped oscillators, and one may wonder about hamiltonian arrays of oscillators. Though we do not have a definite answer on the existence of intrinsic localized chaos in discrete nonlinear Hamiltonian extended systems, it seems plausible that the typical ”broad band” structure of the power spectrum of chaotic trajectories would imply a violation of the condition of non-resonance with the phonons . To the extent that this condition plays an essential role in the proof of existence of hamiltonian discrete breathers, we think that the answer is negative. However, chaotic breathers in discrete Hamiltonian arrays easily appear as long-lived transient solutions. An observation of erratically moving transient chaotic breathers in hamiltonian Fermi-Pasta-Ulam chains has been recently reported. After completion of this work, we became aware of the numerical observation of chaotic rotobreathers by Bonart and Page in a 1d driven damped lattice of dipoles.
We acknowledge to S. Aubry, C. Baesens, R.S. Mackay and J.L. Marín for many useful discussions, P. Grassberger for his illuminating criticisms and J. Page for sending us a draft of his work, prior to publication. This work has been financially supported by DGES through project PB95-0797. One of us (JJM) acknowledges a Fulbright-MEC fellowship.
|
no-problem/9901/astro-ph9901411.html
|
ar5iv
|
text
|
# Library of medium-resolution fiber optic echelle spectra of F, G, K, and M field dwarfs to giants stars
## 1 Introduction
Spectral libraries of late-type stars with medium to high resolution and large spectral coverage are an essential tool for the study of the chromospheric activity in multiwavelength optical observations using the spectral subtraction technique (see Barden 1985; Huenemoerder & Ramsey 1987; Hall & Ramsey 1992; Montes et al. 1995a, b, c, 1996a, b, 1997b, 1998). Furthermore, these libraries are also very useful in many areas of astrophysics such as the stellar spectral classification, determination of atmospheric parameters (T<sub>eff</sub>, $`\mathrm{log}g`$, \[Fe/H\]), modeling stellar atmospheres, spectral synthesis applied to composite systems, and spectral synthesis of the stellar population of galaxies.
In previous work Montes et al. (1997a, hereafter Paper I) presented a library of high and mid-resolution (3 to 0.2 Å) spectra in the Ca ii H & K, H$`\alpha `$, H$`\beta `$, Na i D<sub>1</sub>, D<sub>2</sub>, and He i D<sub>3</sub> line regions of F, G, K, and M field stars. A library of echelle spectra of a sample of F, G, K, and M field dwarf stars is presented in Montes & Martín (1998, hereafter Paper II) which is an extension of Paper I to higher spectral resolution (0.19 to 0.09 Å) covering a large spectral range (4800 to 10600 Å).
The spectral library presented here expands upon the data set in Papers I and II. This library consists of echelle spectra of a sample of F, G, K, and M field stars, mainly dwarfs (V), subgiants (IV), and giants (III) but also some supergiants (II, I). The spectral resolving power is intermediate, nominally R = 12000 ($`\sim `$ 0.5 Å in H$`\alpha `$), but the spectra have a nearly complete optical region coverage (from 3900 to 9000 Å). This spectral region includes most of the spectral lines widely used as optical and near-infrared indicators of chromospheric activity such as the Balmer lines (H$`\alpha `$ to H$`ϵ`$), Ca ii H & K, Mg i b triplet, Na i D<sub>1</sub>, D<sub>2</sub>, He i D<sub>3</sub>, and Ca ii IRT lines, as well as temperature sensitive photospheric features such as TiO bands.
Recently, Pickles (1998) has taken available published spectra and combined them into a uniform stellar spectral flux library. This library has a wide wavelength, spectral type, and luminosity class coverage, but a low spectral resolution (R = 500), and its main purpose is the synthesis and modeling of the integrated light from composite populations. However, for other purposes, such as detailed studies of chromospheric activity, stellar spectral classification, and determination of atmospheric parameters, libraries of higher resolution are needed, such as those presented in Papers I and II, in Soubiran, Katz, & Cayrel (1998), and here.
In Sect. 2 we report the details of our observations and data reduction. The library is presented in Sect. 3.
## 2 Observations and data reduction
The echelle spectra presented here were obtained during several observing runs with the Penn State Fiber Optic Echelle (FOE) at the 0.9-m and 2.1-m telescopes of the Kitt Peak National Observatory (KPNO). The FOE is a fiber fed prism cross-dispersed echelle medium resolution spectrograph and is described in more detail in Ramsey & Huenemoerder (1986). It was designed specifically to obtain in a single exposure a wide spectral range encompassing all the visible chromospheric activity sensitive features. Typical data and performance of the FOE for the different observing runs are discussed in Ramsey et al. (1987); Huenemoerder, Buzasi, & Ramsey (1989); Newmark et al. (1990); Hall et al. (1990); Buzasi, Huenemoerder, & Ramsey (1991); Hall & Ramsey (1992); Welty & Ramsey (1995); and Welty (1995).
In Table 1 we give a summary of observations. For each observing run we list the date, the CCD detector used, the number of echelle orders included, the wavelength range covered ($`\lambda `$<sub>i</sub>-$`\lambda `$<sub>f</sub>) and the range of reciprocal dispersion achieved (Å/pixel) from the first to the last echelle orders. The Å/pixel value for each order can be found in the header of the spectra. The spectral resolution, determined by the FWHM of the arc comparison lines, ranges from 2.0 to 2.2 pixels. The signal to noise ratio is larger than 100 in all cases. Table 2 gives for each observing run the spectral lines of interest in each echelle order.
The spectra have been extracted using the standard reduction procedures in the IRAF package (bias subtraction, flat-field division, and optimal extraction of the spectra). IRAF is distributed by the National Optical Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation. The wavelength calibration was obtained from concurrent spectra of a Th-Ar hollow cathode lamp. Finally, the spectra have been normalized by a polynomial fit to the observed continuum.
## 3 The library
As in Papers I and II, the stars included in the library have been selected as stars with low levels of chromospheric activity, that is to say, stars that do not present any evidence of emission in the core of Ca ii H & K lines in our spectra (Montes et al. 1995c, 1996a), stars with the lower Ca ii H & K spectrophotometric index S (Duncan et al. 1991; Baliunas et al. 1995), or stars known to be inactive and slowly rotating stars from other sources (see Strassmeier et al. 1990; Strassmeier & Fekel 1990; Hall & Ramsey 1992).
Table 3 presents information about the observed stars. In this table we give the HD, HR and GJ numbers, name, spectral type and luminosity class (T<sub>sp</sub>), from the Bright Star Catalogue (Hoffleit & Jaschek 1982; Hoffleit & Warren 1991), the Catalogue of Nearby Stars (Gliese & Jahreiss 1991), and Keenan & McNeil (1989). The exception is some of the M dwarfs for which we list the more recent spectral type determination given by Henry, Kirkpatrick, & Simons (1994). In column (6) MK indicates if the star is a Morgan and Keenan (MK) Standard Star from García (1989) and Keenan & McNeil (1989). MK\* indicates if the star is included in the list of Anchor Points for the MK System compiled by Garrison (1994). Column (7) gives the metallicity \[Fe/H\] from Taylor (1994; 1995) or Cayrel de Strobel (1992; 1997) and column (8) the rotational period (P<sub>rot</sub>) and v sini from Donahue (1993), Baliunas et al. (1995), Fekel (1997), and Delfosse et al. (1998). We also give, in column (9), the Ca ii H & K spectrophotometric index S from Baliunas et al. (1995) and Duncan et al. (1991). In column (10) we list information about the observing run in which each star has been observed, using a code given in the first column of Table 1; the number between brackets gives the number of spectra available. The last two columns indicate if the star was also included in Papers I and II.
Representative spectra (from F to M, dwarf and giant stars) in different spectral regions are plotted in figures (1 to 4) in order to show the behaviour of the more remarkable spectroscopic features with the spectral type and luminosity class. In order of increasing wavelength we have plotted the following line regions: H$`\beta `$ (Fig. 1), Na i D<sub>1</sub>, D<sub>2</sub>, and He i D<sub>3</sub> (Fig. 2), H$`\alpha `$ (Fig. 3), and Ca ii IRT $`\lambda `$8498, 8542 (Fig. 4). In each figure we have plotted main sequence stars (luminosity class V) in the left panel, and giant stars (III) in the right panel.
A total of 130 stars are included in this library. Many of them have been observed in several observing runs, and in some cases on several nights during the same observing run, for a total of 345 spectra. Using these spectra as well as those of Papers I and II, a study of possible short and long term spectroscopic variability of some of the multiply observed stars is possible.
A description of the spectral lines most widely used as optical and near-infrared indicators of chromospheric activity, as well as other interesting spectral lines and molecular bands present in the spectral range covered by the spectra can be found in Papers I and II and references therein.
As an illustration of the use of these spectra and those of Papers I and II we intend to analyze temperature sensitive lines in order to improve the existing line-depth ratio temperature calibrations (Gray & Johanson 1991, Gray 1994) and spectral-class/temperature classifications (Strassmeier & Fekel 1990), as well as the determination of fundamental atmospheric parameters T<sub>eff</sub>, $`\mathrm{log}g`$, \[Fe/H\] (Katz et al. 1998 and Soubiran et al. 1998). This will be the subject of forthcoming papers.
In order to enable other investigators to make use of the spectra in this library for their own purposes, all the final reduced (flattened and wavelength calibrated) multidimensional spectra containing all the echelle orders of the stars listed in Table 3 are available at the CDS in Strasbourg, France, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5). They are also available via the World Wide Web at:
http://www.ucm.es/info/Astrof/fgkmsl/FOEfgkmsl.html.
The data are in FITS format with pertinent header information included for each image. In order to further facilitate the use of this library one dimensional normalized and wavelength calibrated spectra, for the orders containing the more remarkable spectroscopic features, are also available as separate FITS format files.
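As an example of how one of the one-dimensional FITS spectra might be read, a short sketch is given below; the file name is a placeholder, and the header keywords assume the usual linear-dispersion convention (CRVAL1, CDELT1, CRPIX1), which should be checked against the actual headers of these files.

```python
import numpy as np
from astropy.io import fits

# Placeholder file name; the spectra in this library use their own naming scheme.
with fits.open("hd000000_halpha.fits") as hdul:
    hdr, flux = hdul[0].header, hdul[0].data
    pix = np.arange(flux.size)
    # Linear wavelength solution assumed from the standard FITS keywords
    wave = hdr["CRVAL1"] + hdr["CDELT1"] * (pix + 1 - hdr.get("CRPIX1", 1.0))
```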
In addition this library as well as the libraries presented in Papers I and II will be included in the Virtual Observatory (see http://herbie.ucolick.org/vo/). This is a project to establish a new spectroscopic database which will contain digitized spectra of spectroscopic plates as well as spectra observed digitally at different observatories. Virtual Observatory is an International Astronomical Union (IAU) initiative through its Working Group for Spectroscopic Data Archives.
This research has made use of the SIMBAD data base, operated at CDS, Strasbourg, France. This work has been supported by the Universidad Complutense de Madrid and the Spanish Dirección General de Enseñanza Superior e Investigación Científica (DGESIC) under grant PB97-0259 and by National Science Foundation (NSF) grant AST 92-18008. We also acknowledge, with gratitude, KPNO for supporting the FOE presence from 1987 until 1996.
|
no-problem/9901/hep-ph9901356.html
|
ar5iv
|
text
|
# Irreversibility, steady state, and non-equilibrium physics in relativistic heavy ion collisions
## Abstract
Heavy ion collisions at ultrarelativistic energies offer the opportunity to study the irreversibility of multiparticle processes. Together with the many-body decays of resonances, the multiparticle processes cause the system to evolve according to Prigogine’s steady states rather than towards statistical equilibrium. These results are general and can be easily checked by any microscopic string-, transport-, or cascade model for heavy ion collisions. The absence of pure equilibrium states sheds light on the difficulties of thermal models in describing the yields and spectra of hadrons, especially mesons, in heavy ion collisions at bombarding energies above 10 GeV/nucleon.
The hypothesis that local equilibrium (LE) is attained by the system of two heavy ions colliding at ultra-relativistic energies is a basic assumption of macroscopic thermal- and hydrodynamical models of heavy ion collisions. The idea was pushed by Fermi and Landau almost 50 years ago for hadron-hadron collisions. Despite the long history of theoretical and experimental attempts there is no rigorous proof of LE yet. The present paper shows that the irreversibility of multiparticle processes, proceeding e.g. via string decays, causes these systems to evolve according to Prigogine’s steady state solution, rather than towards statistical equilibrium.
Using Bogolyubov’s hierarchy of relaxation times in non-equilibrium statistical mechanics one usually considers the following scheme: Suppose that in the initial stage the system is far from equilibrium. To describe it one has to introduce a set of various many-particle distribution functions rapidly varying in time. Then, due to interactions between the particles, correlations of the distribution functions occur within very short time intervals which are typically on the order of the collision time.
This is the kinetic stage – all many-particle distribution functions may be derived from the single one-particle distribution function. For times significantly larger than the collision time the number of parameters characterizing the system is reduced further to very few average values, namely the number of particles, their energy and velocity, i.e. to the moments of the distribution function. At this stage the system behavior is governed by hydrodynamics.
Unlike in non-relativistic mechanics, in relativistic heavy ion collisions the relaxation picture is more complex because of multiparticle processes. Here the number of particles and their composition are not conserved. Newly produced particles are not thermalized (even if they appear to be, see ) and this circumstance causes a delay in achieving equilibration. The equilibration time may appear too long as compared to the typical lifetime of the expanding system. Due to the lack of a rigorous first-principles theory of nuclear reactions at relativistic energies, the approach to LE is investigated mainly by virtue of dynamical calculations provided by microscopic semiclassical Monte Carlo models which have been intensively studied during the last 15 years.
The analysis of the space time evolution picture obtained in these models reveals that the whole system of colliding nuclei never attains a global equilibrium state after the initial non-equilibrium stage. Still, there is, in principle, a possibility of the occurrence of local equilibrium (e.g. in the central cell), because the approach to LE does not depend on the assumptions of the presence of a heat reservoir, of Gibbs ensembles, etc.
Our study has been inspired by the finding that quasi-stable states are present in partonic and hadronic matter, as observed independently in dynamical simulations . On the partonic level an analysis of the thermalization of partons has been performed by the late Klaus Kinder-Geiger . Equilibration of hadronic matter has been studied, e.g., in the Quantum Molecular Dynamics models . These simulations have shown that at high energies neither the global system nor its central part seem to reach the state of chemical equilibrium (in the sense of statistical mechanics) . This observed feature is not solely restricted to microscopic models. To describe, for instance, the experimental data on yields of strange particles in heavy ion collisions at 200 AGeV or hadron multiplicities at 158 AGeV the standard statistical model of the ideal hadron gas has been modified to invoke the hypothesis of chemical non-equilibrium as well.
Does this simply imply that the hadronization time is shorter than the equilibration time? - Not necessarily! In the present paper we show that dissipative processes, such as multiparticle production via strings and many-body ($`N3`$) decays of resonances, dominating at high energies, can lead to the creation of a stationary state called steady-state. This steady-state does not coincide with a pure “conventional” equilibrium state, as assumed in the statistical models.
Consider first the necessary and sufficient criteria of LE in the central zone of nuclear reactions, which is usually analyzed in microscopic calculations:
Necessary conditions: (i) absence of significant flow effect in the central cell; isotropy of the velocity distributions, and (ii) isotropy of the diagonal components of the pressure tensor,
$$P_x=P_y=P_z=\frac{1}{3V}\underset{i}{\sum }\frac{p_{i\{x,y,z\}}^2}{(m_i^2+p_i^2)^{1/2}}.$$
(1)
Here $`V`$ is the volume of the cell and $`m_i,p_i`$ are the mass of the $`i`$-th hadron and its momentum, respectively.
Sufficient conditions: (iii) thermal equilibration which manifests itself in the time independence of the hadronic spectra after a certain period, and (iv) chemical equilibration, i.e. the time independence of different hadronic yields.
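A direct numerical check of condition (ii) on a list of hadrons in a cell can be sketched as follows (eq. (1) is applied literally; the particle content below is a synthetic toy sample, not the output of any transport model):

```python
import numpy as np

def pressure_components(masses, momenta, volume):
    """Diagonal pressure-tensor components of eq. (1); momenta has shape (n, 3)."""
    energies = np.sqrt(masses**2 + np.sum(momenta**2, axis=1))
    return np.sum(momenta**2 / energies[:, None], axis=0) / (3.0 * volume)

# Toy sample in GeV units for a 125 fm^3 cell -- purely illustrative numbers
rng = np.random.default_rng(0)
m = np.full(200, 0.14)                       # pion masses
p = rng.normal(0.0, 0.25, size=(200, 3))     # isotropic thermal-like momenta
Px, Py, Pz = pressure_components(m, p, 125.0)
print(Px, Py, Pz)                            # nearly equal -> condition (ii) holds
```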
The necessary conditions look quite simple and evident: Local equilibrium may not be reached in symmetric nuclear collisions earlier than for the time $`t^{pass}=2R/(\gamma _{cm}v_{cm})`$, during which noninteracting Lorentz contracted nuclei of radius $`R/\gamma _{cm}`$, which stream freely with the velocity $`v_{cm}`$, would have passed through each other. Apparently, early in the collision this is the origin of a substantial initial longitudinal collective flow of hadrons in the cell, which distorts the equilibration picture at the very beginning of the reaction. After $`t^{pass}`$ this non-equilibrium component rapidly drops . In it has been reported that a stage of kinetic equilibrium is attained in heavy ion collisions in a central cell of volume $`V=5\times 5\times 5=125`$ fm<sup>3</sup> at $`t\approx 10`$ fm/$`c`$, irrespective of the energy of the colliding nuclei from 10.7 AGeV (AGS) to 160 AGeV (SPS). Isotropy of both the pressure and the velocity distributions of hadrons characterizes, without the sufficient conditions (iii) and (iv), however, a pre-equilibrium stage of the reaction rather than an equilibrium one! In a fully equilibrated system conditions (iii)-(iv) must be satisfied as well.
This is the crucial point in our discussion: The statistical thermodynamics of many-particle systems determines the thermal equilibrium as the state with maximum entropy. Once thermal equilibrium is attained, the velocity distributions of different particles must be isotropic. If the total number of particles is conserved, kinetic equilibrium is equivalent to thermal equilibrium . But: this equivalence is broken, both in chemical reactions and in high energy physics.
Indeed, if the mixture of reacting substances is in the “true” equilibrium, then the rates of each chemical reaction must be the same for the direct and inverse processes . However, in a cyclic process, in which the concentrations of the reacting substances are time independent, but the partial reaction rates $`\omega _j=\omega _j^{\mathrm{dir}}-\omega _j^{\mathrm{inv}}`$ are non-zero, the system is in a stationary state, which may be far from the equilibrium .
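A toy version of such a cyclic stationary state, not taken from the chemical literature cited here, is an idealized three-species cycle $`A\rightarrow B\rightarrow C\rightarrow A`$ whose forward rate constants are much larger than the backward ones: the concentrations settle to constant values while every partial rate $`\omega _j`$ stays non-zero, i.e. detailed balance in the statistical sense is violated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Idealized driven cycle A -> B -> C -> A; rate constants chosen to violate
# detailed balance (product of forward rates != product of backward rates).
kf, kb = np.array([1.0, 2.0, 0.5]), np.array([0.01, 0.02, 0.005])

def rhs(t, c):
    a, b_, c_ = c
    w = kf * np.array([a, b_, c_]) - kb * np.array([b_, c_, a])  # net rates A->B, B->C, C->A
    return [w[2] - w[0], w[0] - w[1], w[1] - w[2]]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.0], rtol=1e-9)
a, b_, c_ = sol.y[:, -1]
w = kf * np.array([a, b_, c_]) - kb * np.array([b_, c_, a])
print("stationary concentrations:", a, b_, c_)
print("non-zero cyclic rates w_j:", w)   # equal and non-zero: a steady, not equilibrium, state
```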
Consider now an ideal thermostat which contains a few thousand protons with an energy $`E\gg m_pc^2`$, where $`m_p`$ is the mass of the proton and $`c`$ is the velocity of light. For the sake of simplicity we exclude the (slow) weak processes from this scenario, focusing on strong interactions only. Then, even if the initial momentum distribution of the protons is Maxwellian, thermal equilibrium (in the sense of a state of maximum entropy) is not reached yet. Many new particles, mostly pions, will be produced as a result of initial proton-proton and, later, proton-pion, etc. collisions. When the system finally reaches equilibrium, it will consist of a large number of pions (and heavier mesons) with an admixture of baryons (and antibaryons) whose net number is conserved. The final temperature must, of course, be much lower than the initial one – kinetic energy has been transformed into mass (of produced particles). But: will the particle abundances be the same as those given by the statistical mechanics of an ideal hadron gas? In other words, will the final state be the state of thermal and chemical equilibrium, in which any direct and inverse hadronic processes will be taking place on average at the same rate?
This problem is closely related to the principle of detailed balance and to the irreversibility of multiparticle processes. To avoid ambiguities, we would like to stress that the definition of detailed balance in quantum mechanics (DB<sup>QM</sup>) does not coincide with the definition of detailed balance in statistical physics (DB<sup>SP</sup>) and chemistry. Detailed balance in the sense of quantum mechanical invariance under time reversal implies that the transition amplitudes of the direct and the inverse processes must be of the same magnitude,
$$|M_{a\rightarrow b}|=|M_{b\rightarrow a}|.$$
(2)
In statistical physics and chemistry, the principle of detailed balance requires that (in thermostatic equilibrium of a system) every separate reaction between its components is in itself in equilibrium, i.e. the rates of the direct and inverse processes are the same. To clarify the difference between DB<sup>QM</sup> and DB<sup>SP</sup>, consider the process of multiparticle production, e.g. in string excitation:
$$a+b\rightarrow x_1+x_2+\mathrm{\dots }+x_n,n\gg 1$$
(3)
According to Fermi’s Golden Rule, the probability of $`n`$ particle production reads
$$\mathrm{d}R_n=\frac{2\pi }{\hbar }|M_{a+b\rightarrow n}|^2\underset{i=1}{\overset{n}{\prod }}\mathrm{d}^4p_i\delta ^4\left(p-\underset{i=1}{\overset{n}{\sum }}p_i\right),$$
(4)
where $`p`$ is the total four-momentum, $`p_i`$ is the four-momentum of the $`i`$-th particle, and $`|M|`$ is the amplitude of the process. The last factor is the space factor, which is fully determined by the kinematics of the reaction . Although $`|M_{a+b\rightarrow n}|=|M_{n\rightarrow a+b}|`$ and, therefore, DB<sup>QM</sup> is satisfied, the rates of the direct and the inverse processes, $`R_{a+b\rightarrow n}`$ and $`R_{n\rightarrow a+b}`$, are different, due to different space factors. This means that DB<sup>SP</sup> is not fulfilled. Note that the principle of detailed balance in particle physics has been verified for the reactions
$`a+b`$ $`\leftrightarrow `$ $`c+d,`$ (5)
$`a+b`$ $`\leftrightarrow `$ $`c,`$ (6)
where $`a,b,c,d`$ denote hadrons and their resonances. These processes are time reversible, because the space factors (or the densities of states) of the initial and final states are essentially the same.
The space factors are rapidly varying functions of $`n`$. Therefore, the matrix elements $`|M|`$ may be replaced by average values, and a situation typical for statistical mechanics is obtained: the probability of a state is proportional to the volume of the accessible phase space. In other words, multiparticle processes are irreversible in time, because they increase the local entropy of the system. Consequently, the processes of recombination of many hadrons into one string, and two strings colliding to form a couple of ground state hadrons (Fig. 1) of high energy are strongly suppressed, because they violate (locally) the Boltzmann $`H`$-theorem.
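How steeply the space factor varies with $`n`$ can be illustrated with the standard closed-form volume of massless $`n`$-body invariant phase space; the massless approximation and this particular normalization are assumptions made here only for illustration and are not taken from the text.

```python
from math import factorial, pi

# Massless n-body invariant phase-space volume (standard closed form):
#   Phi_n(s) = (pi/2)^(n-1) * s^(n-2) / ((n-1)! (n-2)!)
def phi_n(s, n):
    return (pi / 2.0) ** (n - 1) * s ** (n - 2) / (factorial(n - 1) * factorial(n - 2))

s = 25.0   # GeV^2, an arbitrary illustrative invariant mass squared
for n in range(2, 9):
    print(n, phi_n(s, n))   # the volume changes by orders of magnitude with n
```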
The irreversibility of multiparticle processes (e.g., strings which provide a steady source of new particles) first drives and then keeps the hadronic system out of the total chemical equilibrium, i.e. out of the full detailed balance in the sense of statistical mechanics. On the other hand, the DB<sup>SP</sup> principle is the basic assumption of the statistical model (SM) of the ideal hadron gas and variations like the statistical bootstrap model (SBM) . Therefore, simply extracting the energy density $`\epsilon `$, the baryon density $`\rho _\mathrm{B}`$ and the strangeness density $`\rho _\mathrm{S}`$ of the system at a given time and inserting these values as an input into the statistical model will give misleading results until all multiparticle processes in the system have ceased.
Still, the conditions (iii)-(iv) may be fulfilled even if full detailed balance has not yet been reached. Such states, which may be stable or not, but are out of local equilibrium, have been dubbed steady states of the system. To decide whether or not a steady state is attained in a microscopic model of heavy-ion collisions, the system must be compared with the quasi-equilibrated (in the sense of the criteria (i)-(iv)) infinite matter, as simulated within the same microscopic model. It was shown that the yields and energy spectra of hadrons in a central cell are – after $`t\approx 10`$ fm/$`c`$ – very close to the values calculated for infinite hadron matter with the same $`\epsilon `$, $`\rho _\mathrm{B}`$ and $`\rho _\mathrm{S}`$. This is a strong indication of the occurrence of a steady state.
In conclusion, we have discussed the relaxation of hadronic matter produced in the central zone of heavy ion collisions in the energy range spanning from AGS to RHIC. Apparently, dissipative $`N`$-body ($`N\gg 1`$) decays of strings and resonances, i.e. multiparticle processes, are irreversible in time: the probability of $`N`$ particles
1) to collide simultaneously in a small volume and
2) to transform into a final state, which consists only of two particles of higher energies,
drops extremely rapidly with rising $`N`$.
Therefore, these processes drive the system towards a steady state. Due to the broken symmetry between the rates of direct and inverse processes, this steady state does not coincide with a pure equilibrium state.
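A toy Monte Carlo makes the suppression quantitative. With arbitrary illustrative values for the density and the interaction radius (nothing here is tuned to a realistic fireball), the sketch below estimates the fraction of randomly chosen $`N`$-tuples of particles in a uniform box that happen to sit simultaneously within one interaction volume; the fraction falls by orders of magnitude for every unit increase of $`N`$.

```python
import numpy as np

rng = np.random.default_rng(1)

n_part = 400          # particles in the box (illustrative)
box    = 10.0         # box side, fm
r_int  = 1.0          # all N particles must lie within this radius of their centroid
trials = 50_000

pos = rng.uniform(0.0, box, size=(n_part, 3))

for n_body in (2, 3, 4, 5):
    hits = 0
    for _ in range(trials):
        pts = pos[rng.choice(n_part, size=n_body, replace=False)]
        if np.linalg.norm(pts - pts.mean(axis=0), axis=1).max() < r_int:
            hits += 1
    # the hit fraction drops roughly by the ratio of interaction to box volume
    # for each additional particle, i.e. by orders of magnitude per unit of N
    print(f"N = {n_body}: fraction of simultaneous N-tuples = {hits / trials:.2e}")
```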
The conditions (iii)-(iv), often applied for the analysis of local equilibrium, are generally weaker than the requirement of full local equilibrium usually imposed in the macroscopic models. At low energy densities, when multiparticle processes are rare, the steady state coincides practically with the equilibrium one. At higher energy densities, the difference between the states becomes more and more significant.
One characteristic feature of the steady state would be a strong enhancement of pions, accompanied by a suppression of (many-body decaying) resonances. This is due to the absence of an effective feeding mechanism. This feature of the steady state can explain why conventional thermal models considerably underestimate the yields of pions at energies $`E>10`$ AGeV. These results are typical for a large family of microscopic (cascade-, transport-, string-) models, which describe hadronic and nuclear interactions without invoking the hypothesis of quark-gluon plasma (QGP) creation.
Non-equilibrium thermodynamics of irreversible processes is finally entering high energy physics, where the conservation of mass and particle number, conventional in statistical physics, is obviously violated. The number of possible reaction channels is three orders of magnitude higher than in simple chemical reactions (see, however, the role of the equilibrium concept in biochemical/biophysical processes). Therefore, it is a hopeless task to solve the rate equations analytically. On the other hand, microscopic models of hadronic and nuclear collisions provide a very useful tool to probe these fundamental features of nature at very small space and time scales. The non-equilibrium aspects of heavy ion collisions are interesting and require further investigation.
Acknowledgements: Discussions with M. Belkacem, M. Gorenstein and L. Satarov are thankfully acknowledged. L.B. and E.Z. are grateful to the Institute for Theoretical Physics, Goethe University, Frankfurt am Main, for the warm and kind hospitality. This work was supported by the Graduiertenkolleg für Theoretische und Experimentelle Schwerionenphysik, BMBF, GSI, DFG, and the A. v. Humboldt–Stiftung.
# Where are the missing galactic satellites?
## 1. Introduction
Satellites of galaxies are important probes of the dynamics and masses of galaxies. Currently, analysis of satellite dynamics is one of the best methods of estimating the masses within large radii of our Galaxy and of the Local Group (e.g., Einasto & Lynden-Bell 1982; Lynden-Bell et al. 1983; Zaritsky et al. 1989; Fich & Tremaine 1991), as well as the masses of other galaxies (Zaritsky & White 1994; Zaritsky et al. 1997). Although the satellites of the Milky Way and Andromeda galaxy have been studied for a long period of time, their number is still uncertain. More and more satellites are being discovered (Irwin et al. 1990; Whiting et al. 1997; Armandroff et al. 1998; Karachentseva & Karachentsev 1998) with a wide range of properties; some of them are relatively large and luminous and have appreciable star formation rates (e.g., M33 and the Large Magellanic Cloud; LMC). Except for the peculiar case of IC10, which exhibits a high star formation rate ($`0.7M_{\odot }\mathrm{yr}^{-1}`$; Mateo 1998), most of the satellites are dwarf spheroidals and dwarf ellipticals with signs of only mild star formation of $`10^{-3}M_{\odot }\mathrm{yr}^{-1}`$. The star formation history of the satellites shows remarkable diversity: almost every galaxy is a special case (Grebel 1998; Mateo 1998). This diversity makes it very difficult to come up with a simple general model for the formation of satellites in the Local Group. Because of the generally low star formation rates, it is not unexpected that the metallicities of the satellites are low: from $`10^{-2}`$ for Draco and And III to $`10^{-1}`$ for NGC 205 and Pegasus (Mateo 1998). There are indications that properties of the satellites correlate with their distance to the Milky Way (MW) or Andromeda, with dwarf spheroidals and dwarf ellipticals being closer to the central galaxy (Grebel 1997). Overall, about 40 satellites in the Local Group have been found.
Formation and evolution of galaxy satellites is still an open problem. According to the hierarchical scenario, small dark matter (DM) halos should on average collapse earlier than larger ones. To some degree, this is supported by observations of rotation curves of dark-matter dominated dwarfs and low-surface-brightness galaxies. The curves indicate that the smaller the maximum circular velocity, the higher the central density of these galaxies. This is expected from the hierarchical models in which the smaller galaxies collapse earlier when the density of the Universe was higher (Kravtsov et al. 1998; Kormendy & Freeman 1998). Thus, it is likely that the satellites of the MW galaxy were formed before the main body of the MW was assembled. Some of the satellites may have survived the very process of the MW formation, whereas others may have been accreted by the MW or by the Local Group at later stages. Indeed this sequence forms the basis of the currently popular semi-analytical models of galaxy formation (e.g., Somerville & Primack 1998, and references therein).
There have been a number of efforts to use the Local Group as a cosmological probe. Peebles et al. (1989) modeled formation of the Local Group by gravitational accretion of matter onto two seed masses. Kroeker & Carlberg (1991) found pairs of “galaxies” in cosmological simulations and used them to estimate the accuracy of traditional mass estimates. Governato et al. (1997) studied the velocity field around Local Group candidates in different cosmological models and Blitz et al.(1998) simulated a group of galaxies and compared their results with the observations of the high-velocity clouds in the Local Group.
Nevertheless, despite significant effort, theoretical predictions of the abundance and properties of the satellites are far from complete. One of the difficulties is the survival of satellites inside halos of large galaxies. This numerically challenging problem requires very high-resolution simulations in a cosmological context and has been addressed in different ways. In the classical approach (e.g., Lin & Lynden-Bell 1982; Kuhn 1993; Johnston et al. 1995), one assumes a realistic potential for the MW, a density profile for the satellites (usually an isothermal model with a central core), and numerically follows a satellite as it orbits around the host galaxy. This approach provides many valuable insights into the physical processes operating on the satellites and alleviates some of the numerical problems. It lacks, however, one important feature: connection with the cosmological background. The host galaxy is implicitly assumed to be stable over many billions of years, which may not be realistic for a typical galaxy formed hierarchically. Moreover, the assumed isothermal density profile of the satellite is different from profiles of typical dark matter halos formed in hierarchical models (Navarro, Frenk & White 1997). Last but not least, the abundances of the satellites can only be predicted if the formation of the satellites and of the parent galaxy is modelled self-consistently. Thus, more realistic cosmological simulations are necessary.
Unfortunately, until recently numerical limitations prevented the usage of cosmological simulations to address satellite dynamics. Namely, dissipationless simulations seemed to indicate that DM halos do not survive once they fall into larger halos (e.g., White 1976; van Kampen 1995; Summers et al. 1995). It appears, however, that the premature destruction of the DM satellites inside the virial radius of larger halos was largely due to numerical effects (Moore, Katz, Lake 1996; Klypin et al. 1997 (KGKK)). Indeed, recent high-resolution simulations show that hundreds of galaxy-size DM halos do survive in clusters of galaxies (KGKK; Ghigna et al. 1997; Colín et al. 1998). Encouraged by this success, we have undertaken a study of the survival of satellites in galaxy-size halos.
Dynamically, galactic halos are different from cluster-size halos (mass $`\sim 10^{14}h^{-1}\mathrm{M}_{\odot }`$). Clusters of galaxies are relatively young systems in which most of the satellite halos have had time to make only a few orbits. Galaxies are on average significantly older, enabling at least some of their satellites to orbit for many dynamical times. This increases the likelihood of the satellite being destroyed either by numerical effects of the simulation or by the real processes of dynamical friction and tidal stripping. The destruction of the satellites is, of course, counteracted by accretion of new satellites in an ongoing process of galaxy formation. It is clear, therefore, that to predict the abundances and properties of galactic satellites, one needs to model self-consistently both the orbital dynamics of the satellites and the formation process of the parent halo in a cosmological setting. In this paper we present results from a study of galactic satellite abundances in high-resolution simulations of two popular variants of the Cold Dark Matter (CDM) models. As will be described below, the dissipationless simulations used in our study are large enough to encompass a cosmologically significant volume and, at the same time, have sufficient resolution to make the numerical effects negligible.
The paper is organized as follows. In §2 we present the data that we use to estimate the observed velocity function of satellites of our Galaxy and M31. Cosmological simulations are presented and discussed in §3. We compare the predicted and observed velocity functions in §4 to show that the models predict considerably more lower mass satellites than is actually observed in the Local Group. In §5 and 6 we discuss possible interpretation and implications of our results and summarize our conclusions.
## 2. Satellites in The Local Group
There are about 40 known galaxies in the Local Group (Mateo 1998). Most of them are dwarf galaxies with absolute magnitudes of $`M_V\approx -10`$ to $`-15`$. While more and more galaxies are being discovered, most of the new galaxies are very small and faint, making it seem unlikely that too many larger satellites have been missed. Therefore, we have decided to consider only relatively massive satellites with estimated rotational velocity or three-dimensional velocity dispersion of stars larger than 10 km/s. In order to simplify the situation even further, we estimate the number of satellites per central galaxy. There are a number of arguments why this is reasonable. First, it makes comparison with cosmological models much more straightforward. This is justified to some degree by the fact that the satellites in the Local Group cluster around either the MW or M31 and there are only a few very remote ones of unclear association with a central galaxy. We also believe that the estimate of the satellite abundance per galaxy is more accurate because it is relatively straightforward to find the volume of the sample, which would be more difficult if we were to deal with the Local Group as a whole (one of the problems would be the choice of the outer boundary of the sample volume).
Using published results (Mateo 1998), we have compiled a list of satellites of the Milky Way and of M31 with estimated circular velocities above the threshold of $`10\mathrm{km}/\mathrm{s}`$. In our estimate of abundances, we have not attempted to decide whether a satellite is bound to its central galaxy or not. Satellites have been simply counted if they lie within a certain radius from the center of their parent galaxy. We have chosen two radii to make the counts. The counts of DM satellites were made for the same radii. The radii were chosen rather arbitrarily to be $`200h^{-1}\mathrm{kpc}`$ and $`400h^{-1}\mathrm{kpc}`$. For a Hubble constant of $`h=0.7`$ ($`H_0=100h`$ km/s/Mpc), which was assumed for our most realistic cosmological model and which is consistent with current observational results, the radii are 286 kpc and 571 kpc. The smaller radius is close to a typical virial radius of a Milky Way-size halo in our simulations. The larger radius allows us to probe larger volumes (and, thus, gives better statistics) both in simulations and in observations. Unfortunately, observational data may become less complete for this radius.
Since we cannot estimate the luminosities of galaxies associated with DM satellites in dissipationless simulations, we have chosen circular velocity $`V_{\mathrm{circ}}`$ to characterize both the dark halos and the satellite galaxies. The circular velocity can be estimated for galaxies (with some uncertainties) and for the DM halos. For spiral and irregular galaxies we used the rotational velocity, which is usually measured from 21 cm HI observations. For ellipticals and dwarf spheroidals we used the observed line-of-sight velocity dispersion of stars, which was multiplied by $`\sqrt{3}`$ to give an estimate of $`V_{\mathrm{circ}}`$. Using our numerical simulations we confirmed that this gives a reasonably accurate estimate of $`V_{\mathrm{circ}}`$, with an error less than $`10\%-20\%`$. Table 1 lists the number of satellites with $`V_{\mathrm{circ}}`$ larger than a given value (first column) for the Milky Way galaxy (second column) and M31 (third column). The fourth column gives the average number of satellites and the fifth column lists the names of the satellites in a given velocity bin. Figures 4 and 5, discussed in detail below, present the cumulative circular velocity distribution of the observed satellites in the MW and M31 within 286 kpc and 571 kpc radius from the central galaxies.
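A minimal sketch of this bookkeeping is given below. The satellite values are hypothetical placeholders rather than the entries of Table 1, and the $`\sqrt{3}`$ factor is just the isotropic conversion described above.

```python
import numpy as np

def vcirc_from_sigma(sigma_los):
    """V_circ proxy for pressure-supported dwarfs: sqrt(3) x line-of-sight dispersion (km/s)."""
    return np.sqrt(3.0) * sigma_los

def cumulative_counts(vcirc_values, thresholds):
    """Number of satellites with V_circ above each threshold (km/s)."""
    v = np.asarray(vcirc_values)
    return {t: int((v > t).sum()) for t in thresholds}

# hypothetical example: two dispersion-supported dwarfs and three rotators (km/s)
sats = [vcirc_from_sigma(7.0), vcirc_from_sigma(9.5), 30.0, 55.0, 70.0]
print(cumulative_counts(sats, thresholds=(10, 20, 50)))
```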
A few special cases should be mentioned. There are no measurements of velocity dispersion for AND I-III, and the other two satellites of M31, AND V and VI, do not have measured magnitudes. Given that they all seem to have the properties of a dwarf spheroidal, we think it is reasonable to expect that they have $`V_{\mathrm{circ}}`$ in the range 10-20 km/s. Details of recent measurements of different properties of these satellites of M31 can be found in Armandroff et al. (1998) and Grebel (1998). We also included CAS dSph (Grebel & Guhathakurta, 1998) in our list with $`V_{\mathrm{circ}}`$ in the range $`(10-20)\text{km s}^{-1}`$. One satellite (AND II) can be formally included in both lists (MW and M31). It is 271 kpc from M31, but being at a distance of 525 kpc from the MW it should also be counted as an MW satellite. Since this is the only such case, we have decided to count it only once – as a satellite of M31.
## 3. Cosmological models and simulations
To estimate the satellite abundances expected in the hierarchical models, we have run simulations of two representative cosmologies. Parameters of the models and simulations are given in Table 2, where $`\mathrm{\Omega }_0`$ is the density parameter of the matter at $`z=0`$ and $`\sigma _8`$ is the rms of density fluctuations on the $`8h^{-1}\text{Mpc}`$ scale estimated from linear theory at the present time using the top-hat filter. Other parameters given in Table 2 specify the numerical simulations: the mass of a dark matter particle, $`m_{\mathrm{particle}}`$, which defines the mass resolution; the number of time-steps at the lowest/highest levels of resolution; the size of the simulation box; and the number of dark matter particles. The numbers quoted for the resolution refer to the size of the smallest resolution elements (cells) in the simulations.
The simulations have been performed using the Adaptive Refinement Tree (ART) $`N`$-body code (Kravtsov, Klypin & Khokhlov 1997). The ART code reaches high force resolution by refining the mesh in all high-density regions with an automated refinement algorithm. The ΛCDM simulation used here was used in Kravtsov et al. (1998) and we refer the reader to that paper for details and tests. Additional tests and comparisons with a more conventional AP<sup>3</sup>M code will be presented in Knebe et al. (1999). The CDM simulation differs from the ΛCDM simulations only in the cosmological parameters and size of the simulation box. Our intent was to use the much more realistic ΛCDM model for comparisons with observations, and to use the CDM model to test whether the predictions depend sensitively on cosmology and to somewhat broaden the dynamical range of the simulations. Jumping ahead, we note here that results of the CDM simulation are close to those of the ΛCDM simulation as far as the circular velocity function of satellites is concerned. This indicates that we are dealing with general prediction of hierarchical scenarios, not particular details of the ΛCDM model. Nevertheless, we do expect that some details of statistics and dynamics of the satellites may depend on parameters of the cosmological models.
The size of the simulation box is defined by the requirement of high mass resolution and by the total number of particles used in our simulations. DM halos can be identified in simulations if they have more than $`20`$ particles (KGKK). Small satellites of the MW and Andromeda have masses of $`(1-5)\times 10^8M_{\odot }`$. Thus, the mass of a particle in the simulation should be quite small: $`\sim 10^7M_{\odot }`$. Therefore, the number of particles in our simulations ($`128^3`$) dictates a box size of only a few megaparsecs across. This puts significant constraints on our results. The number of massive halos, for example, is quite small. In the CDM simulation we have only three halos with circular velocity larger than $`140\text{km s}^{-1}`$. The number of massive halos in the ΛCDM simulation is higher (eight).
The important issue for our study is the reliable identification of satellite halos. The problems associated with halo identification within high-density regions are discussed in KGKK. In this study we use a halo finding algorithm called Bound Density Maxima (BDM; see KGKK and Colín et al. 1998). The source code and description of the version of the BDM algorithm used here can be found in Klypin & Holtzman (1997). The main idea of the BDM algorithm is to find positions of local maxima in the density field smoothed at a certain scale and to apply physically motivated criteria to test whether the identified site corresponds to a gravitationally bound halo. The algorithm then computes various properties and profiles for each of the bound halos and constructs a uniform halo catalog ready to be used for analysis. In this study we will use the maximum circular velocity as the halo’s defining property. This allows us to avoid the problem of ambiguous mass assignment (see KGKK for discussion) and makes it easier to compare the results to observations.
The density maxima are identified using a top-hat filter with radius $`r_s`$ (“search radius”). The search is performed starting from a large number of randomly placed positions (“seeds”) and proceeds by moving the center of mass within a sphere of radius $`r_s`$ iteratively until convergence. In order to make sure that we use a sufficiently large number of seeds, we used the position of every tenth particle as a seed. Therefore, the number of seeds by far exceeds the number of expected halos. The search radius $`r_s`$ also defines the minimum allowed distance between two halos. If the distance between the centers of any two halos is $`<2r_s`$, only one halo (the more massive of the two) is left in the catalog. A typical value for the search radius is $`(5-10)h^{-1}\mathrm{kpc}`$. We set a lower limit for the number of particles inside the search radius $`N(<r_s)`$: halos with $`N(<r_s)<6`$ are not included in the catalog. We also exclude halos which have fewer than 20 bound particles and exclude halos with circular velocity less than $`10\text{km s}^{-1}`$. Some halos may have significant substructure in their cores due, for example, to an incomplete merger. Such cases appear in the catalogs as multiple (2-3) halos with very similar properties (mass, velocity, radius) at small separations. Our strategy is to count these as a single halo. Specific criteria used to identify such cases are: (1) the distance between halo centers is $`\leq 30h^{-1}\mathrm{kpc}`$, (2) their relative velocity in units of the rms velocity of particles in the halos, $`\mathrm{\Delta }v/v`$, is less than 0.15, and (3) the difference in mass is less than a factor of 1.5. If all the criteria are satisfied, only the most massive halo is kept in the catalog.
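The sketch below is a deliberately simplified, non-optimized rendition of this search (plain NumPy/SciPy, no periodic boundaries, no gravitational unbinding, and the particle count inside $`r_s`$ used as a crude mass proxy). It is meant only to illustrate the iterate-to-the-center-of-mass and duplicate-removal steps, not to reproduce the actual BDM code.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_density_maxima(pos, r_s, n_seed=5000, n_min=6, max_iter=100, seed=0):
    """Toy BDM-style search: slide a sphere of radius r_s from each seed to the local
    center of mass until convergence, then keep one center per maximum
    (minimum separation 2*r_s, the center with more particles wins)."""
    rng = np.random.default_rng(seed)
    tree = cKDTree(pos)
    seeds = pos[rng.choice(len(pos), size=min(n_seed, len(pos)), replace=False)]

    centers, weights = [], []
    for c in seeds:
        for _ in range(max_iter):
            idx = tree.query_ball_point(c, r_s)
            if len(idx) < n_min:                        # too few particles inside r_s
                break
            new_c = pos[idx].mean(axis=0)
            if np.linalg.norm(new_c - c) < 1e-3 * r_s:  # converged on a density maximum
                centers.append(new_c)
                weights.append(len(idx))
                break
            c = new_c

    # duplicate removal: when two centers lie closer than 2*r_s, keep the "heavier" one
    order = np.argsort(weights)[::-1]
    kept = []
    for i in order:
        if all(np.linalg.norm(centers[i] - centers[j]) > 2.0 * r_s for j in kept):
            kept.append(i)
    return np.array([centers[i] for i in kept])

# usage on a fake particle set: one dense clump on top of a uniform background
rng = np.random.default_rng(1)
clump = rng.normal(loc=5.0, scale=0.05, size=(500, 3))
background = rng.uniform(0.0, 10.0, size=(2000, 3))
print(len(find_density_maxima(np.vstack([clump, background]), r_s=0.1)), "maxima found")
```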
The box size of the simulations clearly puts limitations on sizes and masses of halos. In a few megaparsec box, one does not find large groups or filaments. The mean density in the simulation boxes, however, is equal to the mean density of the Universe, and thus we expect our simulations to be representative of the field population of galaxies (galaxies not in the vicinity of massive clusters and groups). The Local Group and field galaxies are therefore our main targets. Nevertheless, even in the small boxes used in this paper, the number of halos is very substantial. We find 1000 – 2000 halos of different masses and circular velocities in each simulation. This number is large enough for a reliable statistical analysis.
## 4. Satellites: predictions and observations
Figure 1 presents the velocity distribution function, defined as the number of halos in a given circular velocity interval per unit volume, in two ΛCDM simulations. The smaller-box simulation is the one that we use in our further analysis. To estimate whether the halo velocity function is affected by the small-box size, we compare the small-box result with results from the larger, $`60h^{-1}\mathrm{Mpc}`$ box, simulation used in Colín et al. (1998). The latter followed the evolution of $`256^3`$ particles and had a mass resolution of $`1.1\times 10^9h^{-1}\mathrm{M}_{\odot }`$. In the small box, the total number of halos with $`V_{\mathrm{circ}}>10\text{km s}^{-1}`$ and $`V_{\mathrm{circ}}>20\text{km s}^{-1}`$ is 1716 (1066) for the lowest threshold of 20 bound particles. The numbers change slightly if a more stringent limit of 25 particles is assumed: 1556 (1052). In the overlapping range of circular velocities $`V_{\mathrm{circ}}=(100-200)\text{km s}^{-1}`$ the velocity function of the small box agrees very well with that of the large box. This shows that the lack of long waves in the small-box simulation has not affected the number of halos with $`V_{\mathrm{circ}}<200\text{km s}^{-1}`$.
In the range $`V_{\mathrm{circ}}=20-400\text{km s}^{-1}`$ the velocity function can be accurately approximated by a power law $`dN/dV_{\mathrm{circ}}\approx 3\times 10^4V_{\mathrm{circ}}^{-3.75}h^3\mathrm{Mpc}^{-3}(\text{km s}^{-1})^{-1}`$, motivated by the Press-Schechter (1974) approximation with the assumptions of $`M\propto v^3`$ and of a power-law power spectrum with a slope of $`n=-2.5`$. At higher circular velocities ($`V_{\mathrm{circ}}>300\text{km s}^{-1}`$) the fit overpredicts the number of halos because the above fit neglects the exponential term in the Press-Schechter approximation. At small $`V_{\mathrm{circ}}`$ ($`<20\text{km s}^{-1}`$) the points deviate from the fit, which we attribute to the incompleteness of our halo catalog at these circular velocities due to the limited mass resolution. Indeed, comparison with the CDM simulation, which has higher mass resolution, shows that the number of halos increases by about a factor of three when the threshold for $`V_{\mathrm{circ}}`$ changes from 20 km s<sup>-1</sup> to 10 km s<sup>-1</sup>. We thus estimate the completeness limit of our simulations to be $`V_{\mathrm{circ}}=20\text{km s}^{-1}`$ for the ΛCDM simulations and $`V_{\mathrm{circ}}=10\text{km s}^{-1}`$ for the CDM run. Note that for the issue of satellite abundance discussed below, any incompleteness of the catalogs at these velocities would increase the discrepancy between observations and models.
Figure 2 provides a visual example of a system of satellites around a group of two massive halos in the ΛCDM simulation. The massive halos have $`V_{\mathrm{circ}}\approx 280\text{km s}^{-1}`$ and $`205\text{km s}^{-1}`$ and masses of $`1.7\times 10^{12}h^{-1}\text{M}_{\odot }`$ and $`7.9\times 10^{11}h^{-1}\text{M}_{\odot }`$ inside the central $`100h^{-1}\mathrm{kpc}`$. In Figure 3 the more massive halo is shown in more detail. To some extent the group looks similar to the Local Group, though the distance between the halos is $`1.05h^{-1}\text{Mpc}`$, which is somewhat larger than the distance between the MW and M31. Yet, there is a significant difference from the Local Group in the number of satellites. In the simulation, there are 281 identified satellites with $`V_{\mathrm{circ}}\geq 10\text{km s}^{-1}`$ within the 1.5 $`h^{-1}`$Mpc sphere shown in Figure 2. The Local Group contains only about 40 known satellites inside the same radius.
The number of expected satellites is therefore quite large. Note, however, that the total fraction of mass bound to the satellites is rather small: $`M_{\mathrm{sat}}=0.091\times M_{\mathrm{dm}}`$, where $`M_{\mathrm{dm}}=7.8\times 10^{12}h^{-1}\text{M}_{\odot }`$ is the total mass inside the sphere. Most of the mass is bound to the two massive halos. There is another pair of massive halos in the simulation, which has even more satellites (340), but the central halo in this case was much larger than M31. Its circular velocity was $`V_{\mathrm{circ}}=302\text{km s}^{-1}`$. We will discuss the correlation of the satellite abundances with the circular velocity of the host halo below (see Figs. 4 & 5). The fraction of mass in the satellites for this system was also small ($`0.055`$).
Table 3 presents parameters of satellite systems in the ΛCDM simulation for all central halos with $`V_{\mathrm{circ}}>140\text{km s}^{-1}`$. The first and the second columns give the maximum circular velocity $`V_{\mathrm{circ}}`$ and the virial mass of the central halos. The number of satellites and the fraction of mass in the satellites are given in the third and fourth columns. All satellites within $`200(400)h^{-1}\mathrm{kpc}`$ from the central halos, possessing more than 20 bound particles, and with $`V_{\mathrm{circ}}>10\text{km s}^{-1}`$ were used. The last two columns give the three-dimensional rms velocity of the satellites and the average velocity of rotation of the satellite systems.
Figures 4 and 5 show different characteristics of the satellite systems in the Local Group (see §2), and in the ΛCDM and the CDM simulations. Top panels in the plots clearly indicate that the abundance and dynamics of the satellites depend on the circular velocity (and thus on mass) of the host halo. More massive halos host more satellites, and the rms velocity of the satellites correlates with the host’s circular velocity, as can be expected. The number of satellites is approximately proportional to the cube of the circular velocity of the central galaxy (or halo): $`N_{\mathrm{sat}}\propto V_{\mathrm{circ}}^3`$. This means that the number of satellites is proportional to the galaxy mass, $`N_{\mathrm{sat}}\propto M`$, because the halo mass is related to $`V_{\mathrm{circ}}`$ as $`M\propto V_{\mathrm{circ}}^3`$. The number of the satellites almost doubles when the distance to the central halo increases by a factor of two. This is very different from the Local Group, where the number of satellites increases only slightly with distance. Note that the fraction of mass in the satellites (see Table 3) does not correlate with the mass of the central object. The velocity dispersion decreases with distance, changing by 10% – 20% as the radius increases from $`200h^{-1}\mathrm{kpc}`$ to $`400h^{-1}\mathrm{kpc}`$. We would like to emphasize that both the number of satellites and the velocity dispersion have large real fluctuations, by a factor of two around their mean values.
The bottom panels in Figures 4 and 5 present the cumulative velocity distribution function (VDF) of satellites: the number of satellites per unit volume and per central object with internal circular velocity larger than a given value of $`V_{\mathrm{circ}}`$. Note that the VDF is obtained as the unweighted average of the functions of individual halos in the interval $`V_{\mathrm{circ}}=150-300\text{km s}^{-1}`$. This was done to improve the statistics. However, it is easy to check that the amplitude of the VDF corresponds to the satellite abundance around $`\sim 200\text{km s}^{-1}`$ halos. For instance, the average VDF shown in Figure 5 predicts $`\sim 50`$ satellites within the radius of $`400h^{-1}\mathrm{kpc}`$, while the upper panel of this figure shows that this is about what we observe for $`\sim 200\mathrm{km}/\mathrm{s}`$ hosts.
The right $`y`$-axis in the lower panels of Figures 4 and 5 shows the cumulative number of satellites in all host halos in the ΛCDM simulation. Error bars in the plots correspond to the Poisson noise. The dashed curve in Figure 5 shows the VDF of all non-satellite halos (halos located outside the $`400h^{-1}\mathrm{kpc}`$ spheres around the massive host halos). Comparison clearly indicates that the VDF of the satellite halos has the same shape as the VDF of the field halos, with the only difference being the amplitude of the satellites’ VDF. There are more satellites in the same volume close to large halos, but the fraction of large satellites is the same as in the field. We find the same result for spheres of $`200h^{-1}\mathrm{kpc}`$ radius.
The velocity distribution function can be roughly approximated by a simple power law. For satellites of the Local Group the fit at $`V_{\mathrm{circ}}>10\text{km s}\text{-1}`$ gives
$$n(>V)=300\left(\frac{V}{10\text{km s}^{-1}}\right)^{-1}(h^{-1}\text{Mpc})^{-3},$$
(1)
and
$$n(>V)=45\left(\frac{V}{10\text{km s}^{-1}}\right)^{-1}(h^{-1}\text{Mpc})^{-3},$$
(2)
for $`R<200h^{-1}\mathrm{kpc}`$ and $`R<400h^{-1}\mathrm{kpc}`$, respectively. For the ΛCDM simulation at $`V_{\mathrm{circ}}>20\text{km s}^{-1}`$ we obtain:
$$n(>V)=5000\left(\frac{V}{10\text{km s}^{-1}}\right)^{-2.75}(h^{-1}\text{Mpc})^{-3},$$
(3)
$$n(>V)=1200\left(\frac{V}{10\text{km s}^{-1}}\right)^{-2.75}(h^{-1}\text{Mpc})^{-3},$$
(4)
again, for $`R<200h^{-1}\mathrm{kpc}`$ and $`R<400h^{-1}\mathrm{kpc}`$, respectively. This approximation is formally valid for $`V_{\mathrm{circ}}>20\text{km s}^{-1}`$, but comparison with the higher-resolution CDM simulation indicates that it likely extends to smaller velocities. The numbers of observed satellites and satellite halos cross at around $`V_{\mathrm{circ}}=(50-60)\text{km s}^{-1}`$. This means that while the abundance of massive satellites ($`V_{\mathrm{circ}}>50\text{km s}^{-1}`$) reasonably agrees with what we find in the MW and Andromeda galaxies, the models predict an abundance of satellites with $`V_{\mathrm{circ}}>20\text{km s}^{-1}`$ that is approximately five times higher than that observed in the Local Group. The difference is even larger if we extrapolate our results to 10 km s<sup>-1</sup>. In this case eq.(3) predicts that on average we should expect 170 halo satellites inside a $`200h^{-1}\mathrm{kpc}`$ sphere, which is 15 times more than the number of satellites of the Milky Way galaxy at that radius.
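As a quick sanity check of these fits (assuming, as above, that they can be extrapolated down to $`10\text{km s}^{-1}`$), the few lines below evaluate eqs. (1)-(4) inside a $`200h^{-1}\mathrm{kpc}`$ sphere and reproduce the numbers quoted in this section; the script is illustrative only and not part of the analysis.

```python
import numpy as np

def n_obs(v, within_200kpc=True):
    """Observed Local Group fits, eqs. (1)-(2): cumulative satellites per (h^-1 Mpc)^3."""
    return (300.0 if within_200kpc else 45.0) * (v / 10.0) ** -1.0

def n_lcdm(v, within_200kpc=True):
    """LambdaCDM fits, eqs. (3)-(4): cumulative halos per (h^-1 Mpc)^3."""
    return (5000.0 if within_200kpc else 1200.0) * (v / 10.0) ** -2.75

vol = 4.0 / 3.0 * np.pi * 0.2 ** 3        # (h^-1 Mpc)^3, sphere of radius 200 h^-1 kpc
for v in (10.0, 20.0, 50.0):
    obs, sim = n_obs(v) * vol, n_lcdm(v) * vol
    print(f"V_circ > {v:4.0f} km/s: observed ~{obs:5.1f}, predicted ~{sim:6.1f}, ratio ~{sim / obs:4.1f}")
# -> roughly 10 vs 170 satellites at 10 km/s, a factor ~5 excess at 20 km/s,
#    and a crossover of the two curves near 50 km/s
```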
## 5. Where are the missing satellites?
Although the discrepancy between observed and predicted satellite abundances appears to be dramatic, it is too early to conclude that it indicates a problem for hierarchical models. Several effects can explain the discrepancy and thus reconcile predictions and observations. In this section we briefly discuss two possible explanations: the identification of the missing DM satellites with High Velocity Clouds observed in the Local Group, and the existence of a large number of invisible satellites containing a very small amount of luminous matter either due to early feedback by supernovae or to heating of the gas by the intergalactic ionizing background.
### 5.1. High Velocity Clouds?
As was recently discussed by Blitz et al. (1998), the abundant High-Velocity Clouds (HVCs) observed in the Local Group may possibly be the observational counterparts of the low-mass DM halos unaccounted for by dwarf satellite galaxies. It is clear that not all HVCs can be related or associated with the DM satellites; there are a number of HVCs with a clear association with the Magellanic Stream and with the disk of our Galaxy (Wakker & van Woerden 1997; Wakker, van Woerden & Gibson 1999; and references therein). Nevertheless, there are many HVCs which may well be distant ($`>100\mathrm{kpc}`$; Wakker & van Woerden 1997; Blitz et al. 1998). According to Blitz et al., stability arguments suggest diameters and total masses of these HVCs of $`\sim 25\mathrm{kpc}`$ and $`\sim 3\times 10^8M_{\odot }`$, which is remarkably close to the masses of the overabundant DM satellites in our simulations.
The number of expected DM satellites is quite high. For the pair of DM halos presented in Figure 2, we have identified 281 DM satellites with circular velocities $`>10\mathrm{km}/\mathrm{s}`$. Since the halo catalog is not complete at velocities $`\lesssim 20\text{km s}^{-1}`$ (see §4), we expect that there should be even more DM satellites at the limit $`V_{\mathrm{circ}}=10\text{km s}^{-1}`$. The correction is significant because about half of the identified halos have circular velocities below $`20\text{km s}^{-1}`$. Using eq.(3) we predict that the pair should host $`(280/2)\times 2^{2.75}=940`$ DM satellites with $`V_{\mathrm{circ}}>10\text{km s}^{-1}`$ within $`1.5h^{-1}\mathrm{Mpc}`$. A somewhat smaller number, 640 satellites, follows from eq.(4), if we double the number of satellites to take into account that we have two massive halos in the system.
The number of HVCs in the Local Group is known rather poorly. Wakker & van Woerden’s (1991) all-sky survey, made with 1 degree resolution, lists approximately 500 HVCs not associated with the Magellanic Stream. About 300 HVCs have estimated linewidths (FWHM) of $`>20\text{km s}^{-1}`$ (see Fig.1 in Blitz et al. 1998), the limit corresponding to a 3D rms velocity dispersion of $`\approx 15\text{km s}^{-1}`$. Stark et al. (1992) found 1312 clouds in the northern hemisphere, but only 444 of them are resolved. The angular resolution of their survey was 2 degrees, but it had better velocity resolution than the Wakker & van Woerden compilation. Comparison of low and high resolution observations indicates that the existing HVC samples are probably affected by selection effects (Wakker et al. 1999). The abundance of HVCs thus depends on one’s interpretation of the data. If we take the 1312 HVCs of Stark et al. (1992) and double the number to account for missing HVCs in the southern hemisphere, we arrive at about 2500 HVCs in the Local Group. This is more than three times the number of expected DM satellites. This large number of HVCs also results in a substantial fraction of the mass of the Local Group being confined in HVCs. Assuming the average masses given by Blitz et al., this naive estimate gives a total mass in HVCs of $`7.5\times 10^{11}M_{\odot }`$. If we take the mass of the Local Group to be $`3\times 10^{12}h^{-1}\mathrm{M}_{\odot }`$ (Fich & Tremaine 1991), the fraction of mass in the HVCs is high: 0.2-0.25. This is substantially higher than the fraction of mass in DM satellites in our simulations ($`\sim 0.05`$).
Nevertheless, there is another, in our opinion more realistic, way of interpreting the data. While it is true that Wakker & van Woerden (1991) may have missed many HVCs, it is likely that most of the missed clouds have small linear size. Thus, the mass should not be doubled when we make a correction for missed HVCs. In this case 500 HVCs (as in the Wakker & van Woerden sample studied by Blitz et al.) with an average dark matter mass of $`3\times 10^8h^{-1}\mathrm{M}_{\odot }`$ give in total $`1.5\times 10^{11}h^{-1}\mathrm{M}_{\odot }`$, or 0.05 of the mass of the Local Group. This is consistent with the fraction of mass in DM satellites which we find in our numerical simulations. It should be kept in mind that the small HVCs may contribute very little to the total mass in the clouds.
As we have shown above, the number density of DM satellites is a very strong function of their velocity: $`dn(V)/dV\propto V^{-3.75}`$. If the cloud velocity function is as steep as that of the halos, this might explain why changes in the parameters of different observational samples produce very large differences in the numbers of HVCs. The mass of a DM satellite is also a strong function of velocity: $`M\propto V^3`$. As a result, the total mass in satellites with velocity less than $`V`$ grows only as $`V^{0.25}`$ and is dominated by its upper limit. The conclusion is that the mass is in the most massive and rare satellites. If the same is true for the HVCs, we should not double the mass when we find that a substantial number of small HVCs were missed in a catalog.
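The scaling follows from one integral: weighting $`dn/dV\propto V^{-3.75}`$ by $`M\propto V^3`$ gives $`dM_{\mathrm{tot}}/dV\propto V^{-0.75}`$, whose integral grows as $`V^{0.25}`$. The snippet below (with an arbitrary illustrative velocity range) just verifies this numerically.

```python
import numpy as np

v = np.linspace(10.0, 120.0, 4000)     # km/s, illustrative range only
dm_dv = v ** 3 * v ** -3.75            # (mass per halo ~ V^3) x (dn/dV ~ V^-3.75) ~ V^-0.75

cum = np.cumsum(dm_dv)
cum /= cum[-1]                         # fraction of total satellite mass below each V
for vcut in (30.0, 60.0, 90.0):
    print(f"mass fraction in satellites below {vcut:.0f} km/s: {cum[np.searchsorted(v, vcut)]:.2f}")
# the cumulative mass rises like V**0.25, so the rare high-velocity satellites dominate
```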
To summarize, it seems plausible that observational data on HVCs are compatible with a picture where every DM satellite either hosts a dwarf galaxy (a rare case at small $`V_{\mathrm{circ}}`$) or an HVC. This picture relies on the large distances to the HVCs and can be either confirmed or falsified by the upcoming observations (Wakker et al. 1999). Note, however, that at present the observed properties of HVCs (mainly the abundances, distances, and linewidths) are so uncertain that a more quantitative comparison is impossible.
### 5.2. Dark satellites?
There are at least two physical processes that have likely operated during the early stages of galaxy formation and could have resulted in the existence of a large number of dark (invisible) satellites. The first process is gas ejection by supernovae-driven winds (e.g., Dekel & Silk 1996; Yepes et al. 1997; Mac Low & Ferrara 1998). This process assumes at least one initial star formation episode, and thus should produce some luminous matter inside the host DM satellites. Indeed, this process may explain the observed properties of the dwarf spheroidal galaxies in the Local Group (e.g., Dekel & Silk 1996; Peterson & Caldwell 1993; Hirashita, Takeuchi & Tamura 1998). It is not clear whether this process can also produce numerous very faint, high mass-to-light ratio systems missed in the current observational surveys. It is likely that some low-luminosity satellites have still been missed in observations, since several faint galaxies have been discovered in the Local Group just during the last few years (see §1). What seems unlikely, however, is that observations have missed so many. This may still be the case if the missed satellites are very faint (almost invisible), but more theoretical work needs to be done to determine whether gas ejection can produce numerous very faint systems. The recent work by Hirashita et al. (1998) shows that this process may be capable of producing very high mass-to-light ratio ($`M/L`$ up to $`1000`$) systems of mass $`\sim 10^8h^{-1}\mathrm{M}_{\odot }`$.
Another possible mechanism is prevention of gas collapse into, or photoevaporation of gas from, low-mass systems due to the strong intergalactic ionizing background (e.g., Rees 1986; Efstathiou 1992; Thoul & Weinberg 1996; Quinn, Katz & Efstathiou 1996; Weinberg, Hernquist & Katz 1997; Navarro & Steinmetz 1997; Barkana & Loeb 1999). Numerical simulations by Thoul & Weinberg (1996) and by Quinn et al. (1996) show that the ionizing background can inhibit gas collapse into halos with circular velocities $`\lesssim 30\text{km s}^{-1}`$. These results are in general agreement with more recent simulations by Weinberg et al. (1997) and Navarro & Steinmetz (1997).
As explained by Thoul & Weinberg, accretion of intergalactic gas heated by the ionizing background into dwarf $`\sim 30\text{km s}^{-1}`$ systems is delayed or inhibited because the gas has to overcome pressure support and is, therefore, much slower to turn around and collapse. If the collapse is delayed until relatively late epochs ($`z\lesssim 1`$), many low-mass DM satellites may have been accreted by the Local Group without having a chance to accrete gas and form stars. This would clearly explain the discrepancy between the abundance of dark matter halos in our simulations and that of the observed luminous satellites in the Local Group. More interestingly, a recent study by Barkana & Loeb (1999) shows that gas in small ($`V_{\mathrm{circ}}\lesssim 20\text{km s}^{-1}`$) halos would be photoevaporated during the reionization epoch even if the gas had a chance to collapse and virialize prior to that.
These results indicate that an ionizing background of the amplitude suggested by the lack of the Gunn-Peterson effect in quasar spectra can lead to the existence of numerous dark (invisible) clumps of dark matter orbiting around the Milky Way and other galaxies, and thus warrants further study of the subject. It would be interesting to explore potential observational tests for the existence of dark satellites, given the abundances predicted in hierarchical models. One such feasible test, examined recently by Widrow & Dubinski (1998), concerns the effects of DM satellites on microlensing statistics in the Milky Way halo.
## 6. Conclusions
We have presented a study of the abundance and circular velocity distribution of galactic dark matter satellites in hierarchical models of structure formation. Numerical simulations of the ΛCDM and CDM models predict that there should be a remarkably large number of dark matter satellites with circular velocities $`V_{\mathrm{circ}}\approx 10-20\text{km s}^{-1}`$ orbiting our Galaxy – approximately a factor of five more than the number of satellites actually observed in the vicinity of the Milky Way or Andromeda (see §4). This discrepancy appears to be robust: the relevant effects, numerical or physical, would tend to produce more dark matter satellites, not fewer. For example, dissipation in the baryonic component can only make the halos more stable and increase their chance of survival.
Although the discrepancy between the observed and predicted satellite abundances appears to be dramatic, it is too early to conclude that it indicates a problem for hierarchical models. Several effects can explain the discrepancy and thus reconcile the predictions and observations. If we discard the possibility that $`80\%`$ of the Local Group satellites have been missed in observations, we think that the discrepancy may be explained by (1) identification of the overabundant DM satellites with the High Velocity Clouds observed in the Local Group or by (2) physical processes such as supernovae-driven winds and gas heating by the ionizing background during the early stages of galaxy formation (see §5). Alternative (1) is attractive because the sizes, velocity dispersions, and abundance of the HVCs appear to be consistent with the properties of the overabundant low-mass halos. These properties of the clouds are deduced under the assumption that they are located at large ($`\gtrsim 100\mathrm{kpc}`$) distances, which should be testable in the near future with new upcoming surveys of the HVCs. Alternative (2) means that the halos of galaxies in the Local Group (and other galaxies) may contain substantial substructure in the form of numerous invisible clumps of dark matter. This second possibility is interesting enough to merit further detailed study of the above effects on the evolution of gas in low-mass dark matter halos.
We are grateful to Jon Holtzman and David Spergel for comments and discussions. This work was funded by the NSF grant AST-9319970, the NASA grant NAG-5-3842, and the NATO grant CRG 972148 to the NMSU. Our numerical simulations were done at the National Center for Supercomputing Applications (NCSA; Urbana-Champaign, Illinois).
# On Forward 𝐽/𝜓 Production at Fermilab Tevatron
University of Wisconsin - Madison MADPH-99-1097 hep-ph/9901387 January 1999
## Abstract
The D$`Ø`$ Collaboration has recently reported the measurement of $`J/\psi `$ production at low angle. We show here that the inclusion of color octet contributions in any framework is able to reproduce this data.
The D$`Ø`$ collaboration has recently reported the first measurement of $`J/\psi `$ production in the forward pseudorapidity region $`2.5\leq |\eta |\leq 3.7`$ in $`p\overline{p}`$ collisions at $`\sqrt{s}=1800`$ GeV. It was shown that the dependence of the $`J/\psi `$ cross section on its transverse momentum confirmed theoretical expectations based on NRQCD. Here we show that the soft color model is also able to explain these results. The implication of this result is that these data, once more, require the inclusion of color octet perturbative diagrams for the production of $`\psi `$’s. How this is implemented is not decisive.
We have evaluated $`\psi `$ production following reference . As in the measurement, we included prompt production as well as production via $`b`$-decay. We only adjusted the renormalization and factorization scales as appropriate for a leading order calculation. The predictions of the soft color model for forward $`J/\psi `$ production at the Tevatron are compared with the experimental results in Fig. 1. As can be seen, the leading order evaluation of the soft color model adequately describes the shape of the forward $`p_T`$ distribution and its absolute normalization.
It is interesting to verify that the soft color model describes the rapidity distribution for different cuts on $`p_T`$. The result is shown in Fig. 2, and a good agreement is obtained.
###### Acknowledgements.
This research was supported in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation, by the U.S. Department of Energy under grant DE-FG02-95ER40896, by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq).
# A New HST Measurement of the Crab Pulsar Proper Motion. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555
## 1 Introduction
The Crab pulsar was the second pulsar to be associated with its supernova remnant (Comella et al. 1969), and the interaction between the two has been the subject of deep and detailed studies (e.g. Kennel & Coroniti 1984). Recently, associating the ROSAT/HRI picture of the pulsar and its surroundings with HST/WFPC2 images of the remnant, Hester et al. (1995) have drawn a convincing picture of the central part of the Crab Nebula “symmetrical about the (presumed) rotation axis of the pulsar”, with such an axis lying “at an approximate position angle of $`115^{\circ }`$ east of north”. However, linking the pulsar rotation to the remarkably symmetrical appearance of the high resolution X-ray and optical data is clearly a kind of default solution, since “the only physical axis that exists for the pulsar is its spin axis”. Although generally correct, this statement may not represent a complete description of the Crab pulsar, which is known to move. Here we want to draw attention to the possible relationship between the Crab pulsar proper motion and the symmetric appearance of the inner Crab Nebula.
Isolated Neutron Stars are fast moving objects (e.g. Caraveo 1993; Lyne & Lorimer 1993; Lorimer 1998), and the Crab is no exception. Measurements of the proper motion of Baade’s star (later recognized to be the optical counterpart of the Crab pulsar) were attempted several times (e.g. Trimble 1968), yielding vastly different values. This prompted Minkowski (1970) to conclude that the proper motion of the star was not reliably measurable.
The situation changed a few years later, when Wyckoff & Murray (1977) obtained a new value of the Crab proper motion which made it possible to reconcile the pulsar birthplace with the center of the nebula, i.e. the filaments’ divergent point. The relative proper motion, measured by Wyckoff & Murray over a time span of 77 years, amounts to a total yearly displacement of $`15\pm 3`$ mas, corresponding to a transverse velocity of 123 km s<sup>-1</sup> for a pulsar distance of 2 kpc.
What matters here is the direction of such a motion, i.e. its position angle of $`298^{\circ }\pm 10^{\circ }`$. Taken at face value, this direction is certainly compatible with the Crab axis of symmetry defined by Hester et al. (1995), hinting at an alignment between the pulsar proper motion and the major axis of the nebula.
Given the non trivial consequences of this evidence, we have sought an independent confirmation of the pulsar proper motion. Owing to the dramatic evolution of telescopes as well as optical detectors in the last 20 years, we are now in a position to measure anew the Crab proper motion in a time span much shorter than the 77 years required by Wyckoff & Murray.
## 2 Defining the Data Set
Proper motion measurements rely on accurate relative astrometry. In order to measure the tiny angular displacement of the Crab pulsar, we need high resolution images taken at different epochs. Currently, the best instrument to pursue this task is the Wide Field Planetary Camera 2 (WFPC2), onboard the Hubble Space Telescope. Luckily enough, the Crab pulsar is a conspicuous target so that, since the first telescope refurbishing, it has been repeatedly observed (Hester et al. 1995, 1996; Blair et al. 1997). Of course, different observers used different filters and placed the pulsar either in one of the Wide Field Camera (WFC) chips or, more often, in the Planetary Camera (PC).
In order to define a homogeneous data set, first we have gone through the exposure list to single out images obtained through the same filter. Since the 547M medium bandpass ($`\lambda =5454\AA ;\mathrm{\Delta }\lambda =486.6\AA `$) turned out to be the most frequently used, we have examined all the images taken through this filter. The 547M data set (listed in Table 1) has been retrieved from the HST public database, and, after combining and cosmic ray cleaning, all images have been inspected to define a suitable set of ”good quality” reference stars.
When doing astrometric studies, the presence of good reference stars is very important. An outstanding image without at least 4 reference objects, chosen to be well below the saturation limit, but bright enough to yield sufficient counts for precise positioning, is of no use for our purposes. This is particularly true for the Planetary Camera which, in spite of its much sharper angular resolution (0.0455 arcsec/px as opposed to 0.1 arcsec/px of the WFC), suffers from the limited dimensions (35 $`\times `$ 35 arcsec) of its field of view. Indeed, among the several PC observations listed in Table 1, only #3, which is shown in Fig.1, contains 4 reference stars.
Thus, only observations #1,2,3 and 8, covering a time span of 1.9 years, appear suitable for our astrometric analysis.
## 3 The Relative Astrometry
Precise alignment of these images is our next task. The traditional astrometric approach would call for a linear image-to-image transformation, requiring at least four constants, namely two independent ($`x`$ and $`y`$) translations, rotation and image scale. However, since the paucity of reference stars would have hampered the overall accuracy of the superposition, we applied the rotate-shift procedure devised by Caraveo et al. (1996) in order to reduce the plate constants to be computed. This method takes advantage of the accurate mapping of the geometrical distortion of the WFPC2 to define the instrument scale, while the telescope roll angle is used to a priori align our images in RA and DEC. Thus, the statistical weight of the few common stars is used only to compute the $`x`$ and $`y`$ shifts.
Therefore, our alignment recipe is as follows:
– first, the frames have been corrected for the WFPC2 geometrical distortion (Holtzman et al. 1995) using the wmosaic task in STSDAS, which also automatically applies the scale transformation between the PC and WFC chips. As a result, all the “corrected” images have the same pixel size, corresponding to 0.1 arcsec (i.e. 1 WFC px);
– second, the frames have been aligned in right ascension and declination according to their roll angles;
– third, the “best” positions of the Crab pulsar, as well as those of the reference stars, have been computed by 2-D Gaussian fitting of their profiles, using specific MIDAS routines. Particular care was used for the pulsar itself, in order to make sure that the object’s centroid is not affected by the emission knot observed $`\sim 0.7`$ arcsec to the SE. A positional accuracy ranging from 0.02 px to 0.03 px was achieved for the pulsar ($`V=16.5`$) as well as for the reference stars ($`17\lesssim V\lesssim 19`$). It is worth noting that this result is by no means an exceptional one; Ibata & Lewis (1998) obtain similar accuracies for significantly fainter objects.
– Finally, we used the common reference stars (1 to 4 in Fig. 1) to compute the linear shifts needed to overlay the different frames onto image #1, which was used as a reference. This procedure did not always achieve the same degree of accuracy. While obs.#3-to-obs.#1 and obs.#8a-to-obs.#1 yielded residuals close to 0.04 WFC pixels, the superpositions involving obs.#2 and #8b,c resulted in higher residuals ($`\sim 0.1`$ px). Unfortunately, we cannot offer an explanation for this effect, other than noting that it arises when comparing images obtained with different chips of the Wide Field Camera. Therefore, we were forced to reduce our data set to just one Wide Field chip. Since we had no a priori reason to prefer one particular chip, we selected the chip which maximized the time span. This turns out to be chip#2, with obs.#1 and #8a. To these, PC observation #3 can be added. These three images, accurately superimposed, are our final data set.
## 4 Results
It is now possible to compare the positions obtained for the Crab pulsar over 1.9 years. This is done in Table 2, where we give positions, relative displacements and errors, measured for the Crab pulsar as well as for the four reference stars. While the positions measured for the Crab pulsar in obs.#3 and #8a show a small variation, marginal in $`y`$ but certainly significant in $`x`$, no significant displacement is seen in any of the reference objects. This is shown in Fig. 2, where we have plotted in the $`(\alpha ,\delta )`$ plane the coordinate offsets measured in obs.#3 and #8a wrt obs.#1, which represents the 0,0 point in the figure. While the positions of the reference stars at the three different epochs are virtually unchanged, the pulsar is clearly affected by a proper motion to NW. A linear fit to the $`\alpha `$ and $`\delta `$ displacements yields the Crab proper motion relative to the reference stars. This turns out to be
$`\mu _\alpha =-17\pm 3`$ mas yr<sup>-1</sup>, $`\mu _\delta =+7\pm 3`$ mas yr<sup>-1</sup>
corresponding to an overall annual displacement $`\mu =18\pm 3`$ mas yr<sup>-1</sup> in the plane of the sky, with a position angle of $`292^{\circ }\pm 10^{\circ }`$. This vector is also shown in Fig.1.
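For reference, the arithmetic of such a fit is sketched below. The epoch offsets are hypothetical numbers chosen only to be roughly consistent with the quoted result (they are not the Table 2 measurements), and the last line uses the standard conversion $`v_t=4.74\,\mu [\mathrm{arcsec}\,\mathrm{yr}^{-1}]\,d[\mathrm{pc}]`$ km s<sup>-1</sup>.

```python
import numpy as np

def proper_motion(epochs, dra_mas, ddec_mas):
    """Least-squares linear fit of RA/Dec offsets (mas) versus epoch (yr).
    Returns mu_alpha, mu_delta, total mu (mas/yr) and position angle (deg east of north)."""
    mu_a = np.polyfit(epochs, dra_mas, 1)[0]
    mu_d = np.polyfit(epochs, ddec_mas, 1)[0]
    mu = np.hypot(mu_a, mu_d)
    pa = np.degrees(np.arctan2(mu_a, mu_d)) % 360.0
    return mu_a, mu_d, mu, pa

# hypothetical offsets (mas) relative to the first epoch, spanning ~1.9 yr
epochs = np.array([1994.8, 1995.6, 1996.7])
dra    = np.array([0.0, -14.0, -32.0])    # westward drift (negative RA offset)
ddec   = np.array([0.0,   6.0,  13.0])    # northward drift
mu_a, mu_d, mu, pa = proper_motion(epochs, dra, ddec)
print(f"mu_alpha = {mu_a:+.1f}, mu_delta = {mu_d:+.1f} mas/yr")
print(f"mu = {mu:.0f} mas/yr at PA = {pa:.0f} deg")              # ~18 mas/yr toward the NW
print(f"v_t = {4.74 * (mu / 1000.0) * 2000:.0f} km/s for d = 2 kpc")
```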
Our result is to be compared with the value of
$`\mu _\alpha =-13\pm 2`$ mas yr<sup>-1</sup>, $`\mu _\delta =+7\pm 3`$ mas yr<sup>-1</sup>
with a position angle of $`298^{\circ }\pm 10^{\circ }`$, obtained by Wyckoff and Murray over a time span of $`\sim 77`$ years.
## 5 Conclusions
With two independent, fully consistent measurements, we can now proceed to compare the Crab pulsar proper motion direction with the axis of symmetry of the inner nebula. This task is an easy one, since we can use figure 8 of Hester et al. (1995), where such an axis coincides with the direction defined by the Knot1-Knot2 alignment. According to Hester et al. the position angle of this direction is $`115^{\circ }`$, to which an offset of $`180^{\circ }`$ is to be added to take into account the direction of the Crab motion. This yields a value of $`295^{\circ }`$, to be compared to our value of $`292^{\circ }`$ or to that of Wyckoff & Murray ($`298^{\circ }`$).
Although all these values are affected by non-negligible errors, both known, as in the case of the proper motion, and unknown, as in the case of the roughly defined axis of symmetry, an alignment between the pulsar proper motion and the “axis of symmetry” of the inner nebula seems to be present. In fact, the experimental evidence gathered so far shows that the Crab pulsar is moving along the major axis of the Crab Nebula (Wyckoff & Murray 1977) and that both the knots and the X-ray jet appear aligned with the pulsar proper motion. Although the significant uncertainties of the relevant parameters leave open the possibility of a chance coincidence, it is interesting to speculate on the implications of such an alignment.
Since a neutron star acquires its proper motion at birth, there is no doubt that the pulsar motion has been present “ab initio”, well before both knots and jets came into existence. However, the proper motion energy content is far too small to account for the surrounding structures and their rapid evolution. Therefore, the link, if any, between proper motion and axis of symmetry must be through some basic characteristic which was also present when the Crab pulsar was born. Hester et al. (1995) proposed a scenario associating the symmetrical appearance of the Nebula with the pulsar spin axis. Under this hypothesis, the neutron star motion would turn out to be aligned with the spin axis, reflecting an asymmetry of the supernova explosion along the progenitor’s spin axis. Alignments between proper motion and spin axis have been discussed in the literature (see e.g. Tademaru 1977), but no conclusive evidence was found.
If the X-ray jets do indeed trace the pulsar spin axis, and the relation between proper motion and axis of symmetry is not a fortuitous one, the Crab would provide the first example of such an alignment. While this would shed some light on the mechanisms responsible for the pulsar kick (e.g. Spruit & Phinney 1998), one must immediately add that nothing similar has yet been found for the very limited sample of the young pulsars we know. PSR 0540-69, the twin of the Crab in the Magellanic Cloud, is too far away to allow for proper motion measurements in any reasonable time span. PSR 1509-58 does not have a definite optical counterpart. The significantly older Vela pulsar does not show any alignment between its 50 mas yr<sup>-1</sup> proper motion (Nasuti et al. 1997) and the X-ray jet proposed by Markwardt & Ögelman (1995).
Before speculating any further, better data are needed to improve our knowledge of the geometry of the Crab pulsar surroundings. One more HST observation could easily improve the determination of the proper motion position angle. A very accurate proper motion measurement, however, will not settle the problem without a substantial improvement on the X-ray side. A sharper X-ray image is needed to better assess the position angle of the jet(s) together with their shape and overall dimension. The AXAF High Resolution Camera could improve significantly on the fuzzy picture of the inner Crab Nebula obtained by ROSAT.
Irrespective of future developments, however, the presence of an observed motion adds a definite direction to the cylindrically symmetrical appearance of the Crab.
# HST Observations of the Central-Cusp Globular Cluster NGC 6752. The Effect of Binary Stars on the Luminosity Function in the Core
## 1 Introduction
The study of a globular cluster’s luminosity function (LF) provides insight into its present dynamical state and the stellar populations of which it is comprised. However, the presence of binary stars can alter the appearance of the luminosity function. A LF constructed from a population containing a significant fraction of binary stars is not a single star LF at all, but rather an amalgam of single stars and binaries. Because of mass segregation, the binary fraction in a cluster is likely to vary with magnitude and radial distance to the cluster center. Thus we expect in general that the presence of binaries may make the single star LF different from the observed main-sequence LF, which includes stars on the binary sequence.
Here we attempt to quantify this effect using our HST data of NGC 6752, in which we have previously discovered a large, centrally concentrated population of main-sequence binary stars in the core (Rubenstein & Bailyn, 1997 hereafter Paper II). Data reduction and calibration procedures are discussed in Paper II. Here we discuss the implication of the binary sequence we discovered for the cluster LF.
## 2 Determining The Luminosity Function and the Effects of Mass Segregation
To disentangle the true LF from an uncorrected LF it is necessary to perform artificial star tests. In Paper II we describe the procedure we employed to digitally add nearly $`10^7`$ artificial stars to the images. We demonstrated that the artificial stars had photometric errors which were very similar to the real stars, and should therefore have the same recovery probabilities as real stars. The artificial stars were added with a flat LF which was similar to the observed LF. We calculated the recovery rate of artificial stars in a fashion similar to Bolte (1994), although we used magnitude bins of 0.5 mag. Briefly, the fraction of artificial stars recovered in the $`i^{th}`$ magnitude bin, $`f_i`$, is the number of stars recovered within that magnitude bin, divided by the number of stars added to the data in that magnitude bin.
We calculated the incompleteness correction factor by inverting the recovery fraction, 1/$`f_i`$. Since we have only split the data into “inner” and “outer” regions, we did not construct a two-dimensional completeness look-up table as did Bolte. Rather, we separately calculated the incompleteness corrections for each region. We then smoothed the results by performing a least-squares fit to the recovery fraction values as a function of magnitude. We did not fit the data beyond V=23 since at this point the completeness drops suddenly by more than a factor of 2 to below 50%. Due to the large saturated regions near a few extremely bright stars, where no objects are recovered at all, even relatively bright stars are only about 75% complete. However, only the relative completeness from magnitude bin to magnitude bin is relevant to a discussion of the LF of the cluster stars.
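The bookkeeping described above is compact in code. A rough sketch with toy stand-ins for the artificial-star lists (bin edges, recovery law and smoothing order are illustrative, not the actual values of this work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the artificial-star tests: magnitudes of the stars that
# were added, and the subset that was recovered (recovery gets harder faintward).
added = rng.uniform(15.0, 24.0, 100_000)
recovered = added[rng.random(added.size) < np.clip(1.6 - 0.05 * added, 0.05, 0.95)]

bins = np.arange(15.0, 23.01, 0.5)               # 0.5-mag bins, stopping at V = 23
n_added, _ = np.histogram(added, bins=bins)
n_rec, _ = np.histogram(recovered, bins=bins)
f_i = n_rec / n_added                            # recovery fraction per bin
correction = 1.0 / f_i                           # incompleteness correction factor

# Smooth the recovery fraction with a simple least-squares fit in magnitude,
# done separately for the inner and outer regions.
centers = 0.5 * (bins[:-1] + bins[1:])
fit = np.polyval(np.polyfit(centers, f_i, deg=2), centers)

# Each real star then enters the LF with weight 1/f_i of its magnitude bin.
```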
To construct the LFs, we bin the stars into 0.5 mag bins. Each star receives a weight equal to the inverse of its recovery probability. The results, and our interpolation, are shown in Figure 1. We have scaled the LF of the inner regions by an arbitrary factor so that the inner and outer regions have the same values along the sub-giant branch and at the main-sequence turn-off (MSTO). In the inner region (1 $`r_{core}`$ $`\simeq `$ 12 arcsec = 0.2 pc; Djorgovski 1993), the V-band LF is flat for 5 magnitudes below the MSTO, which is located at $`V\simeq 16.5`$. Beyond 5 magnitudes below the MSTO, the LF falls very rapidly.
Both the plateau and the sudden drop can be attributed to the advanced dynamical state of this stellar population. In particular, mass segregation will eject the low mass stars from the cluster center and force the more massive objects from the outer parts of the cluster in towards the center (see review by Heggie & Meylan 1997). The “outer” region retains more of its low mass objects, in that it has a flat LF 2 magnitudes further down from the MSTO than the inner region. Converting from luminosity to mass using the Yale Isochrones (Chaboyer et al. 1995), it is clear that low and moderate mass objects are strongly depleted relative to a Salpeter IMF. Indeed, the inner region shows an inverted mass function beyond 5 magnitudes below the turnoff. This result is statistically significant, but caution is warranted because the completeness level has dropped by about a factor of three.
## 3 Binary Fraction as a Function of Luminosity
In general, one would expect that mass segregation in a GC core would result in a larger binary fraction on the lower main sequence than near the turnoff, because low mass main sequence stars would be preferentially ejected from the core. To test this hypothesis, we checked to see if the color distribution redward of the main-sequence ridge-line (MSRL, as defined in Paper II) was a function of magnitude. We split the magnitude interval $`16.5\le V\le 19.0`$ into two equal portions, $`16.5\le V\le 17.75`$ (hereafter the “bright” stars) and $`17.75\le V\le 19.0`$ (hereafter the “dim” stars). Then we performed an analysis identical to the one described in § 3.2 of Paper II. Briefly, we performed Monte Carlo experiments using the photometric results of both the real stars and the artificial stars. We calculated the difference in color, $`\mathrm{\Delta }`$C, between the MSRL and each star (both real and artificial). We then determined a parameter $`Y`$ for each real star, equal to the fraction of artificial stars of similar magnitude and crowding which have $`\mathrm{\Delta }`$C smaller than that of the real star. If the real stars and artificial stars are drawn from the same input distribution, the values of $`Y`$ should be evenly distributed from zero to unity. The fact that they were not demonstrated the need for an underlying population of binary stars (see Paper II for more details). We found that the distribution of $`Y`$ values was significantly further from uniform among the dim stars than among the bright stars (Figure 2), indicating a greater fraction of binary stars in the dim group, as expected. The formal probability that the two groups of stars were drawn from a population having the same input distribution was $`10^{-7}`$.
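Operationally, the $`Y`$ statistic amounts to a rank comparison against the matched artificial stars. A schematic sketch with toy inputs (the distributions and the KS comparison below are illustrative, not the exact machinery of Paper II):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy inputs: color offsets DeltaC from the MSRL for artificial (single) stars,
# and for real stars drawn with a small red excess mimicking a binary population.
delta_c_art = rng.normal(0.0, 0.03, 50_000)
delta_c_real = np.concatenate([rng.normal(0.0, 0.03, 900),
                               rng.normal(0.08, 0.03, 100)])

# Y for each real star: fraction of matched artificial stars with smaller DeltaC.
y = np.searchsorted(np.sort(delta_c_art), delta_c_real) / delta_c_art.size

# If real and artificial stars share the same parent distribution, Y is uniform
# on [0, 1]; binaries pile up near Y = 1.  A KS test is one way to quantify this.
ks_stat, p_value = stats.kstest(y, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.2e}")
```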
To quantify the variation of binary fraction with magnitude, one would ideally determine the absolute binary fraction among both the bright and dim star populations. Unfortunately, by splitting the stars into two groups the statistical significance of the results is degraded. Specifically, the 3$`\sigma `$ limits of the binary fraction determined in the manner of Paper II are poorly constrained, 4%—50% for the bright stars, 18%—42% among the dim stars. Fortunately, it is possible to obtain statistically significant results by calculating the difference in binary fraction between the bright and faint stars. To perform this differential analysis, we modified the Monte Carlo procedure discussed in Paper II for determining the binary fraction. In this case, we increased the brightness of some of the stars in the bright group to simulate the effect of additional binaries. We then compared the $`Y`$ distribution of this altered bright star population with that of the dim stars — the fraction of bright stars which had to have light added for the two distributions to be comparable is a measure of the difference in the binary star population of the two groups.
In carrying out this procedure, we had to specify the ratio of brightness of the two stars in our fake binary systems. Since the distribution of binary mass ratios and luminosity ratios is unknown, we used the equation $`V_2=\frac{V_1}{R^\xi },`$ where $`R`$ is a random number between 0 and 1. This relation, while convenient to work with, is not meant to accurately model what is, after all, an unknown distribution. The free parameter $`\xi `$ determines the luminosity ratio of the binary distribution: $`\xi =0`$ corresponds to the case where all binaries have components with equal luminosity, while larger values of $`\xi `$ result in distributions increasingly weighted toward smaller luminosity ratios (and thus small mass ratios).
Figure 2 shows the results of these calculations. We found that for $`\xi =0`$, $`\sim 5`$% of the bright stars must have binary companions added to match the dim stars’ distribution. For the physically more realistic cases $`\xi =`$1 or 2, $`\sim 10`$% of the bright stars must “become binaries” to match the distribution of the dim stars.
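The injection step behind these numbers can be sketched as follows, combining each pair by flux addition and drawing companions from the luminosity-ratio law quoted above (all input values are toy numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

def add_companion(v1, xi, rng):
    """Draw a companion from V2 = V1 / R**xi (R uniform in (0,1)) and return
    the combined magnitude of the unresolved pair by adding the fluxes."""
    r = rng.uniform(1e-3, 1.0, v1.size)        # floor avoids division by zero
    v2 = v1 / np.power(r, xi)                  # xi = 0 gives equal components
    flux = 10.0 ** (-0.4 * v1) + 10.0 ** (-0.4 * v2)
    return -2.5 * np.log10(flux)

bright = rng.uniform(16.5, 17.75, 5_000)       # toy "bright" group magnitudes
xi, frac = 1.0, 0.10                           # ~10% of bright stars made binaries
pick = rng.random(bright.size) < frac
altered = bright.copy()
altered[pick] = add_companion(bright[pick], xi, rng)
# The Y distribution of 'altered' would then be compared with that of the dim stars.
```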
## 4 Correcting the Luminosity Function for Binaries
The relative LF of single stars would be minimally altered if the binary fraction were constant at all magnitudes. If that were the case, there would be equal numbers of binaries in all LF bins, and the underlying single star LF would not be masked. However, in NGC 6752 we now know that the binary fraction (BF) does change with magnitude. Therefore, the single star LF is altered by the binary population and can not be observed unless we first account for the binaries. In this section we will give an example of how to perform this correction. This calculation is not intended to be definitive, but rather to demonstrate the potential size of the effect on the LF.
Since the binary frequency changes with magnitude, but is only defined in two magnitude ranges, we must make assumptions about the way the binary population changes. To minimize the number of free parameters, we will assume that the BF varies linearly with magnitude. To determine the BF in each magnitude bin, we combine the absolute binary fraction (from the results of Paper II) with a simple, linear extension of the magnitude dependence of the BF (found above). The average magnitude of the stars analyzed in Paper II, in the interval from V=16.5 to V=19.0, is 17.75. We assume that at this representative magnitude, the binary fraction is the mean of the $`3\sigma `$ limits derived in Paper II, i.e. $`(0.15+0.38)/2=0.265`$. We determined above that the binary fraction increases by 10% from the bright stars to the dim stars. Using the midpoints of those groups’ magnitudes we determine that at V=17.125, the binary fraction is $`0.215`$ and that every 0.5 mag fainter on the CMD corresponds to an increase of 4% in the binary fraction.
We then construct a new LF in which the number of stars in each magnitude bin is reduced by an amount equal to the binary fraction. In other words, for a given magnitude bin the “binary fraction corrected star count”, $`\mathrm{N}_b=\mathrm{N}_c(1-\mathrm{BF}),`$ where N<sub>c</sub> is the completeness corrected star count and BF is the binary fraction in that magnitude bin. In Figure 3 we plot the resulting V-band LF from the inner region, along with the version not corrected by the removal of binaries. Not surprisingly, the LF is even more depressed at faint magnitudes when the effect of binaries is considered. The inversion noticed at fainter magnitudes in the LF of the core, plotted in Figure 1, is now present virtually all the way to the turn-off region. This calculation shows that neglecting binaries will lead to qualitative errors in the derived LF. However, the assumptions required about changes in binary fraction with magnitude mean that these particular results may not be quantitatively reliable.
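As a worked version of this correction, the per-bin rescaling under the linear extrapolation adopted above looks like this (the counts are placeholders):

```python
import numpy as np

def binary_fraction(v_mag):
    """Assumed linear extrapolation: BF = 0.215 at V = 17.125, rising 0.04 per 0.5 mag."""
    return 0.215 + 0.04 * (v_mag - 17.125) / 0.5

v_centers = np.arange(16.5, 21.01, 0.5)            # magnitude bins of the LF
bf = np.clip(binary_fraction(v_centers), 0.0, 1.0)
n_c = np.full_like(v_centers, 100.0)               # toy completeness-corrected counts
n_b = n_c * (1.0 - bf)                             # binary-fraction corrected counts

for v, b in zip(v_centers, bf):
    print(f"V = {v:5.2f}  BF = {b:.3f}")           # BF exceeds 0.5 by V ~ 21
```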
In Paper II we speculated that at faint magnitudes the MSRL might be shifted significantly to the red due to a preponderance of binaries at the faint end dominating over low luminosity single stars. The results obtained here support an interpretation that the observed ridge-line is not the main-sequence ridge-line below about 3.5 magnitudes below the MSTO, but rather, at fainter and fainter magnitudes, it is increasingly a binary ridge-line. Under our crude assumptions, the BF at V=21 is over 50%.
Ground based studies by Da Costa (1982) and Richer et al. (1991) found that the LF in NGC 6752 away from the cores rises all the way to the faint object cutoff at $`m_v=22.5`$ and $`m_v=23.5`$ respectively. Recently, Shara et al. (1995) and Ferraro et al. (1997) have shown from HST data that the LF flattens closer to the core. Both our inner LF uncorrected for binaries, and our outer LF are in general agreement with the flat LF found in the core by Shara et al. (1995). However, the inverted single star LF suggested for the core by the binary correction described here implies a greater degree of dynamical evolution than does the previous work.
## 5 Conclusions
Our analysis of the luminosity function in the core of this cluster indicates that for about 5 magnitudes down from the main-sequence turn-off there is a flat LF and beyond that point the LF is falling. Below this point there is an inverted mass function. However, this LF does not represent the LF of single main-sequence stars because there are more binaries at fainter magnitudes, presumably due to mass segregation. We find that the population of binaries increases by about 8% per magnitude over the small interval we were able to test. When we extrapolate this trend to fainter magnitudes, which may not be justified, we find that for single stars there is evidence of an inverted mass function nearly all the way up to the MSTO. Another implication of this extrapolation is that below about 3.5 magnitudes below the MSTO the observed ridge-line is dominated by binaries to the extent that it is significantly different from the single star ridge-line, and should therefore not be used for isochrone fitting.
Future studies of the stellar populations of GC cores must account for the effect of binaries. Failure to correct for the binaries on the lower main sequence will have the effect of overestimating the number of low mass stars. Since the binaries are located preferentially in the core it is unlikely that they will alter the results in the outer regions of clusters.
## 6 Acknowledgments
EPR would like to thank Peter Stetson, Ken Janes and Jim Heasley for making newer versions available of DAOFIND and SPS. CDB is grateful for a National Young Investigator award from the NSF. We thank Adrienne Cool, Pierre Demarque, Richard Larson, Mario Mateo, Jerry Orosz & Alison Sills for comments and suggestions. Sukyung Yi provided detailed instructions on how to transform Yale Isochrones to HST WFPC2 filters (detailed in Yi, Demarque & Oemler 1995), and Alison Sills carried out this transformation. Mary-Katherine McGovern assisted with the artificial star tests. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. This work has been supported by NASA through LTSA grants NAGW-2469 & NAG5-6404 and grant number HST-GO-5318 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
# The inverse problem for pulsating neutron stars: A “fingerprint analysis” for the supranuclear equation of state
## 1 Introduction
As we approach the next millennium there is a focussed worldwide effort to construct devices that will enable the first undisputed detection of gravitational waves. A network of large-scale ground-based laser-interferometer detectors (LIGO, VIRGO, GEO600, TAMA300) is under construction, while the sensitivity of the several resonant mass detectors that are already in operation continues to be improved. At the present time it seems likely that the gravitational-wave window to the Universe will be opened within the next five to ten years, and that gravitational-wave astronomy will finally become a reality.
An integral part in this effort is played by theoretical modelling of the expected sources. Theorists are presently racking their brains to think of various sources of gravitational waves that may be observable once the new ultrasensitive detectors operate at their optimum level, and of any piece of information one may be able to extract from such observations. One of the most challenging goals that can (at least, in principle) be achieved via gravitational-wave detection is the determination of the equation of state of matter at supranuclear densities.
We have recently argued that observed gravitational waves from the various nonradial pulsation modes of a neutron star can be used to infer both the mass and the radius of the star with surprisingly good accuracy, and thus put useful constraints on the equation of state \[Andersson & Kokkotas 1996, Andersson & Kokkotas 1998\]. This conclusion was, however, based on an “ideal” detection situation that ignored the various uncertainties associated with the analysis of a noisy data-stream. In the present paper we analyze thoroughly this idea by incorporating all possible statistical errors that might arise in estimating the parameters associated with the neutron star equation of state.
The spectrum of a pulsating relativistic star is known to be tremendously rich, but most of the associated pulsation modes are of little relevance for gravitational-wave detection. From the gravitational-wave point of view one would expect the most important modes to be the fundamental ($`f`$) mode of fluid oscillation, the first few pressure ($`p`$) modes and the first gravitational-wave ($`w`$) modes \[Kokkotas & Schutz 1992\]. For details on the theory of relativistic stellar pulsation we refer the reader to a recent review article by one of us \[Kokkotas 1997\]. That the bulk of the energy from an oscillating neutron star is, indeed, radiated through these modes has been demonstrated by numerical experiments \[Allen, et al. 1998, Allen, et al. 1999\].
Recently, two of us \[Andersson & Kokkotas 1998\] provided data for the relevant pulsation modes of several stellar models for each of twelve proposed realistic equations of state, thus extending earlier results of Lindblom and Detweiler \[Lindblom and Detweiler 1983\]. This data was then used to obtain empirical relations between the observables (frequency and damping time) of the $`f`$\- and the $`w`$-modes and the stellar parameters (mass and radius). It was shown that these relations could be used to infer both the radius and the mass of the star (typically with an error smaller than 10%), i.e., to take the fingerprints of the star. It was also pointed out that, since no general empirical relations could be inferred for the $`p`$-modes, they could prove important for deducing the actual equation of state once the radius and the mass of the star is known. The empirical relations, including the relevant statistical errors, are listed in Appendix 1.
The proposed strategy can potentially be of great importance to gravitational-wave astronomy, since most stars are expected to oscillate nonradially. The evidence for this is compelling: Many $`p`$-modes (as well as possible $`g`$-modes and $`r`$-modes) have been observed in the sun and there are strong indications that similar modes are excited also in more distant stars. In principle, one would expect the modes of a star to be excited in any dynamical scenario that leads to significant asymmetries. Still, one can only hope to observe gravitational waves from the most compact stars. Hence, our attention is restricted to neutron stars (or possible strange stars \[Alcock et al 1986\], if they exist). Furthermore, to lead to detectable gravitational waves the modes must be excited to quite large amplitudes, which means that only the most violent processes are of interest.
There are several scenarios in which the various pulsation modes may be excited to an interesting level: (1) A supernova explosion is expected to form a wildly pulsating neutron star that emits gravitational waves. The current estimates for the energy radiated as gravitational waves from supernovae are rather pessimistic, suggesting a total release of the equivalent to $`10^{-6}M_{\odot }c^2`$, or so. However, this may be a serious underestimate if the gravitational collapse in which the neutron star is formed is strongly non-spherical. Optimistic estimates suggest that as much as $`10^{-2}M_{\odot }c^2`$ may be released in extreme events. (2) Another potential excitation mechanism for stellar pulsation is a starquake, e.g., associated with a pulsar glitch. The typical energy released in this process may be of the order of the maximum mechanical energy that can be stored in the crust, estimated to be at the level of $`10^{-9}`$–$`10^{-7}M_{\odot }c^2`$ \[Blaes 1997, Mock & Joss 1997\]. This is also an interesting possibility considering the recent conclusion that the soft-gamma repeaters are likely to be so-called magnetars, neutron stars with extreme magnetic fields \[Duncan & Thompson 1992\], that undergo frequent starquakes. It seems very likely that some pulsation modes are excited by the rather dramatic events that lead to the most energetic bursts seen from these systems. Indeed, Duncan \[Duncan 1992\] has recently argued that toroidal modes in the crust should be excited. If modes are excited in these systems, an indication of the energy released in the most powerful bursts is the $`10^{-9}M_{\odot }c^2`$ estimated for the March 5 1979 burst in SGR 0526-66. The maximum energy should certainly not exceed the total supply in the magnetic field $`10^{-6}(B/10^{15}G)^2M_{\odot }c^2`$ \[Duncan & Thompson 1992\]. The possibility that a burst from a soft gamma-ray repeater may have a gravitational-wave analogue is very exciting. (3) The coalescence of two neutron stars at the end of binary inspiral may form a pulsating remnant. It is, of course, most likely that a black hole is formed when two neutron stars coalesce, but even in that case the eventual collapse may be halted long enough (many dynamical timescales) that several oscillation modes could potentially be identified \[Baumgarte et al 1996\]. Also, stellar oscillations can be excited by the tidal fields of the two stars during the inspiral phase that precedes the merger \[Kokkotas & Schäfer 1995\]. (4) The star may undergo a dramatic phase-transition that leads to a mini-collapse. This would be the result of a sudden softening of the equation of state (for example, associated with the formation of a condensate consisting of pions or kaons). A phase-transition could lead to a sudden contraction during which a considerable part of the star’s gravitational binding energy would be released, and it seems inevitable that part of this energy would be channeled into pulsations of the remnant. Large amounts of energy could be released in the most extreme of these scenarios: A contraction of (say) 10% can easily lead to the release of $`10^{-2}M_{\odot }c^2`$. Transformation of a neutron star into a strange star is likely to induce pulsations in a similar fashion.
It is reasonable to assume that the bulk of the total energy of the oscillation is released through a few of the star’s quadrupole pulsation modes in all these scenarios. We will assume that this is the case and assess the likelihood that the associated gravitational waves will be detected. Having done this we discuss the inverse problem, and investigate how accurately the neutron star parameters can be inferred from the gravitational wave data.
Before we proceed with our main analysis it is worthwhile making one further comment. In this study we neglect the effects of rotation on the pulsation modes. This is not because rotation plays an insignificant role. On the contrary, we expect rotation to be highly relevant in many cases. The recent discovery that the so-called $`r`$-modes may be unstable \[Andersson 1998, Friedman & Morsink 1998\] and lead to rapid spin down of a young neutron star that is born rapidly rotating \[Andersson, Kokkotas & Schutz 1999\], highlights the fact that it is not sufficient to consider only non-rotating stars. Still, as far as most pulsation modes are concerned, one would expect rotation to have a significant effect only for neutron stars with very short period, and the present study may well be reasonable for stars with periods longer than, say, 20 ms. Furthermore, it has been argued that neutron stars are typically born slowly rotating \[Spruit & Phinney 1998\]. If that is the typical case, then our results could be relevant for most newly born neutron stars. A more pragmatic reason for not including rotation in the present study is that detailed data for modes of rotating neutron stars is not yet available. Once such data has been computed the present study should be extended to incorporate rotational effects.
Having pointed out this caveat, we are prepared to proceed with the discussion of the present results. The rest of the paper is organized as follows. In Section 2 we basically repeat the analysis of Finn \[Finn1992\], in a slightly different form in order to compute the measurement errors of frequency and damping time of a gravitational wave that is emitted from a pulsating neutron star, and extend this analysis, so as to compute the accuracy by which one can estimate various parameters of the star. In Section 3 we transform our results into a form that shows clearly how plausible it is to determine the equation of state of a neutron star by analyzing the gravitational wave data. In Section 4 we discuss how detection of mode signals can be used to reveal the nuclear equation of state. Section 5 is a brief discussion of how accurately one can expect to be able to locate the source in the sky. The final section presents our conclusions. Appendix 1 contains empirical relations for oscillation frequency and damping rate of the $`f`$ and $`w`$-modes in terms of the stellar parameters (mass and radius), deduced from twelve realistic equations of state \[Andersson & Kokkotas 1998\]. Appendix 2 lists the elements of the Fisher and covariance matrices used in the statistical analysis of the mode-signals.
## 2 Statistical analysis of observed mode-signals
Suppose that one tries to detect the gravitational waves associated with the stellar pulsation modes that are excited when (say) a neutron star forms after a supernova explosion. Since all modes are relatively short lived, the detection situation is similar to that for a perturbed rotating black hole \[Echeverria1989, Finn1992\]. For each individual mode the signal is expected to have the following form:
$$h(t)=\{\begin{array}{cc}0\hfill & \text{for }t<T\text{,}\hfill \\ 𝒜e^{-(t-T)/\tau }\mathrm{sin}[2\pi f(t-T)]\hfill & \text{for }t>T\text{.}\hfill \end{array}$$
(1)
Here, $`𝒜`$ is the initial amplitude of the signal, $`T`$ is its arrival time, and $`f`$ and $`\tau `$ are the frequency and damping time of the oscillation, respectively. Since the violent formation of a neutron star is a very complicated event, the above form of the waves becomes realistic only at the late stages when the remnant is settling down and its pulsations can be accurately described as a superposition of the various modes, either fluid or spacetime ones, that have been excited. At earlier times ($`t<T`$) the waves are expected to have a random character that is completely uncorrelated with the intrinsic noise of an earth-bound detector. This partly justifies our simplification of setting the waveform equal to zero for $`t<T`$.
The energy flux $`F`$ carried by any weak gravitational wave $`h`$ is given by
$$F=\frac{c^3}{16\pi G}|\dot{h}|^2,$$
(2)
where $`c`$ is the speed of light and $`G`$ is Newton’s gravitational constant. Thus, when gravitational waves emitted from a pulsating neutron star hit such a detector on Earth, their initial amplitude will be \[Thorne 1987, Schutz 1997\]
$`𝒜\simeq 2.4`$ $`\times `$ $`10^{-20}\left({\displaystyle \frac{E_{\mathrm{gw}}}{10^{-6}M_{\odot }c^2}}\right)^{1/2}`$ (3)
$`\times `$ $`\left({\displaystyle \frac{10\mathrm{k}\mathrm{p}\mathrm{c}}{r}}\right)\left({\displaystyle \frac{1\mathrm{k}\mathrm{H}\mathrm{z}}{f}}\right)\left({\displaystyle \frac{1\mathrm{m}\mathrm{s}}{\tau }}\right)^{1/2}`$
where $`E_{\mathrm{gw}}`$ is the energy released through the mode and $`r`$ is the distance between detector and source. Again, $`f`$ is the frequency of gravitational waves and $`\tau `$ is the damping time, i.e., the e-folding time over which the amplitude of the mode decays as energy is carried away from the star.
In order to reveal this kind of signal from the noisy output of a detector one could use templates of the same form as the expected signal (so-called matched filtering). Following the analysis of Echeverria \[Echeverria1989\] the signal-to-noise ratio is found to be
$$\left(\frac{S}{N}\right)^2=\rho ^2\equiv 2\langle h|h\rangle =\frac{4Q^2}{1+4Q^2}\frac{𝒜^2\tau }{2S_n},$$
(4)
with
$$Q\equiv \pi f\tau ,$$
(5)
being the quality factor of the oscillation, and $`S_n`$ the spectral density of the detector (assumed to be constant over the bandwidth of the signal).
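Equations (3)-(5) combine into a one-line estimate of the signal-to-noise ratio of such a damped sinusoid. A minimal sketch, using the "typical" f-mode parameters quoted later in the text; the energy, distance and noise level are illustrative choices:

```python
import math

def ringdown_snr(f_khz, tau_ms, e_gw_msun, r_kpc, sqrt_sn):
    """Signal-to-noise ratio of a damped sinusoid, combining Eqs. (3)-(5).
    sqrt_sn is the detector amplitude spectral density in Hz^-1/2."""
    amp = (2.4e-20 * math.sqrt(e_gw_msun / 1e-6) * (10.0 / r_kpc)
           * (1.0 / f_khz) / math.sqrt(tau_ms))          # Eq. (3)
    q = math.pi * (f_khz * 1e3) * (tau_ms * 1e-3)        # Eq. (5)
    tau_s = tau_ms * 1e-3
    snr2 = 4 * q**2 / (1 + 4 * q**2) * amp**2 * tau_s / (2 * sqrt_sn**2)
    return math.sqrt(snr2)                               # Eq. (4)

# "Typical" f-mode (2.2 kHz, 0.15 s) at 10 kpc, ideal detector, E = 1e-6 Msun c^2:
print(ringdown_snr(2.2, 150.0, 1e-6, 10.0, 1e-23))       # of order a few tens
```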
In Eq. (4), we used the following definition for the scalar product between two functions:
$`\langle h_1|h_2\rangle `$ $`\equiv `$ $`{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}df{\displaystyle \frac{\stackrel{~}{h}_1(f)\stackrel{~}{h}_2^{*}(f)}{S_n(f)}}={\displaystyle \frac{1}{S_n}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}df\stackrel{~}{h}_1(f)\stackrel{~}{h}_2^{*}(f)`$ (6)
$`=`$ $`{\displaystyle \frac{1}{S_n}}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}dth_1(t)h_2(t).`$
To compute the accuracy by which the parameters of the signal can be determined, we first define the dimensionless parameters $`ϵ,\eta ,\xi ,\zeta `$ as
$`f_\mathrm{o}ϵ`$ $`\equiv `$ $`f-f_\mathrm{o}`$ (7)
$`\tau _\mathrm{o}\eta `$ $`\equiv `$ $`\tau -\tau _\mathrm{o}`$ (8)
$`\tau _\mathrm{o}\zeta `$ $`\equiv `$ $`T-T_\mathrm{o}`$ (9)
$`𝒜_\mathrm{o}\xi `$ $`\equiv `$ $`𝒜-𝒜_\mathrm{o},`$ (10)
where $`f_\mathrm{o},\tau _\mathrm{o},T_\mathrm{o},𝒜_\mathrm{o}`$ are the true values of the four quantities $`f,\tau ,T,𝒜`$. These new parameters are simply relative deviations of the measured quantities from their true values. One can then construct the Fisher information matrix $`\mathrm{\Gamma }_{ij}`$ and by inverting it, obtain all possible information about the measurement accuracy of each parameter of the signal, and the correlations between the errors of the parameters. The components of the symmetric Fisher matrix, which is defined by
$$\mathrm{\Gamma }_{ij}\equiv 2\left\langle \frac{\partial h}{\partial \theta _i}\right|\left.\frac{\partial h}{\partial \theta _j}\right\rangle ,$$
(11)
where $`\theta _i=(ϵ,\eta ,\zeta ,\xi )`$ are the parameters of the signal, are listed in Appendix 2.
The inverse of the Fisher matrix, $`\mathrm{\Sigma }_{ij}\equiv \mathrm{\Gamma }_{ij}^{-1}`$, the so-called covariance matrix, is the most important quantity from the experimental point of view. Its components, which are directly related to the measurement errors of the parameters, are also listed in Appendix 2. The purpose of the present analysis is to try to identify the nuclear equation of state from supposedly detected mode data. The relevant parameters for this analysis are the frequency $`f`$ and damping time $`\tau `$. Hence, we only need $`\mathrm{\Sigma }_{ϵϵ}`$ and $`\mathrm{\Sigma }_{\eta \eta }`$ —the squares of the relative errors of $`f`$ and $`\tau `$ respectively— from Appendix 2. We will also discuss the possibility that the signal will be seen by several detectors. If that is the case, we can use the time of arrival $`T`$ to locate the position of the source in the sky. For this discussion we need $`\mathrm{\Sigma }_{\zeta \zeta }`$, see Section 5.
The estimated errors of the actual measurements should be taken into account along with the statistical errors in our empirical relations (cf. Appendix A) in order to compute the parameters of the pulsating star. The law of error propagation will be employed to estimate the errors of the parameters of the star: When one tries to estimate the value of a quantity $`z`$ which is given as a function of other quantities $`z=f(x_1,x_2,\mathrm{})`$ that can be measured directly, the error that should be attributed to the former one is given by
$$\sigma _z=\left[\sum _i\left(\frac{\partial f}{\partial x_i}\sigma _{x_i}\right)^2+\sum _{i\ne j}\left(\frac{\partial f}{\partial x_i}\frac{\partial f}{\partial x_j}\sigma _{x_i}\sigma _{x_j}r_{x_ix_j}\right)\right]^{1/2},$$
(12)
where $`\sigma _{x_i}`$ is the measurement error of the quantity $`x_i`$ and $`r_{x_ix_j}`$ is the correlation between the errors of $`x_i`$ and $`x_j`$. In Section 4 we will use the measured values of the frequencies of the $`f`$ and the first $`p`$-mode ($`f_f`$ and $`f_p`$) to obtain information about the parameters of the star. Since these are independent quantities (they are measured separately by different templates), the second sum of the above law will not be present in our calculations. The magnitude of the estimated errors of neutron star parameters will determine the efficiency of the method and our chance to achieve our final goal, the determination of the equation of state of a neutron star.
## 3 Detecting pulsation modes
### 3.1 Are the modes detectable?
Two separate questions must be addressed in any discussion of gravitational-wave detection. The first one concerns identifying a weak signal in a noisy detector, thus establishing the presence of a gravitational wave in the data. The second question regards extracting the detailed parameters of the signal, e.g., the frequency and e-folding time of a pulsation mode. To address either of these issues we need an estimate of the spectral noise density $`S_n`$ of the detector.
The few pulsation modes of a neutron star that may be detectable through the associated gravitational waves all have rather high frequencies, typically of the order of several kHz. To illustrate this we show the mode-frequencies for all models considered by Andersson and Kokkotas \[Andersson & Kokkotas 1998\] as a function of the stellar mass in Figure 1. From this figure we immediately see that a detector must be sensitive to frequencies of the order of 8-12 kHz and above to observe most $`w`$-modes. Of course, it is also clear that some equations of state yield $`w`$-modes at somewhat lower frequencies. For example, for massive neutron stars with $`M\simeq 1.8`$–$`2.3M_{\odot }`$ (stiff EOS), as have been suggested for low-mass X-ray binaries, the $`w`$-mode frequency could be as low as 6 kHz (see also recent results for the axial $`w`$-modes by Benhar et al \[Benhar, et al.1999\]). The $`p`$-modes lie mainly in the range 4-8 kHz, while all $`f`$-modes have frequencies lower than 4 kHz. This means that the mode-signals we consider lie in the regime where an interferometric detector is severely limited by the photon shot noise. For this reason a detection strategy based on resonant detectors (bars, spheres or even networks of small resonant detectors \[Frasca & Papa1995\]) or laser interferometers operating in dual recycling mode seems the most promising. In fact, the range of mode-frequencies in Fig. 1 should motivate detailed studies of the prospects for constructing dedicated ultrahigh-frequency detectors. In the following we will compare three different detectors: the initial and advanced LIGO interferometers, for which
$$S_n^{1/2}\simeq h_m\left(\frac{f}{\alpha f_m}\right)^{3/2}\frac{1}{\sqrt{f}}\text{Hz}^{-1/2},$$
(13)
with $`h_m=3.1\times 10^{-22}`$, $`\alpha =1.4`$ and $`f_m=160`$ Hz for the initial configuration, and $`h_m=1.4\times 10^{-23}`$, $`\alpha =1.6`$ and $`f_m=68`$ Hz respectively, for the advanced configuration \[Flanagan & Hughes 1998\]. We also consider an “ideal” detector that is tuned to the frequency of the mode and has sensitivity of the order of $`S_n^{1/2}\simeq 10^{-23}`$ Hz<sup>-1/2</sup> (this is the sensitivity goal of detectors under construction). It should be noted that the Advanced LIGO estimates are roughly valid also for spherical detectors such as TIGA, cf. Harry, Stevenson and Paik \[Harry et al 1996\].
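For orientation, Eq. (13) with these parameters is easy to evaluate at the mode frequencies of interest; a short sketch:

```python
import numpy as np

def sqrt_sn(f_hz, h_m, alpha, f_m):
    """Amplitude spectral density fit of Eq. (13), in Hz^-1/2."""
    return h_m * (f_hz / (alpha * f_m)) ** 1.5 / np.sqrt(f_hz)

detectors = {"initial LIGO": (3.1e-22, 1.4, 160.0),
             "advanced LIGO": (1.4e-23, 1.6, 68.0)}

for f in (2.2e3, 6.0e3, 11.0e3):      # representative f-, p- and w-mode frequencies
    row = {name: f"{sqrt_sn(f, *pars):.1e}" for name, pars in detectors.items()}
    print(f"{f/1e3:5.1f} kHz", row)
# The "ideal" tuned detector is taken to be flat at 1e-23 Hz^-1/2 near the mode.
```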
The detectability of the $`f`$, $`p`$ and $`w`$-modes for different detectors can be assessed from (4). The main problem in doing this is the lack of realistic simulations providing information about the level of excitation of the various modes in an astrophysical situation. Still, given the frequency and damping rate of a specific mode we can ask what amount of energy must be channeled through the mode for it to be detectable by a given detector. We immediately find that detection of pulsating neutron stars from outside our own galaxy is very unlikely. Let us consider a “typical” stellar model for which the $`f`$-mode has parameters $`f_f=2.2`$ kHz and $`\tau _f=0.15`$ s. This corresponds to a $`1.4M_{\odot }`$ neutron star according to the Bethe-Johnson equation of state, cf. Table 3 of Andersson and Kokkotas \[Andersson & Kokkotas 1998\]. For this example we find that the $`f`$-mode in the Virgo cluster (at 15 Mpc) must carry an energy equivalent to more than $`0.3M_{\odot }c^2`$ to lead to a signal-to-noise ratio of 10 in our ideal detector. Given that the total energy estimated to be radiated as gravitational waves in a supernova is at the level of $`10^{-5}`$–$`10^{-6}M_{\odot }c^2`$, we cannot realistically expect to observe mode-signals from far beyond our own galaxy.
This means that the number of detectable events may be rather low. Certainly, one would not expect to see a supernova in our galaxy more often than once every thirty years, or so. Still, there are a large number of neutron stars in our galaxy, all of which may be involved in dramatic events (see the introduction for some possibilities) that lead to the excitation of pulsation modes. The energies required to make each mode detectable (with a signal-to-noise ratio of 10) from a source at the center of our galaxy (at 10 kpc) are listed in Table 1. In the table we have used the data for the “typical” stellar model, for which the characteristics of the $`f`$-mode were given above, $`f_p=6`$ kHz and $`\tau _p=2`$ s, and $`f_w=11`$ kHz and $`\tau _w=0.02`$ ms. This data indicates that, even though the event that excites the modes must be violent, the energy required to make each mode detectable is not at all unrealistic. In fact, the energy levels required for both the $`f`$- and $`p`$-modes are such that detection of violent events in the life of a neutron star should be possible, given the Advanced LIGO detectors (or alternatively spheres with the sensitivity proposed for TIGA). On the other hand, detection of $`w`$-modes with the broad band configuration of LIGO seems unlikely. Detection of these modes, which would correspond to observing a uniquely relativistic phenomenon, requires dedicated high frequency detectors operating in the frequency range above 6 kHz. Still, we believe that the data in Table 1 illustrates that neutron star pulsation modes may well be detectable from within our galaxy, and that the first detection may in fact come as soon as the first generation of LIGO detectors come on line.
### 3.2 How well can we determine the mode parameters?
Let us now discuss the precision with which we can hope to infer the details of each pulsation mode. After inserting Eq. (3) into formulae (39) and (43) we can compute the relative measurement error in the frequency and the damping time of the waves by some appropriately designed detector. After introducing a convenient parameter $`𝒫`$ according to
$$𝒫^{-1}=\left(\frac{S_n^{1/2}}{10^{-23}\mathrm{Hz}^{-1/2}}\right)\left(\frac{r}{10\mathrm{k}\mathrm{p}\mathrm{c}}\right)\left(\frac{E_{\mathrm{gw}}}{10^{-6}M_{\odot }c^2}\right)^{-1/2},$$
(14)
we find that the error estimates take the following form
$$\frac{\sigma _f}{f}\simeq 0.0042𝒫^{-1}\sqrt{\frac{1-2Q^2+8Q^4}{4Q^4}}\left(\frac{\tau }{1\mathrm{m}\mathrm{s}}\right)^{-1},$$
(15)
and
$$\frac{\sigma _\tau }{\tau }\simeq 0.013𝒫^{-1}\sqrt{\frac{10+8Q^2}{Q^2}}\left(\frac{f}{1\mathrm{k}\mathrm{H}\mathrm{z}}\right).$$
(16)
Also, for the time of arrival of the gravitational wave signal we get from (46)
$$\sigma _T\simeq 0.0042𝒫^{-1}\mathrm{ms}.$$
(17)
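A small helper makes Eqs. (14)-(17) easy to evaluate for any assumed source; a sketch (the default energy, distance and noise level are illustrative choices, not the assumptions behind Table 2):

```python
import math

def relative_errors(f_khz, tau_ms, e_gw_msun=1e-6, r_kpc=10.0, sqrt_sn=1e-23):
    """Relative errors on f and tau and the timing error, from Eqs. (14)-(17)."""
    p_inv = (sqrt_sn / 1e-23) * (r_kpc / 10.0) / math.sqrt(e_gw_msun / 1e-6)
    q = math.pi * (f_khz * 1e3) * (tau_ms * 1e-3)
    sig_f = 0.0042 * p_inv * math.sqrt((1 - 2*q**2 + 8*q**4) / (4 * q**4)) / tau_ms
    sig_tau = 0.013 * p_inv * math.sqrt((10 + 8*q**2) / q**2) * f_khz
    sig_t_ms = 0.0042 * p_inv
    return sig_f, sig_tau, sig_t_ms

# "Typical" f-mode (2.2 kHz, 0.15 s) at 10 kpc with E_gw = 1e-6 Msun c^2:
print(relative_errors(2.2, 150.0))
```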
To illustrate these results we show the relative errors associated with the parameter extraction for the “typical” $`1.4M_{\odot }`$ stellar model we used in the previous section; see Table 2. We assume that each mode carries the energy required for it to be observed with a signal-to-noise ratio of 10, cf. Table 1. (This is a convenient measure since it is independent of the particulars of the detector.)
From the sample data in Table 2 one sees clearly that, while an extremely accurate determination of the frequencies of both the $`f`$\- and the $`p`$-mode is possible, it would be much harder to infer their respective damping rates. It is also clear that an accurate determination of both the $`w`$-mode frequency and damping will be difficult. To illustrate this result in a different way, we can ask how much energy must be channeled through each mode in order to lead to a 1% relative error in the frequency or the damping rate, respectively. Let us call the corresponding energies $`E_f`$ and $`E_\tau `$. This measure will then be detector dependent, so we list the relevant estimates for the three detector configurations used in Table 1. When the data is viewed in this way, cf. Table 3, we see that an accurate extraction of $`w`$-mode data will not be possible unless a large amount of energy is released through these modes. Furthermore, one would clearly need a detector that is sensitive at ultrahigh frequencies. In other words, it seems unlikely that we will be able to use the $`w`$-modes to infer the detailed neutron star parameters as we have previously suggested \[Andersson & Kokkotas 1996\].
## 4 Revealing the equation of state
In the previous section we discussed issues regarding the detectability of a mode-signal, and the accuracy with which the parameters of the mode could be inferred from noisy gravitational wave data. Let us now assume that we have detected the mode and extracted the relevant parameters. We then naturally want to constrain the supranuclear equation of state by deducing the mass and the radius of the star (or combinations of them). In principle, the mass and the radius can be deduced from any two observables, cf., Table 2 of Andersson and Kokkotas \[Andersson & Kokkotas 1998\]. In the absence of detector noise, several combinations look promising, but in reality only few combinations are likely to be useful.
Consider the following example: We could in principle deduce the mass and the radius from a detected $`f`$-mode (assuming that we have extracted both its frequency and damping rate). However, as can be seen from Table 2, the estimated relative error in frequency is about three orders of magnitude smaller than the relative error in damping time. Hence, if these two measurements are to be used to determine the mass and the radius of the pulsating star one must keep in mind that the measurement of frequency is far more accurate than the measurement of damping time. And it is clear that by combining $`f_f`$ with $`\tau _f`$ we will only get accurate estimates for $`M`$ and $`R`$ if the energy in the mode is substantial. The same is true for the combination $`f_p`$ and $`\tau _p`$, as well as any combination involving the $`w`$-mode data. Still, we should not discard the possibility that there will be unique events for which $`w`$-modes carry the bulk of the energy, as for example the scattering of \[Tominanga et al. 1999\] or the infall of a smaller mass on neutron stars \[Borelli 1997\]. In such cases, the strategy proposed by Andersson and Kokkotas \[Andersson & Kokkotas 1998\] will be useful.
From the data in Tables 1–3, it seems natural to use the $`p`$-mode in any scheme for deducing the stellar parameters. But, as we have already mentioned, there does not exist a “nice” relation between frequency and stellar parameters for the $`p`$-mode that is independent of the equation of state. Still, the most promising strategy on the basis of the present results uses the frequencies of the $`f`$- and $`p`$-modes. A possible method is based on the following steps: The first step is to invert the empirical relation for the $`f`$-mode frequency, Eq. (23), in order to transform the measurement of $`f_f`$ into an estimate of the mean density of the star. Let us define a parameter $`x=(\overline{M}/\overline{R}^3)^{1/2}`$, where $`\overline{M}\equiv M/1.4M_{\odot }`$ and $`\overline{R}\equiv R/10\mathrm{km}`$, in order to simplify the notation. Then, once $`f_f`$ has been measured, $`x`$ can be computed from
$$x=\frac{f_f(\mathrm{kHz})-0.78}{1.63},$$
(18)
with a corresponding relative error
$$\frac{\sigma _x}{x}=\left[\left(\frac{0.97}{\rho }\frac{1\mathrm{m}\mathrm{s}}{\tau _f}\frac{1}{f_f-0.78}\right)^2+\left(\frac{0.01}{f_f-0.78}\right)^2+0.006^2\right]^{1/2},$$
(19)
where $`f_f`$ is expressed in kHz.
The first of the three terms in the square root comes, obviously, from the measurement error of $`f_f`$ (the extra complicating factor, related to the $`Q`$ of the $`f`$-mode, that arises there when one tries to express the result with respect to the signal-to-noise ratio, has been simplified to unity since the $`Q`$ of the $`f`$-mode is generally a large number). The other two terms arise from the dispersion of the data for the various equations of state. For a typical $`f`$-mode frequency of 2 kHz and damping time 0.2 s the relative error in $`x`$ is $`\sim 0.01`$ (assuming $`\rho >6`$). Actually, the first term in Eq. (19) is negligible and could be omitted. Therefore
$$\frac{\sigma _x}{x}\simeq \sqrt{\left(\frac{0.01}{f_f-0.78}\right)^2+0.006^2}.$$
(20)
Here, we need not worry about the sign of the factor $`f_f-0.78`$; it is always positive since $`f_f>1.4\mathrm{kHz}`$ for every stellar model in the dataset, see Figure 1 here, or Figure 1 of \[Andersson & Kokkotas 1998\].
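Once $`f_f`$ is measured, the inversion is a two-line computation; a sketch (the input frequency is a hypothetical example):

```python
import math

def mean_density_parameter(f_f_khz):
    """x = (Mbar/Rbar^3)^(1/2) and its relative error, from Eqs. (18)-(20)."""
    x = (f_f_khz - 0.78) / 1.63
    rel = math.sqrt((0.01 / (f_f_khz - 0.78)) ** 2 + 0.006 ** 2)
    return x, rel

x, rel = mean_density_parameter(2.2)        # hypothetical measured f_f of 2.2 kHz
print(f"x = {x:.3f} +/- {x*rel:.3f}")       # about 0.87 with a ~1% uncertainty
```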
Then, by measuring the frequency of the first $`p`$-mode —which is expected to carry roughly as much energy as the $`f`$-mode, see Allen et al \[Allen, et al. 1998\], one could place an error box on a diagram of $`f_p`$ vs. $`x`$, where all theoretical models for the equation of state are drawn. We illustrate this in Figure 2. This way we can identify the most likely equation of state. Detecting gravitational waves from pulsating neutron stars ensures quite accurate measurements of $`f_f`$ and $`f_p`$, so the error box in Figure 2 is remarkably small. Hence, this method can easily distinguish between the different equations of state in our dataset. In fact, what makes this method so efficient is the fact that different equations of state are described by quite distinct curves in an $`f_px`$ diagram.
Finally, we would like to infer the mass and radius of the neutron star. To do this we can use the data in Figure 3, which gives the connection between the $`p`$-mode frequencies of the different equations of state and the stellar compactness. From this diagram, we can immediately use the detected $`f_p`$ to infer the compactness of the star, once the right equation of state has been identified. Having estimated both the average density and the compactness, it is an elementary calculation to obtain the mass and the radius of the star.
At this point it is natural to ask the following question: What if the true equation of state is not close to one of the present models? This may well be the case. After all, our understanding of the state of nuclear matter at supranuclear densities still awaits observational testing. Should the equation of state be markedly different from the ones in our sample we will get an error box which does not lie close to any of the equations of state in Figure 2. Should one then want to estimate the stellar parameters, one could construct a sequence of polytropes ($`p=K\rho ^\gamma `$, $`\gamma =1+1/n`$) for various values of $`K`$ and $`n`$ (see for example the graphs in \[Andersson & Kokkotas 1996\]). The appropriate combination of these free parameters will bring the corresponding equation of state within our error box. Then, one could use Figure 3 and read off the “correct” compactness of the star ($`\overline{M}/\overline{R}`$) and from this compute its radius and mass.
## 5 Determining the position of the source
As with other kinds of gravitational-wave sources, a network of at least three detectors is needed to pinpoint the location of the source in the sky. The difference in arrival time for the three detectors could be used to determine the position of the source. The higher the accuracy in measuring the time of arrival at each detector, the more precise will be the positioning of the source. Two remote detectors, at a distance $`d`$ apart from each other will receive the signal with a temporal difference of
$$\mathrm{\Delta }T=\frac{d}{c}\mathrm{cos}\theta ,$$
(21)
where $`c`$ is the speed of light, and $`\theta `$ is the angle between the line joining the two detectors and the line of sight of the source. Therefore, the accuracy by which this angle can be measured is
$$\mathrm{\Delta }\theta =\frac{\sqrt{2}\sigma _Tc}{d\mathrm{sin}\theta }.$$
(22)
The $`\sqrt{2}`$ arises from the measurement errors of the two times of arrival. If one assumes an ‘L’ shaped network of 3 detectors with arm length of $`d=10,000`$ km, Eqs. (17) and (22) lead to an error box on the sky with angular sides of $`1^\mathrm{o}`$, at most (for specific areas of the sky, and large signal-to-noise ratios the angular sizes could be much smaller). This is quite interesting since one could then correlate the detection of gravitational waves with radio- , X-ray or gamma-ray observations directed on that specific corner of the sky.
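Numerically, Eq. (22) gives the following; the 10,000 km baseline is the value quoted above, while the timing error and the assumed 90 degree angle are illustrative:

```python
import math

def angular_error_deg(sigma_t_ms, baseline_km=10_000.0, theta_deg=90.0):
    """Angular accuracy from two arrival times, Eq. (22)."""
    c_km_s = 2.998e5
    dtheta_rad = (math.sqrt(2.0) * sigma_t_ms * 1e-3 * c_km_s
                  / (baseline_km * math.sin(math.radians(theta_deg))))
    return math.degrees(dtheta_rad)

# For sigma_T ~ 0.0042 ms the error is ~0.01 deg, well inside the ~1 deg
# upper bound quoted above for weaker signals.
print(angular_error_deg(0.0042))
```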
## CONCLUDING REMARKS
In this paper we have extended previous studies of the detectability of gravitational waves from pulsating neutron stars to include the statistical errors associated with an analysis of a weak signal in a noisy data stream. We have shown that the generation of detectors that is presently under construction may well be able to observe such sources from within our own galaxy. Detections from distant galaxies seem unlikely, unless a sizeable amount of energy (of the order of $`0.01M_{\odot }c^2`$) is released through the pulsation modes. This means that we expect the event rate to be rather low. One would certainly not expect to see more than perhaps three neutron stars being born in supernovae per century in our galaxy. Of course, other unique events in a neutron star’s life, like starquakes and phase-transitions, may lead to relevant signals. Perhaps the most interesting possibility is the possible association between gravitational waves from pulsation modes and gamma rays from the soft-gamma repeaters (the magnetars). The number of such events is very hard to estimate at the present time. Given the many uncertainties in the available models of all these scenarios, we certainly cannot rule out the possibility of detecting gravitational waves from pulsations in neutron stars.
However, and this is an important point, the chances of detecting pulsating neutron stars would be much enhanced if one could construct dedicated high-frequency gravitational-wave detectors sensitive in the range 5-10 kHz. This provides a serious challenge to experimenters, but our present analysis of the inverse problem makes clear that the pay-off of a successful detection of oscillating neutron stars would be great. We have shown that the oscillation frequencies of the fluid $`f`$ and $`p`$ modes can be accurately deduced from detected signals. Other parameters, like the particulars of the gravitational-wave $`w`$ modes, will be much more difficult to infer, unless the signal-to-noise ratio of the detection is unexpectedly large. This means that a previously proposed strategy for deducing the parameters of the star (the mass and the radius) from observations of the $`f`$ and a $`w`$-mode is unlikely to be practical. However, we show that an equally powerful strategy can be based on detected $`f`$ and $`p`$-modes. Given an observation of $`f`$ and $`p`$-mode oscillation frequencies, with the estimated accuracies, we can easily rule out many proposed equations of state. In other words, the gravitational-wave “fingerprint analysis” for neutron stars that we have proposed holds a lot of promise, and may help us reveal the true equation of state at supranuclear densities once gravitational waves are observed.
## ACKNOWLEDGMENTS
We are grateful to Bernard Schutz for many illuminating discussions. KDK would like to thank the British Council for a travel grant.
## Appendix A EMPIRICAL RELATIONS
In this Appendix we list the empirical relations deduced from data for twelve different realistic equations of state. These relations between the observables (frequency and damping of the modes) and the stellar parameters (mass $`M`$ and radius $`R`$) are essentially the same as the ones listed by Andersson and Kokkotas \[Andersson & Kokkotas 1998\], but here we also include the relevant statistical errors.
We have constructed empirical relations for both the fundamental mode of fluid pulsation (the $`f`$-mode) and the slowest damped of the gravitational wave $`w`$-modes. For the $`f`$-mode, the frequency scales with the mean density of the neutron star according to
$$\frac{f_f}{1\mathrm{kHz}}\simeq (0.78\pm 0.01)+(1.63\pm 0.01)\left(\frac{\overline{M}}{\overline{R}^3}\right)^{1/2},$$
(23)
while the damping time of the $`f`$-mode can be described by
$$\frac{1\mathrm{s}}{\tau _f}\simeq \left(\frac{\overline{M}^3}{\overline{R}^4}\right)\left[(22.85\pm 1.51)-(14.65\pm 1.32)\left(\frac{\overline{M}}{\overline{R}}\right)\right],$$
(24)
where $`\overline{M}\equiv M/1.4M_{\odot }`$ and $`\overline{R}\equiv R/10\mathrm{km}`$. The indicated uncertainties are simply the statistical errors that show up if one takes into account all stellar models in the data set, on an equal basis. In principle, equations (23) and (24) could be inverted to compute the average density ($`\overline{M}/\overline{R}^3`$) and compactness ($`\overline{M}/\overline{R}`$) of the star from measured values of $`f_f`$ and $`\tau _f`$. However, this procedure proves unreliable since Eq. (24) is a double-valued function with respect to compactness, and the presence of errors makes it impossible to infer the compactness of the star with reasonable accuracy, even if the characteristics of the waves could be measured with extreme precision.
The $`w`$-modes are pulsations directly associated with spacetime itself. They have relatively high oscillation frequencies (6-14 kHz for typical neutron stars) and barely excite any fluid motion. They are also rapidly damped, with typical lifetimes of a fraction of a millisecond. The frequency of the first $`w`$-mode scales with the stellar compactness as
$$\frac{f_w}{1\mathrm{kHz}}\approx \frac{1}{\overline{R}}\left[(20.95\pm 0.33)-(9.17\pm 0.29)\left(\frac{\overline{M}}{\overline{R}}\right)\right],$$
(25)
while the damping rate of the mode is well described by
$$\frac{\overline{M}}{\tau _w\text{ (ms)}}\approx (3.90\pm 4.39)+(104.06\pm 8.33)\left(\frac{\overline{M}}{\overline{R}}\right)-(67.28\pm 3.84)\left(\frac{\overline{M}}{\overline{R}}\right)^2.$$
(26)
## Appendix B THE FISHER AND COVARIANCE MATRIX
For a typical mode signal the components of the symmetric Fisher matrix, defined by
$$\mathrm{\Gamma }_{ij}\equiv 2\left(\frac{\partial h}{\partial \theta _i}\bigg|\frac{\partial h}{\partial \theta _j}\right),$$
(28)
where $`\theta _i=(ϵ,\eta ,\zeta ,\xi )`$, are:
$$\mathrm{\Gamma }_{ϵϵ}=\frac{1+24Q^4+32Q^6}{(1+4Q^2)^2}\rho ^2,$$
(29)
$$\mathrm{\Gamma }_{ϵ\eta }=\frac{34Q^2}{2(1+4Q^2)^2}\rho ^2,$$
(30)
$$\mathrm{\Gamma }_{ϵ\zeta }=2Q^2\rho ^2,$$
(31)
$$\mathrm{\Gamma }_{ϵ\xi }=\frac{1}{1+4Q^2}\rho ^2,$$
(32)
$$\mathrm{\Gamma }_{\eta \eta }=\frac{3+6Q^2+8Q^4}{(1+4Q^2)^2}\rho ^2,$$
(33)
$$\mathrm{\Gamma }_{\eta \zeta }=\frac{1}{2}\rho ^2,$$
(34)
$$\mathrm{\Gamma }_{\eta \xi }=\frac{3+4Q^2}{2(1+4Q^2)}\rho ^2,$$
(35)
$$\mathrm{\Gamma }_{\zeta \zeta }=(1+4Q^2)\rho ^2,$$
(36)
$$\mathrm{\Gamma }_{\zeta \xi }=0,$$
(37)
$$\mathrm{\Gamma }_{\xi \xi }=\rho ^2.$$
(38)
The inverse of the Fisher matrix, $`\mathrm{\Sigma }_{ij}\equiv (\mathrm{\Gamma }^{-1})_{ij}`$, the so-called covariance matrix, has the following components:
$$\mathrm{\Sigma }_{ϵϵ}=\frac{12Q^2+8Q^4}{2Q^4(1+4Q^2)}\frac{1}{\rho ^2},$$
(39)
$$\mathrm{\Sigma }_{ϵ\eta }=\frac{34Q^2}{Q^2(1+4Q^2)}\frac{1}{\rho ^2},$$
(40)
$$\mathrm{\Sigma }_{ϵ\zeta }=\frac{1+4Q^2}{2Q^2(1+4Q^2)}\frac{1}{\rho ^2},$$
(41)
$$\mathrm{\Sigma }_{ϵ\xi }=\frac{1+Q^2}{2Q^4}\frac{1}{\rho ^2},$$
(42)
$$\mathrm{\Sigma }_{\eta \eta }=\frac{4(5+4Q^2)}{(1+4Q^2)}\frac{1}{\rho ^2},$$
(43)
$$\mathrm{\Sigma }_{\eta \zeta }=\frac{4}{(1+4Q^2)}\frac{1}{\rho ^2},$$
(44)
$$\mathrm{\Sigma }_{\eta \xi }=\frac{32Q^2}{Q^2}\frac{1}{\rho ^2},$$
(45)
$$\mathrm{\Sigma }_{\zeta \zeta }=\frac{2}{(1+4Q^2)}\frac{1}{\rho ^2},$$
(46)
$$\mathrm{\Sigma }_{\zeta \xi }=\frac{1}{2Q^2}\frac{1}{\rho ^2},$$
(47)
$$\mathrm{\Sigma }_{\xi \xi }=\frac{(1+2Q^2)^2}{2Q^4}\frac{1}{\rho ^2}.$$
(48)
These relations can be used to estimate the measurement error in the various parameters of the mode-signal; see discussion in the main text.
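As a rough cross-check of how a covariance matrix of this kind translates into measurement errors, the sketch below (Python) builds the Fisher matrix for a damped-sinusoid template by finite differencing in white noise and inverts it. The parameterization (amplitude, frequency, damping time, phase), the noise model and all numerical values are illustrative assumptions; they are not the $`(ϵ,\eta ,\zeta ,\xi )`$ variables or the detector noise curves used above.

```python
import numpy as np

# Damped-sinusoid template: h(t) = A exp(-t/tau) sin(2*pi*f*t + phi), t >= 0.
def template(theta, t):
    A, f, tau, phi = theta
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

def fisher_matrix(theta, t, sigma_n):
    """Fisher matrix for white Gaussian noise of std sigma_n per sample,
    using central finite differences for the parameter derivatives."""
    n = len(theta)
    derivs = []
    for i in range(n):
        dtheta = np.zeros(n)
        dtheta[i] = 1e-6 * max(abs(theta[i]), 1e-12)
        derivs.append((template(theta + dtheta, t) - template(theta - dtheta, t))
                      / (2 * dtheta[i]))
    gamma = np.array([[np.sum(di * dj) for dj in derivs] for di in derivs])
    return gamma / sigma_n**2

# Illustrative f-mode-like numbers: f = 2 kHz, tau = 0.2 s, sampled at ~16 kHz.
t = np.arange(0.0, 1.0, 1.0 / 16384.0)
theta0 = np.array([1.0, 2000.0, 0.2, 0.3])     # A, f [Hz], tau [s], phi
sigma_n = 2.0                                   # noise std per sample (sets the SNR)

cov = np.linalg.inv(fisher_matrix(theta0, t, sigma_n))
snr = np.sqrt(np.sum(template(theta0, t) ** 2)) / sigma_n
print("SNR ~", round(snr, 1))
print("sigma_f  [Hz]:", np.sqrt(cov[1, 1]))
print("sigma_tau [s]:", np.sqrt(cov[2, 2]))
```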
no-problem/9901/hep-ph9901220.html
# Axino-Neutrino Mixing in Gauge-Mediated Supersymmetry Breaking Models
## Abstract
When the strong CP problem is solved by spontaneous breaking of an anomalous global symmetry in theories with gauge-mediated supersymmetry breaking, the pseudo-Goldstone fermion (the axino) is a good candidate for a light sterile neutrino. Its mixing with the ordinary neutrinos, at the level relevant for current neutrino experiments, can arise in the presence of R-parity violation. A realistic four-neutrino mass matrix is obtained when the see-saw mechanism is brought in, and an ansatz for the right-handed neutrino mass matrix is constructed.
preprint: KIAS-P99002, hep-ph/9901220
Current neutrino experiments observing the deficits in the atmospheric and solar neutrino fluxes possibly hint at the existence of an $`SU(3)_c\times SU(2)_L\times U(1)_Y`$ singlet fermion that mixes with the ordinary neutrinos. More excitingly, the reconciliation of the above neutrino data with the candidate events for the $`\nu _\mu \nu _e`$ oscillation requires the existence of such a singlet fermion (called a sterile neutrino ) with the mass $`𝒪`$(1) eV. Since the mass of a singlet state cannot be protected by the gauge symmetry of the standard model, the introduction of a sterile neutrino must come with a theoretical justification. In view of this situation, there have been many attempts to seek for the origin of a light singlet fermion with the required properties . In this letter, we point out that a sterile neutrino can arise naturally in a well-motivated extension of the standard model, namely, in the supersymmetric standard model (SSM) incorporating the Peccei-Quinn (PQ) mechanism for the resolution of the strong CP problem , and the mechanism of supersymmetry breaking through gauge mediation .
Most attractive solution to the strong CP problem would be the PQ mechanism which introduces a QCD-anomalous global symmetry (called the PQ symmetry) spontaneously broken at a high scale $`f_a10^{10}10^{12}`$ GeV, and thus predicts a pseudo Goldstone boson (the axion, $`a`$) . In supersymmetric theories, the fermionic partner of the axion (the axino, $`\stackrel{~}{a}`$) exists and would be massless if supersymmetry is conserved. But supersymmetry is broken in reality and there would be a large mass splitting between the axion and the axino, which depends on the mechanism of supersymmetry breaking. In the context of supergravity where supersymmetry breaking is mediated at the Planck scale $`M_P`$, the axino mass is generically of the order of the gravitino mass, $`m_{3/2}10^2`$ GeV , which characterizes the supersymmetry breaking scale of the SSM sector. However, the axino can be very light if supersymmetry breaking occurs below the scale of the PQ symmetry breaking, as in theories with the gauge-mediated supersymmetry breaking (GMSB) . The GMSB models are usually composed of three sectors: the SSM, messenger and hidden sector. The messenger sector contains extra vector-like quarks and leptons which conveys supersymmetry breaking from the hidden sector to the SSM sector .
In order to estimate the axino mass in the framework of GMSB, it is useful to invoke the non-linearly realized Lagrangian for the axion superfield. Below the PQ symmetry breaking scale $`f_a`$, the couplings of the axion superfield $`\mathrm{\Phi }`$ can be rotated away from the superpotential and are encoded in the Kähler potential as follows:
$`K`$ $`=`$ $`{\displaystyle \underset{I}{}}C_I^{}C_I+\mathrm{\Phi }^{}\mathrm{\Phi }+{\displaystyle \underset{I}{}}{\displaystyle \frac{x_I}{f_a}}(\mathrm{\Phi }^{}+\mathrm{\Phi })C_I^{}C_I`$ (1)
$`+`$ higher order terms in $`f_a`$ (2)
where $`C_I`$ is a superfield in the SSM, messenger, or hidden sector, and $`x_I`$ is its PQ charge. Upon supersymmetry breaking, the axino gets the mass from the third term in Eq. (1),
$$m_{\stackrel{~}{a}I}\sim x_I\frac{F_I}{f_a}$$
(3)
where $`F_I`$ is the F-term of the field $`C_I`$. If there is a massless fermion charged under the PQ symmetry, the axino would have a Dirac mass of order $`F_I/f_a`$ (note that the late-decaying particle scenario for structure formation, with the axino in the MeV region and the gravitino in the eV region, can be realized in this case for $`F\sim (10^5\mathrm{GeV})^2`$). It is however expected that there is no massless mode \[except neutrinos, see below\] in the theory, and each component of the superfield $`C_I`$ has a mass of order $`M_I\sim \sqrt{F_I}`$ unless one introduces extra symmetries to ensure the existence of massless modes. Then, the axino mass is see-saw reduced to the Majorana mass:
$$m_{\stackrel{~}{a}}\sim \frac{x_I^2F_I^2}{M_If_a^2}\sim x_I^2\frac{M_I^3}{f_a^2}.$$
(4)
Now one can think of three possible scenarios implementing the PQ symmetry: PQ symmetry acting (i) only on the SSM sector, (ii) on the messenger sector, (iii) on the hidden sector. In each case, $`\sqrt{F_I}`$ is of the order of (i) $`10^2`$–$`10^3`$ GeV, (ii) $`10^4`$–$`10^5`$ GeV, (iii) $`10^5`$–$`10^6`$ GeV. In the last case, we restricted ourselves to a light gravitino $`m_{3/2}\lesssim 1`$ keV, which evades overclosure of the universe if no entropy dumping occurs after supersymmetry breaking. Then we find the following ranges of the axino mass:
$$m_{\stackrel{~}{a}}\sim \{\begin{array}{cc}(10^{-9}\text{–}10^{-6})\mathrm{eV}\left(\frac{10^{12}\mathrm{GeV}}{f_a}\right)^2& \text{(i)}\\ (10^{-3}\text{–}1)\mathrm{eV}\left(\frac{10^{12}\mathrm{GeV}}{f_a}\right)^2& \text{(ii)}\\ (1\text{–}10^3)\mathrm{eV}\left(\frac{10^{12}\mathrm{GeV}}{f_a}\right)^2& \text{(iii)}\end{array}$$
(5)
Interestingly, the axino mass in the case (i) or (ii) falls into the region relevant for the current neutrino experiments. Our next question is then how the axino-neutrino mixing arises.
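A quick numerical reading of Eq. (4), with the PQ charge set to unity and $`\sqrt{F_I}`$ taken at the upper end of each range quoted above (illustrative inputs only), reproduces the upper ends of the mass ranges in Eq. (5):

```python
# Rough axino-mass estimate m_axino ~ x^2 * M^3 / f_a^2 (Eq. 4), in eV.
def axino_mass_eV(M_GeV, fa_GeV, x=1.0):
    return x**2 * M_GeV**3 / fa_GeV**2 * 1e9   # GeV -> eV

fa = 1e12  # GeV
for label, M in [("(i)   SSM sector      ", 1e3),
                 ("(ii)  messenger sector", 1e5),
                 ("(iii) hidden sector   ", 1e6)]:
    print(label, "m_axino ~", axino_mass_eV(M, fa), "eV")
```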
The mixing of the axino with neutrinos can come from the axion coupling to the lepton doublet $`L`$, $`x_\nu (\mathrm{\Phi }^{}+\mathrm{\Phi })L^{}L`$, which yields the mixing mass,
$$m_{\stackrel{~}{a}\nu }\sim x_\nu \frac{F_\nu }{f_a}$$
(6)
where $`x_\nu `$ is the PQ charge of the lepton doublet. It is crucial for us to observe that nonzero $`F_\nu `$ can arise when R-parity is not imposed. To calculate the size of $`F_\nu `$, we work in the basis where the Kähler potential takes the canonical form. That is, $`L^{}H_1+\mathrm{h}.\mathrm{c}.`$ is rotated away in the Kähler potential, and the superpotential of the SSM allows for the bilinear terms,
$$W=\mu H_1H_2+ϵ_i\mu L_iH_2.$$
(7)
where the dimensionless parameter $`ϵ_i`$ measures the amount of R-parity violation. Due to the $`ϵ`$-term in Eq. (7), one has $`F_\nu =ϵ_i\mu v\mathrm{sin}\beta `$ where $`v=174`$ GeV and $`\mathrm{tan}\beta =H_2/H_1`$, and therefore,
$$m_{\stackrel{~}{a}\nu _i}\approx 10^{-4}\mathrm{eV}\left(\frac{ϵ_i\mathrm{sin}\beta }{2\times 10^{-6}}\right)\left(\frac{10^{12}\mathrm{GeV}}{f_a}\right)\left(\frac{\mu }{300\mathrm{GeV}}\right).$$
(8)
Since R-parity and lepton number violating bilinear operators (7) are introduced, we have to take into account the so-called tree-level neutrino mass . The tree mass arises due to the misalignment of the sneutrino vacuum expectation values with the $`ϵ_i`$ terms, which vanishes at the mediation scale $`M_m`$ of supersymmetry breaking, but is generated at the weak scale through renormalization group (RG) evolution of supersymmetry breaking parameters. This tree mass takes the form $`m_\nu ^{\mathrm{tree}}\propto ϵ_iϵ_j`$, and its size (dominated by one component $`ϵ_i`$) is given by ,
$`m_{\nu _i}^{\mathrm{tree}}`$ $`\approx `$ $`1\mathrm{eV}\left({\displaystyle \frac{a_i}{3\times 10^{-6}}}\right)^2\left({\displaystyle \frac{M_Z}{M_{1/2}}}\right)`$ (9)
$`\text{where}a_i`$ $`\equiv `$ $`ϵ_i\mathrm{sin}\beta \left({\displaystyle \frac{\mu A_b}{m_{\stackrel{~}{l}}^2}}\right)\left({\displaystyle \frac{3h_b^2}{8\pi ^2}}\mathrm{ln}{\displaystyle \frac{M_m}{m_{\stackrel{~}{l}}}}\right)`$ (10)
where $`M_Z,M_{1/2}`$ and $`m_{\stackrel{~}{l}}`$ are the Z-boson, gaugino and slepton mass, respectively, and $`h_b,A_b`$ are the $`b`$-quark Yukawa coupling and corresponding trilinear soft-parameter, respectively. In Eq. (9), the term proportional to $`h_b^2\mathrm{ln}(M_m/m_{\stackrel{~}{l}})`$ characterizes the size of the RG-induced misalignment. Taking $`\mathrm{tan}\beta =1,\mu A_b=m_{\stackrel{~}{l}}^2`$, and $`M_m=10^3m_{\stackrel{~}{l}}`$ as reference values, one finds
$$m_{\nu _i}^{\mathrm{tree}}\approx 10^{-4}\mathrm{eV}\left(\frac{ϵ_i}{10^{-6}}\right)^2\left(\frac{M_Z}{M_{1/2}}\right)$$
(11)
which grows roughly as $`\mathrm{tan}^4\beta `$ in large $`\mathrm{tan}\beta `$ region.
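The two mass scales can be compared directly. The short sketch below evaluates Eqs. (8) and (11) for the reference values quoted in the text ($`\mu =300`$ GeV, $`\mathrm{sin}\beta =1`$, $`v=174`$ GeV); it is only a numerical restatement of those scaling relations.

```python
# Quick check of the scalings in Eqs. (8) and (11).
def m_mix_eV(eps, fa_GeV, mu_GeV=300.0, sin_beta=1.0, v_GeV=174.0):
    # axino-neutrino mixing mass, Eq. (8): eps * sin(beta) * mu * v / f_a
    return eps * sin_beta * mu_GeV * v_GeV / fa_GeV * 1e9        # GeV -> eV

def m_tree_eV(eps, MZ_over_Mhalf=1.0):
    # tree-level mass, Eq. (11): ~1e-4 eV (eps/1e-6)^2 (M_Z/M_1/2)
    return 1e-4 * (eps / 1e-6) ** 2 * MZ_over_Mhalf

print("m_mix  ~ %.1e eV" % m_mix_eV(2e-6, 1e12))   # ~1e-4 eV, cf. Eq. (8)
print("m_tree ~ %.1e eV" % m_tree_eV(1e-6))        # ~1e-4 eV, cf. Eq. (11)
```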
Based on Eqs. (5), (8) and (11), we can make the following observations:
* The just-so solution of the solar neutrino problem with large mixing and $`\mathrm{\Delta }m_{\mathrm{sol}}^2\approx 0.7\times 10^{-10}\mathrm{eV}^2`$ implies $`m_{\nu _e}^{\mathrm{tree}},m_{\stackrel{~}{a}}<m_{\stackrel{~}{a}\nu _e}\approx 10^{-5}\mathrm{eV}`$, and could be realized in the case (i). For this, we need $`f_a\approx 10^{11}`$ GeV and $`ϵ_1\approx 10^{-7}`$.
* The small mixing MSW solution requiring $`\theta _{\mathrm{sol}}\approx 4\times 10^{-4}`$ and $`\mathrm{\Delta }m_{\mathrm{sol}}^2\approx 5\times 10^{-6}\mathrm{eV}^2`$ can be realized for the case (i) or (ii) if $`m_{\nu _e}^{\mathrm{tree}}<m_{\stackrel{~}{a}}\approx 2\times 10^{-3}`$ eV and $`m_{\stackrel{~}{a}\nu _e}\approx 10^{-4}\mathrm{eV}`$. For this, one needs $`f_a\approx 10^{10}`$ GeV and $`ϵ_i\approx 10^{-8}`$ in the case (i); or $`f_a\approx 10^{12}`$ GeV and $`ϵ_i\approx 10^{-6}`$ in the case (ii).
* The $`\nu _\mu \stackrel{~}{a}`$ explanation of the atmospheric neutrino oscillation requiring nearly maximal mixing is realized if $`m_{\stackrel{~}{a}\nu _\mu }>m_{\stackrel{~}{a}},m_{\nu _\mu }^{\mathrm{tree}}`$ and $`\mathrm{\Delta }m_{\mathrm{atm}}^2\approx 2m_{\stackrel{~}{a}\nu _\mu }(m_{\stackrel{~}{a}}+m_{\nu _\mu }^{\mathrm{tree}})\approx 3\times 10^{-3}\mathrm{eV}^2`$. The best region of parameters for this is $`ϵ_2\approx 10^{-5}`$ and $`f_a\approx 10^{10}`$ GeV, preferring the case (i).
Note that a low $`\mathrm{tan}\beta `$ is preferred to suppress $`m_{\nu _i}^{\mathrm{tree}}`$ in all of the above cases.
Having seen that the $`\nu _{e,\mu }\stackrel{~}{a}`$ mixing can arise to explain the solar or atmospheric neutrino problem, let us now discuss how all the experimental data can be accommodated in our scheme. Recall that there exist only two patterns of neutrino mass-squared differences compatible with the results of all the experiments. Namely, four neutrino masses are divided into two pairs of almost degenerate masses separated by a gap of $`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}\sim 1\mathrm{eV}`$ as indicated by the result of the LSND experiments , and follow either of the following patterns :
$`\text{(A)}\underset{\mathrm{LSND}}{\underbrace{\overset{\mathrm{atm}}{\overbrace{m_1<m_2}}\ll \overset{\mathrm{solar}}{\overbrace{m_3<m_4}}}},`$ (12)

$`\text{(B)}\underset{\mathrm{LSND}}{\underbrace{\overset{\mathrm{solar}}{\overbrace{m_1<m_2}}\ll \overset{\mathrm{atm}}{\overbrace{m_3<m_4}}}}.`$ (14)
In (A), $`\mathrm{\Delta }m_{21}^2`$ is relevant for the explanation of the atmospheric neutrino anomaly and $`\mathrm{\Delta }m_{43}^2`$ is relevant for the suppression of solar $`\nu _e`$’s. In (B), the roles of $`\mathrm{\Delta }m_{21}^2`$ and $`\mathrm{\Delta }m_{43}^2`$ are interchanged.
In our scheme, the required degeneracy of $`m_3`$ and $`m_4`$ could be a consequence of the quasi-Dirac structure: $`m_{\stackrel{~}{a}\nu _i}\gg m_{\stackrel{~}{a}},m_{\nu _i}^{\mathrm{tree}}`$. To have $`m_4,m_3\approx m_{\stackrel{~}{a}\nu _i}\approx 1\mathrm{eV}`$, we need a large value of $`ϵ_i`$ $`\approx 10^{-4}`$ e.g. for $`f_a=10^{10}`$ GeV. But this makes the tree mass too large ($`m_{\nu _i}^{\mathrm{tree}}\gtrsim 1\mathrm{eV}`$) to accommodate $`\mathrm{\Delta }m_{\mathrm{sol}}^2`$ or $`\mathrm{\Delta }m_{\mathrm{atm}}^2`$ for (A) or (B), respectively. For the pattern (B), the atmospheric neutrino oscillation could also be explained by the $`\nu _\mu \nu _\tau `$ degeneracy with the splitting $`m_4-m_3\approx 10^{-3}\mathrm{eV}`$. However, when the ordinary neutrino mass comes from R-parity violation, the neutrino mass matrix generically takes a hierarchical structure, since the tree mass (of the form $`m_{ij}^{\mathrm{tree}}\propto ϵ_iϵ_j`$) is rank one. Even though this structure can be changed due to the one-loop contribution through the squark and slepton exchanges , one needs fine-tuning to achieve the required degeneracy: $`(m_4-m_3)/m_3\approx 10^{-3}`$ .
The simplest way to get the realistic neutrino mass matrix would be to introduce heavy right-handed neutrinos $`N`$ whose masses are related naturally to the PQ scale $`f_a\sim 10^{12}\mathrm{GeV}`$. For this purpose, let us assign the following $`U(1)`$ PQ charges to the fields:
$$\begin{array}{ccccccccc}H_1& H_2& L& N& \varphi & \varphi ^{}& \sigma & \sigma ^{}& Y\\ 1& 1& 2& -3& -1& 1& 6& -6& 0\end{array}$$
(15)
Then the PQ symmetry allows for the superpotential $`W=W_N+W_{PQ}`$ where
$`W_N`$ $`=`$ $`{\displaystyle \frac{h_\mu }{M_P}}H_1H_2\varphi ^2+{\displaystyle \frac{h_i}{M_P^2}}L_iH_2\varphi ^3+{\displaystyle \frac{m_i^D}{\langle H_2\rangle }}L_iH_2N_i+{\displaystyle \frac{M_{ij}}{2\langle \sigma \rangle }}N_iN_j\sigma `$ (16)
$`W_{PQ}`$ $`=`$ $`(\varphi \varphi ^{}+\sigma \sigma ^{}+M_1^2)Y+M_2Y^2+{\displaystyle \frac{h_y}{3}}Y^3+{\displaystyle \frac{1}{6}}\varphi ^6\sigma +{\displaystyle \frac{1}{6}}\varphi _{}^{}{}_{}{}^{6}\sigma ^{}`$ (17)
where $`M_{ij},M_1,M_2\sim f_a`$. Minimization of the scalar potential coming from $`W_{PQ}`$ leads to the supersymmetric minimum with the vacuum expectation values $`\varphi ,\varphi ^{},\sigma ,\sigma ^{}\sim f_a`$ and $`Y\simeq 0`$ satisfying the conditions $`\varphi /\varphi ^{}^6=\sigma ^{}/\sigma `$ and $`\varphi \varphi ^{}=6\sigma \sigma ^{}`$. Note that the superpotential $`W_N`$ provides a solution to the $`\mu `$ problem , and generates the right value of $`ϵ_i`$ for our purpose, that is,
$$\mu \sim \frac{h_\mu f_a^2}{M_P}\sim 10^2\mathrm{GeV},ϵ_i\sim \frac{h_if_a}{h_\mu M_P}\sim 10^{-6}.$$
(18)
Now the usual arbitrariness comes in the determination of the Dirac neutrino mass $`m_i^D`$ and the right-handed neutrino mass $`M_{ij}`$. Guided by the unification spirit, let us first take the Dirac masses, $`m_1^D\approx 1\mathrm{MeV}`$, $`m_2^D\approx 1\mathrm{GeV}`$ and $`m_3^D\approx 100\mathrm{GeV}`$, following the hierarchical structure of up-type quarks. Then, the see-saw suppressed neutrino mass matrix $`m_{ij}^{3\mathrm{x}3}=m_i^Dm_j^DM_{ij}^{-1}`$ takes the form,
$$m^{3\mathrm{x}3}\approx 1\mathrm{eV}\left(\begin{array}{ccc}10^{-8}c_{11}& 10^{-5}c_{12}& 10^{-3}c_{13}\\ 10^{-5}c_{12}& 10^{-2}c_{22}& c_{23}\\ 10^{-3}c_{13}& c_{23}& 10^2c_{33}\end{array}\right)$$
(19)
where $`c_{ij}=M_{ij}^{-1}/M_{23}^{-1}`$ and $`m_2^Dm_3^DM_{23}^{-1}\approx 1\mathrm{eV}`$. Note that the matrix (19) is close to the neutrino mass matrix ansatz :
$$m^{3\mathrm{x}3}\approx 1\mathrm{eV}\left(\begin{array}{ccc}0& 0& 0.01\\ 0& 10^{-3}& 1\\ 0.01& 1& 10^{-3}\end{array}\right)$$
(20)
which explains the atmospheric neutrino and LSND data, except that (19) has a too large component, $`m_{33}^{3\mathrm{x}3}`$. Restricted by minimality condition and requiring $`c_{33}=0`$, the ansatz (20) can be translated to the corresponding one for the right-handed neutrino mass matrix:
$$M_{ij}=\left(\begin{array}{ccc}A& B& C\\ B& B^2/A& 0\\ C& 0& 0\end{array}\right).$$
(21)
The parameters $`A,B,C`$ in Eq. (21) are determined from the observational quantities as follows:
$`\sqrt{\mathrm{\Delta }m_{\mathrm{LSND}}^2}`$ $`\approx `$ $`{\displaystyle \frac{Am_2^Dm_3^D}{BC}}`$ (22)
$`\theta _{\mathrm{LSND}}`$ $`\approx `$ $`{\displaystyle \frac{B}{A}}{\displaystyle \frac{m_1^D}{m_2^D}}`$ (23)
$`{\displaystyle \frac{\mathrm{\Delta }m_{\mathrm{atm}}^2}{\mathrm{\Delta }m_{\mathrm{LSND}}^2}}`$ $`\approx `$ $`{\displaystyle \frac{C}{B}}{\displaystyle \frac{m_2^D}{m_3^D}}`$ (24)
from which one finds $`A,C\approx 0.1B`$, and $`B^2/A\approx 10B`$ for $`B\approx 10^{11}\mathrm{GeV}`$.
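For concreteness, the sketch below solves Eqs. (22)-(24) for $`A`$ and $`C`$ given $`B`$ and the observational inputs. The value adopted for $`\mathrm{\Delta }m_{\mathrm{atm}}^2/\mathrm{\Delta }m_{\mathrm{LSND}}^2`$ and the lightest Dirac mass are illustrative assumptions chosen to match the numbers quoted in the text.

```python
# Extracting the ansatz parameters A, B, C from Eqs. (22)-(24).
m1D, m2D, m3D = 1e-3, 1.0, 100.0     # Dirac masses in GeV (hierarchical, as in the text)
sqrt_dm2_LSND = 1e-9                  # GeV  (~1 eV)
ratio_atm_LSND = 1e-3                 # Delta m^2_atm / Delta m^2_LSND (illustrative)

B = 1e11                              # GeV, as quoted in the text
C = B * ratio_atm_LSND * m3D / m2D    # from Eq. (24)
A = sqrt_dm2_LSND * B * C / (m2D * m3D)   # from Eq. (22)
theta_LSND = (B / A) * (m1D / m2D)    # Eq. (23)

print("A/B =", A / B, "  C/B =", C / B, "  (B^2/A)/B =", B / A)
print("theta_LSND ~", theta_LSND)
```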
In conclusion, it has been shown that the axino, which is predicted by the PQ mechanism in supersymmetric theories, can be naturally light and mix with ordinary neutrinos to explain the solar or atmospheric neutrino problem in the context of gauge mediated supersymmetry breaking. The lightness is ensured by the small supersymmetry breaking scale $`\sqrt{F}\lesssim 10^5`$ GeV (far below the PQ symmetry breaking scale $`f_a`$) and the mixing is induced due to the presence of the R-parity violating bilinear term $`ϵ_i\mu L_iH_2`$. The $`\nu _e\stackrel{~}{a}`$ mixing can arise to explain the solar neutrino deficit when the parameters are in the ranges $`ϵ_1\sim 10^{-6}`$ and $`f_a\sim 10^{12}`$ GeV, for which the axion can provide cold dark matter of the universe. To account for the atmospheric neutrino oscillation ($`\nu _\mu \stackrel{~}{a}`$), larger $`ϵ`$ and smaller $`f_a`$ are preferred: $`ϵ_2\sim 10^{-5},f_a\sim 10^{10}\mathrm{GeV}`$.
With R-parity violation alone, it is hard to find the four neutrino mass matrix which accommodates all the neutrino data. It has been therefore attempted to obtain a realistic four neutrino oscillation scheme relying on the see-saw mechanism combined with R-parity violation. In this case, only the pattern (B) with the $`\nu _e\stackrel{~}{a}`$ solar and $`\nu _\mu \nu _\tau `$ atmospheric oscillations can be realized. A toy model has been built in a way that the right values of $`\mu `$ and $`ϵ`$ are generated naturally through the PQ symmetry selection rule. In the unification scheme where the Dirac neutrino mass follows the hierarchical structure of up-type quarks, an ansatz for the right-handed neutrino mass matrix has been constructed to reproduce the required light neutrino mass matrix.
no-problem/9901/astro-ph9901153.html
# Warped Galaxies From Misaligned Angular Momenta
## 1 Introduction
An “integral sign” twist has been observed in the extended H 1 disk of many galaxies; in some cases it can also be seen in the star light. Briggs (1990) characterized the behavior of a sample of 12 warped galaxies as: (1) coplanar inside $`R_{25}`$, and warped beyond, with a straight line of nodes (LON) inside $`R_{\mathrm{Ho}}`$, (2) changing near $`R_{\mathrm{Ho}}`$, (3) into a LON on a leading spiral (as seen from the inner disk) outside $`R_{\mathrm{Ho}}`$. Bosma (1991) found 12 clearly warped disks in a sample of 20 edge-on systems; taking into account random warp orientation, the true fraction of warped disks must be larger. This high fraction of warped galaxies implies either that warps are long lived features or that they are repeatedly regenerated.
If a twisted disk were modeled as a set of uncoupled tilted rings in a flattened potential, their changing precession rates with radius would lead to a winding problem, similar to that for spirals (e.g. Binney & Tremaine 1987). If warps are long-lived, therefore, some means to overcome differential precession is required (see reviews by Toomre 1983 and Binney 1992).
Most recent ideas for warp formation rely in some way on the influence of the halo. Toomre (1983) and Dekel & Shlosman (1983) suggested that a flattened halo misaligned with the disk can give rise to a warp, and Toomre (1983), Sparke & Casertano (1988) and Kuijken (1991) found realistic warp modes inside rigid halos of this form. However, angular momentum conservation requires there to be a back reaction on the halo (Toomre 1983; Binney 1992); Dubinski & Kuijken (1995) and Nelson & Tremaine (1995) showed that a mobile halo should cause a warped disk to flatten quickly (but see also Binney et al. 1998).
As a warp represents a misalignment of the disk’s inner and outer angular momenta, Ostriker & Binney (1989) proposed a qualitative model in which the warp is generated by the slewing of the galactic potential through accretion of material with misaligned spin. The accretion of satellites by larger galaxies, such as in the Milky Way, provides direct evidence of late-infalling material with misaligned angular momentum. Such misalignments are expected to be generic in hierarchical models of galaxy formation (Quinn & Binney 1992) in which the spin axis of late-arriving material, both clumpy and diffuse, is poorly correlated with that of the material which collapsed earlier. Jiang & Binney (1998) calculate a concrete example of a warp formed through the addition of a misaligned torus of matter.
If the accreted material is flattened due to its intrinsic spin, a density misalignment is present immediately which exerts a torque to twist the disk, as in Jiang & Binney’s model. But dissipationless halo material may not be strongly flattened despite streaming about a misaligned axis. Nevertheless such material will exert a torque on the disk through dynamical friction, causing the disk’s angular momentum vector to tip towards alignment with that of the halo, as we show in this Letter.
Rotation in the inner halo will cause the disk to tilt differentially because the inner disk experiences a stronger dynamical friction torque (since the densities of both disk and halo are highest) and because time-scales are shortest in the center. As the inner disk begins to tip, the usual gravitational and pressure stresses in a twisted disk become important. Our fully self-consistent $`N`$-body simulations show that this idea is actually quite promising and leads to warps which are relatively long-lived and of the observed form.
Dynamical friction arises through an aspherical density response in the halo that lags “downstream” from the disk. In the long run, however, the density distribution in the halo must become symmetric about the disk plane, even while it continues to rotate about a misaligned axis. Simple time-reversibility arguments dictate that a steady system cannot support net torques (cf. the “anti-spiral” theorem, Lynden-Bell & Ostriker 1967; Kalnajs 1971). We therefore find the dynamical friction torque on the disk does not persist indefinitely. We do not regard this as a serious objection to our model, since late infalling material must constantly revise the net angular momentum of the halo.
## 2 Numerical Method
Since mild force anisotropies in many grid-based $`N`$-body methods can cause an isolated disk to settle to a preferred plane (e.g. May & James 1984), we adopt a code with no preferred plane. An expansion of the potential in spherical harmonics has been widely used for $`N`$-body simulations both with a grid (van Albada 1982) and without (Villumsen 1982; White 1983; McGlynn 1984). Here we adopt an intermediate approach: we tabulate coefficients of a spherical harmonic expansion of the density distribution on a radial grid, and interpolate for the gravitational forces between the values on these shells. The radial grid smooths the gravitational field, thereby avoiding the problem of “shell crossings.” Since there is no gridding in the angular directions, we retain the full angular resolution up to the adopted $`l_{\mathrm{max}}`$ – the maximum order of the spherical harmonic expansion.
While avoiding a preferred plane, this method is not well-suited to representation of disks. The vertical restoring force to the disk mid-plane converges slowly with increasing $`l_{\mathrm{max}}`$, as shown in Figure 1. Most of our simulations included terms to $`l_{\mathrm{max}}=10`$ only; tests with higher $`l_{\mathrm{max}}`$ (and fewer particles) suggest these models overestimate the magnitude and duration of the warp in the massive part of the disk, although milder and shorter-lived warps still develop. Moreover, the warp in the test-particle layer beyond the edge of the massive disk is unaffected by force resolution.
## 3 Initial model
While we have found that a massive halo is able to produce spectacular warping, we here prefer more realistic minimal halo models (cf. Debattista & Sellwood 1998). Our galaxy model has three massive components: an exponential disk of length-scale $`R_d`$, truncated at $`8R_d`$, a polytropic halo and a central softened point mass. The ratio disk:halo:central mass is 1:9:0.2, chosen to give a roughly flat rotation curve out to $`15R_d`$. We also include a disk of test particles, extending well outside the massive disk, that is intended to mimic the neutral hydrogen layer of a galaxy.
The initial massive disk had velocity dispersions set by adopting Toomre $`Q=1.5`$ and a thickness of $`0.1R_d`$ which were both independent of radius. The massless particles started with exactly circular orbits in the disk mid-plane. The central mass is a single particle with a core radius of $`0.3R_d`$.
The initial distribution of halo particles was generated by the method first used by Raha et al. (1991) which gives an exact equilibrium isotropic halo in the potential of the disk and central mass. The halo extends to a radius $`r_{\mathrm{trunc}}=30R_d`$. Because the disk has mass, the halo is not precisely spherical; its initial axis ratio varies from closely spherical at $`r\gtrsim 4R_d`$ to $`c/a\approx 0.7`$ near $`r=1.5R_d`$. The initial halo angular momentum was created by selectively reversing halo particle velocities about a chosen axis, which for Run 1 is tipped away from the disk spin ($`z`$-)axis by $`45^{\circ }`$ in the $`x`$-direction. We chose a value of the dimensionless $`\lambda \equiv \frac{L}{G}\sqrt{\frac{|E|}{M^5}}\approx 0.07`$ for our halo models; here $`L`$, $`E`$ and $`M`$ are respectively the total angular momentum, energy and mass of the halo. A value of 0.07 is typical in hierarchical clustering models (Barnes & Efstathiou 1987; Steinmetz & Bartelmann 1995).
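For reference, the spin parameter is just this combination of the halo's integral properties; the numbers below are hypothetical code-unit values (only the halo mass of 0.9 is fixed by the adopted mass ratios), chosen so that the call returns something close to the adopted value of 0.07.

```python
import numpy as np

# Dimensionless spin parameter lambda = (L/G) * sqrt(|E| / M^5),
# evaluated in the simulation units of Section 3 (G = 1).
def spin_parameter(L, E, M, G=1.0):
    return (L / G) * np.sqrt(abs(E) / M**5)

# Hypothetical halo totals in code units (M_halo = 0.9 by construction).
print(spin_parameter(L=0.09, E=-0.35, M=0.9))    # ~0.07
```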
We work in units where $`G=M(=M_{\mathrm{disk}}+M_{\mathrm{halo}})=R_d=1`$; the unit of time is therefore $`(R_d^3/GM)^{1/2}`$. A rotation period in the disk plane at $`R=1`$ is 32. We set $`l_{\mathrm{max}}=10`$, used a radial grid with 200 shells and a time step of 0.05. The disk and halo components are represented by a total of $`10^6`$ equal mass particles. Our simulations conserve energy to better than $`0.04\%`$.
## 4 Results
### 4.1 Warp in the massive disk
The outer disk lags as the inner part of the disk in Run 1 begins to tilt, causing a warp to develop almost at once. Figure 2 shows that the approximately rigid tilt of the inner disk increases rapidly at first and then more slowly, while the radius at which the warp starts also moves outwards over time. The disk in Run 1 at $`t=400`$ is shown in Figure 3.
Figure 4 shows the warp of the massive disk in the form of a Tip-LON diagram (Briggs 1990) at equally spaced times. Each point represents the direction of the normal to the best-fit plane of an annular piece of the disk, with the center of the disk defining the origin. The normal to the inner disk tilts initially in the $`(x,z)`$ plane while the outer disk is left behind, thereby shifting outer-disk points along the $`\varphi =180^{}`$ direction (e.g. $`t=100`$) while the almost flat inner part of the disk gives rise to the concentration of points in the center. The warp reaches a maximum angle of $`7^{}`$ at $`t350`$ and it takes roughly 700 time units ($`20`$ disk rotations at $`R=R_d`$) for most of the disk to settle back to a common plane.
The leading spiral, reminiscent of Briggs’ (1990) third rule, develops through clockwise differential precession in the outer parts. Precession is a consequence of gravitational coupling between the inner and outer disk (Hunter & Toomre 1969). The extremely slow precession of the outermost edge of the disk indicates that it is subject to a very mild torque, arising almost exclusively from the distant, tipped inner disk. The outer disk would precess more rapidly if the halo density distribution were flattened.
### 4.2 Secular evolution
Several changes occur as the model evolves: the dynamical friction force driving the warp decays over time, as expected from §1, causing the inner disk to tilt more slowly (Figure 2). The radius at which the warp starts moves outwards and the amplitude of the warp (the difference in tilt angle between the inner and outer disk) also decreases. The massive disk is almost coplanar again by time 1000 (Figure 4). Spiral arms and a weak bar also drive up the velocity dispersion of the particles to $`Q\approx 2.5`$.
The inner disk tilts remarkably rigidly indicating strong cohesion which arises from two distinct mechanisms. Most studies have focused on gravitational forces, which Hunter & Toomre (1969) found were inadequate to persuade the outer disk to precess along with the inner disk in a steady mode. However, the disk is also stiffened by the radial epicyclic excursions of the stars which communicate stresses across the disk.
Both self-gravity is strongest and random motion is greatest in the inner disk, where the coupling is evidently strong enough to preserve its flatness. The settling of the disk to ever larger radii should be describable in terms of the group velocity and/or damping of bending waves in a warm and finitely thick disk, but the absence of a dispersion relation valid in this regime precludes a comparison with theory. It is interesting that each annulus settles as its precession angle reaches $`180^{}`$ (Figure 4), thereby preventing excessive winding of the warp. The settling of each ring after half its precession period could be coincidental but we have seen it in many models. One possible reason is that a warm disk cannot support bending waves with wavelength shorter than the average epicycle diameter; a prediction based on this idea is only roughly in accord with the radially dependent settling time, however. It should be noted that whether settling is described by group velocity, wave damping or precession angles, it should be more rapid in a disk with stronger forces towards the mid-plane.
To demonstrate the importance of random motion, we ran a new simulation (Run 2) identical to Run 1 except with $`Q=4.0`$ initially in the disk. The warp was much reduced, as shown in Figure 5, even though the inner disk tilted by an angle comparable to that in Run 1.
### 4.3 Test particle layer
The sheet of test particles is intended to approximate a gaseous disk. Being massless, it does not induce a response from the halo, but is perturbed by forces from the tilted massive disk and its associated halo response. Within $`8R_d`$ the test particles simply tilt or warp with the disk. Outside the massive disk, however, the disturbance forces from the halo and the massive disk drop off rapidly and the plane of this dynamically-cool sheet hardly moves at large radii. It therefore appears warped relative to the plane of the tilted disk, and remains so even by the end of the simulation when the stellar disk has mostly settled.
### 4.4 Further Simulation
The spin axis of the halo in Run 1 was initially inclined at $`45^{\circ }`$ to that of the disk. In Run 3, we set this angle to $`135^{\circ }`$, thereby reversing the sign of $`J_z`$, the component of the halo's angular momentum along the $`z`$-axis. The larger misalignment angle causes the inner disk to tilt faster and further and gives rise to a larger warp, as shown in Figure 5. Dynamical friction also lasts longer. Other properties of the warp are similar to those in Run 1; in particular, the warp begins at a similar radius at equal times.
## 5 Conclusions
Our simulations have confirmed that dynamical friction from a halo having angular momentum misaligned with that of the disk causes a transient warp. The warp has two properties commonly observed: the LON traces out a leading spiral relative to the inner disk and lasts longest in the H 1 layer.
By driving the warp, we side-step the most troublesome difficulties faced by other warp mechanisms. The bane of global mode warp models, that forces are simply too weak to overcome differential precession near the edge, has become a strength in our mechanism: the weak coupling of the outer edge creates the warp. Furthermore, the gradual settling of the warm disk avoids any winding problem.
The massive disk can warp slightly, but is largely rigid both because of gravitational restoring forces and radial pressure. The size and lifetime of the warp in the massive disk are probably somewhat overestimated because our numerical method does not yield the full gravitational restoring force. This worry does not affect the conclusions about the warp in the extended H 1 layer, which has very little mass and rigidity.
We have deliberately adopted an almost spherical halo in order to show that warps can be formed without misaligned density distributions. Rotating halos are likely, of course, to be slightly flattened also, in which case the disk will respond to both types of forcing. This will lead to warps that precess, H 1 layers that do experience forces, and so on. Studies of these cases will be reported in a future paper.
As noted above, we expect the net angular momentum of the halo to be revised as material continues to straggle in long after the main galaxy has reached maturity. Every change to the halo’s spin vector can be expected to affect the disk through friction, even if the arriving material is torn apart at large distance by the tidal field of the host galaxy. Our picture is similar to that proposed by Ostriker & Binney (1989), but who envisage warps as being driven from the outside by a misalignment of the inner disk with the flattened outer halo. In practice, both mechanisms must be inextricably linked. On-going infall makes it hardly surprising that warps are detected in most disks.
Conversations with Scott Tremaine and Alar Toomre and the report of the referee, James Binney, were most helpful. The authors wish to thank the Isaac Newton Institute, Cambridge, England for their hospitality for part of this project. This work was supported by NSF grant AST 96/17088 and NASA LTSA grant NAG 5-6037.
no-problem/9901/astro-ph9901150.html
# Radiative Transfer Effects and Convective Collapse: Size(flux)-Strength Distribution for the Small-scale Solar Magnetic Flux Tubes
## 1. Introduction
One of the important properties of the small-scale solar magnetic structures, as established by observations (Stenflo and Harvey 1985, Zayer et al 1989, 1990; Schussler 1991), is the following: the strong-field $`network`$ $`elements`$ have field strengths that depend only weakly on the flux per element, $`\mathrm{\Phi }`$, with typical values of $`\mathrm{\Phi }`$ about $`1\times 10^{18}`$ Mx and higher, whereas the weak-field $`inner`$ $`network`$ structures have a typical strength of about $`500`$ G (Keller et al 1994, Solanki et al 1996, Lin 1995) and $`\mathrm{\Phi }`$ about $`1\times 10^{17}`$ Mx or lower, with a strong dependence between the two. If the convective collapse of a weak-field tube is the global process responsible for the formation of the strong-field tubes that comprise the $`network`$, with strengths weakly dependent on $`\mathrm{\Phi }`$ (Parker 1978; Webb and Roberts 1978; Spruit and Zweibel 1979; Spruit 1979; Hasan 1983, 1984; Venkatakrishnan 1985; Steiner 1996), then it should be explained why tubes with smaller fluxes, viz. the $`inner`$ $`network`$ elements, do not collapse to kG strength. Efficient radiative exchange of a small flux tube with its surroundings (Hasan 1986; Venkatakrishnan 1986) offers a natural explanation. Here, with a quantitative comparison to the above observationally established properties of the solar flux tubes in mind, we employ a semi-empirical model of the photospheric and convection-zone structure of the Sun and study in detail the effects of radiation on the convective instability and the wave motions.
## 2. Equations
We add to the familiar thin flux tube equations(Roberts and Webb,1978) the following non-adiabatic energy equation(see e.g.,Cox 1980),
$$\frac{\partial p}{\partial t}+v\frac{\partial p}{\partial z}-\frac{\mathrm{\Gamma }_1p}{\rho }\left[\frac{\partial \rho }{\partial t}+v\frac{\partial \rho }{\partial z}\right]=-\frac{p\chi _T}{\rho c_vT}\nabla \cdot 𝐅$$
(1)
where $`p`$, $`\rho `$, and $`𝐅`$ denote the fluid pressure, density, and radiative energy flux respectively; (see Cox(1980) for the definitions of other thermodynamic quantities); all the variables are evaluated on the tube axis(r=0); The radiative flux $`𝐅`$ is calculated in the generalised Eddington approximation following Unno and Spiegel(1967): the mean intensity $`J`$ that is needed to evaluate the flux,
$$𝐅_R=-\frac{4\pi }{3\kappa \rho }\nabla J$$
(2)
is found by reducing the exact relation
$$\nabla \cdot 𝐅_R=4\pi \kappa \rho (S-J)$$
(3)
to a form appropriate for a thin tube, which reads,
$$\frac{1}{3}\frac{\partial ^2J}{\partial \tau ^2}+\frac{4}{3}\left(\frac{J_e-J}{\tau _a^2}\right)=J-S$$
(4)
where $`d\tau =\kappa \rho dz`$, $`\tau _a=\kappa \rho a`$, $`S`$ is the source function which we take it to be the Planck function, $`a`$ is the tube radius, and $`J_e`$ is the mean intensity in the external medium. $`J_e`$ is found by solving the equation
$$\frac{1}{3}\frac{\partial ^2J_e}{\partial \tau _e^2}=J_e-S_e$$
(5)
### 2.1. Equilibrium
The equilibrium stratification of the external medium that we use here is the one determined to match the combined $`VALC`$(Vernazza et al. 1981) and Spruit(1977) models (see Hasan,Kneer and Kalkofen(1998) for details about the construction of this external quite Sun model); the Rosseland mean opacities are calculated by interpolation from the tables of Kurucz(1993) for the upper layers and from those of Rogers and Iglesias(1992) for the deeper regions. We assume temperature equilibrium, $`T=T_e`$, which implies that $`\beta `$, defined as $`\beta =8\pi p/B^2`$, is constant with z(if the dependence of $`\mu `$ is neglected). The pressure and density are thus determined from
$`p={\displaystyle \frac{\beta }{1+\beta }}p_e`$ (6)
$`\rho ={\displaystyle \frac{\beta }{1+\beta }}\rho _e`$ (7)
The extent of the flux tube covers the atmosphere from the temperature minimum in the chromosphere to $`5000`$km deep in the convection zone with the photospheric surface($`\tau =1`$) being assigned $`z=0`$. We measure the positive $`z`$ downwards.
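For orientation, the pressure balance underlying Eqs. (6)-(7) gives $`B^2/8\pi =p_e-p=p_e/(1+\beta )`$, so each $`\beta `$ maps to a field strength once the external pressure at the relevant level is fixed. The sketch below assumes an illustrative value of the external pressure near the (Wilson-depressed) $`\tau =1`$ level of the tube; the exact number depends on the adopted atmosphere model, but $`\beta \approx 2.45`$ then lands close to the value of about 1160 G quoted in Sect. 3.1 below.

```python
import numpy as np

# Tube field strength from pressure balance: B^2/(8*pi) = p_e - p = p_e/(1 + beta).
def B_gauss(beta, p_ext):
    """p_ext in dyn/cm^2; returns B in Gauss."""
    return np.sqrt(8.0 * np.pi * p_ext / (1.0 + beta))

# Illustrative external pressure at the depressed tau = 1 level of the tube.
p_ext = 1.85e5
for beta in (2.45, 5.0, 10.0, 50.0):
    print(f"beta = {beta:5.2f}  ->  B ~ {B_gauss(beta, p_ext):6.0f} G")
```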
### 2.2. Linear Stability: The Perturbation Equations
With the assumption that the perturbations in the ambient medium are negligible small perturbations inside the tube about the equilibrium described above obey the following equations in the linear limit:
$$\frac{\partial \xi }{\partial z}=\frac{B^{^{}}}{B}-\frac{\rho ^{^{}}}{\rho }-\left[\frac{d(\mathrm{ln}\rho )}{dz}-\frac{d(\mathrm{ln}B)}{dz}\right]\xi $$
(8)
$$\frac{\partial ^2\xi }{\partial t^2}=-Hg\frac{\partial }{\partial z}\left(\frac{p^{^{}}}{p}\right)-g\left(\frac{p^{^{}}}{p}-\frac{\rho ^{^{}}}{\rho }\right)$$
(9)
$$\frac{B^{^{}}}{B}=-\frac{\beta }{2}\frac{p^{^{}}}{p}$$
(10)
$$\frac{\partial }{\partial t}\left(\frac{p^{^{}}}{p}\right)-\mathrm{\Gamma }_1\frac{\partial }{\partial t}\left(\frac{\rho ^{^{}}}{\rho }\right)+\left[\frac{d\mathrm{ln}p}{dz}-\mathrm{\Gamma }_1\frac{d\mathrm{ln}\rho }{dz}\right]\frac{\partial \xi }{\partial t}=-\frac{\chi _T}{\rho c_vT}\nabla \cdot 𝐅^{^{}}$$
(11)
where $`\xi `$ denotes the vertical displacement and $`H`$ is the pressure scale height. the perturbation of equation (3) can be done in a straightforward manner:
$$\nabla \cdot 𝐅_𝐑^{^{}}=\frac{dF_R^{^{}}}{dz}=4\pi \kappa _a\rho (S^{^{}}-J^{^{}})+\nabla \cdot 𝐅_𝐑\left(\frac{\kappa ^{^{}}}{\kappa }+\frac{\rho ^{^{}}}{\rho }\right)$$
(12)
The perturbation in the mean intensity $`J^{^{}}`$ is determined by perturbing and linearizing the transfer equation (4) which in the first order moment form are
$$\frac{d}{dz}\left(\frac{J^{^{}}}{J}\right)=\frac{dlnJ}{dz}\frac{\eta ^{^{}}}{\eta }-\frac{dlnJ}{dz}\frac{J^{^{}}}{J}+\frac{dlnJ}{dz}\frac{\mathcal{H}^{^{}}}{\mathcal{H}}$$
(13)
$$\frac{d}{dz}\left(\frac{\mathcal{H}^{^{}}}{\mathcal{H}}\right)=\left(\frac{\mathrm{\Delta }_t}{2ϵH_p}-\frac{d\mathrm{ln}\mathcal{H}}{dz}\right)\frac{\eta ^{^{}}}{\eta }+\frac{\mathrm{\Delta }_t}{2ϵH_p}\frac{a^{^{}}}{a}-\frac{(1+\mathcal{R}_c)(4+3\tau _a^2)}{16ϵH_p}\frac{J^{^{}}}{J}+\frac{1}{qH_p}\frac{T^{^{}}}{T}-\frac{d\mathrm{ln}\mathcal{H}}{dz}\frac{\mathcal{H}^{^{}}}{\mathcal{H}}$$
(14)
where $`\mathcal{H}=𝐅/4\pi `$ is the Eddington flux, $`\eta =\kappa \rho `$, $`H_p`$ is the pressure scale-height at the bottom and the various other quantities are as defined below:
$$\mathcal{R}_c=\frac{J}{S}-1$$
(15)
is a measure of departure from radiative equilibrium,
$$\mathrm{\Delta }_t=\frac{J-J_e}{S}$$
(16)
is the ratio of the excess of the mean intensity inside the tube to the Planck function; $`ϵ`$ and $`q`$ are the ratios,
$$ϵ=\frac{\tau _r}{\tau _{th}}$$
(17)
$$q=\frac{\tau _N}{\tau _{th}}$$
(18)
where
$$\tau _{th}=\frac{\rho c_vTH_p}{\mathcal{H}}$$
(19)
is the radiative relaxation time over the length of one pressure scale-height at the bottom,
$$\tau _r=\frac{\rho c_va^2}{K},K=\frac{16\sigma T^3}{3\kappa \rho }$$
(20)
is the radiative relaxation time across the tube in the optically thick limit and
$$\tau _N=\frac{c_vT}{4\kappa S}$$
(21)
is the radiative relaxation time that one obtains in the optically thin limit (with $`Newton^{}s`$ $`law`$ $`of`$ $`cooling`$)(Spiegel,1957); and,
$$\tau _a=\kappa \rho a$$
(22)
is the depth dependent optical thickness of the tube. We reduce the perturbation equations to a final set of four equations for the four variables $`\xi `$, $`p^{^{}}/p`$, $`J^{^{}}/J`$ and $`\mathcal{H}^{^{}}/\mathcal{H}`$. The optically thick reduction of the set of equations corresponds to taking the limit $`\tau `$ tending to infinity and replacing the mean intensity by the Planck function. The optically thin case is obtained in the limit $`\tau _{th}`$ approaching zero.
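To get a feel for the regime these time-scales put solar tubes in, the following order-of-magnitude evaluation uses rough photospheric numbers (all assumed for illustration, not taken from the model above) for a tube of 100 km radius. The lateral relaxation time comes out to be of order seconds, much shorter than convective growth times, which is the physical reason why thin tubes resist collapse.

```python
import numpy as np

# Rough photospheric numbers (illustrative only)
kappa = 0.5          # cm^2 g^-1, Rosseland mean opacity near tau = 1
rho   = 3.0e-7       # g cm^-3
T     = 6.4e3        # K
c_v   = 1.0e8        # erg g^-1 K^-1
sigma = 5.67e-5      # erg cm^-2 s^-1 K^-4
a     = 1.0e7        # cm, tube radius (100 km)

tau_a = kappa * rho * a                            # optical thickness of the tube, Eq. (22)
K     = 16.0 * sigma * T**3 / (3.0 * kappa * rho)  # radiative conductivity of Eq. (20)
tau_r = rho * c_v * a**2 / K                       # optically thick relaxation time, Eq. (20)

print("tau_a ~", round(tau_a, 2))                  # of order unity for a ~ 100 km
print("tau_r ~", round(tau_r, 1), "s")             # seconds: fast lateral exchange
```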
### 2.3. Boundary Conditions
We use closed mechanical boundary conditions and as thermal conditions we impose that there is no incoming radiation from above at the top boundary and that the perturbations are adiabatic at the bottom boundary.
## 3. Results and Discussion
The action of radiation that we bring out in this study of convective instability and wave-motions of the tube is explained conveniently with the help of the graphs shown in $`figs.(1)`$ and $`(2)`$ where we plot the growth rates and frequencies of the fundamental mode, which is the most unstable, respectively, as a function of the surface tube radius $`a_0`$ for various values of the plasma $`\beta `$.
### 3.1. Convective Instability
The onset of the convective instability corresponds to the cusps in the curves of $`fig.(1)`$ where the overstable mode’s frequency becomes zero and the growth rate shoots up sharply. Comparison with the results obtained with only the lateral exchange in the diffusion approximation (Venkatakrishnan 1986, Hasan 1986) or in the Newton’s law of cooling reveal that these earlier treatments overestimate the degree of instability: the growth rates obtained with the present more accurate treatment of radiation with the inclusion of vertical exchange of radiation are appreciably smaller; moreover, for a given value of $`\beta `$, i.e. for a tube of given strength the onset of convective instability requires the size of the tube to be greater in the present case than that is required when the diffusion approximation is used.
The convective instability is completely suppressed for tubes with the plasma $`\beta `$ smaller than $`2.45`$ whatever be its size; this corresponds to a field strength of about $`1160`$Gauss at $`\tau =1`$ inside the tube; this has to be compared with the value of $`1350`$G($`\beta =1.83`$) that Spruit and Zweibel obtained in the adiabatic case. We point out here that the field strength of $`1160`$G that we obtain here does not necessarily imply that all collapsing tubes of weaker fields will attain this unique value and become stable; this value represents a necessary strength for stability against convective collapse and thus can be considered as a minimum strength for stability; a collapsing weaker tube of sufficient size can of course attain an equilibrium collapsed state of field strength higher than this value(cf. Spruit 1979).
### 3.2. Size-strength Distribution for the Solar Tubes
From the positions of the cusps in the curves of $`fig.(1)`$, i.e. from the positions that mark the onset of the convective instability we pick up the values of the plasma $`\beta `$ and the radius $`a_0`$; the resulting radius-field strength dependence is shown in $`fig.(3)`$; the corresponding flux-strength distribution in $`fig.(4)`$. Comparison of this curve with those observationally produced(Solanki et al 1997, Lin 1996) shows a remarkable agreement leading to the conclusion that indeed the convective collapse is the cause of the formation of the flux elements on the Sun’s surface. Our refined, realistic, and exact treatment of the convective collapse process on the Sun reinforces the conclusions drawn from earlier simplified treatments(Venkatakrishnan 1986, Hasan 1986) and verifies the original suggestion and the physical explanation by Parker(1978) for the concentrated small scale magnetic structures on the Sun.
### 3.3. Overstability
The characteristics of the overstable mode are explained with the help of $`fig.(1)`$ again; the growth rates in the present Eddington approximation are lower than those obtained in the earlier treatments, which do not take the vertical losses into account and use either the diffusion approximation or Newton's law of cooling; the differences thus demonstrate that overstability is hindered by the vertical losses.
We point out that while there is no damping out of oscillations when only lateral exchange takes place, and the oscillations' growth rate only asymptotically goes to zero in the limit of large radii, i.e. in the adiabatic limit, the inclusion of vertical radiative losses makes the oscillations damp out for radii greater than a particular finite value which is determined by the $`\beta `$; thus it is clarified that the horizontal exchange between vertically oscillating fluid elements acts to amplify the oscillations while the vertical losses always try to smooth out the fluctuations, thereby introducing damping; this $`radiative`$ $`damping`$ is so severe for the overstable mode of an intense flux tube on the Sun that the mode gets completely damped out for tubes of radii larger than a certain critical value.
Finally we note that a tube which can undergo convective collapse for radii greater than a critical value for a given field strength remains overstable for all smaller radii that it can take.
## 4. Conclusions
* We have demonstrated that radiative transport has a marked effect on the size-field strength relation for solar flux tubes. Our results can be applied to solar flux tubes more reliably in view of the more refined treatment of radiative transfer.
* We have generalized the necessary condition for the onset of convective instability in the presence of radiative energy exchange. We find that radiation has a stabilizing influence which is greater for tubes with small radius.
* Overstability of the longitudinal slow mode is shown to be dependent on the tube radius: there is a critical tube radius above which a strong field convectively stable tube’s oscillations get damped as a result of vertical radiative losses.
## References
Cox, J.P. 1980, Theory of Stellar Pulsation, Princeton University Press.
Hasan, S.S. 1984, ApJ, 285, 851
Hasan, S.S. 1984, A&A, 143, 39
Hasan, S.S. 1986, MNRAS, 219, 357
Hasan, S.S., Kneer, F., Kalkofen, W. 1998, A&A, 332, 1064
Lin, H. 1995, ApJ, 446, 421
Parker, E.N. 1978, ApJ, 211, 368
Rogers, F.J., Iglesias, C. 1992, ApJS, 79, 507
Schussler, M. 1990 in Solar Photosphere:Structure,Convection and Magnetic fields ed. by J.O.Stenflo, IAU-Symp. No.138
Schussler, M. 1991 in The Sun: A Laboratory for Astrophysics, 191-220, ed. by J.T.Schmelz and J.C.Brown
Solanki, S.K. et al, 1996, A&A, 310, L33-L36
Spiegel, E.A. 1957, ApJ, 126, 202
Spruit, H. 1979, Sol. Phys., 61, 363
Spruit, H., Zweibel, E. 1979, Sol. Phys., 62, 15
Stenflo, J.O., Harvey, J. 1985, Sol. Phys., 95, 99
Unno, W., Spiegel, E.A. 1966, PASJ, 18, 85
Venkatakrishnan, P. 1986, Nature, 322, 156
Webb, A.R., Roberts, B. 1978, Sol. Phys., 59, 249
Zayer, I., Solanki, S.K., Stenflo, J.O. 1989, A&A, 211, 463
Zayer, I., Solanki, S.K., Stenflo, J.O., Keller, C.U., 1990, A&A, 239, 356
no-problem/9901/hep-ph9901350.html
# On Exotic Solutions of the Atmospheric Neutrino Problem.
## 1 Introduction
The measurements of the fluxes of atmospheric neutrinos by the Super–Kamiokande (SK) experiment show evidence for the disappearance of muon (anti)–neutrinos. The same indication comes from the older data of the Kamiokande and IMB experiments and the recent ones of Soudan–2 . Also the results recently presented by the MACRO collaboration indicate a suppression of the muon (anti)–neutrino flux.
The simplest explanation of the data is the existence of $`\nu _\mu \nu _\tau `$ oscillations . In the framework of flavor oscillations one should consider the more general case of three flavors (with the Chooz experiment giving important constraints on the electron neutrino transitions), and could also envisage more complex scenarios involving sterile states . We will not pursue these possibilities here, and we will adopt instead the simplest scenario of two-flavor oscillations as a prototype model that, as we will see, is able to describe successfully the experimental data.
We will instead investigate if other forms of 'new physics' beyond the standard model, different from standard flavor oscillations, can also provide a satisfactory description of the existing data. Indeed several other physical mechanisms have been proposed in the literature as viable explanations of the atmospheric neutrino data. In this work we will consider three of these models: neutrino decay , flavor changing neutral currents (FCNC) , and violations of the equivalence principle or, equivalently, of Lorentz invariance . All these models have the common feature of 'disappearing' muon neutrinos, but the probability depends in different ways on the neutrino energy and path. To discriminate between these models a detailed study of the disappearance probability $`P`$ and of its functional form is needed.
In this work, in contrast with previous analyses, we will argue that the present data allow us to exclude the three 'exotic' models, at least in their simplest form, as explanations of the atmospheric neutrino problem. This is mainly due to the difficulty that these models have in fitting at the same time the SK data for leptons generated inside the detector (sub- and multi-GeV) and for up-going muons generated in the rock below it.
## 2 Data
In fig. 1, 2 and 3 we show (as data points with statistical error bars) the ratios between the SK data and their Montecarlo predictions calculated in the absence of oscillations or other form of ‘new physics’ beyond the standard model. In fig. 1 we show the data for the $`e`$–like contained events, in fig. 2 for $`\mu `$–like events produced in the detector, and in fig. 3 for upward-going muon events, as a function of zenith angle of the detected lepton. In each figure we include four lines: the dotted line (a constant of level unity) corresponds to exact agreement between data and no–oscillation Montecarlo, including the absolute normalization. The dot–dashed lines correspond to the assumption that there is no deformation in the shape of the zenith angle distributions, but that one is allowed to change the normalization of each data sample independently. The values obtained are: 1.16 for $`e`$–like sub-GeV, 1.21 for $`e`$–like multi-GeV, 0.72 for $`\mu `$–like sub-GeV, 0.74 for $`\mu `$–like multi-GeV, 0.56 for stopping upward-going muons, and 0.92 for passing upward-going muons. For two sets of data (sub-GeV and multi-GeV $`\mu `$–like events) the constant shape fits give very poor descriptions ($`\chi ^2=26`$ for the sub-GeV and 33 for the multi-GeV for 4 d.o.f). Also the zenith angle shape of the passing upward-going muons is not well fitted by the no–oscillation Montecarlo ($`\chi ^2=17`$ for 9 degrees of freedom). The electron data do not show clear evidence of deformations, although the constant shape fit for the sub-GeV events ($`\chi ^2=9.7`$ for 4 d.o.f.) is rather poor.
The normalizations of the different data sets are of course strongly correlated, and therefore it is not reasonable to let them vary independently. The other extreme option, that we will adopt in this work for simplicity, is to use one and the same parameter to fix the normalization of the six data samples. The result for constant shapes (i.e. assuming no 'new physics' beyond the standard model) is represented by the dashed lines in fig. 1, 2 and 3, corresponding to a value 0.84 and a very poor fit ($`\chi ^2=280`$ for 34 d.o.f.).
The full lines in the figures correspond to our best fit assuming $`\nu _\mu \nu _\tau `$ oscillations with maximal mixing. We define the $`\chi ^2`$ as follows:
$$\chi ^2=\underset{j}{}\left[\frac{N_j\alpha N_j^{th}(N_{j,MC}^{SK}/N_{j,0}^{th})}{\sigma _j}\right]^2$$
(1)
In (1) the summation runs over all data bins, $`N_j`$ is the SK result for the $`j`$-th bin, $`\sigma _j`$ its statistical error, $`N_j^{th}`$ our prediction, $`N_{j,0}^{th}`$ our prediction in the absence of oscillations, $`N_{j,MC}^{SK}`$ the no-oscillation prediction of Super-Kamiokande, and $`\alpha `$ allows for variations in the absolute normalization of the prediction. We have rescaled our prediction to the SK Montecarlo because we do not have a sufficiently detailed knowledge of the detector response (e.g. number of detected rings) and efficiency. For the same input neutrino spectra the difference between our no-oscillation calculation (see for a description) and the SK Montecarlo result is approximately 10%.
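In code, Eq. (1) is simply a weighted sum of squares with one shared normalization. The sketch below shows the bookkeeping (all numbers are hypothetical two-bin placeholders, not SK data); minimizing it over $`\alpha `$ and the model parameters gives the fits quoted in the next paragraph.

```python
import numpy as np

# chi^2 of Eq. (1): data vs. prediction rescaled to the SK no-oscillation Monte Carlo,
# with a single free normalization alpha shared by all samples.
def chi2(alpha, N_data, sigma, N_th, N_th_0, N_MC_SK):
    pred = alpha * N_th * (N_MC_SK / N_th_0)
    return np.sum(((N_data - pred) / sigma) ** 2)

# Hypothetical two-bin example, only to show the call signature.
N_data  = np.array([120.0,  80.0])
sigma   = np.array([ 11.0,   9.0])
N_th    = np.array([100.0, 100.0])   # prediction of the model being tested
N_th_0  = np.array([140.0, 140.0])   # our no-oscillation prediction
N_MC_SK = np.array([150.0, 150.0])   # SK no-oscillation Monte Carlo

alphas = np.linspace(0.5, 1.5, 101)
best = min(alphas, key=lambda a: chi2(a, N_data, sigma, N_th, N_th_0, N_MC_SK))
print("best alpha ~", best)
```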
For our best fit the values of the relevant parameters are $`\alpha =1.15`$ and $`\mathrm{\Delta }m^2=3.2\times 10^{-3}\mathrm{eV}^2`$. The $`\chi ^2`$ is 33.3 for 33 d.o.f.
Our definition of the $`\chi ^2`$ is somewhat simplistic. We do not take into account the contribution of systematic errors, either in the data or in the theory. The assumption of a common $`\alpha `$ for $`e`$–like and $`\mu `$-like events corresponding to different energy regions is certainly too strict. It is therefore remarkable that this fit is so good, and essentially in agreement (same normalization and very near $`\mathrm{\Delta }m^2`$ value) with the much more elaborate fit in .
In the rest of this paper we will consider other, ‘exotic’ models and we will find that they are not able to provide a satisfactory fit to the same data.
## 3 Models
We briefly recall the essential points of the models we are discussing.
For the usual, two–neutrino flavor oscillations the ‘disappearance probability’ $`P`$ is given by:
$$P=P_{\nu _\mu \nu _\tau }^{osc}=\mathrm{sin}^22\theta \mathrm{sin}^2\left[\frac{\mathrm{\Delta }m^2}{4}\frac{L}{E_\nu }\right],$$
(2)
with the very characteristic sinusoidal dependence on the ratio $`L/E_\nu `$.
In the simplest realization of neutrino decay, neglecting the possibility of the simultaneous existence of neutrino oscillations, the disappearance probability is given by:
$$P=P^{dec}=1\mathrm{exp}\left[\frac{m_\nu }{\tau _\nu }\frac{L}{E_\nu }\right],$$
(3)
still depending on the ratio between neutrino pathlength and energy $`L/E_\nu `$, but with a functional form different from (2).
If flavor changing neutral currents contribute to the interaction of neutrinos with ordinary matter, a non trivial flavor evolution will develop even for massless neutrinos as originally noted by Wolfenstein . There are several theoretical models generically predicting nondiagonal neutrino interactions with matter. In particular such models have been proposed as a possible consequence of $`R`$-parity violating interactions in supersymmetric models and suggested as solutions of both the solar and atmospheric neutrino problems. Let us call $`V_{\alpha \beta }`$ the effective potential that arises from the forward scattering amplitude of a neutrino with a fermion $`f`$: $`\nu _\alpha +f\nu _\beta +f`$. In the standard model $`V_{\mu \tau }=V_{\tau \mu }=0`$, and $`V_{\mu \mu }=V_{\tau \tau }=\sqrt{2}G_FT_3(f_L)N_f`$ where $`G_F`$ is the Fermi constant, $`N_f`$ is the number density of the fermion $`f`$ and $`T_3(f_L)`$ is the third component of the fermion’s weak isospin. Since the effective potentials for muon and tau neutrinos are identical, there is no effect on standard oscillations. However, if the scattering amplitudes are different from those predicted by the standard model, and if flavor changing scattering can occur, then the effective potential acquires non diagonal terms $`V_{\mu \tau }=V_{\tau \mu }=\sqrt{2}G_FϵN_f`$, and different diagonal elements (with $`V_{\tau \tau }V_{\mu \mu }=\sqrt{2}G_Fϵ^{}N_f`$), and there will be a nontrivial flavor transition probability even for massless neutrinos. After the crossing of a layer of matter with a column density
$$X_f=_0^L𝑑L^{}N_f(L^{}),$$
(4)
the transition probability is:
$$P=P_{\nu _\mu \nu _\tau }^{FCNC}=\frac{4ϵ^2}{4ϵ^2+ϵ^2}\mathrm{sin}^2\left[\frac{G_F}{\sqrt{2}}X_f\sqrt{4ϵ^2+ϵ^2}\right].$$
(5)
The probability has again an oscillatory form, however in this case the role of $`L/E_\nu `$ is taken by the column density $`X_f`$ and there is no dependence on the neutrino energy.
If the gravitational coupling of neutrinos are flavor dependent (implying a violation of the equivalence principle) mixing will take place for neutrinos traveling in a gravitational field even for massless neutrinos . The neutrino states with well defined coupling to the gravitational field define a ‘gravitational basis’ related to the flavor basis by a unitary transformation. The effective interaction energy matrix of neutrinos in a gravitational field can be written in an arbitrary basis as
$$H=2|\varphi (r)|E_\nu (1+f)$$
(6)
where $`E_\nu `$ is the neutrino energy, $`\varphi (r)=|\varphi (r)|`$ is the gravitational potential, and $`f`$ is a (small, traceless) matrix that parametrize the possibility of non–standard coupling of neutrinos to gravity and is diagonal in the gravitational basis.
Much in the same way as in the previous cases, the noncoincidence of gravitational and flavor eigenstates determines mixing and flavor transitions. Considering the simple case of two flavors and assuming a constant gravitational potential $`|\varphi |`$, the transition probability takes the form
$$P=P_{\nu _\mu \nu _\tau }^{grav}=\mathrm{sin}^2(2\theta _G)\mathrm{sin}^2[\delta |\varphi |E_\nu L].$$
(7)
where $`\theta _G`$ is the mixing angle and $`\delta `$ is the difference between the coupling to gravity of the gravitational eigenstates. Note that in this case the argument of the oscillatory function is proportional to the product of the neutrino energy and pathlength, whereas for the standard flavor oscillations it is the ratio of the same quantities that matters.
Equations (2), (3), (5) and (7) are the disappearance probabilities for the four mechanisms that we will confront with the experimental data.
## 4 Flavor Oscillations
It is interesting to discuss how the usual flavor oscillations can successfully reproduce the pattern of suppression measured for the different event samples. The events detected in one particular bin are produced by neutrinos with a predictable distribution of $`E_\nu `$ and pathlength $`L`$, and therefore of $`L/E_\nu `$, the significant quantity in flavor oscillations. In fig. 4, in the top panel we show as a function of $`L/E_\nu `$ the survival probability corresponding to maximal mixing and $`\mathrm{\Delta }m^2=3.2\times 10^3`$ eV<sup>2</sup> (our best fit point). Also shown with a dashed line is the survival probability for neutrino decay that we will discuss in the next section.
In the second panel we show the $`L/E_\nu `$ distributions of sub–GeV $`\mu `$–like events in the five zenith angle bins used by the SK collaboration: $`\mathrm{cos}\theta _\mu [1,0.6]`$, $`[0.6,0.2]`$ $`[0.2,+0.2]`$, $`[0.2,0.6]`$ and $`[0.6,1.0]`$ (corresponding to the thick solid, thick dashed, thin dot–dashed, thin dashed and thin solid line).
In the third panel we show the corresponding distributions for multi-GeV $`\mu `$–like events (same coding for the lines).
In the fourth panel we show the $`L/E_\nu `$ distributions for upward going muons that stop in the detector in the zenith angle bins: $`\mathrm{cos}\theta _\mu [1,0.8]`$, $`[0.8,0.6]`$ $`[0.6,0.4]`$, $`[0.4,0.2]`$ and $`[0.2,0.0]`$ with the corresponding lines ordered from right (higher values of $`L/E_\nu `$) to left (lower values of $`L/E_\nu `$).
In the last panel we show the same distributions for passing upward going muons in ten zenith angle bins, $`\mathrm{cos}\theta _\mu [1,0.9]`$, $`\mathrm{}`$, $`[0.1,0.0]`$.
Some remarks can be useful for an understanding of the distributions shown in fig. 4. For the sub-GeV events, one can see that the parent neutrinos have $`L/E_\nu `$ spread over a broad range of values. This is due to the poor correlation between the neutrino and muon directions $`\theta _{\nu \mu }53^{}`$.
For multi-GeV data the distributions are much narrower, reflecting the tighter correlation between the neutrino and muon directions, $`\theta _{\nu \mu }13^{}`$. Note also that the peaks in the $`L/E_\nu `$ distributions corresponding to sub-GeV and multi-GeV events in the same zenith angle interval are at slightly different points because of the different energy of the parent neutrinos.
For up–going stopping muons the width of the distribution is wider than in the multi-GeV case. The correlation between the muon and neutrino directions $`\theta _{\nu \mu }10^{}`$ is actually better, but the width of the distribution reflects the wider energy range of the neutrinos contributing to this signal. Passing muons are nearly collinear with the parent neutrinos ($`\theta _{\nu \mu }2.9^{}`$), but the large energy range of the neutrinos that extend over nearly two decades ($`E_\nu 10`$–10<sup>3</sup> GeV) results in a wide $`L/E_\nu `$ distribution.
All curves in the lower four panels of fig. 4 are normalized to unit area. In order to obtain the suppression due to oscillations in a particular bin, one has to perform the integral:
$$N_j^{osc}(\mathrm{sin}^22\theta ,\mathrm{\Delta }m^2)=𝑑x\frac{dN_j^0}{dx}[1P^{osc}(x,\mathrm{sin}^22\theta ,\mathrm{\Delta }m^2)]$$
(8)
Comparing the survival probability with the $`L/E_\nu `$ distributions it is easy to gain a qualitative understanding of the effects produced. For $`\mathrm{\Delta }m^2310^3`$ eV<sup>2</sup>, neutrinos with $`L/E_\nu \stackrel{<}{_{}}10^2`$ Km/GeV have a survival probability close to unity and do not oscillate, while for neutrinos with $`L/E_\nu \stackrel{>}{_{}}10^3`$ Km/GeV, averaging over the rapid oscillations, the survival probability becomes one half for maximal mixing. We recall that horizontal neutrinos travel an average pathlength of $`600`$ Km.
Taking into account the $`L/E_\nu `$ distributions of the different set of events one can see that all zenith angle bins of the muon sub–GeV events are somewhat suppressed, because even vertically downward going muons can be produced by upgoing neutrinos.
For multi–GeV events, with the tighter correlation between the neutrino and muon directions, the two up–going bins are suppressed by the ‘average’ factor $`0.5`$, the two down–going bins are left unchanged and the horizontal muons have an intermediate suppression.
The up–going stopping muons are always suppressed by a factor $``$ 1/2, except for the bin nearest to the horizontal.
For the up–going passing muons the larger average energy and therefore smaller $`L/E_\nu `$ explains the smaller suppression and its pattern, varying from nearly unity for the horizontal bin to a maximum of $`0.65`$ for the vertical one.
## 5 Exotic Models
### 5.1 Neutrino Decay
Fitting the sub-GeV and multi-GeV data of Super-Kamiokande with the simplified model of muon neutrino decay (that neglects mixing) given in (3), we find a minimum in the $`\chi ^2`$ for a value $`\tau _\nu /m_\nu =8900`$ Km/GeV (with $`\alpha =1.07`$). This is in good agreement with the results of . The authors of this reference have as a best fit point $`\tau _\nu /m_\nu 12800`$ Km/GeV, with a small mixing angle $`\mathrm{sin}^22\theta 0.06`$. The curve describing the decay probability for our best fit is shown as the dashed line in the top panel of fig. 4.
It is simple to have a qualitative understanding of the value of $`\tau _\nu /m_\nu `$ that provides the best fit. One needs to suppress by a factor $`0.5`$ the up-going multi-GeV muons that have $`L/E_\nu 10^{3.5}`$ Km/GeV (see fig. 4).
The inclusion of decay results in $`\chi ^2=71`$ (for 18 d.o.f.), a very significant improvement over the value 234 (19 d.o.f.) of the “standard model”, but still significantly worse than the value $`\chi ^225`$ of the $`\nu _\mu \nu _\tau `$ flavor oscillation fit to the same set of data.
For a value of $`\tau _\nu /m_\nu `$ of the order of what is given by our fit to the sub–GeV and multi–GeV data, one expects a much smaller suppression of the high energy passing up–going muons (as already noted in ). In fact including also the 15 data points of the up-going muons in a new fit, the best fit point becomes $`\tau _\nu /m_\nu =10000`$ Km/GeV (similar to the previous one), but $`\chi ^2`$ increases to the much higher value 140.
### 5.2 Violation of the equivalence principle
Performing a fit to the sub-GeV and multi-GeV data of Super Kamiokande with the disappearance probability given by (7) and with maximal mixing ($`\theta _G=\pi /4`$), we find a minimum in the $`\chi ^2`$ for a value $`\delta |\varphi |=410^4`$ Km<sup>-1</sup>GeV<sup>-1</sup> (with $`\alpha =1.10`$). The $`\chi ^2`$ for this fit is 35 for 18 d.o.f., still a very significant improvement over the standard model case, but not as good as the flavor oscillations result. The survival probability given by our best fit is shown in the top panel of fig. 5.
The reason of the poor $`\chi ^2`$ can qualitatively be understood looking at fig. 5. This figure is the equivalent of fig. 4, in the sense that the four lower panels show the distributions in the variable that is relevant in this case, namely $`LE_\nu `$. The distributions in this variable for the sub–GeV and multi–GeV events have shapes similar to the corresponding ones in $`L/E_\nu `$, because the width of the distributions is mostly determined by the spread in pathlength $`L`$. However the average value of the $`LE_\nu `$ of the sub–GeV events is lower than the corresponding one (same zenith angle bin) for multi–GeV events, the opposite of what happens in the $`L/E_\nu `$ distributions, see fig. 4. Therefore, parameters describing well multi–GeV events will generally produce too low a suppression for sub–GeV events or viceversa.
It can be argued (as the authors of reference do) that taking into account systematic uncertainties the model defined by equation (7) provides a good fit to the data, however this is not the case if upward–going muons are included in the picture. This should be evident looking at the lower panels in fig. 5. Upward–going muons are produced by high energy neutrinos and the frequent oscillations do imply a suppression by 50% of passing (and stopping) muons, with no deformation of the zenith angle distribution. This is in disagreement with the corresponding data: in fact, trying to fit all the data together we obtain similar best fit parameters, $`\delta |\varphi |=4.510^4`$ Km<sup>-1</sup>GeV<sup>-1</sup> and $`\alpha =1.145`$, but with a very bad $`\chi ^2=142.7`$ for 32 d.o.f. (the contribution of passing upward–going muon data being $`100`$).
### 5.3 Flavor Changing Neutral Currents
In the case of neutrino transitions produced by flavor changing neutral currents, the rôle of $`L/E_\nu `$ is replaced by $`X`$, the column density. This has the fundamental consequence that there is no energy dependence of the flavor conversion. Moreover since air has a density much lower that the Earth’s, the transitions do not develop during the neutrino path in the atmosphere, and therefore down–going neutrinos are unaffected. Note also that there is not a simple relation between the zenith angle $`\theta _\nu `$ and the pathlength $`L`$ because of fluctuations in the neutrino birth position. However, due the air low density, the zenith angle $`\theta _\nu `$ does define the column density $`X`$ with a negligible error: the entire down–going hemisphere corresponds to $`X0`$ and to a vanishing transition probability.
Performing, as before, a fit to the sub-GeV and multi-GeV data of Super Kamiokande with the disappearance probability given by (5) and assuming scattering off down quarks and $`ϵ^{}=0`$ (that is maximal mixing), we obtain a best fit value $`ϵ=0.4`$ and $`\alpha =1.08`$ corresponding to a minimum $`\chi ^2=38`$. With increasing $`ϵ`$ the oscillations become more frequent, and essentially all values $`ϵ\stackrel{>}{_{}}0.4`$ give comparable fits, since for these large values the oscillations can be considered as averaged in the entire up–going hemisphere.
The authors of reference , exploring the parameter space ($`ϵ,ϵ^{}`$) find two solutions: (a): (0.98,0.02) and (b): (0.08,0.07), that are plotted in the upper panels of fig. 6. The first solution corresponds to the one that we have found, considering the slow variation of $`\chi ^2`$ with $`ϵ`$ in the large $`ϵ`$ region. The $`\chi ^2`$ found by the authors of is however better that what we find, indeed as good as in the flavor oscillation model.
We do find that fitting the muon data only, without considering the constraint on the normalization coming from the electron data, the FCNC model gives an excellent fit, indeed as good or better than the flavor oscillation model. The reason why, in our fitting procedure, the FCNC model gives not as good a fit originates from the fact that the theoretical average value of the suppression for both sub–GeV and multi–GeV muon events for the best fit parameters is $`0.75`$, corresponding to no suppression in the down–going hemisphere and $`0.5`$ in the opposite one. The data for the double ratio $`R=(\mu /e)_{Data}/(\mu /e)_{MC}`$: $`R_{sub}=0.61\pm 0.03\pm 0.05`$, and $`R_{multi}=0.66\pm 0.06\pm 0.08`$ indicate a larger average suppression. The allowance of a non perfect correlation between the normalizations of the muons and electron data samples would certainly reduce the $`\chi ^2`$ value of our fit.
The inclusion of up–going muons among the data considered, again results in evidence against this model. We recall the fact that the passing muons are essentially collinear with the parent neutrinos and that the experimental zenith angle distribution does not exhibit large sharp features as those predicted for example by solution (b) of (see fig. 6). Therefore the relative smoothness of the passing muon data allows to exclude a large range of values ($`ϵ,ϵ^{}`$) that correspond to few oscillations in the up–going hemisphere (that is $`0.04\stackrel{<}{_{}}\sqrt{4ϵ^2+ϵ^2}\stackrel{<}{_{}}0.2`$) and still large effective mixing $`ϵ\stackrel{>}{_{}}ϵ^{}`$.
The solution (a) of cannot be excluded using this consideration, because its frequent oscillations do not produce sharp features given the binning of the experimental data, and give a constant suppression $`2ϵ^2/(4ϵ^2+ϵ^2)`$ for all zenith angle bins. The model has no energy dependence, and therefore this average suppression must apply to the up–going passing and stopping events, as well as to the up–going multi–GeV events, that have also a rather sharp correlation between the neutrino and muon directions. This is in disagreement with two features of the experimental data: (i) the passing muons have a suppression considerably less than both the stopping and up–going multi–GeV muons; (ii) the shape of the zenith angle distribution of passing muons shows evidence for a deformation. More quantitatively, a fit to all the data with $`ϵ^{}=0`$ gives the parameter values $`ϵ=1.4`$ and $`\alpha =1.12`$ but with a total $`\chi ^2=149`$ (the contribution to $`\chi ^2`$ of the throughgoing muon data being 105).
## 6 Summary and conclusions
The survival probability $`P(\nu _\mu \nu _\mu )`$ in the case of two flavor $`\nu _\mu \nu _\tau `$ oscillations has a well defined dependence on the pathlength and energy of the neutrinos. In order to establish unambiguosly the existence of such oscillations it is necessary to study in detail these dependences. In the analysis of the events interacting in the detector, one can study a very wide range of pathlengths ($`10\stackrel{<}{_{}}L\stackrel{<}{_{}}10^4`$ Km) but a much smaller range of neutrino energies close to 1 GeV (the sub–GeV and multi–GeV samples). Therefore it is not easy to obtain experimental information on the dependence of the survival probability on the neutrino energy. In fact models where the combination $`L/E_\nu `$ (flavor oscillations), $`LE_\nu `$ (violations of the equivalence principle) and $`XL`$ (flavor changing neutral currents) is the relevant variable for an oscillating transition probability, have been proposed as viable solutions of these data. Neutrino decay is also dependent on the ratio $`L/E_\nu `$, but with a different functional form.
In this study we find that flavor oscillations provide a significantly better fit to the sub–GeV and multi–GeV data samples than the exotic alternatives we have considered, however with a generous allowance for systematic uncertainties the alternative explanations can still be considered as viable. Including the upward going muons in the fit the alternative models are essentially ruled out.
The upward–going muons are a set of $`\nu `$–induced events corresponding to much larger $`E_\nu `$: for passing muons the median parent neutrino energy is approximately 100 GeV, with a significant contribution of neutrinos with energy as large as 1 TeV, and therefore are in principle a powerful handle to study the energy dependence of the neutrino survival probability. If flavor oscillations (where $`L/E_\nu `$ is the significant variable) are the cause of the suppression of sub–GeV and multi–GeV muon events, the neutrinos producing passing upward going muons must also oscillate, but with a smaller suppression because of their larger energy; moreover, for the range of $`\mathrm{\Delta }m^2`$ suggested by the lower energy data, one expects a moderate but detectable deformation of the zenith angle distribution. Both effects are detected.
In the alternative exotic models we have studied here, high energy events, such as the passing upward–going muons, are suppressed much more ($`LE_\nu `$) than or as much ($`XL`$) as the up–going multi–GeV events, in contrast to the experimental evidence.
Also in the case of neutrino decay, the upward–going muon data are very poorly fitted by the model. Of course if neutrino have different masses (and can decay) it is natural to expect oscillations in combination with decay. We have not explored this possibility, but we can conclude that decay cannot be the dominant form of muon neutrino disappearance.
Two results of the measurements of upward going–muons are critically important to allow discrimination against exotic models and in favour of usual oscillations:
* the stopping/passing (Data/Montecarlo) double ratio for the SK upward–going muons is $`r=0.56`$, with a combined statistical and experimental systematic error of 0.07. The theoretical uncertainty in the relative normalization of the two sets of data has been estimated as 8% in ; more conservatively the SK collaboration has used 13%. Quadratically combining the more conservative estimate of the theoretical uncertainty with the experimental errors, the resulting $`\sigma _r`$ is 0.1. Therefore the suppression for the high energy passing muons, is weaker than for the lower energy stopping ones at more than four sigma of significance, even allowing for a rather large uncertainty in the theoretical prediction. This is in contrast with models that predict for the stopping/passing double ratio $`r`$ a value of unity (flavor changing neutral currents) or larger (violations of the equivalence principle).
* The shape of the through–going upward–going muons zenith angle distribution shows indication of a deformation, although the no–distortion hypothesis (with free normalization) has a probability close to 5%. The deformation if present is a rather smooth one, and the distribution can be used to rule out models (such as FCNC with smallish $`ϵ`$) that produce deep and marked features in the neutrino distribution (well mapped by the nearly collinear muons).
The MACRO collaboration has also obtained results on upward–going muons , that indicate the presence of an angular deformation compatible with the presence of flavor oscillations (although the oscillation fit even if significantly better that the standard model fit is still rather poor). Preliminary results on events where upward–going muons are produced in (and exit from) the detector, and a second class of events that combines stopping upward–going muons and downward–going muons produced in the detector indicate a pattern of suppression that is only compatible with an oscillation probability that decreases with energy .
Also the Kamiokande collaboration has measured passing upward–going muons with results in good agreement with Super–Kamiokande, while the Baksan collaboration has obtained results not in good agreement. One should also note that the IMB collaboration has in the past measured a stopping/passing ratio for upward–going muons in agreement with a no oscillation Montecarlo prediction (see for a critical analysis).
In conclusion, we find that the present data on atmospheric neutrinos allow to determine some qualitative features of the functional dependence of the disappearance probability for muon neutrinos. This probability (smeared by resolution effects) increases with the pathlength $`L`$ producing the up/down asymmetry that is the strongest evidence for physics beyond the standard model. The difference in suppression between the sub(multi)–GeV muon events and the higher energy through–going muons indicates that the transition probability decreases with energy. These results are in agreement with the predictions of $`\nu _\mu \nu _\tau `$ oscillations and in contrast with several alternative exotic models. If flavor oscillations are indeed the mechanism for the muon neutrinos disappearance, additional data with more statistics and resolution (in $`L`$ and $`E_\nu `$) should allow to study in more detail the oscillatory structure of the transition probability as a function of the variable $`L/E_\nu `$, unambiguosly determining the physical phenomenon. It is natural to expect that the oscillations involve all flavors and that electron neutrinos participate in the oscillations (with a reduced mixing because of the Chooz limit). The resulting flavor conversions will have a more complex dependence on the neutrino path and energy $`E_\nu `$; the detection of these more subtle effects could become the next challenge for the experimentalists.
|
no-problem/9901/chao-dyn9901006.html
|
ar5iv
|
text
|
# Asymptotic Theory for the Probability Density Functions in Burgers Turbulence
\[
## Abstract
A rigorous study is carried out for the randomly forced Burgers equation in the inviscid limit. No closure approximations are made. Instead the probability density functions of velocity and velocity gradient are related to the statistics of quantities defined along the shocks. This method allows one to compute the anomalies, as well as asymptotics for the structure functions and the probability density functions. It is shown that the left tail for the probability density function of the velocity gradient has to decay faster than $`|\xi |^3`$. A further argument confirms the prediction of E et al. \[Phys. Rev. Lett. 78, 1904 (1997)\] that it should decay as $`|\xi |^{7/2}`$.
\]
In this Letter, we focus on statistical properties of solutions of the randomly forced Burgers equation
$$u_t+uu_x=\nu u_{xx}+f,$$
(1)
where $`f`$ is a zero-mean, statistically homogeneous, white-in-time Gaussian process with covariance
$$f(x,t)f(y,s)=2B(xy)\delta (ts),$$
(2)
where $`B(x)`$ is smooth. We are particularly interested in the probability density function (pdf) of the velocity gradient $`\xi (x,t)=u_x(x,t)`$, since it depends heavily on the intermittent events created by the shocks. Assuming statistical homogeneity, and letting $`Q(\xi ;t)`$ be the pdf of $`\xi (x,t)`$, it can be shown that $`Q`$ satisfies
$$Q_t=\xi Q+\left(\xi ^2Q\right)_\xi +B_1Q_{\xi \xi }\nu \left(\xi _{xx}|\xi Q\right)_\xi ,$$
(3)
where $`B_1=B_{xx}(0)`$. $`\xi _{xx}|\xi `$ is the ensemble-average of $`\xi _{xx}`$ conditional on $`\xi `$. The explicit form of this term is unknown, leaving (3) unclosed. There have been several proposals on how to approximately evaluate the quantity
$$F(\xi ;t)=\underset{\nu 0}{lim}\nu \left(\xi _{xx}|\xi Q\right)_\xi .$$
(4)
At steady state, they all lead to an asymptotic expression of the form
$$Q\{\begin{array}{cc}C_{}|\xi |^\alpha \hfill & \mathrm{as}\xi \mathrm{},\hfill \\ C_+\xi ^\beta \mathrm{e}^{\xi ^3/(3B_1)}\hfill & \mathrm{as}\xi +\mathrm{},\hfill \end{array}$$
(5)
for $`Q`$, but with a variety of values for the exponents $`\alpha `$ and $`\beta `$ (here the $`C_\pm `$’s are numerical constants). By invoking the operator product expansion, Polyakov suggested that $`F=aQ+b\xi Q`$, with $`a=0`$ and $`b=1/2`$. This leads to $`\alpha =5/2`$ and $`\beta =1/2`$. Boldyrev considered the same closure with $`1b0`$, which gives $`2\alpha 3`$ and $`\beta =1+b`$. The instanton analysis predicts the right tail of $`Q`$ without giving a precise value for $`\beta `$, but has not given any specific prediction for the left tail. E et al. made a geometrical evaluation of the effect of $`F`$, based on the observation that large negative gradients are generated near shock creation. Their analysis gives a rigorous upper-bound for $`\alpha `$: $`\alpha 7/2`$. In , it was claimed that this bound is actually reached, i.e., $`\alpha =7/2`$. Finally Gotoh and Kraichnan argued that the viscous term is negligible to leading order for large $`|\xi |`$, i.e. $`F0`$ for $`|\xi |B_1^{1/3}`$. This approximation leads to $`\alpha =3`$ and $`\beta =1`$. For other approaches, see e.g. . In this letter we proceed at an exact evaluation of (4) and we prove that $`\alpha `$ has to be strictly larger than $`3`$ (a result which does not require that steady state be reached). At steady state, we prove that $`\beta =1`$ and we give an argument which supports strongly the prediction of , namely, $`\alpha =7/2`$.
To begin with, let us remark that it is established in the mathematics literature that the inviscid limit
$$u^0(x,t)=\underset{\nu 0}{lim}u(x,t),$$
(6)
exists for almost all $`(x,t)`$. Since $`u^0`$ will in general develop shocks, say, at $`x=y`$, we may have $`u_x^0\delta (xy)`$, and one cannot simply drop the viscous term in the Burgers equation without giving some meaning to $`u^0u_x^0`$ at shocks. This can be done using BV-calculus, which allows one to write an equation for $`u^0`$ and gives rules for manipulating the terms entering this equation and computing the effect of the viscous term in the inviscid limit. An alternative, more intuitive, way of accessing the effect of the viscous shock on the velocity profile outside the shock is to carry out an asymptotic analysis near and inside the shock. Here we will take the second approach and refer the interested reader to for the first approach with BV-calculus. It is important to remark that the two approaches lead to the same results.
Before considering velocity gradient, it is helpful to study the statistics of velocity itself. Let $`R(u;t)`$ be the pdf of $`u(x,t)`$. Assuming statistical homogeneity, $`R`$ satisfies
$$R_t=B_0R_{uu}\nu \left(u_{xx}|uR\right)_u,$$
(7)
where $`B_0=B(0)`$. To compute $`\nu \left(u_{xx}|uR\right)_u`$, let us note that for $`\nu 1`$, the solutions of (1) consist of smooth pieces where the viscous effect is negligible, separated by thin shock layers inside which the viscous effect is important. Let $`u_{\text{out}}(x,t)`$ be the solution of the Burgers equation outside the viscous shock layer; $`u_{\text{out}}`$ can be obtained as a series expansion in $`\nu `$. To leading order in $`\nu `$, $`u_{\text{out}}`$ satisfies Riemann’s equation, $`u_t+uu_x=f`$. In order to deal with the shock layer, say at $`x=y`$, define
$$u_{\text{in}}(x,t)=v(\frac{xy}{\nu },t),$$
(8)
and write $`v=v_0+\nu v_1+O(\nu ^2)`$. To leading order, $`v_0(z,t)`$ satisfies $`(v_0\overline{u})v_{0}^{}{}_{z}{}^{}=v_{0}^{}{}_{zz}{}^{}`$, yielding $`v_0(z,t)=\overline{u}(s/2)\mathrm{tanh}(sz/4)`$ where $`\overline{u}=dy/dt`$ and $`s`$ is the jump across the shock. Consequently we have the following generic velocity profile inside the shock layer:
$$u_{\text{in}}(x,t)=\overline{u}\frac{s}{2}\mathrm{tanh}\left(\frac{s(xy)}{4\nu }\right)+O(\nu ).$$
(9)
The actual values of $`\overline{u}`$ and $`s`$ are obtained from the matching conditions between $`u_{\text{in}}`$ and $`u_{\text{out}}`$. In terms of $`v`$ and the stretched variable $`z`$, they are
$$\underset{z\pm \mathrm{}}{lim}v_0=\underset{xy0^\pm }{lim}u_{\text{out}}=\overline{u}\pm \frac{s}{2}.$$
(10)
It is well-known that $`s0`$.
We will use (9) to evaluate the viscous term in (7). By definition ,
$$\nu u_{xx}|uR=\nu \underset{L\mathrm{}}{lim}\frac{1}{2L}_L^L𝑑xu_{xx}\delta [uu(x,t)].$$
(11)
In the limit $`\nu 0`$ only small intervals around the shocks will contribute to the integral. So, we can split the integral into small pieces involving only the shock layers and use the generic form of $`u_{\text{in}}`$ in the layers to evaluate these integrals. To $`O(\nu )`$, this gives
$$\begin{array}{c}\nu u_{xx}|uR\hfill \\ =\nu \underset{L\mathrm{}}{lim}\frac{N}{2L}\frac{1}{N}\underset{j}{}_{\mathrm{j}\mathrm{th}\mathrm{layer}}𝑑xu_{\text{in}}^{}{}_{xx}{}^{}\delta [uu_{\text{in}}(x,t)]\hfill \\ =\rho 𝑑s𝑑\overline{u}T(\overline{u},s;t)_{\mathrm{}}^+\mathrm{}𝑑zv_{0}^{}{}_{zz}{}^{}\delta [uv_0(z,t)],\hfill \end{array}$$
(12)
where in the second integral we picked any particular shock layer and we went to the stretched variable $`z=(xy)/\nu `$. Here $`N`$ denotes the number of shocks in $`[L,L]`$, $`\rho =\rho (t)=lim_L\mathrm{}N/2L`$ is the shock density, and $`T(\overline{u},s;t)`$ is the probability density of $`\overline{u}(y,t)`$ and $`s(y,t)`$ conditional on the property that there is a shock at position $`y`$ ($`T`$ is independent of $`y`$ because of statistical homogeneity). The last integral in (12) can of course be evaluated using the explicit form of $`v_0`$. Another, more elegant, way to proceed is to use the equation for $`v_0`$, $`(v_0\overline{u})v_{0}^{}{}_{z}{}^{}=v_{0}^{}{}_{zz}{}^{}`$, and change the integration variable from $`z`$ to $`v_0`$ using $`dzv_{0}^{}{}_{zz}{}^{}=dv_0v_{0}^{}{}_{zz}{}^{}/v_{0}^{}{}_{z}{}^{}=dv_0(v_0\overline{u})`$. The result is
$$\underset{\nu 0}{lim}\nu u_{xx}|uR=\rho 𝑑s_{u+s/2}^{us/2}𝑑\overline{u}(u\overline{u})T(\overline{u},s;t).$$
(13)
This equation gives an exact expression for the viscous contribution in the limit $`\nu 0`$ in terms of certain statistical quantities associated with the shocks. Of course, using (13) in (7) does not lead to a closed equation since $`T`$ remains to be specified. However, information can already be obtained at this point without resorting to any closure assumption. For instance, using (13) in (7) and taking the second moment of the resulting equation yields $`u^2_t=2B_02ϵ`$ with
$$ϵ=\underset{\nu 0}{lim}\nu u_x^2=\frac{1}{12}\rho |s|^3.$$
(14)
In particular, at steady state $`\rho |s|^3=12B_0`$.
Similar calculations can be carried out for multi-point pdf’s and, in particular, for $`W(w;x,t)`$, the pdf of the velocity difference $`w(x,z,t)=u(x+z,t)u(z,t)`$. It leads to an equation of the form
$$\begin{array}{ccc}\hfill W_t& =& wW_x2_{\mathrm{}}^w𝑑w^{}W_x(w^{};x,t)\hfill \\ & & +2[B_0B(x)]W_{ww}+H(w;x,t),\hfill \end{array}$$
(15)
where, to $`O(x)`$, $`H`$ is given by
$$\begin{array}{ccc}\hfill H& =& \rho \left[wS(w;t)+s\delta (w)\right]\hfill \\ & +& 2\rho _{\mathrm{}}^w𝑑w^{}S(w^{};t)2\rho \theta (w)+O(x).\hfill \end{array}$$
(16)
Here $`\theta (w)`$ is the Heaviside function and $`S(s;t)=𝑑\overline{u}T(\overline{u},s;t)`$ is the conditional pdf of $`s(y,t)`$. By direct substitution it may be shown that the solution of (15) is, to $`O(x^2)`$,
$$W(1\rho x)\frac{1}{x}Q(\frac{w}{x};t)+\rho xS(w;t)+O(x^2).$$
(17)
The first term in this expression contains $`Q(\xi ;t)`$, the pdf of the non-singular part of the velocity gradient, to be considered below (see (20)). This term accounts for those realizations of the flow where there is no shock in between $`z`$ and $`x+z`$ (an event of probability $`1\rho x+O(x^2)`$). This term also leads to the consistency constraint that $`lim_{x0}W=\delta (w)`$ (using $`lim_{x0}Q(w/x;t)/x=\delta (w)`$). The next term in (17), $`\rho xS(w;t)`$, accounts for the realizations of the flow where there is a shock in between $`z`$ and $`x+z`$ (an event of probability $`\rho x+O(x^2)`$). Equation (17) can be used to compute the structure functions, $`|w|^a=𝑑w|w|^aW`$. To leading order this gives
$$|w|^a\{\begin{array}{cc}x^a|\xi |^a+O(x)\hfill & \text{if}0a<1,\hfill \\ x\rho |s|^a+O(x^{1+a})\hfill & \text{if}1<a,\hfill \end{array}$$
(18)
where $`|\xi |^a=𝑑\xi |\xi |^aQ`$. Using $`\rho |s|^3=12B_0`$, we get Kolmogorov’s relation for $`a=3`$
$$|w|^312xB_0.$$
(19)
We now go back to the velocity gradient. Observe first that, in the limit $`\nu 0`$, the velocity gradient can be written as
$$u_x(x,t)=\xi (x,t)+\underset{j}{}s(y_j)\delta (xy_j),$$
(20)
where the $`y_j`$’s are the locations of the shocks, $`\xi `$ is the non-singular part of $`u_x`$. Assuming homogeneity, a direct consequence of (20) is
$$u_x=\xi +\rho s=0.$$
(21)
Unlike the viscous case where $`\xi =u_x`$, hence $`\xi =0`$, we have in the inviscid limit $`\xi =\rho s0.`$ Note also that the inviscid limit of the solutions of (3) converge to the pdf of $`\xi `$ only, which is still going to be denoted by $`Q`$.
To evaluate $`F`$, there are two ways to proceed. One is to rewrite (15) in terms of the pdf of $`(u(x+z,t)u(z,t))/x`$ and take the limit as $`x`$ goes to zero. This is the approach taken in . The other is to evaluate (4) directly. The two approaches amount to different orders of taking the limit $`x0,\nu 0`$, and give the same result. Hence the two limiting processes commute. We will take the second approach and evaluate (4) using the same basic idea as above. Here, however, we have to proceed more carefully with the shock layer analysis. Differentiation of (9) gives
$$\xi _{\text{in}}(x,t)=\frac{s^2}{8\nu }\mathrm{sech}^2\left(\frac{s(xy)}{4\nu }\right)+O(1).$$
(22)
While the next order term in (9) was negligible in the limit $`\nu 0`$, the $`O(1)`$ contribution to $`\xi _{\text{in}}(x,t)`$ actually dominates the $`O(\nu ^1)`$ contribution at the border of the shock layer because the latter falls exponentially fast as the outer region is approached, whereas the former tends to constants, say, $`\xi _\pm `$. In particular, the matching between $`\xi _{\text{out}}(x,t)`$ and $`\xi _{\text{in}}(x,t)`$ involves the $`O(1)`$ terms. To see how matching takes place, differentiating the expression for $`u_{\text{in}}`$, we have $`\xi _{\text{in}}=\nu ^1v_{0}^{}{}_{z}{}^{}+v_{1}^{}{}_{z}{}^{}+O(\nu )`$. The matching condition between $`\xi _{\text{in}}`$ and $`\xi _{\text{out}}`$ reads
$$\underset{z\pm \mathrm{}}{lim}v_{1}^{}{}_{z}{}^{}=\underset{xy0^\pm }{lim}\xi _{\text{out}}\xi _\pm .$$
(23)
The equation for $`v_1`$ is
$$v_{0}^{}{}_{t}{}^{}+(v_0\overline{u})v_{1}^{}{}_{z}{}^{}+v_1v_{0}^{}{}_{z}{}^{}=v_{1}^{}{}_{zz}{}^{}+f_x,$$
(24)
and, from the above argument, the only information we really need about $`v_1`$ is its values at the boundaries $`z\pm \mathrm{}`$. Since $`v_{0}^{}{}_{z}{}^{}`$ falls exponentially fast for large $`|z|`$, (24) reduces to
$$\overline{u}_t\pm \frac{s_t}{2}\pm \frac{s}{2}v_{1}^{}{}_{z}{}^{}=v_{1}^{}{}_{zz}{}^{}+f_x,z\pm \mathrm{},$$
(25)
where we used the asymptotic values of $`v_0`$. Thus, as $`z\pm \mathrm{}`$,
$$v_1\frac{2\overline{u}_t}{s}z\frac{s_t}{s}z\pm \frac{2f_x}{s}z+c_1^\pm +c_2^\pm \mathrm{e}^{\pm sz/2}.$$
(26)
Notice that the exponential terms are irrelevant in these expression since $`s0`$. Equation (26) implies
$$\underset{z\pm \mathrm{}}{lim}v_{1}^{}{}_{z}{}^{}=\frac{2\overline{u}_t}{s}\frac{s_t}{s}\pm \frac{2f_x}{s}=\xi _\pm ,$$
(27)
where the last equality is just the definition of $`\xi _\pm `$. Note that (27) can be rewritten as
$$s_t=\frac{s}{2}\left(\xi _{}+\xi _+\right),\overline{u}_t=\frac{s}{4}\left(\xi _{}\xi _+\right)+f_x.$$
(28)
In the limit $`\nu 0`$ these are the equations of motion along the shock.
We can now evaluate the viscous contribution using
$$\nu \xi _{xx}|\xi Q=\nu \underset{L\mathrm{}}{lim}\frac{1}{2L}_L^L𝑑x\xi _{xx}\delta [\xi \xi (x,t)].$$
(29)
The calculation is similar to the one for the velocity and eventually leads to
$$F(\xi ;t)=\frac{\rho }{2}𝑑ss\left[V_{}(\xi ,s;t)+V_+(\xi ,s;t)\right],$$
(30)
where $`V_\pm (\xi ,s;t)`$ are the conditional pdf’s of $`\xi _\pm (y,t)`$ and $`s(y,t)`$. The appearance of $`\xi _\pm `$ in (30) is of course a direct result of the $`O(1)`$ term in (22).
We now use (30) in (3) and analyze some consequences of
$$Q_t=\xi Q+\left(\xi ^2Q\right)_\xi +B_1Q_{\xi \xi }+F(\xi ;t).$$
(31)
Taking the first moment of (31) leads to
$$\xi _t=\left[\xi ^3Q\right]_{\mathrm{}}^+\mathrm{}+\frac{\rho }{2}\left[s\xi _{}+s\xi _+\right],$$
(32)
where we used $`𝑑\xi \xi F=\rho [s\xi _{}+s\xi _+]/2`$. On the other hand, averaging the first equation in (28) gives
$$\left(\rho s\right)_t=\frac{\rho }{2}\left[s\xi _{}+s\xi _+\right].$$
(33)
This equation uses the fact that shocks are created at zero amplitude, and shock strengths add up at collision. These are consequences of the fact that the forcing is smooth in space . Since $`\xi _t=(\rho s)_t`$ from (21), the comparison between (32) and (33) tells us that the boundary term in (32) must be zero. Since $`Q0`$, $`\xi ^3Q`$ has different sign for large positive and large negative values of $`\xi `$. Therefore we must have $`lim_{\xi +\mathrm{}}\xi ^3Q=0`$ and $`lim_\xi \mathrm{}\xi ^3Q=0`$. This proves that $`Q`$ goes to zero faster than $`|\xi |^3`$ as $`\xi \mathrm{}`$ and $`\xi +\mathrm{}`$.
The analysis can be carried out one step further for the stationary case ($`Q_t=0`$). In this case, treating (31) as an inhomogeneous second order ordinary differential equation, we can write its general solution as $`Q=C_1Q_1+C_2Q_2+Q_3`$, where $`C_1`$ and $`C_2`$ are constants, $`Q_1`$ and $`Q_2`$ are two linearly independent solutions of the homogeneous equation associated with (31), and $`Q_3`$ is some particular solution of this equation. One such particular solution is
$$Q_3=_{\mathrm{}}^\xi 𝑑\xi ^{}\frac{\xi ^{}F(\xi ^{})}{B_1}\frac{\xi \mathrm{e}^\mathrm{\Lambda }}{B_1}_{\mathrm{}}^\xi 𝑑\xi ^{}\mathrm{e}^\mathrm{\Lambda }^{}G(\xi ^{}),$$
(34)
where $`\mathrm{\Lambda }=\xi ^3/(3B_1)`$ and
$$G(\xi )=F(\xi )+\xi _{\mathrm{}}^\xi 𝑑\xi ^{}\frac{\xi ^{}F(\xi ^{})}{B_1}.$$
(35)
With this particular solution, it can be shown (see for details) that the realizability constraints imply that $`C_1=C_2=0`$, i.e. the only non-negative, integrable solution is $`Q=Q_3`$. Furthermore, in order that $`Q`$ actually be non-negative, $`F`$ must satisfy
$$0FC\xi ^2\mathrm{e}^{\xi ^3/(3B_1)}\mathrm{as}\xi +\mathrm{},$$
(36)
for some constant $`C<0`$. Substituting into (34), we get
$$Q\{\begin{array}{cc}C_{}|\xi |^3_{\mathrm{}}^\xi 𝑑\xi ^{}\xi ^{}F(\xi ^{})\hfill & \mathrm{as}\xi \mathrm{},\hfill \\ C_+\xi \mathrm{e}^{\xi ^3/(3B_1)}\hfill & \mathrm{as}\xi +\mathrm{},\hfill \end{array}$$
(37)
which confirms the result $`QC_{}|\xi |^\alpha `$ with $`\alpha >3`$ as $`\xi \mathrm{}`$, and gives $`\beta =1`$.
The actual value of the exponent $`\alpha `$ depends on the asymptotic behavior of $`F`$. The latter can be obtained from further considerations on the dynamics of the shock (28). This is rather involved and will be left to . The result gives $`\alpha =7/2`$ which confirms the prediction of . Here we will restrict ourselves to an interpretation of the current approach in terms of the geometric picture. Observe that the largest values of $`\xi _\pm `$ are achieved just after the shock formation. Assume that a shock is created at time $`t=0`$, position $`x=0`$, and with velocity $`u=0`$. Then, locally
$$x=utau^3+\mathrm{}.$$
(38)
It follows that for $`t1`$ the solutions of $`0=utau^3`$, $`u_\pm `$, behave as
$$u_\pm =\sqrt{\frac{t}{a}}s=2\sqrt{\frac{t}{a}},$$
(39)
and $`\xi _\pm `$, solutions of $`1=\xi t3au^2\xi `$, behave as
$$\xi _\pm =\frac{1}{2t}.$$
(40)
Assuming that these give the dominant contribution to $`F(\xi )`$ for large negative values of $`\xi `$, the asymptotic form of $`F`$ is
$$FC_0^{\mathrm{}}𝑑ts(t)\left\{\delta [\xi \xi _{}(t)]+\delta [\xi \xi _+(t)]\right\},$$
(41)
where $`C`$ is some constant related to the statistics of the shock life-time and $`a`$, and $`s(t)`$, $`\xi _\pm (t)`$ are given by (39), (40). The evaluation of (41) gives $`FC|\xi |^{5/2}`$, and, hence,
$$QC_{}|\xi |^{7/2}\mathrm{as}\xi \mathrm{}.$$
(42)
Even though this argument gives only a lower bound for $`F`$ at large negative values of $`\xi `$, further arguments presented in indicate that this lower bound is actually sharp.
We thank Bob Kraichnan and Stas Boldyrev for stimulating discussions. The work of W. E is supported by a Presidential Faculty Fellowship from the National Science Foundation. The work of E. V. E. is supported by U.S. Department of Energy Grant No. DE-FG02-86ER-53223.
|
no-problem/9901/hep-ph9901239.html
|
ar5iv
|
text
|
# Constraints on Hadronic Spectral Functions From Continuous Families of Finite Energy Sum Rules
## I Introduction
As is well-known, for typical hadronic correlators $`\mathrm{\Pi }(s)`$, analyticity, unitarity, and the Cauchy theorem imply the existence of dispersion relations which, due to asymptotic freedom, allow non-trivial input in the form of the operator product expansion (OPE) at large spacelike $`s=q^2=Q^2`$. The utility of these relations can be improved by Borel transformation, which introduces both an exponential weight, $`exp(s/M^2)`$, (where $`M`$, the Borel mass, is a parameter of the transformation), on the hadronic (spectral integral) side and a factorial suppression of contributions from higher dimension operators on the OPE side. The exponentially decreasing weight allows rather crude approximations for the large $`s`$ part of the spectral function to be tolerated. Typically, one employs essentially a “local duality” approximation, i.e., uses the OPE version of the spectral function for all $`s`$ greater than some “continuum threshold”. Competition between optimizing suppression of contributions from the crudely modelled “continuum” and convergence of the OPE usually results in a “stability window” in $`M`$ for which neither contributions from the continuum, nor those from the highest dimension operators retained on the OPE side, are completely negligible. The resulting uncertainties have to be carefully monitored to determine the reliability of a given analysis. The presence of the decreasing exponential weight also means that the method is less sensitive to the parameters of the higher resonances in the channel in question.
In this paper we investigate the alternative to Borel-transformed (SVZ) sum rules provided by those finite energy sum rules (FESR’s) generated by integration over the “Pac-man” contour (running from $`s_0`$ to threshold below the cut on the real timelike axis, back from threshold to $`s_0`$ above the cut, and closed by a circle of radius $`s_0`$ in the complex $`s`$ plane). One advantage of such FESR’s is the absence of an exponentially decreasing weight. Indeed, if $`\mathrm{\Pi }(s)`$ is a hadronic correlator without kinematic singularities, and $`w(s)`$ any analytic weight function, then, with the spectral function $`\rho (s)`$ defined as usual,
$$_{s_{th}}^{s_0}w(s)\rho (s)𝑑s=\frac{1}{2\pi i}_{|s|=s_0}w(s)\mathrm{\Pi }(s)𝑑s.$$
(1)
Such FESR’s have to date usually employed integer power weights, $`w(s)=s^k`$, $`k=0,1,2`$ (see, e.g., the recent extraction of $`m_u+m_d`$ in Ref. using the isovector pseudoscalar correlator), but are, of course, valid for any $`w(s)`$ analytic in the region of the contour.
One interesting FESR involving a non-integer-power weight is that relevant to hadronic $`\tau `$ decay. Neglecting the tiny contributions proportional to $`(m_dm_u)^2`$ in the isovector vector (IV) current correlator (which are, in any case, hard to handle reliably on the OPE side), the ratio of the non-strange hadronic to electronic widths is proportional to
$$_{4m_\pi ^2}^{m_\tau ^2}\frac{ds}{m_\tau ^2}\left(1\frac{s}{m_\tau ^2}\right)^2\left(1+2\frac{s}{m_\tau ^2}\right)\rho _\tau ^{(0+1)}(s)$$
(2)
with $`\rho _\tau ^{(0+1)}(s)`$ the sum of longitudinal and transverse contributions to the corresponding spectral function. This is the hadronic side of a FESR with weight
$$w_\tau (s)=\frac{1}{m_\tau ^2}\left(1\frac{s}{m_\tau ^2}\right)^2\left(1+2\frac{s}{m_\tau ^2}\right),$$
(3)
the OPE side of which is
$$\frac{i}{2\pi }_{|s|=m_\tau ^2}\frac{ds}{m_\tau ^2}\left(1\frac{s}{m_\tau ^2}\right)^2\left(1+2\frac{s}{m_\tau ^2}\right)\mathrm{\Pi }_{V,ud}^{(0+1)}(s).$$
(4)
Eq. (4) thus provides an expression for the non-strange hadronic $`\tau `$ decay width which requires as input only $`a(m_\tau ^2)=\alpha _s(m_\tau ^2)/\pi `$ and the relevant $`D=4`$ and $`D=6`$ condensates (these latter, in fact, give rather small contributions). This representation works extremely well, in the sense that the $`a(m_\tau ^2)`$ value required by $`\tau `$ decay data (see Refs. and earlier references cited therein) is nicely consistent, after running, with that measured experimentally at the $`Z`$ mass scale. One can also verify that the success of the underlying FESR is not a numerical accident by comparing the hadronic and OPE sides for a range of $`s_0`$ values $`<m_\tau ^2`$. As shown by ALEPH, and reiterated below, the agreement between the two representations is excellent for all $`s_0`$ between $`2\mathrm{GeV}^2`$ and $`m_\tau ^2`$.
The success of the $`\tau `$ decay FESR has a simple physical explanation. As argued in Ref. , for large enough $`s_0`$, the OPE should provide a good representation of $`\mathrm{\Pi }(s)`$ over most of the circle $`|s|=s_0`$. When local duality is not yet valid, however, this representation will necessarily break down over some region near the timelike real axis. Since, with $`\mathrm{\Delta }`$ some typical hadronic scale, the problematic region represents a fraction $`\mathrm{\Delta }/\pi s_0`$ of the full circle, one might expect the error on the OPE side of a given FESR to be $`\mathrm{\Delta }/\pi s_0`$. Consider, however, a correlator having perturbative contribution of the form $`Q^{2n}\left(1+c_1a(Q^2)+c_2a(Q^2)^2+\mathrm{}\right)`$, with $`n`$ positive. Expanding this expression in terms of $`a(s_0)`$, one obtains $`Q^{2n}\left(1+c_1a(s_0)+𝒪\left(a(s_0)\right)^2\right)`$, where the coefficient of the second order terms now involves $`log(Q^2/s_0)`$ (for details, see e.g. Ref. ). Integrating around $`|s|=s_0`$, for any analytic $`w(s)`$, the first surviving contribution is then the $`𝒪\left(a(s_0)\right)^2`$ logarithmic term. This logarithm, associated with the perturbative representation of $`\alpha _s(Q^2)`$, has maximum modulus precisely in the region (on either side of the cut on the time-like real axis) for which the perturbative representation is least reliable. FESR’s associated with weight functions (such as $`w(s)=s^k`$) not suppressed near $`s=s_0`$ can thus have errors potentially much greater than those suggested by the naive estimate above. We illustrate this point for the case of the IV correlator below. For hadronic $`\tau `$ decay, however, phase space naturally produces a (double) zero of $`w_\tau (s)`$ at $`s=m_\tau ^2`$, and this suppression of contributions from the region of the contour near the real timelike axis results in a very accurate FESR. We will see, in Section II, that other weight functions with zeros at $`s=s_0`$ also produce very reliable FESR’s in the IV channel. We will then use such weights, and corresponding FESR’s, in Section III, to studying the pseudoscalar isovector (PI) and strangeness $`S=1`$ scalar (SS) channels (relevant to the extraction of the light quark mass combinations $`m_u+m_d`$ and $`m_s+m_u`$).
## II Lessons from hadronic $`\tau `$ decay
As noted above, FESR’s involving weights, $`w(s)`$, with $`w(s_0)0`$ have significant potential uncertainties if local duality is not yet valid at scale $`s_0`$. To quantify this statement, consider the $`s^k`$-weighted FESR’s for the IV channel. In Table I we list, as a function of $`s_0`$, the hadronic ($`I_k^{ex}`$) and OPE ($`I_k^{OPE}`$) sides of these sum rules,
$`I_k^{ex}`$ $``$ $`{\displaystyle _{4m_\pi ^2}^{s_0}}s^k\rho ^{ex}(s)𝑑s,`$ (5)
$`I_k^{(OPE)}`$ $``$ $`{\displaystyle \frac{1}{2\pi i}}{\displaystyle _{|s|=s_0}}s^k\mathrm{\Pi }_{V,ud}^{(0+1)}(s)𝑑s,`$ (6)
for $`k=0,1,2,3`$. The hadronic side is evaluated using the spectral function, $`\rho ^{ex}(s)`$, measured by ALEPH, while for the OPE side we employ the known OPE for $`\mathrm{\Pi }_{V,ud}^{(0+1)}(s)`$, together with (1) ALEPH values for $`a(m_\tau ^2)`$ and the $`D=6`$ condensate terms (from the non-strange decay analysis alone), (2) the gluon condensate of Ref. , (3) the GMO relation, $`<2m_{\mathrm{}}\overline{\mathrm{}}\mathrm{}>=m_\pi ^2f_\pi ^2`$, (4) quark mass ratios from Chiral Perturbation Theory (ChPT), (5) $`0.7<\left[<\overline{s}s>/<\overline{\mathrm{}}\mathrm{}>\right]<1`$, as in Refs. , and (6) four loop running, with contour improvement, for the perturbative contributions.
As seen from the Table, the errors in the $`s^k`$-weighted FESR’s are significant, except near $`s_02.8\mathrm{GeV}^2`$, where the hadronic and OPE representations happen to cross. The worsening of agreement for $`s_0`$ above $`2.8`$ GeV<sup>2</sup> simply reflects the facts that (1) local duality is not valid for $`s_03`$ GeV<sup>2</sup> and (2) the problematic region of the circular part of the contour contributes significantly to the OPE side of the sum rule for $`w(s)=s^k`$, as one would expect. Note that the situation cannot be improved by taking “duality ratios” (ratios of such sum rules corresponding to different values of $`k`$) since, as pointed out in Ref. , if one insists on a match between the hadronic and OPE versions of such a duality ratio for $`s_0`$ values lying in some “duality window”, then the spectral function is constrained to match (up to an undetermined overall multiplicative constant) that implied by the OPE for all $`s_0`$ in that window. If $`s_0`$ lies in the region of validity of local duality, this is not a problem, but if it does not (for example, if distinct resonances are still present), then contributions from the problematic part of the contour cannot have fully cancelled in the ratio.
The situation is much improved if we consider FESR’s corresponding to weights with a zero at $`s=s_0`$. For reference we give, in Table II, the experimental (hadronic) and OPE sides of the FESR for the (double zero) combination $`I_03I_2+2I_3`$ relevant to hadronic $`\tau `$ decay (see also the discussion in Ref. ). As noted earlier, the match between the two sides is very good, even at low scales. We consider also the FESR’s corresponding to Eq. (1), based on the weights $`w_{k,k+1}(s)=\left(s/s_0\right)^k\left(s/s_0\right)^{k+1}`$ which have only a simple zero at $`s=s_0`$. Denoting the hadronic and OPE sides by $`J_{k,k+1}^{(ex,OPE)}(s_0)`$, we have
$$J_{k,k+1}^{(ex,OPE)}(s_0)=\frac{1}{s_0^k}I_k^{(ex,OPE)}(s_0)\frac{1}{s_0^{k+1}}I_{k+1}^{(ex,OPE)}(s_0).$$
(7)
The results for the cases $`k=0`$ and $`k=1`$ are given in Table II. Evidently, even a simple zero at $`s=s_0`$ is enough the suppress contributions from the problematic part of the contour sufficiently to produce sum rules that are very reliable, again even down to rather low scales.
The constraints on the hadronic spectral function obtained by combining the $`w_{01}`$ and $`w_{12}`$ sum rules are actually rather strong. To see this, note that a general linear combination of the two sum rules involves, for some constant $`A`$, a weight function proportional to
$$w(A,s)=\left(1\frac{s}{s_0}\right)\left(1+A\frac{s}{s_0}\right).$$
(8)
For $`A<1`$, this weight has a second zero in the hadronic integration region which moves to lower $`s`$ as $`A`$ is decreased. To the left of this zero, the spectral function is weighted positively, to the right, negatively. Dialing the crossover location (by varying $`A`$) then places rather strong constraints on the spectral function, provided that (1) the OPE representation remains accurate over the range of $`A`$, $`s_0`$ values employed and (2) the errors on the OPE side are not unduly exacerbated by cancellations in forming the combination of the two sum rules. For the IV channel, it is straightforward to demonstrate that the errors are, indeed, not amplified, and that the resulting OPE and hadronic representations do, indeed, remain in excellent agreement, for $`9A9`$ (over which range the weight function varies from having no second zero to having one below the $`\rho `$ peak) and for a range of $`s_0`$ values extending well below $`m_\tau ^2`$. We do not display these facts explicitly since the central values for the OPE and hadronic representations follow from those already given in the Table. It is also worth stressing that, not only does the OPE representation match very well with the hadronic one based on experimental data, but that making a sum-of-resonances ansätz for the spectral function and fitting its parameters to the OPE representation produces a model spectral function in good agreement with the experimental one (for example, the resulting $`\rho `$ decay constant differs from the experimental by less than the experimental error).
## III Applications to other channels
We now consider FESR’s for two channels of relevance to the extraction of the light quark masses. For the PI channel, unmeasured continuum contributions to the spectral function contribute roughly three-quarters of the hadronic side of the (integer power weighted) FESR which determines $`(m_u+m_d)^2`$, while for the SS channel, experimental constraints exist only on the $`K\pi `$ portion of the spectral function. We employ FESR’s based on the weights $`w(A,s)`$ in both these channels; in the former, to test the plausibility of the ansätz employed for the continuum part of the spectral function and, in the latter, to test the viability of certain assumptions/approximations made in the earlier analyses.
For the PI channel, $`m_u+m_d`$ is extracted from sum rules for the correlator, $`\mathrm{\Pi }_5(q^2)=id^4xe^{iqx}<0|\left(_\mu A^\mu (x)_\mu A^\mu (0)\right)|0>`$, where $`A^\mu `$ is the isovector axial vector current. With $`\rho _5(s)`$ the corresponding spectral function, one has, using $`w(s)=s^k`$,
$$_0^{s_0}𝑑ss^k\rho _5(s)=\frac{3}{8\pi ^2}\left[m_u(s_0)+m_d(s_0)\right]^2\frac{s_0^{k+2}}{k+2}\left[1+R_{k+1}(s_0)+D_k(s_0)\right]+\delta _{k,1}\mathrm{\Pi }_5(0)$$
(9)
where $`R_{k+1}(s_0)`$ (the notation is that of Ref. ) contain the perturbative corrections, $`D_k`$ the contributions from higher dimension operators, and $`\mathrm{\Pi }_5(0)`$ is determined by $`f_\pi `$, $`m_\pi `$ and the combination $`2L_8^rH_2^r`$ of fourth order ChPT low energy constants (LEC’s) (see Ref. ). The analysis of Refs. (BPR) proceeds by (1) adjusting the relative strength of $`\pi (1300)`$ and $`\pi (1800)`$ contributions to a continuum spectral ansatz using the $`k=0`$ to $`k=1`$ duality ratio, (2) fixing the overall scale of this ansätz by normalizing the sum of the resonance tails to the leading order ChPT expression for $`\rho _5(s)`$ at continuum threshold, (3) (with $`\rho _5(s)`$ so fixed) using the $`k=0`$ and $`k=1`$ sum rules to extract $`m_u+m_d`$ and $`2L_8^rH_2^r`$, respectively. A number of possible problems exist with this analysis. First (see Ref. ) there are potential dangers in the overall normalization prescription, associated with the fact that continuum threshold is rather far from the resonance peak locations. Second, the value of the LEC combination obtained implies an unusual value for the light quark condensate ratios. (The combination $`2L_8^rH_2^r`$ is related to $`2L_8^r+H_2^r`$, which controls flavor breaking in the condensate ratios. With standard values for $`L_8^r`$, the BPR $`2L_8^rH_2^r`$ value corresponds to $`<\overline{s}s>/<\overline{u}u>=1.30\pm 0.33`$.) Finally, the presence of the $`\pi (1800)`$ signals that one is not in the region of local duality, and hence that non-negligible errors may be present in $`s^k`$-weighted FESR’s (and duality ratios thereof). We can investigate this latter question by considering those additional constraints on the BPR continuum spectral ansätz obtained from the $`w(A,s)`$ family of FESR’s. As input to the OPE side we use the latest ALEPH determination of $`a(m_\tau ^2)`$, the condensate values employed in Refs. , the most recent value of Ref. for $`m_u+m_d`$, and the four-loop contour-improved version of the perturbative contributions. Apart from the small decrease in the ALEPH value of $`a(m_\tau ^2)`$ between 1997 and 1998 the input to the OPE analysis is, therefore, identical to that of Refs. . To be specific in tabulating results, we have employed, on the hadronic side, the updated continuum ansätz of Ref. (the situation is not improved if one uses instead any of the earlier ansatze of Ref. ). In Table III we present the hadronic (had) and OPE sides of the resulting sum rules for $`s_0`$ in the BPR duality window and the range $`0A9`$. Note that the best duality match from Ref. corresponds to $`s_0=2.02.4`$ GeV<sup>2</sup>. In all cases, the known $`\pi `$ pole contribution has been subtracted from both sides of the sum rule; the results thus provide a direct test of the continuum spectral ansätz. While one cannot guarantee that the $`w(A,s)`$ sum rules will work as well in the PI as in the IV channel, there are clear physical grounds for expecting them to be more reliable than those based on the weights $`w(s)=s^k`$. If one were actually in the region of local duality, and had a good approximation to the physical continuum spectral function, then of course the two methods would be compatible. 
Since they are not, we conclude that either the OPE is simply not well enough converged to provide a reasonable representation of the correlator away from the timelike real axis, for the $`s_0`$ values considered (in which case the whole analysis collapses as a method for extracting $`m_u+m_d`$), or the BPR continuum spectral ansatz, and hence the estimate of $`m_u+m_d`$ based on it, is unreliable. The former seems implausible (the contour-improved series, particularly at the somewhat larger scales shown in the table, appears rather well-behaved) though it cannot be rigorously ruled out.
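To make the structure of such a moment test concrete, the following sketch (not part of the original analysis) compares the hadronic and leading-order OPE sides of an $`s^k`$-weighted FESR for a toy two-resonance continuum ansatz; the resonance couplings, widths and quark-mass input below are invented placeholders, and radiative corrections and condensate terms are omitted, so no numerical agreement should be expected.

```python
import numpy as np
from scipy.integrate import quad

def breit_wigner(s, m, gamma):
    # simple Breit-Wigner shape, used only for illustration
    return m**2 * gamma**2 / ((s - m**2) ** 2 + m**2 * gamma**2)

def rho5_toy(s, c1300=1.0e-4, c1800=0.5e-4):
    # toy continuum ansatz: pi(1300) + pi(1800) with invented strengths
    return c1300 * breit_wigner(s, 1.30, 0.40) + c1800 * breit_wigner(s, 1.80, 0.21)

def hadronic_moment(k, s0):
    # hadronic side: integral of s^k rho_5(s) up to s0 (in GeV^2)
    val, _ = quad(lambda s: s**k * rho5_toy(s), 0.0, s0)
    return val

def ope_moment(k, s0, mu_plus_md=0.012):
    # leading-order OPE side only: 3/(8 pi^2) (m_u+m_d)^2 s0^(k+2)/(k+2)
    return 3.0 / (8.0 * np.pi**2) * mu_plus_md**2 * s0 ** (k + 2) / (k + 2)

for s0 in (2.0, 2.4, 3.0):
    for k in (0, 1):
        print(f"s0={s0}  k={k}  had={hadronic_moment(k, s0):.3e}  ope={ope_moment(k, s0):.3e}")
```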
Let us turn now to the SS channel. Here one replaces $`A^\mu `$ above with the $`S=1`$ vector current $`\overline{s}\gamma ^\mu u`$, and obtains sum rules involving the $`S=1`$ scalar correlator, $`\mathrm{\Pi }(s)`$. The analyses of Refs. (JM/CPS) and (CFNP) employ the conventional SVZ method. In the former, the Omnes representation, together with experimental $`K_{e3}`$ and $`K\pi `$ phase shift data, is used to fix the timelike $`K\pi `$ scalar form factor at continuum threshold $`s=(m_K+m_\pi )^2`$. A sum-of-resonances ansatz for the spectral function, $`\rho (s)`$, normalized to this value, is then employed on the hadronic side. In contrast, CFNP employ the Omnes representation also above threshold in order to obtain the $`K\pi `$ contribution to $`\rho (s)`$ purely in terms of experimental data. This improves the low-$`s`$ behavior of the spectral function (which shows considerable distortion associated with the attractive $`I=1/2`$ $`s`$-wave $`K\pi `$ interaction). Unresolved issues for the CFNP analysis include (1) the size of spectral contributions associated with neglected higher multiplicity states, (2) sensitivity to the assumption that the $`K\pi `$ phase is constant at its asymptotic value ($`\pi `$) beyond the highest $`s`$ ($`(1.7\mathrm{GeV})^2`$), for which it is known experimentally, and (3) the failure to find a stability window unless the continuum threshold is allowed to lie significantly above the region of significant $`K\pi `$ spectral contributions (leaving an unphysical region of size $`1-2`$ GeV<sup>2</sup> with essentially no spectral strength in the resulting spectral model). We investigate the CFNP ansatz (which retains only the $`K\pi `$ portion of $`\rho (s)`$, obtained as described above) by studying again the $`w(A,s)`$ family of FESR’s. In Table IV are displayed the OPE and hadronic sides, as a function of $`s_0`$ and $`A`$, for both the JM/CPS and CFNP spectral ansätze. In each case, the value of $`m_s(1\mathrm{GeV}^2)`$ extracted by the earlier authors has been used as input on the OPE side. For the CFNP case, the original authors quote a range of values; to be specific, we have chosen that value from this range, $`m_s=155\mathrm{MeV}`$, which produces a match of the OPE and hadronic sides of the sum rule for $`s_0=4\mathrm{GeV}^2`$ and $`A=0`$. Comparing the two sides at other values of $`s_0`$, $`A`$ then provides a test of the quality of the spectral ansatz. The agreement is obviously much better for the CFNP ansatz than in the other two cases, though amenable to some further improvement. Note that the most obvious improvement, namely adding additional spectral strength in the vicinity of the $`K_0^{*}(1950)`$ to account for contributions of multiparticle states (the $`K\pi `$ branching ratio of the $`K_0^{*}(1950)`$ is $`52\pm 8\pm 12\%`$), actually somewhat worsens the agreement between the OPE and hadronic sides, suggesting that modifications of the spectrum at lower $`s`$ may be required.
## IV Summary
We have shown that continuous families of FESR’s can be used to place constraints on hadronic spectral functions and that, based both on qualitative physical arguments and a study of the IV channel, these constraints can be expected to be more reliable than those based on FESR’s with integer power weights. The method is complementary to conventional Borel transformed (SVZ) treatments in that it involves weights that do not suppress (and for some $`A`$ actually enhance) contributions from the higher $`s`$ portion of the spectrum.
Relevant to attempts to extract the light quark masses, the method has been shown to produce constraints on the continuum portions of hadronic spectral functions not exposed by previous sum rule treatments. For the PI channel, one finds that either the use of the OPE representation is not justified at the scales considered, or existing spectral ansätze must be modified significantly to produce an acceptable match to the known OPE representation. The need for (albeit less significant) modifications to the CFNP spectral model in the SS channel has also been illustrated. We conclude that the question of the values of the light quark masses is still open, particularly for $`m_u+m_d`$, and that further investigations using the method explored here may help in clarifying the situation.
## ACKNOWLEDGMENTS
The author acknowledges the ongoing support of the Natural Sciences and Engineering Research Council of Canada, and the hospitality of the Special Research Centre for the Subatomic Structure of Matter at the University of Adelaide and the T5 and T8 Groups at Los Alamos National Laboratory, where portions of this work were originally performed. Useful discussions with Tanmoy Bhattacharya and Rajan Gupta, and with Andreas Höcker on the ALEPH spectral function analysis are also gratefully acknowledged.
# The Gravitational-wave contribution to the CMB Anisotropies
## 1 Introduction
Inflationary theory has had a large impact on cosmology. On the one hand, it resolves some difficulties of the standard Big-Bang model. On the other, it provides a way of producing those density fluctuations that in the gravitational instability scenario are the seed of the large scale structure of the universe. In fact, one of the most reliable predictions of the inflationary paradigm is the parallel production of scalar and tensor perturbations from quantum fluctuations of the inflaton field $`\widehat{\varphi }`$ (Starobinsky (1979); Rubakov et.al. (1982); Starobinsky (1982); Abbot & Wise (1984)). The amplitude of tensor fluctuations determines the value of the inflationary potential and, together with other inflationary parameters, its first two derivatives (see e.g. Turner (1997)). Thus, a detection of a nearly scale-invariant stochastic gravitational wave (GW) background (tensor modes) is crucial in order to confirm any inflationary model and constrain the physics occurring near the Planck scale, at $`10^{16}GeV`$.
Observations of Cosmic Microwave Background (CMB) anisotropy promise to be unique in this respect (Starobinsky (1985); Crittenden et.al. (1993); Turner et.al. (1993)). Recent numerical simulations (Zaldarriaga et.al. (1997); Dodelson et.al. (1997); Bond (1997)) have shown that inflationary parameters will be measured with an accuracy of a few percent by the MAP (Bennett et.al. (1995)) and Planck (Bersanelli et.al. (1996)) space missions, which will image the CMB anisotropy pattern with high sensitivity and at high angular resolution.
Meanwhile, the number of experiments reporting detections of anisotropy has increased to about twenty (see Table 1, below). At the moment, the detections available seem compatible with the predictions of inflationary models, like Cold Dark Matter (CDM), with ”blue” power spectra, i.e. $`P(k)=Ak^{n_S}`$ with $`n_S\stackrel{>}{}1`$ (de Bernardis et.al. (1997); Bennett et.al. (1996); Bond & Jaffe (1996)). As noticed by many authors, there is a substantial rise in the anisotropy angular power spectrum at $`\mathrm{}\simeq 200`$, which appears to be consistent with the expected location of the first Doppler peak in flat models. This small scale behaviour seems to disfavor a GW contribution. In fact, as is well known, tensor fluctuations induce anisotropy only on large angular scales ($`\mathrm{}\stackrel{<}{}30`$). If there is a sizable contribution from GW in the detected large scale anisotropies, this would lower the predicted value of $`(\mathrm{\Delta }T/T)_{rms}`$ on smaller scales.
Moreover, inflationary models that predict $`n_S\stackrel{>}{}1`$ generally predict vanishingly small tensor fluctuations (Kolb & Vadas (1994)).
Based on these arguments, a lot of recent CMB data analysis (Lineweaver et.al. (1997); Hancock et.al. (1997)) has not taken into account the possible presence of a GW background, assuming its contribution to be negligible.
In our opinion, there are two points that can alter these conclusions:
\- Tensor modes are compatible with the theory of linear adiabatic perturbations of a homogeneous and isotropic universe. Like scalar perturbations and in contrast with vector perturbations, they can arise from small deviations from the isotropic Friedmann universe near the initial singularity. So, CMB data should be analyzed without any a priori assumptions: the presence or absence of a tensor component in models with $`n_S\ge 1`$ can only be tested by observation.
\- Small variations in the still undetermined cosmological parameters (like the baryonic abundance or the Hubble constant) and inflationary parameters (like the spectral index $`n_S`$) can counterbalance the effect of tensor modes, increasing the predicted value of $`(\mathrm{\Delta }T/T)_{rms}`$ on small scales.
Thus, in this paper, we will discuss what kind of constraint present CMB anisotropy data provide on the tensor contribution allowing all the remaining parameters to vary freely in their acceptable ranges. We will extend our previous CMB data analysis (de Bernardis et.al. (1997)), by including new CMB detections, and by analyzing a larger set of models. We restrict ourselves to critical universes ($`\mathrm{\Omega }_{matter}=1`$), as a recent analysis of CMB anisotropies and galaxy surveys (Gawiser & Silk (1998)) has shown that pure scalar Mixed Dark Matter (MDM) models are in good agreement with the data set. We will address the importance of a cosmological constant, reported by Riess et.al. (1998) and Perlmutter et.al. (1998), in a forthcoming paper.
Since we treat the GW contribution as a free parameter, we will not test any specific inflationary model. So, our approach will be mainly phenomenological: we assume that GW are created in the early universe by some process during or immediately after inflation, which we do not want to specify any further here. Nonetheless, as the amplitude of the GW spectrum provides a test for inflation (see next section), in our conclusions we will discuss if results are compatible with this paradigm.
Since any possible GW signal will affect the matter power spectrum normalization inferred from COBE, we will test the models that best fit the CMB data with the normalization $`\sigma _8`$ of the matter fluctuation in $`8h^{-1}`$ Mpc spheres and with the shape of the spectrum from the Peacock & Dodds (1994) analysis.
The plan of the paper is as follows. In Sect.2 we write the set of equations necessary to describe the inflationary process in the slow roll approximation. In Sect.3 we briefly discuss the analysis of the current degree-scale CMB experiments. In Sect.4 we test the best fit models with the Large-Scale Structure (LSS) data. Finally, in Sect. 5 we present and discuss our conclusions.
## 2 Early Universe
Inflation in the early universe is determined by the potential $`V(\widehat{\varphi })`$, where $`\widehat{\varphi }`$ can be a multiplet of scalar fields. Here we restrict ourselves to the case of a single, minimally coupled scalar field $`\varphi `$ with potential $`V`$ and equation of motion
$$\ddot{\varphi }+3H\dot{\varphi }+V^{}=0,$$
(1)
(as usual, the dot and prime indicate derivatives with respect to physical time $`t`$ and to the scalar field $`\varphi `$, respectively). The expansion rate in the early universe can be written as:
$$H^2=\frac{8\pi }{3m_{Pl}^2}\left[\frac{1}{2}\dot{\varphi }^2+V(\varphi )\right]$$
(2)
where $`m_{pl}=1.2\times 10^{19}GeV`$ is the Planck mass (we use natural units, i.e. $`h=c=k=1`$).
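As a purely illustrative aside, Eqs. (1)-(2) can be integrated numerically for any assumed potential; the sketch below does this for a quadratic potential, with the mass and initial field value chosen arbitrarily (everything in units of $`m_{pl}=1`$).

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0e-6                              # inflaton mass in Planck units (arbitrary choice)
V = lambda phi: 0.5 * m**2 * phi**2     # assumed quadratic potential
dV = lambda phi: m**2 * phi

def hubble(phi, phidot):
    # Eq. (2) with m_pl = 1: H^2 = (8 pi / 3) (phidot^2/2 + V)
    return np.sqrt(8.0 * np.pi / 3.0 * (0.5 * phidot**2 + V(phi)))

def rhs(t, y):
    phi, phidot = y
    # Eq. (1): phi'' + 3 H phi' + V' = 0
    return [phidot, -3.0 * hubble(phi, phidot) * phidot - dV(phi)]

sol = solve_ivp(rhs, (0.0, 2.0e7), [3.5, 0.0], max_step=1.0e4)
print("field value at the end of the run:", sol.y[0][-1])
```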
The slow-roll approximation holds in most of the inflationary models. This condition is valid if (Copeland et.al. (1993); Hodges & Blumenthal (1990))
$$\frac{m_{pl}^2}{4\pi }\left[\frac{H^{\prime \prime }}{H}\right]=\eta (\varphi )<<1$$
(3)
and
$$\frac{m_{pl}^2}{4\pi }\left|\frac{H^{\prime }}{H}\right|^2=ϵ(\varphi )<<1$$
(4)
The second condition, since $`ϵ`$ is a direct measure of the equation of state of the scalar field matter, also implies the period of accelerated expansion (Dodelson et.al. (1997)).
In the slow roll approximation the amplitudes of scalar and tensor perturbations are related to the inflationary potential as follows (Copeland et.al. (1993)):
$$A_S(\varphi )=\sqrt{\frac{2}{\pi }}\frac{1}{m_{pl}^2}\frac{H^2}{|H^{\prime }|}$$
(5)
and
$$A_T(\varphi )=\frac{1}{\sqrt{2}\pi }\frac{H}{m_{pl}}$$
(6)
We can relate the wavelength, $`\lambda `$, and the Hubble parameter during inflation, $`H(\varphi )`$, with the scalar field by writing:
$$\frac{d\mathrm{ln}\lambda }{d\varphi }=\frac{\sqrt{4\pi }}{m_{pl}}\frac{A_S}{A_T}$$
(7)
and
$$\frac{\partial \mathrm{ln}H}{\partial \varphi }=\frac{\sqrt{4\pi }}{m_{pl}}\frac{A_T}{A_S},$$
(8)
respectively.
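A minimal numerical sketch of Eqs. (3)-(6) is given below: it evaluates the slow-roll parameters and the two amplitudes for a user-supplied $`H(\varphi )`$ by finite differences; the sample $`H`$ and its numerical constants are arbitrary illustrative choices.

```python
import numpy as np

m_pl = 1.0  # Planck units

def slow_roll_and_amplitudes(H, phi, h=1.0e-4):
    """Return (epsilon, eta, A_S, A_T) of Eqs. (3)-(6) from numerical derivatives."""
    H0 = H(phi)
    H1 = (H(phi + h) - H(phi - h)) / (2.0 * h)          # H'
    H2 = (H(phi + h) - 2.0 * H0 + H(phi - h)) / h**2    # H''
    eps = m_pl**2 / (4.0 * np.pi) * (H1 / H0) ** 2
    eta = m_pl**2 / (4.0 * np.pi) * H2 / H0
    A_S = np.sqrt(2.0 / np.pi) / m_pl**2 * H0**2 / abs(H1)
    A_T = H0 / (np.sqrt(2.0) * np.pi * m_pl)
    return eps, eta, A_S, A_T

# illustrative H(phi): a slowly varying exponential with made-up constants
H_sample = lambda phi: 7.0e-6 * np.exp(0.05 * phi**2)
print(slow_roll_and_amplitudes(H_sample, 1.0))
```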
Let us define the spectral equations for scalar and tensor components as follows:
$$A_S^2(k)=A^2\left(\frac{k}{k_0}\right)^{n_S-1}$$
(9)
and
$$A_T^2(k)=B^2(\frac{k}{k_0})^{n_T}$$
(10)
where $`k_0=H_0`$ is the wavenumber of a fluctuation which re-enters the horizon at the present time, and $`A`$ and $`B`$ are constants. It is easy to see that $`n_T=-2ϵ(k)`$ if $`\lambda =\lambda _0`$, and $`n_T=0`$ if $`\lambda \to 0`$ (Lidsey et.al. (1997)).
We define the ratio of amplitudes of the scalar and tensor modes by:
$$r=\sqrt{ϵ(k_0)}=\frac{B}{A}$$
(11)
By solving Eq.(7) and assuming $`n_T\simeq 0`$, the scalar field can be written as a function of the wavelength:
$$\varphi (\lambda )=\varphi _0+\varphi _1\left[\left(\frac{\lambda }{\lambda _0}\right)^{\frac{n_S-1}{2}}-1\right]$$
(12)
where $`\varphi _0`$ is a constant, to be found from boundary conditions, and $`\varphi _1=\frac{r}{n_S-1}\frac{m_{pl}}{\sqrt{\pi }}`$.
Furthermore, Eq.s (12) and (8) allow us to find the Hubble parameter $`H(\varphi )`$ during inflation:
$$H(\varphi )=H_i\mathrm{exp}\left(\frac{r^2}{n_S-1}\xi ^2\right)$$
(13)
where $`\xi =\frac{\varphi +\varphi _1-\varphi _0}{\varphi _1}`$ and $`H_i`$ is a constant.
The potential can be written in terms of the Hubble parameter:
$$V(\varphi )=\frac{3m_{pl}^2}{8\pi }H^2(\varphi )$$
(14)
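For completeness, a direct transcription of Eqs. (12)-(14) is sketched below, chaining the wavelength to the field, the Hubble rate and the potential; $`H_i`$, $`r`$, $`n_S`$ and the pivot wavelength are arbitrary illustrative inputs.

```python
import numpy as np

m_pl, H_i = 1.0, 7.0e-6        # Planck units; H_i is an arbitrary illustrative value
n_S, r = 1.2, 0.3              # assumed spectral index and amplitude ratio
phi_0, lam_0 = 0.0, 1.0

phi_1 = r / (n_S - 1.0) * m_pl / np.sqrt(np.pi)

def phi_of_lambda(lam):                       # Eq. (12)
    return phi_0 + phi_1 * ((lam / lam_0) ** ((n_S - 1.0) / 2.0) - 1.0)

def H_of_phi(phi):                            # Eq. (13)
    xi = (phi + phi_1 - phi_0) / phi_1
    return H_i * np.exp(r**2 / (n_S - 1.0) * xi**2)

def V_of_phi(phi):                            # Eq. (14)
    return 3.0 * m_pl**2 / (8.0 * np.pi) * H_of_phi(phi) ** 2

for lam in (0.1, 1.0, 10.0):
    phi = phi_of_lambda(lam)
    print(f"lambda={lam}: phi={phi:.3f}  H={H_of_phi(phi):.3e}  V={V_of_phi(phi):.3e}")
```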
At this point, we can define the relation between the quadrupole multipoles of the CMB anisotropy generated by scalar and tensor perturbations: $`C_2^S`$ and $`C_2^T`$, respectively. To do this we will follow the calculations done by (Souradeep & Sahni (1992)) in which both $`C_2^S`$ and $`C_2^T`$ were found as a function of $`H(\varphi )`$ at $`\lambda =H_0^{-1}`$. So we have:
$`C_2^S={\displaystyle \frac{2\pi ^2}{25}}f(n_S){\displaystyle \frac{1}{m_{pl}^4}}{\displaystyle \frac{H^4}{(H^{\prime })^2}}`$ (15)
$`C_2^T={\displaystyle \frac{2.9}{5\pi }}{\displaystyle \frac{H^2}{m_{pl}^2}}`$ (16)
and
$$\frac{C_2^T}{C_2^S}=\frac{29}{4\pi ^3}\frac{m_{pl}^2}{f(n_S)}\frac{(H^{\prime })^2}{H^2}$$
(17)
where
$$f(n_S)=\frac{\mathrm{\Gamma }(3-n_S)\mathrm{\Gamma }({\displaystyle \frac{3+n_S}{2}})}{\mathrm{\Gamma }^2({\displaystyle \frac{4-n_S}{2}})\mathrm{\Gamma }({\displaystyle \frac{9-n_S}{2}})}$$
(18)
Using Eq.(13) we can write:
$$\frac{\partial \mathrm{ln}H(\varphi )}{\partial \varphi }=\frac{2\sqrt{\pi }}{m_{pl}}r\xi $$
(19)
Therefore, at $`\varphi =\varphi _0`$ (i.e. $`\xi =1`$), we have:
$$R(n_S)\equiv \frac{C_2^T}{C_2^S}=\frac{29r^2}{\pi ^2f(n_S)}$$
(20)
As we can see from the equation above, the tensor to scalar quadrupole ratio $`R`$ is related to the slow-roll parameter $`ϵ`$. Eq. (20) identifies a region in the ($`n_S`$,$`R`$) space of values where the slow-roll condition is satisfied. Furthermore, as $`ϵ<1`$ only if the universe has undergone a period of accelerated expansion, one can use this equation to test the inflationary scenario.
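The tensor to scalar quadrupole ratio of Eq. (20) is straightforward to evaluate; the short sketch below does so for a few spectral indices, with the value of $`r`$ chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.special import gamma

def f(n_S):
    """Eq. (18)."""
    return (gamma(3.0 - n_S) * gamma((3.0 + n_S) / 2.0)
            / (gamma((4.0 - n_S) / 2.0) ** 2 * gamma((9.0 - n_S) / 2.0)))

def R(n_S, r):
    """Eq. (20): tensor to scalar quadrupole ratio."""
    return 29.0 * r**2 / (np.pi**2 * f(n_S))

for n_S in (0.9, 1.0, 1.2, 1.4):
    print(f"n_S={n_S}: f={f(n_S):.3f}  R={R(n_S, r=0.3):.3f}")
```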
In the same way, we can use Eq.(19) in Eq.(3) in order to find:
$$2\eta =n_s-1+2r^2$$
(21)
so the slow roll condition (4) implies Eq. (3) if $`n_s\simeq 1`$.
Using Eq.(14), we can now write the potential as
$$V(\varphi _0)=\frac{15}{23.2}C_2^Tm_{pl}^4$$
(22)
Therefore, the measurement of the contribution to the quadrupole anisotropy of tensor fluctuation, $`C_2^T`$, allows us to estimate the size of the potential responsible for inflation.
## 3 CMB Anisotropy
### 3.1 Method
We use a set of the most recent CMB anisotropy detections, both on large and degree angular scales, in order to estimate the amplitude of tensor fluctuations. The likelihood of the assumed independent CMB anisotropy data is (see de Bernardis et.al. (1997)):
$$\mathcal{L}=\prod _j\frac{1}{\left(2\pi \left[(\mathrm{\Sigma }_j^{(the)})^2+(\mathrm{\Sigma }_j^{(exp)})^2\right]\right)^{1/2}}\mathrm{exp}\left(-\frac{1}{2}\frac{\left[\mathrm{\Delta }_j^{(exp)}-\mathrm{\Delta }_j^{(the)}\right]^2}{(\mathrm{\Sigma }_j^{(the)})^2+(\mathrm{\Sigma }_j^{(exp)})^2}\right)$$
(23)
where $`\mathrm{\Delta }_j^{exp}`$ and $`\mathrm{\Delta }_j^{the}`$ are the experimentally detected and theoretically expected mean square anisotropy, respectively. The $`(\mathrm{\Sigma }_j^{(the)})^2`$ and $`(\mathrm{\Sigma }_j^{(exp)})^2`$ are the respective cosmic and experimental variances. Obviously, the likelihood depends on the parameters of the cosmological model. Although a complete analysis should cover all the parameter space, here we restrict ourselves to flat models ($`\mathrm{\Omega }_0=1`$) composed of baryons ($`0.01\stackrel{<}{}\mathrm{\Omega }_b\stackrel{<}{}0.14`$), cold dark matter ($`\mathrm{\Omega }_{CDM}\stackrel{>}{}0.7`$), hot dark matter ($`\mathrm{\Omega }_\nu \stackrel{<}{}0.3`$), photons and massless neutrinos. As shown in (de Bernardis et.al. (1997); Ma & Bertschinger (1995); Dodelson et.al. (1996)) the angular power spectrum of MDM models differs from pure CDM by less than $`10\%`$ on the angular scales of interest. Given the poor sensitivity of the available CMB anisotropy detections at degree angular scales, we restrict ourselves to pure CDM models, keeping in mind that basically the same power spectrum is also expected for MDM models. The predictions of CDM and MDM models for the matter power spectrum obviously differ, and in a substantial way: we will discuss this point in more detail below.
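In practice Eq. (23) is evaluated in logarithmic form; a minimal sketch is given below, with toy band-power numbers standing in for the actual detections of Table 1.

```python
import numpy as np

def log_likelihood(delta_exp, sigma_exp, delta_the, sigma_the):
    """Log of Eq. (23) for independent band powers: delta_* are the mean square
    anisotropies, sigma_the and sigma_exp the cosmic and experimental variances."""
    delta_exp = np.asarray(delta_exp, float)
    delta_the = np.asarray(delta_the, float)
    var = np.asarray(sigma_the, float) ** 2 + np.asarray(sigma_exp, float) ** 2
    chi2 = (delta_exp - delta_the) ** 2 / var
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + chi2)

# toy numbers only (in arbitrary units), not the real data set
print(log_likelihood([1.0, 2.0, 1.5], [0.3, 0.4, 0.3], [1.1, 1.8, 1.4], [0.2, 0.2, 0.2]))
```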
Here we keep as free parameters $`\mathrm{\Omega }_b`$ and $`h`$. Both parameters affect the positions and amplitudes of the so-called Doppler peaks of the angular power spectrum. In fact, changing $`\mathrm{\Omega }_b`$ at fixed $`h`$ changes the pressure of the baryon-photon fluid before recombination, increasing its oscillations below its Jeans length. A larger baryon to photon ratio will increase the compressions (which produce the even peaks in $`C_{\mathrm{}}`$ for inflationary models) and decrease the rarefaction (odd peaks for inflationary models). Lowering $`h`$ at fixed $`\mathrm{\Omega }_b`$ changes the epoch of matter-radiation equality: potentials inside the horizon decay in a radiation dominated era but not in a fully matter dominated one. The combination $`\mathrm{\Omega }_bh^2`$, which actually appears in the calculations, is also constrained by primordial nucleosynthesis arguments (Copi et.al. (1995)): $`0.01\stackrel{<}{}\mathrm{\Omega }_bh^2\stackrel{<}{}0.026`$. Moreover, from globular cluster ages, $`0.4\stackrel{<}{}h\stackrel{<}{}0.65`$ (Kolb & Turner (1991)).
We will also explore variations in the spectral index of the (scalar) primordial power spectrum $`n_S`$. We restrict ourselves to values of $`n_S\stackrel{<}{}1.5`$, to be consistent with the absence of spectral distortions in the COBE/FIRAS data (Hu et.al. (1994)). A parameter independent normalization for the power spectrum can be expressed in terms of the amplitude of the multipole $`C_{10}`$. We define the parameter $`𝒜\equiv A/A_{COBE}`$ as the amplitude $`𝒜`$ of the power spectrum (considered as a free parameter) in units of $`A_{COBE}`$, the amplitude needed to reproduce $`C_{10}\simeq 47.6\mu K^2`$, as observed on the COBE-DMR four-year maps (Bunn (1997)).
Finally, for tensor fluctuations, we will assume $`n_T=0`$. In fact, variations in the tensor spectral index in the range $`-1\stackrel{<}{}n_T\stackrel{<}{}0`$ do not give appreciable changes in the structure of the $`C_{\mathrm{}}`$’s, given the cosmic variance and the current experimental sensitivity. We parameterize the amplitude of these tensor fluctuations with $`R`$, defined in Eq. (20). So, in the end, we will consider only five quantities as free parameters: $`𝒜`$, $`n_S`$, $`R`$, $`\mathrm{\Omega }_b`$ and $`h`$.
We have computed the angular power spectrum of CMB anisotropy by solving the Boltzmann equation for fluctuations in CMB brightness (Peebles & Yu (1970); Hu et.al. (1995)). Our code is described in (de Bernardis et.al. (1997); Melchiorri & Vittorio (1996)) and allows the study of CMB anisotropy both in cold (CDM) and mixed (MDM) dark matter models. Our $`C_{\mathrm{}}`$’s match to better than $`0.5\%`$ for $`\mathrm{}\le 1500`$ compared with those of other codes (Seljak & Zaldarriaga (1996); Ma & Bertschinger (1995)). In Fig. 1 we show the $`C_{\mathrm{}}`$’s for different parameter choices.
The data we consider are listed in Table 1 and shown in Fig. 1. We have updated the data presented in our previous paper (de Bernardis et.al. (1997)) to include the new results from the Tenerife, MSAM and CAT experiments. For the COBE data, we use the $`8`$ data points from Tegmark & Hamilton (1997), that have the advantage of uncorrelated error bars.
### 3.2 Results
The best fit parameters (i.e. those which maximize the likelihood) are (with $`95\%`$ confidence): $`n_S=1.23_{-0.15}^{+0.17}`$, $`R=2.4_{-2.2}^{+3.4}`$, with $`𝒜=0.92`$, $`\mathrm{\Omega }_b=0.07`$ and $`h=0.46`$. We can only put the following upper limits (at $`68\%`$) on these last two best fit values: $`\mathrm{\Omega }_b<0.11`$, $`h<0.58`$.
A probability confidence level contour in the five-dimensional volume of parameters is obtained by cutting the $`\mathcal{L}`$ distribution with the isosurface $`\mathcal{L}_P`$, and by requiring that the volume inside $`\mathcal{L}_P`$ is a fraction $`P`$ of the total volume. The projections of the $`\mathcal{L}_{68}`$ and $`\mathcal{L}_{95}`$ surfaces on the $`n_S-R`$ plane are shown in Fig. 2.
As we can see from Fig. 2, the likelihood contours are very broad and models with spectral index $`n_S\simeq 1`$ and $`R=0`$ are statistically indistinguishable from models with $`n_S\simeq 1.4`$ and $`R\simeq 4`$.
The quite large values of $`R`$ for $`n_S\stackrel{>}{}1`$ are due to a parameter degeneracy problem that present CMB anisotropy detections are not able to solve (see Fig. 1). In fact, increasing the contribution of tensor modes boosts the anisotropy on large scales ($`>>2^{\circ }\mathrm{\Omega }_0^{1/2}`$). As the theoretical predictions are normalized to COBE/DMR, adding tensor fluctuations while keeping all the other parameters fixed, actually suppresses the level of degree scale anisotropy. To counterbalance this effect, it is necessary to postulate ”blue” primordial spectra, i.e. $`n_s\stackrel{>}{}1`$. The shape of the confidence level region in the $`n_s-R`$ plane reflects this correlation. This degeneracy in the model prediction is actually broken at a higher angular resolution, $`\mathrm{}\stackrel{>}{}300`$ say, where present experiments are particularly affected by cosmic variance, due to the very small region of the sky sampled (see Table 1). We have the following $`95\%`$ C.L. upper limits on $`R`$: $`0.3`$, $`1.3`$, $`2.5`$, $`4.5`$, $`7.8`$ and $`12.5`$ for $`n_s=0.8`$, $`0.9`$, $`1.0`$, $`1.1`$, $`1.2`$ and $`1.3`$, respectively. At $`n_s=1.4`$ and $`1.5`$ we can put $`95\%`$ C.L. lower limits of $`1.0`$ and $`2.8`$ on $`R`$. A quadratic fit to the maxima distribution gives:
$$R=34.3-70.8n_S+36.5n_S^2$$
(24)
for $`1.1\le n_S\le 1.5`$. With the above equation, we find that the tensor component can have an rms amplitude value of $`28\mu K`$ for $`n_s=1.1`$ and $`49\mu K`$ for $`n_s=1.5`$, while the scalar component remains at $`100\mu K`$.
It is interesting to see (Fig.1) that models with $`n_S\simeq 1.4`$ and $`R\simeq 3`$, which are well compatible with our analysis, seem to prefer a greater Hubble constant, $`h\simeq 0.6`$. So, the gravitational wave contribution also seems to moderate the discrepancy between the value $`h\simeq 0.7`$ (Freedman (1996)), inferred by several different methods, and the value $`h\simeq 0.4`$ (Lineweaver et.al. (1997)) inferred by scalar-only CMB analysis.
We found that inside the $`95\%`$ contour, the overall normalization amplitude, in units of $`A_{COBE}`$ is $`𝒜=1\pm 0.2`$, i.e. all the models considered therein correspond well with COBE/DMR normalization.
The simple analysis carried out here does not take into account the correlation due to overlapping sky coverage (e.g., Tenerife and COBE, and/or MSAM and Saskatoon). We check the stability of our analysis with a jackknife test, i.e. removing one set of experimental data each time. We have a maximum variation of $`3-4\%`$ in our limits in the $`n_S-R`$ plane, except with the removal of COBE data that modifies our results by $`10\%`$. So, neglecting this correlation does not significantly change the results of our analysis. We also repeated the analysis including the possible $`\pm 14\%`$ calibration error to the five Saskatoon points (Netterfield et.al. (1997)), and we did not find significant variations. In the limited cases where comparison is possible, our analysis produced results similar to those of Bond & Jaffe (1996), Lineweaver et.al. (1997) and Hancock et.al. (1997).
## 4 Comparison with LSS
As we have seen, blue models with a substantial tensor component agree well with CMB data. Tensor modes have dramatic effects on the matter power spectrum, reducing its normalization by a factor of $`(1+R)^{-1}`$. Using the above fit formula, the tensor contribution to the CMB correlation function on the COBE/DMR scales can be between $`54\%`$ for $`n_s=1.1`$ and $`91\%`$ for $`n_s=1.5`$. In this section we want to test these models with large scale matter distribution. As is well known, CDM blue models predict a universe that is too inhomogeneous on scales $`10h^{-1}Mpc`$. Nonetheless, the excess power on these scales can be reduced by considering a mixture of cold and hot dark matter, i.e. mixed dark matter (MDM) models. The difference in the $`C_l`$ behaviour between a pure CDM and an MDM ($`\mathrm{\Omega }_\nu \simeq 0.3`$) model is very tiny, $`2\%`$ up to $`l\simeq 300`$ and $`8\%`$ up to $`l\simeq 800`$ (see, for example De Gasperis et.al. (1995)). Therefore, the results of our CMB analysis are the same in this kind of model. In Figure 3, MDM matter power spectra from models that agree with CMB data are shown. The data points are an estimate of the linear power spectrum from Peacock & Dodds (1994), assuming a CDM flat universe and bias values between Abell, radio, optical, and IRAS catalogs $`b_A:b_R:b_O:b_I=4.5:1.9:1.3:1.0`$ with $`b_I=1.0`$. As shown in (Smith et.al. (1997)) recovered linear power spectra of CDM and MDM models are nearly the same in the region $`0.01\le k\le 0.15hMpc^{-1}`$ but diverge from this spectrum at higher $`k`$, so we restrict ourselves to this range. The $`\chi ^2`$ (with $`11`$ degrees of freedom) are $`15`$, $`10`$, $`21`$, $`9`$, $`37`$, $`53`$ for models in Figure 3 with $`(1.4,3.3)`$, $`(1.3,3.9)`$, $`(1.2,1.3)`$, $`(1.1,0.6)`$, $`(1.0,0.1)`$ and $`(0.9,0)`$ in the $`(n_s,R)`$ space. So, models with a large tensor contribution on COBE scales and blue spectral index seem to agree well also with the shape of matter distribution on large scale. The values for the $`\sigma _8`$, computed with CMBFAST (Seljak & Zaldarriaga (1995)), are $`0.69`$, $`0.61`$, $`0.66`$, $`0.63`$, $`0.63`$, $`0.74`$, in very reasonable agreement with the value of $`\sigma _8^{IRAS}=0.69\pm 0.05`$ (Fisher et.al. (1994)) derived from the IRAS catalog.
Whether IRAS galaxies are biased is still under debate. Analysis from cluster data (Eke et.al. (1996), Pen (1998), Bryan and Norman (1998)) shows a preferred value of $`\sigma _8\simeq 0.5-0.6`$ with few-percent error bars. Analysis from peculiar velocities (Zehavi (1998)) results in a larger value $`\sigma _8=0.85\pm 0.2`$, which seems to be in severe conflict with the cluster data. Thus, the theoretical values of $`\sigma _8`$ for blue MDM models with a relic gravitational wave background are between the $`\sigma _8`$ values derived from cluster abundance and peculiar velocities. In any case, the likelihood of the CMB data is quite flat around its maximum. So, it is easy to find models, statistically indistinguishable from the best fit models, with $`\sigma _8`$ nearer either to $`0.5`$ or to $`0.8`$.
Because of statistical and/or systematic uncertainties we do not consider it appropriate to draw more than qualitative conclusions from these results, but one can still say that the lower matter normalization due to the tensor component helps the blue MDM models to match the LSS data.
## 5 Conclusions
Our main conclusions are as follows:
1. The conditional likelihood shows a maximum at $`n_S=1.23_{-0.15}^{+0.17}`$, $`R=2.4_{-2.2}^{+3.4}`$, with $`𝒜=0.92`$, $`\mathrm{\Omega }_b=0.07`$ and $`h=0.46`$. Thus, there is some evidence that a tensor component can be present, and in a substantial way, in models with $`n_s`$ greater than one. Inflationary models of this type have been investigated by Copeland et.al. (1993) and by Lukash & Mikheeva (1996) and thus belong to the class of hybrid inflationary models (Kinney, Dodelson & Kolb (1998)). The general form of the potential can be written as $`V(\varphi )=V_0+\frac{1}{2}\mu ^2\varphi ^2`$. At the end of inflation, the inflationary potential $`V(\varphi )`$ is not equal to zero, being $`V_0`$ of the order of $`(6\times 10^{16}GeV)^4`$. In order to be consistent with the present vacuum energy $`(10^{-30}GeV)^4`$, one additional field is necessary to finish inflation. The inclusion of this field does not change the conclusion of our analysis, since it affects only the high frequency region of the GW spectrum ($`\sim 100MHz`$). For models on the best fit curve (Eq.24), $`V(\varphi _0)`$ belongs to the interval $`4.3\times 10^{-11}m_{pl}^4<V_0<1.3\times 10^{-10}m_{pl}^4`$. In Fig.2 we plot Eq.(24) with the condition $`ϵ=1`$. The region below this curve in the $`n_S-R`$ plane is where the slow roll approximation is valid. As we can see, models on our best fit curve satisfy this condition, even if models with $`ϵ\simeq 1`$ are compatible with observations.
Approaching the limiting region $`ϵ=1`$, higher order terms in the slow roll approximation become important. This leads to changes in our conclusions on the potential by a factor $`1-ϵ/3\simeq 30\%`$ (Kinney, Dodelson & Kolb (1998)).
2. The $`95\%`$ region on the $`n_S-R`$ plane includes a wide range of parameters. This means that the presently available data set is not sensitive enough to produce precise determinations for $`n_S`$ and $`R`$. Systematic and statistical errors in the different experiments are still significant, but, as we have shown, the difficulties involved in such determinations are mainly due to a degeneracy in these parameters. So, the $`(n_S,R)`$ degeneracy has important consequences for tests of the inflationary theory: increasing the scalar spectral index and the tensor component leads to a break in the slow roll approximation, but it also produces CMB power spectra close to the scale invariant one. Therefore it is difficult from the present CMB data to see if the slow roll condition is correct.
Furthermore, current CMB results on the normalization of the matter power spectrum and/or its spectral index can be biased and/or anti-biased by a huge tensor contribution. As we can see from Fig.1, this degeneracy also affects the constraints on the remaining cosmological parameters, since a model with $`h\simeq 0.6`$ is statistically indistinguishable from a model with $`h\simeq 0.4`$.
The inflationary background of primordial gravitational waves is expected to be detectable mainly through CMB experiments. The local energy density of this background is, in the most optimistic situation, extremely low, with $`d\mathrm{\Omega }_{GW}h^2/d\mathrm{log}k\simeq 10^{-16}`$ at frequencies $`10^{-15}Hz<f<10^{15}Hz`$. The tenuity of this signal makes the degeneracy in the $`n_S`$ and $`R`$ parameters much more worrying than a similar degeneracy in other parameters (e.g. $`h`$ and $`\mathrm{\Omega }_b`$) that could be constrained through other measurements.
3. ”Blue” MDM models with a tensor contribution are in reasonable agreement with the present values of $`\sigma _8`$, and with the shape of the matter power spectrum inferred by the Peacock & Dodds (1994) analysis. A tensor contribution could also be a viable mechanism to reconcile these models with a low value of $`\sigma _8`$, around $`0.5`$ (Henry & Arnaud (1991)).
This being the situation, a measurement of the structure of the secondary peaks becomes a crucial test for the presence of tensor perturbations. Using the above best fit equation, we can make some predictions regarding future detections. We found that an experiment with a window function probing the multipoles $`500\le \mathrm{}\le 680`$ will measure a total rms anisotropy of $`28.3\mu K`$ for $`n_s=1.1`$, and $`34.4\mu K`$ for $`n_S=1.5`$. This is a $`20\%`$ difference that could be detected when the sensitivity of these experiments reaches a few $`\mu K`$, with an improved sky coverage. Polarization measurements at intermediate angular scales can also be helpful (Sazhin (1984), Polnarev (1985), Sazhin & Benitez (1995), Sazhin (1996)). The possibility of a direct separation of scalar perturbations from tensor perturbations by the method of decomposition of the Stokes parameters into sets of spin $`\pm 2`$ spherical harmonics seems extremely promising (Kamionkovsky & Kosowsky (1996), Seljak & Zaldarriaga (1996), Sazhin & Shulga (1996)).
Possibly a definitive answer will come when future CMB experiments provide a clear and robust picture of sub-degree angular scale anisotropy and polarization.
We wish to thank Paolo de Bernardis, Ruth Durrer, Giancarlo De Gasperis, Martin Kunz and Andrew Yates. M.V.S. acknowledges the University of ”Tor Vergata” for hospitality during the writing of part of this paper. M.V.S. acknowledges the ”Cariplo Foundation” for Scientific Research and ”Landau-Network - Centro Volta” for financial support during the writing of the last version of this paper.
# Citations and the Zipf-Mandelbrot’s law
## 1 Introduction
Let us begin with an explanation as to what is Zipf’s law. If we assign ranks to all words of some natural language according to their frequencies in some long text (for example the Bible), then the resulting frequency-rank distribution follows a very simple empirical law
$$f\left(r\right)=\frac{a}{r^\gamma }$$
(1)
with $`a0.1`$ and $`\gamma 1`$. This was observed by G. K Zipf for many languages long time ago . More modern studies also confirm a very good accuracy of this rather strange regularity.
In his attempt to derive the Zipf’s law from the information theory, Mandelbrot produced a slightly generalized version of it:
$$f\left(r\right)=\frac{p_1}{\left(p_2+r\right)^{p_3}},$$
(2)
$`p_1,p_2,p_3`$ all being constants.
The same inverse power-law statistical distributions were found in embarrassingly different situations (for reviews see ). In economics, it was discovered by Pareto long before Zipf and states that incomes of individuals or firms are inversely proportional to their rank. In less formal words , “most success seem to migrate to those people or companies who already are very popular”. In demography , city sizes (populations) are also power-like functions of city rank. The same regularity reveals itself in the distributions of areas covered by satellite cities and villages around huge urban centers .
Remarkably enough, as is claimed in , in countries such as the former USSR and China, where natural demographic processes were significantly distorted, city sizes do not follow Zipf’s law!
Other examples of Zipfian behavior are encountered in chaotic dynamical systems with multiple attractors , in biology , ecology , social sciences, etc. .
Even the distribution of fundamental physical constants, according to , follows the inverse power law!
The most recent examples of Zipf-like distributions are related to the World Wide Web surfing process .
You say that all this sounds like a joke and looks improbable? So did I when I became aware of this weird law from M. Gell-Mann’s book “The Quark and the Jaguar” some days ago. But here is the distribution of the 50 largest USA cities according to their rank , fitted by Eq.2:
The actual values of fitted parameters depend on the details of the fit. I assume (rather arbitrarily) 5% errors in data.
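The fit itself is a routine least-squares exercise; one way it could be done is sketched below, with synthetic rank-size numbers standing in for the actual city populations (the 5% errors mentioned above enter as the weights).

```python
import numpy as np
from scipy.optimize import curve_fit

def mandelbrot(r, p1, p2, p3):
    """Eq. (2): f(r) = p1 / (p2 + r)^p3."""
    return p1 / (p2 + r) ** p3

rng = np.random.default_rng(0)
rank = np.arange(1, 51)
# synthetic "city sizes" drawn from the same family, NOT the real populations
size = 8.0e6 / (2.0 + rank) ** 0.8 * (1.0 + 0.03 * rng.standard_normal(rank.size))
sigma = 0.05 * size                     # the (rather arbitrary) 5% errors

popt, pcov = curve_fit(mandelbrot, rank, size, p0=[1.0e7, 1.0, 1.0],
                       sigma=sigma, bounds=(0.0, np.inf))
print("p1, p2, p3 =", popt)
```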
Maybe it is worthwhile to recall here the old story about a young priest who complains to his father about having a very difficult theme for his first public sermon – the virgin birth.
– “Look, father”, he says, “if some young girl from this town becomes pregnant, comes to you and says that this is because of the Holy Spirit, do you believe it?”
The father stays silent for a while, then answers:
–”Yes, son, I do. If the baby is born, if he is raised and if he lives like the Christ”.
So, clearly, you need more empirical evidence to accept improbable things. Here is one more, the list of the most populated countries fitted by the Mandelbrot formula (2):
Even the simpler Zipfian $`a/r`$ parameterization works fairly well in this case!
## 2 Fun with citations
But all this was known long ago. Of course it is exciting to check its correctness personally. But it is more exciting to find out whether this rule still holds in a new area. The SPIRES database provides an excellent possibility to check scientific citations against the Zipf-Mandelbrot regularity.
As I became involved in these matters because of M. Gell-Mann’s book, my first try naturally was his own citations. The results were encouraging:
But maybe M. Gell-Mann is not the best choice for this goal. SPIRES is a rather novel phenomenon, and many of M. Gell-Mann’s important papers were written long before its creation. So they are poorly represented in the database. Therefore, let us try the present-day citation favorite E. Witten. Here are his 160 most cited papers according to SPIRES (note once more that the values of the fitted parameters may depend significantly on the details of the fit. In this and the previous case I choose $`\sqrt{N}`$ as an estimate of the data errors, so as not to ascribe too much importance to data points with small numbers of citations. On other occasions I assume 5% errors. Needless to say, both choices are arbitrary):
You have probably noticed the very big values of the prefactor $`p_1`$. Of course this is related to the rather big values of the other two parameters. We can understand the big value of the $`p_2`$ parameter as follows. The data set of an individual physicist’s papers is a subset of the fuller data for all physicists. So we can think of $`p_2`$ as being the average number of papers from other scientists between two given papers of the physicist under consideration. Whether right or not, this explanation gains some empirical support if we consider the top cited papers in SPIRES (the Review of Particle Physics is excluded):
As we see $`p_2`$ is fairly small now.
Finally, it is possible to find the list of the 1120 most cited physicists (not only from High Energy Physics) on the World Wide Web . Again the Mandelbrot formula (2) with $`p_1=3.81\times 10^4,p_2=10.7`$ and $`p_3=0.395`$ gives an excellent fit. Now there are too many points, making it difficult to see visually the differences between the curve and the data. In the figure that follows, we show this relative difference explicitly.
For the bulk of the data the Mandelbrot curve gives a precision better than 5%!
You wonder why $`p_2`$ is now relatively high? I really do not know. Maybe the list is still incomplete in its lower rank part. In any case, if you take just the first 100 entries from this list, the fit results in $`p_1=2.1\times 10^4,p_2=0.09,p_3=0.271`$. This example also shows that the Mandelbrot curve with constant $`p_1,p_2,p_3`$ is actually not as good an approximation as one might judge from the histograms given above, because different parts of the data prefer different values of the Mandelbrot parameters.
## 3 Any explanation?
The general character of the Zipf-Mandelbrot law is hypnotizing. We already mentioned several wildly different areas where it is encountered. Can it be considered as some universal law for complex systems? And if so, what is the underlying principle which unifies all of these seemingly different systems? What kind of principle can be common to natural languages, individual wealth distribution in some society, urban development, scientific citations, and the distribution of female first name frequencies? The latter is reproduced below :
Another question is whether the Mandelbrot’s parameters $`p_2`$ and $`p_3`$ can tell us something about the (complex) process which triggered the corresponding Zipf-Mandelbrot distribution. For this goal an important issue is how to perform the fit (least square, $`\chi ^2`$, method of moments or something else?). I do not have any answer to this question now. However let us compare the parameters for the female first name distribution from the above given histogram and for the male first name distribution (data are taken from the same source ). In both cases $`\chi ^2`$ fit was applied with 5% errors assumed for each point.
The power-law exponent $`p_3`$ is the same for both distributions, although the $`p_2`$ parameter takes different values.
If you are fascinated by the possibility that very different complex systems can be described by a single simple law, you may be disappointed (as I was) to learn that some simple stochastic processes can lead to the very same Zipfian behavior. Say, what profit will you have from knowing that some text exhibits Zipf’s regularity, if this gives you no idea whether the text was written by Shakespeare or by a monkey? Alas, it was shown that random texts (“monkey languages”) exhibit a Zipf’s-law-like word frequency distribution. So Zipf’s law seems to be at least “linguistically very shallow” and “is not a deep law in natural language as one might first have thought”.
The two different approaches to the explanation of Zipf’s law are very well summarized in G. Miller’s introduction to the 1965 edition of Zipf’s book : “Faced with this massive statistical regularity, you have two alternatives. Either you can assume that it reflects some universal property of human mind, or you can assume that it reflects some necessary consequence of the laws of probabilities. Zipf chose the synthetic hypothesis and searched for a principle of least effort that would explain the apparent equilibrium between uniformity and diversity in our use of words. Most others who were subsequently attracted to the problems chose the analytic hypothesis and searched for a probabilistic explanation. Now, thirty years later, it seems clear that the others were right. Zipf’s curves are merely one way to express a necessary consequence of regarding a message source as a stochastic process”.
Were “others” indeed right? Even in the realm of linguistics the debate is still not over after another thirty years have passed . In the case of random texts, the origin of Zipf’s law is well understood . In fact such texts exhibit no Zipfian distribution at all, but a log-normal distribution, the latter giving in some cases a very good approximation to Zipf’s law. So there is no doubt that simple stochastic (Bernoulli or Markov) processes can lead to a Zipfian behavior. No dynamically nontrivial properties (interactions and interdependence) are required at all from the underlying system. But it was also stressed in the literature that this fact does not preclude more complex and realistic systems from exhibiting Zipfian behavior because of underlying nontrivial dynamics. In this case, we can hope that the Zipf-Mandelbrot parameters will be meaningful and can tell us something about the system properties. Let us note that the rank-frequency distribution for complex systems is not always Zipfian. For example, if we consider the frequency of occurrence of letters, instead of words, in a long text, the empirical universal behavior, valid over 100 natural languages with alphabet sizes ranging between 14 and 60, is logarithmic
$$f\left(r\right)=A-B\mathrm{ln}r$$
where $`A`$ and $`B`$ are constants. This fact, of course, is interesting in itself. It is argued in that both regularities (Zipfian and logarithmic) can have a common stochastic origin.
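Fitting the logarithmic form is even simpler, since it is linear in $`\mathrm{ln}r`$; a tiny sketch with invented letter-frequency numbers follows.

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.arange(1, 27)                                                # ranks of 26 "letters"
f = 0.12 - 0.03 * np.log(r) + 0.002 * rng.standard_normal(r.size)  # invented data

slope, intercept = np.polyfit(np.log(r), f, 1)                     # linear fit in ln r
A, B = intercept, -slope
print(f"A = {A:.4f}  B = {B:.4f}")
```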
An interesting example of Zipf-Mandelbrot’s parameters being useful and effective, is provided by ecology . The exponent $`p_3`$ is related to the evenness of the ecological community. It has higher values for “simple” and lower values for “complex” systems. The parameter $`p_2`$ is related to the “diversity of the environment” and serves as a measure of the complexity of initial preconditions.
The other pole in the explanation of Zipf’s law seeks some universal principle behind it, such as “least effort” , “minimum cost” , “minimum energy” or “equilibrium” . The most impressive and, as the above ecological example shows, fruitful explanation is given by B. Mandelbrot and is based on fractals and self-similarity.
As we see, the suggested explanations are almost as numerous as the observed manifestations of this universal power-like behavior. This probably indicates that some important ingredient of this regularity still escapes our grasp. As M. Gell-Mann concludes, “Zipf’s law remains essentially unexplained”.
## 4 The almighty chance
If monkeys can write texts they can make citations too! So let us imagine the following random citation model.
* At the beginning there is one “seminal” paper.
* Every sequential paper makes at most ten citations (or cites all preceding papers if their number does not exceed ten).
* All preceding papers have an equal probability to be cited.
* Multiple citations are excluded. So if some paper is selected by chance as a citation candidate more than once, the extra selection is ignored (in this case the total number of citations in a new paper will be less than ten).
I have my doubts about monkeys, but it is simple to teach a computer to simulate such a process. Here is the result of a simulation for 1000 papers.
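A toy implementation of this model might look as follows (the details of how ties and the first ten papers are handled are my own choices, not necessarily those used for the figure).

```python
import random

def simulate_citations(n_papers=1000, max_refs=10, seed=1):
    """Random citation model: every earlier paper is equally likely to be cited,
    duplicate picks within one new paper are ignored."""
    random.seed(seed)
    cites = [0]                                   # one "seminal" paper to start with
    for _ in range(1, n_papers):
        n_earlier = len(cites)
        picks = {random.randrange(n_earlier) for _ in range(min(max_refs, n_earlier))}
        for p in picks:
            cites[p] += 1
        cites.append(0)
    return sorted(cites, reverse=True)            # citation counts by rank

print(simulate_citations()[:10])                  # the most cited "papers"
```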
So we see an apparent power-like structure, although with staircase behavior. We expect this stepwise structure to disappear if we eliminate the democracy between papers and make some papers more likely to be cited.
Note that even the value of the exponent $`p_3`$ is reasonably close to what was actually observed for the most cited papers. But this can be merely an accident and I do not want to draw far-fetched conclusions about the nature of the citation process from this fact.
In reality “success seems to attract success” . Therefore, let us try to see what happens if the equal probability axiom is replaced by a perhaps more realistic one:
* The probability for a paper to be cited is proportional to $`n+1`$, where $`n`$ is the present total citation number for the paper.
It is still assumed that all preceding papers compete to be cited by a new paper, but with probabilities following the law given above. The result for 1000 papers now looks like this:
The fit does not seem so good now; nevertheless you can notice some resemblance to the case of individual scientists. Again I refrain from premature conclusions, although it is not entirely surprising that the better known a given paper of a certain author is, the more probable its citation in a new paper becomes.
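The modified rule only changes the sampling step; a self-contained sketch of this “success attracts success” variant (again with my own handling of duplicate picks) is:

```python
import random

def simulate_preferential(n_papers=1000, max_refs=10, seed=2):
    """Citation probability proportional to n+1, where n is the current count."""
    random.seed(seed)
    cites = [0]
    for _ in range(1, n_papers):
        weights = [c + 1 for c in cites]
        picks = random.choices(range(len(cites)), weights=weights,
                               k=min(max_refs, len(cites)))
        for p in set(picks):                      # duplicates are ignored
            cites[p] += 1
        cites.append(0)
    return sorted(cites, reverse=True)

print(simulate_preferential()[:10])
```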
## 5 Discussion
So scientific citations (leaving aside first name frequencies) provide one more example of the Zipf-Mandelbrot regularity. I do not know whether this fact points only to the significantly stochastic nature of the process or to something else. In any case SPIRES, and the World Wide Web in general, give us an excellent opportunity to study the characteristics of the complex process of scientific citations.
I do not know either whether Mandelbrot’s parameters are meaningful in this case, and if they can tell us something non-trivial about the citation process.
The very generality of the Zipf-Mandelbrot regularity can make it rather “shallow”. But remember that the originality of answers to the question of whether there is something serious behind the Zipf-Mandelbrot law depends on how restrictive a framework we assume for the answer. A shallow framework will probably guarantee shallow answers. But if we do not restrict our imagination from the beginning, the answers can turn out to be quite non-trivial. For example, fractals and self-similarity are certainly great and not shallow ideas. This point is very well illustrated by the “Barometer Story”, which I like so much that I’m tempted to reproduce it here (it is reproduced as given in M. Gell-Mann’s book ).
## 6 The Barometer Story – by Dr. A. Calandra
Some time ago, I received a call from a colleague who asked if I would be the referee on the grading of an examination question. It seemed that he was about to give a student a zero for his answer to a physics question, while the student claimed he should receive a perfect score and would do so if the system were not set up against the student. The instructor and the student agreed to submit this to an impartial arbiter, and I was selected…
I went to my colleague’s office and read the examination question, which was, “Show how it is possible to determine the height of a tall building with the aid of a barometer.”
The student’s answer was, “Take the barometer to the top of the building, attach a long rope to it, lower the barometer to the street, and then bring it up, measuring the length of the rope. The length of the rope is the height of the building.”
Now this is a very interesting answer, but should the student get credit for it? I pointed out that the student really had a strong case for full credit, since he had answered the question completely and correctly. On the other hand, if full credit were given, it could well contribute to a high grade for the student in his physics course. A high grade is supposed to certify that the student knows some physics, but the answer to the question did not confirm this. With this in mind, I suggested that the student have another try at answering the question. I was not surprised that my colleague agreed to this, but I was surprised that the student did.
Acting in the terms of the agreement, I gave the student six minutes to answer the question, with the warning that the answer should show some knowledge of physics. At the end of five minutes, he had not written anything. I asked if he wished to give up, since I had another class to take care of, but he said no, he was not giving up, he had many answers to this problem, he was just thinking of the best one. I excused myself for interrupting him to please go on. In the next minute, he dashed off his answer, which was: “Take the barometer to the top of the building, and lean over the edge of the roof. Drop the barometer, timing its fall with a stopwatch. Then, using the formula $`s=at^2/2`$, calculate the height of the building.”
At this point, I asked my colleague if he would give up. He conceded and I gave the student almost full credit. In leaving my colleague’s office, I recalled that the student had said that he had other answers to the problem, so I asked him what they were.
“Oh, yes,” said the student. “There are many ways of getting the height of a tall building with the aid of a barometer. For example, you could take the barometer out on a sunny day and measure the height of the barometer, the length of its shadow, and the length of the shadow of the building, and by the use of simple proportion, determine the height of the building.”
“Fine,” I said. “And the others?”
“Yes”, said the student. “There is a very basic measurement that you will like. In this method, you take the barometer and begin to walk up the stairs. As you climb the stairs, you mark off the length and this will give you the height of the building in barometer units. A very direct method.”
“Of course, if you want a more sophisticated method, you can tie the barometer to the end of a string, swing it as a pendulum, and determine the value of $`g`$ at the street level and at the top of the building. From the difference between the two values of $`g`$, the height of the building can, in principle, be calculated.”
Finally, he concluded, “If you don’t limit me to physics solution to this problem, there are many other answers, such as taking the barometer to the basement and knocking on the superintendent’s door. When the superintendent answers, you speak to him as follows:
Dear Mr. Superintendent, here I have a very fine barometer. If you will tell me the height of this building, I will give you this barometer …”
## acknowledgments
This work was done while the author was visiting Stanford Linear Accelerator Center. I’m grateful to Helmut Marsiske and Lynore Tillim for kind hospitality.
## Note added
After this paper was completed and submitted to the e-Print Archive, I learned that the Zipf distribution in scientific citations was in fact discovered earlier by S. Redner . He also cites some previous studies of citations, which were unknown to me.
I also became aware of G. Parisi’s interesting contribution from Dr. S. Juhos.
I thank S. Redner and S. Juhos for their correspondence.
|
no-problem/9901/hep-ph9901287.html
|
ar5iv
|
text
|
# HERA Physics Beyond the Standard Model
## 1 Introduction
After 6 years of running and major improvements of the electron proton collider machine, the experiments at HERA start to open a new focus of physics analyses looking at processes with cross sections of the order of 1 pb and below. This is the typical value for cross sections at large values of Bjorken $`x`$ and momentum transfer $`Q^2`$, or more generally of processes with large transverse momenta. At the upper limit of the available center-of-mass energy, the cross sections for deep inelastic scattering are equally determined by both electromagnetic and purely weak interactions. Moreover, measurements are possible of rare standard model processes like the production of an additional gauge boson, or of radiative processes in neutral and charged current scattering. Other examples are the production of lepton pairs or multi-jet systems with large invariant masses. These low cross section processes provide a wealth of possibilities to look for deviations from the standard model predictions and constitute important backgrounds for searches for physics beyond the standard model .
New physics may be found in a search for processes which are forbidden in the standard model or have tiny cross sections much below the level of 1 pb; some models of physics beyond the standard model predict such “gold-plated”, background-free signatures. However, more common is the situation that the cross section for conventional standard model processes are only slightly modified. The first task when searching for new physics is therefore to obtain a precise and detailed knowledge of standard model predictions. For deep inelastic scattering this task is twofold: on the one hand one has to provide precise parametrizations of parton distribution functions evolved in $`Q^2`$ according to next-to-leading order of QCD. On the other hand, the cross sections for hard lepton-quark and lepton-gluon subprocesses have to be known also at least to next-to-leading order. The theoretical tools needed to solve the first part of the task are well-established and the precision of parton distribution functions depends mainly on the quality of experimental data . On the other hand, NLO calculations for hard subprocesses, while continuously improving, have not yet reached a completely satisfactory status .
NLO calculations for inclusive scattering, i.e. $`O(\alpha _s)`$ corrections to the structure functions ($`F_L`$, $`\mathrm{\Delta }F_3`$ and, in the $`\overline{\mathrm{MS}}`$ scheme, $`\mathrm{\Delta }F_2`$), have been known for a long time . Considerable progress has been made with respect to jet production in DIS and photoproduction , but $`Z`$ and $`W`$ exchange, relevant at large $`Q^2`$, is not taken into account in the available NLO Monte Carlo programs. Only recently, next-to-leading order corrections to $`W`$ production, $`ep\to W+X`$, including resolved contributions to photoproduction, have been reported . Calculations of NLO corrections to the production of isolated photons are available for deep inelastic scattering and for photoproduction (see also ), but the transition region between small and large $`Q^2`$ has not yet been investigated. For other cases, like for example lepton pair production, NLO, and sometimes even LO, calculations are still missing.
The demands on the precision of standard model predictions vary depending on the size of the cross section. For inclusive measurements in deep inelastic scattering, rather high precision is required when searching for new physics. Specific final states, in particular when they contain one or two particles with high transverse energy, can be left at a precision of $`O(10\%)`$. For “generic searches”, where deviations from standard model predictions are looked for without referring to any specific expectations , a classification of “interesting” final states containing a high-$`p_T`$ charged lepton, large missing transverse momentum, or a high-$`p_T`$ jet, or any combination of them, would be helpful for a systematic investigation of experimental uncertainties.
The motivation to search for new physics at HERA has received a strong impetus from the observation of cross section enhancements in several places. The excess of events at large $`x`$ and large $`Q^2`$ in neutral and charged current scattering has been discussed at length in the literature, see and references therein. Notably, the occurrence of events with an isolated muon and large missing transverse momentum at H1, which are seemingly not a sign of $`W`$ production, presents a challenge for the understanding of the experiments. Data from other experiments have also been interpreted as indications of new physics. I would like to mention only the high-$`E_T`$ dijets at the Tevatron and the cross section for $`e^+e^{-}\rightarrow W^+W^{-}`$ at LEP2 which is too low at the highest $`\sqrt{s}`$ . However, in the latter case, the experiments find a suppression, not an enhancement, and it seems more difficult to find a scenario which predicts the necessary amount of interference with the standard model. It is therefore more widely believed that this is a statistical fluctuation, but the same may be the origin of all the other observations as well. Probably the strongest hint for the presence of new physics is the experimental evidence for neutrino oscillations .
In the following, I have selected some of the alternatives to standard model physics which, if realized in nature, have a good chance of being discovered at HERA. If not, HERA is expected to contribute significantly to setting limits on their respective model parameters. Other topics of interest are discussed in .
## 2 The main alternatives
Despite the great success of the standard model, various conceptual problems provide a strong motivation to look for extensions and alternatives. Two main classes of frameworks can be identified among the many new physics scenarios discussed in the literature:
* Parametrizations of more general interaction terms in the Lagrangian like contact interactions or anomalous couplings of gauge bosons are helpful in order to quantify the agreement of standard model predictions with experimental results. In the event that deviations are observed, they provide a framework that allows one to relate different experiments and cross-check possible theoretical interpretations. Being insufficient by themselves, e.g. because they are not renormalizable, parametrizations are expected to point the way to the correct underlying theory if deviations are observed.
* Models, sometimes even complete theories, provide specific frameworks that allow a consistent derivation of cross sections for conventional and new processes. Examples are the two-Higgs-doublet extension of the standard model, grand unified theories and, most importantly supersymmetry with or without $`\text{/}R_p`$-violation.
The following three examples attracted the most interest when the excess of large-$`Q^2`$ events at HERA was made public . I will try to point out some of the open questions worth studying in future theoretical research.
### 2.1 Contact interactions
The contact interaction (CI) scenario relevant for HERA physics assumes that 4-fermion processes are modified by additional terms in the interaction Lagrangian of the form
$$\mathcal{L}_{\mathrm{CI}}=\underset{\begin{array}{c}i,k=L,R\\ q=u,d,\mathrm{\dots }\end{array}}{\sum }\eta _{ik}^q\frac{4\pi }{\left(\mathrm{\Lambda }_{ik}^q\right)^2}\left(\overline{e}_i\gamma ^\mu e_i\right)\left(\overline{q}_k\gamma _\mu q_k\right).$$
(1)
Similar terms with 4-quark interactions would be relevant for new physics searches at the Tevatron and 4-lepton terms would affect purely leptonic interactions. In equation (1), as usual, only products of vector or axial-vector currents are taken into account since limits on scalar or tensor interactions are very stringent. Such terms are motivated in many extensions of the standard model as effective interactions after having integrated out new physics degrees of freedom like heavy gauge bosons, leptoquarks and others, with masses beyond the production threshold. The normalization with the factor $`4\pi `$ is reminiscent of models which predict CI terms emerging from strong interactions at a large mass scale $`\mathrm{\Lambda }`$. Beyond their meaning as new physics effects, limits on the mass scale of contact interactions serve as an important means to quantify the agreement of experimental data with standard model predictions.
Equation (1) predicts modifications of cross sections for 4-fermion processes in all channels as visualized in Fig. 1. Both enhancement and suppression are expected at the largest possible energies if the CI mass scale is large, depending on the helicity structure of the contact term and its sign $`\eta _{ik}^q`$. Due to their extremely high experimental precision, atomic parity violation experiments are also sensitive to parity-odd combinations of helicities. The important advantage of the contact term approach is that it provides a framework which can be applied to all presently running high-energy experiments. The contact term approach relates predictions for DIS at HERA with hadron production in electron-positron annihilation and Drell-Yan production at the Tevatron.
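To get a feeling for the numbers involved, the short Python sketch below evaluates a back-of-the-envelope, interference-dominated estimate of the relative cross section modification, of order $`2Q^2/(\alpha \mathrm{\Lambda }^2)`$ times a sign. This estimate and the chosen values of $`Q^2`$ and $`\mathrm{\Lambda }`$ are illustrative assumptions only and are not part of the original discussion; the full expression depends on the helicity structure and on the quark charges and couplings.

```python
ALPHA_EM = 1.0 / 137.036   # fine-structure constant

def ci_relative_shift(q2, lam, eta=+1.0):
    """Order-of-magnitude estimate of the relative change of the neutral-current
    DIS cross section from photon-exchange/contact-term interference:
    delta(sigma)/sigma ~ eta * 2 * Q^2 / (alpha * Lambda^2).
    q2 is in GeV^2 and lam (the CI scale Lambda) in GeV."""
    return eta * 2.0 * q2 / (ALPHA_EM * lam ** 2)

if __name__ == "__main__":
    lam = 3000.0                                 # hypothetical CI scale in GeV
    for q2 in (5.0e3, 1.0e4, 2.0e4, 3.0e4):      # Q^2 in GeV^2
        print(f"Q^2 = {q2:8.0f} GeV^2  ->  delta sigma / sigma ~ {ci_relative_shift(q2, lam):+.2f}")
```

The sketch simply shows that, for contact scales in the few-TeV range, relative effects grow linearly with $`Q^2`$ and become sizable only in the highest-$`Q^2`$ bins, where the event samples are smallest.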
Table 1 gives a selection of recent limits on CI mass scales as reported at the 1998 summer conferences (limits for other combinations of helicities are available as well ). The numbers in this table show that all present high-energy experiments have achieved limits in a very similar mass range despite their different center-of-mass energies. Consequently, with a signal at HERA one should expect visible effects at LEP2 and at the Tevatron. In the case of the observation of deviations from the standard model predictions, the combination of results obtained in different experiments and from measurements with polarized beams will be helpful to identify the helicity structure of contact interaction terms.
### 2.2 Leptoquarks
Leptoquarks appear in extensions of the standard model involving unification, technicolor, compositeness, or $`R`$-parity violating supersymmetry. In addition to their couplings to the standard model gauge bosons, leptoquarks have Yukawa-type couplings to lepton-quark pairs which allow their resonant production in $`ep`$ scattering. Their phenomenology in view of the observed excess of large-$`x`$, large-$`Q^2`$ events at HERA has been discussed extensively in the literature ( and references therein). The generally adopted framework described in Ref. is based on the assumption that the Yukawa interactions of leptoquarks should have the following properties:
* renormalizability
* $`SU(3)\times SU(2)\times U(1)`$ symmetry
* conservation of baryon and lepton number
* chirality of the couplings
* couplings exist only to one fermion generation
* no other interactions and/or particles exist
Dropping one of the first two of these assumptions would lead to severe theoretical problems; the other properties are dictated by phenomenology. One would certainly not like to give up assumption 3 since this avoids rapid proton decay. The chirality of couplings is necessary in order to escape the very strong bounds from leptonic pion decays and assumption 5 is a consequence of limits on FCNC processes. The last assumption is made for simplicity only; it seems rather unlikely to be realistic.
These assumptions lead to a rather restricted set of allowed states and their branching fractions $`\beta `$ to a charged lepton final state can only be 1, 0.5, or 0. Those states which are interesting for HERA phenomenology have $`\beta =1`$ and are excluded by Tevatron bounds which require masses above 242 GeV .
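For orientation, the familiar narrow-width estimate of the resonant production cross section, $`\sigma =\pi \lambda ^2q(x_0)/(4s)`$ with $`x_0=M_{LQ}^2/s`$, can be evaluated with a few lines of Python. The toy quark density used below, its normalization, and all numerical values are illustrative assumptions and not part of the original analysis; a real study would use a fitted PDF set.

```python
import math

GEV2_TO_PB = 3.894e8          # conversion factor: 1 GeV^-2 = 3.894e8 pb

def toy_quark_density(x):
    """Toy valence-like quark density, q(x) ~ (1-x)^3 / x, arbitrary normalization."""
    return 2.0 * (1.0 - x) ** 3 / x

def sigma_lq_pb(sqrt_s, m_lq, lam):
    """Narrow-width estimate sigma = pi * lambda^2 * q(x0) / (4 s), x0 = M^2/s."""
    s = sqrt_s ** 2
    x0 = m_lq ** 2 / s
    if x0 >= 1.0:
        return 0.0
    return math.pi * lam ** 2 * toy_quark_density(x0) / (4.0 * s) * GEV2_TO_PB

if __name__ == "__main__":
    for m in (150.0, 200.0, 250.0):               # leptoquark mass in GeV
        print(m, round(sigma_lq_pb(sqrt_s=300.0, m_lq=m, lam=0.03), 2), "pb")
```

Even with this crude input the steep fall of the quark density at large $`x_0`$ is visible: the accessible cross section drops quickly as the leptoquark mass approaches the kinematic limit.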
The leptoquark scenario might remain interesting if it is possible to generalize the approach by relaxing one or more of the above assumptions , notably the last one of the list. The Tevatron mass bounds are avoided if it is possible to adjust the branching ratios in the range $`0.3\stackrel{<}{}\beta \stackrel{<}{}\mathrm{\hspace{0.25em}0.7}`$ . In Ref. a scenario was proposed where two leptoquark states show mixing induced by coupling them to the standard model Higgs boson. Alternatively, interactions with new heavy fields might exist that, after integrating them out, could lead to leptoquark Yukawa couplings as an effective interaction , bypassing renormalizability as a condition in this way, since it is assumed to be restored at higher energies. In the more systematic study of Ref. , LQ couplings arise from mixing of standard model fermions with new heavy fermions that have vector-like couplings, taking into account a coupling to the standard model Higgs. Up to now, no attempt has been made to study in a systematic way the possibility of relaxing the assumption that no intergenerational couplings should exist (see however Ref. ). The most interesting extension of the generic leptoquark scenario is, however, $`R_p`$-violating supersymmetry which is discussed in the next subsection.
### 2.3 $`R_p`$-violating supersymmetry
The Lagrangian of a supersymmetric version of the standard model may contain a superpotential of the form
$$\begin{array}{ccc}W_{\overline{)}R_p}=\hfill & \lambda _{ijk}L_iL_jE_k^c\hfill & \overline{)}L\hfill \\ & +\lambda _{ijk}^{\prime }L_iQ_jD_k^c\hfill & \overline{)}L\text{ (includes LQ-like couplings)}\hfill \\ & +\lambda _{ijk}^{\prime \prime }U_i^cD_j^cD_k^c\hfill & \overline{)}B\hfill \end{array}$$
(2)
which violates lepton or baryon number conservation as indicated. Imposing symmetry under $`R`$-parity (defined as $`R_p=(-1)^{3B+L+2S}`$) forbids the presence of $`W_{\overline{)}R_p}`$. The resulting phenomenology has been searched for at all present high-energy experiments, and HERA may set interesting limits which are complementary to those obtained at the Tevatron . Future experiments at the LHC will extend the search limits for $`R_p`$-conserving supersymmetry considerably.
The present limits on the proton life-time do not forbid interactions of the form $`L_iQ_jD_k^c`$ proportional to $`\lambda _{ijk}^{\prime }`$ provided the $`\lambda _{ijk}^{\prime \prime }`$ are chosen to be zero at the same time. This makes squarks appear as leptoquarks which can be produced on resonance in lepton-quark scattering. In contrast to the generic leptoquark scenarios described above, $`R_p`$-conserving decays of squarks lead to a large number of interesting and distinct signatures (see Ref. and references therein). Characteristically one expects multi-lepton and multi-jet final states. The branching ratios can be adjusted so as to avoid the strict mass limits from the Tevatron.
Most of the analyses done so far assume that only one of the couplings $`\lambda _{ijk}^{\prime }`$ is non-zero and only one squark state is within reach. A more general scenario with two light squark states has been considered in Ref. where it was shown that $`\stackrel{~}{t}_L`$–$`\stackrel{~}{t}_R`$ mixing would lead to a broader $`x`$ distribution than expected for single-resonance production. The possibility of having more than one $`\lambda _{ijk}^{\prime }\ne 0`$ was noticed in Ref. and deserves more theoretical study.
$`R_p`$-violating supersymmetry has also played a role in the search for explanations of the observation of a large number of events with an isolated $`\mu `$ and missing transverse momentum . Events of this kind can originate from $`W`$ production followed by the decay $`W\rightarrow \mu \nu _\mu `$; their observed number is, however, larger than expected and their kinematical properties are atypical for $`W`$ production. An explanation in terms of anomalous $`WW\gamma `$ couplings additionally has to face limits from the Tevatron and LEP2 and leaves open the question of why a similar excess of events is not seen in $`e+\text{/}p_T`$ events.
The observation of $`\mu +\text{/}p_T`$ events could find an explanation in $`R_p`$-violating scenarios if it is assumed that a stop is produced on-resonance at HERA. Figures 2 and 3 show examples for some of the possibilities. The process $`ed\rightarrow \stackrel{~}{t}\rightarrow \mu d^k`$ (Fig. 2a) which predicts $`\mu `$ but no large $`\text{/}p_T`$ in the final state requires two different non-zero $`\lambda ^{\prime }`$ couplings. The relevant product $`\lambda _{1j1}^{\prime }\lambda _{2jk}^{\prime }`$ would induce flavor changing neutral currents and is therefore limited to unreasonably small values for $`1^{\mathrm{st}}`$ and $`2^{\mathrm{nd}}`$ generation quarks in the final state . The scenario shown in Fig. 2b requires a relatively light $`b`$ squark, $`m_{\stackrel{~}{b}}\stackrel{<}{}\mathrm{\hspace{0.25em}120}`$ GeV, and some fine-tuning in order to avoid too large effects on $`\mathrm{\Delta }\rho `$ in electroweak precision measurements. It could be identified by the simultaneous presence of multi-jet final states with $`\text{/}p_T`$ from hadronic decays of the $`W`$. Also the cascade decay shown in Fig. 3a involving $`R_p`$-violation only for the production of the $`\stackrel{~}{t}`$ resonance, not for its decay, seems difficult to achieve since it requires both a light chargino and a long-lived neutralino. This, as well as the even more speculative process shown in Fig. 3b which requires $`R_p`$-violation in the $`L_iL_jE_k^c`$ sector ($`\lambda _{ijk}\ne 0`$) as well, can be checked from the event kinematics: assuming a value for the mass of the decaying $`\stackrel{~}{t}`$, the recoil mass distribution must cluster at a fixed value, the chargino mass. A more detailed discussion of the $`\mu +\text{/}p_T`$ events and their possible theoretical origin can be found in .
## 3 Concluding remarks
The search for new physics effects relies in many cases on trustworthy predictions from the standard model, in particular when generic searches look for “interesting” final states without having at hand a specific model that tells the experimenter what to look for and where, precisely. New physics will always, if at all, show up at the frontier of the experiments, i.e. at the largest energies or transverse momenta where cross sections are smallest and experimental problems most severe. It is therefore a mandatory though nontrivial task to combine the information from as many different experiments as possible. In order to enhance the statistical significance and reduce the probability that experimental deficiencies lead to wrong interpretations, experiments which did not obtain the most stringent limits are also important. The experiments at HERA are therefore guaranteed to contribute to the search for new physics.
|
no-problem/9901/hep-ph9901256.html
|
ar5iv
|
text
|
# 1 Introduction
## 1 Introduction
Progress in the study of multiparticle production has recently been made in two distinct directions among many others. One is in finding measures of event-to-event fluctuations that can probe the production dynamics more deeply than the conventional observables, such as multiplicity distribution and factorial moments . Such measures have been referred to as erraticity , which quantifies the erratic nature of the event structure. The other direction is in the construction of a Monte Carlo generator, called ECOMB , that simulates soft interaction in hadronic collisions, capable of reproducing the intermittency data . ECOMB stands for eikonal color mutation branching, which are the key words of a model that is based on the parton model rather than the string model for low $`p_T`$ processes. In this paper we combine the two, using ECOMB to generate events from which we calculate the erraticity measures. The result should be of considerable interest, since, on the one hand, the erraticity analysis of the NA22 data is currently being carried out, and, on the other, it can motivate the investigation and comparison of erraticities in various different collision processes, ranging from $`e^+e^{}`$ annihilation to heavy-ion collisions.
The study of erraticity originated in an attempt to understand possible chaotic behaviors in quark and gluon jets , since QCD is intrinsically nonlinear. In the search for a measure of chaos it was realized that the fluctuation of the hadronic final states of a parton jet is the only observable feature of the QCD process that can replace the unpredictable trajectories in classical nonlinear dynamics. A multiparticle final state in momentum space is a spatial pattern. Once a measure is found to quantify the fluctuation of spatial patterns, the usefulness of the method goes far beyond the original purpose of characterizing chaoticity in perturbative QCD processes. Many problems involve spatial patterns; they can range from phase transition in condensed matter to galactic clustering in astrophysics. Even continuous time series can be transformed by discrete mapping to spatial patterns . Thus the erraticity analysis, which is the study of the fluctuation of spatial patterns, is more general than the determination of chaotic behavior. Indeed, we have applied it to the study of phase transition in magnetic systems by use of the Ising model , as well as to the characterization of heartbeat irregularities in ECG time series .
Multiparticle production at low $`p_T`$ has always eluded first-principle calculation because of its nonperturbative nature. Various models that simulate the process can generate the average quantities, but fail to reproduce correctly the fluctuations about the averages . In particular, few models can fit the intermittency data . To our knowledge ECOMB is the only one that can reproduce those data , (apart from its predecessor ECCO ). Since that model is tuned to fit the data by the adjustment of several parameters, it is necessary to test its predictions on some new features of the production process. Erraticity is such a feature. The fluctuation of final-state patterns presents a severe test of any model.
ECOMB includes many sources of fluctuations in hadronic collisions. In the framework of the eikonal formalism it allows for fluctuations in impact parameter $`b`$. For any $`b`$ there is the fluctuation of the number $`\mu `$ of cut Pomerons. For any $`\mu `$ there is the fluctuation of the number $`\nu `$ of partons. For any $`\nu `$ the color distribution along the rapidity axis can still fluctuate initially. During the evolution process the local subprocesses of color mutation, spatial contraction and expansion, branching into neutral subclusters, and hadronization into particles or resonances can all fluctuate. Taken together the model can generate such widely fluctuating events that fitting some average quantity such as $`\left\langle n\right\rangle `$ or $`dn/dy`$ does not explore the full extent of its characteristics. The dependence of normalized factorial moments $`F_q`$ on the bin size $`\delta `$, usually called intermittency, probes deeper, but it is nevertheless a measure that is averaged over all events. Erraticity is a true measure of event-to-event fluctuation.
## 2 Erraticity
There are various ways to characterize a spatial pattern. We shall use the horizontal factorial moments. Given the rapidity distribution of a particular event, we first convert it to a distribution in the cumulative variable $`X`$ , in terms of which the average rapidity distribution $`dn/dX`$ is uniform in $`X`$. We then calculate from that distribution for that event the normalized $`F_q`$
$`F_q=\left\langle n(n-1)\mathrm{\cdots }(n-q+1)\right\rangle /\left\langle n\right\rangle ^q,`$ (1)
where $`\left\langle \mathrm{\cdots }\right\rangle `$ signifies the (horizontal) average over all bins, and $`n`$ is the multiplicity in a bin. We emphasize that (1) does not involve any average over events. $`F_q`$ does not fully describe the structure of an event, since at any fixed $`q`$ it is insensitive to the rearrangement of the bins. However, it does capture some aspect of the fluctuations from bin to bin, and is adequate for our purpose.
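As an illustration of Eq. (1) (not part of the original text), the horizontal factorial moment of a single event can be computed from its bin contents with a few lines of Python:

```python
def horizontal_Fq(bin_counts, q):
    """Normalized factorial moment F_q of a single event, Eq. (1):
    horizontal average of n(n-1)...(n-q+1) over the M bins, divided by
    the q-th power of the average bin multiplicity."""
    M = len(bin_counts)
    mean_n = sum(bin_counts) / M
    if mean_n == 0.0:
        return 0.0
    num = 0.0
    for n in bin_counts:
        term = 1.0
        for j in range(q):
            term *= (n - j)
        num += term
    return (num / M) / mean_n ** q

# example: one event with 16 bins in the cumulative variable X
event = [0, 2, 1, 0, 3, 0, 0, 1, 4, 0, 1, 0, 2, 0, 0, 1]
print(horizontal_Fq(event, q=2))
```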
Since $`F_q`$ fluctuates from event to event, one obtains a (vertical) distribution $`P(F_q)`$ after many events. Let the vertical average of $`F_q`$ determined from $`P(F_q)`$ be denoted by $`\left\langle F_q\right\rangle _v`$ . Then in terms of the normalized moments for separate events
$`\mathrm{\Phi }_q=F_q/\left\langle F_q\right\rangle _v,`$ (2)
we can define the vertical $`p`$th order moments of the normalized $`q`$th order factorial (horizontal) moments
$`C_{p,q}=\left\langle \mathrm{\Phi }_q^p\right\rangle _v.`$ (3)
Erraticity refers to the power law behavior of $`C_{p,q}`$
$`C_{p,q}\propto M^{\psi _q(p)},`$ (4)
where $`M`$ is the number of bins, $`1/\delta `$, and the length in $`X`$ space is 1. $`\psi _q(p)`$ is referred to as the erraticity exponent. If the spatial pattern never changes from event to event, $`P(F_q)`$ would be a delta function at $`\mathrm{\Phi }_q=1`$, and $`C_{p,q}`$ would be 1 at all $`M`$, $`p`$, and $`q`$, resulting in $`\psi _q(p)=0`$. The larger $`\psi _q(p)`$ is, the more erratic is the fluctuation of the spatial patterns.
Since $`\psi _q(p)`$ is an increasing function of $`p`$ with increasing slope, an efficient way to characterize erraticity with one number (for every $`q`$) is simply to use the slope at $`p=1`$, i.e.
$`\mu _q={\displaystyle \frac{d}{dp}}\psi _q(p)|_{p=1}.`$ (5)
It is referred to as the entropy index . Experimentally, it is easier to determine first an entropy-like quantity $`\mathrm{\Sigma }_q`$ directly from $`\mathrm{\Phi }_q`$:
$`\mathrm{\Sigma }_q=\left\langle \mathrm{\Phi }_q\mathrm{ln}\mathrm{\Phi }_q\right\rangle _v,`$ (6)
which follows from (3) and
$`\mathrm{\Sigma }_q=dC_{p,q}/dp|_{p=1},`$ (7)
and then to determine $`\mu _q`$ from $`\mathrm{\Sigma }_q`$ using
$`\mu _q={\displaystyle \frac{\mathrm{\Sigma }_q}{\mathrm{ln}M}},`$ (8)
provided that $`C_{p,q}`$ has the scaling behavior (4). In it is found that $`\mu _q`$ is larger for quark jets than for gluon jets, indicating that the branching process of the former is more chaotic, or, in other words, the event-to-event fluctuation is more erratic.
If the moments $`C_{p,q}`$ do not have the exact scaling behavior in $`M`$, as in (4), but have similar nonlinear dependences on $`M`$, we can consider a generalized form of scaling
$`C_{p,q}(M)\propto g(M)^{\stackrel{~}{\psi }(p,q)}.`$ (9)
If (9) is approximately valid for a common $`g(M)`$ for all $`p`$ and $`q`$, it then follows from (7) that
$`\mathrm{\Sigma }_q(M)\simeq \stackrel{~}{\mu }_q\mathrm{ln}g(M),`$ (10)
where
$`\stackrel{~}{\mu }_q={\displaystyle \frac{d}{dp}}\stackrel{~}{\psi }(p,q)|_{p=1}.`$ (11)
Despite the similarity between (5) and (11) , $`\stackrel{~}{\mu }_q`$ is distinctly different from $`\mu _q`$, and the two should not be compared with one another unless $`g(M)=M`$.
If (10) is indeed good for a range of $`q`$ values, then we expect a linear dependence of $`\mathrm{\Sigma }_q`$ on $`\mathrm{\Sigma }_2`$ as $`M`$ is varied. Let the slope of such a dependence be denoted by $`\omega _q`$, i.e.,
$`\omega _q={\displaystyle \frac{\mathrm{\Sigma }_q}{\mathrm{\Sigma }_2}}.`$ (12)
Then we have
$`\stackrel{~}{\mu }_q=\stackrel{~}{\mu }_2\omega _q.`$ (13)
A variation of this scheme that makes use of an extra control parameter $`r`$ in the problem is considered in . It is found there that the entropy indices determined that way are as effective as Lyapunov exponents in characterizing classical nonlinear dynamical systems.
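The chain of definitions (1)–(8), together with the slopes $`\omega _q`$ of Eq. (12), can be sketched in a few lines of Python. The toy event generator below is only a stand-in for real data or for a model such as ECOMB, and all numerical choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fake_event(mean_mult=9, M=32):
    """Toy stand-in for one event: a Poisson number of particles dropped
    uniformly in [0,1) of the cumulative variable X, histogrammed into M bins."""
    x = rng.random(rng.poisson(mean_mult))
    return np.histogram(x, bins=M, range=(0.0, 1.0))[0]

def Fq(counts, q):
    """Horizontal factorial moment of one event, Eq. (1)."""
    n = counts.astype(float)
    num = np.ones_like(n)
    for j in range(q):
        num *= n - j
    return num.mean() / n.mean() ** q

def sigma_q(M, q, n_events=5000):
    """Sigma_q of Eq. (6): vertical average of Phi_q ln Phi_q."""
    F = []
    while len(F) < n_events:
        ev = fake_event(M=M)
        if ev.sum() > 0:
            F.append(Fq(ev, q))
    phi = np.array(F) / np.mean(F)             # Phi_q of Eq. (2)
    vals = np.zeros_like(phi)
    pos = phi > 0.0
    vals[pos] = phi[pos] * np.log(phi[pos])
    return vals.mean()

Ms = [4, 8, 16, 32]
S2 = np.array([sigma_q(M, 2) for M in Ms])
for q in (3, 4, 5):
    Sq = np.array([sigma_q(M, q) for M in Ms])
    omega_q = np.polyfit(S2, Sq, 1)[0]          # slope of Sigma_q vs Sigma_2, Eq. (12)
    print(q, round(float(omega_q), 2))
```

For an uncorrelated toy like this the resulting numbers are of course not meaningful physically; the point is only to make the sequence of operations — per-event $`F_q`$, normalization to $`\mathrm{\Phi }_q`$, vertical averaging, and the $`\mathrm{\Sigma }_q`$ versus $`\mathrm{\Sigma }_2`$ fit — explicit.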
## 3 Scaling Behaviors
The erraticity analysis described above involves only measurable quantities, so it can be directly applied to the experimental data. The NA22 data at $`\sqrt{s}=22`$ GeV are ideally suited for this type of analysis, since $`F_q`$ fluctuates widely from event to event . The nuclear collision data, such as those of NA49, can also be studied, but $`p_T`$ cuts should be made to reduce the hadron multiplicity to be analyzed, thereby enhancing the erraticity to be quantified.
Here we apply the analysis to hadronic collisions generated by ECOMB. The parameters are tuned to fit $`\left\langle n\right\rangle `$, $`P_n`$, $`dn/dy`$ and $`\left\langle F_q\right\rangle _v`$ of the NA22 data . Without any further adjustment of the parameters in the model we calculate $`C_{p,q}(M)`$, which are therefore our predictions for hadronic collisions at 22 GeV. The results from simulating $`3\times 10^4`$ Monte Carlo events are shown on the left side of Fig. 1. The lines are drawn to guide the eye.
From the points shown, it is clear that the dependences of $`C_{p,q}`$ on $`M`$ in the log-log plots are not very linear, especially for the more reliable cases of $`q=2`$ and $`3`$, where the statistics are higher. Thus the power-law behavior in (4) is not well satisfied. Since the general behaviors of $`C_{p,q}`$ are rather similar in shape, we can regard $`C_{2,2}`$ as the reference that carries the typical dependence on $`M`$, and examine $`C_{p,q}`$ vs $`C_{2,2}`$ when $`M`$ is varied as an implicit variable. The results are shown on the right side of Fig. 1. We have left out the highest points that correspond to the smallest bin size, since they show saturation at $`q>2`$. We have also left out the points corresponding to $`\mathrm{ln}M=0`$, since the scaling behaviors do not extend to the biggest bin size. The straight lines are linear fits of the points shown and lend support to the scaling behavior
$`C_{p,q}\propto C_{2,2}^{\chi (p,q)}.`$ (14)
The slopes of the fits are $`\chi (p,q)`$, which are shown in Fig. 2. One may regard $`\chi (p,q)`$ as a representation of the erraticity properties of the particle production data, when there is no strict scaling law as in (4).
The behavior of $`\chi (p,q)`$ exhibited in Fig. 2 can be described analytically, if we fit the points by a quadratic formula for each $`q`$. The result is shown by the lines in Fig. 2. Evidently, the fits are excellent. The properties of the smooth behaviors can be further summarized by their derivatives at $`p=1`$:
$`\chi _q^{\prime }\equiv {\displaystyle \frac{d}{dp}}\chi (p,q)|_{p=1}.`$ (15)
The values of $`\chi _q^{\prime }`$ are 0.834, 2.818, 5.243 and 7.847 for $`q=2,\mathrm{\dots },5`$, and are shown in Fig. 3. We suggest that these values of $`\chi _q^{\prime }`$ be used to compare with the experimental data.
Although $`C_{p,q}(M)`$ do not satisfy (4), we can consider the more general form (9). If the same function $`g(M)`$ is good enough in (9) for all $`p`$ and $`q`$, then it follows from (14) that
$`\chi (p,q)=\stackrel{~}{\psi }(p,q)/\stackrel{~}{\psi }(2,2).`$ (16)
Using (11) we then have
$`\stackrel{~}{\mu }_q=\stackrel{~}{\psi }(2,2)\chi _q^{\prime }.`$ (17)
It should be noted that, whereas $`\chi _q^{\prime }`$ follows only from the scaling property of (14), the determination of $`\stackrel{~}{\psi }(2,2)`$, and therefore $`\stackrel{~}{\mu }_q`$, requires the knowledge of $`g(M)`$ in (9).
To determine $`g(M)`$, we write it in the form
$`\mathrm{ln}g(M)=(\mathrm{ln}M)^a.`$ (18)
By varying $`a`$, we can find a good linear behavior of $`\mathrm{ln}C_{2,2}`$ vs $`\mathrm{ln}g(M)`$, as shown by the dashed line in Fig. 4 for $`a=1.8`$. The corresponding value of $`\stackrel{~}{\psi }(2,2)`$ determined by the slope of the straight-line fit is 0.119. Using that in (17) yields a set of values of $`\stackrel{~}{\mu }_q`$, which are shown in Fig. 5 by the open-circle points. In particular, we have
$`\stackrel{~}{\mu }_2=0.099,`$ (19)
a quantity that has a separate significance below.
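The search for the exponent $`a`$ in Eq. (18) described above amounts to a one-parameter scan for the most linear relation between $`\mathrm{ln}C_{2,2}`$ and $`(\mathrm{ln}M)^a`$. A minimal sketch of such a scan (with made-up input values, purely for illustration) is:

```python
import numpy as np

# made-up illustration: ln C_{2,2} measured at several bin numbers M
M   = np.array([2, 4, 8, 16, 32])
lnC = np.array([0.020, 0.055, 0.110, 0.185, 0.280])

best = None
for a in np.arange(1.0, 3.0, 0.01):
    x = np.log(M) ** a                      # ln g(M) = (ln M)^a, Eq. (18)
    slope, intercept = np.polyfit(x, lnC, 1)
    resid = lnC - (slope * x + intercept)
    chi2 = float(np.sum(resid ** 2))
    if best is None or chi2 < best[0]:
        best = (chi2, a, slope)

print("best a =", round(best[1], 2))
print("slope (estimate of psi~(2,2)) =", round(best[2], 3))
```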
We remark that in checking the validity of (9) for values of $`p`$ and $`q`$ other than 2, one can improve the linearity of the points for each $`p`$ and $`q`$ by slight adjustments of the value of $`a`$. If there is a range of possible $`g(M)`$ that depends on $`p`$ and $`q`$ to yield the best fits, however small the variations in $`a`$ may be, the scheme defeats the point of defining a universal $`\stackrel{~}{\psi }(p,q)`$. We thus propose that the emphasis of the erraticity analysis should be placed on (14), which is independent of $`g(M)`$, and that (9) is examined only for $`p=2`$, $`q=2`$ so that (17) can be evaluated.
Since $`\stackrel{~}{\mu }_q`$ is distinct from $`\mu _q`$, we cannot compare our result on $`\stackrel{~}{\mu }_q`$ with the theoretical values of $`\mu _q`$ found for quark and gluon jets , nor with the experimental values of $`\mu _q`$ determined from $`pp`$ collisions at 400 GeV/c (NA27) .
The values of $`\stackrel{~}{\mu }_q`$ can also be determined independently by use of $`\mathrm{\Sigma }_q(M)`$. From the definition in (6) we have calculated $`\mathrm{\Sigma }_q`$ as functions of $`\mathrm{ln}M`$, as shown in Fig. 6(a). Not surprisingly, the dependences are not linear. However, when $`\mathrm{\Sigma }_q`$ is plotted against $`\mathrm{\Sigma }_2`$ in Fig. 6(b), they all fall on straight lines, except for the point corresponding to the smallest bin for $`q=5`$ (which we have left out for the fit). The slopes, which give $`\omega _q`$ defined in (12), are 1.0, 3.244, 6.0, and 9.101 for $`q=2,\mathrm{\dots },5`$. They are shown in Fig. 7. If we examine (10) for $`q=2`$ only, and plot $`\mathrm{\Sigma }_2`$ vs $`\mathrm{ln}g(M)`$ with $`a=1.8`$, as in Fig. 4, we obtain a linear behavior with a slope
$`\stackrel{~}{\mu }_2=0.095.`$ (20)
This value is to be compared with that in (19) with only 4% discrepancy. Of the two methods of determining $`\stackrel{~}{\mu }_2`$, this latter approach is more reliable, since the derivative in $`p`$ at $`p=1`$ is done analytically in the definition of $`\mathrm{\Sigma }_q`$ in (7), whereas in the former approach the differentiation is done in (15) using the fitted curve in Fig. 2. Substituting (20) into (13), we can determine the values of $`\stackrel{~}{\mu }_q`$ for $`q>2`$ from the values of $`\omega _q`$ in Fig. 7. The result is shown by the solid points in Fig. 5. Clearly, the two methods yield essentially the same result.
Another way to check the degree of consistency of the two methods, independent of the details on $`g(M)`$, is to examine the ratio $`r_q=\chi _q^{\prime }/\omega _q`$. The quantities in that ratio are derived from the straight-line fits of $`\mathrm{ln}C_{p,q}`$ vs $`\mathrm{ln}C_{2,2}`$ and $`\mathrm{\Sigma }_q`$ vs $`\mathrm{\Sigma }_2`$ (as in Fig. 1 and Fig. 6) without resorting to an equation such as (18). According to (13) and (17), the ratio $`r_q`$ should be a constant, independent of $`q`$. From the values of $`\chi _q^{\prime }`$ and $`\omega _q`$ given above in connection with Figs. 3 and 7, we find that $`r_q=0.834,0.867,0.874`$, and 0.862 for $`q=2,3,4,5`$. The average is 0.86, so the standard deviation is at the 1-2% level. Evidently, the two methods are quite consistent, whatever $`g(M)`$ may be. From (13) and (17), one would expect $`r_q`$ to be $`\stackrel{~}{\mu }_2/\stackrel{~}{\psi }(2,2)`$, which according to the numbers given in Fig. 4, is 0.798. The discrepancy from 0.86 is nearly 7%. Thus the disagreement of the values of $`\stackrel{~}{\mu }_q`$ in Fig. 5, though not large, has the same root as the disagreement between (19) and (20), namely, the necessity to use a specific form of $`g(M)`$. Nevertheless, at the level of inaccuracy of 4%, which is comparable to the typical uncertainty in the experimental data, the value of $`\stackrel{~}{\mu }_2`$ given by either (19) or (20) clearly provides an effective measure of erraticity in soft production.
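The consistency check just described is simple arithmetic; for instance, with the values of $`\chi _q^{\prime }`$ and $`\omega _q`$ quoted above, the ratios $`r_q`$ and their average can be reproduced directly:

```python
chi_prime = {2: 0.834, 3: 2.818, 4: 5.243, 5: 7.847}
omega     = {2: 1.0,   3: 3.244, 4: 6.0,   5: 9.101}

r = {q: chi_prime[q] / omega[q] for q in chi_prime}
print(r)                                   # compare with the values quoted above
print(round(sum(r.values()) / len(r), 2))  # ~0.86
```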
## 4 Conclusion
In conclusion, we recapitulate the two essential points of this paper. One is the prediction of ECOMB on the nature of fluctuations of the factorial moments $`F_q`$ from event to event. The other is the proposed method of summarizing the scaling behaviors of $`C_{p,q}`$ that do not have strict power-law dependences on the bin size. The two aspects of this paper converge on the new erraticity measures $`\chi (p,q)`$, $`\chi _q^{\prime }`$, $`\omega _q`$ and $`\stackrel{~}{\mu }_q`$.
It is hoped that the data from both NA22 and NA27 can be analyzed in terms of these measures so that the dynamics of soft interaction contained in ECOMB can be checked by the experiments.
The proposed measures of erraticity are, of course, more general than the application made here to soft production. Event-to-event fluctuation has recently become an important theme in collisions of all varieties: $`e^+e^{}`$ annihilation, leptoproduction, hadronic collisions at very high energies where hard subprocesses are important, and heavy-ion collisions. What was lacking previously is an efficient measure of such fluctuations. The erraticity measures proposed in , now generalized to $`\chi (p,q)`$, $`\chi _q^{\prime }`$, $`\omega _q`$ and $`\stackrel{~}{\mu }_q`$, are well suited for that purpose. They may be redundant, if strict scaling in $`M`$ is good enough to give the erraticity indices $`\psi (p,q)`$. The method of treating less-strict scaling properties proposed here may well be more generally applicable to the wide range of collision processes amenable to erraticity study.
### Acknowledgments
This work was supported in part by U.S. Department of Energy under Grant No. DE-FG03-96ER40972 and by the National Science Foundation under contract No. PHY-93-21949.
## Figure Captions
Log-log plots of $`C_{p,q}`$ versus $`M`$ on the left side and versus $`C_{2,2}`$ on the right side. The lines on the left side are to guide the eye, while the ones on the right side are linear fits.
The slopes of the linear fits on the right side of Fig. 1 are plotted against $`p`$ for various values of $`q`$. The lines are fits by a quadratic formula.
The derivatives of $`\chi (p,q)`$ in Fig. 2 at $`p=1`$.
The open circles are for $`C_{2,2}`$ and the solid points are for $`\mathrm{\Sigma }_2`$. The lines are linear fits, whose slopes are $`\stackrel{~}{\psi }(2,2)`$ and $`\stackrel{~}{\mu }_2`$, respectively.
$`\stackrel{~}{\mu }_q`$ determined in two different ways: Eq. (17) for the open circles and Eq. (13) for the solid points.
(a) $`\mathrm{\Sigma }_q`$ vs ln $`M`$ for various $`q`$; (b) $`\mathrm{\Sigma }_q`$ vs $`\mathrm{\Sigma }_2`$ with the lines being linear fits.
The slopes of the straight lines in Fig. 6(b), $`\omega _q`$, plotted against $`q`$.
|
no-problem/9901/hep-ph9901249.html
|
ar5iv
|
text
|
# 1 Total cross section for the photoproduction of cc̄ pairs, as a function of the γp centre-of-mass energy: next-to-leading order QCD predictions versus experimental results.
## Acknowledgements
I wish to thank the Organizing Committee for the warm hospitality in Durham. I also thank Stefano Frixione and Jenny Williams for useful suggestions.
|
no-problem/9901/hep-ph9901263.html
|
ar5iv
|
text
|
# A new approach for the vertical part of the contour in thermal field theories
## 1 Introduction
When deriving the matrix Feynman rules of the closed time path (CTP in the following) formalism, an intriguing problem is to understand how one can reach a 2-component matrix formulation from the path represented in figure 1.
Indeed, it is widely accepted that each component of the matrix formalism corresponds to one of the two horizontal branches $`𝒞_1`$ and $`𝒞_2`$ of the path . In this picture, there is no room for the vertical part $`𝒞_v`$. Therefore, most of the derivations of the matrix formalism found in the literature got rid in some way of the vertical part of the path. Usually, one invokes the limits $`t__I\rightarrow -\mathrm{\infty }`$ and $`t__F\rightarrow +\mathrm{\infty }`$, in conjunction with an ad hoc choice of the asymptotic properties of the source $`j(x)`$ coupled to the field in the generating functional.
This derivation seems highly artificial, as one can judge by the numerous attempts to find other, less ad hoc, justifications. Among them, the most “revolutionary” approach was that of Niegawa who rejected the hypothesis according to which the vertical part of the time path does not contribute to Green’s functions in the real time formalism. Instead, he argued that it does contribute in some cases, and this contribution can be taken into account by the so called “$`n(|k^o|)`$ prescription”. Although correct in his statement, his proof is controversial since he still makes use of the artificial limits $`t__I\rightarrow -\mathrm{\infty }`$ and $`t__F\rightarrow +\mathrm{\infty }`$.
In a previous paper , I attempted to show this result without using these limits at all. Indeed, I started by showing that (i) the vertical part of the path contributes in general, (ii) the time integrations involved in the calculation of Feynman diagrams in time coordinates give a result which is totally independent of the time $`t__I`$ and $`t__F`$. As a consequence, all the arguments based upon specific limits for $`t__I`$ and $`t__F`$ were suspect since nothing nontrivial could occur when taking these limits. After that, I showed in the case of self-energy insertions between propagators that the contribution from the vertical part is precisely the term that corresponds to the difference between the $`n(|k^o|)`$ prescription and the $`n(\omega _𝒌)`$ prescription. This proof was quite intricate, mainly because it involved dealing with delicate products of distributions, and seemed also to leave open the possibility for the vertical part to contribute in many other cases.
After that, the intricacies of the products of distributions were elegantly avoided by Le Bellac and Mabilat , who introduced a regularization of the propagators, which had the main property of preserving the Kubo Martin Schwinger (KMS in the following) boundary conditions, as well as the holomorphy of the propagators.<sup>2</sup><sup>2</sup>2Other justifications of the matrix formalism used regularizations as well, like . But, in these papers, the regularization scheme had the effect to break KMS and to make the contribution of the vertical part artificially vanish. Then, one introduces by hand the “$`n(|k^o|)`$ prescription” in order to reinforce KMS, and it happens that this prescription is precisely what was needed to take into account the contribution of the vertical part, as we shall see later. A consistent justification of the matrix formalism should never need to “reinforce KMS” since it should never use intermediate steps that break KMS. In their proof, the necessity of using $`|k^o|`$ as the argument of statistical factors appeared quite naturally, but in a way which was not obviously related to the vertical part of the path.
My purpose in the present paper is to present an alternative proof of this result, in a way which avoids all the intricacies of the multiplication of distributions, while being more complete than since all the situations in which the vertical part of the path can contribute are clearly identified. The part of the proof dealing with self-energy insertions, which was nontrivial in , is now quite straightforward thanks to the use of simple algebraic properties of the contour integration.
The structure of this paper is as follows. In section 2, I start by recalling the origin of the vertical part of the path, and its precise role for the consistent perturbative expansion of a theory of quantum fields in thermal equilibrium. Then, in section 3, I explain why performing the Fourier transform to go from the time variable to the energy variable is more complicated at finite temperature than it is at zero temperature. In this section, I also derive the matrix formalism in a naive (and incorrect) way, assuming first for the sake of simplicity that the vertical part of the path does not contribute.
Section 4 is devoted to a detailed study of the circumstances in which the vertical part of the path contributes. It is shown that it can contribute only in two simple cases: vacuum diagrams and self-energy insertions.
In section 5, I study the effect of the vertical part in the case of self-energy insertions,<sup>3</sup><sup>3</sup>3The necessity of modified Feynman rules in order to calculate vacuum diagrams in the real time formalism has been known for a long time. Justifications can be found in . and show how the matrix Feynman rules must be modified to properly generate the contribution of the vertical part. This proof starts with the almost trivial case of repeated concatenation of free propagators, which is then generalized to the case of general self-energy insertions by simple algebraic arguments. Finally, the last section is devoted to concluding remarks. Some technical details are relegated to two appendices.
## 2 Origin of the vertical part
In , I derived the perturbative expansion of a thermal field theory in time coordinates by using the canonical approach in order to make more explicit the role of the vertical part of the path. I will just summarize here the main points of this derivation. When doing this perturbative expansion, the main difference with respect to the zero temperature situation is related to the fact that the parameter of the expansion (the coupling constant of the theory) appears not only in the dynamics of the fields via their evolution equation, but also in the averaging procedure itself via the density operator $`e^{-\beta H}`$ ($`\beta \equiv 1/T`$, $`k__B=1`$). Indeed, the Hamiltonian $`H`$ contains the coupling constant. One then sees easily that the two horizontal branches are necessary in order to expand in powers of the coupling constant the time evolution of the fields. But in order to have a consistent perturbative expansion (in particular to preserve thermal equilibrium order by order in the coupling constant), one needs also to expand in powers of the coupling constant the density operator itself. This is done easily thanks to the following formula <sup>4</sup><sup>4</sup>4The reason why such a formula is possible is related to the analogy between the canonical density operator and an evolution operator. The role of the vertical part of the path seems to have remained unnoticed by particle physicists, who usually derived the perturbative expansion of thermal field theories by functional methods based on the Feynman-Kac formula .
$$e^{-\beta H}=e^{-\beta H_o}\;\mathrm{T}_c\mathrm{exp}\left[i\int _{𝒞_v\times \mathbb{R}^3}\mathcal{L}_{\mathrm{in}}(\varphi _{\mathrm{in}}(x))d^4x\right],$$
(1)
where $`H_o`$ is the free part of the Hamiltonian, $`𝒞_v`$ is a path in the complex time plane going from $`t__I`$ to $`t__I-i\beta `$, $`\mathcal{L}_{\mathrm{in}}`$ is the interaction part of the Lagrangian density, and $`\varphi _{\mathrm{in}}`$ is the field in the interaction picture (i.e. a free field). With this formula, it is now obvious that the perturbative expansion of the density operator itself is made possible by the addition of the vertical part $`𝒞_v`$ to the previous two horizontal branches.
The physical meaning of the vertical part $`𝒞_v`$ is now quite clear: this piece of the contour is needed because the interaction modifies the equilibrium density operator. Therefore, it is likely that this vertical part is crucial for the consistency of the perturbative expansion, and that arguments suggesting that it can simply be dropped are wrong.<sup>5</sup><sup>5</sup>5We are now in a position to understand why enforcing by hand KMS and taking into account the vertical part can be related: without the vertical part, the perturbative expansion would be inconsistent because the density operator would not be expanded in powers of the coupling constant. In other words, statistical equilibrium, i.e. KMS, would be broken.
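The content of Eq. (1), restricted to the vertical part of the contour, is the familiar imaginary-time identity $`e^{-\beta H}=e^{-\beta H_o}\mathrm{T}_\tau \mathrm{exp}[-\int _0^\beta d\tau V_I(\tau )]`$ with $`V_I(\tau )=e^{\tau H_o}Ve^{-\tau H_o}`$. The following Python sketch (purely illustrative, with an arbitrary finite-dimensional Hamiltonian, and not part of the original derivation) checks this operator identity numerically:

```python
import numpy as np

def expm_sym(A):                       # matrix exponential of a real symmetric matrix
    w, v = np.linalg.eigh(A)
    return (v * np.exp(w)) @ v.T

rng = np.random.default_rng(0)
dim, beta, steps = 4, 1.3, 4000
h0 = rng.uniform(0.0, 2.0, dim)        # free Hamiltonian (taken diagonal)
H0 = np.diag(h0)
V  = rng.normal(size=(dim, dim)); V = 0.1 * (V + V.T)   # interaction

lhs = expm_sym(-beta * (H0 + V))       # full Boltzmann operator

# e^{-beta H0} times the tau-ordered exponential of V_I(tau), larger tau to the left
dtau = beta / steps
ordered = np.eye(dim)
for j in range(steps):
    tau = (j + 0.5) * dtau
    V_I = np.diag(np.exp(tau * h0)) @ V @ np.diag(np.exp(-tau * h0))
    ordered = (np.eye(dim) - dtau * V_I) @ ordered
rhs = np.diag(np.exp(-beta * h0)) @ ordered

print(np.max(np.abs(lhs - rhs)))       # small, of order dtau
```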
## 3 From time to energy - First approach to the RTF
### 3.1 From time to energy
At this stage, we have definite Feynman rules to calculate perturbatively a Green’s function in time coordinates: at each vertex, one must integrate over time along the whole path $`𝒞\equiv 𝒞_1\cup 𝒞_2\cup 𝒞_v`$. As at zero temperature, the problem is that these Feynman rules are not very convenient for practical calculations. One usually prefers to work in the Fourier space with the conjugate variables $`(k^o,𝒌)`$. Since in the sector of spatial variables, everything is similar to the zero temperature case, going from position to 3-momentum is trivial and works exactly in the same way as at $`T=0`$ (in the following, I assume that the transformation $`𝒙\rightarrow 𝒌`$ has already been performed, and I do not write explicitly the spatial variables).
Problems arise when one tries to go from time to energy. Indeed, the property behind the usefulness of the Fourier transform is the relation existing between the Fourier transform (FT in the following) and the convolution product. More precisely, given two 2-point functions $`f(x_1^o,x_2^o)`$ and $`g(x_1^o,x_2^o)`$, one expects the FT to satisfy the identity
$$FT(f*g)(k_1^o,k_2^o)=[FT(f)(k_1^o,k_2^o)][FT(g)(k_1^o,k_2^o)].$$
(2)
The problem comes from the fact that the relevant convolution product at finite temperature is defined by an integration along the path $`𝒞`$ instead of the real axis $`\mathbb{R}`$:
$$(f*g)(x_1^o,x_2^o)\equiv \int _𝒞dy^o\,f(x_1^o,y^o)g(y^o,x_2^o).$$
(3)
Obviously, the usual definition of the Fourier transform cannot accommodate the relations of Eq. (3) and Eq. (2). This definition should be modified in order to make these relations compatible. A first solution that I will not develop here is provided by the so called imaginary time formalism, which can be seen in this context as a work-around for the above problem. More precisely, one makes use of the $`i\beta `$-periodicity properties of thermal Green’s functions in order to expand them in Fourier series, the Fourier modes (called Matsubara frequencies in this context) being imaginary since the period is imaginary.
### 3.2 Naive approach to the RTF
Another solution is provided by the matrix formulation (often called real time formalism when the context makes obvious the fact that we are in the Fourier space). For each $`n`$-point function, one defines $`2^n`$ distinct Fourier transforms labelled by $`n`$ superscripts $`a_i=1`$ or $`2`$, via the relations:<sup>6</sup><sup>6</sup>6I am implicitly assuming that $`𝒞_{1,2}`$ are extended from $`-\mathrm{\infty }`$ to $`+\mathrm{\infty }`$ in this definition of the Fourier transforms, in order to make them as close as possible to the usual one. Nevertheless, it should be emphasized that this limit has no effect as far as the contribution of the vertical part is concerned, since the integrand $`G(x_1,\mathrm{\dots },x_n)`$ is totally independent of the times $`t__I`$ and $`t__F`$ (see appendix A).
$$G^{\{a_i\}}(k_1,\mathrm{\dots },k_n)\equiv \left[\prod _{i=1}^{n}\int _{𝒞_{a_i}\times \mathbb{R}^3}d^4x_i\,e^{ik_i\cdot x_i}\right]G(x_1,\mathrm{\dots },x_n).$$
(4)
Then, one would like to have Feynman rules enabling a direct calculation of these new Green’s functions, without going through the stage of the function in time coordinates. As a first approach, let us first assume that the vertical part $`𝒞_v`$ does not contribute to the calculation of the function $`G(x_1,\mathrm{\dots },x_n)`$. This hypothesis has been at the basis of most of the attempts to derive the matrix formalism, and the focus has mainly been on finding arguments to justify it. In the present paper, I will first derive the Feynman rules of the real time formalism in situations where the vertical part does not contribute. If we assume that the vertical part of the path does not contribute in the convolution $`P(x_1,x_2)\equiv (F*G)(x_1,x_2)`$, then we have obviously in terms of the previously defined Fourier transforms:
$$P^{ab}=F^{a1}G^{1b}-F^{a2}G^{2b}=(F\tau _3G)^{ab},$$
(5)
where the Pauli matrix $`\tau _3\equiv \mathrm{Diag}(1,-1)`$ deals with the minus sign associated with type $`2`$ indices. This relation can be seen as a particular form of Eq. (2), the product of the right hand side being a matrix product. Therefore, we see that transforming 2-point functions into $`2\times 2`$ matrices makes it possible to generalize the usual relationship between the Fourier transform and the convolution product to the thermal case.
Going on along this line, we would of course obtain the standard matrix formulation for the real time formalism in Fourier space. Nevertheless, this justification is valid only for situations in which the vertical branch does not contribute. At this point, the standard way has been to try to get rid of the vertical part. Instead of that, I will determine precisely the situations in which it contributes, and show that its contribution can be included in the matrix formalism by a minor modification of its Feynman rules.
## 4 Diagrams in which the vertical part contributes
### 4.1 Example
A simple example showing that the vertical part can contribute to the result of a path integration is provided by the convolution of two bare propagators. Such a calculation would appear for instance in the insertion of a mass term. This calculation has been done explicitly in and shows the following features:
(i) The vertical part is mandatory in order to have a result invariant under time translation.
(ii) The vertical part makes it possible to get rid of the $`t__I`$ dependence that would show up in the result if one were using only $`𝒞_1\cup 𝒞_2`$ (the validity of this result is quite general, see appendix A).
(iii) A $`t__I`$-independent, invariant under time translation, contribution of the vertical part of the path is left in the result.
The existence of such an explicit example definitively rules out the justifications based on the initial hypothesis that the vertical part does not contribute.
### 4.2 Generic contour integration
It is convenient to work with the mixed coordinates $`(x^o,𝒌)`$ in which the bare propagator has the following explicit expression:
$$G_o(x^o,y^o;𝒌)=\frac{1}{2\omega _𝒌}\sum _{s=\pm }G_{o,s}^{\omega _𝒌}(x^o,y^o),$$
(6)
with
$$G_{o,s}^E(x^o,y^o)\equiv e^{-isE(y^o-x^o)}\left[\theta _c(s(y^o-x^o))+n__B(E)\right]$$
(7)
and
$$\omega _𝒌\equiv \sqrt{𝒌^2+m^2}\qquad n__B(E)\equiv \frac{1}{e^{\beta E}-1}.$$
(8)
Because of the structure of this bare propagator, it is a priori obvious that every time integration can be reduced to integrals of the following type:
$$I_𝒞(\mathrm{\Sigma })\equiv \int _𝒞dx^o\,e^{i\mathrm{\Sigma }x^o}f(x^o,\mathrm{\Sigma }).$$
(9)
In the above integral, $`\mathrm{\Sigma }`$ is a linear combination of the on-shell energies $`\omega _{𝒌_i}`$ corresponding to the various legs (internal as well as external) of the diagram, with coefficients $`0`$, $`+1`$ or $`-1`$, while the function $`f()`$ is a product of factors like $`\theta _c(\pm (x^o-x_i^o))+n__B(\omega _{𝒌_i})`$. This function is therefore piece-wise constant along the path $`𝒞`$. Moreover, the KMS boundary condition is such that the integrand takes equal values at both ends of the path:<sup>7</sup><sup>7</sup>7In situations where fermions with chemical potential are present in the theory, this result remains true because the fermions always come in pairs at vertices and because charges are conserved at each vertex.
$$e^{i\mathrm{\Sigma }t__I}f(t__I,\mathrm{\Sigma })=e^{i\mathrm{\Sigma }(t__I-i\beta )}f(t__I-i\beta ,\mathrm{\Sigma }).$$
(10)
I want now to show that the object $`I_𝒞(\mathrm{\Sigma })`$ receives a contribution from the vertical part $`𝒞_v`$ if and only if $`\mathrm{\Sigma }=0`$. Let us first assume that $`\mathrm{\Sigma }\ne 0`$. Then, an integration by parts gives immediately:
$$I_𝒞(\mathrm{\Sigma })=-\frac{1}{i\mathrm{\Sigma }}\int _𝒞dx^o\,e^{i\mathrm{\Sigma }x^o}\frac{\partial f(x^o,\mathrm{\Sigma })}{\partial x^o}.$$
(11)
Then, since the function $`f()`$ is piece-wise constant, its derivative is a discrete sum of Dirac’s distributions $`\delta _c()`$. Therefore, the generic structure of the above integral is a sum like
$$I_𝒞(\mathrm{\Sigma })=-\frac{1}{i\mathrm{\Sigma }}\sum _ic_ie^{i\mathrm{\Sigma }x_i^o},$$
(12)
where the $`c_i`$ are coefficients we don’t need to make more explicit ($`c_i`$ is the value at the point $`x^o=x_i^o`$ of the coefficient in front of $`\delta (x^o-x_i^o)`$ in $`\partial f/\partial x^o`$) and the $`x_i^o`$ are the times at which the value of $`f(x^o,\mathrm{\Sigma })`$ changes.
$$I_{𝒞_1\cup 𝒞_2}(\mathrm{\Sigma })=-\frac{1}{i\mathrm{\Sigma }}\sum _{\{i|x_i^o\in 𝒞_1\cup 𝒞_2\}}c_ie^{i\mathrm{\Sigma }x_i^o}.$$
(13)
But, by definition of the calculation based on only $`𝒞_1\cup 𝒞_2`$, all the other times $`x_i^o`$ are also on $`𝒞_1\cup 𝒞_2`$, so that the “restricted” sum contains in fact all the terms of the full sum, with the same coefficients $`c_i`$. Therefore:
$$\mathrm{if}\;\mathrm{\Sigma }\ne 0,\qquad I_𝒞(\mathrm{\Sigma })=I_{𝒞_1\cup 𝒞_2}(\mathrm{\Sigma }),$$
(14)
and there is no contribution specific to $`𝒞_v`$ in this case. In other words, all the contour integrals give the same result whether they appear in the calculation with the full path or in the calculation with only the horizontal branches, if $`\mathrm{\Sigma }\ne 0`$.
Let us now consider the case where $`\mathrm{\Sigma }=0`$. The integration by parts gives now
$$I_𝒞(\mathrm{\Sigma })=-i\beta \,f(t__I,0)-\int _𝒞dx^o\,x^o\,\frac{\partial f(x^o,0)}{\partial x^o}.$$
(15)
By the same arguments as before, we can show that there is no contribution specific to the vertical part in the second term. But now the factor $`i\beta `$ in the first term comes from the difference between the two extremities of the time path, $`t__I`$ and $`t__I-i\beta `$. Therefore, this term would vanish if we were dropping the vertical part. From that, we conclude that this first term is a contribution from the vertical part.<sup>8</sup><sup>8</sup>8We see that the contribution of the vertical part is not a continuous function of $`\mathrm{\Sigma }`$. Nevertheless, the total contribution is a continuous function of $`\mathrm{\Sigma }`$.
### 4.3 Localization of the contribution of $`𝒞_v`$
The condition $`\mathrm{\Sigma }=0`$ necessary to have a contribution of the vertical part is a constraint on the 3-momenta (both internal and external) of the diagram. But not all the situations where $`\mathrm{\Sigma }=0`$ lead to a contribution of the vertical part at the very end of the calculation. Indeed, since the function $`I_𝒞(\mathrm{\Sigma })`$ is continuous at $`\mathrm{\Sigma }=0`$, we won’t have a contribution of $`𝒞_v`$ at the end if the condition $`\mathrm{\Sigma }=0`$ defines a sub-manifold of zero measure in the space accessible to 3-momenta (taking into account the constraints provided by 3-momentum conservation). This is in fact the generic case.
There are only two distinct situations in which the condition $`\mathrm{\Sigma }=0`$ does not reduce the accessible space more than the 3-momentum conservation does. The first of these two cases corresponds to vacuum diagrams (diagrams without external legs) for which the last time integration always has $`\mathrm{\Sigma }=0`$ because of the invariance under time translation (because a function of a single time must be a constant if invariance under time translation holds). The fact that the last time integration plays a particular role in such a diagram is at the origin of the specific Feynman rules for vacuum diagrams: (i) the last time integration just gives an extra factor $`i\beta `$, and (ii) one of the vertices must be kept fixed to type $`1`$ or type $`2`$. A justification of these additional rules is given in and won’t be reproduced in the present paper.
The second situation in which a contribution of the vertical part is left at the end of the calculation is encountered for self-energy insertions between propagators. Indeed, in that case, the frequency $`\mathrm{\Sigma }`$ can be the difference $`\omega _{𝒌_1}-\omega _{𝒌_2}`$ of the incoming and outgoing on-shell energies while 3-momentum conservation imposes $`𝒌_1=𝒌_2`$, i.e. $`\mathrm{\Sigma }=0`$. This is the situation I will study in detail in the next section. It is worth noticing that compared to , the insertion of self-energies is shown to be the only situation in which the vertical part contributes.<sup>9</sup><sup>9</sup>9In , I identified the condition $`\mathrm{\Sigma }=0`$ as the necessary condition to have a contribution of the vertical part in $`I_𝒞(\mathrm{\Sigma })`$, but didn’t realize that this condition is relevant only if it defines a sub-manifold of strictly positive measure.
## 5 Effect of the vertical part on the RTF Feynman rules
### 5.1 Basic example
I now study in detail the case of self-energy insertions which is the only one in which the vertical part contributes, besides vacuum diagrams, in order to show that the contribution of $`𝒞_v`$ is automatically included by the matrix formalism provided that one uses $`|k^o|`$ for the argument of statistical weights. The general philosophy of the proof is to start from a Green’s function expressed in time coordinates, for which we have unambiguous Feynman rules. Then we have to Fourier transform it in order to obtain the corresponding matrix. Finally, we must deduce from the result the Feynman rules in Fourier space that would have given the same function. I will start with the almost trivial case of repeated mass insertions. But contrary to where this example was only used as an illustration for the more general case of self-energy insertions, this example is in the present paper at the very heart of the proof. Indeed, I show in the next paragraph that the most general case can be reduced to the trivial one by making use of simple algebraic properties of the contour integration.
The object we are interested in is the propagator obtained after the resummation of an additional mass term $`-i\mu ^2`$:
$$G(x_1^o,x_2^o)\equiv \sum _{n=0}^{+\infty }(-i\mu ^2)^n\,(G_o\star \cdots \star G_o)(x_1^o,x_2^o),$$
(16)
where the convolution product appearing at order $`n`$ in the sum contains $`n+1`$ factors. Of course, the result of this sum is well known without the need to perform the calculation:<sup>10</sup>For the term of order $`n`$ in the infinite sum, this result implies:
$$G_o\star \cdots \star G_o=\frac{1}{n!}\left[i\frac{\partial }{\partial m^2}\right]^nG_o,$$
(17) a relation known as the mass derivative formula .
$$G(x_1^o,x_2^o)=G_o(x_1^o,x_2^o)|_{m^2\to m^2+\mu ^2},$$
(18)
where the notation $`m^2\to m^2+\mu ^2`$ means that each occurrence of $`m^2`$ in $`G_o`$ is replaced by $`m^2+\mu ^2`$. Since the Fourier transform given by Eq. (4) does not involve the mass, the above result for the resummed propagator also holds for its Fourier transform. Therefore, we have to find out the Feynman rules that would give the matrix propagator in which the mass squared is shifted by an amount equal to $`\mu ^2`$. Let us now do the same resummation in the matrix formalism by making use of Eq. (5), in order to determine how it should be modified to reach the expected result, Eq. (18). To that effect, it is convenient to factorize the free matrix propagator as follows
$$G_o(k)=U(k)\left(\begin{array}{cc}\mathrm{\Delta }_F(k)& 0\\ 0& \mathrm{\Delta }_F^{*}(k)\end{array}\right)U(k),$$
(19)
where $`\mathrm{\Delta }_F\equiv i/(k^2-m^2)+\pi \delta (k^2-m^2)`$ is the usual Feynman propagator, and $`U(k)`$ is a matrix containing the statistical factors:
$$U(k)=\left(\begin{array}{cc}\sqrt{1+n_B}& (\theta (-k^o)+n_B)/\sqrt{1+n_B}\\ (\theta (k^o)+n_B)/\sqrt{1+n_B}& \sqrt{1+n_B}\end{array}\right).$$
(20)
At this stage, it seems that we still have the choice ($`|k^o|`$ or $`\omega _𝒌`$) for the arguments of the statistical weights. In the matrix formalism, the resummation is performed by
$`G(k)`$ $`={\displaystyle \sum _{n=0}^{+\infty }}(-i\mu ^2)^nG_o(k)[\tau _3G_o(k)]^n`$ (21)
$`=U(k)\left[{\displaystyle \sum _{n=0}^{+\infty }}(-i\mu ^2)^nD_o(k)[\tau _3D_o(k)]^n\right]U(k)`$
$`=U(k)\left[D_o(k)|_{m^2\to m^2+\mu ^2}\right]U(k),`$
where I denote $`D_o(k)\equiv \mathrm{Diag}(\mathrm{\Delta }_F(k),\mathrm{\Delta }_F^{*}(k))`$. In order to do the sum, I have used the algebraic relation $`U\tau _3U=\tau _3`$. It is now obvious that if we want to have the relation $`G(k)=G_o(k)|_{m^2\to m^2+\mu ^2}`$, we need $`U(k)=U(k)|_{m^2\to m^2+\mu ^2}`$, which means that the matrix $`U(k)`$ should be independent of $`m^2`$. The only way to achieve that is to use $`|k^o|`$ as the argument of $`n_B`$ in $`U(k)`$.
Therefore, we have justified in the case of this simple example the fact that the prescription $`n_B(|k^o|)`$ should be used in the RTF Feynman rules in order to get the correct result. Moreover, since we have seen in section 2 that the vertical part enables one to take into account the interaction (here the term $`-i\mu ^2`$) in the density operator, i.e. in the statistical factors, we can conclude that choosing the right argument for the statistical functions reintroduces the contribution of the vertical part in the result.
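The algebraic identity $`U\tau _3U=\tau _3`$ that drives the resummation is easy to verify symbolically. The short SymPy check below is only an illustration: it uses the matrix $`U(k)`$ of Eq. (20), with $`n_B(|k^o|)`$ treated as a single positive symbol, and tests both signs of $`k^o`$ through the step functions.

```python
import sympy as sp

# Illustrative symbolic check of U(k) tau_3 U(k) = tau_3 for the matrix of
# Eq. (20), with n_B = n_B(|k^o|) treated as one positive symbol.
n = sp.symbols('n_B', positive=True)
tau3 = sp.diag(1, -1)

def U(theta_plus, theta_minus):
    """U(k) with theta_plus = theta(k^o) and theta_minus = theta(-k^o)."""
    a = sp.sqrt(1 + n)
    return sp.Matrix([[a, (theta_minus + n) / a],
                      [(theta_plus + n) / a, a]])

for tp, tm in [(1, 0), (0, 1)]:          # k^o > 0 and k^o < 0
    print(sp.simplify(U(tp, tm) * tau3 * U(tp, tm) - tau3) == sp.zeros(2, 2))
```

Both cases print `True`. The identity would also hold if $`n_B`$ were evaluated at $`\omega _𝒌`$, but $`U(k)`$ would then depend on $`m^2`$ and the resummation would no longer reproduce Eq. (18), which is precisely the point made above.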
### 5.2 Repeated self-energy insertions
I now want to generalize the previous result concerning the $`n_B(|k^o|)`$ prescription to the general case of self-energy insertions, illustrated in figure 2.
This situation seems more complicated at first sight since we don’t know a priori the result. The calculation to be performed in time coordinates is
$$G(x_1^o,x_2^o)\equiv \sum _{n=0}^{+\infty }(G_o\star \mathrm{\Pi }\star \cdots \star \mathrm{\Pi }\star G_o)(x_1^o,x_2^o),$$
(22)
where the term of order $`n`$ on the right-hand side contains $`n`$ factors $`\mathrm{\Pi }`$ and $`n+1`$ factors $`G_o`$. This is where the properties of the contour convolution discussed in appendix B are quite helpful. Indeed, if we now use the commutativity of this convolution product (which holds here since all the convolved objects satisfy KMS), we can rewrite
$$G(x_1^o,x_2^o)=\sum _{n=0}^{+\infty }\left((G_o\star \cdots \star G_o)\star (\mathrm{\Pi }\star \cdots \star \mathrm{\Pi })\right)(x_1^o,x_2^o).$$
(23)
Now that the free propagators $`G_o`$ are grouped together, we have reduced the problem to the previous one. Indeed, we know that we don’t have any contribution of the vertical part in the convolution of two objects if at least one of them is one-particle irreducible, which is the case for $`\mathrm{\Pi }`$. Therefore, $`𝒞_v`$ does not contribute in the product $`\mathrm{\Pi }\star \cdots \star \mathrm{\Pi }`$, and we can obtain its Fourier transform with the Feynman rules (Eq. (5)) established under the hypothesis that $`𝒞_v`$ does not contribute. The only problem related to $`𝒞_v`$ comes from the product $`G_o\star \cdots \star G_o`$, which has already been considered in the previous subsection. Its Fourier transform is obtained by the Feynman rules with the $`n_B(|k^o|)`$ prescription. Therefore, the Fourier transform is given by
$$G(k)=\sum _{n=0}^{+\infty }G_o(k)[\tau _3G_o(k)]^n\,\tau _3\,\mathrm{\Pi }(k)[\tau _3\mathrm{\Pi }(k)]^{n-1},$$
(24)
in which one should use the $`n_B(|k^o|)`$ prescription for the $`n+1`$ $`G_o`$’s. At this stage, it is trivial to put the various factors back into a more natural order to get<sup>11</sup>This is possible because we can write $`G_o(k)=UD_oU`$ and $`\mathrm{\Pi }(k)=UPU`$ with $`D_o`$ and $`P`$ diagonal matrices, and because $`U`$ satisfies $`U\tau _3U=\tau _3`$. This merely says that the commutativity of the contour convolution carries over to the matrix formalism.
$$G(k)=\sum _{n=0}^{+\infty }G_o(k)[\tau _3\mathrm{\Pi }(k)\tau _3G_o(k)]^n.$$
(25)
This trick based on the commutativity of the contour convolution enabled us to reduce the general case to the simpler one treated in the previous subsection, and to see again that the argument of the statistical weights for the propagators along the chain<sup>12</sup>For the other propagators, the prescription for the statistical factors is immaterial. must be $`|k^o|`$.
## 6 Concluding remarks
In this paper, I have given a new, quite compact, justification of the matrix formalism for the RTF in Fourier space. The focus has been on a correct treatment of the vertical part of the time path. In particular, no use is made of the limits $`t_I\to -\infty `$ and $`t_F\to +\infty `$, since KMS implies a total independence of the Green’s functions with respect to these parameters. The justification is made in three steps: (i) identify the diagrams in which the vertical part contributes; (ii) show, in the case of mass insertions, that the contribution of $`𝒞_v`$ is included by the $`n(|k^o|)`$ prescription; and (iii) use simple properties of the contour convolution to reduce the general case to the previous one.
The present justification is complementary to that of Le Bellac and Mabilat, since it provides better control over which topologies receive a contribution from the vertical part, while in all the topologies appear on the same footing. Compared to that of , this proof is more complete, since the situations in which the vertical part contributes are clearly delimited, and the end of the proof is considerably simplified by making use of the commutativity of the contour convolution.
I would also like to emphasize again the physics encoded in the vertical part of the path. Indeed, since we know that the role of the vertical part in the perturbative expansion is to extract the dependence upon the coupling constant contained in the density operator, it was obvious right from the beginning that its effect on the Feynman rules could only be to modify the statistical factors.
To end this paper, it is worth making a comment on the Keldysh formalism used in out-of-equilibrium situations. This formalism is based on a time path which does not contain the vertical part $`𝒞_v`$. Indeed, there is no need for it here since the initial density operator is not related to the Hamiltonian and therefore does not contain the coupling constant. All the properties of the equilibrium Green’s functions that are related to the presence of the vertical part are lost: out-of-equilibrium Green’s functions depend explicitly on the initial time $`t_I`$, and are not invariant under time translation. For this reason, going to Fourier space is also much less straightforward.
## Appendix A Path independence of contour integrations
For the purpose of discussing the effect of the vertical part of the path in the real time formalism, we need first to recall some basic properties of the contour integration.
The most noticeable property of this integration is that it gives a result which is independent of the initial time $`t_I`$ used to define the path . This property is in fact a quite direct consequence of the KMS relations satisfied by the propagators appearing in the perturbative expansion, and was to be expected given the physical meaning of thermal equilibrium.
Indeed, we can write any Green’s function $`G(x_1^o,\dots ,x_n^o)`$ calculated perturbatively as:
$$G(x_1^o,\dots ,x_n^o)\equiv \left[\prod _{i=1}^{V}\int _𝒞dy_i^o\right]g(x_1^o,\dots ,x_n^o|y_1^o,\dots ,y_V^o),$$
(26)
where $`V`$ is the total number of vertices in the diagram, and the $`y_i^o`$ are the internal times. Now, if we consider a function
$$a(y^o)\equiv \theta _c(y^o-y_+^o)\,a^+(y^o)+\theta _c(y_{-}^o-y^o)\,a^{-}(y^o)$$
(27)
on the path $`𝒞`$, with holomorphic functions $`a^\pm (y^o)`$, and then calculate the integral
$$A\equiv \int _𝒞dy^o\,a(y^o),$$
(28)
we have the following two properties:
(i) $`A`$ depends only on the extremities of the path, and on the other times $`y_\pm ^o`$, but not on its precise shape.
(ii) we have $`dA/dt_I=a(t_I-i\beta )-a(t_I)`$.
Looking now at the structure of the bare propagators (see Eq. (6)), we see that the integrand $`g`$ satisfies the conditions of the previous lemma, with the additional property of taking the same value at both extremities of the path for each inner variable $`y_i^o`$. Applying therefore (ii), we conclude that the function $`G(x_1^o,\dots ,x_n^o)`$ is independent of $`t_I`$. Using then the possibility of deforming the path, property (i), we can change $`t_F`$ without changing the result of the integrals. The function $`G(x_1^o,\dots ,x_n^o)`$ is therefore also independent of $`t_F`$. Finally, the only dependence of $`G(x_1^o,\dots ,x_n^o)`$ upon the path comes through the external times $`x_i^o`$, which are supposed to be on the path.
## Appendix B Properties of the contour convolution
In order to deal simply with self-energy insertions, it is convenient to discuss first a few properties of the contour convolution defined by Eq. (3).
The first obvious property is that the result is independent of both $`t_I`$ and $`t_F`$ provided that the two functions one is convoluting satisfy the KMS relations and correspond to two particles of the same nature.<sup>13</sup>This limitation is not important in practice since convoluting a bosonic function with a fermionic one would be totally meaningless.
In order to simplify the study of this operation for functions satisfying KMS, the first step is to write two-point functions by means of their spectral representation:<sup>14</sup>It is possible to group the two terms in this sum in order to obtain a single term containing the full free propagator. This splitting is natural here since $`G_{o,s}^E`$ is the smallest part of the free propagator that still satisfies KMS. Any property that is a consequence of KMS can be obtained by limiting the study to this very simple piece.
$$F(x_1^o,x_2^o)=\sum _{s=\pm }\int _0^{+\infty }dE\,f_s(E)\,G_{o,s}^E(x_1^o,x_2^o),$$
(29)
where the $`G_{o,s}^E`$ are the building blocks of the free propagator given by Eq. (7). If one uses this spectral representation, it is sufficient to limit the study of the operation $`\star `$ to its action on simple objects like $`G_{o,s}^E`$.
An elementary integration based on Eq. (11) gives immediately:
$$G_{o,\epsilon }^A\star G_{o,\eta }^B=\frac{1}{i(\epsilon A-\eta B)}\left[\epsilon \,G_{o,\eta }^B-\eta \,G_{o,\epsilon }^A\right].$$
(30)
We notice that the result is unchanged if we permute the two objects we are convoluting. This property is transported to general two-point functions through their spectral representation: the contour convolution is commutative.
Iterating the above relation, we obtain:
$`\left(G_{o,\epsilon }^A\star G_{o,\eta }^B\right)\star G_{o,\mu }^C`$ $`={\displaystyle \frac{\eta }{i(\epsilon A-\eta B)}}{\displaystyle \frac{\mu }{i(\epsilon A-\mu C)}}G_{o,\epsilon }^A`$ (31)
$`+{\displaystyle \frac{\epsilon }{i(\eta B-\epsilon A)}}{\displaystyle \frac{\mu }{i(\eta B-\mu C)}}G_{o,\eta }^B`$
$`+{\displaystyle \frac{\epsilon }{i(\mu C-\epsilon A)}}{\displaystyle \frac{\eta }{i(\mu C-\eta B)}}G_{o,\mu }^C.`$
The remarkable property of this result is its symmetry under any permutation of the three objects one is convoluting. Again, this is trivially extended to any triplet of two-point functions: the contour convolution is associative.
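Since Eqs. (30) and (31) are purely algebraic, both statements can be checked mechanically. The following SymPy sketch (an illustration, not part of the derivation) encodes Eq. (30) as the convolution rule on the elementary propagators, labelled here by a sign and an energy, extends it by linearity, and confirms that the double convolution is unchanged under swapping the first two factors and under reassociation.

```python
import sympy as sp

# Elementary propagators are represented by (sign, energy) labels; Eq. (30)
# defines the contour convolution on these basis elements.
eps, eta, mu, A, B, C = sp.symbols('epsilon eta mu A B C', positive=True)

def conv_basic(x, y):
    """Eq. (30): G_x * G_y = [sx*G_y - sy*G_x] / (i (sx*Ex - sy*Ey))."""
    (sx, Ex), (sy, Ey) = x, y
    pref = 1 / (sp.I * (sx * Ex - sy * Ey))
    return {y: pref * sx, x: -pref * sy}

def conv(u, v):
    """Bilinear extension to linear combinations stored as {basis: coeff}."""
    out = {}
    for x, cx in u.items():
        for y, cy in v.items():
            for k, c in conv_basic(x, y).items():
                out[k] = sp.simplify(out.get(k, 0) + cx * cy * c)
    return out

gA = {(eps, A): sp.Integer(1)}
gB = {(eta, B): sp.Integer(1)}
gC = {(mu, C): sp.Integer(1)}

lhs = conv(conv(gA, gB), gC)      # (G_A * G_B) * G_C, i.e. Eq. (31)
swap = conv(conv(gB, gA), gC)     # commutativity of the first two factors
rhs = conv(gA, conv(gB, gC))      # associativity
print(all(sp.simplify(lhs[k] - swap[k]) == 0 for k in lhs))
print(all(sp.simplify(lhs[k] - rhs[k]) == 0 for k in lhs))
```

Both prints give `True`, in agreement with the permutation symmetry of Eq. (31) noted above.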
To conclude this appendix, one can say that as far as functions satisfying KMS are concerned, the contour convolution possesses the same basic properties as the ordinary convolution product.
# A Discrete Model for Nonequilibrium Growth Under Surface Diffusion Bias
## Abstract
A limited mobility nonequilibrium solid-on-solid dynamical model for kinetic surface growth is introduced as a simple description for the morphological evolution of a growing interface under random vapor deposition and surface diffusion bias conditions. Simulations using a local coordination dependent instantaneous relaxation of the deposited atoms produce complex surface mound morphologies whose dynamical evolution is inconsistent with all the proposed continuum surface growth equations. For any finite bias, mound coarsening is found to be only an initial transient which vanishes asymptotically, with the asymptotic growth exponent being $`0.5`$ in both 1+1 and 2+1 dimensions. Possible experimental implications of the proposed limited mobility nonequilibrium model for real interface growth under a surface diffusion bias are critically discussed.
An atom moving on a free surface is known to encounter an additional potential barrier, often called a surface diffusion bias , as it approaches a step from the upper terrace — there is no such extra barrier for an atom approaching the step from the lower terrace (the surface step separates the upper and the lower terrace). Since this diffusion bias makes it preferentially more likely for an atom to attach itself to the upper terrace than the lower one, it leads to mound (or pyramid) - type structures on the surface under growth conditions as deposited atoms are probabilistically less able to come down from upper to lower terraces. This dynamical growth behavior is sometimes called an “instability” because a flat (“singular”) two dimensional surface growing under a surface diffusion bias is unstable toward three dimensional mound/pyramid formation. There has been a great deal of recent interest in the morphological evolution of growing interfaces under nonequilibrium growth conditions in the presence of such a surface diffusion bias. In this paper we propose a minimal nonequilibrium cellular automata - type atomistic growth model for ideal molecular beam epitaxial - type random vapor deposition growth under a surface diffusion bias. Extensive stochastic simulation results presented in this paper establish the morphological evolution of a surface growing under diffusion bias conditions to be surprisingly complex even for this extremely simple minimal model. Various critical growth exponents, which asymptotically describe the large-scale dynamical evolution of the growing surface in our minimal discrete growth model, are inconsistent with all the proposed continuum theories for nonequilibrium surface growth under diffusion bias conditions. Our results based on our extensive study of this minimal model lead to the conclusion that a continuum description for nonequilibrium growth under a surface diffusion bias does not exist (even for this extremely simple minimal model) and may require a theoretical formulation which is substantially different from the ones currently existing in the literature. Our results in the initial non-asymptotic transient growth regime (lasting upto several hundred or a few thousand layers of growth) do, however, agree with existing theoretical and (many, but not all) experimental findings in the literature.
In Fig. 1(a) we schematically show our solid-on-solid (SOS) nonequilibrium growth model : (1) Atoms are deposited randomly (with an average rate of 1 layer/unit time, which defines the unit of time in the growth problem — the length unit is the lattice spacing taken to be the same along the substrate plane and the growth direction) and sequentially on the surface starting with a flat substrate; (2) a deposited atom is incorporated instantaneously if it has at least one lateral nearest-neighbor atom; (3) singly coordinated deposited atoms (i.e. the ones without any lateral neighbors) could instantaneously relax to a neighboring site within a diffusion length of $`l`$ provided the neighboring site of incorporation has a higher coordination than the original deposition site; (4) the instantaneous relaxation process is constrained by two probabilities $`P_L`$ and $`P_U`$ ($`0P_L,P_U1`$) where $`P_{L(U)}`$ is the probability for the atom to attach itself to the lower(upper) terrace after relaxation (note that a “terrace” here could be just one other atom). The surface diffusion bias is implemented in our model by taking $`P_U>P_L`$, making it more likely for atoms to attach to the upper terrace. Under the surface diffusion bias, therefore, an atom deposited at the top of a step edge feels a barrier (whose strength is controlled by $`P_U/P_L`$) in coming down compared with an atom at the lower terrace attaching itself to the step. Our model is well-defined for any value of the diffusion length $`l`$ including the most commonly studied situation of nearest-neighbor relaxation ($`l=1`$). (We should emphasize, however, that the definition of a surface diffusion bias is not unique even within our extremely simple limited mobility nonequilibrium growth model — what we study in this Letter is the so-called edge diffusion bias .)
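To make rules (1)-(4) concrete, a minimal 1+1 dimensional Python sketch of the model is given below. It is an illustration rather than the code used for this work: the tie-breaking between two acceptable relaxation sites, and the identification of a move to an equal-height column with attachment to the upper terrace (probability $`P_U`$) versus a move to a lower column with coming down to the lower terrace (probability $`P_L`$), are our own reading of the rules.

```python
import numpy as np

def grow(L=1000, layers=200, P_L=0.9, P_U=1.0, seed=0):
    """Sketch of the 1+1 d limited-mobility SOS growth model with an edge
    diffusion bias (diffusion length l = 1, periodic boundaries).
    Returns the rms surface width W after each deposited monolayer."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=np.int64)

    def coordination(j):
        # lateral nearest neighbours an atom would have if placed on column j
        return int(h[(j - 1) % L] >= h[j] + 1) + int(h[(j + 1) % L] >= h[j] + 1)

    W = []
    for _ in range(layers):
        for _ in range(L):                        # one monolayer = L depositions
            i = rng.integers(L)
            if coordination(i) > 0:               # rule (2): stick immediately
                h[i] += 1
                continue
            # rule (3): look for a neighbouring column offering higher coordination
            targets = [j for j in ((i - 1) % L, (i + 1) % L) if coordination(j) > 0]
            if targets:
                j = targets[rng.integers(len(targets))]
                # rule (4): biased acceptance -- stepping down to a lower column
                # uses P_L, attaching to a step on the same level uses P_U
                p = P_L if h[j] < h[i] else P_U
                if rng.random() < p:
                    h[j] += 1
                    continue
            h[i] += 1                             # no accepted relaxation: stay at i
        W.append(h.std())
    return np.array(W)
```

Setting `P_L = P_U = 1` in this sketch recovers the unbiased DT-type relaxation rule, while `P_U > P_L` implements the edge diffusion bias discussed above.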
We have carried out extensive simulations both in $`1+1`$ and $`2+1`$ dimensions (d), varying $`P_L`$, $`P_U`$ as well as $`l`$, also including in our simulations the inverse situation (the so-called ‘negative’ bias condition) with $`P_L>P_U`$, so that deposited atoms preferentially come down, attaching themselves to lower steps and producing in the process a smooth growth morphology. Because of lack of space we do not present here our ‘negative’ bias ($`1\ge P_L>P_U`$) results (to be published elsewhere) except to note that the smooth dynamical growth morphology under our negative bias model obeys exactly the expected linear Edwards-Wilkinson universality . Our growth model is the most obvious finite bias generalization of the well-studied cellular automaton model referred to as the DT model or the 1+ model in the literature .
Before presenting our numerical results we point out two important features of our growth model: (1) For $`P_L=P_U=1`$ our model reduces to the one introduced in ref. 22, often called the DT model (and studied extensively in the literature), as a minimal model for molecular beam epitaxy in the absence of any diffusion bias; (2) we find, in complete agreement with earlier findings in the absence of diffusion bias, that the diffusion length $`l`$ is an irrelevant variable (even in the presence of bias) which does not affect any of our calculated critical exponents (but does affect finite size corrections — increasing $`l`$ requires a concomitant increase in the system size to reduce finite size effects). To demonstrate this, we compare two systems (see Fig. 2(c)), one with diffusion length $`l=1`$ and the other with $`l=5`$. Both systems are on sufficiently large substrates of size $`L=10^4`$ to prevent finite size effects, and both have the same bias strength ($`P_U=1.0`$ and $`P_L=0.9`$). Except for the expected layer-by-layer growth seen in the first few layers in the $`l=5`$ system, we see the same critical behavior from the two systems, with the coarsening process slightly faster in the $`l=5`$ system in the early time regime. In the rest of this paper (except for Fig. 2(c), where we present some representative $`l=5`$ results), we present our $`l=1`$ simulation results, emphasizing that our critical exponents are independent of $`l`$ provided finite size effects are appropriately accounted for. Our calculated exponents are also independent of the precise values of $`P_U`$ and $`P_L`$ ($`<P_U`$), as found in the unbiased $`P_U=P_L`$ case.
In Fig.1 we show our representative d=1+1 (a and b) and 2+1 (c and d) simulated dynamical growth morphology evolution. The diffusion bias produces mounded structures which are visually statistically scale invariant only on length scales much larger (or smaller) than the typical mound size. Note that the mounding in the growth morphology starts very early during growth and is already prominent in the first 100 monolayers (ML) as is obvious in Fig. 1(b). In producing our final results we utilize a noise reduction technique which accepts only a fraction of the attempted kinetic events, and in the process produces smoother results (reducing noise effects) without affecting the critical exponents. We have explicitly checked that the noise reduction technique does not change our calculated critical exponents and has only the cosmetic effect of suppressing noise in our simulated growth morphology.
To proceed quantitatively we now introduce the dynamic scaling ansatz which seems to describe well all our simulated results.
We have studied the root mean square surface width or surface roughness ($`W`$), the average mound size ($`R`$), the average mound height ($`H`$), and the average mound slope ($`M`$) as functions of growth time. We have also studied the various moments of the dynamical height-height correlation function, and these correlation function results (to be reported elsewhere) are consistent with the ones obtained from our study of $`W(t)`$, $`R(t)`$, $`H(t)`$, and $`M(t)`$. The dynamical scaling ansatz in the context of the evolving mound morphologies can be written as power laws in growth time (which is equivalent to power laws in the average film thickness): $`W(t)\sim t^\beta `$; $`R(t)\sim t^n`$; $`H(t)\sim t^\kappa `$; $`M(t)\sim t^\lambda `$; $`\xi (t)\sim t^{1/z}`$, where $`\xi (t)`$ is the lateral correlation length (with $`z`$ as the dynamical exponent) and $`\beta `$, $`n`$, $`\kappa `$, $`\lambda `$, $`z`$ are various growth exponents which are not necessarily independent. We find in all our simulations $`n\simeq z^{-1}`$, and thus the coarsening exponent $`n`$, which describes how the individual mound sizes increase in time, is the same as the inverse dynamical exponent in our model. We also find $`\beta =\kappa `$ in all our results, which is understandable in a mound-dominated morphology. In addition, all our results satisfy the expected exponent identity $`\beta =\kappa =n+\lambda `$ because the mound slope $`M\sim H/R`$. The evolving growth morphology is thus completely defined by two independent critical exponents $`\beta `$ (the growth exponent) and $`n`$ (the coarsening exponent), which is similar to the standard (i.e. without any diffusion bias) dynamic scaling situation where $`\beta `$ and $`z`$ ($`=n^{-1}`$ in the presence of diffusion bias) completely define the scaling properties.
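As an illustration of how such effective exponents are extracted in practice, the lines below reuse the illustrative `grow()` sketch given after the model definition above and estimate an effective $`\beta `$ from a log-log fit of $`W(t)`$; the fitting window is arbitrary and finite-size/finite-time corrections are ignored, so this only shows the procedure, not a production analysis.

```python
import numpy as np

W = grow(L=2000, layers=1000, P_L=0.9, P_U=1.0)   # grow() from the sketch above
t = np.arange(1, len(W) + 1)                      # time in deposited monolayers
late = slice(len(t) // 2, None)                   # fit only the late-time half
beta_eff = np.polyfit(np.log(t[late]), np.log(W[late]), 1)[0]
print(f"effective growth exponent beta ~ {beta_eff:.2f}")
```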
In Figs. 2 and 3 we show our representative scaling results in d=1+1 (Fig. 2) and 2+1 (Fig. 3) for nonequilibrium growth under surface diffusion bias conditions. It is clear that we consistently find the growth exponent $`\beta \simeq 0.5`$ in both d=1+1 and 2+1 in the long time asymptotic limit, independent of $`P_L`$ and $`P_U`$ as long as $`P_L/P_U<1`$. This $`\beta \simeq 0.5`$ is, however, different from the usual Poisson growth under pure random deposition with no relaxation, where there are no lateral correlations. Our calculated asymptotic coarsening exponent $`n`$ in both d=1+1 and 2+1 is essentially zero ($`<0.1`$) at long times. In all our results we find the effective coarsening (growth) exponent showing a crossover from $`n`$ ($`\beta `$) $`\simeq 0.2`$ ($`\simeq 0.25`$ or $`\simeq 0.33`$ depending on d=2+1 or 1+1) at early times ($`1<t\lesssim 10^3`$) to a rather small (large) value ($`n<0.1`$, $`\beta \simeq 0.5`$ as $`t\to \infty `$) at long times — we believe the asymptotic $`n`$ ($`\beta `$) to be zero (half) in our model. Our calculated steepening exponent $`\lambda `$ satisfies the exponent identity $`\lambda =\beta -n`$ rather well, indicating that steepening ($`M\sim t^\lambda `$) and coarsening ($`R\sim t^n`$) are competing processes. We find that during the initial transient regime (for $`1<t\lesssim 10^3`$ depending on d and $`P_L/P_U`$), when considerable mound coarsening takes place ($`n\simeq 0.2`$), $`\lambda `$ is rather small ($`\lambda <0.1`$) and does not change much. We note that the mound formation dominates our growth morphology even during the initial transient — mounding starts early, coarsens rapidly, and then coarsening slows down or almost stops. After the initial transient, however, $`\lambda `$ is finite and large ($`\lambda \simeq 0.4`$), indicating significant steepening of the mounds. Based on our simulation results we are compelled to conclude that the evolutionary behavior of our mound dynamics makes a crossover (at $`t=t^{*}\simeq 10^3`$) from a coarsening-dominated preasymptotic regime $`I`$ ($`n_I\simeq 0.2`$, $`\lambda _I<0.1`$) to a steepening-dominated regime $`II`$ ($`n_{II}<0.1`$, $`\lambda _{II}\simeq 0.4`$), which we believe to be the asymptotic regime. The initial transient (regime $`I`$: $`t<t^{*}`$) can be construed as a “slope selection” regime where the steepening exponent $`\lambda `$ is very small (and the coarsening exponent, $`n\simeq 0.2`$, is approximately a constant), but the asymptotic long time behavior (regime $`II`$: $`t>t^{*}`$) is clearly dominated by slope steepening (large $`\lambda `$) with coarsening essentially dying down ($`n<0.1`$). The crossover time $`t^{*}`$ between regime $`I`$ (coarsening) and $`II`$ (steepening) depends on the details of the model (e.g. d, L, $`P_L`$, $`P_U`$, $`l`$), and could be quite large ($`t^{*}`$ between $`\sim 10^2`$ and $`\sim 10^4`$).
We emphasize that our initial transient ($`t<t^{*}`$) exponents in regime $`I`$ ($`\beta _I\simeq 0.26`$; $`n_I\simeq 0.17`$; $`\lambda _I\simeq 0.09`$ in d=2+1, Fig. 3) agree quantitatively with several non-minimal detailed (temperature-dependent Arrhenius diffusion) growth simulations as well as with direct numerical simulations of a proposed (empirical) continuum growth equation, which have all been claimed to be in good agreement with the observed coarsening behavior in various epitaxial growth experiments. Other simulations and theories find no slope selection, which also agrees with some experiments. We believe that the observed slope selection ($`\lambda \simeq 0`$) in experiments and simulations is a (long-lasting) transient (our regime $`I`$, $`t<t^{*}`$) behavior, which should disappear in the asymptotic regime, at least in models without any artificial transient downward mobility. Theoretically, a distinction has been made between ‘weak’ and ‘strong’ diffusion bias cases, corresponding respectively to our regime $`I`$ (‘mounding’) and regime $`II`$ (‘steepening’). Our results show that this theoretical distinction is not meaningful because the ‘weak’ bias case crosses over to the ‘strong’ bias case for $`t>t^{*}`$, and the long time asymptotic regime is invariably the ‘strong’ bias steepening regime for any finite diffusion bias. (There could be accidental slope selections at rather large slopes when crystallographic orientations are taken into account , a process neglected in our minimal growth model.)
In comparing with the existing continuum growth equation results we find that none can quantitatively explain all our findings. Golubovic predicts $`\beta =0.5`$, which is consistent with our asymptotic result ($`\beta _{II}\simeq 0.5`$), but his finding of $`n=\lambda =1/4`$ in both d=1+1 and 2+1 is inconsistent with our asymptotic results (asymptotically $`n<0.1`$ and $`\lambda \simeq 0.4-0.5`$) while being approximately consistent with our initial transient results (for $`t<t^{*}`$). The analytic results of Rost and Krug also cannot explain our results, because they predict, in agreement with Golubovic, that if $`\beta =1/2`$, then $`n=\lambda =1/4`$. We also find our asymptotic $`\beta `$ to be essentially $`0.5`$ independent of the actual value of $`P_U/P_L`$, which disagrees with ref. 19. Interestingly, our initial transient regime is, in fact, approximately consistent with the theory of ref. 19. We have approximate slope selection ($`\lambda <0.1`$) only in the initial transient regime, which could, however, be of considerable experimental relevance because the pre-asymptotic regime is a long-lasting transient. The only prior work in the literature that has some similarity to our results is that of Villain and collaborators , who in a one dimensional deterministic (i.e. without the deposition beam shot noise) macroscopic continuum description of growth roughness and instabilities under surface diffusion bias, called the Zeno model by the authors, found, in agreement with our atomistic two dimensional stochastic cellular automaton simulation results, a scenario in which coarsening becomes “extremely slow after the mounds have reached a (characteristic) radius” , which is reminiscent of our crossover from weak bias ($`t<t^{*}`$) to strong bias ($`t>t^{*}`$). While the precise relationship between our two (and one) dimensional atomistic/stochastic model and their one dimensional macroscopic/deterministic model is unclear at the present time, it is interesting to note that the authors of refs. came to a similar negative conclusion as we do about the non-existence of any continuum growth equation describing the strong bias asymptotic regime where the microscopic lattice size may play a crucial role . Finally, we note that a very recent experimental work reports the growth morphology of the Ge(001) surface, which agrees qualitatively with the scenario predicted in this paper, namely, that even for a very weak diffusion bias, the mound slope continues to increase without any observable coarsening.
Although our diffusion bias model is an extremely simple limited mobility model which may be viewed as unrealistic, our d=1+1 results are remarkably similar to those obtained from a study of a full temperature-dependent Arrhenius hopping model with a step edge barrier . This study offered essentially the same picture as what we present here, i.e. a smaller value of the growth exponent that crosses over to $`\beta \simeq 0.5`$ at larger times, and a very large dynamical exponent corresponding to very little coarsening ($`n\simeq 0`$) after approximately 100 monolayers of deposition. To be specific, the dynamical exponent determined by the growth of the correlation length is $`z\simeq 16.6`$ in the temperature-dependent edge bias model while our study yields $`z\simeq 10`$. Although the dynamical exponents from the two studies are not exactly the same, they are both exceptionally large, indicating that the mound coarsening process is negligible in the large time regime. This qualitative agreement between our simple limited mobility results and the d=1+1 full diffusion results argues strongly in favor of our minimal growth model being of reasonably general qualitative validity in experimental situations.
Finally, we note that the introduction of limited mobility models has opened a whole new way of studying kinetic surface roughening in molecular beam epitaxial growth. These limited mobility nonequilibrium models make it possible to study very large systems in the very large time limit, which is impossible to do in realistic temperature-dependent fully activated diffusion models. Particularly, we point out the success of the DT model in providing an excellent zeroth order description of molecular beam epitaxial growth in the absence of any surface diffusion bias. The d=2+1 critical exponents in the unbiased (DT) model , which belongs to the same universality class as the conserved fourth-order nonlinear continuum MBE growth equation , are $`\beta =0.25-0.2`$ and $`\alpha \simeq 0.6-0.7`$, which are in quantitative agreement with a number of experimental measurements where surface diffusion bias is thought to be dynamically unimportant. With this in mind, it is conceivable that our study of the limited mobility model which includes surface diffusion bias (i.e. a generalized version of the DT model) presented in this paper may benefit the subject in a way similar to what the DT model did for the unbiased molecular beam epitaxy growth study. This is particularly significant since there are still many open questions regarding interface growth under surface diffusion bias conditions. Since the model we study here is a generalized DT model and an approximate continuum description for the original unbiased model has recently been developed , one could use that as the starting point to construct a continuum growth model for the biased growth situation. Such a continuum description is, however, extremely complex as it requires the existence of an infinite number of nonlinear terms in the growth equation, and it therefore remains unclear whether a meaningful continuum description for our discrete simulation results is indeed possible .
In conclusion, we want to emphasize the fact that the limited mobility model studied in this paper is an extremely simple model (“the minimal model” in the sense that it is perhaps the simplest nonequilibrium model which captures the minimal features of growth under surface diffusion bias), and realistic growth under experimental conditions should be substantially more complex than this minimal model (we speculate that this minimal model is in the same growth universality class as realistic growth under a surface diffusion bias for reasons discussed above, but we certainly cannot prove that at this incomplete stage of the development of the subject. A word of caution is in order in comparing experimental results with our calculated critical exponents because of the extreme simplicity and the limited mobility nature of our nonequilibrium growth model.) If we do not have a reasonable theoretical understanding (from a continuum equation approach) of even such a simple minimal model, as we have found in this paper, then current efforts at understanding realistic growth under surface diffusion bias must be quite futile. The main weakness of limited mobility models (of the type presented here) is that they are manifestly nonequilibrium models — this, however, should not be a particularly serious problem in the context of the mound/pyramid formation in the growth morphology, which by definition is a nonequilibrium effect and must disappear in a properly equilibrated surface. Our conclusion based on the results presented in this Letter is that a continuum growth equation for nonequilibrium growth under a surface diffusion bias does not exist at the present time.
This work is supported by the US-ONR and the NSF-DMR-MRSEC.
# On the minimum period of uniformly rotating neutron stars
## 1 Introduction
The lower limit on the period of a uniformly rotating neutron star is sensitive to the equation of state (EOS) of dense matter above the nuclear density. Therefore, an uncertainty in the high density EOS implies a large uncertainty in the minimum period of uniform rotation, $`P_{\mathrm{min}}`$ (see, e.g. Friedman & Ipser 1987; Friedman, Parker & Ipser 1989; Salgado et al. 1994a,b; Cook et al. 1994). It is therefore of interest to find a lower limit on $`P_{\mathrm{min}}`$ that is independent of the EOS. This limit results from the condition of causality, combined with the requirement that the EOS yields neutron stars with masses compatible with observed ones \[currently the highest accurately measured neutron star mass is $`M_{\mathrm{obs}}^{\mathrm{max}}=1.442\mathrm{M}_{\odot }`$ (Taylor & Weisberg 1989)\]. It will be hereafter referred to as $`P_{\mathrm{min}}^{\mathrm{CL}}`$.
The first calculation of $`P_{\mathrm{min}}^{\mathrm{CL}}`$ was done by Glendenning (1992), who found the value of $`0.33`$ ms. Glendenning (1992), however, used a rather imprecise empirical formula to calculate the lowest $`P_{\mathrm{min}}`$, by using the parameters (mass and radius) of the maximum mass configurations of a family of non-rotating neutron star models. His result, therefore, should be considered only as an estimate of $`P_{\mathrm{min}}^{\mathrm{CL}}`$. Recently, Koranda et al. (1997) extracted the value of $`P_{\mathrm{min}}^{\mathrm{CL}}`$ from extensive exact calculations of uniformly rotating neutron star models. They have shown that the method of Glendenning (1992) overestimated the value of $`P_{\mathrm{min}}^{\mathrm{CL}}`$ by 6%. The result of the Koranda et al. (1997) calculations can be summarized in the formula
$$P_{\mathrm{min}}^{\mathrm{CL}}=0.196\frac{M_{\mathrm{obs}}^{\mathrm{max}}}{\mathrm{M}_{\odot }}\mathrm{ms},$$
(1)
which, combined with the measured mass of PSR B1913+16, yields today’s lower bound $`P_{\mathrm{min}}^{\mathrm{CL}}=0.282`$ ms. This absolute bound on the minimum period was obtained for the “causality limit (CL) EOS” $`p=(\rho -\rho _0)c^2`$, which yields neutron star models of surface density $`\rho _0`$ and is maximally stiff ($`\mathrm{d}p/\mathrm{d}\rho =c^2`$) everywhere within the star; it does not depend on the value of $`\rho _0`$. In the present letter we show that Eq. (1) can be reproduced using an empirical formula for $`P_{\mathrm{min}}`$, derived for realistic causal EOS by Lasota et al. (1996), combined with an upper bound on the relativistic (compactness) parameter $`2GM/Rc^2`$ for static neutron stars with causal EOS.
## 2 Relation between $`x_\mathrm{s}`$ and $`P_{\mathrm{min}}`$
As shown by Lasota et al. (1996), numerical results of Salgado et al. (1994a,b) for the maximum frequency of uniform stable rotation can be reproduced (within better than 2%), for a broad set of realistic causal EOS of dense matter, by an empirical formula
$$\left(\mathrm{\Omega }_{\mathrm{max}}\right)_{\mathrm{e}.\mathrm{f}.}=𝒞(x_\mathrm{s})\left(\frac{GM_\mathrm{s}}{R_\mathrm{s}^3}\right)^{\frac{1}{2}},$$
(2)
where $`M_\mathrm{s}`$ is the maximum mass of a spherical (nonrotating) neutron star and $`R_\mathrm{s}`$ is the corresponding radius, and $`𝒞(x_\mathrm{s})`$ is a universal (i.e. independent of the EOS) function of the compactness parameter $`x_\mathrm{s}\equiv 2GM_\mathrm{s}/R_\mathrm{s}c^2`$ for the static maximum mass configuration,
$$𝒞(x_\mathrm{s})=0.468+0.378x_\mathrm{s}.$$
(3)
Combining Eq. (2) and Eq. (3) we get
$$\left(P_{\mathrm{min}}\right)_{\mathrm{e}.\mathrm{f}.}=\frac{8.754\times 10^{-2}}{𝒞(x_\mathrm{s})x_\mathrm{s}^{\frac{3}{2}}}\frac{M_\mathrm{s}}{\mathrm{M}_{\odot }}\mathrm{ms}.$$
(4)
At a given maximum mass of a spherical configuration, the maximum rotation frequency (minimum rotation period) is obtained for the maximum value of $`x_\mathrm{s}`$. At fixed $`x_\mathrm{s}`$, the value of $`P_{\mathrm{min}}`$ is proportional to $`M_\mathrm{s}`$. Neutron stars for which masses have been measured rotate so slowly that their structure can be very well approximated by that of a spherical star. Observations thus impose the condition $`M_\mathrm{s}\ge M_{\mathrm{obs}}^{\mathrm{max}}`$.
## 3 Lower bound on $`P_{\mathrm{min}}`$
Our empirical relation, Eq. (4), indicates that, to minimize $`P_{\mathrm{min}}`$ for a given $`M_{\mathrm{obs}}^{\mathrm{max}}`$, we have to look for an EOS which yields maximum $`x_\mathrm{s}`$ at $`M_\mathrm{s}=M_{\mathrm{obs}}^{\mathrm{max}}`$. It is well known that, if one relaxes the condition of causality, the absolute upper bound on $`x_\mathrm{s}`$ for stable neutron star models is reached for an incompressible fluid (i.e., $`\rho =const.`$) EOS; the value of $`x_\mathrm{s}`$ is then independent of $`M_\mathrm{s}`$ and equal to $`8/9`$ (see, e.g., Shapiro & Teukolsky 1983). It is therefore rather natural to expect that, in order to maximize $`x_\mathrm{s}`$ under the condition of causality, one has to maximize the sound velocity throughout the star. Together with the condition of density continuity in the stellar interior, this points to the CL EOS, $`p=(\rho -\rho _0)c^2`$, as the one which yields “maximally compact neutron stars”; introducing density discontinuities does not increase the value of $`x_\mathrm{s}`$, see Gondek & Zdunik (1995). \[The conjecture that the CL EOS minimizes $`P_{\mathrm{min}}`$ was already proposed and then confirmed numerically in extensive exact calculations by Koranda et al. (1997)\]. Note that the value of $`x_\mathrm{s}`$ for the CL EOS does not depend on $`\rho _0`$ (and therefore is $`M_\mathrm{s}`$-independent). It represents an absolute upper bound on $`x_s`$ for causal EOS, $`x_{\mathrm{s},\mathrm{max}}`$. Our numerical calculation gives $`x_\mathrm{s}(\mathrm{CLEOS})=x_{\mathrm{s},\mathrm{max}}=0.7081`$. This corresponds to an absolute upper bound on the surface redshift of neutron star models with causal EOS, $`z_{\mathrm{max}}=(1-x_{\mathrm{s},\mathrm{max}})^{-1/2}-1=0.8509`$.
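The value of $`x_{\mathrm{s},\mathrm{max}}`$ can be estimated by a straightforward integration of the Tolman-Oppenheimer-Volkoff (TOV) equations for the CL EOS. The sketch below works in units $`G=c=\rho _0=1`$, which is allowed because the compactness is independent of $`\rho _0`$; it is only an illustration of the procedure, with arbitrary tolerances and an arbitrary grid of central pressures, not the high-precision integration behind the quoted four-digit value.

```python
import numpy as np
from scipy.integrate import solve_ivp

# TOV equations for the causality-limit EOS p = rho - rho_0 (units G=c=rho_0=1)
def tov_rhs(r, y):
    m, p = y
    rho = p + 1.0                                  # inverted CL EOS
    dm = 4.0 * np.pi * r**2 * rho
    dp = -(rho + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm, dp]

def surface(r, y):                                 # stellar surface: p -> 0
    return y[1]
surface.terminal = True
surface.direction = -1

def star(p_c):
    """Gravitational mass and radius of the static star with central pressure p_c."""
    r0 = 1e-6
    y0 = [4.0 / 3.0 * np.pi * r0**3 * (p_c + 1.0), p_c]
    sol = solve_ivp(tov_rhs, (r0, 10.0), y0, events=surface,
                    rtol=1e-8, atol=1e-12, max_step=1e-3)
    R = sol.t_events[0][0]
    M = sol.y_events[0][0][0]
    return M, R

# scan central pressures and pick the maximum-mass configuration
M, R = max((star(pc) for pc in np.geomspace(0.5, 10.0, 60)), key=lambda s: s[0])
print(f"x_s = 2GM/Rc^2 = {2.0 * M / R:.4f}")       # should come out close to 0.7081
```

A finer scan of the central pressure and tighter tolerances are needed to reproduce all four quoted digits, in line with the precision requirements discussed below.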
Let us consider the effect of the presence of a crust (more generally, of an envelope of normal neutron star matter). For a given EOS of the normal envelope, the relevant (small) parameter is the ratio $`p_\mathrm{b}/\rho _\mathrm{b}c^2`$, where $`p_\mathrm{b}`$ and $`\rho _\mathrm{b}`$ are, respectively, pressure and mass density at the bottom of the crust (Lindblom 1984). The case of $`p_\mathrm{b}=0`$ corresponds to stellar models with no normal crust. Numerical calculations show that adding a crust onto a CL EOS core implies an increase of $`R_\mathrm{s}`$, which is linear in $`p_\mathrm{b}/\rho _\mathrm{b}c^2`$; for a solid crust we have typically $`p_\mathrm{b}/\rho _\mathrm{b}c^2\sim 10^{-2}`$. The change (increase) in $`M_\mathrm{s}`$ is negligibly small; it turns out to be quadratic in $`p_\mathrm{b}/\rho _\mathrm{b}c^2`$. This implies that the decrease of $`x_{\mathrm{s},\mathrm{max}}`$, and of the maximum surface redshift $`z_{\mathrm{s},\mathrm{max}}`$, due to the presence of a crust, is proportional to $`p_\mathrm{b}/\rho _\mathrm{b}c^2`$. This is consistent with Table 1 of Lindblom (1984). However, the extrapolation of his results to $`p_\mathrm{b}=0`$ yields $`z_{\mathrm{max}}=0.891`$, which is nearly 5% higher than our value of $`z_{\mathrm{max}}`$! This might reflect a lack of precision of the variational method used by Lindblom (1984), which led to an overestimate of the value of $`z_{\mathrm{max}}`$. It should be stressed that while a precise determination of $`M_{\mathrm{max}}\equiv M_\mathrm{s}`$ for static neutron star models is rather easy, determination of the precise value of the radius of the maximum mass configuration, $`R_\mathrm{s}`$, (with the same relative precision as $`M_\mathrm{s}`$) and consequently of the value of $`x_\mathrm{s}`$ (with, say, four significant digits), is much more difficult and requires a rather high precision of numerical integration of the TOV equations.
In what follows, we restrict ourselves to the case of the absolute upper bound on $`x_\mathrm{s}`$, obtained for neutron star models with no crust. Inserting the value of $`x_{\mathrm{s},\mathrm{max}}`$ into Eq. (4) we get
$$\left(P_{\mathrm{min}}^{\mathrm{CL}}\right)_{\mathrm{e}.\mathrm{f}.}=0.1997\frac{M_{\mathrm{obs}}^{\mathrm{max}}}{\mathrm{M}_{\odot }}\mathrm{ms}.$$
(5)
The current lower bound on $`P`$, resulting from the above equation, is thus $`0.288`$ ms, which is only 2% higher than the result of the extensive exact numerical calculations of Koranda et al. (1997).
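The arithmetic behind Eqs. (3)-(5), and its comparison with Eq. (1), is simple enough to reproduce in a few lines; the snippet below merely re-evaluates the formulas with the numbers quoted in the text.

```python
def P_min_empirical(M_s, x_s):
    """Empirical minimum period in ms, Eq. (4); M_s in solar masses."""
    C = 0.468 + 0.378 * x_s                 # Eq. (3)
    return 8.754e-2 / (C * x_s**1.5) * M_s

x_s_max = 0.7081                            # causal-limit compactness quoted above
M_obs_max = 1.442                           # PSR B1913+16 mass in solar masses

print(P_min_empirical(M_obs_max, x_s_max))  # ~0.288 ms, Eq. (5)
print(0.196 * M_obs_max)                    # exact bound of Eq. (1), about 0.28 ms
```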
Formula (5) deserves an additional comment. In numerical calculations of a family of stable uniformly rotating stellar models for a given EOS of dense matter, one has to distinguish between the rotating configuration of maximum mass, which corresponds to the rotation frequency $`\mathrm{\Omega }_{M_{\mathrm{max}}}(\mathrm{EOS})`$, and the maximally rotating one, which rotates at $`\mathrm{\Omega }_{\mathrm{max}}(\mathrm{EOS})`$ (Cook et al. 1994, Stergioulas & Friedman 1995). Notice that determination of a maximum mass rotating configuration (and therefore of $`\mathrm{\Omega }_{M_{\mathrm{max}}}`$) is a much simpler task than the calculation of the exact value of $`\mathrm{\Omega }_{\mathrm{max}}`$, which is time consuming and very demanding as far as the precision of numerical calculations is concerned. Usually, both configurations are very close to each other, and $`\mathrm{\Omega }_{\mathrm{max}}`$ is typically only 1-2% higher than $`\mathrm{\Omega }_{M_{\mathrm{max}}}`$; such a small difference is within the typical precision of the empirical formulae for $`\mathrm{\Omega }_{\mathrm{max}}`$. Actually, the formula for $`𝒞(x_\mathrm{s})`$, Eq. (3), was fitted to the values of $`\mathrm{\Omega }_{M_{\mathrm{max}}}(\mathrm{EOS})`$ calculated in (Salgado et al. 1994a,b). Therefore, Eq. (5) should in principle be used to evaluate the causal lower bound on $`P_{\mathrm{min},M_{\mathrm{max}}}`$; it actually reproduces, within 0.2%, the exact formula for this quantity obtained by Koranda et al. (1997) \[see their Eq. (8)\].
It should be stressed that Eq. (5) results from an extrapolation of the empirical formula of Lasota et al. (1996). General experience shows that - in contrast to interpolation - extrapolation is a risky procedure. The fact that in our case the extrapolation of an empirical formula yields - within 2% - the value of $`P_{\mathrm{min}}`$ of Koranda et al. (1997) (and reproduces their value of $`P_{\mathrm{min},M_{\mathrm{max}}}`$) proves the usefulness of compact “empirical expressions”, which can summarize, in a quantitative way, the relevant content of extensive numerical calculations of uniformly rotating neutron star models.
* This research was partially supported by the KBN grant No. 2P03D.014.13. During his stay at DARC, P. Haensel was supported by the PAST Professorship of French MENESRT.